Dataset column schema (reconstructed from the flattened viewer header):

| column | type | min | max |
| --- | --- | --- | --- |
| bibtex_url | null | – | – |
| proceedings | string (length) | 42 | 42 |
| bibtext | string (length) | 197 | 792 |
| abstract | string (length) | 303 | 3.45k |
| title | string (length) | 10 | 159 |
| authors | sequence (length) | 1 | 28 |
| id | string (44 distinct values) | – | – |
| type | string (16 distinct values) | – | – |
| arxiv_id | string (length) | 0 | 10 |
| GitHub | sequence (length) | 1 | 1 |
| paper_page | string (444 distinct values) | – | – |
| n_linked_authors | int64 | -1 | 9 |
| upvotes | int64 | -1 | 42 |
| num_comments | int64 | -1 | 13 |
| n_authors | int64 | -1 | 92 |
| paper_page_exists_pre_conf | int64 | 0 | 1 |
| Models | sequence (length) | 0 | 100 |
| Datasets | sequence (length) | 0 | 11 |
| Spaces | sequence (length) | 0 | 100 |
null
https://openreview.net/forum?id=uq176Mm0LD
@inproceedings{ chen2023sceneadaptive, title={Scene-adaptive Knowledge Distillation for Sequential Recommendation via Differentiable Architecture Search}, author={Lei Chen}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=uq176Mm0LD} }
Sequential recommender systems (SRS) have become a research hotspot due to their power in modeling user dynamic interests and sequential behavioral patterns. To maximize model expressive ability, a default choice is to apply a larger and deeper network architecture, which, however, often brings high network latency when generating online recommendations. Naturally, we argue that compressing the heavy recommendation models into middle- or light-weight neural networks that reduce inference latency while maintaining recommendation performance is of great importance for practical production systems. To realize such a goal, we propose AdaRec, a knowledge distillation (KD) framework which compresses knowledge of a teacher model into a student model adaptively according to its recommendation scene by using differentiable neural architecture search (NAS). Specifically, we introduce a target-oriented knowledge distillation loss to guide the network structure search process for finding the student network architecture, and a cost-sensitive loss as constraints for model size, which achieves a superior trade-off between recommendation effectiveness and efficiency. In addition, we leverage earth mover's distance (EMD) to realize many-to-many layer mapping during knowledge distillation, which enables each intermediate student layer to learn from other intermediate teacher layers adaptively. Extensive experiments on three real-world recommendation datasets demonstrate that our model achieves significantly better accuracy with notable inference speedup compared to strong counterparts, while discovering diverse architectures for sequential recommendation models under different recommendation scenes.
Scene-adaptive Knowledge Distillation for Sequential Recommendation via Differentiable Architecture Search
[ "Lei Chen", "Fajie Yuan", "Jiaxi Yang", "Chengming Li", "Min Yang" ]
Workshop/WANT
poster
2107.07173
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
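The AdaRec record above combines a target-oriented distillation loss, a cost-sensitive size penalty, and an EMD-based many-to-many layer mapping. The sketch below is not the paper's method; it only illustrates the many-to-many layer-mapping idea, with softmax similarity weights standing in for the earth mover's distance flow, and it assumes student and teacher hidden sizes already match (e.g. after a projection).

```python
import torch
import torch.nn.functional as F

def many_to_many_kd_loss(student_feats, teacher_feats):
    """Let every student layer learn from a weighted mix of teacher layers.

    student_feats / teacher_feats: lists of [batch, dim] hidden states with equal dim.
    AdaRec derives the mixing weights from an EMD flow; here a softmax over
    negative pairwise MSE is used as a simplified stand-in.
    """
    loss = 0.0
    for s in student_feats:
        # distance from this student layer to every teacher layer
        dists = torch.stack([F.mse_loss(s, t) for t in teacher_feats])
        weights = F.softmax(-dists, dim=0)      # closer teacher layers get more weight
        loss = loss + (weights * dists).sum()   # weighted distillation term
    return loss
```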
null
https://openreview.net/forum?id=tKqUiPUE6H
@inproceedings{ deb2023remainingusefullife, title={Remaining-Useful-Life Prediction and Uncertainty Quantification using {LSTM} Ensembles for Aircraft Engines}, author={Oishi Deb and Emmanouil Benetos and Philip Torr}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=tKqUiPUE6H} }
This paper proposes "LSTM (Long Short Term Memory) Ensemble" technique in building a regression model to predict the Remaining-Useful-Life (RUL) of aircraft engines along with uncertainty quantification, utilising the well-known run-to-failure turbo engine degradation dataset. This paper addressed the overlooked yet crucial aspect of uncertainty estimation in previous research, by revamping the LSTM architecture to facilitate uncertainty estimates, employing Negative Log Likelihood (NLL) as the training criterion. Through a series of experiments, the model demonstrated self-awareness of its uncertainty levels, correlating high confidence with low prediction errors and vice versa. This initiative not only enhances predictive maintenance strategies but also significantly improves the safety and reliability of aviation assets by offering a more nuanced understanding of predictive uncertainties. To the best of our knowledge, this is a pioneering work in this application domain from a non-Bayesian approach.
Remaining-Useful-Life Prediction and Uncertainty Quantification using LSTM Ensembles for Aircraft Engines
[ "Oishi Deb", "Emmanouil Benetos", "Philip Torr" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
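A minimal sketch of the idea in the record above: an LSTM regressor whose head outputs a mean and a log-variance trained with Gaussian negative log-likelihood, with an ensemble of independently initialized copies providing uncertainty. The layer sizes and the mixture combination rule are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RULRegressor(nn.Module):
    """LSTM that predicts a mean and a log-variance for the remaining useful life."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)             # -> (mean, log-variance)

    def forward(self, x):                            # x: [batch, time, n_features]
        h, _ = self.lstm(x)
        mean, log_var = self.head(h[:, -1]).chunk(2, dim=-1)
        return mean.squeeze(-1), log_var.squeeze(-1)

def nll_loss(mean, log_var, target):
    # Gaussian negative log-likelihood used as the training criterion
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

# ensemble: train several members independently, then combine as a Gaussian mixture
ensemble = [RULRegressor(n_features=24) for _ in range(5)]
x = torch.randn(8, 30, 24)
means, log_vars = zip(*(m(x) for m in ensemble))
mu = torch.stack(means).mean(0)                      # ensemble mean prediction
var = (torch.stack(log_vars).exp() + torch.stack(means) ** 2).mean(0) - mu ** 2  # mixture variance
```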
null
https://openreview.net/forum?id=rBgo4Mi8vZ
@inproceedings{ mohtashami2023cotformer, title={Co{TF}ormer: More Tokens With Attention Make Up For Less Depth}, author={Amirkeivan Mohtashami and Matteo Pagliardini and Martin Jaggi}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=rBgo4Mi8vZ} }
The race to continually develop ever larger and deeper foundational models is underway. However, techniques like the Chain-of-Thought (CoT) method continue to play a pivotal role in achieving optimal downstream performance. In this study, we establish an approximate parallel between the utilization of the chain-of-thought and employing a deeper transformer. Building on this insight, we introduce CoTFormer, a transformer variant that employs an implicit CoT-like mechanism to achieve comparable performance to that of a deeper model. Our empirical findings demonstrate the effectiveness of CoTFormers, as they significantly outperform larger standard transformers.
CoTFormer: More Tokens With Attention Make Up For Less Depth
[ "Amirkeivan Mohtashami", "Matteo Pagliardini", "Martin Jaggi" ]
Workshop/WANT
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qScA3fL49l
@inproceedings{ li2023lightseq, title={LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers}, author={Dacheng Li and Rulin Shao and Anze Xie and Eric Xing and Joseph Gonzalez and Ion Stoica and Xuezhe Ma and Hao Zhang}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=qScA3fL49l} }
Increasing the context length of large language models (LLMs) unlocks fundamentally new capabilities, but also significantly increases the memory footprint of training. Previous model-parallel systems such as Megatron-LM partition and compute different attention heads in parallel, resulting in large communication volumes; they cannot scale beyond the number of attention heads, which hinders their adoption. In this paper, we introduce a new approach, LightSeq, for long-context LLM training. LightSeq has many notable advantages. First, LightSeq partitions over the sequence dimension, hence is agnostic to model architectures and readily applicable to models with varying numbers of attention heads, such as Multi-Head, Multi-Query and Grouped-Query attention. Second, LightSeq not only requires up to 4.7× less communication than Megatron-LM on popular LLMs but also overlaps the communication with computation. To further reduce the training time, LightSeq features a novel gradient checkpointing scheme to bypass a forward computation for memory-efficient attention. We evaluate LightSeq on Llama-7B and its variants with sequence lengths from 32K to 512K. Through comprehensive experiments on single and cross-node training, we show that LightSeq achieves up to 1.24-2.01× end-to-end speedup, and a 2-8× longer sequence length on models with fewer heads, compared to Megatron-LM. Code is available at https://github.com/RulinShao/LightSeq.
LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers
[ "Dacheng Li", "Rulin Shao", "Anze Xie", "Eric P. Xing", "Joseph E. Gonzalez", "Ion Stoica", "Xuezhe Ma", "Hao Zhang" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
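LightSeq partitions activations over the sequence dimension rather than over attention heads. The toy sketch below only illustrates that partitioning on a single process: each simulated worker owns one chunk of query rows and attends over key/value chunks it would gather from its peers. The real system distributes these chunks across GPUs, overlaps the communication with computation, and adds a gradient-checkpointing scheme, none of which is shown here.

```python
import torch

def sequence_parallel_attention(q, k, v, num_workers=4):
    """Toy single-process illustration of sequence-dimension partitioning.

    q, k, v: [batch, seq, dim]. Each 'worker' owns seq/num_workers query rows
    and attends to key/value chunks received from every other worker.
    """
    q_chunks = q.chunk(num_workers, dim=1)
    k_chunks = k.chunk(num_workers, dim=1)
    v_chunks = v.chunk(num_workers, dim=1)
    outputs = []
    for qi in q_chunks:                              # work done by one worker
        scores = torch.cat([qi @ kj.transpose(1, 2) for kj in k_chunks], dim=-1)
        attn = scores.softmax(dim=-1)
        outputs.append(attn @ torch.cat(v_chunks, dim=1))
    return torch.cat(outputs, dim=1)                 # identical to full attention

x = torch.randn(2, 128, 64)
full = (x @ x.transpose(1, 2)).softmax(-1) @ x
assert torch.allclose(sequence_parallel_attention(x, x, x), full, atol=1e-4)
```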
null
https://openreview.net/forum?id=ommktfjGRp
@inproceedings{ unsal2023flextrain, title={FlexTrain: A Dynamic Training Framework for Heterogeneous Devices Environments}, author={Mert Unsal and Ali Maatouk and Antonio De Domenico and Nicola Piovesan and Fadhel Ayed}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=ommktfjGRp} }
As deep learning models become increasingly large, they pose significant challenges in heterogeneous devices environments. The size of deep learning models makes it difficult to deploy them on low-power or resource-constrained devices, leading to long inference times and high energy consumption. To address these challenges, we propose FlexTrain, a framework that accommodates the diverse storage and computational resources available on different devices during the training phase. FlexTrain enables efficient deployment of deep learning models, while respecting device constraints, minimizing communication costs, and ensuring seamless integration with diverse devices. We demonstrate the effectiveness of FlexTrain on the CIFAR-100 dataset, where a single global model trained with FlexTrain can be easily deployed on heterogeneous devices, saving training time and energy consumption. We also extend FlexTrain to the federated learning setting, showing that our approach outperforms standard federated learning benchmarks on both CIFAR-10 and CIFAR-100 datasets.
FlexTrain: A Dynamic Training Framework for Heterogeneous Devices Environments
[ "Mert Unsal", "Ali Maatouk", "Antonio De Domenico", "Nicola Piovesan", "Fadhel Ayed" ]
Workshop/WANT
poster
2310.20457
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=obE6BSiUjt
@inproceedings{ chandy2023dyad, title={{DYAD}: A Descriptive Yet Abjuring Density efficient approximation to linear neural network layers}, author={Sarin Eapen Chandy and Varun Prashant Gangal and Yi Yang and Gabriel Maggiotti}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=obE6BSiUjt} }
We devise, implement and performance-assess DYAD, a layer which can serve as a faster and more memory-efficient approximate replacement for linear layers (nn.Linear() in PyTorch). These layers appear in common subcomponents, such as the ff module of Transformers. DYAD is based on a bespoke near-sparse matrix structure which approximates the dense "weight" matrix W that matrix-multiplies the input in the typical realization of such a layer, a.k.a. DENSE. Our alternative near-sparse matrix structure is decomposable into a sum of 2 matrices permutable to a block-sparse counterpart. These can be represented as 3D tensors, which in unison allow a faster execution of matrix multiplication with the mini-batched input matrix compared to DENSE (O(rows(W) × cols(W)) → O(rows(W) × cols(W) / # of blocks)). As the crux of our experiments, we pretrain both DYAD and DENSE variants of 2 sizes of the OPT arch and 1 size of the Pythia arch, including at different token scales of the babyLM benchmark. We find DYAD to be competitive (≥ 90%) with DENSE performance on zero-shot (e.g. BLIMP), few-shot (OPENLM) and finetuning (GLUE) benchmarks, while being ≥7-15% faster to train on-GPU even at 125m scale, besides surfacing larger speedups at increasing scale and model width.
DYAD: A Descriptive Yet Abjuring Density efficient approximation to linear neural network layers
[ "Sarin Eapen Chandy", "Varun Prashant Gangal", "Yi Yang", "Gabriel Maggiotti" ]
Workshop/WANT
poster
2312.06881
[ "https://github.com/asappresearch/dyad" ]
-1
-1
-1
-1
0
[]
[]
[]
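DYAD approximates a dense weight matrix with a sum of two matrices that are permutable to block-sparse form, each realizable as a 3D tensor. The sketch below is a guess at the general shape rather than the released implementation: one term is a block-diagonal product via a 3D weight tensor, and the second term acts on a rolled (permuted) view of the input blocks.

```python
import torch
import torch.nn as nn

class BlockPairLinear(nn.Module):
    """Approximate nn.Linear(d, d) with two block-structured terms.

    Cost per term is O(d * d / n_blocks) instead of O(d * d).
    Illustrative stand-in for the DYAD layer, not its exact code.
    """
    def __init__(self, d, n_blocks=4):
        super().__init__()
        assert d % n_blocks == 0
        self.nb, self.bs = n_blocks, d // n_blocks
        self.w1 = nn.Parameter(torch.randn(n_blocks, self.bs, self.bs) / self.bs ** 0.5)
        self.w2 = nn.Parameter(torch.randn(n_blocks, self.bs, self.bs) / self.bs ** 0.5)

    def forward(self, x):                            # x: [batch, d]
        xb = x.view(-1, self.nb, self.bs)
        y1 = torch.einsum('bni,nio->bno', xb, self.w1)                   # block-diagonal term
        y2 = torch.einsum('bni,nio->bno', xb.roll(1, dims=1), self.w2)   # permuted-block term
        return (y1 + y2).reshape(-1, self.nb * self.bs)

layer = BlockPairLinear(512, n_blocks=8)
out = layer(torch.randn(16, 512))                    # [16, 512]
```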
null
https://openreview.net/forum?id=oYztmXK2mu
@inproceedings{ pitas2023improving, title={Improving Deep Ensembles without Communication}, author={Konstantinos Pitas and Michael Arbel and Julyan Arbel}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=oYztmXK2mu} }
Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised deep learning. We propose to improve deep ensembles by optimizing a tighter PAC-Bayesian bound than the most popular ones. Our approach has a number of benefits over previous methods: 1) it requires no communication between ensemble members during training to improve performance and is trivially parallelizable, 2) it results in a simple soft thresholding gradient update that is much simpler than alternatives. Empirically, we outperform competing approaches that try to improve ensembles by encouraging diversity. We report test accuracy gains for MLP, LeNet, and WideResNet architectures, and for a variety of datasets.
Improving Deep Ensembles without Communication
[ "Konstantinos Pitas", "Michael Arbel", "Julyan Arbel" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=nErbvDkucY
@inproceedings{ perez2023training, title={Training and inference of large language models using 8-bit floating point}, author={Sergio Perez and Yan Zhang and James Briggs and Charlie Blake and Josh Levy-Kramer and Paul Balanca and Carlo Luschi and Stephen Barlow and Andrew Fitzgibbon}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=nErbvDkucY} }
FP8 formats are gaining popularity to boost the computational efficiency for training and inference of large deep learning models. Their main challenge is that a careful choice of scaling is needed to prevent degradation due to the reduced dynamic range compared to higher-precision formats. Although there exists ample literature about selecting such scalings for INT formats, this critical aspect has yet to be addressed for FP8. This paper presents a methodology to select the scalings for FP8 linear layers, based on dynamically updating per-tensor scales for the weights, gradients and activations. We apply this methodology to train and validate large language models of the type of GPT and Llama 2 using FP8, for model sizes ranging from 111M to 70B. To facilitate the understanding of the FP8 dynamics, our results are accompanied by plots of the per-tensor scale distribution for weights, activations and gradients during both training and inference.
Training and inference of large language models using 8-bit floating point
[ "Sergio P. Perez", "Yan Zhang", "James Briggs", "Charlie Blake", "Josh Levy-Kramer", "Paul Balanca", "Carlo Luschi", "Stephen Barlow", "Andrew W Fitzgibbon" ]
Workshop/WANT
oral
2309.17224
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
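The FP8 paper above selects per-tensor scales for FP8 linear layers from statistics of weights, activations, and gradients. The snippet below only simulates the scaling step in plain float32 (scale toward the FP8 dynamic range, clamp, matmul, unscale), using the E4M3 maximum of 448 as the representable bound; it illustrates the scaling principle, not the authors' training recipe or a real FP8 kernel.

```python
import torch

FP8_E4M3_MAX = 448.0            # largest value representable in the E4M3 format

def fp8_scale(tensor, margin=2.0):
    """Per-tensor scale so that the tensor's amax maps near the FP8 maximum."""
    amax = tensor.abs().max().clamp(min=1e-12)
    return FP8_E4M3_MAX / (amax * margin)

def simulated_fp8_matmul(x, w):
    """Scale -> fake-quantize (clamp) -> matmul -> unscale. Float32 simulation only."""
    sx, sw = fp8_scale(x), fp8_scale(w)
    xq = (x * sx).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
    wq = (w * sw).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
    return (xq @ wq.t()) / (sx * sw)                 # result back in the original range

x, w = torch.randn(32, 1024), torch.randn(4096, 1024)
y = simulated_fp8_matmul(x, w)                       # [32, 4096]
```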
null
https://openreview.net/forum?id=lWU12ye1qa
@inproceedings{ han2023concatplexer, title={ConcatPlexer : Additional Dim1 Batching for Faster ViTs}, author={Donghoon Han and Seunghyeon Seo and Donghyeon Jeon and Jiho Jang and Chaerin Kong and Nojun Kwak}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=lWU12ye1qa} }
Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, igniting various creative approaches and applications. Yet, the superior performance and modeling flexibility of transformers came with a severe increase in computation costs, and hence several works have proposed methods to reduce this burden. Inspired by a cost-cutting method originally proposed for language models, Data Multiplexing (DataMUX), we propose a novel approach for efficient visual recognition that employs additional dim1 batching (i.e., concatenation) that greatly improves the throughput with little compromise in accuracy. We first introduce a naive adaptation of DataMUX for vision models, Image Multiplexer, and devise novel components to overcome its weaknesses, rendering our final model, ConcatPlexer, at the sweet spot between inference speed and accuracy. ConcatPlexer was trained on the ImageNet1K and CIFAR100 datasets and achieved 23.5% fewer GFLOPs than ViT-B/16 with 69.5% and 83.4% validation accuracy, respectively.
ConcatPlexer : Additional Dim1 Batching for Faster ViTs
[ "Donghoon Han", "Seunghyeon Seo", "Donghyeon Jeon", "Jiho Jang", "Chaerin Kong", "Nojun Kwak" ]
Workshop/WANT
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
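ConcatPlexer batches several images along the token (dim-1) axis so that one transformer pass serves multiple inputs. The sketch below shows only that concatenate-and-demultiplex skeleton with a plain TransformerEncoder standing in for the ViT backbone; the learned components the paper adds on top of the naive Image Multiplexer are omitted, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class ToyConcatPlexer(nn.Module):
    """Concatenate the token sequences of `mux` images and process them in one pass."""
    def __init__(self, dim=192, tokens_per_image=49, n_classes=100):
        super().__init__()
        self.tokens = tokens_per_image
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, token_batches):                # list of tensors [B, tokens, dim]
        x = torch.cat(token_batches, dim=1)          # dim-1 batching: [B, mux*tokens, dim]
        x = self.encoder(x)
        # demultiplex: pool each image's own token span for its prediction
        chunks = x.split(self.tokens, dim=1)
        return [self.head(c.mean(dim=1)) for c in chunks]

model = ToyConcatPlexer()
logits_a, logits_b = model([torch.randn(8, 49, 192), torch.randn(8, 49, 192)])
```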
null
https://openreview.net/forum?id=ipTojoQd1f
@inproceedings{ sridhar2023instatune, title={InstaTune: Instantaneous Neural Architecture Search During Fine-Tuning}, author={Sharath Nittur Sridhar and Souvik Kundu and Sairam Sundaresan and Maciej Szankin and Anthony Sarah}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=ipTojoQd1f} }
One-Shot Neural Architecture Search (NAS) algorithms often rely on training a hardware-agnostic super-network for a domain-specific task. Optimal sub-networks are then extracted from the trained super-network for different hardware platforms. However, training super-networks from scratch can be extremely time-consuming and compute-intensive, especially for large models that rely on a two-stage training process of pre-training and fine-tuning. State-of-the-art pre-trained models are available for a wide range of tasks, but their large sizes significantly limit their applicability on various hardware platforms. We propose InstaTune, a method that leverages off-the-shelf pre-trained weights for large models and generates a super-network during the fine-tuning stage. InstaTune has multiple benefits. Firstly, since the process happens during fine-tuning, it minimizes the overall time and compute resources required for NAS. Secondly, the sub-networks extracted are optimized for the target task, unlike prior work that optimizes on the pre-training objective. Finally, InstaTune is easy to "plug and play" in existing frameworks. By using multi-objective evolutionary search algorithms along with lightly trained predictors, we find Pareto-optimal sub-networks that outperform their respective baselines across different performance objectives such as accuracy and MACs. Specifically, we demonstrate that our approach performs well across both unimodal (ViT and BERT) and multi-modal (BEiT-3) transformer-based architectures. Additionally, we show that using our approach to jointly optimize for the network architecture and mixed-precision quantization policy yields sub-networks with significantly lower model size.
InstaTune: Instantaneous Neural Architecture Search During Fine-Tuning
[ "Sharath Nittur Sridhar", "Souvik Kundu", "Sairam Sundaresan", "Maciej Szankin", "Anthony Sarah" ]
Workshop/WANT
poster
2308.15609
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iifVZTrqDb
@inproceedings{ lialin2023relora, title={ReLo{RA}: High-Rank Training Through Low-Rank Updates}, author={Vladislav Lialin and Sherin Muckatira and Namrata Shivagunde and Anna Rumshisky}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=iifVZTrqDb} }
Despite the dominance and effectiveness of scaling, resulting in large networks with hundreds of billions of parameters, the necessity to train overparametrized models remains poorly understood, while training costs grow exponentially. In this paper, we explore parameter-efficient training techniques as an approach to training large neural networks. We introduce a novel method called ReLoRA, which utilizes low-rank updates to train high-rank networks. We apply ReLoRA to training transformer language models with up to 1.3B parameters and demonstrate comparable performance to regular neural network training. ReLoRA saves up to 5.5 GB of RAM per GPU and improves training speed by 9-40% depending on the model size and hardware setup. Our findings show the potential of parameter-efficient techniques for large-scale pre-training.
ReLoRA: High-Rank Training Through Low-Rank Updates
[ "Vladislav Lialin", "Sherin Muckatira", "Namrata Shivagunde", "Anna Rumshisky" ]
Workshop/WANT
poster
2307.05695
[ "https://github.com/guitaricet/peft_pretraining" ]
-1
-1
-1
-1
0
[]
[]
[]
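ReLoRA trains with low-rank (LoRA-style) updates and periodically merges them into the full weights before re-initializing a fresh low-rank pair, so the accumulated update can reach high rank. A compressed sketch of that merge-and-restart loop is below; the optimizer-state resets and the jagged learning-rate schedule used in the paper are left out.

```python
import torch
import torch.nn as nn

class ReLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.02)
        self.weight.requires_grad_(False)            # frozen between merges
        self.rank = rank
        self.reset_lora(d_in, d_out)

    def reset_lora(self, d_in, d_out):
        self.A = nn.Parameter(torch.randn(self.rank, d_in) * 0.02)
        self.B = nn.Parameter(torch.zeros(d_out, self.rank))

    @torch.no_grad()
    def merge_and_restart(self):
        self.weight += self.B @ self.A               # fold the low-rank update into W
        self.reset_lora(self.A.shape[1], self.B.shape[0])

    def forward(self, x):
        return x @ (self.weight + self.B @ self.A).t()

layer = ReLoRALinear(256, 256)
# ... train only A and B for a while, then:
layer.merge_and_restart()                            # the next low-rank pair adds new directions
```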
null
https://openreview.net/forum?id=iP4WcJ4EX0
@inproceedings{ saxena2023sparse, title={Sparse Iso-{FLOP} Transformations for Maximizing Training Efficiency}, author={Shreyas Saxena and Vithursan Thangarasa and Abhay Gupta and Sean Lie}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=iP4WcJ4EX0} }
Recent works have explored the use of weight sparsity to improve the training efficiency (test accuracy w.r.t training FLOPs) of deep neural networks (DNNs). These works aim to reduce training FLOPs but training with sparse weights often leads to accuracy loss or requires longer training schedules, making the resulting training efficiency less clear. In contrast, we focus on using sparsity to increase accuracy while using the same FLOPS as the dense model and show training efficiency gains through higher accuracy. In this work, we introduce Sparse-IFT, a family of Sparse Iso-FLOP Transformations which are used as drop-in replacements for dense layers to improve their representational capacity and FLOP efficiency. Each transformation is parameterized by a single hyperparameter (sparsity level) and provides a larger search space to find optimal sparse masks. Without changing any training hyperparameters, replacing dense layers with Sparse-IFT leads to significant improvements across computer vision and natural language processing tasks, including ResNet-18 on ImageNet (+3.5\%) and GPT-3 Small on WikiText-103 (-0.4 PPL), both matching larger dense model variants that use 2x or more FLOPs. To our knowledge, this is the first work to demonstrate the use of sparsity for improving the accuracy of dense models via a simple set of sparse transformations. Code is available at: https://github.com/CerebrasResearch/Sparse-IFT.
Sparse Iso-FLOP Transformations for Maximizing Training Efficiency
[ "Vithursan Thangarasa", "Shreyas Saxena", "Abhay Gupta", "Sean Lie" ]
Workshop/WANT
poster
[ "https://github.com/cerebrasresearch/sift" ]
-1
-1
-1
-1
0
[]
[]
[]
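Sparse-IFT replaces a dense block with a wider but sparser one whose FLOPs match the original. The sketch below implements what the paper calls the simplest member of that family in spirit only: a feed-forward block whose hidden width is scaled by 1/(1 - sparsity) under a fixed random mask. The widening rule and the mask choice are assumptions for illustration; the paper explores several transformations and mask-learning schemes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseIsoFLOPFFN(nn.Module):
    """Feed-forward block whose hidden width is scaled by 1/(1 - sparsity) while a
    fraction `sparsity` of each weight matrix is masked, keeping FLOPs roughly
    equal to the dense d -> hidden -> d block."""
    def __init__(self, d, hidden, sparsity=0.75):
        super().__init__()
        wide = int(hidden / (1.0 - sparsity))        # iso-FLOP widening
        self.w1 = nn.Parameter(torch.randn(wide, d) * 0.02)
        self.w2 = nn.Parameter(torch.randn(d, wide) * 0.02)
        self.register_buffer('m1', (torch.rand(wide, d) > sparsity).float())
        self.register_buffer('m2', (torch.rand(d, wide) > sparsity).float())

    def forward(self, x):
        h = F.relu(F.linear(x, self.w1 * self.m1))   # masked weights: pruned entries stay zero
        return F.linear(h, self.w2 * self.m2)

block = SparseIsoFLOPFFN(d=512, hidden=2048, sparsity=0.75)   # ~same FLOPs as the dense block
y = block(torch.randn(4, 512))
```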
null
https://openreview.net/forum?id=fhQIsEbvlN
@inproceedings{ feng2023embarrassingly, title={Embarrassingly Simple Dataset Distillation}, author={Yunzhen Feng and Shanmukha Ramakrishna Vedantam and Julia Kempe}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=fhQIsEbvlN} }
Training of large-scale models in general requires enormous amounts of training data. Dataset distillation aims to extract a small set of synthetic training samples from a large dataset with the goal of achieving competitive performance on test data when trained on this sample, thus reducing both dataset size and training time. In this work, we tackle dataset distillation at its core by treating it directly as a bilevel optimization problem. Re-examining the foundational back-propagation through time method, we study the pronounced variance in the gradients, computational burden, and long-term dependencies. We introduce an improved method, Random Truncated Backpropagation Through Time (RaT-BPTT), to address them. RaT-BPTT incorporates a truncation coupled with a random window, effectively stabilizing the gradients and speeding up the optimization while covering long dependencies. This allows us to establish a new dataset distillation state of the art for a variety of standard dataset benchmarks.
Embarrassingly Simple Dataset Distillation
[ "Yunzhen Feng", "Shanmukha Ramakrishna Vedantam", "Julia Kempe" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
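RaT-BPTT unrolls inner-loop training on the synthetic data but backpropagates to that data only through a truncated window placed at a random depth. The sketch below is not the authors' code: it uses a bare linear probe as the inner learner and plain SGD, just to show where the random window and the graph truncation sit.

```python
import torch
import torch.nn.functional as F

def rat_bptt_step(syn_x, syn_y, real_x, real_y, total_unroll=40, window=10, lr=0.1):
    """One outer step of random-truncated BPTT for dataset distillation (illustrative).

    A linear probe is trained on the synthetic set; only the last `window` of the
    randomly chosen `end` inner steps keep a graph, so gradients w.r.t. the
    synthetic data flow through that window alone.
    """
    w = torch.zeros(syn_x.shape[1], syn_y.shape[1], requires_grad=True)
    end = int(torch.randint(window, total_unroll + 1, ()))     # random window position
    for step in range(end):
        tracked = step >= end - window
        loss = F.cross_entropy(syn_x @ w, syn_y.softmax(-1))   # soft learned labels
        (grad,) = torch.autograd.grad(loss, w, create_graph=tracked)
        w = w - lr * grad
        if not tracked:
            w = w.detach().requires_grad_(True)      # cut the graph outside the window
    outer = F.cross_entropy(real_x @ w, real_y)      # evaluate the probe on real data
    outer.backward()                                  # grads land in syn_x / syn_y
    return outer.item()

syn_x = torch.randn(100, 32, requires_grad=True)     # distilled inputs (flattened)
syn_y = torch.randn(100, 10, requires_grad=True)     # distilled soft labels
loss = rat_bptt_step(syn_x, syn_y, torch.randn(256, 32), torch.randint(0, 10, (256,)))
```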
null
https://openreview.net/forum?id=e9D2STGwLJ
@inproceedings{ ge2023model, title={Model Tells You What to Discard: Adaptive {KV} Cache Compression for {LLM}s}, author={Suyu Ge and Yunan Zhang and Liyuan Liu and Minjia Zhang and Jiawei Han and Jianfeng Gao}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=e9D2STGwLJ} }
In this study, we introduce adaptive KV cache compression, a plug-and-play method that reduces the memory footprint of generative inference for Large Language Models (LLMs). Different from the conventional KV cache that retains key and value vectors for all context tokens, we conduct targeted profiling to discern the intrinsic structure of attention modules. Based on the recognized structure, we then construct the KV cache in an adaptive manner: evicting long-range contexts on attention heads emphasizing local contexts, discarding non-special tokens on attention heads centered on special tokens, and only employing the standard KV cache for attention heads that broadly attend to all tokens. Moreover, with the lightweight attention profiling used to guide the construction of the adaptive KV cache, FastGen can be deployed without resource-intensive fine-tuning or re-training. In our experiments across various tasks, FastGen demonstrates substantial reduction in GPU memory consumption with negligible generation quality loss. We will release our code and the compatible CUDA kernel for reproducibility.
Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs
[ "Suyu Ge", "Yunan Zhang", "Liyuan Liu", "Minjia Zhang", "Jiawei Han", "Jianfeng Gao" ]
Workshop/WANT
poster
2310.01801
[ "" ]
https://huggingface.co/papers/2310.01801
1
3
0
6
1
[]
[]
[]
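FastGen profiles each attention head and then compresses its KV cache with a head-specific policy: local-window heads drop distant tokens, special-token heads keep mostly special tokens, and the remaining heads keep the full cache. The sketch below implements only that eviction step for one layer; the profiling that assigns each head its policy, and the exact policy set, are assumed to come from elsewhere.

```python
import torch

def compress_kv_cache(keys, values, head_policy, special_mask, local_window=128):
    """Apply a per-head eviction policy to a KV cache (illustrative).

    keys, values: [n_heads, seq_len, head_dim]
    head_policy:  list of 'local' | 'special' | 'full', one entry per head
    special_mask: [seq_len] bool, True for special tokens (BOS, punctuation, ...)
    """
    seq_len = keys.shape[1]
    kept_k, kept_v = [], []
    for h, policy in enumerate(head_policy):
        if policy == 'local':                        # keep only the most recent tokens
            keep = torch.arange(max(0, seq_len - local_window), seq_len)
        elif policy == 'special':                    # special tokens plus a short recent window
            keep = torch.cat([special_mask.nonzero().flatten(),
                              torch.arange(max(0, seq_len - 16), seq_len)]).unique()
        else:                                        # 'full': standard KV cache
            keep = torch.arange(seq_len)
        kept_k.append(keys[h, keep])
        kept_v.append(values[h, keep])
    return kept_k, kept_v                            # ragged per-head caches

k = v = torch.randn(16, 4096, 64)
special = torch.zeros(4096, dtype=torch.bool)
special[0] = True                                    # e.g. the BOS token
policies = ['local'] * 8 + ['special'] * 4 + ['full'] * 4
small_k, small_v = compress_kv_cache(k, v, policies, special)
```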
null
https://openreview.net/forum?id=doQwq4kwFc
@inproceedings{ gu2023a, title={A Quadratic Synchronization Rule for Distributed Deep Learning}, author={Xinran Gu and Kaifeng Lyu and Sanjeev Arora and Jingzhao Zhang and Longbo Huang}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=doQwq4kwFc} }
In distributed deep learning with data parallelism, synchronizing gradients at each training step can cause a huge communication overhead, especially when many nodes work together to train large models. Local gradient methods, such as Local SGD, address this issue by allowing workers to compute locally for $H$ steps without synchronizing with others, hence reducing communication frequency. While $H$ has been viewed as a hyperparameter to trade optimization efficiency for communication cost, recent research indicates that setting a proper $H$ value can lead to generalization improvement. Yet, selecting a proper $H$ is elusive. This work proposes a theory-grounded method for determining $H$, named the Quadratic Synchronization Rule (QSR), which recommends dynamically setting $H$ in proportion to $\frac{1}{\eta^2}$ as the learning rate $\eta$ decays over time. Extensive ImageNet experiments on ResNet and ViT show that local gradient methods with QSR consistently improve the test accuracy over other synchronization strategies. Compared to the standard data parallel training, QSR enables Local AdamW to cut the training time on 16 or 64 GPUs down from 26.7 to 20.2 hours or from 8.6 to 5.5 hours and, at the same time, achieves $1.12\%$ or $0.84\%$ higher top-1 validation accuracy.
A Quadratic Synchronization Rule for Distributed Deep Learning
[ "Xinran Gu", "Kaifeng Lyu", "Sanjeev Arora", "Jingzhao Zhang", "Longbo Huang" ]
Workshop/WANT
poster
[ "https://github.com/hmgxr128/qsr" ]
-1
-1
-1
-1
0
[]
[]
[]
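The Quadratic Synchronization Rule sets the number of local steps H in proportion to 1/η² as the learning rate η decays. A direct transcription of that rule; the proportionality constant (expressed here through a base learning rate and base H) and the clipping bounds are illustrative choices, not values from the paper.

```python
def qsr_local_steps(lr, base_lr, base_H=2, H_min=1, H_max=64):
    """Quadratic Synchronization Rule: H grows like 1/lr^2 as the learning rate decays."""
    H = base_H * (base_lr / lr) ** 2
    return int(max(H_min, min(H_max, H)))

# example: as the learning rate decays, workers synchronize less and less often
for lr in [1e-3, 8e-4, 5e-4, 2e-4, 1e-4]:
    print(f"lr={lr:.0e}  local steps H={qsr_local_steps(lr, base_lr=1e-3)}")
```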
null
https://openreview.net/forum?id=d1KZv5aVqn
@inproceedings{ liu2023sparse, title={Sparse Backpropagation for MoE Training}, author={Liyuan Liu and Jianfeng Gao and Weizhu Chen}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=d1KZv5aVqn} }
One defining characteristic of Mixture-of-Expert (MoE) models is their capacity for conducting sparse computation via expert routing, leading to remarkable scalability. However, backpropagation, the cornerstone of deep learning, requires dense computation, thereby posing challenges for MoE gradient computation. Here, we introduce SparseMixer, a scalable gradient estimator that bridges the gap between backpropagation and sparse expert routing. Unlike typical MoE training which strategically neglects certain gradient terms for the sake of sparse computation and scalability, SparseMixer provides scalable gradient approximations for these terms, enabling reliable gradient estimation in MoE training. Grounded in a numerical ODE framework, SparseMixer harnesses the mid-point method, a second-order ODE solver, to deliver precise gradient approximations with negligible computational overhead. Applying SparseMixer to Switch Transformer on both pre-training and machine translation tasks, SparseMixer showcases considerable performance gains, accelerating training convergence by up to 2 times.
Sparse Backpropagation for MoE Training
[ "Liyuan Liu", "Jianfeng Gao", "Weizhu Chen" ]
Workshop/WANT
oral
2310.00811
[ "" ]
https://huggingface.co/papers/2310.00811
0
2
0
3
1
[]
[]
[]
null
https://openreview.net/forum?id=ZmuLcqwzkl
@inproceedings{ demidovskij2023darel, title={{DAREL}: Data Reduction with Losses for Training Acceleration of Real and Hypercomplex Neural Networks}, author={Alexander Vladimirovich Demidovskij and Aleksei Trutnev and Artem Tugarev and Igor Salnikov and Stanislav Pavlov}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=ZmuLcqwzkl} }
Neural network training requires a lot of resources, and there are situations where training time and memory usage are limited. This makes specialized algorithms for training neural networks under resource constraints an important and significant challenge. Data Reduction with Losses (DAREL) is a novel training-data reduction method that operates on training samples based on losses obtained from the currently trained model or a pre-trained one. The proposed method can be used to train Deep Neural Networks for both Computer Vision and Natural Language Processing tasks in real and hypercomplex domains. When applied to fine-tuning Large Language Models, DAREL is recommended to be combined with existing Parameter-Efficient Fine-Tuning methods such as LoRA. DAREL yields a training acceleration of 2.03x for ResNet18 and 2.09x for Hypercomplex ResNet18, while GPT-2 Medium fine-tuning with DAREL on top of LoRA achieves a 1.43x acceleration with a corresponding increase in BLEU score of 1.81 p.p. compared to baseline LoRA fine-tuning.
DAREL: Data Reduction with Losses for Training Acceleration of Real and Hypercomplex Neural Networks
[ "Alexander Vladimirovich Demidovskij", "Aleksei Trutnev", "Artem Tugarev", "Igor Salnikov", "Stanislav Pavlov" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
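DAREL reduces the training set using per-sample losses from the current or a pre-trained model. The exact selection rule is not spelled out in the abstract, so the sketch below uses a simple stand-in (keep a band of moderately hard samples, dropping the easiest and the very hardest) only to show where such a filter sits in the training loop.

```python
import torch
import torch.nn.functional as F

def select_by_loss(model, x, y, keep_fraction=0.5, drop_top=0.1):
    """Score a batch by per-sample loss and keep an informative subset (illustrative rule)."""
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction='none')
    order = losses.argsort()                          # easiest -> hardest
    n = len(losses)
    hi = int(n * (1.0 - drop_top))                    # drop the very hardest (likely noisy)
    lo = max(hi - int(n * keep_fraction), 0)          # and the easiest (little signal)
    keep = order[lo:hi]
    return x[keep], y[keep]

# usage inside a training loop:
# x_small, y_small = select_by_loss(model, x_batch, y_batch)
# loss = F.cross_entropy(model(x_small), y_small); loss.backward()
```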
null
https://openreview.net/forum?id=ZezqFPGZKS
@inproceedings{ sanchez-brizuela2023accelerating, title={Accelerating Deep Learning using Ivy}, author={Guillermo Sanchez-Brizuela and Ved Patwardhan and Matthew Barrett and Paul Anderson and Mustafa Hani and Daniel Lenton}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=ZezqFPGZKS} }
Today's machine learning (ML) ecosystem suffers from deep fragmentation due to the proliferation of numerous incompatible frameworks, compiler infrastructure and hardware. Each unique tool within this fragmented stack has its own set of benefits and drawbacks, making it better suited for certain use-cases. As a result, different areas of industry and academia use different tools for different use cases, which hinders collaboration and democratization, ultimately resulting in costly re-implementations and sub-optimal runtime efficiency when deploying, due to sparse and partial connections to the rest of the stack. In this paper, we present Ivy, a complementary, multi-backend ML framework, and its transpiler, which aims to bridge this gap and solve the fragmentation problem by enabling the integration of code from one framework into another to speed up research, development, and model inference.
Accelerating Deep Learning using Ivy
[ "Guillermo Sanchez-Brizuela", "Ved Patwardhan", "Matthew Barrett", "Paul Anderson", "Mustafa Hani", "Daniel James Lenton" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Y0AHNkVDeu
@inproceedings{ hagemann2023efficient, title={Efficient Parallelization Layouts for Large-Scale Distributed Model Training}, author={Johannes Hagemann and Samuel Weinbach and Konstantin Dobler and Maximilian Schall and Gerard de Melo}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=Y0AHNkVDeu} }
Efficiently training large language models requires parallelizing across hundreds of hardware accelerators and invoking various compute and memory optimizations. When combined, many of these strategies have complex interactions regarding the final training efficiency. Prior work tackling this problem did not have access to the latest set of optimizations, such as FlashAttention or sequence parallelism. In this work, we conduct a comprehensive ablation study of possible training configurations for large language models. We distill this large study into several key recommendations for the most efficient training. For instance, we find that using a micro-batch size of 1 usually enables the most efficient training layouts. Larger micro-batch sizes necessitate activation checkpointing or higher degrees of model parallelism and also lead to larger pipeline bubbles. Our most efficient configurations enable us to achieve state-of-the-art training efficiency results over a range of model sizes, most notably a Model FLOPs utilization of 70.5% when training a Llama-13B model.
Efficient Parallelization Layouts for Large-Scale Distributed Model Training
[ "Johannes Hagemann", "Samuel Weinbach", "Konstantin Dobler", "Maximilian Schall", "Gerard de Melo" ]
Workshop/WANT
oral
2311.05610
[ "https://github.com/aleph-alpha/neurips-want-submission-efficient-parallelization-layouts" ]
https://huggingface.co/papers/2311.05610
1
0
0
5
1
[]
[]
[]
null
https://openreview.net/forum?id=VN4tOgbRZU
@inproceedings{ pitas2023something, title={Something for (almost) nothing: improving deep ensemble calibration using unlabeled data}, author={Konstantinos Pitas and Julyan Arbel}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=VN4tOgbRZU} }
We present a method to improve the calibration of deep ensembles in the small training data regime in the presence of unlabeled data. Our approach is extremely simple to implement: given an unlabeled set, for each unlabeled data point, we simply fit a different randomly selected label with each ensemble member. We provide a theoretical analysis based on a PAC-Bayes bound which guarantees that if we fit such a labeling on unlabeled data, and the true labels on the training data, we obtain low negative log-likelihood and high ensemble diversity on testing samples. Crucially, each ensemble member can be trained independently from the rest (apart from the final validation/test step) making a parallel or distributed implementation extremely easy.
Something for (almost) nothing: improving deep ensemble calibration using unlabeled data
[ "Konstantinos Pitas", "Julyan Arbel" ]
Workshop/WANT
poster
2310.02885
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
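The recipe in the record above is simple enough to state directly: each ensemble member is trained on the true labels of the labeled set plus an independently drawn random labeling of the unlabeled set. A sketch of the data preparation only; member architectures and the (fully independent, parallelizable) training loops are omitted.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset

def member_dataset(x_labeled, y_labeled, x_unlabeled, n_classes, seed):
    """Training set for one ensemble member: true labels plus its own random labels
    for the unlabeled pool, drawn independently per member."""
    g = torch.Generator().manual_seed(seed)
    random_y = torch.randint(0, n_classes, (len(x_unlabeled),), generator=g)
    return ConcatDataset([TensorDataset(x_labeled, y_labeled),
                          TensorDataset(x_unlabeled, random_y)])

x_l, y_l = torch.randn(500, 32), torch.randint(0, 10, (500,))
x_u = torch.randn(5000, 32)
ensemble_datasets = [member_dataset(x_l, y_l, x_u, n_classes=10, seed=s) for s in range(5)]
# each member is then trained independently on its own dataset -- no communication needed
```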
null
https://openreview.net/forum?id=NxpWp0IhgB
@inproceedings{ zhang2023leanflexgkp, title={LeanFlex-{GKP}: Advancing Hassle-Free Structured Pruning with Simple Flexible Group Count}, author={Jiamu Zhang and Shaochen Zhong and Andrew Ye and Zirui Liu and Kaixiong Zhou and Xia Hu and Shuai Xu and Vipin Chaudhary}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=NxpWp0IhgB} }
Densely structured pruning methods — which generate pruned models in a fully dense format, allowing immediate compression benefits without additional demands — are evolving due to their practical significance. Traditional techniques in this domain mainly revolve around coarser granularities, such as filter pruning, and thereby limit performance due to a restricted pruning freedom. Recent advancements in *Grouped Kernel Pruning (GKP)* have enabled the utilization of finer granularities while maintaining a densely structured format. We observe that existing GKP methods often introduce dynamic operations to different aspects of their procedures at the cost of adding complications and/or imposing limitations (e.g. requiring an expensive mixture of clustering schemes), or contain dynamic pruning rates and sizes among groups which results in a reliance on custom architecture support for its pruned models. In this work, we argue that the best practice to introduce these dynamic operations to GKP is to make `Conv2d(groups)` (a.k.a. group count) flexible under an integral optimization, leveraging its ideal alignment with the infrastructure support *Grouped Convolution*. Pursuing such a direction, we present a one-shot, post-train, data-agnostic GKP method that is more performant, adaptive, and efficient than its predecessors while simultaneously being a lot more user-friendly, with little-to-no hyper-parameter tuning or handcrafting of criteria required.
LeanFlex-GKP: Advancing Hassle-Free Structured Pruning with Simple Flexible Group Count
[ "Jiamu Zhang", "Shaochen Zhong", "Andrew Ye", "Zirui Liu", "Kaixiong Zhou", "Xia Hu", "Shuai Xu", "Vipin Chaudhary" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=M0FlRiFAIW
@inproceedings{ gupta2023patch, title={Patch Gradient Descent: Training Neural Networks on Very Large Images}, author={Deepak Gupta and Gowreesh Mago and Arnav Chavan and Dilip Prasad and Rajat Mani Thomas}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=M0FlRiFAIW} }
Current deep learning models falter when faced with large-scale images, largely due to prohibitive computing and memory demands. Enter Patch Gradient Descent (PatchGD), a groundbreaking learning technique that seamlessly trains deep learning models on expansive images. This innovation takes inspiration from the standard feedforward-backpropagation paradigm. However, instead of processing an entire image simultaneously, PatchGD smartly segments and updates a core information-gathering element using portions of the image before the final evaluation. This ensures wide coverage across iterations, bringing in notable memory and computational efficiencies. When tested on the high-resolution PANDA and UltraMNIST datasets using ResNet50 and MobileNetV2 models, PatchGD clearly outstrips traditional gradient descent techniques, particularly under memory constraints. The future of handling vast image datasets effectively lies with PatchGD.
Patch Gradient Descent: Training Neural Networks on Very Large Images
[ "Deepak Gupta", "Gowreesh Mago", "Arnav Chavan", "Dilip Prasad", "Rajat Mani Thomas" ]
Workshop/WANT
poster
2301.13817
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Lh9CKbFV2I
@inproceedings{ wen2023batched, title={Batched Low-Rank Adaptation of Foundation Models}, author={Yeming Wen and Swarat Chaudhuri}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=Lh9CKbFV2I} }
Low-Rank Adaptation (LoRA) has recently gained attention for fine-tuning foundation models by incorporating trainable low-rank matrices, thereby reducing the number of trainable parameters. While LoRA offers numerous advantages, its applicability for real-time serving to a diverse and global user base is constrained by its incapability to handle multiple task-specific adapters efficiently. This imposes a performance bottleneck in scenarios requiring personalized, task-specific adaptations for each incoming request. To address this, we introduce FLORA (Fast LoRA), a framework in which each input example in a minibatch can be associated with its unique low-rank adaptation weights, allowing for efficient batching of heterogeneous requests. We empirically demonstrate that FLORA retains the performance merits of LoRA, showcasing competitive results on the MultiPL-E code generation benchmark spanning over 6 languages.
Batched Low-Rank Adaptation of Foundation Models
[ "Yeming Wen", "Swarat Chaudhuri" ]
Workshop/WANT
poster
2312.05677
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
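FLORA lets each example in a minibatch carry its own low-rank adapter so heterogeneous requests can be batched together. The sketch below shows that batched-adapter semantics with einsum over per-example A/B factors; the paper's actual formulation applies the factors through cheaper elementwise operations, so treat this as an illustration of the interface, not the optimized method.

```python
import torch

def flora_linear(x, W, A, B):
    """Batched low-rank adaptation: each example gets its own (A_i, B_i) pair.

    x: [batch, d_in]      W: [d_out, d_in]  (shared frozen weight)
    A: [batch, r, d_in]   B: [batch, d_out, r]  (per-example adapters)
    """
    base = x @ W.t()                                  # shared path, one matmul for the batch
    low = torch.einsum('bri,bi->br', A, x)            # per-example down-projection
    delta = torch.einsum('bor,br->bo', B, low)        # per-example up-projection
    return base + delta

batch, d_in, d_out, r = 16, 512, 512, 8
x = torch.randn(batch, d_in)
W = torch.randn(d_out, d_in)
A = torch.randn(batch, r, d_in) * 0.02                # in serving, gathered per request
B = torch.zeros(batch, d_out, r)                      # from each request's adapter
y = flora_linear(x, W, A, B)                          # [batch, d_out]
```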
null
https://openreview.net/forum?id=LHKmzWP7RN
@inproceedings{ key2023local, title={Local Lo{RA}: Memory-Efficient Fine-Tuning of Large Language Models}, author={Oscar Key and Jean Kaddour and Pasquale Minervini}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=LHKmzWP7RN} }
We present Local LoRA, a memory-flexible fine-tuning approach that, in principle, can fine-tune an arbitrarily large model on fixed hardware, including consumer-grade GPUs. Our approach aims to decouple the size of the model from the memory required to fine-tune it by dividing the model into chunks and sequentially fine-tuning each chunk. Our results show that Local LoRA closes the gap between the un-tuned model and end-to-end LoRA on math reasoning tasks.
Local LoRA: Memory-Efficient Fine-Tuning of Large Language Models
[ "Oscar Key", "Jean Kaddour", "Pasquale Minervini" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
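Local LoRA divides the model into chunks of layers and fine-tunes one chunk at a time, so peak memory is bounded by the largest chunk rather than the whole model. The sketch below shows only that scheduling skeleton; `make_adapter` (attaching trainable low-rank adapters to a block) and `train_chunk` (the inner training loop) are hypothetical placeholders, and the chunk size is an illustrative choice.

```python
def chunked_finetune(model_layers, make_adapter, train_chunk, chunk_size=8):
    """Sequentially fine-tune `chunk_size` layers at a time (illustrative schedule).

    model_layers: list of transformer blocks (nn.Module); make_adapter attaches
    trainable low-rank adapters to one block; train_chunk runs the usual training
    loop on whatever parameters currently require gradients.
    """
    for layer in model_layers:
        for p in layer.parameters():
            p.requires_grad_(False)                   # everything frozen by default
    for start in range(0, len(model_layers), chunk_size):
        chunk = model_layers[start:start + chunk_size]
        adapters = [make_adapter(layer) for layer in chunk]   # only these are trainable
        train_chunk(chunk, adapters)                  # fits in fixed GPU memory
        for a in adapters:
            for p in a.parameters():
                p.requires_grad_(False)               # freeze before moving to the next chunk
```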
null
https://openreview.net/forum?id=I2aJVWHA93
@inproceedings{ sanyal2023early, title={Early Weight Averaging meets High Learning Rates for {LLM} Pre-training}, author={Sunny Sanyal and Atula Neerkaje and Jean Kaddour and Abhishek Kumar and sujay sanghavi}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=I2aJVWHA93} }
Training Large Language Models (LLMs) incurs significant cost; hence, any strategy that accelerates model convergence is helpful. In this paper, we investigate the ability of a simple idea – checkpoint averaging along the trajectory of a training run – to improve both convergence and generalization quite early during training. Here we show that models trained with high learning rates observe higher gains due to checkpoint averaging. Furthermore, these gains are amplified when checkpoints are sampled with considerable spacing in training steps. Our training recipe outperforms conventional training and popular checkpoint averaging baselines such as the exponential moving average (EMA) and stochastic weight averaging (SWA). We evaluate our training recipe by pre-training LLMs, where high learning rates are inherently preferred due to extremely large batch sizes. Specifically, we pre-trained nanoGPT-2 models of varying sizes—small (125M), medium (335M), and large (770M)—on the OpenWebText dataset, comprising 9B tokens. Additionally, we present results for publicly available Pythia LLMs, ranging from 1B to 12B, which were trained on the PILE-deduped dataset containing 207B tokens. Code is available at https://github.com/sanyalsunny111/Early_Weight_Avg.
Early Weight Averaging meets High Learning Rates for LLM Pre-training
[ "Sunny Sanyal", "Atula Tejaswi Neerkaje", "Jean Kaddour", "Abhishek Kumar", "sujay sanghavi" ]
Workshop/WANT
poster
2306.03241
[ "https://github.com/sanyalsunny111/early_weight_avg" ]
https://huggingface.co/papers/2306.03241
0
2
0
5
1
[]
[]
[]
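The recipe above averages checkpoints sampled with considerable spacing along a high-learning-rate run. A sketch of such an averaging utility; the checkpoint interval, buffer length, and the choice to evaluate a separate averaged copy are illustrative assumptions, not the paper's exact settings.

```python
import copy
import torch

class SpacedCheckpointAverager:
    """Keep the last `k` checkpoints sampled every `interval` steps and average them."""
    def __init__(self, interval=1000, k=5):
        self.interval, self.k, self.buffer = interval, k, []

    def maybe_store(self, step, model):
        if step % self.interval == 0:
            self.buffer.append(copy.deepcopy(model.state_dict()))
            self.buffer = self.buffer[-self.k:]       # widely spaced, recent checkpoints

    def averaged_model(self, model):
        # average the stored checkpoints into a copy; keep training the live model
        avg = copy.deepcopy(model)
        state = {name: torch.stack([sd[name].float() for sd in self.buffer]).mean(0)
                       .to(self.buffer[0][name].dtype)
                 for name in self.buffer[0]}
        avg.load_state_dict(state)
        return avg
```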
null
https://openreview.net/forum?id=HyyhUIdx20
@inproceedings{ lisicki2023banditdriven, title={Bandit-Driven Batch Selection for Robust Learning under Label Noise}, author={Michal Lisicki and Graham W. Taylor and Mihai Nica}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=HyyhUIdx20} }
We introduce a novel approach for batch selection in Stochastic Gradient Descent (SGD) training, leveraging combinatorial bandit algorithms. Our methodology focuses on optimizing the learning process in the presence of label noise, a prevalent issue in real-world datasets. Experimental evaluations on the CIFAR-10 dataset reveal that our approach consistently outperforms existing methods across various levels of label corruption. Importantly, we achieve this superior performance without incurring the computational overhead commonly associated with auxiliary neural network models. This work presents a balanced trade-off between computational efficiency and model efficacy, offering a scalable solution for complex machine learning applications.
Bandit-Driven Batch Selection for Robust Learning under Label Noise
[ "Michal Lisicki", "Graham W. Taylor", "Mihai Nica" ]
Workshop/WANT
poster
2311.00096
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HsuDlFYL82
@inproceedings{ horv{\'a}th2023maestro, title={Maestro: Uncovering Low-Rank Structures via Trainable Decomposition}, author={Samuel Horv{\'a}th and Stefanos Laskaridis and Shashank Rajput and Hongyi Wang}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=HsuDlFYL82} }
Deep Neural Networks (DNNs) have been a large driver and enabler for AI breakthroughs in recent years. These models have been getting larger in their attempt to become more accurate and tackle new upcoming use-cases, including AR/VR and intelligent assistants. However, the training process of such large models is a costly and time-consuming process, which typically yields a single model to fit all targets. To mitigate this, various techniques have been proposed in the literature, including pruning, sparsification or quantization of the model weights and updates. While able to achieve high compression rates, they often incur computational overheads or accuracy penalties. Alternatively, factorization methods have been leveraged to incorporate low-rank compression in the training process. Similarly, such techniques (e.g., SVD) frequently rely on the computationally expensive decomposition of layers and are potentially sub-optimal for non-linear models, such as DNNs. In this work, we take a further step in designing efficient low-rank models and propose Maestro, a framework for trainable low-rank layers. Instead of regularly applying a priori decompositions such as SVD, the low-rank structure is built into the training process through a generalized variant of Ordered Dropout. This method imposes an importance ordering via sampling on the decomposed DNN structure. Our theoretical analysis demonstrates that our method recovers the SVD decomposition of linear mapping on uniformly distributed data and PCA for linear autoencoders. We further apply our technique on DNNs and empirically illustrate that Maestro enables the extraction of lower footprint models that preserve model performance while allowing for graceful accuracy-latency tradeoff for the deployment to devices of different capabilities.
Maestro: Uncovering Low-Rank Structures via Trainable Decomposition
[ "Samuel Horváth", "Stefanos Laskaridis", "Shashank Rajput", "Hongyi Wang" ]
Workshop/WANT
poster
2308.14929
[ "https://github.com/samuelhorvath/maestro-lod" ]
https://huggingface.co/papers/2308.14929
1
1
0
4
1
[]
[]
[]
null
https://openreview.net/forum?id=GXl0agrQnI
@inproceedings{ sahbi2023tiny, title={Tiny Graph Convolutional Networks with Topologically Consistent Magnitude Pruning}, author={Hichem Sahbi}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=GXl0agrQnI} }
Magnitude pruning is one of the mainstream methods in lightweight architecture design whose goal is to extract subnetworks with the largest weight connections. This method is known to be successful, but under very high pruning regimes, it suffers from topological inconsistency which renders the extracted subnetworks disconnected, and this hinders their generalization ability. In this paper, we devise a novel end-to-end Topologically Consistent Magnitude Pruning (TCMP) method that allows extracting subnetworks while guaranteeing their topological consistency. The latter ensures that only accessible and co-accessible --- impactful --- connections are kept in the resulting lightweight architectures. Our solution is based on a novel reparametrization and two supervisory bi-directional networks which implement accessibility/co-accessibility and guarantee that only connected subnetworks will be selected during training. This solution allows enhancing generalization significantly, under very high pruning regimes, as corroborated through extensive experiments, involving graph convolutional networks, on the challenging task of skeleton-based action recognition.
Tiny Graph Convolutional Networks with Topologically Consistent Magnitude Pruning
[ "Hichem Sahbi" ]
Workshop/WANT
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=93BaEweoRg
@inproceedings{ devvrit2023matformer, title={MatFormer: Nested Transformer for Elastic Inference}, author={Fnu Devvrit and Sneha Kudugunta and Aditya Kusupati and Tim Dettmers and Kaifeng Chen and Inderjit Dhillon and Yulia Tsvetkov and Hannaneh Hajishirzi and Sham Kakade and Ali Farhadi and Prateek Jain}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=93BaEweoRg} }
Transformer models are deployed in a wide range of settings, from multi-accelerator clusters to standalone mobile phones. The diverse inference constraints in these scenarios necessitate practitioners to train foundation models such as PaLM 2, Llama, & ViTs as a series of models of varying sizes. Due to significant training costs, only a select few model sizes are trained and supported, limiting more fine-grained control over relevant tradeoffs, including latency, cost, and accuracy. This work introduces MatFormer, a nested Transformer architecture designed to offer elasticity in a variety of deployment constraints. Each Feed Forward Network (FFN) block of a MatFormer model is jointly optimized with a few nested smaller FFN blocks. This training procedure allows for the Mix'n'Match of model granularities across layers -- i.e., a trained universal MatFormer model enables extraction of hundreds of accurate smaller models, which were never explicitly optimized. We empirically demonstrate MatFormer's effectiveness across different model classes (decoders & encoders), modalities (language & vision), and scales (up to 2.6B parameters). We find that a 2.6B decoder-only MatFormer language model (MatLM) allows us to extract smaller models spanning from 1.5B to 2.6B, each exhibiting comparable validation loss and one-shot downstream evaluations to their independently trained counterparts. Furthermore, we observe that smaller encoders extracted from a universal MatFormer-based ViT (MatViT) encoder preserve the metric-space structure for adaptive large-scale retrieval. Finally, we showcase that speculative decoding with the accurate and consistent submodels extracted from MatFormer can further reduce inference latency.
MatFormer: Nested Transformer for Elastic Inference
[ "Fnu Devvrit", "Sneha Kudugunta", "Aditya Kusupati", "Tim Dettmers", "Kaifeng Chen", "Inderjit S Dhillon", "Yulia Tsvetkov", "Hannaneh Hajishirzi", "Sham M. Kakade", "Ali Farhadi", "Prateek Jain" ]
Workshop/WANT
oral
2310.07707
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
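MatFormer jointly trains an FFN block with a few nested sub-blocks, so smaller models can later be sliced out (the first m hidden units) without retraining. Below is a sketch of one nested FFN and a joint loss over its granularities; the granularity set, the uniform loss weighting, and the toy regression objective are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NestedFFN(nn.Module):
    """FFN whose first `m` hidden units form a self-contained smaller FFN."""
    def __init__(self, d_model=512, d_ff=2048, granularities=(256, 512, 1024, 2048)):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)
        self.granularities = granularities

    def forward(self, x, m=None):
        m = m or self.granularities[-1]
        h = F.gelu(F.linear(x, self.up.weight[:m], self.up.bias[:m]))   # first m hidden units
        return F.linear(h, self.down.weight[:, :m], self.down.bias)

ffn = NestedFFN()
x, target = torch.randn(4, 512), torch.randn(4, 512)
# joint training: average the loss over all nested granularities
loss = sum(F.mse_loss(ffn(x, m), target) for m in ffn.granularities) / len(ffn.granularities)
loss.backward()
# after training, ffn(x, m=256) acts as a smaller model extracted for free
```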
null
https://openreview.net/forum?id=8ApMP162qy
@inproceedings{ shaikh2023donuthole, title={{DONUT}-hole: {DONUT} Sparsification by Harnessing Knowledge and Optimizing Learning Efficiency}, author={Azhar Shaikh and Michael Cochez and Denis Diachkov and Michiel de Rijcke and Sahar Yousefi}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=8ApMP162qy} }
This paper introduces DONUT-hole, a sparse OCR-free visual document understanding (VDU) model that addresses the limitations of its predecessor model, dubbed DONUT. The DONUT model leverages a transformer architecture to overcome the challenges of separate optical character recognition (OCR) and visual semantic understanding (VSU) components. However, its deployment in production environments and edge devices is hindered by high memory and computational demands, particularly in large-scale request services. To overcome these challenges, we propose an optimization strategy based on knowledge distillation and model pruning. Our paradigm for producing DONUT-hole reduces the model density by 54\% while preserving performance. We also measure a global representational similarity index of 0.79 between DONUT and DONUT-hole, based on the centered kernel alignment (CKA) metric. Moreover, we evaluate the effectiveness of DONUT-hole on the document image key information extraction (KIE) task, highlighting its potential for developing more efficient VDU systems for logistic companies.
DONUT-hole: DONUT Sparsification by Harnessing Knowledge and Optimizing Learning Efficiency
[ "Azhar Shaikh", "Michael Cochez", "Denis Diachkov", "Michiel de Rijcke", "Sahar Yousefi" ]
Workshop/WANT
poster
2311.05778
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=6s77hjBNfS
@inproceedings{ xia2023sheared, title={Sheared {LL}a{MA}: Accelerating Language Model Pre-training via Structured Pruning}, author={Mengzhou Xia and Tianyu Gao and Zhiyuan Zeng and Danqi Chen}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=6s77hjBNfS} }
The popularity of LLaMA and other recently emerged moderate-sized large language models (LLMs) highlights the potential of building smaller yet powerful LLMs. Regardless, the cost of training such models from scratch on trillions of tokens remains high. In this work, we study structured pruning as an effective means to develop smaller LLMs from pre-trained, larger models. Our approach employs two key techniques: (1) targeted structured pruning, which prunes a larger model to a specified target shape by removing layers, heads, intermediate and hidden dimensions in an end-to-end manner, and (2) dynamic batch loading, which dynamically updates the composition of sampled data in each training batch based on varying losses across different domains. We demonstrate the efficacy of our approach by presenting the Sheared-LLaMA series, pruning the LLaMA2-7B model down to 1.3B and 2.7B parameters. Sheared-LLaMA models outperform state-of-the-art open-source models of equivalent sizes, such as Pythia, INCITE, and OpenLLaMA models, on a wide range of downstream and instruction tuning evaluations, while requiring less than 3% of compute compared to training such models from scratch. This work provides compelling evidence that leveraging existing LLMs with structured pruning is a far more cost-effective approach for building smaller LLMs.
Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
[ "Mengzhou Xia", "Tianyu Gao", "Zhiyuan Zeng", "Danqi Chen" ]
Workshop/WANT
poster
2310.06694
[ "https://github.com/princeton-nlp/llm-shearing" ]
https://huggingface.co/papers/2310.06694
2
3
1
4
1
[ "princeton-nlp/Sheared-LLaMA-1.3B", "princeton-nlp/Sheared-LLaMA-2.7B", "llama-moe/LLaMA-MoE-v1-3_5B-2_8", "llama-moe/LLaMA-MoE-v1-3_5B-4_16", "llama-moe/LLaMA-MoE-v1-3_0B-2_16", "princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT", "princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT", "princeton-nlp/Sheared-Pythia-160m", "Aryanne/Sheared-LLaMA-2.7B-gguf", "LakoMoor/Sheared-LLaMA-1.3B-ShareGPT-GGUF", "princeton-nlp/Sheared-LLaMA-2.7B-Pruned", "princeton-nlp/Sheared-LLaMA-1.3B-Pruned", "LoneStriker/Sheared-LLaMA-2.7B-ShareGPT-6.0bpw-h6-exl2", "LoneStriker/Sheared-LLaMA-2.7B-ShareGPT-4.0bpw-h6-exl2", "LoneStriker/Sheared-LLaMA-2.7B-ShareGPT-3.0bpw-h6-exl2", "LoneStriker/Sheared-LLaMA-2.7B-ShareGPT-5.0bpw-h6-exl2", "LoneStriker/Sheared-LLaMA-2.7B-ShareGPT-8.0bpw-h8-exl2", "ahnyeonchan/legendary-river-koalpaca", "gradientai/Sheared-LLaMA-1.3B-ShareGPT-jax" ]
[]
[ "AnishKumbhar/princeton-nlp-Sheared-LLaMA-2.7B", "theodac/princeton-nlp-Sheared-LLaMA-1.3B", "Thenujan/princeton-nlp-Sheared-LLaMA-1.3B", "Kjgarza/metadata_transformer", "Vexvoi/princeton-nlp-Sheared-LLaMA-2.7B" ]
null
https://openreview.net/forum?id=4qpwILUP5N
@inproceedings{ aouad2023a, title={A foundation for exact binarized morphological neural networks}, author={Theodore Aouad and Hugues Talbot}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=4qpwILUP5N} }
Training and running deep neural networks (NNs) often demands a lot of computation and energy-intensive specialized hardware (e.g. GPU, TPU...). One way to reduce the computation and power cost is to use binary weight NNs, but these are hard to train because the sign function has a non-smooth gradient. We present a model based on Mathematical Morphology (MM), which can binarize ConvNets without losing performance under certain conditions, but these conditions may not be easy to satisfy in real-world scenarios. To solve this, we propose two new approximation methods and develop a robust theoretical framework for ConvNet binarization using MM. We also propose regularization losses to improve the optimization. We empirically show that our model can learn a complex morphological network, and explore its performance on a classification task.
A foundation for exact binarized morphological neural networks
[ "Theodore Aouad", "Hugues Talbot" ]
Workshop/WANT
poster
2401.03830
[ "https://github.com/TheodoreAouad/LBQNN2023" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=4XtUj6Uzt3
@inproceedings{ li2023training, title={Training Bayesian Neural Networks with Sparse Subspace Variational Inference}, author={Junbo Li and Zichen Miao and Qiang Qiu and Ruqi Zhang}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=4XtUj6Uzt3} }
Bayesian neural networks (BNNs) offer uncertainty quantification but come with the downside of substantially increased training and inference costs. Sparse BNNs have been investigated for efficient inference, typically by either slowly introducing sparsity throughout the training or by post-training compression of dense BNNs. The dilemma of how to cut down massive training costs remains, particularly given the requirement to learn about the uncertainty. To solve this challenge, we introduce Sparse Subspace Variational Inference (SSVI), the first fully sparse BNN framework that maintains a consistently sparse Bayesian model throughout the training and inference phases. Starting from a randomly initialized low-dimensional sparse subspace, our approach alternately optimizes the sparse subspace basis selection and its associated parameters. While basis selection is characterized as a non-differentiable problem, we approximate the optimal solution with a removal-and-addition strategy, guided by novel criteria based on weight distribution statistics. Our extensive experiments show that SSVI sets new benchmarks in crafting sparse BNNs, achieving, for instance, a 10-20× compression in model size with comparable performance, and up to 20× FLOPs reduction during training. Remarkably, SSVI also demonstrates enhanced robustness to hyperparameters, reducing the need for intricate tuning in VI and occasionally even surpassing VI-trained dense BNNs.
Training Bayesian Neural Networks with Sparse Subspace Variational Inference
[ "Junbo Li", "Zichen Miao", "Qiang Qiu", "Ruqi Zhang" ]
Workshop/WANT
poster
2402.11025
[ "https://github.com/ljb121002/ssvi" ]
https://huggingface.co/papers/2402.11025
0
0
0
4
1
[]
[]
[]
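A minimal sketch of the removal-and-addition idea from the SSVI abstract: active weights with the weakest signal-to-noise ratio leave the sparse subspace and an equal number of inactive weights are added back. The criterion and the random re-growth used here are simplifications of the weight-distribution statistics described in the paper.

```python
import torch

def update_sparse_basis(mu, rho, mask, n_swap):
    """One removal-and-addition step on a sparse weight mask.

    mu, rho: variational mean and (softplus-parameterized) scale of each weight.
    mask:    boolean tensor, True where the weight is in the active subspace.
    n_swap:  number of weights to remove and to add.
    Removal uses a signal-to-noise criterion |mu| / sigma (a simplification);
    addition is random here.
    """
    sigma = torch.nn.functional.softplus(rho)
    snr = (mu.abs() / (sigma + 1e-12)) * mask  # inactive weights score 0
    # Remove the n_swap active weights with the lowest signal-to-noise ratio.
    active_idx = mask.nonzero(as_tuple=True)[0]
    worst = active_idx[snr[active_idx].argsort()[:n_swap]]
    mask[worst] = False
    # Add n_swap currently inactive weights back into the subspace.
    inactive_idx = (~mask).nonzero(as_tuple=True)[0]
    grown = inactive_idx[torch.randperm(inactive_idx.numel())[:n_swap]]
    mask[grown] = True
    return mask

# Hypothetical usage on a flattened layer of 1000 weights at 90% sparsity.
mu, rho = torch.randn(1000), torch.randn(1000)
mask = torch.zeros(1000, dtype=torch.bool)
mask[:100] = True
mask = update_sparse_basis(mu, rho, mask, n_swap=10)
print(mask.sum().item())  # still 100 active weights
```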
null
https://openreview.net/forum?id=4CLNFKi12w
@inproceedings{ chitale2023task, title={Task Arithmetic with Lo{RA} for Continual Learning}, author={Rajas Chitale and Ankit Vaidya and Aditya Kane and Archana Santosh Ghotkar}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=4CLNFKi12w} }
Continual learning refers to the problem where the training data is available in sequential chunks, termed "tasks". Progress in continual learning has largely been stunted by the problem of catastrophic forgetting, which is caused by sequentially training the model on streams of data. Moreover, it becomes computationally expensive to sequentially train large models multiple times. To mitigate both of these problems at once, we propose a novel method to continually train transformer-based vision models using low-rank adaptation and task arithmetic. Our method completely bypasses the problem of catastrophic forgetting, while also reducing the computational requirement for training models on each task. When aided with a small memory of 10 samples per class, our method achieves performance close to full-set finetuning. We present rigorous ablations to support the prowess of our method.
Task Arithmetic with LoRA for Continual Learning
[ "Rajas Chitale", "Ankit Vaidya", "Aditya Kane", "Archana Santosh Ghotkar" ]
Workshop/WANT
poster
2311.02428
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
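The core operation behind combining task arithmetic with LoRA, as described in the abstract above, is adding the low-rank task vectors learned on each task to the frozen base weights. Below is a minimal sketch under that reading; the shapes, scaling, and merging of only a single projection matrix are illustrative assumptions.

```python
import torch

def merge_lora_tasks(base_weight, lora_pairs, alpha=1.0):
    """Task arithmetic over LoRA adapters: sum the low-rank task vectors
    Delta_t = B_t @ A_t learned on each task and add them to the frozen base weight.

    base_weight: (out, in) frozen pre-trained matrix.
    lora_pairs:  list of (A, B) with A of shape (r, in) and B of shape (out, r).
    alpha:       scaling applied to the summed task vector.
    """
    task_vector = sum(B @ A for A, B in lora_pairs)
    return base_weight + alpha * task_vector

# Hypothetical usage: two tasks, rank-4 adapters on a 64x64 projection.
base = torch.randn(64, 64)
adapters = [(torch.randn(4, 64) * 0.01, torch.randn(64, 4) * 0.01) for _ in range(2)]
merged = merge_lora_tasks(base, adapters, alpha=1.0)
print(merged.shape)  # torch.Size([64, 64])
```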
null
https://openreview.net/forum?id=3GL2GETaaL
@inproceedings{ bellinger2023dynamic, title={Dynamic Observation Policies in Observation Cost-Sensitive Reinforcement Learning}, author={Colin Bellinger and Mark Crowley and Isaac Tamblyn}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=3GL2GETaaL} }
Reinforcement learning (RL) has been shown to learn sophisticated control policies for complex tasks including games, robotics, heating and cooling systems and text generation. The action-perception cycle in RL, however, generally assumes that a measurement of the state of the environment is available at each time step without a cost. In applications such as materials design, deep-sea and planetary robot exploration and medicine, however, there can be a high cost associated with measuring, or even approximating, the state of the environment. In this paper, we survey the recently growing literature that adopts the perspective that an RL agent might not need, or even want, a costly measurement at each time step. Within this context, we propose the Deep Dynamic Multi-Step Observationless Agent (DMSOA), contrast it with the literature and empirically evaluate it on OpenAI gym and Atari Pong environments. Our results show that DMSOA learns a better policy with fewer decision steps and measurements than the considered alternative from the literature.
Dynamic Observation Policies in Observation Cost-Sensitive Reinforcement Learning
[ "Colin Bellinger", "Mark Crowley", "Isaac Tamblyn" ]
Workshop/WANT
poster
2307.02620
[ "https://github.com/cbellinger27/learning-when-to-observe-in-rl" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=2KYHntOYKw
@inproceedings{ fang2023cooperative, title={Cooperative Learning for Cost-Adaptive Inference}, author={Xingli Fang and Richard M Bradford and Jung-Eun Kim}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=2KYHntOYKw} }
We propose a cooperative training framework for deep neural network architectures that enables the runtime network depths to change to satisfy dynamic computing resource requirements. In our framework, the number of layers participating in computation can be chosen dynamically to meet performance-cost trade-offs at inference runtime. Our method trains two Teammate nets and a Leader net, and two sets of Teammate sub-networks with various depths through knowledge distillation. The Teammate nets derive sub-networks and transfer knowledge to them, and to each other, while the Leader net guides Teammate nets to ensure accuracy. The approach trains the framework atomically at once instead of individually training various sizes of models; in a sense, the various-sized networks are all trained at once, in a "package deal." The proposed framework is not tied to any specific architecture but can incorporate any existing models/architectures, therefore it can maintain stable results and is insensitive to the size of a dataset's feature map. Compared with other related approaches, it provides comparable accuracy to its full network while various sizes of models are available.
Cooperative Learning for Cost-Adaptive Inference
[ "Xingli Fang", "Richard M Bradford", "Jung-Eun Kim" ]
Workshop/WANT
poster
2312.08532
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=0UfULsElsz
@inproceedings{ tessera2023generalisable, title={Generalisable Agents for Neural Network Optimisation}, author={Kale-ab Tessera and Callum Tilbury and Sasha Abramowitz and Ruan de Kock and Omayma Mahjoub and Benjamin Rosman and Sara Hooker and Arnu Pretorius}, booktitle={Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@NeurIPS 2023)}, year={2023}, url={https://openreview.net/forum?id=0UfULsElsz} }
Optimising deep neural networks is a challenging task due to complex training dynamics, high computational requirements, and long training times. To address this difficulty, we propose the framework of Generalisable Agents for Neural Network Optimisation (GANNO)---a multi-agent reinforcement learning (MARL) approach that learns to improve neural network optimisation by dynamically and responsively scheduling hyperparameters during training. GANNO utilises an agent per layer that observes localised network dynamics and accordingly takes actions to adjust these dynamics at a layerwise level to collectively improve global performance. In this paper, we use GANNO to control the layerwise learning rate and show that the framework can yield useful and responsive schedules that are competitive with handcrafted heuristics. Furthermore, GANNO is shown to perform robustly across a wide variety of unseen initial conditions, and can successfully generalise to harder problems than it was trained on. Our work presents an overview of the opportunities that this paradigm offers for training neural networks, along with key challenges that remain to be overcome.
Generalisable Agents for Neural Network Optimisation
[ "Kale-ab Tessera", "Callum Rhys Tilbury", "Sasha Abramowitz", "Ruan John de Kock", "Omayma Mahjoub", "Benjamin Rosman", "Sara Hooker", "Arnu Pretorius" ]
Workshop/WANT
poster
2311.18598
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
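The layerwise learning-rate scheduling that GANNO's per-layer agents control can be pictured with a small sketch: each agent emits a multiplicative action for its layer's learning rate. The MARL policies themselves are omitted; the action values, bounds, and one-group-per-layer optimizer setup below are assumptions.

```python
import torch

def apply_layerwise_lr_actions(optimizer, actions, lr_bounds=(1e-5, 1e-1)):
    """Apply one multiplicative learning-rate action per parameter group (layer).

    optimizer: a torch optimizer whose param_groups are one-per-layer.
    actions:   list of multipliers chosen by per-layer agents (e.g. 0.5, 1.0, 2.0).
    The agents (MARL policies over layerwise observations) are out of scope here;
    this only shows the layerwise-scheduling interface they would act on.
    """
    lo, hi = lr_bounds
    for group, a in zip(optimizer.param_groups, actions):
        group["lr"] = float(min(max(group["lr"] * a, lo), hi))

# Hypothetical usage: a two-layer model with one param group per layer.
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 2))
opt = torch.optim.SGD([{"params": m.parameters(), "lr": 1e-2} for m in model], lr=1e-2)
apply_layerwise_lr_actions(opt, actions=[0.5, 2.0])
print([g["lr"] for g in opt.param_groups])  # [0.005, 0.02]
```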
null
https://openreview.net/forum?id=yQcebEgQfH
@inproceedings{ jing2023alphafold, title={AlphaFold Meets Flow Matching for Generating Protein Ensembles}, author={Bowen Jing and Bonnie Berger and Tommi Jaakkola}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=yQcebEgQfH} }
Recent breakthroughs in protein structure prediction have pointed to structural ensembles as the next frontier in the computational understanding of protein structure. At the same time, iterative refinement techniques such as diffusion have driven significant advancements in generative modeling. We explore the synergy of these developments by combining AlphaFold and ESMFold with flow matching, a powerful modern generative modeling framework, in order to sample the conformational landscape of proteins. When trained on the PDB and evaluated on proteins with multiple recent structures, our method produces ensembles with similar precision and greater diversity compared to MSA subsampling. When further fine-tuned on coarse-grained molecular dynamics trajectories, our model generalizes to unseen proteins and accurately predicts conformational flexibility, captures the joint distribution of atomic positions, and models higher-order physiochemical properties such as intermittent contacts and solvent exposure. These results open exciting avenues in the computational prediction of conformational flexibility.
AlphaFold Meets Flow Matching for Generating Protein Ensembles
[ "Bowen Jing", "Bonnie Berger", "Tommi Jaakkola" ]
Workshop/GenBio
oral
2402.04845
[ "https://github.com/bjing2016/alphaflow" ]
-1
-1
-1
-1
0
[]
[]
[]
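A generic sketch of the flow-matching objective referenced in the abstract above, applied to 3D coordinates: the network regresses the straight-line velocity between a prior sample and the target structure at a random interpolation time. This is plain conditional flow matching under assumed shapes, not the AlphaFold/ESMFold-coupled architecture used in the paper.

```python
import torch

def conditional_flow_matching_loss(model, x1, x0=None):
    """One step of a conditional flow-matching objective on point coordinates.

    x1: target coordinates, shape (batch, n_atoms, 3), e.g. a protein structure.
    x0: samples from the prior; standard Gaussian noise if not given.
    The model regresses the straight-line velocity x1 - x0 at the interpolant
    x_t = (1 - t) * x0 + t * x1.
    """
    if x0 is None:
        x0 = torch.randn_like(x1)
    t = torch.rand(x1.shape[0], 1, 1)          # one time per example
    xt = (1.0 - t) * x0 + t * x1               # linear interpolant
    target_velocity = x1 - x0
    pred_velocity = model(xt, t.squeeze(-1).squeeze(-1))
    return ((pred_velocity - target_velocity) ** 2).mean()

# Hypothetical usage with a toy velocity network.
class ToyVelocity(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(4, 3)
    def forward(self, x, t):
        t_feat = t.view(-1, 1, 1).expand(x.shape[0], x.shape[1], 1)
        return self.net(torch.cat([x, t_feat], dim=-1))

loss = conditional_flow_matching_loss(ToyVelocity(), torch.randn(8, 16, 3))
loss.backward()
```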
null
https://openreview.net/forum?id=y14a0nIECr
@inproceedings{ frey2023protein, title={Protein Discovery with Discrete Walk-Jump Sampling}, author={Nathan Frey and Dan Berenberg and Karina Zadorozhny and Joseph Kleinhenz and Julien Lafrance-Vanasse and Isidro Hotzel and Yan Wu and Stephen Ra and Richard Bonneau and Kyunghyun Cho and Vladimir Gligorijevic and Saeed Saremi}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=y14a0nIECr} }
We resolve difficulties in training and sampling from a discrete generative model by learning a smoothed energy function, sampling from the smoothed data manifold with Langevin Markov chain Monte Carlo (MCMC), and projecting back to the true data manifold with one-step denoising. Our $\textit{Discrete Walk-Jump Sampling}$ formalism combines the contrastive divergence training of an energy-based model and improved sample quality of a score-based model, while simplifying training and sampling by requiring only a single noise level. We evaluate the robustness of our approach on generative modeling of antibody proteins and introduce the $\textit{distributional conformity score}$ to benchmark protein generative models. By optimizing and sampling from our models for the proposed distributional conformity score, 97-100\% of generated samples are successfully expressed and purified and 70\% of functional designs show equal or improved binding affinity compared to known functional antibodies on the first attempt in a single round of laboratory experiments. We also report the first demonstration of long-run fast-mixing MCMC chains where diverse antibody protein classes are visited in a single MCMC chain.
Protein Discovery with Discrete Walk-Jump Sampling
[ "Nathan Frey", "Dan Berenberg", "Karina Zadorozhny", "Joseph Kleinhenz", "Julien Lafrance-Vanasse", "Isidro Hotzel", "Yan Wu", "Stephen Ra", "Richard Bonneau", "Kyunghyun Cho", "Vladimir Gligorijevic", "Saeed Saremi" ]
Workshop/GenBio
oral
2306.12360
[ "https://github.com/prescient-design/walk-jump" ]
-1
-1
-1
-1
0
[]
[]
[]
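The walk-jump procedure described in the abstract above can be sketched as Langevin MCMC on a sigma-smoothed density (the walk) followed by a single Tweedie denoising step (the jump). The score function, noise level, and step schedule below are placeholders, and the discrete antibody-sequence parameterization used in the paper is omitted.

```python
import torch

def walk_jump_sample(score_fn, y0, sigma, n_steps=100, step_size=1e-3):
    """Minimal sketch of walk-jump sampling.

    Walk: unadjusted Langevin MCMC on the sigma-smoothed data distribution, using a
    learned score function score_fn(y) of the noisy manifold.
    Jump: one-step denoising back toward the data manifold via Tweedie's formula,
    x_hat = y + sigma^2 * score_fn(y).
    """
    y = y0.clone()
    for _ in range(n_steps):                      # walk on the smoothed manifold
        noise = torch.randn_like(y)
        y = y + step_size * score_fn(y) + (2.0 * step_size) ** 0.5 * noise
    x_hat = y + sigma ** 2 * score_fn(y)          # jump: one-step denoise
    return x_hat

# Hypothetical usage with the score of a Gaussian-smoothed standard normal target.
sigma = 0.5
score_fn = lambda y: -y / (1.0 + sigma ** 2)      # score of N(0, (1 + sigma^2) I)
samples = walk_jump_sample(score_fn, torch.randn(16, 8), sigma)
print(samples.shape)
```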
null
https://openreview.net/forum?id=vXXEfmYsvS
@inproceedings{ teufel2023secretogen, title={SecretoGen: towards prediction of signal peptides for efficient protein secretion}, author={Felix Teufel and Carsten Stahlhut and Jan Refsgaard and Henrik Nielsen and Ole Winther and Dennis Madsen}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=vXXEfmYsvS} }
Signal peptides (SPs) are short sequences at the N terminus of proteins that control their secretion in all living organisms. Secretion is of great importance in biotechnology, as industrial production of proteins in host organisms often requires the proteins to be secreted. SPs have varying secretion efficiency that is dependent both on the host organism and the protein they are combined with. Therefore, to optimize production yields, an SP with good efficiency needs to be identified for each protein. While SPs can be predicted accurately by machine learning models, such models have so far shown limited utility for predicting secretion efficiency. We introduce **SecretoGen**, a generative transformer trained on millions of naturally occurring SPs from diverse organisms. Evaluation on a range of secretion efficiency datasets show that SecretoGen's perplexity has promising performance for selecting efficient SPs, without requiring training on experimental efficiency data.
SecretoGen: towards prediction of signal peptides for efficient protein secretion
[ "Felix Teufel", "Carsten Stahlhut", "Jan Refsgaard", "Henrik Nielsen", "Ole Winther", "Dennis Madsen" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=sWCsSKqkXa
@inproceedings{ sternke2023proteinrl, title={Protein{RL}: Reinforcement learning with generative protein language models for property-directed sequence design}, author={Matt Sternke and Joel Karpiak}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=sWCsSKqkXa} }
The overarching goal of protein engineering is the design and optimization of proteins customized for specific purposes. Generative protein language models (PLMs) allow for \textit{de novo} protein sequence generation, however current PLMs lack capabilities for controllable sequence generation of sequences tailored with desired properties. Here we present ProteinRL, a flexible, data-driven reinforcement learning framework for fine-tuning generative PLMs for the \textit{de novo} design of sequences optimized for specific sequence and/or structural properties. We highlight two example cases of realistic protein design goals: a single-objective design for sequences containing unusually high charge content, and a multi-objective design scenario of a hit expansion, diversifying a target sequence with generated sequences having high-confidence structure predictions and high probability predictions of soluble expression. In both cases ProteinRL fine-tuning guides the PLM towards generating sequences optimized for the defined properties, extending to values rarely or never seen in natural sequences or sequences generated without ProteinRL fine-tuning. The demonstrated success and adaptability of the ProteinRL framework allows for the \textit{de novo} design of novel protein sequences optimized for applications across many areas of protein engineering.
ProteinRL: Reinforcement learning with generative protein language models for property-directed sequence design
[ "Matt Sternke", "Joel Karpiak" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=qP69kXPdJM
@inproceedings{ alamdari2023protein, title={Protein generation with evolutionary diffusion}, author={Sarah Alamdari and Nitya Thakkar and Rianne van den Berg and Alex Lu and Nicolo Fusi and Ava Amini and Kevin Yang}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=qP69kXPdJM} }
Diffusion models have demonstrated the ability to generate biologically plausible proteins that are dissimilar to any proteins seen in nature, enabling unprecedented capability and control in de novo protein design. However, current state-of-the-art diffusion models generate protein structures, which limits the scope of their training data and restricts generations to a small and biased subset of protein space. We introduce a general-purpose diffusion framework, EvoDiff, that combines evolutionary-scale data with the conditioning capabilities of diffusion models for controllable protein generation in sequence space. EvoDiff generates high-fidelity, diverse, structurally-plausible proteins that cover natural sequence and functional space. Critically, EvoDiff can generate proteins inaccessible to structure-based models, such as those with disordered regions, and design scaffolds for functional structural motifs, demonstrating the universality of our sequence-based formulation. We envision that EvoDiff will expand capabilities in protein engineering beyond the structure-function paradigm toward programmable, sequence-first design.
Protein generation with evolutionary diffusion: sequence is all you need
[ "Sarah Alamdari", "Nitya Thakkar", "Rianne van den Berg", "Alex Lu", "Nicolo Fusi", "Ava Amini", "Kevin Yang" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=mMKxmeBGTb
@inproceedings{ shanehsazzadeh2023textitin, title={\${\textbackslash}textit\{In vitro\}\$ validated antibody design against multiple therapeutic antigens using generative inverse folding}, author={Amir Shanehsazzadeh}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=mMKxmeBGTb} }
Deep learning approaches have demonstrated the ability to design protein sequences given backbone structures. While these approaches have been applied $\textit{in silico}$ to designing antibody complementarity-determining regions (CDRs), they have yet to be validated $\textit{in vitro}$ for designing antibody binders, which is the true measure of success for antibody design. Here we describe $\textit{IgDesign}$, a deep learning method for antibody CDR design, and demonstrate its robustness with successful binder design for 8 therapeutic antigens. The model is tasked with designing heavy chain CDR3 (HCDR3) or all three heavy chain CDRs (HCDR123) using native backbone structures of antibody-antigen complexes, along with the antigen and antibody framework (FWR) sequences as context. For each of the 8 antigens, we design 100 HCDR3s and 100 HCDR123s, scaffold them into the native antibody's variable region, and screen them for binding against the antigen using surface plasmon resonance (SPR). As a baseline, we screen 100 HCDR3s taken from the model's training set and paired with the native HCDR1 and HCDR2. We observe that both HCDR3 design and HCDR123 design outperform this HCDR3-only baseline. IgDesign is the first experimentally validated antibody inverse folding model. It can design antibody binders to multiple therapeutic antigens with high success rates and, in some cases, improved affinities over clinically validated reference antibodies. Antibody inverse folding has applications to both $\textit{de novo}$ antibody design and lead optimization, making IgDesign a valuable tool for accelerating drug development and enabling therapeutic design.
In vitro validated antibody design against multiple therapeutic antigens using generative inverse folding
[ "Amir Shanehsazzadeh" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=m2BtAzGzyZ
@inproceedings{ subramanian2023unexplored, title={Unexplored regions of the protein sequence-structure map revealed at scale by a library of {\textquotedblleft}foldtuned{\textquotedblright} language models}, author={Arjuna Subramanian and Matt Thomson}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=m2BtAzGzyZ} }
Nature has likely sampled only a fraction of all protein sequences and structures allowed by the laws of biophysics. However, the combinatorial scale of amino-acid sequence-space has traditionally precluded substantive study of the full protein sequence-structure map. In particular, it remains unknown how much of the vast uncharted landscape of far-from-natural sequences consists of alternate ways to encode the familiar ensemble of natural folds; proteins in this category also represent an opportunity to diversify candidates for downstream applications. Here, we characterize sequence-structure mapping in far-from-natural regions of sequence-space guided by the capacity of protein language models (pLMs) to explore sequences outside their natural training data through generation. We demonstrate that pretrained generative pLMs sample a limited structural snapshot of the natural protein universe, including >300 common (sub)domain elements. Incorporating pLM, structure prediction, and structure-based search techniques, we surpass this limitation by developing a novel "foldtuning" strategy that pushes a pretrained pLM into a generative regime that maintains structural similarity to a target protein fold (e.g. TIM barrel, thioredoxin, etc) while maximizing dissimilarity to natural amino-acid sequences. We apply "foldtuning" to build a library of pLMs for >700 naturally-abundant folds in the SCOP database, accessing swaths of proteins that take familiar structures yet lie far from known sequences, spanning targets that include enzymes, immune ligands, and signaling proteins. By revealing protein sequence-structure information at scale outside of the context of evolution, we anticipate that this work will enable future systematic searches for wholly novel folds and facilitate more immediate protein design goals in catalysis and medicine.
Unexplored regions of the protein sequence-structure map revealed at scale by a library of “foldtuned” language models
[ "Arjuna Subramanian", "Matt Thomson" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=lOIKlYR3vX
@inproceedings{ fox2023targeting, title={Targeting tissues via dynamic human systems modeling in generative design}, author={Zachary Fox and Nolan English and Belinda Akpa}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=lOIKlYR3vX} }
Drug discovery is a complex, costly process with high failure rates. A successful drug should bind to a target, be deliverable to an intended site of activity, and promote a desired pharmacological effect without causing toxicity. Typically, these factors are evaluated in series over the course of a pipeline where the number of candidates is reduced from a large initial pool. One promise of AI-driven discovery is the opportunity to evaluate multiple facets of drug performance in parallel. However, despite ML-driven advancements, current models for pharmacological property prediction are exclusively trained to predict molecular properties, ignoring important, dynamic biodistribution and bioactivity effects.
Targeting tissues via dynamic human systems modeling in generative design
[ "Zachary Fox", "Nolan English", "Belinda Akpa" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=kzk5Huibm0
@inproceedings{ plainer2023transition, title={Transition Path Sampling with Boltzmann Generator-based {MCMC} Moves}, author={Michael Plainer and Hannes Stark and Charlotte Bunne and Stephan G{\"u}nnemann}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=kzk5Huibm0} }
Sampling all possible transition paths between two 3D states of a molecular system has various applications ranging from catalyst design to drug discovery. Current approaches to sample transition paths use Markov chain Monte Carlo and rely on time-intensive molecular dynamics simulations to find new paths. Our approach operates in the latent space of a normalizing flow that maps from the molecule's Boltzmann distribution to a Gaussian, where we propose new paths without requiring molecular simulations. Using alanine dipeptide, we explore Metropolis-Hastings acceptance criteria in the latent space for exact sampling and investigate different latent proposal mechanisms.
Transition Path Sampling with Boltzmann Generator-based MCMC Moves
[ "Michael Plainer", "Hannes Stark", "Charlotte Bunne", "Stephan Günnemann" ]
Workshop/GenBio
oral
2312.05340
[ "https://github.com/plainerman/latent-tps" ]
-1
-1
-1
-1
0
[]
[]
[]
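A minimal sketch of proposing moves in the latent space of a Boltzmann-generator flow and accepting them with a Metropolis-Hastings criterion, as described in the abstract above. The flow, energy function, and proposal scale are placeholders, and full transition paths are simplified here to single configurations.

```python
import torch

def latent_mh_step(z, flow, energy, kT=1.0, proposal_std=0.1):
    """One Metropolis-Hastings step in the latent space of a normalizing flow.

    flow(z) must return the decoded configuration x and log|det dx/dz|; the latent
    target density is then proportional to exp(-energy(x)/kT) * |det dx/dz|.
    A symmetric Gaussian proposal keeps the acceptance ratio simple.
    """
    def log_target(z_):
        x, log_det = flow(z_)
        return -energy(x) / kT + log_det

    z_prop = z + proposal_std * torch.randn_like(z)
    log_alpha = log_target(z_prop) - log_target(z)
    if torch.rand(()) < log_alpha.exp().clamp(max=1.0):
        return z_prop, True
    return z, False

# Hypothetical usage: identity "flow" and a double-well energy in 2D.
flow = lambda z: (z, torch.zeros(()))
energy = lambda x: ((x[..., 0] ** 2 - 1.0) ** 2 + x[..., 1] ** 2).sum()
z, accepted = latent_mh_step(torch.zeros(2), flow, energy)
print(accepted, z)
```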
null
https://openreview.net/forum?id=jbamDZ374t
@inproceedings{ nguyen2023causal, title={Causal Inference in Gene Regulatory Networks with {GF}lowNet: Towards Scalability in Large Systems}, author={Trang Nguyen and Alexander Tong and Kanika Madan and Yoshua Bengio and Dianbo Liu}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=jbamDZ374t} }
Understanding causal relationships within Gene Regulatory Networks (GRNs) is essential for unraveling the gene interactions in cellular processes. However, causal discovery in GRNs is a challenging problem for multiple reasons including the existence of cyclic feedback loops and uncertainty that yields diverse possible causal structures. Previous works in this area either ignore cyclic dynamics (assume acyclic structure) or struggle with scalability. We introduce Swift-DynGFN as a novel framework that enhances causal structure learning in GRNs while addressing scalability concerns. Specifically, Swift-DynGFN exploits gene-wise independence to boost parallelization and to lower computational cost. Experiments on real single-cell RNA velocity and synthetic GRN datasets showcase the advancement in learning causal structure in GRNs and scalability in larger systems.
Causal Discovery in Gene Regulatory Networks with GFlowNet: Towards Scalability in Large Systems
[ "Trang Nguyen", "Alexander Tong", "Kanika Madan", "Yoshua Bengio", "Dianbo Liu" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=jasx3fgu4G
@inproceedings{ kraus2023masked, title={Masked autoencoders are scalable learners of cellular morphology}, author={Oren Kraus and Kian Kenyon-Dean and Saber Saberian and Maryam Fallah and Peter McLean and Jess Leung and Vasudev Sharma and Ayla Khan and Jia Balakrishnan and Safiye Celik and Maciej Sypetkowski and Chi Cheng and Kristen Morse and Maureen Makes and Ben Mabey and Berton Earnshaw}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=jasx3fgu4G} }
Inferring biological relationships from cellular phenotypes in high-content microscopy screens provides significant opportunity and challenge in biological research. Prior results have shown that deep vision models can capture biological signal better than hand-crafted features. This work explores how self-supervised deep learning approaches scale when training larger models on larger microscopy datasets. Our results show that both CNN- and ViT-based masked autoencoders significantly outperform weakly supervised baselines. At the high-end of our scale, a ViT-L/8 trained on over 3.5-billion unique crops sampled from 93-million microscopy images achieves relative improvements as high as 28% over our best weakly supervised baseline at inferring known biological relationships curated from public databases. Relevant code and select models released with this work can be found at: https://github.com/recursionpharma/maes_microscopy.
Masked Autoencoders are Scalable Learners of Cellular Morphology
[ "Oren Kraus", "Kian Kenyon-Dean", "Saber Saberian", "Maryam Fallah", "Peter McLean", "Jess Leung", "Vasudev Sharma", "Ayla Khan", "Jia Balakrishnan", "Safiye Celik", "Maciej Sypetkowski", "Chi Cheng", "Kristen Morse", "Maureen Makes", "Ben Mabey", "Berton Earnshaw" ]
Workshop/GenBio
oral
2309.16064
[ "https://github.com/recursionpharma/maes_microscopy" ]
-1
-1
-1
-1
0
[]
[]
[]
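The masked-autoencoder pre-training discussed in the abstract above hinges on randomly masking most patch tokens before encoding. The sketch below shows only that masking step, following the standard MAE recipe; the crop tokenization, encoder/decoder, and the 75% mask ratio are assumptions rather than details taken from the paper.

```python
import torch

def random_mask_patches(patches, mask_ratio=0.75):
    """Randomly mask a fraction of patch tokens, as in masked autoencoder pre-training.

    patches: (batch, n_patches, dim) tokenized image crops.
    Returns the visible patches fed to the encoder, the binary mask (1 = masked),
    and the shuffle indices needed to restore the original order for the decoder.
    """
    b, n, d = patches.shape
    n_keep = int(n * (1.0 - mask_ratio))
    noise = torch.rand(b, n)
    ids_shuffle = noise.argsort(dim=1)                  # random permutation per image
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n)
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)           # back to original patch order
    return visible, mask, ids_restore

# Hypothetical usage: 4 crops, 196 patches of dimension 256.
visible, mask, _ = random_mask_patches(torch.randn(4, 196, 256))
print(visible.shape, mask.sum(dim=1))  # (4, 49, 256), 147 masked patches per crop
```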
null
https://openreview.net/forum?id=itp5RDMlNV
@inproceedings{ jin2023dsmbind, title={{DSMB}ind: an unsupervised generative modeling framework for binding energy prediction}, author={Wengong Jin and Caroline Uhler and Nir Hacohen}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=itp5RDMlNV} }
Modeling the binding between proteins and other molecules is pivotal to drug discovery. Geometric deep learning is a promising paradigm for protein-ligand/protein-protein binding energy prediction, but its accuracy is limited by the size of training data as high-throughput binding assays are expensive. Herein, we propose an unsupervised binding energy prediction framework, named DSMBind, which does not need experimental binding data for training. DSMBind is an energy-based model that estimates the likelihood of a protein complex via SE(3) denoising score matching (DSM). This objective, applied at both backbone and side-chain levels, builds on a novel equivariant rotation prediction network derived from Euler’s Rotation Equations. We find that the learned log-likelihood of protein complexes is highly correlated with experimental binding energy across multiple benchmarks, even matching the performance of supervised models trained on experimental data. We further demonstrate DSMBind’s zero-shot binder design capability through a PD-L1 nanobody design task, where we randomize all three complementarity-determining regions (CDRs) and select the best CDR sequences based on DSMBind score. We experimentally tested the designed nanobodies with ELISA binding assay and successfully discovered a novel PD-L1 binder. In summary, DSMBind offers a versatile framework for binding energy prediction and binder design.
SE(3) denoising score matching for unsupervised binding energy prediction and nanobody design
[ "Wengong Jin", "Caroline Uhler", "Nir Hacohen" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ineR4aPK9p
@inproceedings{ fonnegra2023analysis, title={Analysis of cellular phenotypes with image-based generative models}, author={Ruben Fonnegra and Mohammad Sanian and Zitong Chen and Lassi Paavolainen and Juan Caicedo}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=ineR4aPK9p} }
Observing changes in cellular phenotypes under experimental interventions is a powerful approach for studying biology and has many applications, including treatment design. Unfortunately, not all interventions can be tested experimentally, which limits our ability to study complex phenomena such as combinatorial treatments or continuous time or dose responses. In this work, we explore unbiased, image-based generative models to analyze phenotypic changes in cell morphology and tissue organization. The proposed approach is based on generative adversarial networks (GAN) conditioned on feature representations obtained with self-supervised learning. Our goal is to ensure that image-based phenotypes are accurately encoded in a latent space that can be later manipulated and used for generating images of novel phenotypic variations. We present an evaluation of our approach for phenotype analysis in a drug screen and a cancer tissue dataset.
Analysis of cellular phenotypes with unbiased image-based generative models
[ "Ruben Fonnegra", "Mohammad Sanian", "Zitong Chen", "Lassi Paavolainen", "Juan Caicedo" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iO59l1LFvJ
@inproceedings{ ren2023delta, title={Delta Score: Improving the Binding Assessment of Structure-Based Drug Design Methods}, author={Minsi Ren and Bowen Gao and Bo Qiang and Yanyan Lan}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=iO59l1LFvJ} }
Structure-based drug design (SBDD) stands at the forefront of drug discovery, emphasizing the creation of molecules that target specific binding pockets. Recent advances in this area have witnessed the adoption of deep generative models and geometric deep learning techniques, modeling SBDD as a conditional generation task where the target structure serves as context. Historically, evaluation of these models centered on docking scores, which quantitatively depict the predicted binding affinity between a molecule and its target pocket. Though state-of-the-art models purport that a majority of their generated ligands exceed the docking score of ground truth ligands in test sets, it begs the question: Do these scores align with real-world biological needs? In this paper, we introduce the delta score, a novel evaluation metric grounded in tangible pharmaceutical requisites. Our experiments reveal that molecules produced by current deep generative models significantly lag behind ground truth reference ligands when assessed with the delta score. This novel metric not only complements existing benchmarks but also provides a pivotal direction for subsequent research in the domain.
Delta Score: Improving the Binding Assessment of Structure-Based Drug Design Methods
[ "Minsi Ren", "Bowen Gao", "Bo Qiang", "Yanyan Lan" ]
Workshop/GenBio
poster
2311.12035
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=iEGdcrWDV1
@inproceedings{ khajehnejad2023on, title={On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay}, author={Moein Khajehnejad and Forough Habibollahi and Alon Loeffler and Brett Kagan and Adeel Razi}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=iEGdcrWDV1} }
In this study, we characterize complex network dynamics in live in vitro neuronal systems during two distinct activity states: spontaneous rest state and engagement in a real-time (closed-loop) game environment using the DishBrain system. First, we embed the spiking activity of these channels in a lower-dimensional space using various representation learning methods and then extract a subset of representative channels. Next, by analyzing these low-dimensional representations, we explore the patterns of macroscopic neuronal network dynamics during learning. Remarkably, our findings indicate that just using the low-dimensional embedding of representative channels is sufficient to differentiate the neuronal culture during the Rest and Gameplay. Notably, our investigation shows dynamic changes in the connectivity patterns within the same region and across multiple regions on the multi-electrode array only during Gameplay. These findings underscore the plasticity of neuronal networks in response to external stimuli and highlight the potential for modulating connectivity in a controlled environment. The ability to distinguish between neuronal states using reduced-dimensional representations points to the presence of underlying patterns that could be pivotal for real-time monitoring and manipulation of neuronal cultures. Additionally, this provides insight into how biologically-based information processing systems rapidly adapt and learn, and may lead to new, improved algorithms.
On Complex Network Dynamics of an In-Vitro Neuronal System during Rest and Gameplay
[ "Moein Khajehnejad", "Forough Habibollahi", "Alon Loeffler", "Brett Kagan", "Adeel Razi" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=htwkLbaFhk
@inproceedings{ ellington2023contextualized, title={Contextualized Networks Reveal Heterogeneous Transcriptomic Regulation in Tumors at Sample-Specific Resolution}, author={Caleb Ellington and Ben Lengerich and Thomas Watkins and Jiekun Yang and Hanxi Xiao and Manolis Kellis and Eric Xing}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=htwkLbaFhk} }
Cancers are shaped by somatic mutations, microenvironment, and patient background, each altering both gene expression and regulation in complex ways, resulting in highly-variable cellular states and dynamics. Inferring gene regulatory networks (GRNs) from expression data can help characterize this regulation-driven heterogeneity, but network inference requires many statistical samples, traditionally limiting GRNs to cluster-level analyses that ignore intra-cluster heterogeneity. We propose to move beyond cluster-based analyses by using _contextualized_ learning, a multi-task learning paradigm, to generate sample-specific GRNs from sample contexts. We unify three network classes (Correlation, Markov, Neighborhood) and estimate sample-specific GRNs for 7997 tumors across 25 tumor types, with each network contextualized by copy number and driver mutation profiles, tumor microenvironment, and patient demographics. Sample-specific GRNs provide a structured view of expression dynamics at sample-specific resolution, revealing co-expression modules in correlation networks (CNs), as well as cliques and independent regulatory elements in Markov Networks (MNs) and Neighborhood Regression Networks (NNs). Our generative modeling approach predicts GRNs for unseen tumor types based on a pan-cancer model of how somatic mutations affect transcriptomic regulation. Finally, sample-specific networks enable GRN-based precision oncology, explaining known biomarkers via network-mediated effects, leading to novel prognostic intra-disease and inter-disease subtypes.
Contextualized Networks Reveal Heterogeneous Transcriptomic Regulation in Tumors at Sample-Specific Resolution
[ "Caleb Ellington", "Ben Lengerich", "Thomas Watkins", "Jiekun Yang", "Hanxi Xiao", "Manolis Kellis", "Eric Xing" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=hYlfUTyp6p
@inproceedings{ shen2023target, title={Target Conditioned {GF}lowNet for Drug Design}, author={Tony Shen and Mohit Pandey and Martin Ester}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=hYlfUTyp6p} }
We seek to automate the generation of drug-like compounds conditioned to specific protein pocket targets. Most current methods approximate the protein-molecule distribution of a finite dataset and therefore struggle to generate molecules with significant binding improvement over the training dataset. We instead frame the pocket-conditioned molecular generation task as an RL problem and develop TacoGFN, a target-conditioned Generative Flow Network model. Our method is explicitly encouraged to generate molecules with desired properties as opposed to fitting on a pre-existing data distribution. To this end, we develop transformer-based docking score prediction to speed up docking score computation and propose TacoGFN to explore molecule space efficiently. Furthermore, we incorporate several rounds of active learning where generated samples are queried using a docking oracle to improve the docking score prediction. This approach allows us to accurately explore as much of the molecule landscape as we can afford computationally. Empirically, molecules generated using TacoGFN and its variants significantly outperform all baseline methods across every property (Docking score, QED, SA, Lipinski), while being orders of magnitude faster.
TacoGFN: Target Conditioned GFlowNet for Drug Design
[ "Tony Shen", "Mohit Pandey", "Martin Ester" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=cplCMo4II4
@inproceedings{ gong2023microenvironment, title={Microenvironment Flows as Protein Engineers}, author={Chengyue Gong and Lemeng Wu and Daniel Diaz and Xingchao Liu and James Loy and Adam Klivans and qiang liu}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=cplCMo4II4} }
The inverse folding of proteins has tremendous applications in protein design and protein engineering. While machine learning approaches for inverse folding have made significant advancements in recent years, efficient generation of diverse and high-quality sequences remains a significant challenge, limiting their practical utility in protein design and engineering. We propose a probabilistic flow framework that introduces three key designs for designing an amino acid sequence with a target fold. At the input level, compared to existing inverse folding methods that sample sequences from the backbone scaffold, we demonstrate that analyzing a protein structure via the local chemical environment (micro-environment) at each residue achieves comparable performance. At the method level, rather than optimizing the recovery ratio, we generate diverse suggestions. At the data level, during training, we propose data augmentation with sequences of high sequence similarity, and train a probability flow model to capture the diverse sequence information. We demonstrate that we achieve a recovery ratio comparable to SOTA inverse folding models while using only the micro-environment as input, and further show that we outperform existing inverse folding methods on several zero-shot thermal stability change prediction tasks.
Microenvironment Flows as Protein Engineers
[ "Chengyue Gong", "Lemeng Wu", "Daniel Diaz", "Xingchao Liu", "James Loy", "Adam Klivans", "qiang liu" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=bxZMKHtlL6
@inproceedings{ h{\o}ie2023antifold, title={AntiFold: Improved antibody structure design using inverse folding}, author={Magnus H{\o}ie and Alissa Hummer and Tobias Olsen and Morten Nielsen and Charlotte Deane}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=bxZMKHtlL6} }
The design and optimization of antibodies, important therapeutic agents, requires an intricate balance across multiple properties. A primary challenge in optimization is ensuring that introduced sequence mutations do not disrupt the antibody structure or its target binding mode. Protein inverse folding models, which predict diverse sequences that fold into the same structure, are promising for maintaining structural integrity during optimization. Here we present AntiFold, an inverse folding model developed for solved and predicted antibody structures, based on the ESM-IF1 model. AntiFold achieves large gains in performance versus existing inverse folding models on sequence recovery across all antibody complementarity determining regions (CDRs) and framework regions. AntiFold-generated sequences show high structural agreement between predicted and experimental structures. The tool efficiently samples hundreds of antibody structures per minute, providing a scalable solution for antibody design. AntiFold is freely available as a downloadable package at: https://opig.stats.ox.ac.uk/data/downloads/AntiFold
AntiFold: Improved antibody structure design using inverse folding
[ "Magnus Høie", "Alissa Hummer", "Tobias Olsen", "Morten Nielsen", "Charlotte Deane" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=bh8dw3guwY
@inproceedings{ lu2023dynamicbind, title={DynamicBind: Predicting ligand-specific protein-ligand complex structure with a deep equivariant generative model}, author={Wei Lu and Jixian Zhang and Huang Weifeng and Ziqiao Zhang and Chengtao Li and Shuangjia Zheng}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=bh8dw3guwY} }
While significant advances have been made in predicting static protein structures, the inherent dynamics of proteins, modulated by ligands, are crucial for understanding protein function and facilitating drug discovery. Traditional docking methods, frequently used in studying protein-ligand interactions, typically treat proteins as rigid. While molecular dynamics simulations can propose appropriate protein conformations, they're computationally demanding due to rare transitions between biologically relevant equilibrium states. In this study, we present DynamicBind, a novel method that employs equivariant geometric diffusion networks to construct a smooth energy landscape, promoting efficient transitions between different equilibrium states. DynamicBind accurately recovers ligand-specific conformations from unbound protein structures without the need for holo-structures or extensive sampling. Our experiments reveal that DynamicBind can accommodate a wide range of large protein conformational changes and identify novel cryptic pockets in unseen protein targets. As a result, DynamicBind shows potential in accelerating the development of small molecules for previously undruggable targets and expanding the horizons of computational drug discovery.
DynamicBind: Predicting ligand-specific protein-ligand complex structure with a deep equivariant generative model
[ "Wei Lu", "Jixian Zhang", "Huang Weifeng", "Ziqiao Zhang", "Chengtao Li", "Shuangjia Zheng" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=bPcgbKDCUQ
@inproceedings{ villegas-morcillo2023guiding, title={Guiding diffusion models for antibody sequence and structure co-design with developability properties}, author={Amelia Villegas-Morcillo and Jana Weber and Marcel Reinders}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=bPcgbKDCUQ} }
Recent advances in deep generative methods have allowed antibody sequence and structure co-design. This study addresses the challenge of tailoring the highly variable complementarity-determining regions (CDRs) in antibodies to fulfill developability requirements. We introduce a novel approach that integrates property guidance into the antibody design process using diffusion probabilistic models. This approach allows us to simultaneously design CDRs conditioned on antigen structures while considering critical properties like solubility and folding stability. Our property-conditioned diffusion model offers versatility by accommodating diverse property constraints, presenting a promising avenue for computational antibody design in therapeutic applications. Code is available at https://github.com/amelvim/antibody-diffusion-properties.
Guiding diffusion models for antibody sequence and structure co-design with developability properties
[ "Amelia Villegas-Morcillo", "Jana Weber", "Marcel Reinders" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=b54p3jCgBw
@inproceedings{ jawaid2023improving, title={Improving few-shot learning-based protein engineering with evolutionary sampling}, author={Muhammad Zaki Jawaid and Aayushma Gautam and T. Gainous and Dan Hart and Robin Yeo and Timothy Daley}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=b54p3jCgBw} }
Designing novel functional proteins remains a slow and expensive process due to a variety of protein engineering challenges; in particular, the number of protein variants that can be experimentally tested in a given assay pales in comparison to the vastness of the overall sequence space, resulting in low hit rates and expensive wet lab testing cycles. ML-guided protein engineering promises to accelerate this process through computational screening of proposed variants in silico. However, exploring the prohibitively large protein sequence space presents a significant challenge for the design of novel functional proteins using ML-guided protein engineering. Here, we propose using evolutionary Monte Carlo search (EMCS) to efficiently explore the fitness landscape and accelerate novel protein design. As a proof-of-concept, we use our approach to design a library of peptides predicted to be functionally capable of transcriptional activation and then experimentally screen them, resulting in a dramatically improved hit rate compared to existing methods. Our method can be easily adapted to other protein engineering and design problems, particularly where the cost associated with obtaining labeled data is significantly high. We have provided open source code for our method at https://github.com/SuperSecretBioTech/evolutionary_monte_carlo_search.
Improving few-shot learning-based protein engineering with evolutionary sampling
[ "Muhammad Zaki Jawaid", "Aayushma Gautam", "T. Gainous", "Dan Hart", "Robin Yeo", "Timothy Daley" ]
Workshop/GenBio
poster
2305.15441
[ "https://github.com/SuperSecretBioTech/evolutionary_monte_carlo_search" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=awRjbuWxe4
@inproceedings{ kozlova2023protein, title={Protein Inpainting Co-Design with ProtFill}, author={Elizaveta Kozlova and Arthur Valentin and Daniel Nakhaee-Zadeh Gutierrez}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=awRjbuWxe4} }
Designing new proteins with specific binding capabilities is a challenging task that has the potential to revolutionize many fields, including medicine and material science. Here we introduce ProtFill, a novel method for the simultaneous design of protein structures and sequences. Employing an $SE(3)$ equivariant diffusion graph neural network, our method excels in both sequence prediction and structure recovery compared to SOTA models. We incorporate edge feature updates in GVP-GNN message passing layers to refine our design process. The model's applicability for the interface redesign task is showcased for antibodies as well as other proteins. The code is available at https://github.com/adaptyvbio/ProtFill.
Inpainting Protein Sequence and Structure with ProtFill
[ "Elizaveta Kozlova", "Arthur Valentin", "Daniel Nakhaee-Zadeh Gutierrez" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Z4ia7s2tpV
@inproceedings{ dunn2023accelerating, title={Accelerating Inference in Molecular Diffusion Models with Latent Representations of Protein Structure}, author={Ian Dunn and David Koes}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=Z4ia7s2tpV} }
Diffusion generative models have emerged as a powerful framework for addressing problems in structural biology and structure-based drug design. These models operate directly on 3D molecular structures. Due to the unfavorable scaling of graph neural networks (GNNs) with graph size as well as the relatively slow inference speeds inherent to diffusion models, many existing molecular diffusion models rely on coarse-grained representations of protein structure to make training and inference feasible. However, such coarse-grained representations discard essential information for modeling molecular interactions and impair the quality of generated structures. In this work, we present a novel GNN-based architecture for learning latent representations of molecular structure. When trained end-to-end with a diffusion model for de novo ligand design, our model achieves comparable performance to one with an all-atom protein representation while exhibiting a 3-fold reduction in inference time.
Accelerating Inference in Molecular Diffusion Models with Latent Representations of Protein Structure
[ "Ian Dunn", "David Koes" ]
Workshop/GenBio
oral
2311.13466
[ "https://github.com/dunni3/keypoint-diffusion" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YTrlu38mM4
@inproceedings{ nori2023evaluating, title={Evaluating Zero-Shot Scoring for In Vitro Antibody Binding Prediction with Experimental Validation}, author={Divya Nori and Simon Mathis and Amir Shanehsazzadeh}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=YTrlu38mM4} }
The success of therapeutic antibodies relies on their ability to selectively bind antigens. AI-based antibody design protocols have shown promise in generating epitope-specific designs. Many of these protocols use an inverse folding step to generate diverse sequences given a backbone structure. Due to prohibitive screening costs, it is key to identify candidate sequences likely to bind in vitro. Here, we compare the efficacy of 8 common scoring paradigms based on open-source models to classify antibody designs as binders or non-binders. We evaluate these approaches on a novel surface plasmon resonance (SPR) dataset, spanning 5 antigens. Our results show that existing methods struggle to detect binders, and performance is highly variable across antigens. We find that metrics computed on flexibly docked antibody-antigen complexes are more robust, and ensemble scores are more consistent than individual metrics. We provide experimental insight to analyze current scoring techniques, highlighting that the development of robust, zero-shot filters is an important research gap.
Evaluating Zero-Shot Scoring for In Vitro Antibody Binding Prediction with Experimental Validation
[ "Divya Nori", "Simon Mathis", "Amir Shanehsazzadeh" ]
Workshop/GenBio
oral
2312.05273
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=YOGss3XXp0
@inproceedings{ cornwall2023finetuned, title={Fine-tuned protein language models capture T cell receptor stochasticity}, author={Lewis Cornwall and Grisha Szep and James Day and S R Gokul Krishnan and David Carter and Jamie Blundell and Lilly Wollman and Neil Dalchau and Aaron Sim}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=YOGss3XXp0} }
The combinatorial explosion of T cell receptor (TCR) sequences enables our immune systems to recognise and respond to an enormous diversity of pathogens. Modelling the highly stochastic TCR generation and selection processes at both sequence and repertoire levels is important for disease detection and advancing therapeutic research. Here we demonstrate that protein language models fine-tuned on TCR sequences are able to capture TCR statistics in hypervariable regions to which mechanistic models are blind, and show that amino acids exhibit strong dependencies on each other within chains but not across chains. Our approach generates representations that improve the prediction of TCR binding specificities.
Fine-tuned protein language models capture T cell receptor stochasticity
[ "Lewis Cornwall", "Grisha Szep", "James Day", "S R Gokul Krishnan", "David Carter", "Jamie Blundell", "Lilly Wollman", "Neil Dalchau", "Aaron Sim" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=XgkYO1S2vM
@inproceedings{ stark2023harmonic, title={Harmonic Prior Self-conditioned Flow Matching for Multi-Ligand Docking and Binding Site Design}, author={Hannes Stark and Bowen Jing and Regina Barzilay and Tommi Jaakkola}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=XgkYO1S2vM} }
A significant amount of protein function, including enzymatic catalysis, requires binding small molecules. As such, designing binding pockets for small molecules has several impactful applications ranging from drug synthesis to energy storage. Towards this goal, we first develop HarmonicFlow, an improved generative process over 3D protein-ligand binding structures based on our self-conditioned flow matching objective. FlowSite extends this flow model to jointly generate a protein pocket's discrete residue types and the molecule's binding 3D structure. We show that HarmonicFlow improves upon the state-of-the-art generative processes for docking in simplicity, generality, and performance. Enabled by this structure model, FlowSite designs binding sites substantially better than baseline approaches and provides the first general solution for binding site design.
Harmonic Prior Self-conditioned Flow Matching for Multi-Ligand Docking and Binding Site Design
[ "Hannes Stark", "Bowen Jing", "Regina Barzilay", "Tommi Jaakkola" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=T3ROexdopD
@inproceedings{ yu2023exploring, title={Exploring the building blocks of cell organization as high-order network motifs with graph isomorphism network}, author={Yang Yu and Shuang Wang and Dong Xu and Juexin Wang}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=T3ROexdopD} }
The spatial arrangement of cells within tissues plays a pivotal role in shaping tissue function. A critical spatial pattern is the network motif, a recurring building block of cell organization. Network motifs can be represented as recurring significant interconnections in a spatial cell-relation graph, i.e., the occurrences of isomorphic subgraphs in the graph, for which computing an optimal solution is infeasible for high-order (>3 nodes) subgraphs. We introduce Triangulation Network Motif Neural Network (TrimNN), a neural network-based approach designed to estimate the prevalence of network motifs of any order in a triangulated cell graph. TrimNN simplifies the intricate task of occurrence regression by decomposing it into several binary present/absent predictions on small graphs. TrimNN is trained using representative pairs of predefined subgraphs and triangulated cell graphs to estimate overrepresented network motifs. On typical spatial omics samples with thousands of cells in dozens of cell types, TrimNN robustly infers high-order network motifs in seconds. TrimNN provides an accurate, efficient, and robust approach for quantifying network motifs, which helps pave the way toward disclosing the biological mechanisms underlying cell organization in multicellular differentiation, development, and disease progression.
Exploring the building blocks of cell organization as high-order network motifs with graph isomorphism network
[ "Yang Yu", "Shuang Wang", "Dong Xu", "Juexin Wang" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=SckdgVW3Kq
@inproceedings{ karthikeyan2023conditional, title={Conditional Generation of Antigen Specific T-cell Receptor Sequences}, author={Dhuvarakesh Karthikeyan and Colin Raffel and Benjamin Vincent and Alex Rubinsteyn}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=SckdgVW3Kq} }
Training and evaluating large language models (LLMs) for use in the design of antigen specific T-cell receptor (TCR) sequences is challenging due to the complex many-to-many mapping between TCRs and their targets, a struggle exacerbated by a severe lack of ground truth data. Traditional NLP metrics can be artificially poor indicators of model performance since labels are concentrated on a few examples, and functional in-vitro assessment of generated TCRs is time-consuming and costly. Here, we introduce TCR-BART and TCR-T5, adapted from the prominent BART and T5 models, to explore the use of these LLMs for conditional TCR sequence generation given a specific target epitope. To fairly evaluate such models with limited labeled examples, we propose novel evaluation metrics tailored to the sparsely sampled many-to-many nature of TCR-epitope data and investigate the interplay between accuracy and diversity of generated TCR sequences.
Conditional Generation of Antigen Specific T-cell Receptor Sequences
[ "Dhuvarakesh Karthikeyan", "Colin Raffel", "Benjamin Vincent", "Alex Rubinsteyn" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=SJYY3nQ0cX
@inproceedings{ giannone2023enhancing, title={Enhancing Language Models for Technical Domains with Dynamic Token Injection}, author={Giorgio Giannone and Neil Tenenholtz and James Hall and Nicolo Fusi and David Alvarez-Melis}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=SJYY3nQ0cX} }
Large language models (LLMs) are rapidly advancing the frontier of natural language understanding and generation. Their generalist nature, while adept at handling a wide range of tasks, often lacks the depth and precision required by highly specialized and rapidly evolving technical domains, such as genomics and engineering design. Fine-tuning these models for specific domains can be effective but requires large amounts of data and compromises their general reasoning capabilities. In this work, we introduce a scalable method to infuse specialized knowledge into generalist language models by dynamically extending their vocabulary with specialist tokens. By using a lightweight functional mapping on an extended vocabulary and adjusting the logit distribution, we enable the model to grasp domain-specific nuances. We demonstrate this in an application in genomics, where we extend a standard LLM by introducing knowledge about a large set of genes, allowing it to proficiently tackle tasks involving both textual and genetic data. Functional alignment enables the model to handle novel gene tokens that were never encountered during training, enabling domain-aware out-of-distribution capabilities in generalist language models.
Enhancing Language Models for Technical Domains with Dynamic Token Injection
[ "Giorgio Giannone", "Neil Tenenholtz", "James Hall", "Nicolo Fusi", "David Alvarez-Melis" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Qrx0Dt3RBg
@inproceedings{ pao-huang2023scalable, title={Scalable Multimer Structure Prediction using Diffusion Models}, author={Peter Pao-Huang and Bowen Jing and Bonnie Berger}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=Qrx0Dt3RBg} }
Accurate protein complex structure modeling is a necessary step in understanding the behavior of biological pathways and cellular systems. While some works have attempted to address this challenge, there is still a need for scaling existing methods to larger protein complexes. To address this need, we propose a novel diffusion generative model (DGM) that predicts large multimeric protein structures by learning to rigidly dock its chains together. Additionally, we construct a new dataset specifically for large protein complexes used to train and evaluate our DGM. We substantially improve prediction runtime and completion rates while maintaining competitive accuracy with current methods.
Scalable Multimer Structure Prediction using Diffusion Models
[ "Peter Pao-Huang", "Bowen Jing", "Bonnie Berger" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=QrH4bhWhwY
@inproceedings{ chu2023generative, title={Generative Antibody Design for Complementary Chain Pairing Sequences through Encoder-Decoder Language Model}, author={Simon Chu and Kathy Wei}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=QrH4bhWhwY} }
Current protein language models (pLMs) predominantly focus on single-chain protein sequences and often have not accounted for constraints on generative design imposed by protein-protein interactions. To address this gap, we present paired Antibody T5 (pAbT5), an encoder-decoder model to generate complementary heavy or light chain from its pairing partner. We show that our model respects conservation in framework regions and variability in hypervariable domains, demonstrated by agreement with sequence alignment and variable-length CDR loops. We also show that our model captures chain pairing preferences through the recovery of ground-truth chain type and gene families. Our results showcase the potential of pAbT5 in generative antibody design, incorporating biological constraints from chain pairing preferences.
Generative Antibody Design for Complementary Chain Pairing Sequences through Encoder-Decoder Language Model
[ "Simon Chu", "Kathy Wei" ]
Workshop/GenBio
poster
2301.02748
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Pa6mFTZY4R
@inproceedings{ damani2023generative, title={Generative design for gene therapy: An \textit{in vivo} validated method}, author={Farhan Damani and David Brookes and Jeffrey Chan and Rishi Jajoo and Alexander Mijalis and Joyce Samson and Flaviu Vadan and Cameron Webster and Stephen Malina and Sam Sinai}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=Pa6mFTZY4R} }
Machine learning-assisted biological sequence design is a topic of intense interest due to its potential impact on healthcare and biotechnology. In recent years, many new approaches have been proposed for sequence design through learning from data alone (rather than mechanistic or structural approaches). These black-box approaches roughly fall into two camps: (i) optimization against a learned oracle and (ii) sampling designs from a generative model. While both approaches have demonstrated promise, real-world evidence of their effectiveness is limited, whether used alone or in combination. Here we develop a robust generative model named $\texttt{VAEProp}$ and use it to optimize Adeno-associated virus (AAV) capsids, a fundamental gene therapy vector. We show that our method outperforms algorithmic baselines on this design task in the real world. Critically, we demonstrate that our approach is capable of generating vector designs with field-leading therapeutic potential through in vitro and non-human primate validation experiments.
Generative design for gene therapy: An in vivo validated method
[ "Farhan Damani", "David Brookes", "Jeffrey Chan", "Rishi Jajoo", "Alexander Mijalis", "Joyce Samson", "Flaviu Vadan", "Cameron Webster", "Stephen Malina", "Sam Sinai" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=PQa3giMLZp
@inproceedings{ zhang2023bending, title={Bending and Binding: Predicting Protein Flexibility upon Ligand Interaction using Diffusion Models}, author={Xuejin Zhang and Tomas Geffner and Matt McPartlon and Mehmet Akdel and Dylan Abramson and Graham Holt and Alexander Goncearenco and Luca Naef and Michael Bronstein}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=PQa3giMLZp} }
Predicting protein conformational changes driven by binding of small molecular ligands is imperative to accelerate drug discovery for protein targets with no established binders. This work presents a novel method to capture such conformational changes: given a protein apo conformation (unbound state), we propose an equivariant conditional diffusion model to predict its holo conformations (bound state with external small molecular ligands). We design a novel variant of the EGNN architecture for the score network (score-informed EGNN), which is able to exploit conditioning information in the form of the reference (apo) structure to guide the diffusion's sampling process. Learning from experimentally determined apo/holo conformations, we observe that our model can generate conformations close to the holo state when conditioned only on the apo state.
Bending and Binding: Predicting Protein Flexibility upon Ligand Interaction using Diffusion Models
[ "Xuejin Zhang", "Tomas Geffner", "Matt McPartlon", "Mehmet Akdel", "Dylan Abramson", "Graham Holt", "Alexander Goncearenco", "Luca Naef", "Michael Bronstein" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Od1KtMeAYo
@inproceedings{ wang2023generating, title={Generating Molecular Conformer Fields}, author={Yuyang Wang and Ahmed Elhag and Navdeep Jaitly and Joshua Susskind and Miguel Bautista}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=Od1KtMeAYo} }
In this paper we tackle the problem of generating conformers of a molecule in 3D space given its molecular graph. We parameterize these conformers as continuous functions that map elements from the molecular graph to points in 3D space. We then formulate the problem of learning to generate conformers as learning a distribution over these functions using a diffusion generative model, called Molecular Conformer Fields (MCF). Our approach is simple and scalable, and obtains results that are comparable to or better than the previous state-of-the-art while making no assumptions about the explicit structure of molecules (e.g., modeling torsional angles). MCF represents an advance in extending diffusion models to handle complex scientific problems in a conceptually simple, scalable and effective manner.
Generating Molecular Conformer Fields
[ "Yuyang Wang", "Ahmed Elhag", "Navdeep Jaitly", "Joshua Susskind", "Miguel Bautista" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Nf5BxVllgq
@inproceedings{ harris2023posecheck, title={PoseCheck: Generative Models for 3D Structure-based Drug Design Produce Unrealistic Poses}, author={Charles Harris and Kieran Didi and Arian Jamasb and Chaitanya Joshi and Simon Mathis and Pietro Lio and Tom Blundell}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=Nf5BxVllgq} }
Deep generative models for structure-based drug design (SBDD), where molecule generation is conditioned on a 3D protein pocket, have received considerable interest in recent years. These methods offer the promise of higher-quality molecule generation by explicitly modelling the 3D interaction between a potential drug and a protein receptor. However, previous work has primarily focused on the quality of the generated molecules themselves, with limited evaluation of the 3D \emph{poses} that these methods produce; most work simply discards the generated pose and only reports a ``corrected'' pose after redocking with traditional methods. Little is known about whether generated molecules satisfy known physical constraints for binding and the extent to which redocking alters the generated interactions. We introduce PoseCheck, an extensive analysis of multiple state-of-the-art methods, and find that generated molecules have significantly more physical violations and fewer key interactions compared to baselines, calling into question the implicit assumption that providing rich 3D structure information improves molecule complementarity. We make recommendations for future research tackling identified failure modes and hope our benchmark will serve as a springboard for future SBDD generative modelling work to have a real-world impact. Our evaluation suite is easy to use in future 3D SBDD work and is available at \href{https://github.com/cch1999/posecheck}{\texttt{www.github.com/cch1999/posecheck}}.
PoseCheck: Generative Models for 3D Structure-based Drug Design Produce Unrealistic Poses
[ "Charles Harris", "Kieran Didi", "Arian Jamasb", "Chaitanya Joshi", "Simon Mathis", "Pietro Lio", "Tom Blundell" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Mg2DM0F3AY
@inproceedings{ weinberger2023a, title={A deep generative model of single-cell methylomic data}, author={Ethan Weinberger and Su-In Lee}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=Mg2DM0F3AY} }
Single-cell DNA methylome profiling platforms based on bisulfite sequencing techniques promise to enable the exploration of epigenomic heterogeneity at an unprecedented resolution. However, substantial noise resulting from technical limitations of these platforms can impede downstream analyses of the data. Here we present methylVI, a deep generative model that learns probabilistic representations of single-cell methylation data which explicitly account for the unique characteristics of bisulfite-sequencing-derived methylomic data. After initially validating the quality of our model's fit, we proceed to demonstrate how methylVI can facilitate common downstream analysis tasks, including integrating data collected using different sequencing platforms and producing denoised methylome profiles. Our implementation of methylVI is publicly available at https://github.com/suinleelab/methylVI.
A deep generative model of single-cell methylomic data
[ "Ethan Weinberger", "Su-In Lee" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JWfvMT43pZ
@inproceedings{ chen2023shapeconditioned, title={Shape-conditioned 3D Molecule Generation via Equivariant Diffusion Models}, author={Ziqi Chen and Bo Peng and srinivasan parthasarathy and Xia Ning}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=JWfvMT43pZ} }
Ligand-based drug design aims to identify novel drug candidates of similar shapes with known active molecules. In this paper, we formulated an in silico shape-conditioned molecule generation problem to generate 3D molecule structures conditioned on the shape of a given molecule. To address this problem, we developed an equivariant shape-conditioned generative model $\mathsf{ShapeMol}$. $\mathsf{ShapeMol}$ consists of an equivariant shape encoder that maps molecular surface shapes into latent embeddings, and an equivariant diffusion model that generates 3D molecules based on these embeddings. Experimental results show that $\mathsf{ShapeMol}$ can generate novel, diverse, drug-like molecules that retain 3D molecular shapes similar to the given shape condition. These results demonstrate the potential of $\mathsf{ShapeMol}$ in designing drug candidates of desired 3D shapes binding to protein target pockets.
Shape-conditioned 3D Molecule Generation via Equivariant Diffusion Models
[ "Ziqi Chen", "Bo Peng", "srinivasan parthasarathy", "Xia Ning" ]
Workshop/GenBio
poster
2308.11890
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=JPOW9FToYX
@inproceedings{ darmawan2023sampling, title={Sampling Protein Language Models for Functional Protein Design}, author={Jeremie Theddy Darmawan and Yarin Gal and Pascal Notin}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=JPOW9FToYX} }
Protein language models have emerged as powerful ways to learn complex representations of proteins, thereby improving their performance on several downstream tasks, from structure prediction to fitness prediction, property prediction, homology detection, and more. By learning a distribution over protein sequences, they are also very promising tools for designing novel and functional proteins, with broad applications in healthcare, new materials, or sustainability. Given the vastness of the corresponding sample space, efficient exploration methods are critical to the success of protein engineering efforts. However, the methodologies for adequately sampling these models to achieve core protein design objectives remain underexplored and have predominantly leaned on techniques developed for Natural Language Processing. In this work, we first develop a holistic in silico protein design evaluation framework to comprehensively compare different sampling methods. After performing a thorough review of sampling methods for language models, we introduce several sampling strategies tailored to protein design. Lastly, we compare the various strategies on our in silico benchmark, investigating the effects of key hyperparameters and highlighting practical guidance on the relative strengths of different methods.
Sampling Protein Language Models for Functional Protein Design
[ "Jeremie Theddy Darmawan", "Yarin Gal", "Pascal Notin" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HVQoom7ip2
@inproceedings{ zhan2023parameterefficient, title={Parameter-Efficient Fine-Tune on Open Pre-trained Transformers for Genomic Sequence}, author={Huixin Zhan and Zijun Frank Zhang}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=HVQoom7ip2} }
Lately, pre-trained foundation models (PFMs) in DNA have achieved notable advancements in unraveling the linguistic nuances of the genome. As these foundational models expand in parameters and the number of downstream genomic tasks increases, Parameter-Efficient Fine-Tuning (PEFT) has become the de facto approach to fine-tune PFMs while decreasing the computational costs. Low-rank adapters and adaptive low-rank adaptation (AdaLoRA) are popular PEFT methods that introduce some learnable truncated singular value decomposition modules for efficient fine-tuning. However, both methods are deterministic, i.e., once a singular value is pruned, it stays pruned throughout the fine-tuning process. Consequently, deterministic PEFTs can underperform if the initial states, before pruning, are suboptimal, a challenge frequently encountered in genomics due to data heterogeneity. To address this issue, we propose AdaLoRA with random sampling (AdaLoRA+RS), which prunes and stochastically reintroduces pruned singular vectors, adhering to a cubic budget schedule. We evaluate AdaLoRA+RS on PFMs in the genome domain (DNABERT 1/2 and Nucleotide Transformer) and in the language domain (open pre-trained transformers, OPT). Our AdaLoRA+RS approach demonstrates performance ranging from slightly above to on par with Full-Model Fine-Tuning (FMFT) across $13$ genomic sequence datasets on two genome understanding tasks, while using less than $2\%$ of the trainable parameters. For instance, in human promoter detection, OPT-$350$M with AdaLoRA+RS achieves a $4.4\%$ AUC increase compared to its FMFT baseline, leveraging only $1.8\%$ of the trainable parameters. Our proposed AdaLoRA+RS provides a powerful PEFT strategy for modeling genomic sequences.
Parameter-Efficient Fine-Tune on Open Pre-trained Transformers for Genomic Sequence
[ "Huixin Zhan", "Zijun Frank Zhang" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FpUqHQSmqC
@inproceedings{ zaballa2023approximation, title={Approximation of Intractable Likelihood Functions in Systems Biology via Normalizing Flows}, author={Vincent Zaballa and Elliot Hui}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=FpUqHQSmqC} }
Systems biology relies on mathematical models that often involve complex and intractable likelihood functions, posing challenges for efficient inference and model selection. Generative models, such as normalizing flows, have shown remarkable ability in approximating complex distributions in various domains. However, their application in systems biology for approximating intractable likelihood functions remains unexplored. Here, we elucidate a framework for leveraging normalizing flows to approximate complex likelihood functions inherent to systems biology models. By using normalizing flows in the simulation-based inference setting, we demonstrate a method that not only approximates a likelihood function but also allows for model inference in the model selection setting. We showcase the effectiveness of this approach on real-world systems biology problems, providing practical guidance for implementation and highlighting its advantages over traditional computational methods.
Approximation of Intractable Likelihood Functions in Systems Biology via Normalizing Flows
[ "Vincent Zaballa", "Elliot Hui" ]
Workshop/GenBio
poster
2312.02391
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FhFglOZbtZ
@inproceedings{ corso2023the, title={The Discovery of Binding Modes Requires Rethinking Docking Generalization}, author={Gabriele Corso and Arthur Deng and Nicholas Polizzi and Regina Barzilay and Tommi Jaakkola}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=FhFglOZbtZ} }
Accurate blind docking has the potential to lead to new biological breakthroughs, but for this promise to be realized, it is critical that docking methods generalize well across the proteome. However, existing benchmarks fail to rigorously assess generalizability. Therefore, we develop DockGen, a new benchmark based on the ligand-binding domains of proteins, and we show that machine learning-based docking models have very weak generalization abilities even when combined with various data augmentation strategies. Instead, we propose Confidence Bootstrapping, a new training paradigm that solely relies on the interaction between a diffusion and a confidence model. Unlike previous self-training methods from other domains, we directly exploit the multi-resolution generation process of diffusion models using rollouts and confidence scores to reduce the generalization gap. We demonstrate that Confidence Bootstrapping significantly improves the ability of ML-based docking methods to dock to unseen protein classes, edging closer to accurate and generalizable blind docking methods.
The Discovery of Binding Modes Requires Rethinking Docking Generalization
[ "Gabriele Corso", "Arthur Deng", "Nicholas Polizzi", "Regina Barzilay", "Tommi Jaakkola" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=F6LNvZNBsp
@inproceedings{ lal2023reglm, title={reg{LM}: Designing realistic regulatory {DNA} with autoregressive language models}, author={Avantika Lal and Tommaso Biancalani and G{\"o}kcen Eraslan}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=F6LNvZNBsp} }
Designing cis-regulatory DNA elements (CREs) with desired properties is a challenging task with many therapeutic applications. Here, we used autoregressive language models trained on yeast and human putative CREs, in conjunction with supervised sequence-to-function models, to design regulatory DNA with desired patterns of activity. Our framework, regLM, compares favorably to existing CRE design approaches at generating realistic and diverse regulatory DNA, while also providing insights into the cis-regulatory code.
regLM: Designing realistic regulatory DNA with autoregressive language models
[ "Avantika Lal", "Tommaso Biancalani", "Gökcen Eraslan" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Enqxq6TWoZ
@inproceedings{ cohen2023epitopespecific, title={Epitope-specific antibody design using diffusion models on the latent space of {ESM} embeddings}, author={Tomer Cohen and Dina Schneidman-Duhovny}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=Enqxq6TWoZ} }
There has been significant progress in protein design using deep learning approaches. The majority of methods predict sequences for a given structure. Recently, diffusion approaches were developed for generating protein backbones. However, de novo design of epitope-specific antibody binders remains an unsolved problem due to the challenge of simultaneous optimization of the antibody sequence, variable loop structures, and antigen binding. Here we present EAGLE (Epitope-specific Antibody Generation using Language model Embeddings), a diffusion-based model that does not require input backbone structures. The full antibody sequence (constant and variable regions) is designed in the continuous space using protein language model embeddings. Similarly to denoising diffusion probabilistic models for image generation that condition the sampling on a text prompt, here we condition the sampling of antibody sequences on antigen structure and epitope amino acids. The model is trained on the available antibody and antibody-antigen structures, as well as antibody sequences. Our Top-100 designs include sequences with 55\% identity to known binders for the most variable heavy chain loop. EAGLE's high performance is achieved by tailoring the method specifically for antibody design through integration of continuous latent space diffusion and sampling conditioned on antigen structure and epitope amino acids. Our model enables generating a wide range of diverse, unique, variable loop length antibody binders using straightforward epitope specifications.
Epitope-specific antibody design using diffusion models on the latent space of ESM embeddings
[ "Tomer Cohen", "Dina Schneidman-Duhovny" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=Ejw5zOOgLp
@inproceedings{ paul2023an, title={An Energy Based Model for Incorporating Sequence Priors for Target-Specific Antibody Design}, author={Steffanie Paul and Yining Huang and Debora Marks}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=Ejw5zOOgLp} }
With the growing demand for antibody therapeutics, there is a great need for computational methods to accelerate antibody discovery and optimization. Advances in machine learning on graphs have been leveraged to develop generative models of antibody sequence and structure that condition on specific antigen epitopes. However, the data availability for training models on structure (∼5k antibody binding complexes, Schneider et al. [2022]) is dwarfed by the amount of antibody sequence data available (> 550M sequences, Olsen et al. [2022]), which have been used to train protein language models useful for antibody generation and optimization. Here we motivate the combination of well-trained antibody sequence models and graph generative models on target structures to enhance their performance for target-conditioned antibody design. First, we present the results of an investigation into the sitewise design performance of popular target-conditioned design models. We show that target-conditioned models may not be incorporating target information into the generation of middle loop residues of the complementarity-determining region of the antibody sequence. Next, we propose an energy-based model framework designed to encourage a model to learn target-specific information by supplementing it with pre-trained marginal-sequence information. We present preliminary results on the development of this model and outline future steps to improve the model framework.
An Energy Based Model for Incorporating Sequence Priors for Target-Specific Antibody Design
[ "Yining Huang", "Steffanie Paul", "Debora Marks" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=EKt4NQZ47U
@inproceedings{ park2023preference, title={Preference Optimization for Molecular Language Models}, author={Ryan Park and Ryan Theisen and Rayees Rahman and Anna Cicho{\'n}ska and Marcel Patek and Navriti Sahni}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=EKt4NQZ47U} }
Molecular language modeling is an effective approach to generating novel chemical structures. However, these models do not \emph{a priori} encode certain preferences a chemist may desire. We investigate the use of fine-tuning using Direct Preference Optimization to better align generated molecules with chemist preferences. Our findings suggest that this approach is simple, efficient, and highly effective.
Preference Optimization for Molecular Language Models
[ "Ryan Park", "Ryan Theisen", "Rayees Rahman", "Anna Cichońska", "Marcel Patek", "Navriti Sahni" ]
Workshop/GenBio
poster
2310.12304
[ "https://github.com/harmonic-discovery/pref-opt-for-mols" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=E3HN48zjam
@inproceedings{ ghorbani2023autoregressive, title={Autoregressive fragment-based diffusion for pocket-aware ligand design}, author={Mahdi Ghorbani and Leo Gendelev and Paul Beroza and Michael Keiser}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=E3HN48zjam} }
In this work, we introduce AutoFragDiff, a fragment-based autoregressive diffusion model for generating 3D molecular structures conditioned on target protein structures. We employ geometric vector perceptrons to predict atom types and spatial coordinates of new molecular fragments conditioned on molecular scaffolds and protein pockets. Our approach improves the local geometry of the resulting 3D molecules while maintaining high predicted binding affinity to protein targets. The model can also perform scaffold extension from a user-provided starting molecular scaffold.
Autoregressive fragment-based diffusion for pocket-aware ligand design
[ "Mahdi Ghorbani", "Leo Gendelev", "Paul Beroza", "Michael Keiser" ]
Workshop/GenBio
poster
2401.05370
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=DpbMk2KOOX
@inproceedings{ hwang2023genomic, title={Genomic language model predicts protein co-regulation and function}, author={Yunha Hwang and Andre Cornman and Sergey Ovchinnikov and Peter Girguis}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=DpbMk2KOOX} }
Deciphering the relationship between a gene and its genomic context is fundamental to understanding and engineering biological systems. Machine learning has shown promise in learning latent relationships underlying the sequence-structure-function paradigm from massive protein sequence datasets; however, to date, limited attempts have been made to extend this continuum to include higher-order genomic context information. Here, we trained a genomic language model (gLM) on millions of metagenomic scaffolds to learn the latent functional and regulatory relationships between genes. gLM learns contextualized protein embeddings that capture the genomic context as well as the protein sequence itself, and appears to encode biologically meaningful and functionally relevant information (e.g. enzymatic function). Our analysis of the attention patterns demonstrates that gLM is learning co-regulated functional modules (i.e. operons). Our findings illustrate that gLM's unsupervised deep learning of the metagenomic corpus is an effective and promising approach to encode functional semantics and regulatory syntax of genes in their genomic contexts and uncover complex relationships between genes in a genomic region.
Genomic language model predicts protein co-regulation and function
[ "Yunha Hwang", "Andre Cornman", "Sergey Ovchinnikov", "Peter Girguis" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=DUjUJCqqA7
@inproceedings{ lee2023finetuning, title={Fine-tuning protein Language Models by ranking protein fitness}, author={Minji Lee and Kyungmin Lee and Jinwoo Shin}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=DUjUJCqqA7} }
Self-supervised protein language models (pLMs) have demonstrated significant potential in predicting the impact of mutations on protein function and fitness, which is crucial for protein design. There are approaches that further condition pLMs on language or multiple sequence alignments (MSAs) to produce a protein of a specific family or function. However, such conditioning is often too coarse-grained to express the function, exhibits a weak correlation to fitness, and struggles to generate fit variants. To address this challenge, we propose a fine-tuning framework that aligns a pLM to a specific fitness by ranking the mutants. We show that constructing the ranked pairs is crucial in fine-tuning pLMs, and we provide a simple yet effective method to improve fitness prediction across various datasets. Through experiments on ProteinGym, our method shows substantial improvements in fitness prediction tasks even with fewer than 200 labeled data points. Furthermore, we demonstrate that our approach excels in fitness optimization tasks.
Fine-tuning protein Language Models by ranking protein fitness
[ "Minji Lee", "Kyungmin Lee", "Jinwoo Shin" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=D6PJjvaE3D
@inproceedings{ chen2023amalga, title={Amalga: Designable Protein Backbone Generation with Folding and Inverse Folding Guidance}, author={Shugao Chen and Ziyao Li and xiangxiang Zeng and Guolin Ke}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=D6PJjvaE3D} }
Recent advances in deep learning enable new approaches to protein design through inverse folding and backbone generation. However, backbone generators may produce structures that inverse folding struggles to identify sequences for, indicating designability issues. We propose Amalga, an inference-time technique that enhances designability of backbone generators. Amalga leverages folding and inverse folding models to guide backbone generation towards more designable conformations by incorporating ``folded-from-inverse-folded'' (FIF) structures. To generate FIF structures, possible sequences are predicted from step-wise predictions in the reverse diffusion and further folded into new backbones. Being intrinsically designable, the FIF structures guide the generated backbones to a more designable distribution. Experiments on both de novo design and motif-scaffolding demonstrate improved designability and diversity with Amalga on RFdiffusion.
Amalga: Designable Protein Backbone Generation with Folding and Inverse Folding Guidance
[ "Shugao Chen", "Ziyao Li", "xiangxiang Zeng", "Guolin Ke" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=CsjGuWD7hk
@inproceedings{ manshour2023integrating, title={Integrating Protein Structure Prediction and Bayesian Optimization for Peptide Design}, author={Negin Manshour and Fei He and Duolin Wang and Dong Xu}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=CsjGuWD7hk} }
Peptide design, with the goal of identifying peptides possessing unique biological properties, stands as a crucial challenge in peptide-based drug discovery. While traditional and computational methods have made significant strides, they often encounter hurdles due to the complexities and costs of laboratory experiments. Recent advancements in deep learning and Bayesian Optimization have paved the way for innovative research in this domain. In this context, our study presents a novel approach that effectively combines protein structure prediction with Bayesian Optimization for peptide design. By applying carefully designed objective functions, we guide and enhance the optimization trajectory for new peptide sequences. Benchmarked against multiple native structures, our methodology is tailored to generate new peptides that approach their optimal potential biological properties.
Integrating Protein Structure Prediction and Bayesian Optimization for Peptide Design
[ "Negin Manshour", "Fei He", "Duolin Wang", "Dong Xu" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ChU7MCLk1J
@inproceedings{ gong2023binding, title={Binding Oracle: Fine-Tuning From Stability to Binding Free Energy}, author={Chengyue Gong and Adam Klivans and Jordan Wells and James Loy and qiang liu and Alex Dimakis and Daniel Diaz}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=ChU7MCLk1J} }
The ability to predict changes in binding free energy ($\Delta\Delta G_{bind}$) for mutations at protein-protein interfaces (PPIs) is critical for understanding genetic diseases and engineering novel protein-based therapeutics. Here, we present Binding Oracle: a structure-based graph transformer for predicting $\Delta\Delta G_{bind}$ at PPIs. Binding Oracle fine-tunes Stability Oracle with Selective LoRA: a technique that synergizes layer selection via gradient norms with LoRA. Selective LoRA enables the identification and fine-tuning of the layers most critical for the downstream task, thus regularizing against overfitting. Additionally, we present new training-test splits of mutational data from the SKEMPI2.0, Ab-Bind, and NABE databases that use a strict 30\% sequence similarity threshold to avoid data leakage during model evaluation. Binding Oracle, when trained with the Thermodynamic Permutations data augmentation technique, achieves SOTA on S487 without using any evolutionary auxiliary features. Our results empirically demonstrate how sparse fine-tuning techniques, such as Selective LoRA, can enable rapid domain adaptation in protein machine learning frameworks.
Binding Oracle: Fine-Tuning From Stability to Binding Free Energy
[ "Chengyue Gong", "Adam Klivans", "Jordan Wells", "James Loy", "qiang liu", "Alex Dimakis", "Daniel Diaz" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=CKCNCW9wxB
@inproceedings{ swanson2023generative, title={Generative {AI} for designing and validating easily synthesizable and structurally novel antibiotics}, author={Kyle Swanson and Gary Liu and Denise Catacutan and James Zou and Jonathan Stokes}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=CKCNCW9wxB} }
The rise of pan-resistant bacteria is creating an urgent need for structurally novel antibiotics. AI methods can discover new antibiotics, but existing methods have significant limitations. Property prediction models, which evaluate molecules one-by-one for a given property, scale poorly to large chemical spaces. Generative models, which directly design molecules, rapidly explore vast chemical spaces but generate molecules that are challenging to synthesize. Here, we introduce SyntheMol, a generative model that designs easily synthesizable compounds from a chemical space of 30 billion molecules. We apply SyntheMol to design molecules that inhibit the growth of Acinetobacter baumannii, a burdensome bacterial pathogen. We synthesize 58 generated molecules and experimentally validate them, with six structurally novel molecules demonstrating potent activity against A. baumannii and several other phylogenetically diverse bacterial pathogens.
Generative AI for designing and validating easily synthesizable and structurally novel antibiotics
[ "Kyle Swanson", "Gary Liu", "Denise Catacutan", "James Zou", "Jonathan Stokes" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=BE2hok0lES
@inproceedings{ alcaide2023umdfit, title={{UMD}-fit: Generating Realistic Ligand Conformations for Distance-Based Deep Docking Models}, author={Eric Alcaide and Ziyao Li and Hang Zheng and Zhifeng Gao and Guolin Ke}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=BE2hok0lES} }
Recent advances in deep learning have enabled fast and accurate prediction of protein-ligand binding poses through methods such as Uni-Mol Docking. These techniques utilize deep neural networks to predict interatomic distances between proteins and ligands. Subsequently, ligand conformations are generated to satisfy the predicted distance constraints. However, directly optimizing atomic coordinates often results in distorted, and thus invalid, ligand geometries, which are disastrous in actual drug development. We introduce UMD-fit as a practical solution to this problem applicable to all distance-based methods. We demonstrate it as an improvement to Uni-Mol Docking, which retains the overall distance prediction pipeline while optimizing ligand positions, orientations, and torsion angles instead. Experimental evidence shows that UMD-fit resolves the vast majority of invalid conformation issues while maintaining accuracy.
UMD-fit: Generating Realistic Ligand Conformations for Distance-Based Deep Docking Models
[ "Eric Alcaide", "Ziyao Li", "Hang Zheng", "Zhifeng Gao", "Guolin Ke" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=AlPg6if5PU
@inproceedings{ guo2023diffdocksite, title={DiffDock-Site: A Novel Paradigm for Enhanced Protein-Ligand Predictions through Binding Site Identification}, author={Huanlei Guo and Song Liu and Mingdi HU and Yilun Lou and Bingyi Jing}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=AlPg6if5PU} }
In the realm of computational drug discovery, molecular docking and ligand-binding site (LBS) identification stand as pivotal contributors, often influencing the direction of innovative drug development. DiffDock, a state-of-the-art method, is renowned for its molecular docking capabilities harnessing diffusion mechanisms. However, its computational demands, arising from its extensive score model designed to cater to a broad dynamic range for denoising score matching, can be challenging. To address this problem, we present DiffDock-Site, a novel paradigm that integrates the precision of PointSite for identifying and initializing the docking pocket. This two-stage strategy then refines the ligand's position, orientation, and rotatable bonds using a more concise score model than traditional DiffDock. By emphasizing the dynamic range around the pinpointed pocket center, our approach dramatically elevates both efficiency and accuracy in molecular docking. We achieve a substantial reduction in mean RMSD and centroid distance, from 7.5 to 5.2 and 5.5 to 2.9, respectively. Remarkably, our approach delivers these precision gains using only 1/6 of the model parameters and expends just 1/13 of the training time, underscoring its unmatched combination of computational efficiency and predictive accuracy.
DiffDock-Site: A Novel Paradigm for Enhanced Protein-Ligand Predictions through Binding Site Identification
[ "Huanlei Guo", "Song Liu", "Mingdi HU", "Yilun Lou", "Bingyi Jing" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=ALsSka1db3
@inproceedings{ pedawi2023through, title={Through the looking glass: navigating in latent space to optimize over combinatorial synthesis libraries}, author={Aryan Pedawi and Saulo De Oliveira and Henry van den Bedem}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=ALsSka1db3} }
Commercially available, synthesis-on-demand virtual libraries contain trillions of readily synthesizable compounds and can serve as a bridge between _in silico_ property optimization and _in vitro_ validation. However, as these libraries continue to grow exponentially in size, traditional enumerative search strategies that scale linearly with the number of compounds encounter significant limitations. Hierarchical enumeration approaches scale more gracefully in library size, but are inherently greedy and implicitly rest on an additivity assumption of the molecular property with respect to its sub-components. In this work, we present a reinforcement learning approach to retrieving compounds from ultra-large libraries that satisfy a set of user-specified constraints. Along the way, we derive what we believe to be a new family of $\alpha$-divergences that may be of general interest in density estimation. Our method first trains a library-constrained generative model over a virtual library and subsequently trains a normalizing flow to learn a distribution over latent space that decodes constraint-satisfying compounds. The proposed approach naturally accommodates specification of multiple molecular property constraints and requires only black box access to the molecular property functions, thereby supporting a broad class of search problems over these libraries.
Through the looking glass: navigating in latent space to optimize over combinatorial synthesis libraries
[ "Aryan Pedawi", "Saulo De Oliveira", "Henry van den Bedem" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=9BQ3l8OVru
@inproceedings{ ghari2023generative, title={Generative Flow Networks Assisted Biological Sequence Editing}, author={Pouya M. Ghari and Alex Tseng and G{\"o}kcen Eraslan and Romain Lopez and Tommaso Biancalani and Gabriele Scalia and Ehsan Hajiramezanali}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=9BQ3l8OVru} }
Editing biological sequences has extensive applications in synthetic biology and medicine, such as designing regulatory elements for nucleic-acid therapeutics and treating genetic disorders. The primary objective in biological-sequence editing is to determine the optimal modifications to a sequence which augment certain biological properties while adhering to a minimal number of alterations to ensure safety and predictability. In this paper, we propose GFNSeqEditor, a novel biological-sequence editing algorithm which builds on the recently proposed area of generative flow networks (GFlowNets). Our proposed GFNSeqEditor identifies elements within a starting seed sequence that may compromise a desired biological property. Then, using a learned stochastic policy, the algorithm makes edits at these identified locations, offering diverse modifications for each sequence in order to enhance the desired property. Notably, GFNSeqEditor prioritizes edits with a higher likelihood of substantially improving the desired property. Furthermore, the number of edits can be regulated through specific hyperparameters. We conducted extensive experiments on a range of real-world datasets and biological applications, and our results underscore the superior performance of our proposed algorithm compared to existing state-of-the-art sequence editing methods.
Generative Flow Networks Assisted Biological Sequence Editing
[ "Pouya M. Ghari", "Alex Tseng", "Gökcen Eraslan", "Romain Lopez", "Tommaso Biancalani", "Gabriele Scalia", "Ehsan Hajiramezanali" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=8PbTU4exnV
@inproceedings{ paul2023combining, title={Combining Structure and Sequence for Superior Fitness Prediction}, author={Steffanie Paul and Aaron Kollasch and Pascal Notin and Debora Marks}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=8PbTU4exnV} }
Deep generative models of protein sequence and inverse folding models have shown great promise as protein design methods. While sequence-based models have shown strong zero-shot mutation effect prediction performance, inverse folding models have not been extensively characterized in this way. As these models use information from protein structures, it is likely that inverse folding models possess inductive biases that make them better predictors of certain function types. Using the collection of model scores contained in the newly updated ProteinGym, we systematically explore the differential zero-shot predictive power of sequence and inverse folding models. We find that inverse folding models consistently outperform the best-in-class sequence models on assays of protein thermostability, but have lower performance on other properties. Motivated by these findings, we develop StructSeq, an ensemble model combining information from sequence, multiple sequence alignments (MSAs), and structure. StructSeq achieves state-of-the-art Spearman correlation on ProteinGym and is robust to different functional assay types.
Combining Structure and Sequence for Superior Fitness Prediction
[ "Steffanie Paul", "Aaron Kollasch", "Pascal Notin", "Debora Marks" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=73wnK2BvWg
@inproceedings{ larsen2023improving, title={Improving Precision in Language Models Learning from Invalid Samples}, author={Niels Larsen and Giorgio Giannone and Ole Winther and Kai Blin}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=73wnK2BvWg} }
Language models are powerful generative tools capable of learning intricate patterns from vast amounts of unstructured data. Nevertheless, in domains that demand precision, such as science and engineering, the primary objective is to obtain an exact and accurate answer. Precision takes precedence in these contexts. In specialized tasks like chemical compound generation, the emphasis is on output accuracy rather than response diversity. Traditional self-refinement methods are ineffective for such domain-specific input/output pairs, unlike general language tasks. In this study, we introduce invalid2valid, a powerful and general post-processing mechanism that can significantly enhance precision in language models for input/output tasks spanning different domains and specialized applications.
Improving Precision in Language Models Learning from Invalid Samples
[ "Niels Larsen", "Giorgio Giannone", "Ole Winther", "Kai Blin" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=6NtRll9VdH
@inproceedings{ nagaraj2023machine, title={Machine learning derived embeddings of bulk multi-omics data enable clinically significant representations in a pan-cancer cohort}, author={Sanjay Nagaraj and ZACHARY MCCAW and Theofanis Karaletsos and Daphne Koller and Anna Shcherbina}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=6NtRll9VdH} }
Bulk multiomics data provides a comprehensive view of tissue biology, but datasets rarely contain matched transcriptomics and chromatin accessibility data for a given sample. Furthermore, it is difficult to identify relevant genetic signatures from the high-dimensional, sparse representations provided by omics modalities. Machine learning (ML) models have the ability to extract dense, information-rich, denoised representations from omics data, which facilitate finding novel genetic signatures. To this end, we develop and compare generative ML models through an evaluation framework that examines the biological and clinical relevance of the underlying latent embeddings produced. We focus our analysis on pan-cancer multiomics data from a set of 21 diverse cancer metacohorts across three datasets. We additionally investigate if our framework can generate robust representations from oncology imaging modalities (i.e. histopathology slides). Our best performing models learn clinical and biological signals and show improved performance over traditional baselines in our evaluations, including overall survival prediction.
Machine learning derived embeddings of bulk multi-omics data enable clinically significant representations in a pan-cancer cohort
[ "Sanjay Nagaraj", "ZACHARY MCCAW", "Theofanis Karaletsos", "Daphne Koller", "Anna Shcherbina" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
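A minimal, hedged sketch of the evaluation idea in the multi-omics abstract above: compress a bulk omics matrix into a low-dimensional embedding and probe that embedding for a clinical signal. The autoencoder architecture, dimensions, synthetic data, and binary probe task are all assumptions for illustration, not the paper's generative models or survival analysis.

```python
# Illustrative sketch (not the paper's models): embed a bulk omics matrix with
# a small autoencoder, then probe the latent space with a clinical label.
import torch
from torch import nn

n_samples, n_genes, latent_dim = 256, 2000, 32
torch.manual_seed(0)
X = torch.randn(n_samples, n_genes)          # stand-in expression matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()    # stand-in binary clinical label

encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_genes))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(50):                      # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

with torch.no_grad():
    z = encoder(X)                           # frozen embeddings for the probe

probe = nn.Linear(latent_dim, 1)
probe_opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for epoch in range(200):                     # does the latent carry clinical signal?
    probe_opt.zero_grad()
    logits = probe(z).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    loss.backward()
    probe_opt.step()

acc = ((probe(z).squeeze(-1) > 0).float() == y).float().mean()
print(f"probe accuracy on embeddings: {acc.item():.2f}")
```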
null
https://openreview.net/forum?id=4k926QVVM4
@inproceedings{ ngo2023targetaware, title={Target-Aware Variational Auto-Encoders for Ligand Generation with Multi-Modal Protein Modeling}, author={Khang Ngo and Truong Son Hy}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=4k926QVVM4} }
Without knowledge of specific pockets, generating ligands based on the global structure of a protein target plays a crucial role in drug discovery as it helps reduce the search space for potential drug-like candidates in the pipeline. However, contemporary methods require optimizing tailored networks for each protein, which is arduous and costly. To address this issue, we introduce TargetVAE, a target-aware variational auto-encoder that generates ligands with high binding affinities to arbitrary protein targets, guided by a novel prior network that learns from entire protein structures. We showcase the superiority of our approach by conducting extensive experiments and evaluations, including the assessment of generative model quality, ligand generation for unseen targets, docking score computation, and binding affinity prediction. Empirical results demonstrate the promising performance of our proposed approach. Our source code in PyTorch is publicly available at https://github.com/HySonLab/Ligand_Generation
Target-Aware Variational Auto-Encoders for Ligand Generation with Multi-Modal Protein Modeling
[ "Khang Ngo", "Truong Son Hy" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
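A minimal, hedged sketch of a conditional VAE whose decoder is conditioned on a fixed-size protein-target embedding, the general idea described in the TargetVAE abstract above. All module sizes, the feature representations, and the sampling routine are placeholder assumptions; this is not the TargetVAE architecture (see the authors' repository linked in the abstract for the real code).

```python
# Illustrative sketch (not TargetVAE): a minimal conditional VAE in which both
# encoder and decoder see a whole-protein target embedding.
import torch
from torch import nn


class ConditionalVAE(nn.Module):
    def __init__(self, ligand_dim=128, target_dim=64, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(ligand_dim + target_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + target_dim, 128), nn.ReLU(), nn.Linear(128, ligand_dim)
        )

    def forward(self, ligand, target):
        h = self.enc(torch.cat([ligand, target], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(torch.cat([z, target], dim=-1))
        return recon, mu, logvar

    def sample(self, target, n=4):
        """Generate ligand representations for a (possibly unseen) target."""
        z = torch.randn(n, self.mu.out_features)
        return self.dec(torch.cat([z, target.expand(n, -1)], dim=-1))


def vae_loss(recon, ligand, mu, logvar, beta=1.0):
    rec = nn.functional.mse_loss(recon, ligand)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl


if __name__ == "__main__":
    model = ConditionalVAE()
    ligand = torch.randn(8, 128)   # stand-in ligand features
    target = torch.randn(8, 64)    # stand-in whole-protein embeddings
    recon, mu, logvar = model(ligand, target)
    print(vae_loss(recon, ligand, mu, logvar).item())
    print(model.sample(target[0:1]).shape)
```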
null
https://openreview.net/forum?id=4HQtWpQ4WG
@inproceedings{ zhang2023topodiff, title={TopoDiff: Improve Protein Backbone Generation with Topology-aware Latent Encoding}, author={Yuyang Zhang and Zinnia Ma and Haipeng Gong}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=4HQtWpQ4WG} }
The $\textit{de novo}$ design of protein structures is an intriguing research topic in the field of protein engineering. Recent breakthroughs in diffusion-based generative models have demonstrated substantial promise in tackling this task, notably in the generation of diverse and realistic protein structures. While existing models predominantly focus on unconditional generation or fine-grained conditioning at the residue level, holistic, top-down approaches for controlling the overall topological arrangement remain insufficiently explored. In response, we introduce TopoDiff, a diffusion-based framework augmented by a global-structure encoding module, which is capable of learning, in an unsupervised manner, a compact latent representation of natural protein topologies with interpretable characteristics, and of simultaneously harnessing this learned information for controllable protein structure generation. We also propose a novel metric specifically designed to assess the coverage of sampled proteins with respect to the natural protein space. In comparative analyses with existing models, our generative model not only demonstrates comparable performance on established metrics but also exhibits better coverage across the recognized topology landscape. In summary, TopoDiff emerges as a novel solution for enhancing the controllability and comprehensiveness of $\textit{de novo}$ protein structure generation, presenting new possibilities for innovative applications in protein engineering and beyond.
TopoDiff: Improving Protein Backbone Generation with Topology-aware Latent Encoding
[ "Yuyang Zhang", "Zinnia Ma", "Haipeng Gong" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
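A minimal, hedged sketch of a single denoising-diffusion training step in which the noise predictor is conditioned on a global latent code from a structure encoder, the general setup described in the TopoDiff abstract above. The noise schedule, the flattened-coordinate representation, the MLP modules, and the timestep encoding are all simplifying assumptions, not the paper's architecture.

```python
# Illustrative sketch (not TopoDiff itself): one epsilon-prediction training step
# with the denoiser conditioned on a global topology latent.
import torch
from torch import nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

coord_dim, latent_dim = 3 * 64, 8            # 64 residues x 3D coords, flattened
structure_encoder = nn.Sequential(nn.Linear(coord_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
denoiser = nn.Sequential(nn.Linear(coord_dim + latent_dim + 1, 256), nn.ReLU(),
                         nn.Linear(256, coord_dim))
opt = torch.optim.Adam(list(structure_encoder.parameters()) + list(denoiser.parameters()), lr=1e-3)

x0 = torch.randn(16, coord_dim)              # stand-in flattened backbone coordinates
t = torch.randint(0, T, (16,))
noise = torch.randn_like(x0)
a_bar = alpha_bars[t].unsqueeze(-1)
x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward (noising) process

z = structure_encoder(x0)                    # global topology latent of the clean structure
t_feat = (t.float() / T).unsqueeze(-1)       # crude timestep feature
pred = denoiser(torch.cat([x_t, z, t_feat], dim=-1))

loss = nn.functional.mse_loss(pred, noise)   # standard epsilon-prediction objective
opt.zero_grad()
loss.backward()
opt.step()
print(f"denoising loss: {loss.item():.4f}")
```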
null
https://openreview.net/forum?id=46okCAggF5
@inproceedings{ li2023codonbert, title={Codon{BERT}: Large Language Models for m{RNA} design and optimization}, author={Sizhen Li and Saeed Moayedpour and Ruijiang Li and Michael Bailey and Saleh Riahi and Milad Miladi and Jacob Miner and Dinghai Zheng and Jun Wang and Akshay Balsubramani and Khang Tran and Minnie and Monica Wu and Xiaobo Gu and Ryan Clinton and Carla Asquith and Joseph Skaleski and Lianne Boeglin and Sudha Chivukula and Anusha Dias and Fernando Ulloa Montoya and Vikram Agarwal and Ziv Bar-Joseph and Sven Jager}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=46okCAggF5} }
mRNA-based vaccines and therapeutics are gaining popularity and usage across a wide range of conditions. One of the critical issues when designing such mRNAs is sequence optimization. Even small proteins or peptides can be encoded by an enormously large number of mRNAs. The actual mRNA sequence can have a large impact on several properties including expression, stability, immunogenicity, and more. To enable the selection of an optimal sequence, we developed CodonBERT, a large language model (LLM) for mRNAs. Unlike prior models, CodonBERT uses codons as inputs, which enables it to learn better representations. CodonBERT was trained on more than 10 million mRNA sequences from a diverse set of organisms. The resulting model captures important biological concepts. CodonBERT can also be extended to perform prediction tasks for various mRNA properties. CodonBERT outperforms previous mRNA prediction methods, including on a new flu vaccine dataset.
CodonBERT: Large Language Models for mRNA design and optimization
[ "Sizhen Li", "Saeed Moayedpour", "Ruijiang Li", "Michael Bailey", "Saleh Riahi", "Milad Miladi", "Jacob Miner", "Dinghai Zheng", "Jun Wang", "Akshay Balsubramani", "Khang Tran", "Minnie", "Monica Wu", "Xiaobo Gu", "Ryan Clinton", "Carla Asquith", "Joseph Skaleski", "Lianne Boeglin", "Sudha Chivukula", "Anusha Dias", "Fernando Ulloa Montoya", "Vikram Agarwal", "Ziv Bar-Joseph", "Sven Jager" ]
Workshop/GenBio
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
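A minimal, hedged sketch of codon-level tokenization, the input granularity described in the CodonBERT abstract above. The vocabulary construction, special tokens, and function names are simplifying assumptions, not the paper's tokenizer.

```python
# Illustrative sketch (not the CodonBERT tokenizer): split an mRNA coding
# sequence into non-overlapping codon tokens and map them to integer ids.
from itertools import product
from typing import List

CODON_VOCAB = {"".join(c): i + 2 for i, c in enumerate(product("ACGU", repeat=3))}
CODON_VOCAB["[PAD]"], CODON_VOCAB["[UNK]"] = 0, 1


def codon_tokenize(mrna: str) -> List[str]:
    """Chunk a coding sequence into non-overlapping 3-nucleotide codons."""
    mrna = mrna.strip().upper().replace("T", "U")
    return [mrna[i:i + 3] for i in range(0, len(mrna) - len(mrna) % 3, 3)]


def encode(mrna: str) -> List[int]:
    """Map codons to integer ids for a codon-level language model."""
    return [CODON_VOCAB.get(c, CODON_VOCAB["[UNK]"]) for c in codon_tokenize(mrna)]


if __name__ == "__main__":
    seq = "AUGGCCAUUGUAAUGGGCCGCUGA"   # toy coding sequence
    print(codon_tokenize(seq))
    print(encode(seq))
```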