bibtex_url (null) | proceedings (stringlengths 42-42) | bibtext (stringlengths 197-792) | abstract (stringlengths 303-3.45k) | title (stringlengths 10-159) | authors (sequencelengths 1-28, ⌀) | id (stringclasses, 44 values) | type (stringclasses, 16 values) | arxiv_id (stringlengths 0-10) | GitHub (sequencelengths 1-1) | paper_page (stringclasses, 444 values) | n_linked_authors (int64, -1 to 9) | upvotes (int64, -1 to 42) | num_comments (int64, -1 to 13) | n_authors (int64, -1 to 92) | paper_page_exists_pre_conf (int64, 0 to 1) | Models (sequencelengths 0-100) | Datasets (sequencelengths 0-11) | Spaces (sequencelengths 0-100) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
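The header above follows the Hugging Face dataset-viewer convention: each cell gives a column name, its dtype, and the observed min/max length or value range (⌀ marks columns with missing values). As a rough illustration of how such a metadata table could be queried, a minimal Python sketch using the `datasets` library follows; the repository id is a placeholder, not this dataset's actual name.

```python
# Minimal sketch (placeholder repository id, not this dataset's official loader):
# querying a paper-metadata table with the columns listed in the header above.
from datasets import load_dataset

ds = load_dataset("user/neurips-2023-workshop-metadata", split="train")

# Keep rows that carry an arXiv id and at least one non-empty GitHub link.
with_code = ds.filter(
    lambda row: row["arxiv_id"] != "" and any(url for url in row["GitHub"])
)

# Print a few title / arXiv id pairs.
for row in with_code.select(range(min(5, len(with_code)))):
    print(f'{row["title"]} (arXiv:{row["arxiv_id"]})')
```
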
null | https://openreview.net/forum?id=89fKHVtsMR | @inproceedings{
y{\"u}ksel2023firstorder,
title={First-order {ANIL} provably learns representations despite overparametrisation},
author={O{\u{g}}uz Y{\"u}ksel and Etienne Boursier and Nicolas Flammarion},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=89fKHVtsMR}
} | Meta-learning methods leverage data from previous tasks to learn a new task in a sample-efficient manner. In particular, model-agnostic methods look for initialisation points from which gradient descent quickly adapts to any new task.
Although it has been empirically suggested that such methods learn shared representations during pretraining, there is limited theoretical evidence of such behavior. In this direction, this work shows that, in the limit of infinite tasks, first-order ANIL with a linear two-layer network successfully learns linear shared representations. This result even holds under _overparametrisation_; having a width larger than the dimension of the shared representations results in an asymptotically low-rank solution. | First-order ANIL provably learns representations despite overparametrisation | [
"Oğuz Yüksel",
"Etienne Boursier",
"Nicolas Flammarion"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=7iibkkg0WI | @inproceedings{
gomes2023a,
title={A Data-Driven Measure of Relative Uncertainty for Misclassification Detection},
author={Eduardo Dadalto C{\^a}mara Gomes and Marco Romanelli and Georg Pichler and Pablo Piantanida},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=7iibkkg0WI}
} | Misclassification detection is an important problem in machine learning, as it allows for the identification of instances where the model's predictions are unreliable. However, conventional uncertainty measures such as Shannon entropy do not provide an effective way to infer the real uncertainty associated with the model's predictions. In this paper, we introduce a novel data-driven measure of relative uncertainty to an observer for misclassification detection. By learning patterns in the distribution of soft-predictions, our uncertainty measure can identify misclassified samples based on the predicted class probabilities. Interestingly, according to the proposed measure, soft-predictions that correspond to misclassified instances can carry a large amount of uncertainty, even though they may have low Shannon entropy. We demonstrate empirical improvements over multiple image classification tasks, outperforming state-of-the-art misclassification detection methods. | A Data-Driven Measure of Relative Uncertainty for Misclassification Detection | [
"Eduardo Dadalto Câmara Gomes",
"Marco Romanelli",
"Georg Pichler",
"Pablo Piantanida"
] | Workshop/M3L | poster | 2306.01710 | [
"https://github.com/edadaltocg/relative-uncertainty"
] | https://huggingface.co/papers/2306.01710 | 0 | 0 | 0 | 4 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=7QRaAfbium | @inproceedings{
lotfi2023nonvacuous,
title={Non-Vacuous Generalization Bounds for Large Language Models},
author={Sanae Lotfi and Marc Finzi and Yilun Kuang and Tim Rudner and Micah Goldblum and Andrew Wilson},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=7QRaAfbium}
} | Modern language models can contain billions of parameters, raising the question of whether they can generalize beyond the training data or simply regurgitate their training corpora. We provide the first non-vacuous generalization bounds for pretrained large language models (LLMs), indicating that language models are capable of discovering regularities that generalize to unseen data. In particular, we derive a compression bound that is valid for the unbounded log-likelihood loss, and we extend the bound to handle subsampling, accelerating bound computation on massive datasets. To achieve the extreme level of compression required for non-vacuous generalization bounds, we devise SubLoRA, a low-dimensional non-linear parameterization. Using this approach, we find that larger models have better generalization bounds and are more compressible than smaller models. | Non-Vacuous Generalization Bounds for Large Language Models | [
"Sanae Lotfi",
"Marc Finzi",
"Yilun Kuang",
"Tim Rudner",
"Micah Goldblum",
"Andrew Wilson"
] | Workshop/M3L | poster | 2312.17173 | [
"https://github.com/sanaelotfi/sublora-bounds-for-llms"
] | https://huggingface.co/papers/2312.17173 | 0 | 0 | 0 | 6 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=7OwnWAPfXu | @inproceedings{
ravichandran2023learning,
title={Learning from setbacks: the impact of adversarial initialization on generalization performance},
author={Kavya Ravichandran and Yatin Dandi and Stefani Karp and Francesca Mignacco},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=7OwnWAPfXu}
} | The loss landscape of state-of-the-art neural networks is far from simple. Understanding how optimization algorithms initialized differently navigate such high-dimensional non-convex profiles is a key problem in machine learning. [Liu et al. 2020] use pre-training on random labels to produce adversarial initializations that lead stochastic gradient descent into global minima with poor generalization. This result contrasts with other literature arguing that pre-training on random labels produces positive effects (see, e.g., [Maennel et al. (2020)]). We ask under which conditions this initialization results in solutions that generalize poorly. Our goal is to build a theoretical understanding of the properties of good solutions by isolating this phenomenon in some minimal models. To this end, we posit and study several hypotheses for why the phenomenon might arise in models of varying levels of simplicity, including representation quality and complex structure in data. | Learning from setbacks: the impact of adversarial initialization on generalization performance | [
"Kavya Ravichandran",
"Yatin Dandi",
"Stefani Karp",
"Francesca Mignacco"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=6pfCFDPhy6 | @inproceedings{
bordelon2023depthwise,
title={Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit},
author={Blake Bordelon and Lorenzo Noci and Mufan Li and Boris Hanin and Cengiz Pehlevan},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=6pfCFDPhy6}
} | We study residual networks with a residual branch scale of $1/\sqrt{\text{depth}}$ in combination with the $\mu$P parameterization.
We provide experiments demonstrating that residual architectures including convolutional ResNets and Vision Transformers trained with this parameterization exhibit transfer of optimal hyperparameters across width and depth on CIFAR-10 and ImageNet.
Furthermore, using recent developments in the dynamical mean field theory (DMFT) description of neural network learning dynamics, we show that this parameterization of ResNets admits a well-defined feature learning joint infinite-width and infinite-depth limit and show convergence of finite-size network dynamics towards this limit. | Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit | [
"Blake Bordelon",
"Lorenzo Noci",
"Mufan Li",
"Boris Hanin",
"Cengiz Pehlevan"
] | Workshop/M3L | oral | 2309.16620 | [
""
] | https://huggingface.co/papers/2309.16620 | 0 | 1 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=6ZUH4KRM1o | @inproceedings{
ujv{\'a}ry2023estimating,
title={Estimating optimal {PAC}-Bayes bounds with Hamiltonian Monte Carlo},
author={Szilvia Ujv{\'a}ry and Gergely Flamich and Vincent Fortuin and Jos{\'e} Miguel Hern{\'a}ndez-Lobato},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=6ZUH4KRM1o}
} | An important yet underexplored question in the PAC-Bayes literature is how much tightness we lose by restricting the posterior family to factorized Gaussian distributions when optimizing a PAC-Bayes bound. We investigate this issue by estimating data-independent PAC-Bayes bounds using the optimal posteriors, comparing them to bounds obtained using MFVI. Concretely, we (1) sample from the optimal Gibbs posterior using Hamiltonian Monte Carlo, (2) estimate its KL divergence from the prior with thermodynamic integration, and (3) propose three methods to obtain high-probability bounds under different assumptions. Our experiments on the MNIST dataset reveal significant tightness gaps, as much as 5-6% in some cases. | Estimating optimal PAC-Bayes bounds with Hamiltonian Monte Carlo | [
"Szilvia Ujváry",
"Gergely Flamich",
"Vincent Fortuin",
"José Miguel Hernández-Lobato"
] | Workshop/M3L | poster | 2310.20053 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=6V11PG7w5l | @inproceedings{
schaeffer2023divergence,
title={Divergence at the Interpolation Threshold: Identifying, Interpreting \& Ablating the Sources of a Deep Learning Puzzle},
author={Rylan Schaeffer and Zachary Robertson and Akhilan Boopathy and Mikail Khona and Ila Fiete and Andrey Gromov and Sanmi Koyejo},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=6V11PG7w5l}
} | Machine learning models misbehave, often in unexpected ways. One prominent misbehavior is when the test loss diverges at the interpolation threshold, perhaps best known from its distinctive appearance in double descent. While considerable theoretical effort has gone into understanding generalization of overparameterized models, less effort has been made to understand why the test loss misbehaves at the interpolation threshold. Moreover, analytically solvable models in this area employ a range of assumptions and use complex techniques from random matrix theory, statistical mechanics, and kernel methods, making it difficult to assess when and why test error might diverge. In this work, we analytically study the simplest supervised model - ordinary linear regression - and show intuitively and rigorously when and why a divergence occurs at the interpolation threshold using basic linear algebra. We identify three interpretable factors that, when all present, cause the divergence. We demonstrate on real data that linear models' test losses diverge at the interpolation threshold and that the divergence disappears when we ablate any one of the three identified factors. | Divergence at the Interpolation Threshold: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle | [
"Rylan Schaeffer",
"Zachary Robertson",
"Akhilan Boopathy",
"Mikail Khona",
"Ila Fiete",
"Andrey Gromov",
"Sanmi Koyejo"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=6O15A3h2yl | @inproceedings{
wang2023good,
title={Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult},
author={Yuqing Wang and Zhenghao Xu and Tuo Zhao and Molei Tao},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=6O15A3h2yl}
} | Large learning rates, when applied to gradient descent for nonconvex optimization, yield various implicit biases including edge of stability (Cohen et al., 2021), balancing (Wang et al., 2022), and catapult (Lewkowycz et al., 2020). These phenomena cannot be well explained by classical optimization theory. Significant theoretical progress has been made to understand these implicit biases, but it remains unclear for which objective functions they occur. This paper provides an initial step in answering this question, showing that these implicit biases are different tips of the same iceberg. Specifically, they occur when the optimization objective function has certain regularity. This regularity, together with gradient descent using a large learning rate that favors flatter regions, results in these nontrivial dynamical behaviors. To demonstrate this claim, we develop new global convergence theory under large learning rates for two examples of nonconvex functions without global smoothness, departing from typical assumptions in traditional analyses. We also discuss the implications for training neural networks, where different losses and activations can affect regularity and lead to highly varied training dynamics. | Good regularity creates large learning rate implicit biases: edge of stability, balancing, and catapult | [
"Yuqing Wang",
"Zhenghao Xu",
"Tuo Zhao",
"Molei Tao"
] | Workshop/M3L | poster | 2310.17087 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=5QyvPPnrGf | @inproceedings{
dong2023toward,
title={Toward Student-oriented Teacher Network Training for Knowledge Distillation},
author={Chengyu Dong and Liyuan Liu and Jingbo Shang},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=5QyvPPnrGf}
} | How to conduct teacher training for knowledge distillation is still an open problem. It has been widely observed that a best-performing teacher does not necessarily yield the best-performing student, suggesting a fundamental discrepancy between the current teacher training practice and the ideal teacher training strategy. To fill this gap, we explore the feasibility of training a teacher that is oriented toward student performance with empirical risk minimization (ERM). Our analyses are inspired by the recent findings that the effectiveness of knowledge distillation hinges on the teacher’s capability to approximate the true label distribution of training inputs. We theoretically establish that the ERM minimizer can approximate the true label distribution of training data as long as the feature extractor of the learner network is Lipschitz continuous and is robust to feature transformations. In light of our theory, we propose a teacher training method, SoTeacher, which incorporates Lipschitz regularization and consistency regularization into ERM. Experiments on benchmark datasets using various knowledge distillation algorithms and teacher-student pairs confirm that SoTeacher can improve student accuracy consistently. | Toward Student-oriented Teacher Network Training for Knowledge Distillation | [
"Chengyu Dong",
"Liyuan Liu",
"Jingbo Shang"
] | Workshop/M3L | poster | 2206.06661 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=5CRGXIjjJA | @inproceedings{
bair2023adaptive,
title={Adaptive Sharpness-Aware Pruning for Robust Sparse Networks},
author={Anna Bair and Hongxu Yin and Maying Shen and Pavlo Molchanov and Jose M. Alvarez},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=5CRGXIjjJA}
} | Robustness and compactness are two essential attributes of deep learning models that are deployed in the real world.
The goals of robustness and compactness may seem to be at odds, since robustness requires generalization across domains, while the process of compression exploits specificity in one domain.
We introduce \textit{Adaptive Sharpness-Aware Pruning (AdaSAP)}, which unifies these goals through the lens of network sharpness.
The AdaSAP method produces sparse networks that are robust to input variations which are \textit{unseen at training time}.
We achieve this by strategically incorporating weight perturbations in order to optimize the loss landscape. This allows the model to be both primed for pruning and regularized for improved robustness.
AdaSAP improves the robust accuracy of pruned models on classification and detection over recent methods by up to +6\% on OOD datasets, over a wide range of compression ratios, pruning criteria, and architectures. | Adaptive Sharpness-Aware Pruning for Robust Sparse Networks | [
"Anna Bair",
"Hongxu Yin",
"Maying Shen",
"Pavlo Molchanov",
"Jose M. Alvarez"
] | Workshop/M3L | poster | 2306.14306 | [
""
] | https://huggingface.co/papers/2306.14306 | 2 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=4pPnQqUMLS | @inproceedings{
yaras2023invariant,
title={Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations},
author={Can Yaras and Peng Wang and Wei Hu and Zhihui Zhu and Laura Balzano and Qing Qu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=4pPnQqUMLS}
} | An extensively studied phenomenon of the past few years in training deep networks is the implicit bias of gradient descent towards parsimonious solutions. In this work, we further investigate this phenomenon by narrowing our focus to deep matrix factorization, where we reveal surprising low-dimensional structures in the learning dynamics when the target matrix is low-rank. Specifically, we show that the evolution of gradient descent starting from arbitrary orthogonal initialization only affects a minimal portion of singular vector spaces across all weight matrices. In other words, the learning process happens only within a small invariant subspace of each weight matrix, despite the fact that all parameters are updated throughout training. From this, we provide rigorous justification for low-rank training in a specific, yet practical setting. In particular, we demonstrate that we can construct compressed factorizations that are equivalent to full-width, deep factorizations throughout training for solving low-rank matrix completion problems efficiently. | Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Matrix Factorizations | [
"Can Yaras",
"Peng Wang",
"Wei Hu",
"Zhihui Zhu",
"Laura Balzano",
"Qing Qu"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=4927EDtTI6 | @inproceedings{
nitanda2023how,
title={How Structured Data Guides Feature Learning: A Case Study of the Parity Problem},
author={Atsushi Nitanda and Kazusato Oko and Taiji Suzuki and Denny Wu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=4927EDtTI6}
} | Recent works have shown that neural networks optimized by gradient-based methods can adapt to sparse or low-dimensional target functions through feature learning; an often studied target is classification of the sparse parity function on the unit hypercube. However, such an isotropic data setting does not capture the anisotropy and low intrinsic dimensionality exhibited in realistic datasets. In this work, we address this shortcoming by studying how feature learning interacts with structured (anisotropic) input data: we consider the classification of sparse parity on a high-dimensional orthotope where the feature coordinates have varying magnitudes. Specifically, we analyze the learning complexity of the mean-field Langevin dynamics (MFLD), which describes the noisy gradient descent update on a two-layer neural network, and show that the statistical complexity (i.e. sample size) and computational complexity (i.e. network width) of MFLD can both be improved when prominent directions of the anisotropic input data align with the support of the target function. Moreover, we demonstrate the benefit of feature learning by establishing a kernel lower bound on the classification error, which applies to neural networks in the lazy regime. | How Structured Data Guides Feature Learning: A Case Study of the Parity Problem | [
"Atsushi Nitanda",
"Kazusato Oko",
"Taiji Suzuki",
"Denny Wu"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=3vkeYFEZxW | @inproceedings{
bhattamishra2023the,
title={The Next Symbol Prediction Problem: {PAC}-learning and its relation to Language Models},
author={Satwik Bhattamishra and Phil Blunsom and Varun Kanade},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3vkeYFEZxW}
} | The *next symbol prediction* (NSP) problem has been widely used to empirically evaluate the performance of neural sequence models on formal language tasks. We formalize the setting so as to make it amenable to PAC-learning analysis. In the NSP setting, a learning algorithm receives valid sequences (positive examples) from the underlying language, along with rich labels indicating, for every prefix, whether the prefix is in the language and what symbols could appear subsequently that lead to an accepting string. In the conventional classification setting where learning occurs with only positive and negative examples, the problem of learning regular languages or even subclasses represented by acyclic DFAs is known to be computationally hard based on cryptographic assumptions. In contrast, our main result shows that regular languages are efficiently PAC-learnable in the next symbol prediction setting. Further, we provide a more efficient learning algorithm for the case where the target DFA is known to be acyclic. Given the rich labels required in the NSP setting, one may wonder whether this setting is applicable to non-artificial tasks. We explain how language models can act as a source of such labeled data, and consequently, our algorithm can be applied to fit a finite-state model (DFA) that learns the (truncated) support of the language model. | The Next Symbol Prediction Problem: PAC-learning and its relation to Language Models | [
"Satwik Bhattamishra",
"Phil Blunsom",
"Varun Kanade"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=3vBtKsiHEC | @inproceedings{
d'angelo2023why,
title={Why Do We Need Weight Decay for Overparameterized Deep Networks?},
author={Francesco D'Angelo and Aditya Varre and Maksym Andriushchenko and Nicolas Flammarion},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3vBtKsiHEC}
} | Weight decay is a broadly used technique for training state-of-the-art deep networks. Despite its widespread usage, its role remains poorly understood. In this work, we highlight that the role of weight decay in modern deep learning is different from its regularization effect studied in classical learning theory. For overparameterized deep networks, we show how weight decay modifies the optimization dynamics, enhancing the ever-present implicit regularization of SGD via loss stabilization. | Why Do We Need Weight Decay for Overparameterized Deep Networks? | [
"Francesco D'Angelo",
"Aditya Varre",
"Maksym Andriushchenko",
"Nicolas Flammarion"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=3YcBch3ejt | @inproceedings{
cohen2023the,
title={The Double-Edged Sword: Perception and Uncertainty in Inverse Problems},
author={Regev Cohen and Ehud Rivlin and Daniel Freedman},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3YcBch3ejt}
} | Inverse problems pose significant challenges due to their inherent ambiguity in mapping observed data back to its original state. While recent advances have yielded impressive results in restoring degraded data, attaining high perceptual quality comes at the cost of increased hallucinations. This paper investigates this phenomenon to reveal a fundamental tradeoff between perception and uncertainty in solving inverse problems. Using error entropy as a measure of uncertainty, we demonstrate that higher perceptual quality in restoration algorithms is accompanied by a surge in uncertainty. Leveraging Rényi divergence as a perception metric, we derive bounds for this tradeoff, allowing for categorization of different inverse methods based on their performance. Additionally, we connect estimation distortion with uncertainty, offering novel insights into the traditional perception-distortion tradeoff. Our work provides a rigorous framework for analyzing uncertainty in the context of solving inverse problems, highlighting its interplay with perception and distortion, while underscoring the limitations of current approaches to achieving both high perceptual quality and low uncertainty simultaneously. | The Double-Edged Sword: Perception and Uncertainty in Inverse Problems | [
"Regev Cohen",
"Ehud Rivlin",
"Daniel Freedman"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=3WBUUAgPxy | @inproceedings{
wang2023nearinterpolators,
title={Near-Interpolators: Fast Norm Growth and Tempered Near-Overfitting},
author={Yutong Wang and Rishi Sonthalia and Wei Hu},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3WBUUAgPxy}
} | We study linear regression when the input data population
covariance matrix has eigenvalues $\lambda_i \sim i^{-\alpha}$ where $\alpha > 1$.
Under a generic random matrix theory assumption, we prove
that any near-interpolator, i.e., ${\beta}$ whose training error is below the noise floor, must have its squared $\ell_2$-norm growing super-linearly with the number of samples $n$:
$\|{\beta}\|_{2}^{2} = \Omega(n^{\alpha})$. This implies that existing norm-based generalization bounds increase as the number of samples increases, matching the empirical observations from prior work.
On the other hand, such near-interpolators when properly tuned achieve good generalization, where the test errors approach arbitrarily close to the noise floor.
Our work demonstrates that existing norm-based generalization bounds are vacuous for explaining
the generalization capability of \emph{any} near-interpolators.
Moreover, we show that the trade-off between train and test accuracy is better when the norm growth exponent is smaller. | Near-Interpolators: Fast Norm Growth and Tempered Near-Overfitting | [
"Yutong Wang",
"Rishi Sonthalia",
"Wei Hu"
] | Workshop/M3L | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=3BqXz5sAEN | @inproceedings{
tian2023on,
title={On robust overfitting: adversarial training induced distribution matters},
author={Runzhi Tian and Yongyi Mao},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=3BqXz5sAEN}
} | Robust overfitting has been observed to arise in adversarial training. We hypothesize that this phenomenon may be related to the evolution of the data distribution along the training trajectory. To investigate this, we select a set of checkpoints in adversarial training and perform standard training on distributions induced by adversarial perturbation w.r.t. the checkpoints. We observe that the obtained models become increasingly harder to generalize when robust overfitting occurs, thereby validating the hypothesis. We show that the hardness of generalization on the induced distributions is related to a certain local property of the perturbation operator at each checkpoint. The connection between the local property and the generalization on the induced distribution is proved by establishing an upper bound of the generalization error. Other interesting phenomena related to the adversarial training trajectory are also observed. | On robust overfitting: adversarial training induced distribution matters | [
"Runzhi Tian",
"Yongyi Mao"
] | Workshop/M3L | poster | 2311.16526 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=25g5u1t6nC | @inproceedings{
yau2023are,
title={Are Graph Neural Networks Optimal Approximation Algorithms?},
author={Morris Yau and Eric Lu and Nikolaos Karalias and Jessica Xu and Stefanie Jegelka},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=25g5u1t6nC}
} | In this work we design graph neural network architectures that capture optimal approximation algorithms for a large class of combinatorial optimization problems using powerful algorithmic tools from semidefinite programming (SDP).
Concretely, we prove that polynomial-sized message passing algorithms can represent the most powerful polynomial time algorithms for Max Constraint Satisfaction Problems assuming the Unique Games Conjecture. We leverage this result to construct efficient graph neural network architectures, OptGNN, that obtain high-quality approximate solutions on landmark combinatorial optimization problems such as Max Cut and Minimum Vertex Cover. Finally, we take advantage of OptGNN's ability to capture convex relaxations to design an algorithm for producing dual certificates of optimality (bounds on the optimal solution) from the learned embeddings of OptGNN. | Are Graph Neural Networks Optimal Approximation Algorithms? | [
"Morris Yau",
"Eric Lu",
"Nikolaos Karalias",
"Jessica Xu",
"Stefanie Jegelka"
] | Workshop/M3L | poster | 2310.00526 | [
"https://github.com/penlu/bespoke-gnn4do"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=1UGbUnQZv0 | @inproceedings{
tian2023joma,
title={Jo{MA}: Demystifying Multilayer Transformers via {JO}int Dynamics of {MLP} and Attention},
author={Yuandong Tian and Yiping Wang and Zhenyu Zhang and Beidi Chen and Simon Du},
booktitle={NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning},
year={2023},
url={https://openreview.net/forum?id=1UGbUnQZv0}
} | We propose Joint MLP/Attention (JoMA) dynamics, a novel mathematical framework to understand the training procedure of multilayer Transformer architectures. This is achieved by integrating out the self-attention layer in Transformers, producing a modified dynamics of MLP layers only. JoMA removes unrealistic assumptions in previous analysis (e.g., lack of residual connection), and predicts that the attention first becomes sparse (to learn salient tokens), then dense (to learn less salient tokens) in the presence of nonlinear activations, while in the linear case, it is consistent with existing works. We leverage JoMA to qualitatively explain how tokens are combined to form hierarchies in multilayer Transformers, when the input tokens are generated by a latent hierarchical generative model. Experiments on models trained from real-world datasets (Wikitext2/Wikitext103) and various pre-trained models (OPT, Pythia) verify our theoretical findings. | JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention | [
"Yuandong Tian",
"Yiping Wang",
"Zhenyu Zhang",
"Beidi Chen",
"Simon Du"
] | Workshop/M3L | poster | 2310.00535 | [
"https://github.com/facebookresearch/luckmatters"
] | https://huggingface.co/papers/2310.00535 | 2 | 2 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=yx3Hkx5ved | @inproceedings{
liu2023improved,
title={Improved Baselines with Visual Instruction Tuning},
author={Haotian Liu and Chunyuan Li and Yuheng Li and Yong Jae Lee},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=yx3Hkx5ved}
} | Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data, and finishes full training in ~1 day on a single 8-A100 node. We hope this can make state-of-the-art LMM research more accessible. Code and model will be publicly available. | Improved Baselines with Visual Instruction Tuning | [
"Haotian Liu",
"Chunyuan Li",
"Yuheng Li",
"Yong Jae Lee"
] | Workshop/Instruction | 2023 | 2310.03744 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=yi8KGilFFk | @inproceedings{
chen2023can,
title={Can {LLM}-Generated Misinformation Be Detected?},
author={Canyu Chen and Kai Shu},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=yi8KGilFFk}
} | The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and validate the potential real-world methods for generating misinformation with LLMs. Then, through extensive empirical investigation, we discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery on combating misinformation in the age of LLMs and the countermeasures. | Can LLM-Generated Misinformation Be Detected? | [
"Canyu Chen",
"Kai Shu"
] | Workshop/Instruction | 2023 | 2309.13788 | [
"https://github.com/llm-misinformation/llm-misinformation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=yeQvl1q8Vy | @inproceedings{
kim2023prometheus,
title={Prometheus: Inducing Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=yeQvl1q8Vy}
} | Recently, using a powerful proprietary Large Language Model (LLM) (e.g., GPT-4) as an evaluator for long-form responses has become the de facto standard. However, for practitioners with large-scale evaluation tasks and custom criteria in consideration (e.g., child-readability), using proprietary LLMs as an evaluator is unreliable due to the closed-source nature, uncontrolled versioning, and prohibitive costs. In this work, we propose PROMETHEUS, a fully open-source LLM that is on par with GPT-4’s evaluation capabilities when the appropriate reference materials (reference answer, score rubric) are provided. We first construct the FEEDBACK COLLECTION, a new dataset that consists of 1K fine-grained score rubrics, 20K instructions, and 100K responses and language feedback generated by GPT-4. Using the FEEDBACK COLLECTION, we train PROMETHEUS, a 13B evaluator LLM that can assess any given long-form text based on a customized score rubric provided by the user. Experimental results show that PROMETHEUS scores a Pearson correlation of 0.897 with human evaluators when evaluating 45 customized score rubrics, which is on par with GPT-4 (0.882), and greatly outperforms ChatGPT (0.392). Furthermore, measuring correlation with GPT-4 with 1222 customized score rubrics across four benchmarks (MT Bench, Vicuna Bench, Feedback Bench, Flask Eval) shows similar trends, bolstering PROMETHEUS’s capability as an evaluator LLM. Lastly, PROMETHEUS achieves the highest accuracy on two human preference benchmarks (HHH Alignment & MT Bench Human Judgment) compared to open-sourced reward models explicitly trained on human preference datasets, highlighting its potential as a universal reward model. We open-source our code, dataset, and model at https://github.com/kaistAI/Prometheus. | Prometheus: Inducing Evaluation Capability in Language Models | [
"Seungone Kim",
"Jamin Shin",
"Yejin Cho",
"Joel Jang",
"Shayne Longpre",
"Hwaran Lee",
"Sangdoo Yun",
"Seongjin Shin",
"Sungdong Kim",
"James Thorne",
"Minjoon Seo"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=ydY0xXNhwi | @inproceedings{
aw2023instructiontuned,
title={Instruction-tuned {LLM}s with World Knowledge are More Aligned to the Human Brain},
author={Khai Loong Aw and Syrielle Montariol and Badr AlKhamissi and Martin Schrimpf and Antoine Bosselut},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=ydY0xXNhwi}
} | Instruction-tuning is a widely adopted method of finetuning that enables large language models (LLMs) to generate output that more closely resembles human responses to natural language queries, in many cases leading to human-level performance on diverse testbeds. However, it remains unclear whether instruction-tuning truly makes LLMs more similar to how humans process language. We investigate the effect of instruction-tuning on LLM-human similarity in two ways: (1) brain alignment, the similarity of LLM internal representations to neural activity in the human language system, and (2) behavioral alignment, the similarity of LLM and human behavior on a reading task. We assess 25 vanilla and instruction-tuned LLMs across three datasets involving humans reading naturalistic stories and sentences, and discover that instruction-tuning generally enhances brain alignment by an average of 6%, but does not have a similar effect on behavioral alignment. To identify the factors underlying LLM-brain alignment, we compute the correlation between the brain alignment of LLMs and various model properties, such as model size, performance ability on problem-solving benchmarks, and ability on benchmarks requiring world knowledge spanning various domains. Notably, we find a strong positive correlation between brain alignment and model size (r = 0.95), as well as performance on tasks requiring world knowledge (r = 0.81). Our results demonstrate that instruction-tuning LLMs improves both world knowledge representations and human brain alignment, suggesting that mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain. | Instruction-tuned LLMs with World Knowledge are More Aligned to the Human Brain | [
"Khai Loong Aw",
"Syrielle Montariol",
"Badr AlKhamissi",
"Martin Schrimpf",
"Antoine Bosselut"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=xulyCXgIWH | @inproceedings{
liu2023ring,
title={Ring Attention with Blockwise Transformers for Near-Infinite Context},
author={Hao Liu and Matei Zaharia and Pieter Abbeel},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=xulyCXgIWH}
} | Transformers have emerged as the architecture of choice for many state-of-the-art AI models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands imposed by Transformers limit their ability to handle long sequences, thereby creating challenges for tasks involving extended sequences or long-term dependencies. We present a distinct approach, Ring Attention, which leverages blockwise computation of self-attention to distribute long sequences across multiple devices while overlapping the communication of key-value blocks with the computation of blockwise attention. Ring Attention enables training and inference of sequences that are up to device count times longer than those of prior memory-efficient Transformers, effectively eliminating the memory constraints imposed by individual devices.
Extensive experiments on language modeling tasks demonstrate the effectiveness of Ring Attention in allowing large sequence input size and improving performance. | Ring Attention with Blockwise Transformers for Near-Infinite Context | [
"Hao Liu",
"Matei Zaharia",
"Pieter Abbeel"
] | Workshop/Instruction | 2023 | 2310.01889 | [
"https://github.com/forhaoliu/ringattention"
] | https://huggingface.co/papers/2310.01889 | 0 | 9 | 3 | 3 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=xaqoZZqkPU | @inproceedings{
li2023reflectiontuning,
title={Reflection-Tuning: Recycling Data for Better Instruction-Tuning},
author={Ming Li and Lichang Chen and Jiuhai Chen and Shwai He and Tianyi Zhou},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=xaqoZZqkPU}
} | Recent advancements in Large Language Models (LLMs) have expanded the horizons of natural language understanding and generation. Notably, the output control and alignment with the input of LLMs can be refined through instruction tuning. However, as highlighted in several studies, low-quality data in the training set are usually detrimental to instruction tuning, resulting in inconsistent or even misleading LLM outputs. We propose a novel method, termed ``reflection-tuning,'' which addresses the problem by self-improvement and judging capabilities of LLMs. This approach utilizes an oracle LLM to recycle the original training data by introspecting and enhancing the quality of instructions and responses in the data. Extensive experiments on widely used evaluation benchmarks show that LLMs trained with our recycled data outperform those trained with existing datasets in various benchmarks. Codes, data, and models are available at https://github.com/tianyi-lab/Reflection_Tuning. | Reflection-Tuning: Recycling Data for Better Instruction-Tuning | [
"Ming Li",
"Lichang Chen",
"Jiuhai Chen",
"Shwai He",
"Tianyi Zhou"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=x6MlSOzbmC | @inproceedings{
ge2023supervised,
title={Supervised Fine-Tuning of Large Language Models on Human Demonstrations Through the Lens of Memorization},
author={Yubin Ge and Devamanyu Hazarika and Yang Liu and Mahdi Namazifar},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=x6MlSOzbmC}
} | In recent years, the field of natural language processing (NLP) has witnessed remarkable advancements driven by the development of large language models (LLMs). Various techniques, such as instruction tuning, have emerged as crucial approaches, enhancing LLMs' adaptability to new tasks guided by instructional prompts. Meanwhile, the phenomenon of memorization within LLMs has garnered considerable attention. In this work, we delve into memorization within LLMs during supervised fine-tuning on human demonstrations and find a distinct pattern marked by initial memorization growth followed by stabilization, with different degrees of memorization observed across various tasks. An intriguing observation is that the increase in validation perplexity, typically indicative of overfitting, does not result in lower generation quality. We probe deeper by examining the entropy derived from the LLM's output probabilities, uncovering a consistent trend of decreasing entropy throughout training under both nucleus sampling and teacher forcing scenarios. This implies growing confidence within the LLM in generating output, while such output may deviate from the expected ground truth. Building upon our investigation, we propose a novel Memorization-Based Curriculum (MBC) learning approach. We leverage likelihood as a proxy for measuring memorization and employ it to construct a data distribution for sampling instances with replacement during supervised fine-tuning, emphasizing data with lower degrees of memorization. Evaluations using GPT-4 as a judge demonstrate the effectiveness of MBC in fine-tuning LLMs on human demonstrations. | Supervised Fine-Tuning of Large Language Models on Human Demonstrations Through the Lens of Memorization | [
"Yubin Ge",
"Devamanyu Hazarika",
"Yang Liu",
"Mahdi Namazifar"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=wq4OaU8tfE | @inproceedings{
wen2023grounding,
title={Grounding Code Generation with Input-Output Specifications},
author={Yeming Wen and Pengcheng Yin and Kensen Shi and Henryk Michalewski and Swarat Chaudhuri and Alex Polozov},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=wq4OaU8tfE}
} | Large language models (LLMs) have demonstrated significant potential in code generation. However, the code generated by these models occasionally deviates from the user's intended outcome, resulting in executable but incorrect code. To mitigate this issue, we propose Gift4Code, a novel approach for the instruction fine-tuning of LLMs specifically tailored for code generation. Our method leverages synthetic data produced by the LLM itself and utilizes execution-derived feedback as a key learning signal. This feedback, in the form of program input-output specifications, is provided to the LLM to facilitate fine-tuning. We evaluated our approach on two challenging data science benchmarks, Arcade and DS-1000. Our results suggest that the method enhances the LLM's alignment with user intentions, reducing the incidence of executable but incorrect outputs. | Grounding Code Generation with Input-Output Specifications | [
"Yeming Wen",
"Pengcheng Yin",
"Kensen Shi",
"Henryk Michalewski",
"Swarat Chaudhuri",
"Alex Polozov"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=wcSx5VjTPP | @inproceedings{
lu2023instag,
title={\#InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models},
author={Keming Lu and Hongyi Yuan and Zheng Yuan and Runji Lin and Junyang Lin and Chuanqi Tan and Chang Zhou and Jingren Zhou},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=wcSx5VjTPP}
} | Pre-trained large language models (LLMs) can understand and align with human instructions by supervised fine-tuning (SFT). It is commonly believed that diverse and complex SFT data are of the essence to enable good instruction-following abilities. However, such diversity and complexity are obscure and lack quantitative analyses. In this work, we propose InsTag, an open-set instruction tagging method, to identify semantics and intentions of human instructions by tags that provide access to definitions and quantified analyses of instruction diversity and complexity. We obtain 6.6K fine-grained tags to describe instructions from popular open-sourced SFT datasets comprehensively. We find that the abilities of aligned LLMs benefit from more diverse and complex instructions in SFT data. Based on this observation, we propose a data sampling procedure based on InsTag, and select 6K diverse and complex samples from open-source datasets for SFT. The resulting models, TagLM, outperform open-source models based on considerably larger SFT data evaluated by MT-Bench, echoing the importance of instruction diversity and complexity and the effectiveness of InsTag. InsTag has robust potential to be extended to more applications beyond the data selection as it provides an effective way to analyze the distribution of instructions. | #InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models | [
"Keming Lu",
"Hongyi Yuan",
"Zheng Yuan",
"Runji Lin",
"Junyang Lin",
"Chuanqi Tan",
"Chang Zhou",
"Jingren Zhou"
] | Workshop/Instruction | 2023 | 2308.07074 | [
"https://github.com/ofa-sys/instag"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=wYLASkYFJU | @inproceedings{
lai2023training,
title={Training Speech Recognition Models to Follow Instructions},
author={Cheng-I Lai and Zhiyun Lu and Liangliang Cao and Ruoming Pang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=wYLASkYFJU}
} | Conventional end-to-end Automatic Speech Recognition (ASR) models primarily focus on exact transcription tasks, lacking flexibility for nuanced user interactions. In this paper, we train a speech recognition model to follow a diverse set of free-form text instructions for a multitude of speech recognition tasks -- ranging from simple transcript manipulation to summarization. We emphasize that even without pre-trained LLMs or speech modules, a Listen-Attend-Spell model trained from scratch on Librispeech understands and executes instructions with high fidelity. These preliminary findings highlight the potential of instruction-following training to advance speech foundation models. | Training Speech Recognition Models to Follow Instructions | [
"Cheng-I Lai",
"Zhiyun Lu",
"Liangliang Cao",
"Ruoming Pang"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=vDXvtytjdX | @inproceedings{
zhang2023enhanced,
title={Enhanced Visual Instruction Tuning for Text-Rich Image Understanding},
author={Yanzhe Zhang and Ruiyi Zhang and Jiuxiang Gu and Yufan Zhou and Nedim Lipka and Diyi Yang and Tong Sun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=vDXvtytjdX}
} | Instruction tuning enhances the capability of Large Language Models (LLMs) to interact with humans. Furthermore, recent instruction-following datasets include images as visual input, collecting responses for image-based instructions. However, current visual instruction-tuned models cannot comprehend textual details within images well. This work enhances the current visual instruction tuning pipeline with text-rich images (e.g., movie posters, book covers, etc.). Specifically, we first used publicly available OCR tools to collect results on 422K text-rich images from the LAION dataset. Furthermore, we prompt text-only GPT-4 with recognized text and image captions to generate 16K conversations, each containing question-answer pairs for text-rich images. By combining our collected data with previous multimodal instruction-following data, our model, LLaVAR, substantially improves the capability of the LLaVA model on text-based VQA datasets (up to 20% accuracy improvement). The GPT-4-based instruction-following evaluation also demonstrates the improvement of our model on both natural images and text-rich images. Through qualitative analysis, LLaVAR shows promising interaction skills (e.g., reasoning, writing, and elaboration) with humans based on the latest real-world online content that combines text and images. We make our code/data/models publicly available. | Enhanced Visual Instruction Tuning for Text-Rich Image Understanding | [
"Yanzhe Zhang",
"Ruiyi Zhang",
"Jiuxiang Gu",
"Yufan Zhou",
"Nedim Lipka",
"Diyi Yang",
"Tong Sun"
] | Workshop/Instruction | 2023 | [
"https://github.com/SALT-NLP/LLaVAR"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=umtG8Hs32R | @inproceedings{
sakhinana2023crossmodal,
title={Cross-Modal Learning for Chemistry Property Prediction: Large Language Models Meet Graph Machine Learning},
author={Sagar Sakhinana and Venkataramana Runkana},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=umtG8Hs32R}
} | In the field of chemistry, the objective is to create novel molecules with desired properties, facilitating accurate property predictions for applications such as material design and drug screening. However, existing graph deep learning methods face limitations that curb their expressive power. To address this, we explore the integration of vast molecular domain knowledge from Large Language Models
(LLMs) with the complementary strengths of Graph Neural Networks (GNNs) to enhance performance in property prediction tasks. We introduce a Multi-Modal Fusion (MMF) framework that synergistically harnesses the analytical prowess of GNNs and the linguistic generative and predictive abilities of LLMs, thereby improving accuracy and robustness in predicting molecular properties. Our framework
combines the effectiveness of GNNs in modeling graph-structured data with the zero-shot and few-shot learning capabilities of LLMs, enabling improved predictions while reducing the risk of overfitting. Furthermore, our approach effectively addresses distributional shifts, a common challenge in real-world applications, and showcases the efficacy of learning cross-modal representations, surpassing
state-of-the-art baselines on benchmark datasets for property prediction tasks. | Cross-Modal Learning for Chemistry Property Prediction: Large Language Models Meet Graph Machine Learning | [
"Sagar Sakhinana",
"Venkataramana Runkana"
] | Workshop/Instruction | 2023 | 2408.14964 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=twgpHJmvJy | @inproceedings{
lin2023use,
title={Use Your {INSTINCT}: {INST}ruction optimization usIng Neural bandits Coupled with Transformers},
author={Xiaoqiang Lin and Zhaoxuan Wu and Zhongxiang Dai and Wenyang Hu and Yao Shu and See-Kiong Ng and Patrick Jaillet and Bryan Kian Hsiang Low},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=twgpHJmvJy}
} | Large language models (LLMs) have shown remarkable instruction-following capabilities and achieved impressive performances in various applications. However, the performances of LLMs depend heavily on the instructions given to them, which are typically manually tuned with substantial human efforts. Recent work has used the query-efficient Bayesian optimization (BO) algorithm to automatically optimize the instructions given to black-box LLMs. However, BO usually falls short when optimizing highly sophisticated (e.g., high-dimensional) objective functions, such as the functions mapping an instruction to the performance of an LLM. This is mainly due to the limited expressive power of the Gaussian process (GP) model which is used by BO as a surrogate to model the objective function. Meanwhile, it has been repeatedly shown that neural networks (NNs), especially pre-trained transformers, possess strong expressive power and can model highly complex functions. So, we adopt a neural bandit algorithm which replaces the GP in BO by an NN surrogate to optimize instructions for black-box LLMs. More importantly, the neural bandit algorithm allows us to naturally couple the NN surrogate with the hidden representation learned by a pre-trained transformer (i.e., an open-source LLM), which significantly boosts its performance. These motivate us to propose our INSTruction optimization usIng Neural bandits Coupled with Transformers (INSTINCT) algorithm. We perform instruction optimization for ChatGPT and use extensive experiments to show that our INSTINCT consistently outperforms the existing methods in different tasks, such as in various instruction induction tasks and the task of improving the zero-shot chain-of-thought instruction. | Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers | [
"Xiaoqiang Lin",
"Zhaoxuan Wu",
"Zhongxiang Dai",
"Wenyang Hu",
"Yao Shu",
"See-Kiong Ng",
"Patrick Jaillet",
"Bryan Kian Hsiang Low"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tw2w8rgWMV | @inproceedings{
nayak2023learning,
title={Learning to Generate Instructions to Adapt Language Models to New Tasks},
author={Nihal Nayak and Yiyang Nan and Avi Trost and Stephen Bach},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=tw2w8rgWMV}
} | We present Bonito, the first open-source model for conditional task generation: the problem of converting an unannotated corpus into a collection of tasks for instruction tuning. Our goal is to enable efficient task adaptation of instruction-tuned language models on users' specialized, private data without relying on proprietary API-access-only models like GPT-4. We create Bonito by remixing existing, general-purpose instruction tuning data into a new training mixture for conditional task generation. Bonito learns to generate new tasks conditioned on the text and desired task type. The generated instructions in the specialized domain can be used to further train language models. We demonstrate that this procedure leads to improved performance on extractive question answering and yes-no question answering: across four datasets, each in a different domain, Bonito improves the F1 score of FLAN-T5 Small by an average of 14.5% and FLAN-T5 Base by an average of 4.4%. We also find that Bonito improves FLAN-T5 Large on two out of four datasets but shows a slight negative transfer on the other two datasets. Overall, these results show a promising direction for adapting instruction-tuned language models to new tasks without using proprietary models. | Learning to Generate Instructions to Adapt Language Models to New Tasks | [
"Nihal Nayak",
"Yiyang Nan",
"Avi Trost",
"Stephen Bach"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tdqZUxKfIj | @inproceedings{
mitchell2023an,
title={An Emulator for Fine-tuning Large Language Models using Small Language Models},
author={Eric Mitchell and Rafael Rafailov and Archit Sharma and Chelsea Finn and Christopher Manning},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=tdqZUxKfIj}
} | Widely used language models (LMs) are typically built by scaling up a two-stage training pipeline: a pre-training stage that uses a very large, diverse dataset of text and a fine-tuning (sometimes, 'alignment') stage using more targeted examples of specific behaviors and/or human preferences. While it has been hypothesized that knowledge and skills come from pre-training, and fine-tuning mostly filters this knowledge and skillset, this intuition has not been rigorously tested. In this paper, we test this hypothesis with a novel methodology for scaling these two stages independently, essentially asking, *What would happen if we combined the knowledge learned by a large model during pre-training with the knowledge learned by a small model during fine-tuning (or vice versa)?* Using a reinforcement learning-based framework derived from recent developments in learning from human preferences, we introduce *emulated fine-tuning (EFT)*, a principled and practical method for sampling from a distribution that approximates the result of pre-training and fine-tuning at different scales. Our experiments with EFT show that scaling up fine-tuning tends to improve helpfulness, while scaling up pre-training tends to improve factuality. Further, we show that EFT enables test-time adjustment of competing behavioral factors like helpfulness and harmlessness without additional training. Finally, we find that a special case of emulated fine-tuning, which we call LM *up-scaling*, avoids resource-intensive fine-tuning of large pre-trained models by ensembling small fine-tuned models with large pre-trained models, essentially 'emulating' the result of fine-tuning the large pre-trained model. Up-scaling consistently improves helpfulness and factuality of widely used pre-trained models like Llama, Llama-2, and Falcon, without additional hyperparameters or training. | An Emulator for Fine-tuning Large Language Models using Small Language Models | [
"Eric Mitchell",
"Rafael Rafailov",
"Archit Sharma",
"Chelsea Finn",
"Christopher Manning"
] | Workshop/Instruction | 2023 | 2310.12962 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=tC2lDRhHNT | @inproceedings{
zeng2023evaluating,
title={Evaluating Large Language Models at Evaluating Instruction Following},
author={Zhiyuan Zeng and Jiatong Yu and Tianyu Gao and Yu Meng and Tanya Goyal and Danqi Chen},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=tC2lDRhHNT}
} | As research in large language models (LLMs) continues to accelerate, LLM-based evaluation has emerged as a scalable and cost-effective alternative to human evaluations for comparing the ever-increasing list of models. This paper investigates the efficacy of these "LLM evaluators", particularly in using them to assess instruction following, a metric that gauges how closely generated text adheres to the given instruction. We introduce a challenging meta-evaluation benchmark, LLMBar, designed to test the ability of an LLM evaluator in discerning instruction-following outputs. The authors manually curated 419 pairs of outputs, one adhering to the instructions and the other diverging from them, where the diverging output may possess deceptive qualities that mislead an LLM evaluator, e.g., a more engaging tone. Contrary to existing meta-evaluations, we discover that different evaluators (i.e., combinations of LLMs and prompts) exhibit distinct performance on LLMBar and even the highest-scoring ones have substantial room for improvement. We also present a novel suite of prompting strategies that further close the gap between LLM and human evaluators. With LLMBar, we hope to offer more insight into LLM evaluators and foster future research in developing better instruction-following models. | Evaluating Large Language Models at Evaluating Instruction Following | [
"Zhiyuan Zeng",
"Jiatong Yu",
"Tianyu Gao",
"Yu Meng",
"Tanya Goyal",
"Danqi Chen"
] | Workshop/Instruction | 2023 | 2310.07641 | [
"https://github.com/princeton-nlp/llmbar"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=rz6u0qOVth | @inproceedings{
li2023instructionfollowing,
title={Instruction-following Evaluation through Verbalizer Manipulation},
author={Shiyang Li and Jun Yan and Hai Wang and Zheng Tang and Xiang Ren and Vijay Srinivasan and Hongxia Jin},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=rz6u0qOVth}
} | While instruction-tuned models have shown remarkable success in various natural language processing tasks, accurately evaluating their ability to follow instructions remains challenging. Existing benchmarks primarily focus on common instructions that align well with what the model learned during training. However, proficiency in responding to these instructions does not necessarily imply strong ability in instruction following. In this paper, we propose a novel instruction-following evaluation protocol called verbalizer manipulation. It instructs the model to verbalize the task label with words aligning with model priors to different extents, adopting verbalizers from highly aligned (e.g., outputting "positive" for positive sentiment), to minimally aligned (e.g., outputting "negative" for positive sentiment). Verbalizer manipulation can be seamlessly integrated with any classification benchmark to examine the model's reliance on priors and its ability to override them to accurately follow the instructions. We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them. We observe that the instruction-following abilities of models, across different families and scales, are significantly distinguished by their performance on less natural verbalizers. Even the strongest GPT-4 model struggles to perform better than random guessing on the most challenging verbalizer, emphasizing the need for continued advancements to improve their instruction-following abilities. | Instruction-following Evaluation through Verbalizer Manipulation | [
"Shiyang Li",
"Jun Yan",
"Hai Wang",
"Zheng Tang",
"Xiang Ren",
"Vijay Srinivasan",
"Hongxia Jin"
] | Workshop/Instruction | 2023 | 2307.10558 | [
""
] | https://huggingface.co/papers/2307.10558 | 2 | 3 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=rxEmiOEIFL | @inproceedings{
zheng2023delve,
title={Delve into {PPO}: Implementation Matters for Stable {RLHF}},
author={Rui Zheng and Shihan Dou and Songyang Gao and Yuan Hua and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Yuhao Zhou and Limao Xiong and Lu Chen and Zhiheng Xi and Nuo Xu and Wenbin Lai and Minghao Zhu and Haoran Huang and Tao Gui and Qi Zhang and Xuanjing Huang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=rxEmiOEIFL}
} | Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence. Their primary objective is to function as a human-centric (helpful, honest, and harmless) assistant. Alignment with humans assumes paramount significance, and reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit. Current technical routes usually include reward models to measure human preferences, Proximal Policy Optimization (PPO) to optimize policy model outputs, and process supervision to improve step-by-step reasoning capabilities. However, due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers to advance technical alignment and the safe deployment of LLMs. Stable RLHF training remains a puzzle.
In this paper, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training. We identify policy constraints as the key factor for the effective implementation of the PPO algorithm. Therefore, we explore PPO-max, an advanced version of the PPO algorithm, to efficiently improve the training stability of the policy model. Based on our main results, we perform a comprehensive analysis of RLHF abilities compared with SFT models and ChatGPT. Beyond additional qualitative results, we even find that LLMs successfully trained by our algorithm can often better understand the deep meaning of the queries, and their responses resonate more directly with people. | Delve into PPO: Implementation Matters for Stable RLHF | [
"Rui Zheng",
"Shihan Dou",
"Songyang Gao",
"Yuan Hua",
"Wei Shen",
"Binghai Wang",
"Yan Liu",
"Senjie Jin",
"Yuhao Zhou",
"Limao Xiong",
"Lu Chen",
"Zhiheng Xi",
"Nuo Xu",
"Wenbin Lai",
"Minghao Zhu",
"Haoran Huang",
"Tao Gui",
"Qi Zhang",
"Xuanjing Huang"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=qN9T4cmTEw | @inproceedings{
song2023nlpbench,
title={{NLPB}ench: Evaluating Large Language Models on Solving {NLP} Problems},
author={Linxin Song and Jieyu Zhang and Lechao Cheng and Pengyuan Zhou and Tianyi Zhou and Irene Li},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=qN9T4cmTEw}
} | Recent developments in large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP). Despite these successes, there remains a dearth of research dedicated to the NLP problem-solving abilities of LLMs. To fill the gap in this area, we present a unique benchmarking dataset, NLPBench, comprising 378 college-level NLP questions spanning various NLP topics sourced from some Universities' prior final exams. NLPBench includes questions with context, in which multiple sub-questions share the same public information, and diverse question types, including multiple choice, short answer, and math. Our evaluation, centered on LLMs such as GPT-3.5/4, PaLM-2, and LLAMA-2, incorporates advanced prompting strategies like the chain-of-thought (CoT) and tree-of-thought (ToT). Our study reveals that the effectiveness of the advanced prompting strategies can be inconsistent, occasionally damaging LLM performance, especially in smaller models like the LLAMA-2 (13b). Furthermore, our manual assessment illuminated specific shortcomings in LLMs' scientific problem-solving skills, with weaknesses in logical decomposition and reasoning notably affecting results. | NLPBench: Evaluating Large Language Models on Solving NLP Problems | [
"Linxin Song",
"Jieyu Zhang",
"Lechao Cheng",
"Pengyuan Zhou",
"Tianyi Zhou",
"Irene Li"
] | Workshop/Instruction | 2023 | 2309.15630 | [
"https://github.com/linxins97/nlpbench"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=psJibeRV0T | @inproceedings{
hu2023evoke,
title={Evoke: Evoking Critical Thinking Abilities in {LLM}s via Reviewer-Author Prompt Editing},
author={Xinyu Hu and Pengfei Tang and Simiao Zuo and Zihan Wang and Bowen Song and Qiang Lou and Jian Jiao and Denis Charles},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=psJibeRV0T}
} | Large language models (LLMs) have made impressive progress in natural language processing. These models rely on proper human instructions (or prompts) to generate suitable responses. However, the potential of LLMs is not fully harnessed by commonly-used prompting methods: many human-in-the-loop algorithms employ ad-hoc procedures for prompt selection, while automatic prompt generation approaches essentially search all possible prompts randomly and inefficiently. We propose Evoke, an automatic prompt refinement framework. In Evoke, there are two instances of the same LLM: one acts as a reviewer (LLM-Reviewer) that scores the current prompt; the other acts as an author (LLM-Author) that edits the prompt by considering the edit history and the reviewer's feedback. Such an author-reviewer feedback loop ensures that the prompt is refined in each iteration. We further integrate a data selection approach into Evoke, where only the hard samples are exposed to the LLM. The hard samples are more important because the LLM can develop a deeper understanding of the tasks from them, while the model may already know how to solve the easier cases. Experimental results show that Evoke significantly outperforms existing methods. For instance, in the challenging task of logical fallacy detection, Evoke scores above 80, while all other baseline methods struggle to reach 20. | Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing | [
"Xinyu Hu",
"Pengfei Tang",
"Simiao Zuo",
"Zihan Wang",
"Bowen Song",
"Qiang Lou",
"Jian Jiao",
"Denis Charles"
] | Workshop/Instruction | 2023 | 2310.13855 | [
""
] | https://huggingface.co/papers/2310.13855 | 0 | 1 | 0 | 8 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=pFHeZzl5ft | @inproceedings{
lin2023urial,
title={{URIAL}: Tuning-Free Instruction Learning and Alignment for Untuned {LLM}s},
author={Bill Yuchen Lin and Abhilasha Ravichander and Ximing Lu and Nouha Dziri and Melanie Sclar and Khyathi Chandu and Chandra Bhagavatula and Yejin Choi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=pFHeZzl5ft}
} | Large language models (LLMs) have shown significant improvements due to alignment tuning, that is, supervised fine-tuning (SFT) on instruction data and reinforcement learning from human feedback (RLHF).
This raises questions about what is precisely learned during the alignment tuning process.
We investigate the effects of alignment tuning through the lens of token distribution shift between untuned LLMs and their aligned counterparts (e.g., Llama-2 versus Llama-2-Chat).
Our findings reveal that most distribution changes lie in stylistic tokens (e.g., transitional words, discourse markers), suggesting that LLMs primarily learn the language style of AI assistants during alignment tuning, while most of the useful knowledge has been acquired by untuned LLMs. Thus, we pose the question: Is it necessary to update model weights to attain LLM alignment?
Based on these insights, we propose an alternative tuning-free method for instruction learning and alignment for untuned LLMs, URIAL, which achieves effective alignment solely through in-context learning (ICL) with as few as three curated, stylistic examples and a system prompt.
We also introduce a dataset named just-eval-instruct, which consists of 1,000 examples collected from 9 existing instruction datasets such as those used by AlpacaEval.
Our multi-aspect evaluation demonstrates that \textsc{Urial} can achieve highly satisfactory performance, sometimes equaling or surpassing SFT+RLHF counterparts, especially when the untuned LLM is sufficiently pre-trained.
This implies that fine-tuning may not be as always crucial as previously assumed for LLM alignment, and lightweight alignment methods like \textsc{Urial} hold promise for efficiently tailoring LLM behavior without fine-tuning. | URIAL: Tuning-Free Instruction Learning and Alignment for Untuned LLMs | [
"Bill Yuchen Lin",
"Abhilasha Ravichander",
"Ximing Lu",
"Nouha Dziri",
"Melanie Sclar",
"Khyathi Chandu",
"Chandra Bhagavatula",
"Yejin Choi"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=magEgFpK1y | @inproceedings{
saito2023verbosity,
title={Verbosity Bias in Preference Labeling by Large Language Models},
author={Keita Saito and Akifumi Wachi and Koki Wataoka and Youhei Akimoto},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=magEgFpK1y}
} | In recent years, Large Language Models (LLMs) have witnessed a remarkable surge in prevalence, altering the landscape of natural language processing and machine learning. One key factor in improving the performance of LLMs is alignment with humans, achieved via Reinforcement Learning from Human Feedback (RLHF), as is the case for many LLMs such as GPT-4 and Bard. In addition, recent studies are investigating the replacement of human feedback with feedback from other LLMs, termed Reinforcement Learning from AI Feedback (RLAIF). We examine the biases that come along with evaluating LLMs with other LLMs and take a closer look into verbosity bias – a bias where LLMs sometimes prefer more verbose answers even if they are of similar quality. We see that in our problem setting, GPT-4 prefers longer answers more than humans do. We also propose a metric to measure this bias. | Verbosity Bias in Preference Labeling by Large Language Models | [
"Keita Saito",
"Akifumi Wachi",
"Koki Wataoka",
"Youhei Akimoto"
] | Workshop/Instruction | 2023 | 2310.10076 | [
""
] | https://huggingface.co/papers/2310.10076 | 0 | 2 | 0 | 4 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=kqtB2AueFm | @inproceedings{
byun2023automatic,
title={Automatic Construction of a Korean Toxic Instruction Dataset for Ethical Tuning of Large Language Models},
author={SungJoo Byun and Dongjun Jang and Hyemi Jo and Hyopil Shin},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=kqtB2AueFm}
} | $\textit{\textbf{Caution}: this paper may include material that could be offensive or distressing.}$
The advent of Large Language Models (LLMs) necessitates the development of training approaches that mitigate the generation of unethical language and aptly manage toxic user queries. Given the challenges related to human labor and the scarcity of data, we present KoTox, comprising 39K unethical instruction-output pairs. This collection of automatically generated toxic instructions refines the training of LLMs and establishes a foundational framework for improving LLMs' ethical awareness and response to various toxic inputs, promoting more secure and responsible interactions in Natural Language Processing (NLP) applications. | Automatic Construction of a Korean Toxic Instruction Dataset for Ethical Tuning of Large Language Models | [
"SungJoo Byun",
"Dongjun Jang",
"Hyemi Jo",
"Hyopil Shin"
] | Workshop/Instruction | 2023 | 2311.18215 | [
""
] | https://huggingface.co/papers/2311.18215 | 1 | 0 | 0 | 4 | 1 | [] | [
"SungJoo/KoTox"
] | [] |
null | https://openreview.net/forum?id=kEK08VdSO5 | @inproceedings{
tian2023finetuning,
title={Fine-tuning Language Models for Factuality},
author={Katherine Tian and Eric Mitchell and Huaxiu Yao and Christopher Manning and Chelsea Finn},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=kEK08VdSO5}
} | The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines. However, language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations', which can harmfully perpetuate myths and misconceptions. Further, manual fact-checking of model responses is a time-consuming process, making human factuality labels expensive to acquire. In this work, we leverage two key recent innovations in NLP to fine-tune language models to be more factual without human labeling, targeting more open-ended generation settings than past work. First, several recent works have proposed methods for scoring the factuality of open-ended text derived from consistency with an external knowledge base or simply a large model's confidence scores. Second, the Direct Preference Optimization algorithm enables straightforward fine-tuning of language models on objectives other than supervised imitation, using a preference ranking over possible model responses. We show that learning from preference rankings generated by either automated criterion significantly improves the factuality of Llama-2 on held-out topics (percent of generated claims that are correct) compared with existing RLHF procedures or decoding strategies targeted at factuality, showing over 50% and 20--30% error reduction for biographies and medical questions respectively. | Fine-tuning Language Models for Factuality | [
"Katherine Tian",
"Eric Mitchell",
"Huaxiu Yao",
"Christopher Manning",
"Chelsea Finn"
] | Workshop/Instruction | 2023 | 2311.08401 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jbNjgmE0OP | @inproceedings{
asai2023selfrag,
title={Self-{RAG}: Self-reflective Retrieval Augmented Generation},
author={Akari Asai and Zeqiu Wu and Yizhong Wang and Avirup Sil and Hannaneh Hajishirzi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=jbNjgmE0OP}
} | Scaling up language models (LMs) or instruction tuning has shown limited effects on improving factuality of LM outputs. Retrieval-Augmented Generation (RAG), an ad hoc approach that augments Language Models (LMs) with retrieval, decreases hallucination issues of large LMs. However, indiscriminately retrieving and incorporating a fixed number of retrieved passages, regardless of whether retrieval is necessary, or passages are relevant, diminishes instruction-following LM versatility or can lead to unhelpful response generation.
In this work, we introduce a new framework called **Self-Reflective Retrieval-Augmented Generation (Self-RAG)** that enhances an LM's quality and factuality through retrieval and self-reflection. Our framework trains a single arbitrary LM to learn to adaptively retrieve passages on-demand, and generate and reflect on retrieved passages and its own generations using special tokens, called *reflection* tokens, on diverse instruction-tuning data with interleaving retrieved passages and reflection tokens. Generating reflection tokens makes the LM controllable during the inference phase, enabling it to tailor its behavior to diverse task requirements. Experiments show that Self-RAG (7B and 13B parameters) significantly outperforms state-of-the-art pre-trained and instruction-following LLMs and retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA, fact verification and reasoning tasks, and it shows significant gains in factuality scores and citation accuracy for long-form generations relative to these models. | Self-RAG: Self-reflective Retrieval Augmented Generation | [
"Akari Asai",
"Zeqiu Wu",
"Yizhong Wang",
"Avirup Sil",
"Hannaneh Hajishirzi"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=fhSTeAAVb6 | @inproceedings{
min2023factscore,
title={{FA}ctScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation},
author={Sewon Min and Kalpesh Krishna and Xinxi Lyu and Mike Lewis and Wen-tau Yih and Pang Wei Koh and Mohit Iyyer and Luke Zettlemoyer and Hannaneh Hajishirzi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=fhSTeAAVb6}
} | Evaluating the factuality of long-form text generated by large language models (LMs) is non-trivial because (1) generations often contain a mixture of supported and unsupported pieces of information, making binary judgments of quality inadequate, and (2) human evaluation is time-consuming and costly. In this paper, we introduce FActScore (Factual precision in Atomicity Score), a new evaluation that breaks a generation into a series of atomic facts and computes the percentage of atomic facts supported by a reliable knowledge source. We conduct an extensive human evaluation to obtain FActScores of people biographies generated by several state-of-the-art commercial LMs -- InstructGPT, ChatGPT, and the retrieval-augmented PerplexityAI -- and report new analysis demonstrating the need for such a fine-grained score (e.g., ChatGPT only achieves 58%). Since human evaluation is costly, we also introduce an automated model that estimates FActScore, using retrieval and a strong language model, with less than a 2% error rate. Finally, we use this automated metric to evaluate 6,500 generations from a new set of 13 recent LMs that would have cost $26K if evaluated by humans, with various findings: GPT-4 and ChatGPT are more factual than public models, and Vicuna and Alpaca are some of the best public models. | FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation | [
"Sewon Min",
"Kalpesh Krishna",
"Xinxi Lyu",
"Mike Lewis",
"Wen-tau Yih",
"Pang Wei Koh",
"Mohit Iyyer",
"Luke Zettlemoyer",
"Hannaneh Hajishirzi"
] | Workshop/Instruction | 2023 | 2305.14251 | [
"https://github.com/shmsw25/factscore"
] | https://huggingface.co/papers/2305.14251 | 1 | 1 | 0 | 9 | 1 | [
"kalpeshk2011/instruct-llama-7b-wdiff"
] | [] | [] |
null | https://openreview.net/forum?id=fT3gT2QKOb | @inproceedings{
sakhinana2023hierarchical,
title={Hierarchical Network Fusion for Multi-Modal Electron Micrograph Representation Learning with Foundational Large Language Models},
author={Sagar Sakhinana and Sannidhi Geethan and Venkataramana Runkana},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=fT3gT2QKOb}
} | Characterizing materials with electron micrographs is a crucial task in fields such as semiconductors and quantum materials. The complex hierarchical structure of micrographs often poses challenges for traditional classification methods. In this study, we propose an innovative backbone architecture for analyzing electron micrographs. We create multi-modal representations of the micrographs by tokenizing them into patch sequences and, additionally, representing them as vision graphs, commonly referred to as patch attributed graphs. We introduce the Hierarchical Network Fusion (HNF), a multi-layered network structure architecture that facilitates information exchange between the multi-modal representations and knowledge integration across different patch resolutions. Furthermore, we leverage large language models (LLMs) to generate detailed technical descriptions of nanomaterials as auxiliary information to assist in the downstream task. We utilize a cross-modal attention mechanism for knowledge fusion across cross-domain representations (both image-based and linguistic insights) to predict the nanomaterial category. This multi-faceted approach promises a more comprehensive and accurate representation and classification of micrographs for nanomaterial identification. Our framework outperforms traditional methods, overcoming challenges posed by distributional shifts, and facilitating high-throughput screening. | Hierarchical Network Fusion for Multi-Modal Electron Micrograph Representation Learning with Foundational Large Language Models | [
"Sagar Sakhinana",
"Sannidhi Geethan",
"Venkataramana Runkana"
] | Workshop/Instruction | 2023 | 2408.13661 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=fHtLiX36r6 | @inproceedings{
sharma2023exploring,
title={Exploring and Improving the Spatial Reasoning Abilities of Large Language Models},
author={Manasi Sharma},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=fHtLiX36r6}
} | Large Language Models (LLMs) represent formidable tools for sequence modeling, boasting an innate capacity for general pattern recognition. Nevertheless, their broader spatial reasoning capabilities remain insufficiently explored. In this paper, we investigate the zero-shot performance of LLMs when confronted with a limited dataset comprising 3D robotic trajectory data and associated tasks, such as directional and motion labeling. Additionally, we introduce a novel prefix-based prompting mechanism, which yields a 30\% improvement on the 3D trajectory data and an increase of up to 16\% on SpartQA tasks when contrasted with the conventional vanilla prompt baseline (with gains over Chain-of-Thought prompting as well). The experimentation with 3D trajectory data offers an intriguing glimpse into the manner in which LLMs engage with numerical and spatial information, thus laying a solid foundation for the identification of target areas for future enhancements. | Exploring and Improving the Spatial Reasoning Abilities of Large Language Models | [
"Manasi Sharma"
] | Workshop/Instruction | 2023 | 2312.01054 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=eBu11K7THc | @inproceedings{
lee2023investigating,
title={Investigating the Effects of Zero-Shot Chain-of-Thought on Empathetic Dialogue Generation},
author={Young-Jun Lee and Dokyong Lee and Jihui Im and Joo Won Sung and Ho-Jin Choi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=eBu11K7THc}
} | This study investigates the effectiveness of the Zero-shot Chain-of-Thought (CoT) approach, specifically the "Let's think step by step.'', in boosting the empathetic reasoning capabilities of Large Language Models (LLMs). Our experiments, however, reveal that Zero-shot CoT does not sufficiently enhance the empathetic reasoning of LLMs as compared to Zero-shot In-Context Learning (ICL), according to a variety of performance metrics. Importantly, we discovered that the perspective-taking prompting method, or ``\textit{Let's put {speaker} into {interlocutor}'s shoes.}'', surpasses the performance of Zero-shot CoT, especially in terms of emotion and intent accuracy, with an improvement of 21\% and 7\% respectively. The source code will be released after publication. | Investigating the Effects of Zero-Shot Chain-of-Thought on Empathetic Dialogue Generation | [
"Young-Jun Lee",
"Dokyong Lee",
"Jihui Im",
"Joo Won Sung",
"Ho-Jin Choi"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=czQZ4aLUb6 | @inproceedings{
zhou2023analyzing,
title={Analyzing and Mitigating Object Hallucination in Large Vision-Language Models},
author={Yiyang Zhou and Chenhang Cui and Jaehong Yoon and Linjun Zhang and Zhun Deng and Chelsea Finn and Mohit Bansal and Huaxiu Yao},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=czQZ4aLUb6}
} | Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23\% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | [
"Yiyang Zhou",
"Chenhang Cui",
"Jaehong Yoon",
"Linjun Zhang",
"Zhun Deng",
"Chelsea Finn",
"Mohit Bansal",
"Huaxiu Yao"
] | Workshop/Instruction | 2023 | 2310.00754 | [
"https://github.com/yiyangzhou/lure"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=byNSabn9cA | @inproceedings{
lei2023chain,
title={Chain of Natural Language Inference for Reducing Large Language Model Hallucinations},
author={Deren Lei and Yaxi Li and Mengya Hu and Mingyu Wang and Xi Yun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=byNSabn9cA}
} | Large language models (LLMs) can generate fluent natural language texts when given relevant documents as background context. This ability has attracted considerable interest in developing industry applications of LLMs. However, LLMs are prone to generate hallucinations that are not supported by the provided sources. In this paper, we propose a hierarchical framework to detect and mitigate such ungrounded hallucination. Our framework uses Chain of Natural Language Inference (CoNLI) for hallucination detection and hallucination reduction via post-editing. Our approach achieves state-of-the-art performance on hallucination detection and enhances text quality through rewrite, using LLMs without any fine-tuning or domain-specific prompt engineering. We show that this simple plug-and-play framework can serve as an effective choice for hallucination detection and reduction, achieving competitive performance across various contexts. | Chain of Natural Language Inference for Reducing Large Language Model Hallucinations | [
"Deren Lei",
"Yaxi Li",
"Mengya Hu",
"Mingyu Wang",
"Xi Yun"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=bH64KCBzqS | @inproceedings{
zhang2023chainofthought,
title={Chain-of-Thought Reasoning is a Policy Improvement Operator},
author={Hugh Zhang and David Parkes},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=bH64KCBzqS}
} | Large language models have astounded the world with fascinating new capabilities. However, they currently lack the ability to teach themselves new skills, relying instead on large amounts of human-generated training data. We introduce SECToR (Self-Education via Chain-of-Thought Reasoning), a proof-of-concept demonstration that language models can teach themselves new skills using chain-of-thought reasoning. During the self-learning loop, SECToR asks models to solve addition problems using chain-of-thought reasoning before training the next version of the model to solve those same problems directly without using such reasoning. This process often results in an improved model which can, when again augmented with chain-of-thought reasoning, solve even harder problems than the original model, allowing the self-learning loop to continue. Language models trained via SECToR autonomously learn to add up to 29-digit numbers without access to any ground truth examples beyond an initial supervised fine-tuning phase consisting only of numbers with 6 or fewer digits. Our central hypothesis is that chain-of-thought reasoning can act as a policy improvement operator, similarly to how Monte-Carlo Tree Search is used in AlphaZero \citep{silver2017mastering}. We hope that this research can lead to new directions in which language models can learn to teach themselves without the need for human demonstrations. | Chain-of-Thought Reasoning is a Policy Improvement Operator | [
"Hugh Zhang",
"David Parkes"
] | Workshop/Instruction | 2023 | 2309.08589 | [
""
] | https://huggingface.co/papers/2309.08589 | 0 | 1 | 0 | 2 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=b519y1V7fX | @inproceedings{
wang2023beyond,
title={Beyond Reverse {KL}: Generalizing Direct Preference Optimization with Diverse Divergence Constraints},
author={Chaoqi Wang and Yibo Jiang and Chenghao Yang and Han Liu and Yuxin Chen},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=b519y1V7fX}
} | The increasing capabilities of large language models (LLMs) raise opportunities for artificial general intelligence but concurrently amplify safety concerns, such as potential misuse of AI systems, necessitating effective AI alignment. Reinforcement Learning from Human Feedback (RLHF) has emerged as a promising pathway towards AI alignment but brings forth challenges due to its complexity and dependence on a separate reward model. Direct Preference Optimization (DPO) has been proposed as an alternative; and it remains equivalent to RLHF under the reverse KL regularization constraint. This paper presents $f$-DPO, a generalized approach to DPO by incorporating diverse divergence constraints. We show that under certain $f$-divergences, including Jensen-Shannon divergence, forward KL divergences and $\alpha$-divergences, the complex relationship between the reward and optimal policy can also be simplified by addressing the Karush–Kuhn–Tucker conditions. This eliminates the need for estimating the normalizing constant in the Bradley-Terry model and enables a tractable mapping between the reward function and the optimal policy. Our approach optimizes LLMs to align with human preferences in a more efficient and supervised manner under a broad set of divergence constraints. Empirically, adopting these divergences ensures a balance between alignment performance and generation diversity. Importantly, our $f$-DPO outperforms PPO-based methods in divergence efficiency, and divergence constraints directly influence expected calibration error (ECE). | Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints | [
"Chaoqi Wang",
"Yibo Jiang",
"Chenghao Yang",
"Han Liu",
"Yuxin Chen"
] | Workshop/Instruction | 2023 | 2309.16240 | [
""
] | https://huggingface.co/papers/2309.16240 | 1 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=apEdj9baZx | @inproceedings{
sun2023interactive,
title={Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks},
author={Lingfeng Sun and Devesh Jha and Chiori Hori and Siddarth Jain and Radu Corcodel and Xinghao Zhu and Masayoshi Tomizuka and Diego Romeres},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=apEdj9baZx}
} | Designing robotic agents to perform open vocabulary tasks has been the long-standing goal in robotics and AI. Recently, Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks. However, planning for these tasks in the presence of uncertainties is challenging as it requires "chain-of-thought" reasoning, aggregating information from the environment, updating state estimates, and generating actions based on the updated state estimates. In this paper, we present an interactive planning technique for partially observable tasks using LLMs. In the proposed method, an LLM is used to collect missing information from the environment using a robot and infer the state of the underlying problem from collected observations while guiding the robot to perform the required actions. We also use a fine-tuned Llama 2 model via self-instruct and compare its performance against a pre-trained LLM like GPT-4. Results are demonstrated on several tasks in simulation as well as real-world environments. | Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks | [
"Lingfeng Sun",
"Devesh K. Jha",
"Chiori Hori",
"Siddarth Jain",
"Radu Corcodel",
"Xinghao Zhu",
"Masayoshi Tomizuka",
"Diego Romeres"
] | Workshop/Instruction | 2023 | 2312.06876 | [
""
] | https://huggingface.co/papers/2312.06876 | 2 | 1 | 0 | 8 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=YA56eOURrG | @inproceedings{
zhang2023tell,
title={Tell Your Model Where to Attend: Post-hoc Attention Steering for {LLM}s},
author={Qingru Zhang and Chandan Singh and Liyuan Liu and Xiaodong Liu and Bin Yu and Jianfeng Gao and Tuo Zhao},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=YA56eOURrG}
} | In human-written articles, we often leverage the subtleties of text style, such as bold and italics, to guide the attention of readers. These textual emphases are vital for the readers to grasp the conveyed information. When interacting with large language models (LLMs), we have a similar need -- steering the model to pay closer attention to user-specified information, e.g., an instruction. Existing methods, however, are constrained to process plain text and do not support such a mechanism. This motivates us to introduce PASTA -- Post-hoc Attention STeering Approach, a method that allows LLMs to read text with user-specified emphasis marks. To this end, PASTA identifies a small subset of attention heads and applies precise attention reweighting on them, directing the model attention to user-specified parts. Like prompting, PASTA is applied at inference time and does not require changing any model parameters. Experiments demonstrate that PASTA can substantially enhance an LLM's ability to follow user instructions or integrate new knowledge from user inputs, leading to a significant performance improvement on a variety of tasks, e.g., an average accuracy improvement of 22% for LLAMA-7B. Our code is publicly available at https://github.com/QingruZhang/PASTA . | Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs | [
"Qingru Zhang",
"Chandan Singh",
"Liyuan Liu",
"Xiaodong Liu",
"Bin Yu",
"Jianfeng Gao",
"Tuo Zhao"
] | Workshop/Instruction | 2023 | 2311.02262 | [
"https://github.com/qingruzhang/pasta"
] | https://huggingface.co/papers/2311.02262 | 6 | 10 | 2 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=V09d7AMh15 | @inproceedings{
jiang2023identifying,
title={Identifying and Mitigating Vulnerabilities in {LLM}-Integrated Applications},
author={Fengqing Jiang and Zhangchen Xu and Luyao Niu and Boxin Wang and Jinyuan Jia and Bo Li and Radha Poovendran},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=V09d7AMh15}
} | The remarkable instruction following capabilities of large language models (LLMs) allow them to be increasingly deployed as the service backend for LLM-integrated applications such as code completion and AI-powered search. Compared with the traditional usage of LLMs where users directly send queries to an LLM, LLM-integrated applications serve as middleware to refine users’ queries with domain-specific knowledge to better inform LLMs and enhance the responses. Despite numerous opportunities and benefits, blindly following instructions given to LLMs exposes LLM-integrated applications to new attack surfaces. Understanding, minimizing, and eliminating the emerging attack surfaces is a new area of research. In this work, we consider a setup where the user and LLM interact via an LLM-integrated application in the middle. We focus on the communication rounds that begin with user’s queries and end with LLM-integrated application returning responses to the queries, powered by LLMs at the service backend. For this query-response protocol, we identify potential high-risk vulnerabilities that can originate from the malicious application developer or from an outsider threat initiator that is able to control the database access, manipulate and poison data that are high-risk for the user. Successful exploits of the identified vulnerabilities result in the users receiving responses tailored to the intent of a threat initiator (e.g., biased preferences for certain products). We assess such threats against LLM-integrated applications empowered by OpenAI GPT-3.5 and GPT-4. Our empirical results show that the threats can effectively bypass the restrictions and moderation policies of OpenAI, resulting in users receiving responses that contain bias, toxic content, privacy risk, and disinformation. To mitigate those threats, we identify and define four key properties, namely integrity, source identification, attack detectability, and utility preservation, that need to be satisfied by a safe LLM-integrated application. Based on these properties, we develop a lightweight, threat-agnostic defense that mitigates both insider and outsider threats. Our evaluations demonstrate the efficacy of our defense. | Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications | [
"Fengqing Jiang",
"Zhangchen Xu",
"Luyao Niu",
"Boxin Wang",
"Jinyuan Jia",
"Bo Li",
"Radha Poovendran"
] | Workshop/Instruction | 2023 | 2311.16153 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=UWymGURI75 | @inproceedings{
toyer2023tensor,
title={Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game},
author={Sam Toyer and Olivia Watkins and Ethan Mendes and Justin Svegliato and Luke Bailey and Tiffany Wang and Isaac Ong and Karim Elmaaroufi and Pieter Abbeel and Trevor Darrell and Alan Ritter and Stuart Russell},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=UWymGURI75}
} | While Large Language Models (LLMs) are increasingly being used in real-world applications, they remain vulnerable to prompt injection attacks: malicious third party prompts that subvert the intent of the system designer. To help researchers study this problem, we present a dataset of over 126,000 prompt injection attacks and 46,000 prompt-based "defenses" against prompt injection, all created by players of an online game called Tensor Trust. To the best of our knowledge, this is currently the largest dataset of human-generated adversarial examples for instruction-following LLMs. The attacks in our dataset have a lot of easily interpretable structure, and shed light on the weaknesses of LLMs. We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as prompt extraction and prompt hijacking. Our benchmark results show that many models are vulnerable to the attack strategies in the Tensor Trust dataset. Furthermore, we show that some attack strategies from the dataset generalize to deployed LLM-based applications, even though they have a very different set of constraints to the game. We release all data and source code at https://tensortrust.ai/paper. | Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game | [
"Sam Toyer",
"Olivia Watkins",
"Ethan Mendes",
"Justin Svegliato",
"Luke Bailey",
"Tiffany Wang",
"Isaac Ong",
"Karim Elmaaroufi",
"Pieter Abbeel",
"Trevor Darrell",
"Alan Ritter",
"Stuart Russell"
] | Workshop/Instruction | 2023 | 2311.01011 | [
""
] | https://huggingface.co/papers/2311.01011 | 1 | 0 | 0 | 12 | 1 | [] | [
"qxcv/tensor-trust"
] | [] |
null | https://openreview.net/forum?id=Rye1neGGUd | @inproceedings{
ostapenko2023a,
title={A Case Study of Instruction Tuning with Mixture of Parameter-Efficient Experts},
author={Oleksiy Ostapenko and Lucas Caccia and Zhan Su and Nicolas Le Roux and Laurent Charlin and Alessandro Sordoni},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=Rye1neGGUd}
} | We study the applicability of mixture of parameter-efficient experts (MoPEs) for instruction-tuning large decoder-only language models. Recent literature indicates that MoPEs might enhance performance in specific multi-task instruction-following datasets. In this paper, we extend such previous results and study the applicability of MoPEs in settings previously overlooked: a) with open-domain instruction-following datasets; b) with recent decoder-only models; and c) with downstream out-of-distribution test sets. We build on top of LLaMA1-13B/-7B and LLaMA2-13B. We study different variants of learned routing, namely per-example routing ([PE]), and a more expensive per-token ([PT]) routing. Overall, we are unable to substantiate strong performance gains observed in related studies in our setting. We observe occasional enhancements of LLAMA2 fine-tuned on the Open Platypus dataset in 0-shot SNI evaluation and TruthfulQA evaluation after fine-tuning on a subset of Flan. We shed some light on the inner workings of MoPEs by comparing different routing strategies. We find that [PE] routing tends to collapse at downstream evaluation time, reducing the importance of the router's application.
We plan to publicly release our code. | A Case Study of Instruction Tuning with Mixture of Parameter-Efficient Experts | [
"Oleksiy Ostapenko",
"Lucas Caccia",
"Zhan Su",
"Nicolas Le Roux",
"Laurent Charlin",
"Alessandro Sordoni"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=RJyfNSoyDC | @inproceedings{
zhai2023investigating,
title={Investigating the Catastrophic Forgetting in Multimodal Large Language Models},
author={Yuexiang Zhai and Shengbang Tong and Xiao Li and Mu Cai and Qing Qu and Yong Jae Lee and Yi Ma},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=RJyfNSoyDC}
} | Following the success of GPT-4, there has been a surge in interest in multimodal large language model (MLLM) research. This line of research focuses on developing general-purpose LLMs through fine-tuning pre-trained LLMs and vision models. However, catastrophic forgetting, a notorious phenomenon where the fine-tuned model fails to retain similar performance compared to the pre-trained model, still remains an inherent problem in MLLMs. In this paper, we introduce EMT (Evaluating MulTimodality), a framework for evaluating catastrophic forgetting in MLLMs by treating each MLLM as an image classifier. We first apply EMT to evaluate several open-source fine-tuned MLLMs and we discover that almost all evaluated MLLMs fail to retain the same performance levels as their vision encoders on standard image classification tasks. Moreover, we continue fine-tuning LLaVA, an MLLM, and utilize EMT to assess performance throughout the fine-tuning. Interestingly, our results suggest that early-stage fine-tuning on an image dataset improves performance across other image datasets, by enhancing the alignment of text and visual features. However, as fine-tuning proceeds, the MLLMs begin to hallucinate, resulting in a significant loss of generalizability, even when the image encoder remains frozen. Our results suggest that MLLMs have yet to demonstrate performance on par with their vision models on standard image classification tasks and the current MLLM fine-tuning procedure still has room for improvement. | Investigating the Catastrophic Forgetting in Multimodal Large Language Models | [
"Yuexiang Zhai",
"Shengbang Tong",
"Xiao Li",
"Mu Cai",
"Qing Qu",
"Yong Jae Lee",
"Yi Ma"
] | Workshop/Instruction | 2023 | 2309.10313 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=QxtL4Q1enz | @inproceedings{
jha2023limit,
title={{LIMIT}: Less Is More for Instruction Tuning Across Evaluation Paradigms},
author={Aditi Jha and Sam Havens and Jeremy Dohmann and Alexander Trott and Jacob Portes},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=QxtL4Q1enz}
} | Large Language Models are traditionally finetuned on large instruction datasets. However, recent studies suggest that small, high-quality datasets can suffice for general-purpose instruction following. This lack of consensus surrounding finetuning best practices is in part due to rapidly diverging approaches to LLM evaluation. In this study, we ask whether a small number of diverse finetuning samples can improve performance on both traditional perplexity-based NLP benchmarks, and on open-ended, model-based evaluation. We finetune open-source MPT-7B and MPT-30B models on finetuning datasets of various sizes ranging from 1k to 60k samples. We find that subsets of 1k-6k instruction finetuning samples are sufficient to achieve good performance on both (1) traditional NLP benchmarks and (2) model-based evaluation. Finally, we show that mixing textbook-style and open-ended QA finetuning datasets optimizes performance on both evaluation paradigms. | LIMIT: Less Is More for Instruction Tuning Across Evaluation Paradigms | [
"Aditi Jha",
"Sam Havens",
"Jeremy Dohmann",
"Alexander Trott",
"Jacob Portes"
] | Workshop/Instruction | 2023 | 2311.13133 | [
""
] | https://huggingface.co/papers/2311.13133 | 1 | 0 | 0 | 5 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=QkdRqpClab | @inproceedings{
pan2023lets,
title={Let's Reinforce Step by Step},
author={Sarah Pan and Vladislav Lialin and Sherin Muckatira and Anna Rumshisky},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=QkdRqpClab}
} | While recent advances have boosted LM proficiency in linguistic benchmarks, LMs consistently struggle to reason correctly on complex tasks like mathematics. We turn to Reinforcement Learning from Human Feedback (RLHF) as a method with which to shape model reasoning processes. In particular, we explore two reward schemes, outcome-supervised reward models (ORMs) and process-supervised reward models (PRMs), to optimize for logical reasoning. Our results show that the fine-grained reward provided by PRM-based methods enhances accuracy on simple mathematical reasoning (GSM8K) while, unexpectedly, reducing performance in complex tasks (MATH). Furthermore, we show the critical role reward aggregation functions play in model performance. Providing promising avenues for future research, our study underscores the need for further exploration into fine-grained reward modeling for more reliable language models. | Let's Reinforce Step by Step | [
"Sarah Pan",
"Vladislav Lialin",
"Sherin Muckatira",
"Anna Rumshisky"
] | Workshop/Instruction | 2023 | 2311.05821 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=QUwcQFgA5a | @inproceedings{
lee2023dialogcc,
title={Dialog{CC}: An Automated Pipeline for Creating High-Quality Multi-modal Dialogue Datasets},
author={Young-Jun Lee and Byungsoo Ko and Han-Gyu Kim and Jonghwan Hyeon and Ho-Jin Choi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=QUwcQFgA5a}
} | As sharing images in instant messaging is a crucial part of communication, there has been active research on learning image-text multi-modal dialogue models.
However, training a well-generalized multi-modal dialogue model remains challenging due to the low quality and limited diversity of images per dialogue in existing multi-modal dialogue datasets.
In this paper, we propose an automated pipeline to construct a multi-modal dialogue dataset, ensuring both dialogue quality and image diversity without requiring any human effort.
In our pipeline, to guarantee the coherence between images and dialogue, we prompt GPT-4 to infer potential image-sharing moments - specifically, the utterance, speaker, rationale, and image description.
Furthermore, we leverage CLIP similarity to maintain consistency between the multiple images aligned to each utterance.
Through this pipeline, we introduce DialogCC, a high-quality and diverse multi-modal dialogue dataset that surpasses existing datasets in terms of quality and diversity in human evaluation.
Our comprehensive experiments highlight that when multi-modal dialogue models are trained using our dataset, their generalization performance on unseen dialogue datasets is significantly enhanced. We will release the source code and dataset following publication. | DialogCC: An Automated Pipeline for Creating High-Quality Multi-modal Dialogue Datasets | [
"Young-Jun Lee",
"Byungsoo Ko",
"Han-Gyu Kim",
"Jonghwan Hyeon",
"Ho-Jin Choi"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=OQHckRYbpT | @inproceedings{
fabian2023knowledge,
title={Knowledge Augmented Instruction Tuning for Zero-shot Animal Species Recognition},
author={Zalan Fabian and Zhongqi Miao and Chunyuan Li and Yuanhan Zhang and Ziwei Liu and Andres Hernandez and Pablo Arbelaez and Andr{\'e}s Link and Andr{\'e}s Montes-Rojas and Rafael Escucha and Laura Siabatto and Rahul Dodhia and Juan Lavista Ferres},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=OQHckRYbpT}
} | Due to deteriorating environmental conditions and increasing human activity, conservation efforts directed towards wildlife are crucial. Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe. Supervised learning techniques have been successfully deployed to analyze such imagery; however, training such techniques requires annotations from experts. Reducing the reliance on costly labelled data therefore has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor. In this work, we propose a novel zero-shot species classification framework that leverages multimodal foundation models. In particular, we instruction-tune vision-language models to generate detailed visual descriptions of camera trap images using similar terminology to experts. Then, we match the generated caption to an external knowledge base of descriptions in order to determine the species in a zero-shot manner. We investigate techniques to build instruction tuning datasets for detailed animal description generation and propose a novel knowledge augmentation technique to enhance caption quality. We demonstrate the performance of our proposed method on a new camera trap dataset collected in the Magdalena Medio region of Colombia. | Knowledge Augmented Instruction Tuning for Zero-shot Animal Species Recognition | [
"Zalan Fabian",
"Zhongqi Miao",
"Chunyuan Li",
"Yuanhan Zhang",
"Ziwei Liu",
"Andres Hernandez",
"Pablo Arbelaez",
"Andrés Link",
"Andrés Montes-Rojas",
"Rafael Escucha",
"Laura Siabatto",
"Rahul Dodhia",
"Juan Lavista Ferres"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
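The zero-shot matching step described in the abstract above (generate a detailed caption, then match it against an external knowledge base of species descriptions) can be sketched as an embedding-similarity lookup. The encoder choice, the toy descriptions, and the overall structure below are assumptions for illustration, not the paper's actual pipeline.

```python
# Sketch: match a VLM-generated caption to the closest species description.
from sentence_transformers import SentenceTransformer, util

# Placeholder knowledge base entries; real descriptions would be expert-written.
species_descriptions = {
    "ocelot": "Medium-sized spotted wild cat with rosette markings on tawny fur.",
    "collared peccary": "Pig-like mammal with coarse grey-black fur and a pale collar.",
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")
db_names = list(species_descriptions)
db_embeddings = encoder.encode(
    [species_descriptions[name] for name in db_names], convert_to_tensor=True
)


def classify_caption(caption: str) -> str:
    """Return the species whose description is most similar to the caption."""
    query = encoder.encode(caption, convert_to_tensor=True)
    scores = util.cos_sim(query, db_embeddings)[0]
    return db_names[int(scores.argmax())]
```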
null | https://openreview.net/forum?id=NiQYQEPUsA | @inproceedings{
coste2023reward,
title={Reward Model Ensembles Help Mitigate Overoptimization},
author={Thomas Coste and Usman Anwar and Robert Kirk and David Krueger},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=NiQYQEPUsA}
} | Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the "true" reward, these learned reward models are susceptible to overoptimization. Gao et al. studied this phenomenon in a synthetic human feedback setup with a significantly larger "gold" reward model acting as the true reward (instead of humans) and showed that overoptimization remains a persistent problem regardless of the size of the proxy reward model and training data used. Using a similar setup, we conduct a systematic study to evaluate the efficacy of using ensemble-based conservative optimization objectives, specifically worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for mitigating reward model overoptimization when using two optimization methods: (a) best-of-n sampling (BoN) (b) proximal policy optimization (PPO). We additionally extend the setup of Gao et al. to include 25% label noise to better mirror real-world conditions. Both with and without label noise, we find that conservative optimization practically eliminates overoptimization and improves performance by up to 70% for BoN sampling. For PPO, ensemble-based conservative optimization always reduces overoptimization and outperforms single reward model optimization. Moreover, combining it with a small KL penalty successfully prevents overoptimization at no performance cost. Overall, our results demonstrate that ensemble-based conservative optimization can effectively counter overoptimization. | Reward Model Ensembles Help Mitigate Overoptimization | [
"Thomas Coste",
"Usman Anwar",
"Robert Kirk",
"David Krueger"
] | Workshop/Instruction | 2023 | 2310.02743 | [
"https://github.com/tlc4418/llm_optimization"
] | https://huggingface.co/papers/2310.02743 | 2 | 1 | 0 | 4 | 1 | [
"tlc4418/pythia_1.4b_sft_policy"
] | [
"tlc4418/1.4b-policy_preference_data_gold_labelled",
"tlc4418/gold_labelled_gens",
"SJTUwanyi/rm_pref"
] | [] |
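The two conservative objectives named in the abstract above, worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), reduce to simple aggregations over per-member reward scores. A minimal sketch follows, assuming rewards arrive as a `[batch, ensemble]` tensor; the penalty coefficient `lambda_` is an illustrative value, not the paper's tuned setting.

```python
# Sketch of ensemble-based conservative reward aggregation.
import torch


def worst_case_optimization(rewards: torch.Tensor) -> torch.Tensor:
    """WCO: optimize against the most pessimistic ensemble member."""
    return rewards.min(dim=-1).values


def uncertainty_weighted_optimization(
    rewards: torch.Tensor, lambda_: float = 0.5
) -> torch.Tensor:
    """UWO: mean reward penalized by intra-ensemble disagreement (variance)."""
    return rewards.mean(dim=-1) - lambda_ * rewards.var(dim=-1)


if __name__ == "__main__":
    ensemble_scores = torch.tensor([[1.2, 0.9, 1.5], [0.2, 2.0, -0.1]])
    print(worst_case_optimization(ensemble_scores))
    print(uncertainty_weighted_optimization(ensemble_scores))
```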
null | https://openreview.net/forum?id=Md6RUrGz67 | @inproceedings{
srinivasan2023nexusraven,
title={NexusRaven: a commercially-permissive Language Model for function calling},
author={Venkat Krishna Srinivasan and Zhen Dong and Banghua Zhu and Brian Yu and Hanzi Mao and Damon Mosk-Aoyama and Kurt Keutzer and Jiantao Jiao and Jian Zhang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=Md6RUrGz67}
} | The rise of open-source, commercially permissive large language models (LLMs) is revolutionizing generative AI, presenting organizations with enhanced control, minimized data risks, and cost benefits compared to proprietary models. However, in the field of tool use and function-calling LLMs, many open-source models, such as Gorilla and ToolLLAMA, are dependent on proprietary LLMs like GPT-4 for high-quality training data, which often faces legal restrictions for competitive commercial applications. In this paper, we introduce NexusRaven-13B, an open-source LLM designed for function calls. Originating from the CodeLLAMA-13B lineage, NexusRaven-13B employs a unique data curation via multi-step refinement, ensuring high-quality training data without relying on GPT-4 distillation. NexusRaven-13B matches GPT-3.5 in zero-shot function-calling accuracy. When combined with our second core technique, demonstration retrieval augmentation, its performance significantly surpasses GPT-4. The code, model, and demo will be available after the review process. | NexusRaven: a commercially-permissive Language Model for function calling | [
"Venkat Krishna Srinivasan",
"Zhen Dong",
"Banghua Zhu",
"Brian Yu",
"Hanzi Mao",
"Damon Mosk-Aoyama",
"Kurt Keutzer",
"Jiantao Jiao",
"Jian Zhang"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=LywifFNXV5 | @inproceedings{
li2023how,
title={How Long Can Context Length of Open-Source {LLM}s truly Promise?},
author={Dacheng Li and Rulin Shao and Anze Xie and Ying Sheng and Lianmin Zheng and Joseph Gonzalez and Ion Stoica and Xuezhe Ma and Hao Zhang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=LywifFNXV5}
} | Large language models (LLMs) with long-context instruction-following ability have unlocked new potential, such as supporting long interactive chat sessions. In this paper, we introduce a test suite, LongEval, which enables us to evaluate the long-range retrieval ability of LLMs at various context lengths. We use LongEval to evaluate open-sourced LLMs, and surprisingly, we find many of them fail to achieve their promised context length. In addition, we present a recipe to fine-tune a long-context chatbot based on LLaMA models, and introduce LongChat models that support conversations of up to 16,384 tokens. We have released our code at https://github.com/DachengLi1/LongChat. | How Long Can Context Length of Open-Source LLMs truly Promise? | [
"Dacheng Li",
"Rulin Shao",
"Anze Xie",
"Ying Sheng",
"Lianmin Zheng",
"Joseph Gonzalez",
"Ion Stoica",
"Xuezhe Ma",
"Hao Zhang"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=LqISMaceov | @inproceedings{
chang2023learning,
title={Learning to Generate Better Than Your {LLM}},
author={Jonathan Chang and Kiant{\'e} Brantley and Rajkumar Ramamurthy and Dipendra Misra and Wen Sun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=LqISMaceov}
} | Reinforcement learning (RL) has emerged as a powerful paradigm for fine-tuning Large Language Models (LLMs) for text generation. In particular, recent LLMs such as ChatGPT and GPT4 can engage in fluent conversations with users after finetuning with RL. Capitalizing on key properties of text generation, we seek to investigate RL algorithms beyond general purpose algorithms like Proximal Policy Optimization (PPO). In particular, we extend RL algorithms to allow them to interact with a dynamic black-box guide LLM and propose RL with guided feedback (RLGF), a suite of RL algorithms for LLM fine-tuning. We provide two ways for the guide LLM to interact with the LLM to be optimized for maximizing rewards. The guide LLM can generate text which serves as additional starting states for the RL optimization procedure. The guide LLM can also be used to complete the partial sentences generated by the LLM that is being optimized, treating the guide LLM as an expert to imitate and surpass eventually. We experiment on the IMDB positive sentiment, CommonGen, and TL;DR summarization tasks. We show that our RL algorithms achieve higher performance than supervised learning (SL) and the RL baseline PPO, demonstrating the benefit of interaction with the guide LLM. On both CommonGen and TL;DR, we not only outperform our SL baselines but also improve upon PPO across a variety of metrics beyond the one we optimized for. Our code can be found at https://github.com/Cornell-RL/tril. | Learning to Generate Better Than Your LLM | [
"Jonathan Chang",
"Kianté Brantley",
"Rajkumar Ramamurthy",
"Dipendra Misra",
"Wen Sun"
] | Workshop/Instruction | 2023 | 2306.11816 | [
"https://github.com/cornell-rl/tril"
] | https://huggingface.co/papers/2306.11816 | 0 | 0 | 0 | 5 | 1 | [] | [] | [] |
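A conceptual sketch of the two guide-LLM interactions described in the abstract above: rolling out the policy from a starting state supplied by the guide, and letting the guide complete the policy's partial output so it can serve as an expert to imitate. Every callable here is a hypothetical placeholder, not the released TRIL implementation.

```python
# Hypothetical sketch of guide-LLM interaction for RL fine-tuning.
import random
from typing import Callable, Dict, List

RewardFn = Callable[[str, str], float]  # (prompt, completion) -> score


def rlgf_rollout(
    prompt: str,
    policy_generate: Callable[[str], str],
    guide_generate: Callable[[str], str],
    reward_fn: RewardFn,
    mix_prob: float = 0.5,
) -> List[Dict]:
    trajectories = []

    # (1) Guide-provided starting state: truncate a guide generation and let
    # the policy continue from it.
    if random.random() < mix_prob:
        prefix = guide_generate(prompt)[:64]
        completion = prefix + policy_generate(prompt + prefix)
    else:
        completion = policy_generate(prompt)
    trajectories.append({"text": completion, "reward": reward_fn(prompt, completion)})

    # (2) Expert completion: the guide finishes the policy's partial output,
    # providing a target the policy can learn to match or surpass.
    partial = completion[: len(completion) // 2]
    expert = partial + guide_generate(prompt + partial)
    trajectories.append({"text": expert, "reward": reward_fn(prompt, expert)})
    return trajectories
```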
null | https://openreview.net/forum?id=KLPLCXo4aD | @inproceedings{
li2023from,
title={From Classification to Generation: Insights into Crosslingual Retrieval Augmented {ICL}},
author={Xiaoqian Li and Ercong Nie and Sheng Liang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=KLPLCXo4aD}
} | The remarkable ability of Large Language Models (LLMs) to understand and follow instructions has sometimes been limited by their in-context learning (ICL) performance in low-resource languages. To address this, we introduce a novel approach that leverages cross-lingual retrieval-augmented in-context learning (CREA-ICL). By extracting semantically similar prompts from high-resource languages, we aim to bolster the zero-shot performance of multilingual pretrained language models (MPLMs) across diverse tasks. Though our approach yields steady improvements in classification tasks, it faces challenges in generation tasks, with Bangla serving as a key case study. Our evaluation offers insights into the performance dynamics of retrieval-augmented in-context learning across both classification and generation domains. | From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL | [
"Xiaoqian Li",
"Ercong Nie",
"Sheng Liang"
] | Workshop/Instruction | 2023 | 2311.06595 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
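The retrieval step described in the abstract above (select semantically similar high-resource-language examples and prepend them as in-context demonstrations) might look roughly like the sketch below; the multilingual encoder name and the prompt template are assumptions for illustration.

```python
# Sketch of cross-lingual retrieval-augmented in-context learning.
from typing import List, Tuple

from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")


def build_crosslingual_prompt(query: str, pool: List[Tuple[str, str]], k: int = 3) -> str:
    """`pool` holds (high_resource_text, label) pairs used as demonstrations."""
    pool_emb = encoder.encode([text for text, _ in pool], convert_to_tensor=True)
    query_emb = encoder.encode(query, convert_to_tensor=True)
    top = util.cos_sim(query_emb, pool_emb)[0].topk(min(k, len(pool))).indices.tolist()
    demos = "\n".join(f"Input: {pool[i][0]}\nLabel: {pool[i][1]}" for i in top)
    return f"{demos}\nInput: {query}\nLabel:"
```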
null | https://openreview.net/forum?id=KGBNAJCuPf | @inproceedings{
wang2023reward,
title={Reward Model Aggregation},
author={Zihao Wang and Chirag Nagpal and Alexander D'Amour and Victor Veitch and Sanmi Koyejo},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=KGBNAJCuPf}
} | Aligning language models requires guiding outputs towards desired properties using reward models. This paper tackles the challenge of combining multiple reward models for diverse objectives. We introduce methods for aggregating these rewards using logical operations. Experiments confirm our methods beat traditional aggregation techniques and underscore the significance of proper reference values. | Reward Model Aggregation | [
"Zihao Wang",
"Chirag Nagpal",
"Alexander D'Amour",
"Victor Veitch",
"Sanmi Koyejo"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
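One plausible way to realize a "logical AND" over several reward models with reference values, as discussed in the abstract above, is to sum log-sigmoid scores of each reward relative to its reference, so a response must satisfy every objective to score well. The exact functional form, temperature, and reference values below are assumptions, not necessarily the paper's formulation.

```python
# Illustrative soft logical-AND aggregation of multiple reward model scores.
import math
from typing import Sequence


def soft_and(rewards: Sequence[float], references: Sequence[float], temp: float = 1.0) -> float:
    total = 0.0
    for r, ref in zip(rewards, references):
        z = (r - ref) / temp
        # Numerically stable log-sigmoid(z): ~0 when r >> ref, strongly
        # negative when r << ref, so one failing objective dominates.
        total += min(z, 0.0) - math.log1p(math.exp(-abs(z)))
    return total


if __name__ == "__main__":
    print(soft_and(rewards=[2.0, 1.5], references=[0.0, 1.0]))
```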
null | https://openreview.net/forum?id=IqJ3CU3flr | @inproceedings{
kim2023distort,
title={Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions},
author={Taehyeon Kim and Joonkee Kim and Gihun Lee and Se-Young Yun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=IqJ3CU3flr}
} | While instruction-tuned language models have demonstrated impressive zero-shot generalization, these models often struggle to generate accurate responses when faced with instructions that fall outside their training set. This paper presents Instructive Decoding (ID), a simple yet effective approach that augments the efficacy of instruction-tuned models. Specifically, ID adjusts the logits for next-token prediction in a contrastive manner, utilizing predictions generated from a manipulated version of the original instruction, referred to as a noisy instruction. This noisy instruction aims to elicit responses that could diverge from the intended instruction yet remain plausible. We conduct experiments across a spectrum of such noisy instructions, ranging from those that insert semantic noise via random words to others like 'opposite' that elicit the deviated responses. Our approach achieves considerable performance gains across various instruction-tuned models and tasks without necessitating any additional parameter updates. Notably, utilizing 'opposite' as the noisy instruction in ID, which shows the maximum divergence from the original instruction, consistently produces the most significant performance gains across multiple models and tasks. | Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions | [
"Taehyeon Kim",
"Joonkee Kim",
"Gihun Lee",
"Se-Young Yun"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
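The contrastive logit adjustment described in the abstract above can be sketched in a few lines: logits computed under a "noisy" instruction (e.g. its opposite) are subtracted, scaled by a small coefficient, from the logits computed under the original instruction before picking the next token. The coefficient name and value are assumptions for illustration.

```python
# Sketch of one Instructive-Decoding-style next-token step.
import torch


def instructive_decoding_step(
    logits_original: torch.Tensor,  # [vocab] logits given the intended instruction
    logits_noisy: torch.Tensor,     # [vocab] logits given the perturbed instruction
    epsilon: float = 0.3,
) -> int:
    adjusted = logits_original - epsilon * logits_noisy
    return int(adjusted.argmax())


if __name__ == "__main__":
    vocab_size = 8
    original, noisy = torch.randn(vocab_size), torch.randn(vocab_size)
    print(instructive_decoding_step(original, noisy))
```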
null | https://openreview.net/forum?id=IPJqprsrNX | @inproceedings{
jin2023dataefficient,
title={Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language},
author={Di Jin and Shikib Mehri and Devamanyu Hazarika and Aishwarya Padmakumar and SUNGJIN LEE and Yang Liu and Mahdi Namazifar},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=IPJqprsrNX}
} | Learning from human feedback is a prominent technique to align the output of large language models (LLMs) with human expectations. Reinforcement learning from human feedback (RLHF) leverages human preference signals that are in the form of ranking of response pairs to perform this alignment. However, human preference on LLM outputs can come in much richer forms including natural language, which may provide detailed feedback on strengths and weaknesses of a given response. In this work we investigate data efficiency of modeling human feedback that is in natural language. Specifically, we fine-tune an open-source LLM, e.g., Falcon-40B-Instruct, on a relatively small amount (1000 records or even less) of human feedback in natural language in the form of critiques and revisions of responses. We show that this model is able to improve the quality of responses from even some of the strongest LLMs such as ChatGPT, BARD, and Vicuna, through critique and revision of those responses. For instance, through one iteration of revision of ChatGPT responses, the revised responses have 56.6% win rate over the original ones, and this win rate can be further improved to 65.9% after applying the revision for five iterations. | Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language | [
"Di Jin",
"Shikib Mehri",
"Devamanyu Hazarika",
"Aishwarya Padmakumar",
"SUNGJIN LEE",
"Yang Liu",
"Mahdi Namazifar"
] | Workshop/Instruction | 2023 | 2311.14543 | [
""
] | https://huggingface.co/papers/2311.14543 | 0 | 1 | 0 | 7 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=HgSViBZd1A | @inproceedings{
grzywinski2023releasing,
title={Releasing the {CR}a{QA}n (Coreference Resolution in Question-Answering): An open-source dataset and dataset creation methodology using instruction-following models},
author={Rob Grzywinski and Joshua D'Arcy and Robert Naidoff and Ashish Shukla and Alex Browne and Ren Gibbons and Brinnae Bent},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=HgSViBZd1A}
} | Instruction-following language models demand robust methodologies for information retrieval to augment instructions for question-answering applications. A primary challenge is the resolution of coreferences in the context of chunking strategies for long documents. The critical barrier to experimentation of handling coreferences is a lack of open source datasets, specifically in question-answering tasks that require coreference resolution. In this work we present our Coreference Resolution in Question-Answering (CRaQAn) dataset, an open-source dataset that caters to the nuanced information retrieval requirements of coreference resolution in question-answering tasks by providing over 250 question-answer pairs containing coreferences. To develop this dataset, we developed a novel approach for creating high-quality datasets using an instruction-following model (GPT-4) and a Recursive Criticism and Improvement Loop. | Releasing the CRaQAn (Coreference Resolution in Question-Answering): An open-source dataset and dataset creation methodology using instruction-following models | [
"Rob Grzywinski",
"Joshua D'Arcy",
"Robert Naidoff",
"Ashish Shukla",
"Alex Browne",
"Ren Gibbons",
"Brinnae Bent"
] | Workshop/Instruction | 2023 | 2311.16338 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
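The Recursive Criticism and Improvement loop mentioned in the abstract above, used to create QA pairs with an instruction-following model, could be sketched as a draft-critique-revise loop; `call_llm`, the prompt wording, and the stopping rule are placeholders, not the authors' released prompts.

```python
# Hypothetical draft-critique-revise loop for QA dataset creation.
from typing import Callable


def rci_generate(passage: str, call_llm: Callable[[str], str], max_rounds: int = 3) -> str:
    draft = call_llm(
        "Write a question-answer pair about this passage whose question "
        f"requires resolving a coreference:\n{passage}"
    )
    for _ in range(max_rounds):
        critique = call_llm(
            "Critique the following QA pair. Reply 'PASS' if the question truly "
            f"requires coreference resolution and is answerable from the passage.\n{draft}"
        )
        if critique.strip().upper().startswith("PASS"):
            break
        draft = call_llm(
            f"Revise the QA pair to address this critique:\n{critique}\n\n{draft}"
        )
    return draft
```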
null | https://openreview.net/forum?id=HVduJbHSSO | @inproceedings{
hu2023ciem,
title={{CIEM}: Contrastive Instruction Evaluation Method for Better Instruction Tuning},
author={Hongyu Hu and Jiyuan Zhang and Minyi Zhao and Zhenbang Sun},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=HVduJbHSSO}
} | Nowadays, research on Large Vision-Language Models (LVLMs) has progressed significantly thanks to the success of Large Language Models (LLMs). Nevertheless, these Vision-Language Models (VLMs) suffer from the drawback of hallucination -- due to insufficient understanding of vision and language modalities, VLMs may generate incorrect perception information in downstream applications, for example, captioning a non-existent entity. To address the hallucination phenomenon, on the one hand, we introduce a $\textbf{C}$ontrastive $\textbf{I}$nstruction $\textbf{E}$valuation $\textbf{M}$ethod (CIEM), which is an automatic pipeline that leverages an annotated image-text dataset coupled with an LLM to generate factual/contrastive question-answer pairs for the evaluation of the hallucination of VLMs. On the other hand, based on CIEM, we further propose a new instruction tuning method called CIT (the abbreviation of $\textbf{C}$ontrastive $\textbf{I}$nstruction $\textbf{T}$uning) to alleviate the hallucination of VLMs by automatically producing high-quality factual/contrastive question-answer pairs and corresponding justifications for model tuning. Through extensive experiments on CIEM and CIT, we pinpoint the hallucination issues commonly present in existing VLMs, the inability of current instruction-tuning datasets to handle the hallucination phenomenon, and the superiority of CIT-tuned VLMs over both CIEM and public datasets. Please contact the authors for code and generated dataset. | CIEM: Contrastive Instruction Evaluation Method for Better Instruction Tuning | [
"Hongyu Hu",
"Jiyuan Zhang",
"Minyi Zhao",
"Zhenbang Sun"
] | Workshop/Instruction | 2023 | 2309.02301 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=GO8aPQ9Odp | @inproceedings{
siththaranjan2023understanding,
title={Understanding Hidden Context in Preference Learning: Consequences for {RLHF}},
author={Anand Siththaranjan and Cassidy Laidlaw and Dylan Hadfield-Menell},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=GO8aPQ9Odp}
} | In practice, preference learning from human feedback depends on incomplete data with hidden context. Hidden context refers to data that affects the feedback received, but which is not represented in the data used to train a preference model. This captures common issues of data collection, such as having human annotators with varied preferences, cognitive processes that result in seemingly irrational behavior, and combining data labeled according to different criteria. We prove that standard applications of preference learning, including reinforcement learning from human feedback (RLHF), implicitly aggregate over hidden contexts according to a well-known voting rule called Borda count. We show this can produce counter-intuitive results that are very different from other methods which implicitly aggregate via expected utility. Furthermore, our analysis formalizes the way that preference learning from users with diverse values tacitly implements a social choice function. A key implication of this result is that annotators have an incentive to misreport their preferences in order to influence the learned model, leading to vulnerabilities in the deployment of RLHF. As a step towards mitigating these problems, we introduce a class of methods called distributional preference learning (DPL). DPL methods estimate a distribution of possible score values for each alternative in order to better account for hidden context. Experimental results indicate that applying DPL to RLHF for LLM chatbots identifies hidden context in the data and significantly reduces subsequent jailbreak vulnerability. | Understanding Hidden Context in Preference Learning: Consequences for RLHF | [
"Anand Siththaranjan",
"Cassidy Laidlaw",
"Dylan Hadfield-Menell"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
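Distributional preference learning, as described in the abstract above, replaces a scalar reward with a predicted distribution over scores. The Gaussian mean-and-variance parameterization below is one natural instantiation and is an assumption, not necessarily the paper's exact formulation.

```python
# Sketch of a distributional reward head and an uncertainty-aware preference loss.
import torch
import torch.nn as nn


class DistributionalRewardHead(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.mean = nn.Linear(hidden_size, 1)
        self.log_std = nn.Linear(hidden_size, 1)

    def forward(self, h: torch.Tensor):
        # h: [batch, hidden] pooled representation of a (prompt, response) pair.
        return self.mean(h).squeeze(-1), self.log_std(h).squeeze(-1).exp()


def preference_nll(mu_w, std_w, mu_l, std_l):
    """Negative log-likelihood that the preferred response outranks the other."""
    z = (mu_w - mu_l) / torch.sqrt(std_w**2 + std_l**2 + 1e-8)
    p_win = torch.distributions.Normal(0.0, 1.0).cdf(z)
    return -(p_win.clamp_min(1e-8)).log().mean()
```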
null | https://openreview.net/forum?id=G8ArB0aApM | @inproceedings{
shin2023past,
title={Past as a Guide: Leveraging Retrospective Learning for Python Code Completion},
author={Seungyoun Shin and Seunggyu Chang and Sungjoon Choi},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=G8ArB0aApM}
} | This work presents Past as a Guide (PaG), a simple approach for Large Language Models (LLMs) to improve their coding capabilities by integrating past history with interactive and iterative code refinements.
To be specific, inspired by human cognitive processes, the proposed method enables LLMs to utilize previous programming and debugging experiences to enhance Python code completion tasks.
The framework enables LLMs to iteratively refine Python code based on previous execution and debugging results, optimizing their learning and reasoning capabilities.
The proposed methodology achieved a 92\% pass@1 on HumanEval, demonstrating the potential to advance the field by leveraging retrospection from past experiences and interactive and iterative refinement processes without external correctness indicators. | Past as a Guide: Leveraging Retrospective Learning for Python Code Completion | [
"Seungyoun Shin",
"Seunggyu Chang",
"Sungjoon Choi"
] | Workshop/Instruction | 2023 | 2311.07635 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=FuOMomaQa8 | @inproceedings{
wang2023fingpt,
title={Fin{GPT}: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets},
author={Neng Wang and Hongyang Yang and Christina Wang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=FuOMomaQa8}
} | In the swiftly expanding domain of Natural Language Processing (NLP), the potential of GPT-based models for the financial sector is increasingly evident. However, the integration of these models with financial datasets presents challenges, notably in determining their adeptness and relevance. This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models, specifically adapted for financial contexts. Through this methodology, we capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration. We begin by explaining the Instruction Tuning paradigm, highlighting its effectiveness for immediate integration. The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression. Firstly, we assess basic competencies and fundamental tasks, such as Named Entity Recognition (NER) and sentiment analysis to enhance specialization. Next, we delve into a comprehensive model, executing multi-task operations by amalgamating all instructional tunings to examine versatility. Finally, we explore the zero-shot capabilities by earmarking unseen tasks and incorporating novel datasets to understand adaptability in uncharted terrains. Such a paradigm fortifies the principles of openness and reproducibility, laying a robust foundation for future investigations in open-source financial large language models (FinLLMs). | FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets | [
"Neng Wang",
"Hongyang Yang",
"Christina Wang"
] | Workshop/Instruction | 2023 | 2310.04793 | [
"https://github.com/ai4finance-foundation/fingpt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=EAuteBjTMw | @inproceedings{
qi2023large,
title={Large Language Models are Zero Shot Hypothesis Proposers},
author={Biqing Qi and Kaiyan Zhang and Haoxiang Li and Kai Tian and Sihang Zeng and Zhang-Ren Chen and Bowen Zhou},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=EAuteBjTMw}
} | Significant scientific discoveries have driven the progress of human civilisation. The explosion of scientific literature and data has created information barriers across disciplines that have slowed the pace of scientific discovery. Large Language Models (LLMs) hold a wealth of global and interdisciplinary knowledge that promises to break down these information barriers and foster a new wave of scientific discovery. However, the potential of LLMs for scientific discovery has not been formally explored. In this paper, we start by investigating whether LLMs can propose scientific hypotheses. To this end, we construct a dataset consisting of background knowledge and hypothesis pairs from biomedical literature. The dataset is divided into training, seen, and unseen test sets based on the publication date to control visibility. We subsequently evaluate the hypothesis generation capabilities of various top-tier instructed models in zero-shot, few-shot, and fine-tuning settings, including both closed and open-source LLMs. Additionally, we introduce an LLM-based multi-agent cooperative framework with different role designs and external tools to enhance the capabilities related to generating hypotheses. We also design four metrics through a comprehensive review to evaluate the generated hypotheses for both ChatGPT-based and human evaluations. Through experiments and analyses, we arrive at the following findings: 1) LLMs surprisingly generate untrained yet validated hypotheses from testing literature. 2) Increasing uncertainty facilitates candidate generation, potentially enhancing zero-shot hypothesis generation capabilities. These findings strongly support the potential of LLMs as catalysts for new scientific discoveries and guide further exploration. | Large Language Models are Zero Shot Hypothesis Proposers | [
"Biqing Qi",
"Kaiyan Zhang",
"Haoxiang Li",
"Kai Tian",
"Sihang Zeng",
"Zhang-Ren Chen",
"Bowen Zhou"
] | Workshop/Instruction | 2023 | 2311.05965 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=CjrPqvvUXL | @inproceedings{
muennighoff2023octopack,
title={OctoPack: Instruction Tuning Code Large Language Models},
author={Niklas Muennighoff and Qian Liu and Armel Zebaze and Qinkai Zheng and Binyuan Hui and Terry Yue Zhuo and Swayam Singh and Xiangru Tang and Leandro Von Werra and Shayne Longpre},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=CjrPqvvUXL}
} | Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack’s benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack. | OctoPack: Instruction Tuning Code Large Language Models | [
"Niklas Muennighoff",
"Qian Liu",
"Armel Zebaze",
"Qinkai Zheng",
"Binyuan Hui",
"Terry Yue Zhuo",
"Swayam Singh",
"Xiangru Tang",
"Leandro Von Werra",
"Shayne Longpre"
] | Workshop/Instruction | 2023 | 2308.07124 | [
"https://github.com/bigcode-project/octopack"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=CZJOOFgXZj | @inproceedings{
li2023approximate,
title={Approximate Clustering for Extracting Task Relationships in Multi-Instruction Tuning},
author={Dongyue Li and Jinhong Yu and Hongyang R. Zhang},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=CZJOOFgXZj}
} | The development of language models involves the evaluation of a broad range of learning tasks. Recent work has shown that by using carefully designed instructions to teach large transformer models, they can be fine-tuned on a wide range of downstream tasks. However, when the number of instructions increases, they can negatively interfere with each other if trained together. Existing works have relied on domain expertise and manual inspection to construct multi-instruction sets, which can be time-consuming and difficult to scale. To address this challenge, this paper develops a clustering algorithm to find groups of similar tasks based on a given set of task affinity scores. This is an NP-hard problem, and conventional algorithms such as spectral and Lloyd's clustering are sensitive to variations in the scale of task losses. Our algorithm instead uses a semidefinite relaxation to maximize the average density of clusters and then rounds the solution with a threshold. We adaptively build the clusters by gradually adding tasks so that the affinities only need to be computed in the existing clusters. Then, we construct an evaluation benchmark to assess task grouping algorithms with verified group structures. The evaluation set includes 63 cases, spanning multitask instruction tuning, multi-instruction tuning, and in-context learning of multiple functions. We validate our algorithm on this evaluation set by showing that it recovers the group structure found by an exhaustive search. We also show that our approach improves performance over multi-instruction and soft-prompt tuning by up to 6% on several sentence classification and structure-to-text generative tasks. | Approximate Clustering for Extracting Task Relationships in Multi-Instruction Tuning | [
"Dongyue Li",
"Jinhong Yu",
"Hongyang R. Zhang"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=Bc3S2G1PxH | @inproceedings{
kirk2023understanding,
title={Understanding the Effects of {RLHF} on {LLM} Generalisation and Diversity},
author={Robert Kirk and Ishita Mediratta and Christoforos Nalmpantis and Jelena Luketina and Eric Hambro and Edward Grefenstette and Roberta Raileanu},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=Bc3S2G1PxH}
} | Large language models (LLMs) fine-tuned with reinforcement learning from human feedback (RLHF) have been used in some of the most widely deployed AI models to date, such as OpenAI’s ChatGPT or Anthropic’s Claude. While there has been significant work developing these methods, our understanding of the benefits and downsides of each stage in RLHF is still limited. To fill this gap, we present an extensive analysis of how each stage of the process (i.e. supervised fine-tuning (SFT), reward modelling, and RLHF) affects two key properties: out-of-distribution generalisation (OOD) and output diversity. OOD generalisation is crucial given the wide range of real-world scenarios in which these models are being used, while output diversity refers to the model’s ability to generate varied outputs, and is important for a variety of use cases. We perform our analysis across two base models on both summarisation and instruction following tasks, the latter being highly relevant for current LLM use cases. We find that RLHF generalises better than SFT to new inputs, particularly as the distribution shift between train and test becomes larger. However, RLHF significantly reduces output diversity compared to SFT across a variety of measures, implying a tradeoff in current LLM fine-tuning methods between generalisation and diversity. Our results provide guidance on which fine-tuning method should be used depending on the application, and show that more research is needed to improve the tradeoff between generalisation and diversity. | Understanding the Effects of RLHF on LLM Generalisation and Diversity | [
"Robert Kirk",
"Ishita Mediratta",
"Christoforos Nalmpantis",
"Jelena Luketina",
"Eric Hambro",
"Edward Grefenstette",
"Roberta Raileanu"
] | Workshop/Instruction | 2023 | 2310.06452 | [
"https://github.com/facebookresearch/rlfh-gen-div"
] | https://huggingface.co/papers/2310.06452 | 1 | 2 | 0 | 7 | 1 | [] | [
"UCL-DARK/sequential-instructions",
"UCL-DARK/openai-tldr-filtered",
"UCL-DARK/openai-tldr-summarisation-preferences",
"UCL-DARK/openai-tldr-filtered-queries",
"UCL-DARK/alpaca-farm-id-test"
] | [] |
null | https://openreview.net/forum?id=7KxUgWTZbz | @inproceedings{
zhao2023group,
title={Group Preference Optimization: Few-Shot Alignment of Large Language Models},
author={Siyan Zhao and John Dang and Aditya Grover},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=7KxUgWTZbz}
} | Many applications of large language models (LLMs), ranging from chatbots to creative writing, require nuanced subjective judgments that can differ significantly across different groups.
Existing alignment algorithms can be expensive to apply to each group, requiring prohibitive amounts of group-specific preference data and computation for real-world use cases.
We introduce Group Preference Optimization (GPO), an alignment framework that steers language models to preferences of individual groups in a few-shot manner.
In GPO, we augment the base LLM with an independent transformer module trained to predict the preferences of a group for the LLM generations.
For few-shot learning, we parameterize this module as an in-context autoregressive transformer and train it via meta-learning on several groups. We empirically validate the efficacy of GPO through rigorous evaluations using LLMs with varied sizes on three human opinion adaptation tasks. These tasks involve adapting to the preferences of US demographic groups, global countries, and individual users. Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences, and less training and inference computing resources, outperforming existing strategies such as in-context steering and fine-tuning methods. | Group Preference Optimization: Few-Shot Alignment of Large Language Models | [
"Siyan Zhao",
"John Dang",
"Aditya Grover"
] | Workshop/Instruction | 2023 | 2310.11523 | [
"https://github.com/jamqd/Group-Preference-Optimization"
] | https://huggingface.co/papers/2310.11523 | 0 | 0 | 0 | 3 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=6579t0X8X2 | @inproceedings{
lee2023platypus,
title={Platypus: Quick, Cheap, and Powerful Refinement of {LLM}s},
author={Ariel Lee and Cole Hunter and Nataniel Ruiz},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=6579t0X8X2}
} | We present **Platypus**, a family of fine-tuned and merged Large Language Models (LLMs) that achieved the strongest performance and stood at first place in HuggingFace's Open LLM Leaderboard at the time of writing. In this work we describe (1) our curated dataset **Open-Platypus**, which is a subset of other open datasets and which we release to the public, (2) our process of fine-tuning and merging LoRA modules in order to conserve the strong prior of pretrained LLMs, while bringing specific domain knowledge to the surface, and (3) our efforts in checking for test data leaks and contamination in the training data, which can inform future research. Specifically, the Platypus family achieves strong performance in quantitative LLM metrics across model sizes, topping the global Open LLM leaderboard while using just a fraction of the fine-tuning data and overall compute that are required for other state-of-the-art fine-tuned LLMs. In particular, a 13B Platypus model can be trained on a single A100 GPU using 25k questions in 5 hours. This is a testament to the quality of our Open-Platypus dataset, and opens opportunities for more improvements in the field. | Platypus: Quick, Cheap, and Powerful Refinement of LLMs | [
"Ariel Lee",
"Cole Hunter",
"Nataniel Ruiz"
] | Workshop/Instruction | 2023 | 2308.07317 | [
"https://github.com/arielnlee/Platypus"
] | https://huggingface.co/papers/2308.07317 | 3 | 23 | 4 | 3 | 1 | [
"Open-Orca/OpenOrca-Platypus2-13B",
"garage-bAInd/Platypus2-70B-instruct",
"TheBloke/OpenOrca-Platypus2-13B-GGML",
"TheBloke/OpenOrca-Platypus2-13B-GPTQ",
"TheBloke/Platypus2-70B-Instruct-GPTQ",
"kyujinpy/KO-Platypus2-7B-ex",
"garage-bAInd/Platypus2-70B",
"garage-bAInd/Stable-Platypus2-13B",
"garage-bAInd/Platypus2-13B",
"TheBloke/OpenOrca-Platypus2-13B-GGUF",
"TheBloke/Platypus2-70B-Instruct-GGML",
"garage-bAInd/Camel-Platypus2-70B",
"TheBloke/Platypus2-70B-Instruct-GGUF",
"TheBloke/Platypus2-70B-GGML",
"TheBloke/Stable-Platypus2-13B-GPTQ",
"garage-bAInd/Platypus2-7B",
"TheBloke/Camel-Platypus2-13B-GGML",
"TheBloke/Platypus2-70B-GPTQ",
"TheBloke/Stable-Platypus2-13B-GGML",
"TheBloke/Platypus2-13B-GGML",
"TheBloke/Camel-Platypus2-70B-GGML",
"TheBloke/Platypus2-70B-GGUF",
"TheBloke/Camel-Platypus2-13B-GPTQ",
"TheBloke/OpenOrca-Platypus2-13B-AWQ",
"TheBloke/Platypus2-13B-GPTQ",
"TheBloke/Camel-Platypus2-70B-GPTQ",
"garage-bAInd/Camel-Platypus2-13B",
"TheBloke/Camel-Platypus2-70B-GGUF",
"garage-bAInd/Platypus-70B-adapters",
"TheBloke/Platypus2-13B-GGUF",
"TheBloke/Stable-Platypus2-13B-GGUF",
"TheBloke/Camel-Platypus2-13B-GGUF",
"TheBloke/Platypus2-70B-Instruct-AWQ",
"TheBloke/Camel-Platypus2-13B-AWQ",
"TheBloke/Platypus2-13B-AWQ",
"TheBloke/Platypus2-70B-AWQ",
"TheBloke/Camel-Platypus2-70B-AWQ",
"AzureBlack/Platypus2-70B-instruct-4.1bpw-6h-exl2",
"Aryanne/OpenLlama-Platypus-3B-gguf",
"RichardErkhov/garage-bAInd_-_Platypus2-70B-gguf",
"garage-bAInd/Platypus-13B-adapters",
"uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b",
"uukuguy/speechless-orca-platypus-coig-lite-4k-0.5e-13b",
"uukuguy/speechless-orca-platypus-coig-lite-4k-0.6e-13b",
"TheBloke/Stable-Platypus2-13B-AWQ",
"garage-bAInd/Platypus-7B-adapters",
"RichardErkhov/garage-bAInd_-_Platypus2-7B-8bits",
"RichardErkhov/garage-bAInd_-_Platypus2-7B-gguf",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-8bits",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-4bits",
"RichardErkhov/garage-bAInd_-_Platypus2-13B-gguf",
"RichardErkhov/garage-bAInd_-_Stable-Platypus2-13B-gguf",
"RichardErkhov/garage-bAInd_-_Camel-Platypus2-13B-gguf"
] | [
"garage-bAInd/Open-Platypus",
"kyujinpy/KOpen-platypus",
"kyujinpy/Open-platypus-Commercial",
"botp/Open-Platypus"
] | [
"open-llm-leaderboard/open_llm_leaderboard",
"Intel/low_bit_open_llm_leaderboard",
"BAAI/open_cn_llm_leaderboard",
"Open-Orca/OpenOrca-Platypus2-13B",
"gsaivinay/open_llm_leaderboard",
"open-llm-leaderboard-old/open_llm_leaderboard",
"EvanTHU/MotionLLM",
"GTBench/GTBench",
"felixz/open_llm_leaderboard",
"OPTML-Group/UnlearnCanvas-Benchmark",
"Vikhrmodels/small-shlepa-lb",
"bardsai/performance-llm-board",
"Satyam-Singh/garage-bAInd-Platypus2-70B",
"barunsaha/slides-wizard",
"neubla/neubla-llm-evaluation-board",
"rodrigomasini/data_only_open_llm_leaderboard",
"Docfile/open_llm_leaderboard",
"PrarthanaTS/tsai-gpt-from-scratch",
"HemaAM/GPT_train_on_LLaMa",
"anantgupta129/LitGPT-Pythia-160M",
"RaviNaik/ERA-SESSION22",
"Sijuade/GPTNEXTWORD",
"mikeee/s3nh-garage-bAInd-Stable-Platypus2-13B-GGML",
"kyungeun/llm_tasks_chat",
"smothiki/open_llm_leaderboard",
"Hyperion-js/Open-Orca-OpenOrca-Platypus2-13B",
"olgazju/Open-Orca-OpenOrca-Platypus2-13B",
"tellview/Open-Orca-OpenOrca-Platypus2-13B",
"something01/Open-Orca-OpenOrca-Platypus2-13B",
"0x1668/open_llm_leaderboard",
"bburli/Open-Orca-OpenOrca-Platypus2-13B",
"pngwn/open_llm_leaderboard-check",
"AlexFierro9/Open-Orca-OpenOrca-Platypus2-13B",
"asir0z/open_llm_leaderboard",
"kbmlcoding/open_llm_leaderboard_free",
"pri7ansh/Open-Orca-OpenOrca-Platypus2-13B",
"E-Hospital/oop-deploy",
"aichampions/open_llm_leaderboard",
"Adeco/open_llm_leaderboard",
"anirudh937/open_llm_leaderboard",
"smothiki/open_llm_leaderboard2",
"piyushgrover/MiniGPT_S22",
"supra-e-acc/Pythia-160M-text-generate",
"venkyyuvy/GPT_redpajama",
"TharunSivamani/GPT-Predictor",
"VarunSivamani/GPT-From-Scratch",
"mkthoma/GPT_From_Scratch",
"sanjanatule/GPTNext",
"RashiAgarwal/TSAIGPTRedPajama",
"neuralorbs/DialogGen",
"MadhurGarg/TSAIGPTRedPajama",
"Navyabhat/ERAV1-Session-22",
"GunaKoppula/ERA-Session-22",
"Vaish2705/ERA_S22",
"ToletiSri/TSAI_S22",
"huaijin/garage-bAInd-Camel-Platypus2-70B",
"smothiki/open_llm_leaderboard_old",
"mdugger/garage-bAInd-Platypus2-70B",
"Zeros0sZero/garage-bAInd-Platypus2-70B-instruct",
"loganblack0/garage-bAInd-Platypus2-70B-instruct",
"Utopian2/garage-bAInd-Platypus2-70B-instruct",
"blazingbunny/garage-bAInd-Platypus2-70B-instruct",
"PeepDaSlan9/garage-bAInd-Platypus2-70B-instruct",
"Ragunandha/garage-bAInd-Platypus2-70B-instruct",
"Vexvoi/garage-bAInd-Platypus2-70B-instruct",
"fika9903/garage-bAInd-Platypus2-70B-instruct",
"saidloyens/garage-bAInd-Platypus2-70B-instruct",
"AV29/garage-bAInd-Platypus2-70B-instruct",
"prasaugus/garage-bAInd-Platypus2-70B-instruct",
"cclarkson125/garage-bAInd-Platypus2-70B-instruct",
"phxdev/garage-bAInd-Platypus2-70B-instruct",
"joaopaulopresa/workshop_llm_ufg_chatbot",
"srossitto79/AgentLlama007B",
"Wallndir/garage-bAInd-Platypus2-70B-instruct",
"Asiya057/Incarna-Mind",
"Asiya057/Incarna-Mind-POC",
"Xhaheen/AI_safety_testing",
"mikeee/codellama-13b-python-ggml",
"hongjong/kyujinpy-KO-Platypus2-7B-ex"
] |
null | https://openreview.net/forum?id=5dI6ZphLYX | @inproceedings{
ye2023flask,
title={{FLASK}: Fine-grained Language Model Evaluation based on Alignment Skill Sets},
author={Seonghyeon Ye and Doyoung Kim and Sungdong Kim and Hyeonbin Hwang and Seungone Kim and Yongrae Jo and James Thorne and Juho Kim and Minjoon Seo},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=5dI6ZphLYX}
} | Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e. overall preference-based evaluation), which limits interpretability since it does not consider the nature of user instructions that require instance-wise skill composition. In this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on Alignment Skill Sets), a fine-grained evaluation protocol for both human-based and model-based evaluation which decomposes coarse-level scoring to a skill set-level scoring for each instruction. We experimentally observe that the fine-graininess of evaluation is crucial for attaining a holistic view of model performance and increasing the reliability of the evaluation. Using FLASK, we compare multiple open-source and proprietary LLMs and observe a high correlation between model-based and human-based evaluations. | FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets | [
"Seonghyeon Ye",
"Doyoung Kim",
"Sungdong Kim",
"Hyeonbin Hwang",
"Seungone Kim",
"Yongrae Jo",
"James Thorne",
"Juho Kim",
"Minjoon Seo"
] | Workshop/Instruction | 2023 | 2307.10928 | [
"https://github.com/kaistai/flask"
] | https://huggingface.co/papers/2307.10928 | 9 | 12 | 2 | 9 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=5O9JBt35zg | @inproceedings{
yang2023learning,
title={Learning Interactive Real-World Simulators},
author={Sherry Yang and Yilun Du and Seyed Kamyar Seyed Ghasemipour and Jonathan Tompson and Dale Schuurmans and Pieter Abbeel},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=5O9JBt35zg}
} | Generative models trained on internet data have revolutionized how text, image, and video content can be created. Perhaps the next milestone for generative models is to simulate realistic experience in response to actions taken by humans, robots, and other interactive agents. Applications of a real-world simulator range from controllable content creation in games and movies, to training embodied agents purely in simulation that can be directly deployed in the real world. We explore the possibility of learning a universal simulator (UniSim) of real-world interaction through generative modeling. We first make the important observation that natural datasets available for learning a real-world simulator are often rich along different axes (e.g., abundant objects in image data, densely sampled actions in robotics data, and diverse movements in navigation data). With careful orchestration of diverse datasets, each providing a different aspect of the overall experience, UniSim can emulate how humans and agents interact with the world by simulating the visual outcome of both high-level instructions such as “open the drawer” and low-level controls such as “move by x,y” from otherwise static scenes and objects. There are numerous use cases for such a real-world simulator. As an example, we use UniSim to train both high-level vision-language planners and low-level reinforcement learning policies, each of which exhibit zero-shot real-world transfer after training purely in a learned real-world simulator. We also show that other types of intelligence such as video captioning models can benefit from training with simulated experience in UniSim, opening up even wider applications. | Learning Interactive Real-World Simulators | [
"Sherry Yang",
"Yilun Du",
"Seyed Kamyar Seyed Ghasemipour",
"Jonathan Tompson",
"Dale Schuurmans",
"Pieter Abbeel"
] | Workshop/Instruction | 2023 | 2310.06114 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=5BqWC1Fz8F | @inproceedings{
liu2023fingpt,
title={Fin{GPT}: Democratizing Internet-scale Data for Financial Large Language Models},
author={Xiao-Yang Liu and Guoxuan Wang and Hongyang Yang and Daochen Zha},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=5BqWC1Fz8F}
} | Large language models (LLMs) have demonstrated remarkable proficiency in understanding and generating human-like texts, which may potentially revolutionize the finance industry. However, existing LLMs often fall short in the financial field, which is mainly attributed to the disparities between general text data and financial text data.
Unfortunately, there is only a limited number of financial text datasets available, and BloombergGPT \cite{wu2023bloomberggpt}, the first financial LLM (FinLLM), is closed-source (only the training logs were released). In light of this, we aim to democratize Internet-scale financial data for LLMs, which is an open challenge due to diverse data sources, low signal-to-noise ratio, and high time-validity. To address the challenges, we introduce an open-sourced and data-centric framework, \textit{Financial Generative Pre-trained Transformer (FinGPT)}, that automates the collection and curation of real-time financial data from $\geq 34$ diverse sources on the Internet, providing researchers and practitioners with accessible and transparent resources to develop their FinLLMs. Additionally, we propose a simple yet effective strategy for fine-tuning FinLLM using the inherent feedback from the market, dubbed \textit{Reinforcement Learning with Stock Prices} (RLSP). We also adopt the Low-rank Adaptation (LoRA, QLoRA) method that enables users to customize their own FinLLMs from open-source general-purpose LLMs at a low cost. Finally, we showcase several FinGPT applications, including robo-advisor, sentiment analysis for algorithmic trading, and low-code development. FinGPT aims to democratize FinLLMs, stimulate innovation, and unlock new opportunities in open finance. The code has been open-sourced. | FinGPT: Democratizing Internet-scale Data for Financial Large Language Models | [
"Xiao-Yang Liu",
"Guoxuan Wang",
"Hongyang Yang",
"Daochen Zha"
] | Workshop/Instruction | 2023 | 2307.10485 | [
"https://github.com/ai4finance-foundation/fingpt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=4YlMoQoNhL | @inproceedings{
tu2023sight,
title={Sight Beyond Text: Multi-Modal Training Enhances {LLM}s in Truthfulness and Ethics},
author={Haoqin Tu and Bingchen Zhao and Chen Wei and Cihang Xie},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=4YlMoQoNhL}
} | Multi-modal large language models (MLLMs) are trained based on large language models (LLM), with an enhanced capability to comprehend multi-modal inputs and generate textual responses. While they excel in multi-modal tasks, the pure NLP abilities of MLLMs are often underestimated and left untested.
In this study, we get out of the box and unveil an intriguing characteristic of MLLMs --- our preliminary results suggest that visual instruction tuning, a prevailing strategy for transitioning LLMs into MLLMs, unexpectedly and interestingly helps models attain both improved truthfulness and ethical alignment in the pure NLP context.
For example, a visual-instruction-tuned LLaMA2 7B model surpasses the performance of the LLaMA2-chat 7B model, fine-tuned with over one million human annotations, on \texttt{TruthfulQA} and \texttt{Ethics} benchmarks.
Further analysis reveals that the improved alignment can be attributed to the superior instruction quality inherent to visual-text data. In releasing our code at \url{github.com/UCSC-VLAA/Sight-Beyond-Text}, we aspire to foster further exploration into the intrinsic value of visual-text synergies and, in a broader scope, multi-modal interactions in alignment research. | Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics | [
"Haoqin Tu",
"Bingchen Zhao",
"Chen Wei",
"Cihang Xie"
] | Workshop/Instruction | 2023 | 2309.07120 | [
"https://github.com/ucsc-vlaa/sight-beyond-text"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=3xGnOrUqt1 | @inproceedings{
cai2023a,
title={A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event Extraction},
author={Erica Cai and Brendan O'Connor},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=3xGnOrUqt1}
} | Current social science efforts automatically populate event databases of "who did what to whom?'' tuples, by applying event extraction (EE) to text such as news. The event databases are used to analyze sociopolitical dynamics between actor pairs (dyads) in, e.g., international relations. While most EE methods heavily rely on rules or supervised learning, \emph{zero-shot} event extraction could potentially allow researchers to flexibly specify arbitrary event classes for new research questions. Unfortunately, we find that current zero-shot EE methods, as well as a naive zero-shot approach of simple generative language model (LM) prompting, perform poorly for dyadic event extraction; most suffer from word sense ambiguity, modality sensitivity, and computational inefficiency. We address these challenges with a new fine-grained, multi-stage instruction-following generative LM pipeline, proposing a Monte Carlo approach to deal with, and even take advantage of, nondeterminism of generative outputs. Our pipeline includes explicit stages of linguistic analysis (synonym generation, contextual disambiguation, argument realization, event modality), \textit{improving control and interpretability} compared to purely neural methods. This method outperforms other zero-shot EE approaches and outperforms naive applications of generative LMs by at least 17 F1 percent points. The pipeline's filtering mechanism greatly improves computational efficiency, allowing it to perform as few as 12% of queries that a previous zero-shot method uses. Finally, we demonstrate our pipeline's application to dyadic international relations analysis. | A Monte Carlo Language Model Pipeline for Zero-Shot Sociopolitical Event Extraction | [
"Erica Cai",
"Brendan O'Connor"
] | Workshop/Instruction | 2023 | 2305.15051 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=3846Xhv7mm | @inproceedings{
lu2023an,
title={An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models},
author={Yadong Lu and Chunyuan Li and Haotian Liu and Jianwei Yang and Jianfeng Gao and yelong shen},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=3846Xhv7mm}
} | Visual instruction tuning has recently shown encouraging progress with open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However, most existing studies of open-source LMM are performed using models with 13B parameters or smaller. In this paper we present an empirical study of scaling LLaVA up to 33B and 65B/70B, and share our findings from our explorations in image resolution, data mixing and parameter-efficient training methods such as LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves language capabilities, and that the performance of LoRA/QLoRA tuning of LMM is comparable to that of full-model fine-tuning. Additionally, the study highlights the importance of higher image resolutions and of mixing multimodal-language data to improve LMM performance, and shows that visual instruction tuning can sometimes improve LMM's pure language capability. We hope this study makes state-of-the-art LMM research at a larger scale more accessible, thus helping establish stronger baselines for future research. Code and checkpoints will be made public. | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | [
"Yadong Lu",
"Chunyuan Li",
"Haotian Liu",
"Jianwei Yang",
"Jianfeng Gao",
"yelong shen"
] | Workshop/Instruction | 2023 | 2309.09958 | [
"https://github.com/haotian-liu/LLaVA"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=2fc5GOPYip | @inproceedings{
raman2023for,
title={For Distillation, Tokens Are Not All You Need},
author={Mrigank Raman and Pranav Mani and Davis Liang and Zachary Lipton},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=2fc5GOPYip}
} | The unwieldy size of state-of-the-art language models presents significant obstacles for deployment, driving up cost and latency. While prior works have offered methods for distilling these larger language models into smaller students, the best previous method is somewhat complex, relying on an RL-based optimization. In this work, we introduce SLIM (Sparse Logit Infused Modeling), a simple method for distilling LLMs that leverages not only samples from the teacher LLM but also the values of the logits produced at each decoding step. Our distillation method uses only the top-5% logits along with a dynamic weighting scheme that assigns weights to the KL divergence and cross-entropy loss based on the relative confidence between the student and teacher models. Our experiments demonstrate that SLIM produces models that are better at a wide range of downstream NLP tasks compared to supervised fine-tuning, vanilla knowledge distillation, and the recently proposed MiniLLM. Unlike other methods, our method scales to much larger teachers ($\sim70$B parameters). We also provide an intuition for the superior performance of SLIM via established sample complexity bounds within simplified scenarios. | For Distillation, Tokens Are Not All You Need | [
"Mrigank Raman",
"Pranav Mani",
"Davis Liang",
"Zachary Lipton"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=1TFhamIXNn | @inproceedings{
ye2023investigating,
title={Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following},
author={Seonghyeon Ye and Hyeonbin Hwang and Sohee Yang and Hyeongu Yun and Yireun Kim and Minjoon Seo},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=1TFhamIXNn}
} | In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference. TAPP is different from canonical prompts for LLMs in that it is a fixed prompt prepended to the beginning of every input regardless of the target task for zero-shot generalization. We observe that both base LLMs (i.e. not fine-tuned to follow instructions) and instruction-tuned models benefit from TAPP, resulting in 34.58% and 12.26% improvement on average, respectively. This implies that the instruction-following ability of LLMs can be improved during inference time with a fixed prompt constructed with simple heuristics. We hypothesize that TAPP helps language models better estimate the output distribution by focusing more on the instruction of the target task during inference. In other words, this ability seems to be insufficiently activated not only in base LLMs but also in many instruction-fine-tuned LLMs. | Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following | [
"Seonghyeon Ye",
"Hyeonbin Hwang",
"Sohee Yang",
"Hyeongu Yun",
"Yireun Kim",
"Minjoon Seo"
] | Workshop/Instruction | 2023 | 2302.14691 | [
"https://github.com/seonghyeonye/icil"
] | https://huggingface.co/papers/2302.14691 | 1 | 0 | 0 | 6 | 1 | [] | [] | [] |
null | https://openreview.net/forum?id=0nRcZeeE5f | @inproceedings{
mozannar2023simulating,
title={Simulating Iterative Human-{AI} Interaction in Programming with {LLM}s},
author={Hussein Mozannar and Valerie Chen and Dennis Wei and Prasanna Sattigeri and Manish Nagireddy and Subhro Das and Ameet Talwalkar and David Sontag},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=0nRcZeeE5f}
} | Large language models (LLMs) are increasingly used to support humans in tasks involving writing natural language and programming. How do we evaluate the benefits of LLM assistance for humans and learn from human interaction? We argue that benchmarks that evaluate the abilities of the model in isolation are not sufficient to reveal its impact on humans. Ideally, we can conduct user studies where humans complete tasks with the LLM and measure outcomes of interest. However, this can be prohibitively expensive in terms of human resources, especially as we want to iterate on model design continuously. We propose building a simulation environment that mimics how humans interact with the LLM, focusing in this work on assistants that provide inline suggestions for coding tasks. The environment simulates the multi-turn interactions that occur in programming with LLMs and uses a secondary LLM to simulate the human.
To make sure the environment is realistic, we design it based on work that studies programmer behavior when coding with LLMs. The environment allows us to evaluate LLMs of different scales in terms of simulation metrics of success. The simulation also allows us to collect data that can potentially be used to improve the LLM's ability to assist humans, which we showcase with a simple experiment. | Simulating Iterative Human-AI Interaction in Programming with LLMs | [
"Hussein Mozannar",
"Valerie Chen",
"Dennis Wei",
"Prasanna Sattigeri",
"Manish Nagireddy",
"Subhro Das",
"Ameet Talwalkar",
"David Sontag"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=0U1ZHdWX3l | @inproceedings{
schnabel2023balancing,
title={Balancing Multiple Objectives for Efficient Metaprompts for Data Labeling Tasks with Extensive Guidelines},
author={Tobias Schnabel and Jennifer Neville},
booktitle={NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following},
year={2023},
url={https://openreview.net/forum?id=0U1ZHdWX3l}
} | Spurred by ever-increasing context-window sizes, two recent trends in the application of large language models (LLMs) for data annotation and pattern extraction are (i) longer prompts with complex structures, rich information and task instructions and (ii) the processing of many data points in the same prompt (minibatching) to increase query efficiency. In the process of annotating and analyzing data, the same metaprompts are re-used with many different inputs and are thus worth optimizing for length, as billing is proportional to overall token usage.
Traditional prompt optimization techniques address these two trends only insufficiently: first, by ignoring the structure of prompts, they are limited in the transformation operations they can perform; second, they do not consider important factors such as input and output costs or adherence to output specifications.
To overcome these limitations, we propose structure-aware multi-objective metaprompt optimization (SAMMO), a framework that automatically balances multiple objectives for high-level prompt structures and encompasses several existing prompt optimization methods as special cases.
Drawing from approaches for neural architecture search, SAMMO carries out a genetic search over a set of mutation operators that can change the structure and information contained in non-trivial ways. Empirically, we show on a wide range of annotation tasks that SAMMO succeeds in finding metaprompts that have over 30% fewer tokens while remaining as accurate as the baseline prompt. | Balancing Multiple Objectives for Efficient Metaprompts for Data Labeling Tasks with Extensive Guidelines | [
"Tobias Schnabel",
"Jennifer Neville"
] | Workshop/Instruction | 2023 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=tYCLmx9RgE | @inproceedings{
pichler2024on,
title={On the Limitation of Backdoor Detection Methods},
author={Georg Pichler and Marco Romanelli and Divya Prakash Manivannan and Prashanth Krishnamurthy and Farshad Khorrami and Siddharth Garg},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=tYCLmx9RgE}
} | We introduce a formal statistical definition for the problem of backdoor detection in machine learning systems and use it to analyze the feasibility of this problem, providing evidence for the utility and applicability of our definition. The main contributions of this work are an impossibility result and an achievability result for backdoor detection. We show a no-free-lunch theorem, proving that universal backdoor detection is impossible, except for very small alphabet sizes. Furthermore, we link our definition to the probably approximately correct (PAC) learnability of the out-of-distribution detection problem, establishing a formal connection between backdoor and out-of-distribution detection. | On the Limitation of Backdoor Detection Methods | [
"Georg Pichler",
"Marco Romanelli",
"Divya Prakash Manivannan",
"Prashanth Krishnamurthy",
"Farshad Khorrami",
"Siddharth Garg"
] | Workshop/BUGS | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=sz9vHbZPWU | @inproceedings{
an2024how,
title={How to remove backdoors in diffusion models?},
author={Shengwei An and Sheng-Yen Chou and Kaiyuan Zhang and Qiuling Xu and Guanhong Tao and Guangyu Shen and Siyuan Cheng and Shiqing Ma and Pin-Yu Chen and Tsung-Yi Ho and Xiangyu Zhang},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=sz9vHbZPWU}
} | Diffusion models (DM) have become state-of-the-art generative models because of their capability of generating high-quality images from noise without adversarial training. However, they are vulnerable to backdoor attacks as reported by recent studies. When a data input (e.g., some Gaussian noise) is stamped with a trigger (e.g., a white patch), the backdoored model always generates the target image (e.g., an improper photo). However, effective defense strategies to mitigate backdoors in DMs are underexplored. To bridge this gap, we propose the first backdoor detection and removal framework for DMs. We evaluate our framework on hundreds of DMs of 3 types including DDPM, NCSN and LDM, with 13 samplers against 3 existing backdoor attacks. Extensive experiments show that our approach achieves close to 100% detection accuracy and reduces the backdoor effects to close to zero without significantly sacrificing the model utility. | How to remove backdoors in diffusion models? | [
"Shengwei An",
"Sheng-Yen Chou",
"Kaiyuan Zhang",
"Qiuling Xu",
"Guanhong Tao",
"Guangyu Shen",
"Siyuan Cheng",
"Shiqing Ma",
"Pin-Yu Chen",
"Tsung-Yi Ho",
"Xiangyu Zhang"
] | Workshop/BUGS | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=l642rGiKGr | @inproceedings{
kim2024adversarial,
title={Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning},
author={Taejin Kim and Jiarui Li and Nikhil Madaan and Shubhranshu Singh and Carlee Joe-Wong},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=l642rGiKGr}
} | In today's data-driven landscape, the delicate equilibrium between safeguarding user privacy and unleashing data's potential stands as a paramount concern. Federated learning, which enables collaborative model training without necessitating data sharing, has emerged as a privacy-centric solution. This distributed approach brings forth security challenges, notably poisoning and backdoor attacks where malicious entities inject corrupted data. Our research, initially spurred by test-time evasion attacks, investigates the intersection of adversarial training and backdoor attacks within federated learning, introducing Adversarial Robustness Unhardening (ARU). ARU is employed by a subset of adversaries to intentionally undermine model robustness during federated training, rendering models susceptible to a broader range of evasion attacks. We present extensive empirical experiments evaluating ARU's impact on adversarial training and existing robust aggregation defenses against poisoning and backdoor attacks. Our findings inform strategies for enhancing ARU to counter current defensive measures and highlight the limitations of existing defenses, offering insights into bolstering defenses against ARU. | Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning | [
"Taejin Kim",
"Jiarui Li",
"Nikhil Madaan",
"Shubhranshu Singh",
"Carlee Joe-Wong"
] | Workshop/BUGS | poster | 2310.11594 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=jhWV5bLeT0 | @inproceedings{
lai2024how,
title={How to Backdoor HyperNetwork in Personalized Federated Learning?},
author={Phung Lai and Hai Phan and Issa Khalil and Abdallah Khreishah and Xintao Wu},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=jhWV5bLeT0}
} | This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks. Building on this, we propose a novel model-transferring attack (called HNTroj), the first of its kind, which transfers a local backdoor-infected model to all legitimate and personalized local models, which are generated by the HyperNetFL model, through consistent and effective malicious local gradients computed across all compromised clients in the whole training process. As a result, HNTroj reduces the number of compromised clients needed to successfully launch the attack without any observable signs of sudden shifts or degradation regarding model utility on legitimate data samples, making our attack stealthy. To defend against HNTroj, we adapt several backdoor-resistant FL training algorithms into HyperNetFL. Extensive experiments carried out on several benchmark datasets show that HNTroj significantly outperforms data poisoning and model replacement attacks and bypasses robust training algorithms even with modest numbers of compromised clients. | How to Backdoor HyperNetwork in Personalized Federated Learning? | [
"Phung Lai",
"Hai Phan",
"Issa Khalil",
"Abdallah Khreishah",
"Xintao Wu"
] | Workshop/BUGS | poster | 2201.07063 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=hLtbmYoW5w | @inproceedings{
acharya2024universal,
title={Universal Trojan Signatures in Reinforcement Learning},
author={Manoj Acharya and Weichao Zhou and Anirban Roy and Xiao Lin and Wenchao Li and Susmit Jha},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=hLtbmYoW5w}
} | We present a novel approach for characterizing Trojaned reinforcement learning (RL) agents. By monitoring for discrepancies in how an agent's policy evaluates state observations for choosing an action, we can reliably detect whether the policy is Trojaned. Experiments on the IARPA RL challenge benchmarks show that our approach can effectively detect Trojaned models even in transfer settings with novel RL environments and modified architectures. | Universal Trojan Signatures in Reinforcement Learning | [
"Manoj Acharya",
"Weichao Zhou",
"Anirban Roy",
"Xiao Lin",
"Wenchao Li",
"Susmit Jha"
] | Workshop/BUGS | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=e9F4fB23o0 | @inproceedings{
lamparth2024analyzing,
title={Analyzing And Editing Inner Mechanisms of Backdoored Language Models},
author={Max Lamparth and Ann-Katrin Reuel},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=e9F4fB23o0}
} | Poisoning of data sets is a potential security threat to large language models and can lead to backdoored models. A description of the internal mechanisms of backdoored language models and how they process trigger inputs, e.g., when switching to toxic language, has yet to be established. In this work, we study the internal representations of transformer-based backdoored language models and identify early-layer MLP modules, in combination with the initial embedding projection, as the most important components of the backdoor mechanism. We use this knowledge to remove, insert, and modify backdoor mechanisms with engineered replacements that reduce the MLP module outputs to the essentials of the backdoor mechanism. To this end, we introduce PCP ablation, where we replace transformer modules with low-rank matrices based on the principal components of their activations. We demonstrate our results on backdoored toy, backdoored large, and non-backdoored open-source models. We show that we can improve the backdoor robustness of large language models by locally constraining individual modules during fine-tuning on potentially poisoned data sets.
Trigger warning: Offensive language. | Analyzing And Editing Inner Mechanisms of Backdoored Language Models | [
"Max Lamparth",
"Ann-Katrin Reuel"
] | Workshop/BUGS | poster | 2302.12461 | [
"https://github.com/maxlampe/causalbackdoor"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
null | https://openreview.net/forum?id=cmJiEqniEc | @inproceedings{
langosco2024detecting,
title={Detecting Backdoors with Meta-Models},
author={Lauro Langosco and Neel Alex and William Baker and David Quarel and Herbie Bradley and David Krueger},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=cmJiEqniEc}
} | It is widely known that it is possible to implant backdoors into neural networks, by which an attacker can choose an input to produce a particular undesirable output (e.g.\ misclassify an image). We propose to use \emph{meta-models}, neural networks that take another network's parameters as input, to detect backdoors directly from model weights. To this end we present a meta-model architecture and train it on a dataset of approx.\ 4000 clean and backdoored CNNs trained on CIFAR-10. Our approach is simple and scalable, and is able to detect the presence of a backdoor with $>99\%$ accuracy when the test trigger pattern is i.i.d., with some success even on out-of-distribution backdoors. | Detecting Backdoors with Meta-Models | [
"Lauro Langosco",
"Neel Alex",
"William Baker",
"David Quarel",
"Herbie Bradley",
"David Krueger"
] | Workshop/BUGS | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=a34bgvner1 | @inproceedings{
deng2024benchmark,
title={Benchmark Probing: Investigating Data Leakage in Large Language Models},
author={Chunyuan Deng and Yilun Zhao and Xiangru Tang and Mark Gerstein and Arman Cohan},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=a34bgvner1}
} | Large language models have consistently demonstrated exceptional performance across a wide range of natural language processing tasks. However, concerns have been raised about whether LLMs rely on benchmark data during their training phase, potentially leading to inflated scores on these benchmarks. This phenomenon, known as data contamination, presents a significant challenge within the context of LLMs. In this paper, we present a novel investigation protocol named $\textbf{T}$estset $\textbf{S}$lot Guessing ($\textbf{TS-Guessing}$) on the knowledge-required benchmarks MMLU and TruthfulQA, designed to estimate the contamination of emerging commercial LLMs. We divide this protocol into two subtasks: (i) $\textit{Question-based}$ setting: guessing the missing portions for long and complex questions in the test set, and (ii) $\textit{Question-Multichoice}$ setting: guessing the missing option given both complicated questions and options. We find that commercial LLMs can surprisingly fill in the absent data and demonstrate a remarkable increase given additional metadata (from 22.28\% to 42.19\% for Claude-instant-1 and from 17.53\% to 29.49\% for GPT-4). | Benchmark Probing: Investigating Data Leakage in Large Language Models | [
"Chunyuan Deng",
"Yilun Zhao",
"Xiangru Tang",
"Mark Gerstein",
"Arman Cohan"
] | Workshop/BUGS | poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
null | https://openreview.net/forum?id=VsyEqsL630 | @inproceedings{
chou2024villandiffusion,
title={VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models},
author={Sheng-Yen Chou and Pin-Yu Chen and Tsung-Yi Ho},
booktitle={NeurIPS 2023 Workshop on Backdoors in Deep Learning - The Good, the Bad, and the Ugly},
year={2024},
url={https://openreview.net/forum?id=VsyEqsL630}
} | Diffusion Models (DMs) are state-of-the-art generative models that learn a reversible corruption process from iterative noise addition and denoising. They are the backbone of many generative AI applications, such as text-to-image conditional generation. However, recent studies have shown that basic unconditional DMs (e.g., DDPM and DDIM) are vulnerable to backdoor injection, a type of output manipulation attack triggered by a maliciously embedded pattern at model input. This paper presents a unified backdoor attack framework (VillanDiffusion) to expand the current scope of backdoor analysis for DMs. Our framework covers mainstream unconditional and conditional DMs (denoising-based and score-based) and various training-free samplers for holistic evaluations. Experiments show that our unified framework facilitates the backdoor analysis of different DM configurations and provides new insights into caption-based backdoor attacks on DMs. | VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models | [
"Sheng-Yen Chou",
"Pin-Yu Chen",
"Tsung-Yi Ho"
] | Workshop/BUGS | oral | 2306.06874 | [
"https://github.com/ibm/villandiffusion"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |