Dataset schema (one record per reference cited by the parent paper; fields within each record are delimited by "|"):

parent_paper_title: string, 63 classes
parent_paper_arxiv_id: string, 63 classes
citation_shorthand: string, length 2–56
raw_citation_text: string, length 9–63
cited_paper_title: string, length 5–161
cited_paper_arxiv_link: string, length 32–37, nullable
cited_paper_abstract: string, length 406–1.92k, nullable
has_metadata: bool, 1 class
is_arxiv_paper: bool, 2 classes
bib_paper_authors: string, length 2–2.44k, nullable
bib_paper_year: float64, range 1.97k–2.03k, nullable
bib_paper_month: string, 16 classes
bib_paper_url: string, length 20–116, nullable
bib_paper_doi: string, 269 classes
bib_paper_journal: string, length 3–148, nullable
original_title: string, length 5–161
search_res_title: string, length 4–122
search_res_url: string, length 22–267
search_res_content: string, length 19–1.92k
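A minimal sketch of how a table with this schema could be loaded and inspected with the Hugging Face `datasets` library. The repository ID below is a placeholder (an assumption), not the actual location of this data.

```python
# Sketch: load a citation-metadata table with the schema above and inspect it.
# The dataset path is a PLACEHOLDER, not a real repository ID.
from datasets import load_dataset

ds = load_dataset("your-username/car-decoding-citations", split="train")  # hypothetical path

row = ds[0]
print(row["parent_paper_title"])   # parent paper the citation comes from
print(row["citation_shorthand"])   # BibTeX-style key used in \cite{...}
print(row["cited_paper_title"])

# Count how many cited papers have an arXiv abstract attached.
n_with_abstract = sum(1 for r in ds if r["cited_paper_abstract"] is not None)
print(f"{n_with_abstract}/{len(ds)} records carry a cited-paper abstract")
```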
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
zengScalableEffectiveGenerative2023b
|
\cite{zengScalableEffectiveGenerative2023b}
|
Scalable and Effective Generative Information Retrieval
|
http://arxiv.org/abs/2311.09134v1
|
Recent research has shown that transformer networks can be used as
differentiable search indexes by representing each document as a sequence of
document ID tokens. These generative retrieval models cast the retrieval
problem to a document ID generation problem for each given query. Despite their
elegant design, existing generative retrieval models only perform well on
artificially-constructed and small-scale collections. This has led to serious
skepticism in the research community on their real-world impact. This paper
represents an important milestone in generative retrieval research by showing,
for the first time, that generative retrieval models can be trained to perform
effectively on large-scale standard retrieval benchmarks. For doing so, we
propose RIPOR, an optimization framework for generative retrieval that can be
adopted by any encoder-decoder architecture. RIPOR is designed based on two
often-overlooked fundamental design considerations in generative retrieval.
First, given the sequential decoding nature of document ID generation,
assigning accurate relevance scores to documents based on the whole document ID
sequence is not sufficient. To address this issue, RIPOR introduces a novel
prefix-oriented ranking optimization algorithm. Second, initial document IDs
should be constructed based on relevance associations between queries and
documents, instead of the syntactic and semantic information in the documents.
RIPOR addresses this issue using a relevance-based document ID construction
approach that quantizes relevance-based representations learned for documents.
Evaluation on MSMARCO and TREC Deep Learning Track reveals that RIPOR surpasses
state-of-the-art generative retrieval models by a large margin (e.g., 30.5% MRR
improvements on MS MARCO Dev Set), and performs on par with popular dense
retrieval models.
| true | true |
Hansi Zeng and Chen Luo and Bowen Jin and Sheikh Muhammad Sarwar and Tianxin Wei and Hamed Zamani
| null | null |
https://doi.org/10.1145/3589334.3645477
|
10.1145/3589334.3645477
| null |
Scalable and Effective Generative Information Retrieval
|
Scalable and Effective Generative Information Retrieval
|
http://arxiv.org/pdf/2311.09134v1
|
Recent research has shown that transformer networks can be used as
differentiable search indexes by representing each document as a sequence of
document ID tokens. These generative retrieval models cast the retrieval
problem to a document ID generation problem for each given query. Despite their
elegant design, existing generative retrieval models only perform well on
artificially-constructed and small-scale collections. This has led to serious
skepticism in the research community on their real-world impact. This paper
represents an important milestone in generative retrieval research by showing,
for the first time, that generative retrieval models can be trained to perform
effectively on large-scale standard retrieval benchmarks. For doing so, we
propose RIPOR, an optimization framework for generative retrieval that can be
adopted by any encoder-decoder architecture. RIPOR is designed based on two
often-overlooked fundamental design considerations in generative retrieval.
First, given the sequential decoding nature of document ID generation,
assigning accurate relevance scores to documents based on the whole document ID
sequence is not sufficient. To address this issue, RIPOR introduces a novel
prefix-oriented ranking optimization algorithm. Second, initial document IDs
should be constructed based on relevance associations between queries and
documents, instead of the syntactic and semantic information in the documents.
RIPOR addresses this issue using a relevance-based document ID construction
approach that quantizes relevance-based representations learned for documents.
Evaluation on MSMARCO and TREC Deep Learning Track reveals that RIPOR surpasses
state-of-the-art generative retrieval models by a large margin (e.g., 30.5% MRR
improvements on MS MARCO Dev Set), and performs on par with popular dense
retrieval models.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
askariFewshotIndexing2024
|
\cite{askariFewshotIndexing2024}
|
Generative Retrieval with Few-shot Indexing
|
http://arxiv.org/abs/2408.02152v1
|
Existing generative retrieval (GR) approaches rely on training-based
indexing, i.e., fine-tuning a model to memorise the associations between a
query and the document identifier (docid) of a relevant document.
Training-based indexing has three limitations: high training overhead,
under-utilization of the pre-trained knowledge of large language models (LLMs),
and challenges in adapting to a dynamic document corpus. To address the above
issues, we propose a novel few-shot indexing-based GR framework (Few-Shot GR).
It has a novel few-shot indexing process, where we prompt an LLM to generate
docids for all documents in a corpus, ultimately creating a docid bank for the
entire corpus. During retrieval, we feed a query to the same LLM and constrain
it to generate a docid within the docid bank created during indexing, and then
map the generated docid back to its corresponding document. Few-Shot GR relies
solely on prompting an LLM without requiring any training, making it more
efficient. Moreover, we devise few-shot indexing with one-to-many mapping to
further enhance Few-Shot GR. Experiments show that Few-Shot GR achieves
superior performance to state-of-the-art GR methods that require heavy
training.
| true | true |
Arian Askari and Chuan Meng and Mohammad Aliannejadi and Zhaochun Ren and Evangelos Kanoulas and Suzan Verberne
| null | null |
https://doi.org/10.48550/arXiv.2408.02152
|
10.48550/ARXIV.2408.02152
|
CoRR
|
Generative Retrieval with Few-shot Indexing
|
(PDF) Generative Retrieval with Few-shot Indexing - ResearchGate
|
https://www.researchgate.net/publication/382884626_Generative_Retrieval_with_Few-shot_Indexing
|
It has a novel few-shot indexing process, where we prompt an LLM to generate docids for all documents in a corpus, ultimately creating a docid
|
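The record above describes constraining the LLM at retrieval time to generate only docids from the docid bank. One common way to implement such a constraint is a prefix trie over tokenized docids; the sketch below is illustrative only, with toy token IDs and a toy docid bank, and is not the paper's implementation.

```python
# Restrict auto-regressive decoding to docids from a fixed docid bank using a
# prefix trie. Token IDs and the toy docid bank are invented for illustration.
from collections import defaultdict

def build_trie(docid_token_seqs):
    """Map every observed prefix to the set of tokens that may follow it."""
    trie = defaultdict(set)
    for seq in docid_token_seqs:
        for i in range(len(seq)):
            trie[tuple(seq[:i])].add(seq[i])
    return trie

def allowed_next_tokens(trie, prefix):
    """Tokens that keep the partial output inside the docid bank."""
    return trie.get(tuple(prefix), set())

# Toy docid bank: each docid is already tokenized into integer token IDs.
docid_bank = [[7, 3, 9], [7, 3, 2], [5, 1]]
trie = build_trie(docid_bank)

print(allowed_next_tokens(trie, []))      # {7, 5}  -> valid first tokens
print(allowed_next_tokens(trie, [7, 3]))  # {9, 2}  -> valid continuations
print(allowed_next_tokens(trie, [7, 4]))  # set()   -> dead end, prune this beam
```

At each decoding step, logits for tokens outside the allowed set would be masked before selecting the next token.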
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
cont-learning-gr2023cikm
|
\cite{cont-learning-gr2023cikm}
|
Continual Learning for Generative Retrieval over Dynamic Corpora
|
http://arxiv.org/abs/2308.14968v1
|
Generative retrieval (GR) directly predicts the identifiers of relevant
documents (i.e., docids) based on a parametric model. It has achieved solid
performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a
static document collection. In many practical scenarios, however, document
collections are dynamic, where new documents are continuously added to the
corpus. The ability to incrementally index new documents while preserving the
ability to answer queries with both previously and newly indexed relevant
documents is vital to applying GR models. In this paper, we address this
practical continual learning problem for GR. We put forward a novel
Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major
contributions to continual learning for GR: (i) To encode new documents into
docids with low computational cost, we present Incremental Product
Quantization, which updates a partial quantization codebook according to two
adaptive thresholds; and (ii) To memorize new documents for querying without
forgetting previous knowledge, we propose a memory-augmented learning
mechanism, to form meaningful connections between old and new documents.
Empirical results demonstrate the effectiveness and efficiency of the proposed
model.
| true | true |
Chen, Jiangui and Zhang, Ruqing and Guo, Jiafeng and de Rijke, Maarten and Chen, Wei and Fan, Yixing and Cheng, Xueqi
| null | null |
https://doi.org/10.1145/3583780.3614821
|
10.1145/3583780.3614821
| null |
Continual Learning for Generative Retrieval over Dynamic Corpora
|
Continual Learning for Generative Retrieval over Dynamic Corpora
|
http://arxiv.org/pdf/2308.14968v1
|
Generative retrieval (GR) directly predicts the identifiers of relevant
documents (i.e., docids) based on a parametric model. It has achieved solid
performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a
static document collection. In many practical scenarios, however, document
collections are dynamic, where new documents are continuously added to the
corpus. The ability to incrementally index new documents while preserving the
ability to answer queries with both previously and newly indexed relevant
documents is vital to applying GR models. In this paper, we address this
practical continual learning problem for GR. We put forward a novel
Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major
contributions to continual learning for GR: (i) To encode new documents into
docids with low computational cost, we present Incremental Product
Quantization, which updates a partial quantization codebook according to two
adaptive thresholds; and (ii) To memorize new documents for querying without
forgetting previous knowledge, we propose a memory-augmented learning
mechanism, to form meaningful connections between old and new documents.
Empirical results demonstrate the effectiveness and efficiency of the proposed
model.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
liu2024robustnessgenerative
|
\cite{liu2024robustnessgenerative}
|
On the Robustness of Generative Information Retrieval Models
|
http://arxiv.org/abs/2412.18768v1
|
Generative information retrieval methods retrieve documents by directly
generating their identifiers. Much effort has been devoted to developing
effective generative IR models. Less attention has been paid to the robustness
of these models. It is critical to assess the out-of-distribution (OOD)
generalization of generative IR models, i.e., how would such models generalize
to new distributions? To answer this question, we focus on OOD scenarios from
four perspectives in retrieval problems: (i) query variations; (ii) unseen query
types; (iii) unseen tasks; and (iv) corpus expansion. Based on this taxonomy, we
conduct empirical studies to analyze the OOD robustness of representative
generative IR models against dense retrieval models. Our empirical results
indicate that the OOD robustness of generative IR models is in need of
improvement. By inspecting the OOD robustness of generative IR models we aim to
contribute to the development of more reliable IR models. The code is available
at \url{https://github.com/Davion-Liu/GR_OOD}.
| true | true |
Yu-An Liu and Ruqing Zhang and Jiafeng Guo and Changjiang Zhou and Maarten de Rijke and Xueqi Cheng
| null | null |
https://arxiv.org/abs/2412.18768
| null | null |
On the Robustness of Generative Information Retrieval Models
|
On the Robustness of Generative Information Retrieval Models
|
http://arxiv.org/pdf/2412.18768v1
|
Generative information retrieval methods retrieve documents by directly
generating their identifiers. Much effort has been devoted to developing
effective generative IR models. Less attention has been paid to the robustness
of these models. It is critical to assess the out-of-distribution (OOD)
generalization of generative IR models, i.e., how would such models generalize
to new distributions? To answer this question, we focus on OOD scenarios from
four perspectives in retrieval problems: (i) query variations; (ii) unseen query
types; (iii) unseen tasks; and (iv) corpus expansion. Based on this taxonomy, we
conduct empirical studies to analyze the OOD robustness of representative
generative IR models against dense retrieval models. Our empirical results
indicate that the OOD robustness of generative IR models is in need of
improvement. By inspecting the OOD robustness of generative IR models we aim to
contribute to the development of more reliable IR models. The code is available
at \url{https://github.com/Davion-Liu/GR_OOD}.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
liuRobustnessGenerativeRetrieval2023
|
\cite{liuRobustnessGenerativeRetrieval2023}
|
On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective
| null | null | true | false |
Yu{-}An Liu and Ruqing Zhang and Jiafeng Guo and Wei Chen and Xueqi Cheng
| null | null |
https://doi.org/10.48550/arXiv.2306.12756
|
10.48550/ARXIV.2306.12756
|
CoRR
|
On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective
|
On the Robustness of Generative Retrieval Models: An Out ...
|
https://arxiv.org/abs/2306.12756
|
arXiv:2306.12756 (cs) — abstract page for "On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective" by Yu-An Liu and 4 other authors.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
leeNonparametricDecodingGenerative2023
|
\cite{leeNonparametricDecodingGenerative2023}
|
Nonparametric Decoding for Generative Retrieval
|
http://arxiv.org/abs/2210.02068v3
|
Since the generative retrieval model depends solely on the information encoded in
its model parameters without external memory, its information capacity is
limited and fixed. To overcome the limitation, we propose Nonparametric
Decoding (Np Decoding) which can be applied to existing generative retrieval
models. Np Decoding uses nonparametric contextualized vocab embeddings
(external memory) rather than vanilla vocab embeddings as decoder vocab
embeddings. By leveraging the contextualized vocab embeddings, the generative
retrieval model is able to utilize both the parametric and nonparametric space.
Evaluation over 9 datasets (8 single-hop and 1 multi-hop) in the document
retrieval task shows that applying Np Decoding to generative retrieval models
significantly improves the performance. We also show that Np Decoding is data-
and parameter-efficient, and shows high performance in the zero-shot setting.
| true | true |
Lee, Hyunji and Kim, JaeYoung and Chang, Hoyeon and Oh, Hanseok and Yang, Sohee and Karpukhin, Vladimir and Lu, Yi and Seo, Minjoon
| null | null | null | null | null |
Nonparametric Decoding for Generative Retrieval
|
Nonparametric Decoding for Generative Retrieval
|
http://arxiv.org/pdf/2210.02068v3
|
Since the generative retrieval model depends solely on the information encoded in
its model parameters without external memory, its information capacity is
limited and fixed. To overcome the limitation, we propose Nonparametric
Decoding (Np Decoding) which can be applied to existing generative retrieval
models. Np Decoding uses nonparametric contextualized vocab embeddings
(external memory) rather than vanilla vocab embeddings as decoder vocab
embeddings. By leveraging the contextualized vocab embeddings, the generative
retrieval model is able to utilize both the parametric and nonparametric space.
Evaluation over 9 datasets (8 single-hop and 1 multi-hop) in the document
retrieval task shows that applying Np Decoding to generative retrieval models
significantly improves the performance. We also show that Np Decoding is data-
and parameter-efficient, and shows high performance in the zero-shot setting.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
yuan2024generative-memory-burden
|
\cite{yuan2024generative-memory-burden}
|
Generative Dense Retrieval: Memory Can Be a Burden
|
http://arxiv.org/abs/2401.10487v1
|
Generative Retrieval (GR), autoregressively decoding relevant document
identifiers given a query, has been shown to perform well under the setting of
small-scale corpora. By memorizing the document corpus with model parameters,
GR implicitly achieves deep interaction between query and document. However,
such a memorizing mechanism faces three drawbacks: (1) Poor memory accuracy for
fine-grained features of documents; (2) Memory confusion gets worse as the
corpus size increases; (3) Huge memory update costs for new documents. To
alleviate these problems, we propose the Generative Dense Retrieval (GDR)
paradigm. Specifically, GDR first uses the limited memory volume to achieve
inter-cluster matching from query to relevant document clusters.
A memorizing-free matching mechanism from Dense Retrieval (DR) is then introduced
to conduct fine-grained intra-cluster matching from clusters to relevant
documents. The coarse-to-fine process maximizes the advantages of GR's deep
interaction and DR's scalability. Besides, we design a cluster identifier
constructing strategy to facilitate corpus memory and a cluster-adaptive
negative sampling strategy to enhance the intra-cluster mapping ability.
Empirical results show that GDR obtains an average of 3.0 R@100 improvement on
NQ dataset under multiple settings and has better scalability.
| true | true |
Peiwen Yuan and Xinglin Wang and Shaoxiong Feng and Boyuan Pan and Yiwei Li and Heda Wang and Xupeng Miao and Kan Li
| null | null |
https://aclanthology.org/2024.eacl-long.173
| null | null |
Generative Dense Retrieval: Memory Can Be a Burden
|
Generative Dense Retrieval: Memory Can Be a Burden
|
http://arxiv.org/pdf/2401.10487v1
|
Generative Retrieval (GR), autoregressively decoding relevant document
identifiers given a query, has been shown to perform well under the setting of
small-scale corpora. By memorizing the document corpus with model parameters,
GR implicitly achieves deep interaction between query and document. However,
such a memorizing mechanism faces three drawbacks: (1) Poor memory accuracy for
fine-grained features of documents; (2) Memory confusion gets worse as the
corpus size increases; (3) Huge memory update costs for new documents. To
alleviate these problems, we propose the Generative Dense Retrieval (GDR)
paradigm. Specifically, GDR first uses the limited memory volume to achieve
inter-cluster matching from query to relevant document clusters.
A memorizing-free matching mechanism from Dense Retrieval (DR) is then introduced
to conduct fine-grained intra-cluster matching from clusters to relevant
documents. The coarse-to-fine process maximizes the advantages of GR's deep
interaction and DR's scalability. Besides, we design a cluster identifier
constructing strategy to facilitate corpus memory and a cluster-adaptive
negative sampling strategy to enhance the intra-cluster mapping ability.
Empirical results show that GDR obtains an average of 3.0 R@100 improvement on
NQ dataset under multiple settings and has better scalability.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
wangNOVOLearnableInterpretable2023
|
\cite{wangNOVOLearnableInterpretable2023}
|
NOVO: Learnable and Interpretable Document Identifiers for Model-Based IR
| null | null | true | false |
Wang, Zihan and Zhou, Yujia and Tu, Yiteng and Dou, Zhicheng
| null | null |
https://doi.org/10.1145/3583780.3614993
|
10.1145/3583780.3614993
| null |
NOVO: Learnable and Interpretable Document Identifiers for Model-Based IR
|
Learnable and Interpretable Document Identifiers for Model ...
|
https://www.researchgate.net/publication/374903378_NOVO_Learnable_and_Interpretable_Document_Identifiers_for_Model-Based_IR
|
NOVO [389] introduces learnable continuous N-gram DocIDs, refining embeddings through query denoising and retrieval tasks. LMIndexer [153] generates neural
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
kishoreIncDSI2023
|
\cite{kishoreIncDSI2023}
|
IncDSI: Incrementally Updatable Document Retrieval
|
http://arxiv.org/abs/2307.10323v2
|
Differentiable Search Index is a recently proposed paradigm for document
retrieval, that encodes information about a corpus of documents within the
parameters of a neural network and directly maps queries to corresponding
documents. These models have achieved state-of-the-art performances for
document retrieval across many benchmarks. These kinds of models have a
significant limitation: it is not easy to add new documents after a model is
trained. We propose IncDSI, a method to add documents in real time (about
20-50ms per document), without retraining the model on the entire dataset (or
even parts thereof). Instead we formulate the addition of documents as a
constrained optimization problem that makes minimal changes to the network
parameters. Although orders of magnitude faster, our approach is competitive
with re-training the model on the whole dataset and enables the development of
document retrieval systems that can be updated with new information in
real-time. Our code for IncDSI is available at
https://github.com/varshakishore/IncDSI.
| true | true |
Kishore, Varsha and Wan, Chao and Lovelace, Justin and Artzi, Yoav and Weinberger, Kilian Q.
| null | null | null | null | null |
IncDSI: Incrementally Updatable Document Retrieval
|
IncDSI: Incrementally Updatable Document Retrieval
|
http://arxiv.org/pdf/2307.10323v2
|
Differentiable Search Index is a recently proposed paradigm for document
retrieval, that encodes information about a corpus of documents within the
parameters of a neural network and directly maps queries to corresponding
documents. These models have achieved state-of-the-art performances for
document retrieval across many benchmarks. These kinds of models have a
significant limitation: it is not easy to add new documents after a model is
trained. We propose IncDSI, a method to add documents in real time (about
20-50ms per document), without retraining the model on the entire dataset (or
even parts thereof). Instead we formulate the addition of documents as a
constrained optimization problem that makes minimal changes to the network
parameters. Although orders of magnitude faster, our approach is competitive
with re-training the model on the whole dataset and enables the development of
document retrieval systems that can be updated with new information in
real-time. Our code for IncDSI is available at
https://github.com/varshakishore/IncDSI.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
mehtaDSIpp2023
|
\cite{mehtaDSIpp2023}
|
{DSI}++: Updating Transformer Memory with New Documents
| null | null | true | false |
Mehta, Sanket Vaibhav and Gupta, Jai and Tay, Yi and Dehghani, Mostafa and Tran, Vinh Q. and Rao, Jinfeng and Najork, Marc and Strubell, Emma and Metzler, Donald
| null | null |
https://aclanthology.org/2023.emnlp-main.510/
|
10.18653/v1/2023.emnlp-main.510
| null |
{DSI}++: Updating Transformer Memory with New Documents
|
DSI++: Updating Transformer Memory with New Documents
|
https://aclanthology.org/2023.emnlp-main.510/
|
DSI++: Updating Transformer Memory with New Documents (Mehta et al., EMNLP 2023). Anthology ID: 2023.emnlp-main.510; Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, December 2023, pages 8198–8213; Association for Computational Linguistics; DOI: 10.18653/v1/2023.emnlp-main.510; PDF: https://aclanthology.org/2023.emnlp-main.510.pdf
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
guoContinualGenerative2024
|
\cite{guoContinualGenerative2024}
|
CorpusBrain++: A Continual Generative Pre-Training Framework for Knowledge-Intensive Language Tasks
| null | null | true | false |
Jiafeng Guo and Changjiang Zhou and Ruqing Zhang and Jiangui Chen and Maarten de Rijke and Yixing Fan and Xueqi Cheng
| null | null |
https://arxiv.org/abs/2402.16767
| null | null |
CorpusBrain++: A Continual Generative Pre-Training Framework for Knowledge-Intensive Language Tasks
|
[2402.16767] CorpusBrain++: A Continual Generative Pre-Training ...
|
https://arxiv.org/abs/2402.16767
|
arXiv:2402.16767 — abstract page for "CorpusBrain++: A Continual Generative Pre-Training Framework for Knowledge-Intensive Language Tasks" by Jiafeng Guo and 5 other authors.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
ahmedNeuroSymbolicLearning2023
|
\cite{ahmedNeuroSymbolicLearning2023}
|
Semantic Strengthening of Neuro-Symbolic Learning
|
http://arxiv.org/abs/2302.14207v1
|
Numerous neuro-symbolic approaches have recently been proposed typically with
the goal of adding symbolic knowledge to the output layer of a neural network.
Ideally, such losses maximize the probability that the neural network's
predictions satisfy the underlying domain. Unfortunately, this type of
probabilistic inference is often computationally infeasible. Neuro-symbolic
approaches therefore commonly resort to fuzzy approximations of this
probabilistic objective, sacrificing sound probabilistic semantics, or to
sampling which is very seldom feasible. We approach the problem by first
assuming the constraint decomposes conditioned on the features learned by the
network. We iteratively strengthen our approximation, restoring the dependence
between the constraints most responsible for degrading the quality of the
approximation. This corresponds to computing the mutual information between
pairs of constraints conditioned on the network's learned features, and may be
construed as a measure of how well aligned the gradients of two distributions
are. We show how to compute this efficiently for tractable circuits. We test
our approach on three tasks: predicting a minimum-cost path in Warcraft,
predicting a minimum-cost perfect matching, and solving Sudoku puzzles,
observing that it improves upon the baselines while sidestepping
intractability.
| true | true |
Ahmed, Kareem and Chang, Kai-Wei and Van den Broeck, Guy
| null |
25--27 Apr
|
https://proceedings.mlr.press/v206/ahmed23a.html
| null | null |
Semantic Strengthening of Neuro-Symbolic Learning
|
[PDF] Semantic Strengthening of Neuro-Symbolic Learning
|
https://proceedings.mlr.press/v206/ahmed23a/ahmed23a.pdf
|
Neuro-symbolic learning aims to add symbolic knowledge to neural networks, using a probabilistic approach to scale inference while retaining sound semantics.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
mustafaStrcutredOutputPrediction2021
|
\cite{mustafaStrcutredOutputPrediction2021}
|
Fine-grained Generalization Analysis of Structured Output Prediction
|
http://arxiv.org/abs/2106.00115v1
|
In machine learning we often encounter structured output prediction problems
(SOPPs), i.e. problems where the output space admits a rich internal structure.
Application domains where SOPPs naturally occur include natural language
processing, speech recognition, and computer vision. Typical SOPPs have an
extremely large label set, which grows exponentially as a function of the size
of the output. Existing generalization analysis implies generalization bounds
with at least a square-root dependency on the cardinality $d$ of the label set,
which can be vacuous in practice. In this paper, we significantly improve the
state of the art by developing novel high-probability bounds with a logarithmic
dependency on $d$. Moreover, we leverage the lens of algorithmic stability to
develop generalization bounds in expectation without any dependency on $d$. Our
results therefore build a solid theoretical foundation for learning in
large-scale SOPPs. Furthermore, we extend our results to learning with weakly
dependent data.
| true | true |
Mustafa, Waleed and Lei, Yunwen and Ledent, Antoine and Kloft, Marius
| null | null |
https://doi.org/10.24963/ijcai.2021/391
|
10.24963/ijcai.2021/391
| null |
Fine-grained Generalization Analysis of Structured Output Prediction
|
[PDF] Fine-grained Generalization Analysis of Structured Output Prediction
|
https://www.ijcai.org/proceedings/2021/0391.pdf
|
We consider two popular methods for structured output prediction: stochastic gradient descent (SGD) and regularized risk minimization (RRM). We adapt the
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
nishinoGeneralizationAnalysisLearning2022a
|
\cite{nishinoGeneralizationAnalysisLearning2022a}
|
Generalization Analysis on Learning with a Concurrent Verifier
|
http://arxiv.org/abs/2210.05331v1
|
Machine learning technologies have been used in a wide range of practical
systems. In practical situations, it is natural to expect the input-output
pairs of a machine learning model to satisfy some requirements. However, it is
difficult to obtain a model that satisfies requirements by just learning from
examples. A simple solution is to add a module that checks whether the
input-output pairs meet the requirements and then modifies the model's outputs.
Such a module, which we call a {\em concurrent verifier} (CV), can give a
certification, although how the generalizability of the machine learning model
changes using a CV is unclear. This paper gives a generalization analysis of
learning with a CV. We analyze how the learnability of a machine learning model
changes with a CV and show a condition where we can obtain a guaranteed
hypothesis using a verifier only in the inference time. We also show that
typical error bounds based on Rademacher complexity will be no larger than that
of the original model when using a CV in multi-class classification and
structured prediction settings.
| true | true |
Nishino, Masaaki and Nakamura, Kengo and Yasuda, Norihito
| null | null | null | null | null |
Generalization Analysis on Learning with a Concurrent Verifier
|
Generalization Analysis on Learning with a Concurrent Verifier
|
http://arxiv.org/pdf/2210.05331v1
|
Machine learning technologies have been used in a wide range of practical
systems. In practical situations, it is natural to expect the input-output
pairs of a machine learning model to satisfy some requirements. However, it is
difficult to obtain a model that satisfies requirements by just learning from
examples. A simple solution is to add a module that checks whether the
input-output pairs meet the requirements and then modifies the model's outputs.
Such a module, which we call a {\em concurrent verifier} (CV), can give a
certification, although how the generalizability of the machine learning model
changes using a CV is unclear. This paper gives a generalization analysis of
learning with a CV. We analyze how the learnability of a machine learning model
changes with a CV and show a condition where we can obtain a guaranteed
hypothesis using a verifier only in the inference time. We also show that
typical error bounds based on Rademacher complexity will be no larger than that
of the original model when using a CV in multi-class classification and
structured prediction settings.
|
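The record above describes a concurrent verifier: a module that checks the model's output against a requirement at inference time and modifies the output if the requirement is violated. The sketch below is a minimal illustration under toy assumptions (invented labels, scores, and requirement); it is not the paper's construction.

```python
# Check candidate outputs against a requirement at inference time and return
# the best-scoring output that satisfies it (the input is held fixed here).
def verified_predict(scores, labels, is_allowed):
    # Rank labels from best to worst score, then keep the first allowed one.
    ranked = sorted(zip(scores, labels), reverse=True)
    for _, label in ranked:
        if is_allowed(label):
            return label
    raise ValueError("no candidate output satisfies the requirement")

labels = ["buy", "sell", "hold"]
scores = [0.7, 0.2, 0.1]
# Toy requirement (e.g., a compliance rule): the output must not be "buy".
print(verified_predict(scores, labels, lambda y: y != "buy"))  # -> "sell"
```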
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
nishinoUnderstandingCV2025
|
\cite{nishinoUnderstandingCV2025}
|
Understanding the impact of introducing constraints at inference time on generalization error
| null | null | true | false |
Nishino, Masaaki and Nakamura, Kengo and Yasuda, Norihito
| null | null | null | null | null |
Understanding the impact of introducing constraints at inference time on generalization error
|
[PDF] Understanding the Impact of Introducing Constraints at Inference ...
|
https://raw.githubusercontent.com/mlresearch/v235/main/assets/nishino24a/nishino24a.pdf
|
This paper analyses how the generalization error bounds change when we only put constraints in the inference time. Our main finding is that a class of loss
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
zhangSurveyControllableText2023
|
\cite{zhangSurveyControllableText2023}
|
A Survey of Controllable Text Generation using Transformer-based
Pre-trained Language Models
|
http://arxiv.org/abs/2201.05337v5
|
Controllable Text Generation (CTG) is an emerging area in the field of natural
language generation (NLG). It is regarded as crucial for the development of
advanced text generation technologies that better meet the specific constraints
in practical applications. In recent years, methods using large-scale
pre-trained language models (PLMs), in particular the widely used
transformer-based PLMs, have become a new paradigm of NLG, allowing generation
of more diverse and fluent text. However, due to the limited level of
interpretability of deep neural networks, the controllability of these methods
needs to be guaranteed. To this end, controllable text generation using
transformer-based PLMs has become a rapidly growing yet challenging new
research hotspot. A diverse range of approaches have emerged in the recent 3-4
years, targeting different CTG tasks that require different types of controlled
constraints. In this paper, we present a systematic critical review on the
common tasks, main approaches, and evaluation methods in this area. Finally, we
discuss the challenges that the field is facing, and put forward various
promising future directions. To the best of our knowledge, this is the first
survey paper to summarize the state-of-the-art CTG techniques from the
perspective of Transformer-based PLMs. We hope it can help researchers and
practitioners in the related fields to quickly track the academic and
technological frontier, providing them with a landscape of the area and a
roadmap for future research.
| true | true |
Zhang, Hanqing and Song, Haolin and Li, Shaoyu and Zhou, Ming and Song, Dawei
| null | null |
https://doi.org/10.1145/3617680
|
10.1145/3617680
|
ACM Comput. Surv.
|
A Survey of Controllable Text Generation using Transformer-based
Pre-trained Language Models
|
A Survey of Controllable Text Generation Using Transformer-based ...
|
https://dl.acm.org/doi/10.1145/3617680
|
This article is closely related to two key aspects: controllable text generation and pre-trained language models, which will be briefly introduced in this
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
mireshghallahControllableTextGeneration2022
|
\cite{mireshghallahControllableTextGeneration2022}
|
Mix and Match: Learning-free Controllable Text Generation using Energy
Language Models
|
http://arxiv.org/abs/2203.13299v2
|
Recent work on controlled text generation has either required attribute-based
fine-tuning of the base language model (LM), or has restricted the
parameterization of the attribute discriminator to be compatible with the base
autoregressive LM. In this work, we propose Mix and Match LM, a global
score-based alternative for controllable text generation that combines
arbitrary pre-trained black-box models for achieving the desired attributes in
the generated text without involving any fine-tuning or structural assumptions
about the black-box models. We interpret the task of controllable generation as
drawing samples from an energy-based model whose energy values are a linear
combination of scores from black-box models that are separately responsible for
fluency, the control attribute, and faithfulness to any conditioning context.
We use a Metropolis-Hastings sampling scheme to sample from this energy-based
model using bidirectional context and global attribute features. We validate
the effectiveness of our approach on various controlled generation and
style-based text revision tasks by outperforming recently proposed methods that
involve extra training, fine-tuning, or restrictive assumptions over the form
of models.
| true | true |
Mireshghallah, Fatemehsadat and Goyal, Kartik and Berg-Kirkpatrick, Taylor
| null | null |
https://aclanthology.org/2022.acl-long.31/
|
10.18653/v1/2022.acl-long.31
| null |
Mix and Match: Learning-free Controllable Text Generation using Energy
Language Models
|
Mix and Match: Learning-free Controllable Text Generation ...
|
https://cseweb.ucsd.edu/~fmireshg/acl2022_mix_match.pdf
|
by F Mireshghallah · Cited by 86 — We interpret the task of controllable generation as drawing samples from an energy-based model whose energy values are a linear combination of scores from black
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
mudgalControlledDecoding2025
|
\cite{mudgalControlledDecoding2025}
|
Controlled Decoding from Language Models
|
http://arxiv.org/abs/2310.17022v3
|
KL-regularized reinforcement learning (RL) is a popular alignment framework
to control the language model responses towards high reward outcomes. We pose a
tokenwise RL objective and propose a modular solver for it, called controlled
decoding (CD). CD exerts control through a separate prefix scorer module, which
is trained to learn a value function for the reward. The prefix scorer is used
at inference time to control the generation from a frozen base model, provably
sampling from a solution to the RL objective. We empirically demonstrate that
CD is effective as a control mechanism on popular benchmarks. We also show that
prefix scorers for multiple rewards may be combined at inference time,
effectively solving a multi-objective RL problem with no additional training.
We show that the benefits of applying CD transfer to an unseen base model with
no further tuning as well. Finally, we show that CD can be applied in a
blockwise decoding fashion at inference-time, essentially bridging the gap
between the popular best-of-K strategy and tokenwise control through
reinforcement learning. This makes CD a promising approach for alignment of
language models.
| true | true |
Mudgal, Sidharth and Lee, Jong and Ganapathy, Harish and Li, YaGuang and Wang, Tao and Huang, Yanping and Chen, Zhifeng and Cheng, Heng-Tze and Collins, Michael and Strohman, Trevor and Chen, Jilin and Beutel, Alex and Beirami, Ahmad
| null | null | null | null | null |
Controlled Decoding from Language Models
|
Controlled Decoding from Language Models
|
http://arxiv.org/pdf/2310.17022v3
|
KL-regularized reinforcement learning (RL) is a popular alignment framework
to control the language model responses towards high reward outcomes. We pose a
tokenwise RL objective and propose a modular solver for it, called controlled
decoding (CD). CD exerts control through a separate prefix scorer module, which
is trained to learn a value function for the reward. The prefix scorer is used
at inference time to control the generation from a frozen base model, provably
sampling from a solution to the RL objective. We empirically demonstrate that
CD is effective as a control mechanism on popular benchmarks. We also show that
prefix scorers for multiple rewards may be combined at inference time,
effectively solving a multi-objective RL problem with no additional training.
We show that the benefits of applying CD transfer to an unseen base model with
no further tuning as well. Finally, we show that CD can be applied in a
blockwise decoding fashion at inference-time, essentially bridging the gap
between the popular best-of-K strategy and tokenwise control through
reinforcement learning. This makes CD a promising approach for alignment of
language models.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
kimCriticGuidedDecoding2023
|
\cite{kimCriticGuidedDecoding2023}
|
Critic-Guided Decoding for Controlled Text Generation
|
http://arxiv.org/abs/2212.10938v1
|
Steering language generation towards objectives or away from undesired
content has been a long-standing goal in utilizing language models (LM). Recent
work has demonstrated reinforcement learning and weighted decoding as effective
approaches to achieve a higher level of language control and quality with pros
and cons. In this work, we propose a novel critic decoding method for
controlled language generation (CriticControl) that combines the strengths of
reinforcement learning and weighted decoding. Specifically, we adopt the
actor-critic framework to train an LM-steering critic from non-differentiable
reward models. And similar to weighted decoding, our method freezes the
language model and manipulates the output token distribution using the
critic, improving training efficiency and stability. Evaluation of our method
on three controlled generation tasks, namely topic control, sentiment control,
and detoxification, shows that our approach generates more coherent and
well-controlled texts than previous methods. In addition, CriticControl
demonstrates superior generalization ability in zero-shot settings. Human
evaluation studies also corroborate our findings.
| true | true |
Kim, Minbeom and Lee, Hwanhee and Yoo, Kang Min and Park, Joonsuk and Lee, Hwaran and Jung, Kyomin
| null | null |
https://aclanthology.org/2023.findings-acl.281/
|
10.18653/v1/2023.findings-acl.281
| null |
Critic-Guided Decoding for Controlled Text Generation
|
[2212.10938] Critic-Guided Decoding for Controlled Text Generation
|
https://arxiv.org/abs/2212.10938
|
arXiv:2212.10938 — abstract page for "Critic-Guided Decoding for Controlled Text Generation" by Minbeom Kim and 5 other authors: "In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding."
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
chakrabortyPrincipledDecodingLLM2024
|
\cite{chakrabortyPrincipledDecodingLLM2024}
|
Transfer Q Star: Principled Decoding for LLM Alignment
|
http://arxiv.org/abs/2405.20495v1
|
Aligning foundation models is essential for their safe and trustworthy
deployment. However, traditional fine-tuning methods are computationally
intensive and require updating billions of model parameters. A promising
alternative, alignment via decoding, adjusts the response distribution directly
without model updates to maximize a target reward $r$, thus providing a
lightweight and adaptable framework for alignment. However, principled decoding
methods rely on oracle access to an optimal Q-function ($Q^*$), which is often
unavailable in practice. Hence, prior SoTA methods either approximate this
$Q^*$ using $Q^{\pi_{\texttt{sft}}}$ (derived from the reference $\texttt{SFT}$
model) or rely on short-term rewards, resulting in sub-optimal decoding
performance. In this work, we propose Transfer $Q^*$, which implicitly
estimates the optimal value function for a target reward $r$ through a baseline
model $\rho_{\texttt{BL}}$ aligned with a baseline reward $r_{\texttt{BL}}$
(which can be different from the target reward $r$). Theoretical analyses of
Transfer $Q^*$ provide a rigorous characterization of its optimality, deriving
an upper bound on the sub-optimality gap and identifying a hyperparameter to
control the deviation from the pre-trained reference $\texttt{SFT}$ model based
on user needs. Our approach significantly reduces the sub-optimality gap
observed in prior SoTA methods and demonstrates superior empirical performance
across key metrics such as coherence, diversity, and quality in extensive tests
on several synthetic and real datasets.
| true | true |
Chakraborty, Souradip and Ghosal, Soumya Suvra and Yin, Ming and Manocha, Dinesh and Wang, Mengdi and Bedi, Amrit Singh and Huang, Furong
| null | null | null | null |
arXiv preprint arXiv:2405.20495
|
Transfer Q Star: Principled Decoding for LLM Alignment
|
Transfer Q Star: Principled Decoding for LLM Alignment
|
http://arxiv.org/pdf/2405.20495v1
|
Aligning foundation models is essential for their safe and trustworthy
deployment. However, traditional fine-tuning methods are computationally
intensive and require updating billions of model parameters. A promising
alternative, alignment via decoding, adjusts the response distribution directly
without model updates to maximize a target reward $r$, thus providing a
lightweight and adaptable framework for alignment. However, principled decoding
methods rely on oracle access to an optimal Q-function ($Q^*$), which is often
unavailable in practice. Hence, prior SoTA methods either approximate this
$Q^*$ using $Q^{\pi_{\texttt{sft}}}$ (derived from the reference $\texttt{SFT}$
model) or rely on short-term rewards, resulting in sub-optimal decoding
performance. In this work, we propose Transfer $Q^*$, which implicitly
estimates the optimal value function for a target reward $r$ through a baseline
model $\rho_{\texttt{BL}}$ aligned with a baseline reward $r_{\texttt{BL}}$
(which can be different from the target reward $r$). Theoretical analyses of
Transfer $Q^*$ provide a rigorous characterization of its optimality, deriving
an upper bound on the sub-optimality gap and identifying a hyperparameter to
control the deviation from the pre-trained reference $\texttt{SFT}$ model based
on user needs. Our approach significantly reduces the sub-optimality gap
observed in prior SoTA methods and demonstrates superior empirical performance
across key metrics such as coherence, diversity, and quality in extensive tests
on several synthetic and real datasets.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
kimGuaranteedGenerationLarge2024
|
\cite{kimGuaranteedGenerationLarge2024}
|
Guaranteed Generation from Large Language Models
|
http://arxiv.org/abs/2410.06716v2
|
As large language models (LLMs) are increasingly used across various
applications, there is a growing need to control text generation to satisfy
specific constraints or requirements. This raises a crucial question: Is it
possible to guarantee strict constraint satisfaction in generated outputs while
preserving the distribution of the original model as much as possible? We first
define the ideal distribution - the one closest to the original model, which
also always satisfies the expressed constraint - as the ultimate goal of
guaranteed generation. We then state a fundamental limitation, namely that it
is impossible to reach that goal through autoregressive training alone. This
motivates the necessity of combining training-time and inference-time methods
to enforce such guarantees. Based on this insight, we propose GUARD, a simple
yet effective approach that combines an autoregressive proposal distribution
with rejection sampling. Through GUARD's theoretical properties, we show how
controlling the KL divergence between a specific proposal and the target ideal
distribution simultaneously optimizes inference speed and distributional
closeness. To validate these theoretical concepts, we conduct extensive
experiments on two text generation settings with hard-to-satisfy constraints: a
lexical constraint scenario and a sentiment reversal scenario. These
experiments show that GUARD achieves perfect constraint satisfaction while
almost preserving the ideal distribution with highly improved inference
efficiency. GUARD provides a principled approach to enforcing strict guarantees
for LLMs without compromising their generative capabilities.
| true | true |
Minbeom Kim and Thibaut Thonet and Jos Rozen and Hwaran Lee and Kyomin Jung and Marc Dymetman
| null | null |
https://arxiv.org/abs/2410.06716
| null | null |
Guaranteed Generation from Large Language Models
|
Guaranteed Generation from Large Language Models
|
http://arxiv.org/pdf/2410.06716v2
|
As large language models (LLMs) are increasingly used across various
applications, there is a growing need to control text generation to satisfy
specific constraints or requirements. This raises a crucial question: Is it
possible to guarantee strict constraint satisfaction in generated outputs while
preserving the distribution of the original model as much as possible? We first
define the ideal distribution - the one closest to the original model, which
also always satisfies the expressed constraint - as the ultimate goal of
guaranteed generation. We then state a fundamental limitation, namely that it
is impossible to reach that goal through autoregressive training alone. This
motivates the necessity of combining training-time and inference-time methods
to enforce such guarantees. Based on this insight, we propose GUARD, a simple
yet effective approach that combines an autoregressive proposal distribution
with rejection sampling. Through GUARD's theoretical properties, we show how
controlling the KL divergence between a specific proposal and the target ideal
distribution simultaneously optimizes inference speed and distributional
closeness. To validate these theoretical concepts, we conduct extensive
experiments on two text generation settings with hard-to-satisfy constraints: a
lexical constraint scenario and a sentiment reversal scenario. These
experiments show that GUARD achieves perfect constraint satisfaction while
almost preserving the ideal distribution with highly improved inference
efficiency. GUARD provides a principled approach to enforcing strict guarantees
for LLMs without compromising their generative capabilities.
|
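The record above describes combining an autoregressive proposal distribution with rejection sampling to guarantee constraint satisfaction. Below is a minimal sketch of that generic propose-and-reject loop, with a toy sampler and constraint standing in for the real model and requirement; it is not GUARD itself.

```python
# Draw from a proposal sampler and keep only outputs that satisfy a hard
# constraint. The sampler and constraint are toy stand-ins for illustration.
import random
from typing import Callable

def guaranteed_sample(propose: Callable[[], str],
                      satisfies: Callable[[str], bool],
                      max_tries: int = 1000) -> str:
    for _ in range(max_tries):
        candidate = propose()
        if satisfies(candidate):
            return candidate
    raise RuntimeError("no constraint-satisfying sample within budget")

# Toy example: the "LLM" emits random three-word strings; the constraint
# requires the keyword "retrieval" to appear.
vocab = ["generative", "retrieval", "decoding", "index", "query"]
propose = lambda: " ".join(random.choice(vocab) for _ in range(3))
print(guaranteed_sample(propose, lambda s: "retrieval" in s))
```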
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
honghuaLogicalControl2024
|
\cite{honghuaLogicalControl2024}
|
Adaptable Logical Control for Large Language Models
|
http://arxiv.org/abs/2406.13892v2
|
Despite the success of Large Language Models (LLMs) on various tasks
following human instructions, controlling model generation at inference time
poses a persistent challenge. In this paper, we introduce Ctrl-G, an adaptable
framework that facilitates tractable and flexible control of LLM generation to
reliably follow logical constraints. Ctrl-G combines any production-ready LLM
with a Hidden Markov Model, enabling LLM outputs to adhere to logical
constraints represented as deterministic finite automata. We show that Ctrl-G,
when applied to a TULU2-7B model, outperforms GPT3.5 and GPT4 on the task of
interactive text editing: specifically, for the task of generating text
insertions/continuations following logical constraints, Ctrl-G achieves over
30% higher satisfaction rate in human evaluation compared to GPT4. When applied
to medium-size language models (e.g., GPT2-large), Ctrl-G also beats its
counterparts for constrained generation by large margins on standard
benchmarks. Additionally, as a proof-of-concept study, we experiment Ctrl-G on
the Grade School Math benchmark to assist LLM reasoning, foreshadowing the
application of Ctrl-G, as well as other constrained generation approaches,
beyond traditional language generation tasks.
| true | true |
Honghua Zhang and Po-Nien Kung and Masahiro Yoshida and Guy Van den Broeck and Nanyun Peng
| null | null |
https://openreview.net/forum?id=58X9v92zRd
| null | null |
Adaptable Logical Control for Large Language Models
|
Adaptable Logical Control for Large Language Models
|
http://arxiv.org/pdf/2406.13892v2
|
Despite the success of Large Language Models (LLMs) on various tasks
following human instructions, controlling model generation at inference time
poses a persistent challenge. In this paper, we introduce Ctrl-G, an adaptable
framework that facilitates tractable and flexible control of LLM generation to
reliably follow logical constraints. Ctrl-G combines any production-ready LLM
with a Hidden Markov Model, enabling LLM outputs to adhere to logical
constraints represented as deterministic finite automata. We show that Ctrl-G,
when applied to a TULU2-7B model, outperforms GPT3.5 and GPT4 on the task of
interactive text editing: specifically, for the task of generating text
insertions/continuations following logical constraints, Ctrl-G achieves over
30% higher satisfaction rate in human evaluation compared to GPT4. When applied
to medium-size language models (e.g., GPT2-large), Ctrl-G also beats its
counterparts for constrained generation by large margins on standard
benchmarks. Additionally, as a proof-of-concept study, we experiment Ctrl-G on
the Grade School Math benchmark to assist LLM reasoning, foreshadowing the
application of Ctrl-G, as well as other constrained generation approaches,
beyond traditional language generation tasks.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
zhangTractableControlAutoregressive2023
|
\cite{zhangTractableControlAutoregressive2023}
|
Tractable Control for Autoregressive Language Generation
|
http://arxiv.org/abs/2304.07438v4
|
Despite the success of autoregressive large language models in text
generation, it remains a major challenge to generate text that satisfies
complex constraints: sampling from the conditional distribution
${\Pr}(\text{text} | \alpha)$ is intractable for even the simplest lexical
constraints $\alpha$. To overcome this challenge, we propose to use tractable
probabilistic models (TPMs) to impose lexical constraints in autoregressive
text generation models, which we refer to as GeLaTo (Generating Language with
Tractable Constraints). To demonstrate the effectiveness of this framework, we
use distilled hidden Markov models, where we can efficiently compute
${\Pr}(\text{text} | \alpha)$, to guide autoregressive generation from GPT2.
GeLaTo achieves state-of-the-art performance on challenging benchmarks for
constrained text generation (e.g., CommonGen), beating various strong baselines
by a large margin. Our work not only opens up new avenues for controlling large
language models but also motivates the development of more expressive TPMs.
| true | true |
Zhang, Honghua and Dang, Meihua and Peng, Nanyun and Van Den Broeck, Guy
| null | null | null | null | null |
Tractable Control for Autoregressive Language Generation
|
Tractable Control for Autoregressive Language Generation
|
http://arxiv.org/pdf/2304.07438v4
|
Despite the success of autoregressive large language models in text
generation, it remains a major challenge to generate text that satisfies
complex constraints: sampling from the conditional distribution
${\Pr}(\text{text} | \alpha)$ is intractable for even the simplest lexical
constraints $\alpha$. To overcome this challenge, we propose to use tractable
probabilistic models (TPMs) to impose lexical constraints in autoregressive
text generation models, which we refer to as GeLaTo (Generating Language with
Tractable Constraints). To demonstrate the effectiveness of this framework, we
use distilled hidden Markov models, where we can efficiently compute
${\Pr}(\text{text} | \alpha)$, to guide autoregressive generation from GPT2.
GeLaTo achieves state-of-the-art performance on challenging benchmarks for
constrained text generation (e.g., CommonGen), beating various strong baselines
by a large margin. Our work not only opens up new avenues for controlling large
language models but also motivates the development of more expressive TPMs.
|
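The GeLaTo record above rests on one reweighting identity: a tractable probabilistic model that can compute the probability of still satisfying the lexical constraint $\alpha$ given the current prefix can be used to correct the base LM's next-token distribution. A hedged restatement of that decoding rule in generic notation (the paper's exact formulation may differ in details):

```latex
% Constrained next-token distribution: reweight the base LM by the HMM's
% estimate that the lexical constraint \alpha can still be satisfied.
\Pr(x_t \mid x_{<t}, \alpha) \;\propto\;
  \Pr_{\mathrm{LM}}(x_t \mid x_{<t}) \,\cdot\, \Pr_{\mathrm{HMM}}(\alpha \mid x_{\le t})
```

The first factor comes from the autoregressive LM (GPT-2 in the record above); the second is exactly the quantity the abstract describes as efficiently computable with a distilled hidden Markov model but intractable for the LM itself.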
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
liTreeIndexDenseRetrieval2023
|
\cite{liTreeIndexDenseRetrieval2023}
|
Constructing Tree-based Index for Efficient and Effective Dense
Retrieval
|
http://arxiv.org/abs/2304.11943v1
|
Recent studies have shown that Dense Retrieval (DR) techniques can
significantly improve the performance of first-stage retrieval in IR systems.
Despite its empirical effectiveness, the application of DR is still limited. In
contrast to statistic retrieval models that rely on highly efficient inverted
index solutions, DR models build dense embeddings that are difficult to be
pre-processed with most existing search indexing systems. To avoid the
expensive cost of brute-force search, the Approximate Nearest Neighbor (ANN)
algorithm and corresponding indexes are widely applied to speed up the
inference process of DR models. Unfortunately, while ANN can improve the
efficiency of DR models, it usually comes with a significant price on retrieval
performance.
To solve this issue, we propose JTR, which stands for Joint optimization of
TRee-based index and query encoding. Specifically, we design a new unified
contrastive learning loss to train tree-based index and query encoder in an
end-to-end manner. The tree-based negative sampling strategy is applied to make
the tree have the maximum heap property, which supports the effectiveness of
beam search well. Moreover, we treat the cluster assignment as an optimization
problem to update the tree-based index that allows overlapped clustering. We
evaluate JTR on numerous popular retrieval benchmarks. Experimental results
show that JTR achieves better retrieval performance while retaining high system
efficiency compared with widely-adopted baselines. It provides a potential
solution to balance efficiency and effectiveness in neural retrieval system
designs.
| true | true |
Li, Haitao and Ai, Qingyao and Zhan, Jingtao and Mao, Jiaxin and Liu, Yiqun and Liu, Zheng and Cao, Zhao
| null | null |
https://doi.org/10.1145/3539618.3591651
|
10.1145/3539618.3591651
| null |
Constructing Tree-based Index for Efficient and Effective Dense
Retrieval
|
Constructing Tree-based Index for Efficient and Effective ...
|
https://arxiv.org/abs/2304.11943
|
by H Li · 2023 · Cited by 29 — The tree-based negative sampling strategy is applied to make the tree have the maximum heap property, which supports the effectiveness of beam ...
|
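The JTR record above relies on standard beam search over a tree whose internal nodes carry trainable scorers; the max-heap property is what makes pruning safe. The sketch below shows only that traversal, with dot-product scorers over random node embeddings standing in for JTR's jointly trained index and query encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, DEPTH, BRANCH = 32, 4, 4  # 4^4 = 256 leaves

# A complete tree stored implicitly: node (level, idx) has children (level+1, idx*BRANCH + j).
node_emb = {
    (lvl, idx): rng.normal(size=DIM)
    for lvl in range(1, DEPTH + 1)
    for idx in range(BRANCH ** lvl)
}

def score(query, node):
    # Stand-in for a learned node-wise scorer; JTR trains this jointly with the query encoder.
    return float(query @ node_emb[node])

def tree_beam_search(query, beam=8):
    frontier = [(1, idx) for idx in range(BRANCH)]  # children of the root
    for lvl in range(1, DEPTH):
        frontier.sort(key=lambda n: score(query, n), reverse=True)
        kept = frontier[:beam]
        frontier = [(lvl + 1, idx * BRANCH + j) for (_, idx) in kept for j in range(BRANCH)]
    frontier.sort(key=lambda n: score(query, n), reverse=True)
    return frontier[:beam]  # leaf nodes, i.e. candidate documents

query = rng.normal(size=DIM)
print(tree_beam_search(query))  # top-8 leaves after scoring O(beam * BRANCH * DEPTH) nodes
```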
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
zhuTreeRecsys2018
|
\cite{zhuTreeRecsys2018}
|
Learning Tree-based Deep Model for Recommender Systems
|
http://arxiv.org/abs/1801.02294v5
|
Model-based methods for recommender systems have been studied extensively in
recent years. In systems with large corpus, however, the calculation cost for
the learnt model to predict all user-item preferences is tremendous, which
makes full corpus retrieval extremely difficult. To overcome the calculation
barriers, models such as matrix factorization resort to inner product form
(i.e., model user-item preference as the inner product of user, item latent
factors) and indexes to facilitate efficient approximate k-nearest neighbor
searches. However, it still remains challenging to incorporate more expressive
interaction forms between user and item features, e.g., interactions through
deep neural networks, because of the calculation cost.
In this paper, we focus on the problem of introducing arbitrary advanced
models to recommender systems with large corpus. We propose a novel tree-based
method which can provide logarithmic complexity w.r.t. corpus size even with
more expressive models such as deep neural networks. Our main idea is to
predict user interests from coarse to fine by traversing tree nodes in a
top-down fashion and making decisions for each user-node pair. We also show
that the tree structure can be jointly learnt towards better compatibility with
users' interest distribution and hence facilitate both training and prediction.
Experimental evaluations with two large-scale real-world datasets show that the
proposed method significantly outperforms traditional methods. Online A/B test
results in Taobao display advertising platform also demonstrate the
effectiveness of the proposed method in production environments.
| true | true |
Zhu, Han and Li, Xiang and Zhang, Pengye and Li, Guozheng and He, Jie and Li, Han and Gai, Kun
| null | null |
https://doi.org/10.1145/3219819.3219826
|
10.1145/3219819.3219826
| null |
Learning Tree-based Deep Model for Recommender Systems
|
[PDF] Learning Tree-based Deep Model for Recommender Systems - arXiv
|
https://arxiv.org/pdf/1801.02294
|
In this paper, we focus on the problem of introducing arbitrary advanced models to recommender systems with large corpus. We propose a novel tree-based method
|
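The TDM record above hinges on logarithmic retrieval cost in the corpus size. A hedged back-of-the-envelope of where that comes from (exact constants depend on the implementation): with beam width $b$, branching factor $k$, and corpus size $N$, the top-down traversal expands $b$ kept nodes into $b \cdot k$ scored candidates per level, over about $\log_k N$ levels:

```latex
\text{scored user-node pairs per query} \;\approx\; b \cdot k \cdot \lceil \log_k N \rceil,
\qquad \text{e.g.}\quad 40 \cdot 2 \cdot \lceil \log_2 10^{7} \rceil = 40 \cdot 2 \cdot 24 = 1920 .
```

A few thousand scorer evaluations instead of $10^{7}$ is what makes deep networks affordable as node-wise scorers at serving time.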
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
zhuoOptimalTreeModels2020
|
\cite{zhuoOptimalTreeModels2020}
|
Learning Optimal Tree Models Under Beam Search
|
http://arxiv.org/abs/2006.15408v1
|
Retrieving relevant targets from an extremely large target set under
computational limits is a common challenge for information retrieval and
recommendation systems. Tree models, which formulate targets as leaves of a
tree with trainable node-wise scorers, have attracted a lot of interests in
tackling this challenge due to their logarithmic computational complexity in
both training and testing. Tree-based deep models (TDMs) and probabilistic
label trees (PLTs) are two representative kinds of them. Though achieving many
practical successes, existing tree models suffer from the training-testing
discrepancy, where the retrieval performance deterioration caused by beam
search in testing is not considered in training. This leads to an intrinsic gap
between the most relevant targets and those retrieved by beam search with even
the optimally trained node-wise scorers. We take a first step towards
understanding and analyzing this problem theoretically, and develop the concept
of Bayes optimality under beam search and calibration under beam search as
general analyzing tools for this purpose. Moreover, to eliminate the
discrepancy, we propose a novel algorithm for learning optimal tree models
under beam search. Experiments on both synthetic and real data verify the
rationality of our theoretical analysis and demonstrate the superiority of our
algorithm compared to state-of-the-art methods.
| true | true |
Zhuo, Jingwei and Xu, Ziru and Dai, Wei and Zhu, Han and Li, Han and Xu, Jian and Gai, Kun
| null | null | null | null | null |
Learning Optimal Tree Models Under Beam Search
|
Learning Optimal Tree Models Under Beam Search
|
http://arxiv.org/pdf/2006.15408v1
|
Retrieving relevant targets from an extremely large target set under
computational limits is a common challenge for information retrieval and
recommendation systems. Tree models, which formulate targets as leaves of a
tree with trainable node-wise scorers, have attracted a lot of interests in
tackling this challenge due to their logarithmic computational complexity in
both training and testing. Tree-based deep models (TDMs) and probabilistic
label trees (PLTs) are two representative kinds of them. Though achieving many
practical successes, existing tree models suffer from the training-testing
discrepancy, where the retrieval performance deterioration caused by beam
search in testing is not considered in training. This leads to an intrinsic gap
between the most relevant targets and those retrieved by beam search with even
the optimally trained node-wise scorers. We take a first step towards
understanding and analyzing this problem theoretically, and develop the concept
of Bayes optimality under beam search and calibration under beam search as
general analyzing tools for this purpose. Moreover, to eliminate the
discrepancy, we propose a novel algorithm for learning optimal tree models
under beam search. Experiments on both synthetic and real data verify the
rationality of our theoretical analysis and demonstrate the superiority of our
algorithm compared to state-of-the-art methods.
|
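The record above analyzes the training-testing discrepancy: node-wise scorers can look reasonable in isolation yet steer beam search away from the most relevant leaf when they are not consistent with the max-heap of leaf relevances. A minimal toy (beam width 1 on a depth-2 binary tree) that exhibits exactly this failure; all numbers are made up for illustration.

```python
# Ground-truth leaf relevances for one query; leaf "L1" is the best target.
leaf_relevance = {"L0": 0.10, "L1": 0.90, "L2": 0.30, "L3": 0.40}

# Hypothetical trained node scorers that violate the max-heap property:
# the left internal node scores below the right one even though it contains L1.
node_score = {"left": 0.40, "right": 0.60, "L0": 0.10, "L1": 0.90, "L2": 0.30, "L3": 0.40}
children = {"root": ["left", "right"], "left": ["L0", "L1"], "right": ["L2", "L3"]}

def greedy_descend(node="root"):
    # Beam search with beam width 1: always follow the highest-scoring child.
    while node in children:
        node = max(children[node], key=node_score.get)
    return node

retrieved = greedy_descend()
best = max(leaf_relevance, key=leaf_relevance.get)
print(retrieved, leaf_relevance[retrieved])  # L3 0.4 -- beam search prunes the subtree holding L1
print(best, leaf_relevance[best])            # L1 0.9 -- what a beam-search-aware tree model should return
```

This complements the beam-search sketch above: the traversal itself is fine, but the scorers it trusts were never trained to account for the pruning it performs.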
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
zhuJointTreeIndexRecsys2019
|
\cite{zhuJointTreeIndexRecsys2019}
|
Joint Optimization of Tree-based Index and Deep Model for Recommender
Systems
|
http://arxiv.org/abs/1902.07565v2
|
Large-scale industrial recommender systems are usually confronted with
computational problems due to the enormous corpus size. To retrieve and
recommend the most relevant items to users under response time limits,
resorting to an efficient index structure is an effective and practical
solution. The previous work Tree-based Deep Model (TDM) \cite{zhu2018learning}
greatly improves recommendation accuracy using tree index. By indexing items in
a tree hierarchy and training a user-node preference prediction model
satisfying a max-heap like property in the tree, TDM provides logarithmic
computational complexity w.r.t. the corpus size, enabling the use of arbitrary
advanced models in candidate retrieval and recommendation.
In tree-based recommendation methods, the quality of both the tree index and
the user-node preference prediction model determines the recommendation
accuracy for the most part. We argue that the learning of tree index and
preference model has interdependence. Our purpose, in this paper, is to develop
a method to jointly learn the index structure and user preference prediction
model. In our proposed joint optimization framework, the learning of index and
user preference prediction model are carried out under a unified performance
measure. Besides, we come up with a novel hierarchical user preference
representation utilizing the tree index hierarchy. Experimental evaluations
with two large-scale real-world datasets show that the proposed method improves
recommendation accuracy significantly. Online A/B test results at a display
advertising platform also demonstrate the effectiveness of the proposed method
in production environments.
| true | true |
Zhu, Han and Chang, Daqing and Xu, Ziru and Zhang, Pengye and Li, Xiang and He, Jie and Li, Han and Xu, Jian and Gai, Kun
| null | null | null | null | null |
Joint Optimization of Tree-based Index and Deep Model for Recommender
Systems
|
[PDF] Joint Optimization of Tree-based Index and Deep Model for ...
|
http://papers.neurips.cc/paper/8652-joint-optimization-of-tree-based-index-and-deep-model-for-recommender-systems.pdf
|
In tree-based recommendation methods, the quality of both the tree index and the user-node preference prediction model determines the recommendation accuracy.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
zengPlanningAheadGenerative2024
|
\cite{zengPlanningAheadGenerative2024}
|
Planning Ahead in Generative Retrieval: Guiding Autoregressive
Generation through Simultaneous Decoding
|
http://arxiv.org/abs/2404.14600v1
|
This paper introduces PAG-a novel optimization and decoding approach that
guides autoregressive generation of document identifiers in generative
retrieval models through simultaneous decoding. To this aim, PAG constructs a
set-based and sequential identifier for each document. Motivated by the
bag-of-words assumption in information retrieval, the set-based identifier is
built on lexical tokens. The sequential identifier, on the other hand, is
obtained via quantizing relevance-based representations of documents. Extensive
experiments on MSMARCO and TREC Deep Learning Track data reveal that PAG
outperforms the state-of-the-art generative retrieval model by a large margin
(e.g., 15.6% MRR improvements on MS MARCO), while achieving 22x speed up in
terms of query latency.
| true | true |
Hansi Zeng and Chen Luo and Hamed Zamani
| null | null |
https://doi.org/10.1145/3626772.3657746
|
10.1145/3626772.3657746
| null |
Planning Ahead in Generative Retrieval: Guiding Autoregressive
Generation through Simultaneous Decoding
|
[2404.14600] Planning Ahead in Generative Retrieval
|
https://arxiv.org/abs/2404.14600
|
by H Zeng · 2024 · Cited by 21 — This paper introduces PAG-a novel optimization and decoding approach that guides autoregressive generation of document identifiers in generative retrieval
|
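The PAG record above pairs a lexical, set-based identifier with a sequential identifier obtained by quantizing relevance-based document representations. The sketch below shows only the generic residual-quantization step such sequential identifiers are typically built from, with random vectors in place of learned relevance-based embeddings; PAG's training objective, set-based identifier, and simultaneous decoding are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
doc_embs = rng.normal(size=(1000, 64)).astype(np.float32)  # stand-in for relevance-based doc embeddings

def residual_quantize(embs, levels=4, codebook_size=16, seed=0):
    """Turn each embedding into a short sequence of codeword indices (one token per level)."""
    residual = embs.copy()
    ids = []
    for _ in range(levels):
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed).fit(residual)
        ids.append(km.labels_)
        residual = residual - km.cluster_centers_[km.labels_]  # quantize what is left to explain
    return np.stack(ids, axis=1)

doc_ids = residual_quantize(doc_embs)
print(doc_ids[0])  # e.g. a 4-token sequential identifier such as [ 3 11  7  0]
```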
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
liCorpusLM2024
|
\cite{liCorpusLM2024}
|
CorpusLM: Towards a Unified Language Model on Corpus for
Knowledge-Intensive Tasks
|
http://arxiv.org/abs/2402.01176v2
|
Large language models (LLMs) have gained significant attention in various
fields but prone to hallucination, especially in knowledge-intensive (KI)
tasks. To address this, retrieval-augmented generation (RAG) has emerged as a
popular solution to enhance factual accuracy. However, traditional retrieval
modules often rely on large document index and disconnect with generative
tasks. With the advent of generative retrieval (GR), language models can
retrieve by directly generating document identifiers (DocIDs), offering
superior performance in retrieval tasks. However, the potential relationship
between GR and downstream tasks remains unexplored. In this paper, we propose
\textbf{CorpusLM}, a unified language model that leverages external corpus to
tackle various knowledge-intensive tasks by integrating generative retrieval,
closed-book generation, and RAG through a unified greedy decoding process. We
design the following mechanisms to facilitate effective retrieval and
generation, and improve the end-to-end effectiveness of KI tasks: (1) We
develop a ranking-oriented DocID list generation strategy, which refines GR by
directly learning from a DocID ranking list, to improve retrieval quality. (2)
We design a continuous DocIDs-References-Answer generation strategy, which
facilitates effective and efficient RAG. (3) We employ well-designed
unsupervised DocID understanding tasks, to comprehend DocID semantics and their
relevance to downstream tasks. We evaluate our approach on the widely used KILT
benchmark with two variants of backbone models, i.e., T5 and Llama2.
Experimental results demonstrate the superior performance of our models in both
retrieval and downstream tasks.
| true | true |
Xiaoxi Li and Zhicheng Dou and Yujia Zhou and Fangchao Liu
| null | null |
https://doi.org/10.1145/3626772.3657778
|
10.1145/3626772.3657778
| null |
CorpusLM: Towards a Unified Language Model on Corpus for
Knowledge-Intensive Tasks
|
CorpusLM: Towards a Unified Language Model on Corpus ...
|
https://dl.acm.org/doi/10.1145/3626772.3657778
|
In this paper, we propose CorpusLM, a unified language model that leverages external corpus to tackle various knowledge-intensive tasks.
|
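The CorpusLM record above describes a continuous DocIDs-References-Answer generation strategy handled by a single greedy decoding pass. A hedged sketch of what assembling such a training target could look like; the separator tokens (`<ref>`, `<ans>`) and the serialization below are hypothetical and not the paper's exact format.

```python
def build_target(docids, references, answer, ref_token="<ref>", ans_token="<ans>"):
    """Serialize ranked DocIDs, supporting references, and the final answer into one decoder target."""
    return " ; ".join(docids) + f" {ref_token} " + " ".join(references) + f" {ans_token} " + answer

target = build_target(
    docids=["D1042", "D88", "D7"],
    references=["Generative retrieval maps queries directly to document identifiers."],
    answer="By generating ranked DocIDs, then grounded references, then the answer.",
)
print(target)
```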
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
liUnigen2024
|
\cite{liUnigen2024}
|
UniGen: A Unified Generative Framework for Retrieval and Question
Answering with Large Language Models
|
http://arxiv.org/abs/2312.11036v1
|
Generative information retrieval, encompassing two major tasks of Generative
Document Retrieval (GDR) and Grounded Answer Generation (GAR), has gained
significant attention in the area of information retrieval and natural language
processing. Existing methods for GDR and GAR rely on separate retrieval and
reader modules, which hinder simultaneous optimization. To overcome this, we
present \textbf{UniGen}, a \textbf{Uni}fied \textbf{Gen}erative framework for
retrieval and question answering that integrates both tasks into a single
generative model leveraging the capabilities of large language models. UniGen
employs a shared encoder and two distinct decoders for generative retrieval and
question answering. To facilitate the learning of both tasks, we introduce
connectors, generated by large language models, to bridge the gaps between
query inputs and generation targets, as well as between document identifiers
and answers. Furthermore, we propose an iterative enhancement strategy that
leverages generated answers and retrieved documents to iteratively improve both
tasks. Through extensive experiments on the MS MARCO and NQ datasets, we
demonstrate the effectiveness of UniGen, showcasing its superior performance in
both the retrieval and the question answering tasks.
| true | true |
Xiaoxi Li and Yujia Zhou and Zhicheng Dou
| null | null |
https://doi.org/10.1609/aaai.v38i8.28714
|
10.1609/AAAI.V38I8.28714
| null |
UniGen: A Unified Generative Framework for Retrieval and Question
Answering with Large Language Models
|
UniGen: A Unified Generative Framework for Retrieval and Question ...
|
https://underline.io/lecture/93708-unigen-a-unified-generative-framework-for-retrieval-and-question-answering-with-large-language-models
|
UniGen: A Unified Generative Framework for Retrieval and Question Answering with Large Language Models
|
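The UniGen record above centers on one architectural choice: a shared encoder feeding two task-specific decoders, one for document identifiers and one for answers. A structural sketch in PyTorch with randomly initialized components; UniGen itself builds on pretrained model weights and adds LLM-generated connectors and iterative enhancement, none of which appear here, and causal masking is omitted for brevity.

```python
import torch
import torch.nn as nn

class SharedEncoderTwoDecoders(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.retrieval_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.answer_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.docid_head = nn.Linear(d_model, vocab_size)   # generative retrieval head
        self.answer_head = nn.Linear(d_model, vocab_size)  # question answering head

    def forward(self, query_ids, target_ids, task="retrieval"):
        memory = self.encoder(self.embed(query_ids))  # shared query encoding
        tgt = self.embed(target_ids)
        if task == "retrieval":
            return self.docid_head(self.retrieval_decoder(tgt, memory))
        return self.answer_head(self.answer_decoder(tgt, memory))

# Toy usage with random token ids.
model = SharedEncoderTwoDecoders()
query = torch.randint(0, 32000, (2, 16))
target = torch.randint(0, 32000, (2, 8))
print(model(query, target, task="retrieval").shape)  # torch.Size([2, 8, 32000])
print(model(query, target, task="qa").shape)         # torch.Size([2, 8, 32000])
```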