
Contextualized Document Embedding Benchmark
ConTEB: Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings

This organization contains all artifacts released with our preprint Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings, including the ConTEB benchmark.
Abstract
A limitation of modern document retrieval embedding methods is that they typically encode passages (chunks) from the same documents independently, often overlooking crucial contextual information from the rest of the document that could greatly improve individual chunk representations.
In this work, we introduce ConTEB (Context-aware Text Embedding Benchmark), a benchmark designed to evaluate retrieval models on their ability to leverage document-wide context. Our results show that state-of-the-art embedding models struggle in retrieval scenarios where context is required. To address this limitation, we propose InSeNT (In-sequence Negative Training), a novel contrastive post-training approach which, combined with *late chunking* pooling, enhances contextual representation learning while preserving computational efficiency. Our method significantly improves retrieval quality on ConTEB without sacrificing base model performance. We further find that chunks embedded with our method are more robust to suboptimal chunking strategies and larger retrieval corpus sizes. We open-source all artifacts here and at https://github.com/illuin-tech/contextual-embeddings.
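
To build intuition for the contextual-embedding setup the benchmark targets, below is a minimal sketch of late-chunking-style pooling with a generic Hugging Face encoder: the full document is encoded once so every token attends to document-wide context, and each chunk embedding is then obtained by pooling the contextualized token states that fall inside that chunk's character span. The model name, offset handling, and pooling details are illustrative assumptions, not the exact training or inference code released with this paper (see the Resources below for that).

```python
# Illustrative sketch of late-chunking-style pooling; NOT the released implementation.
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: any long-context Hugging Face encoder can stand in for the released model.
model_name = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

document = (
    "The Eiffel Tower was completed in 1889. "
    "It was the tallest man-made structure in the world for 41 years."
)
chunks = [
    "The Eiffel Tower was completed in 1889.",
    "It was the tallest man-made structure in the world for 41 years.",
]

# Encode the whole document once so every token sees document-wide context.
encoded = tokenizer(document, return_tensors="pt", return_offsets_mapping=True)
offsets = encoded.pop("offset_mapping")[0]  # (num_tokens, 2) character spans
with torch.no_grad():
    token_states = model(**encoded).last_hidden_state[0]  # (num_tokens, hidden_dim)

# Late chunking: mean-pool the contextualized token states of each chunk, located via
# the chunk's character span in the document. A token is assigned to a chunk if its
# span overlaps the chunk's span; special tokens with empty spans are excluded.
chunk_embeddings = []
cursor = 0
for chunk in chunks:
    begin = document.index(chunk, cursor)
    end = begin + len(chunk)
    cursor = end
    in_chunk = (
        (offsets[:, 0] < end) & (offsets[:, 1] > begin) & (offsets[:, 1] > offsets[:, 0])
    )
    chunk_embeddings.append(token_states[in_chunk].mean(dim=0))

chunk_embeddings = torch.stack(chunk_embeddings)  # (num_chunks, hidden_dim), ready for retrieval
```

The document is passed through the encoder a single time, which is what keeps this style of pooling computationally comparable to standard chunk-wise encoding.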
Resources
- HuggingFace Project Page: The HF page centralizing everything!
- (Model) ModernBERT: The contextualized ModernBERT bi-encoder trained with the InSeNT loss and late chunking
- (Model) ModernColBERT: The contextualized ModernColBERT trained with the InSeNT loss and late chunking
- Leaderboard: Coming Soon
- (Data) ConTEB Benchmark Datasets: Datasets included in ConTEB.
- (Code) Contextual Document Engine: The code used to train and run inference with our architecture.
- (Code) ConTEB Benchmark: A Python package/CLI tool to evaluate document retrieval systems on the ConTEB benchmark.
- Preprint: The paper with all details!
- Blog: A blogpost that covers the paper in a 5-minute read.
First-author contacts
- Manuel Faysse: [email protected]
- Max Conti: [email protected]
Citation
If you use any datasets or models from this organization in your research, please cite our paper as follows:
@misc{conti2025contextgoldgoldpassage,
      title={Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings},
      author={Max Conti and Manuel Faysse and Gautier Viaud and Antoine Bosselut and Céline Hudelot and Pierre Colombo},
      year={2025},
      eprint={2505.24782},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2505.24782},
}
Acknowledgments
This work is partially supported by ILLUIN Technology and by a grant from ANRT France. It was performed using HPC resources from GENCI on the Jean Zay supercomputer under grant AD011016393.