---
language:
- en
license: mit
datasets:
- pile
metrics:
- nDCG@10
---
# Carptriever-1

## Model description
Carptriever-1 is a `bert-large-uncased` retrieval model trained with contrastive learning via a momentum contrastive (MoCo) mechanism, following the work of G. Izacard et al. in ["Unsupervised Dense Information Retrieval with Contrastive Learning"](https://arxiv.org/abs/2112.09118) (Contriever).
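To make the training objective concrete, the sketch below shows a generic MoCo-style InfoNCE loss with a momentum-updated key encoder and a queue of negatives, using the momentum and temperature values listed under "Training procedure". The names `moco_infonce_loss`, `momentum_update`, `query_encoder`, `key_encoder`, and `queue` are illustrative assumptions, not the actual training code for this model.

```python
import torch
import torch.nn.functional as F

# Illustrative MoCo-style InfoNCE objective (assumed names; not the exact training code).
# q: embeddings from the query encoder,          shape (batch, dim)
# k: embeddings from the momentum (key) encoder, shape (batch, dim)
# queue: previously encoded negative embeddings, shape (queue_size, dim)
def moco_infonce_loss(q, k, queue, temperature=0.05):
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # Positive logits: one matching key per query.
    l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(-1)   # (batch, 1)
    # Negative logits against the queue.
    l_neg = torch.einsum("nd,kd->nk", q, queue)            # (batch, queue_size)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # The positive key sits at index 0 of every row.
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

# Momentum update of the key encoder (momentum = 0.999 in this card's settings).
@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    for p_q, p_k in zip(query_encoder.parameters(), key_encoder.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```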
## How to use
```python
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(token_embeddings, mask):
    # Zero out padding positions, then average token embeddings per sentence.
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.)
    sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]
    return sentence_embeddings

# Remove pooling layer
model = AutoModel.from_pretrained("Carper-AI/carptriever-1", add_pooling_layer=False)
tokenizer = AutoTokenizer.from_pretrained("Carper-AI/carptriever-1")

sentences = [
    "Where was Marie Curie born?",  # Query
    "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
    "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]

# Apply tokenizer
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Encode sentences (no gradients needed for inference)
with torch.no_grad():
    outputs = model(**inputs)
embeddings = mean_pooling(outputs[0], inputs['attention_mask'])

# Compute dot-product scores between the query and sentence embeddings
query_embedding, sentence_embeddings = embeddings[0], embeddings[1:]
scores = (query_embedding @ sentence_embeddings.transpose(0, 1)).cpu().tolist()

# Sort passages by descending score
sentence_score_pairs = sorted(zip(sentences[1:], scores), key=lambda pair: pair[1], reverse=True)

print(f"Query: {sentences[0]}")
for sentence, score in sentence_score_pairs:
    print(f"\nSentence: {sentence}\nScore: {score:.4f}")
```
## Training data
Carptriever-1 is pre-trained on The Pile, a large and diverse dataset created by EleutherAI for language model training.
## Training procedure
The model was trained on 32 40GB A100 GPUs for approximately 100 hours with the following configuration (a sketch of the equivalent optimizer setup follows the list):
- Base model: `bert-large-uncased`
- Optimizer settings:
  - optimizer = AdamW
  - lr = 1e-5
  - schedule = linear
  - warmup = 20,000 steps
  - batch size = 2,048
  - training steps = 150,000
- MoCo settings:
  - queue size = 8,192
  - momentum = 0.999
  - temperature = 0.05
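For reference, the sketch below shows how an equivalent AdamW + linear-warmup schedule could be configured with standard `transformers` helpers. It mirrors the hyperparameters listed above but is not the actual training script; the distributed setup and training loop are omitted.

```python
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

# Hyperparameters taken from the list above; training loop and data pipeline omitted.
model = AutoModel.from_pretrained("bert-large-uncased", add_pooling_layer=False)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=20_000,
    num_training_steps=150_000,
)
```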
## Evaluation results
We provide evaluation results on the BEIR (Benchmarking IR) suite.
| nDCG@10 | Avg | MSMARCO | TREC-Covid | NFCorpus | NaturalQuestions | HotpotQA | FiQA | ArguAna | Touché-2020 | Quora | CQADupStack | DBPedia | SciDocs | FEVER | Climate-FEVER | SciFact |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Contriever* | 35.97 | 20.6 | 27.4 | 31.7 | 25.4 | 48.1 | 24.5 | 37.9 | 19.3 | 83.5 | 28.4 | 29.2 | 14.9 | 68.2 | 15.5 | 64.9 |
| Carptriever-1 | 34.29 | 18.81 | 46.5 | 28.9 | 21.1 | 39.01 | 20.2 | 33.4 | 17.3 | 80.6 | 25.4 | 23.6 | 14.9 | 59.6 | 18.7 | 66.4 |
\* Results are taken from the Contriever repository.
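For reference, nDCG@10 scores a ranking by the discounted gains of the top 10 retrieved documents, normalised by the gain of an ideal ranking. A minimal sketch of the computation (an illustrative helper, not the BEIR evaluation code):

```python
import math

def dcg(relevances, k=10):
    # Discounted cumulative gain over the top-k positions.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_10(ranked_relevances, all_judged_relevances, k=10):
    """ranked_relevances: relevance grades of retrieved docs, in ranked order.
    all_judged_relevances: grades of every judged doc for the query (for the ideal ranking)."""
    idcg = dcg(sorted(all_judged_relevances, reverse=True), k)
    return dcg(ranked_relevances, k) / idcg if idcg > 0 else 0.0
```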
Note that some degradation in performance relative to the Contriever model was expected, given the much broader diversity of our training dataset. We plan to address this in future updates with architectural improvements, and view Carptriever-1 as a first iteration in the exploratory phase towards better language embedding models.
## Appreciation
All compute was graciously provided by Stability.ai.
## Citations
```bibtex
@misc{izacard2021contriever,
  title = {Unsupervised Dense Information Retrieval with Contrastive Learning},
  author = {Gautier Izacard and Mathilde Caron and Lucas Hosseini and Sebastian Riedel and Piotr Bojanowski and Armand Joulin and Edouard Grave},
  year = {2021},
  url = {https://arxiv.org/abs/2112.09118},
  doi = {10.48550/ARXIV.2112.09118},
}

@article{pile,
  title = {The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
  author = {Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
  journal = {arXiv preprint arXiv:2101.00027},
  year = {2020}
}
```