---
pipeline_tag: feature-extraction
library_name: transformers
license: apache-2.0
---

# Overview

This repository contains an encoder model, part of the research presented in the paper *Should We Still Pretrain Encoders with Masked Language Modeling?* (Gisserot-Boukhlef et al.).

* **Paper:** [Should We Still Pretrain Encoders with Masked Language Modeling?](https://huggingface.co/papers/2507.00994)
* **Blog post:** [Link](https://huggingface.co/blog/Nicolas-BZRD/encoders-should-not-be-only-pre-trained-with-mlm)
* **Project page:** [https://hf.co/MLMvsCLM](https://hf.co/MLMvsCLM)

## Model Naming

Model identifiers follow a consistent format that encodes key training details (a short parsing sketch follows the list):

* **Single-stage models**:
  `[model size]-[objective]-[number of steps]`.
  Example: `610m-clm-42k` denotes a 610M-parameter model trained with CLM for 42,000 steps.
* **Two-stage models**:
  `[model size]-[objective #1]-[steps #1]-[objective #2]-[total steps]`.
  Example: `610m-clm-10k-mlm40-42k` indicates a 610M model trained first with CLM for 10k steps, then continued with MLM (40% masking ratio) for 32k more steps, totaling 42k steps.
* **Continued pretraining from decayed checkpoints**:
  These use the `dec` prefix on the first training stage.
  Example: `610m-clm-dec42k-mlm40-64k` refers to a 610M model pretrained with CLM for 42k steps (with weight decay), then further trained with MLM (40% masking) for 22k additional steps, totaling 64k.
* **Intermediate checkpoints**:
  To refer to a specific training step before the final checkpoint, append the step number at the end.
  Example: `610m-mlm40-42k-1000` corresponds to step 1,000 during the MLM training phase of a 610M model trained for 42k steps.

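The decomposition can also be done programmatically. The helper below is a minimal, purely illustrative sketch (its name and output format are assumptions, not part of the released code) that splits an identifier following the convention above into its size, training stages, and optional intermediate checkpoint step:

```python
# Illustrative only: a tiny parser for the naming convention described above.
def parse_model_id(model_id: str) -> dict:
    parts = model_id.split("-")
    size = parts[0]  # e.g. "610m"
    stages = []
    i = 1
    # Training stages come in (objective, steps) pairs, e.g. "clm-10k", "mlm40-42k".
    # A decayed first stage keeps its "dec" prefix in the steps field, e.g. "dec42k".
    while i + 1 < len(parts):
        stages.append({"objective": parts[i], "steps": parts[i + 1]})
        i += 2
    # A single trailing element is an intermediate checkpoint step, e.g. "-1000".
    checkpoint_step = parts[i] if i < len(parts) else None
    return {"size": size, "stages": stages, "checkpoint_step": checkpoint_step}

print(parse_model_id("610m-clm-10k-mlm40-42k"))
# {'size': '610m',
#  'stages': [{'objective': 'clm', 'steps': '10k'}, {'objective': 'mlm40', 'steps': '42k'}],
#  'checkpoint_step': None}
```
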
## Usage

You can use this model for feature extraction with the Hugging Face `transformers` library.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Replace with the ID of the checkpoint you want to load (see the naming scheme above).
# This placeholder assumes the current repository is the model you want to load.
model_name = "<YOUR_MODEL_ID_HERE>"

# Load the tokenizer and model; trust_remote_code is needed for custom architectures
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

text = "This is an example sentence to extract features from."

inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The last hidden state contains the token embeddings (features)
last_hidden_state = outputs.last_hidden_state
print(f"Shape of last hidden state: {last_hidden_state.shape}")

# For sentence-level embeddings, common approaches include:
# 1. Averaging the token embeddings (excluding padding tokens)
# 2. Using the embedding of the [CLS] token (if applicable for the model's architecture)
# Example: mean pooling (simple average over non-padding tokens)
attention_mask = inputs["attention_mask"]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
sum_embeddings = torch.sum(last_hidden_state * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
mean_pooled_embedding = sum_embeddings / sum_mask
print(f"Shape of mean pooled embedding: {mean_pooled_embedding.shape}")
```

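As a simple follow-up, the mean-pooled vectors can be compared with cosine similarity. The snippet below is an illustrative extension of the example above (it reuses the `tokenizer` and `model` loaded there and assumes the tokenizer defines a padding token); it is a sketch, not an official evaluation recipe.

```python
import torch
import torch.nn.functional as F

# Illustrative follow-up: mean-pool two sentences and compare them with cosine similarity.
def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).expand(hidden.size()).float()
    return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

embeddings = embed(["The cat sat on the mat.", "A cat is sitting on a mat."])
similarity = F.cosine_similarity(embeddings[0:1], embeddings[1:2])
print(f"Cosine similarity: {similarity.item():.4f}")
```
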
## Citation

If you found this model useful, please consider citing our paper:

```bibtex
@misc{gisserotboukhlef2025pretrainencodersmaskedlanguage,
      title={Should We Still Pretrain Encoders with Masked Language Modeling?},
      author={Hippolyte Gisserot-Boukhlef and Nicolas Boizard and Manuel Faysse and Duarte M. Alves and Emmanuel Malherbe and André F. T. Martins and Céline Hudelot and Pierre Colombo},
      year={2025},
      eprint={2507.00994},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.00994},
}
```