SOTA Entity Recognition English Foundation Model by NuMind 🔥
This model provides the best embeddings for the Entity Recognition task in English.
We suggest using the newer version of this model: NuNER v2.0.
This is the model from our paper: NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data.
Check out other models by NuMind:
- SOTA Multilingual Entity Recognition Foundation Model: link
- SOTA Sentiment Analysis Foundation Model: English, Multilingual
About
RoBERTa-base fine-tuned on NuNER data.
Metrics:
Read more about the evaluation protocol and datasets in our paper.
Here is the performance of the models aggregated over several datasets.
k=X means that, as training data for this evaluation, we took only X examples for each entity class, trained the model on them, and evaluated it on the full test set (a minimal sampling sketch follows the tables).
Model | k=1 | k=4 | k=16 | k=64 |
---|---|---|---|---|
RoBERTa-base | 24.5 | 44.7 | 58.1 | 65.4 |
RoBERTa-base + NER-BERT pre-training | 32.3 | 50.9 | 61.9 | 67.6 |
NuNER v0.1 | 34.3 | 54.6 | 64.0 | 68.7 |
NuNER v1.0 | 39.4 | 59.6 | 67.8 | 71.5 |
NuNER v2.0 | 43.6 | 61.0 | 68.2 | 72.0 |
NuNER v1.0 has performance similar to that of 7B-parameter LLMs (roughly 70 times larger than NuNER v1.0) built specifically for the NER task.
Model | k=8~16 | k=64~128 |
---|---|---|
UniversalNER (7B) | 57.89 ± 4.34 | 71.02 ± 1.53 |
NuNER v1.0 (100M) | 58.75 ± 0.93 | 70.30 ± 0.35 |
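For concreteness, here is a minimal sketch of the k-shot sampling described above (the data format and the `sample_k_shot` helper are illustrative assumptions, not the paper's exact per-dataset procedure): keep k annotated examples per entity class, train on their union, and evaluate on the full test set.

```python
# Hypothetical sketch of k-shot training-set construction: select k annotated
# examples per entity class; the model is then trained on their union and
# evaluated on the full test set.
import random
from collections import defaultdict

def sample_k_shot(train_set, k, seed=0):
    """train_set: list of dicts such as {'tokens': [...], 'classes': {'ORG', 'LOC'}}."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for example in train_set:
        for cls in example['classes']:
            by_class[cls].append(example)
    selected = {}
    for cls, examples in by_class.items():
        for example in rng.sample(examples, min(k, len(examples))):
            selected[id(example)] = example  # deduplicate examples covering several classes
    return list(selected.values())
```

Training on the resulting subset and evaluating on the untouched test set produces one score; the tables above aggregate such runs over several datasets.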
Usage
The embeddings can be used out of the box, or the model can be fine-tuned on specific datasets (a fine-tuning sketch follows the embedding example below).
Get embeddings:
```python
import torch
import transformers

model = transformers.AutoModel.from_pretrained(
    'numind/NuNER-v1.0'
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/NuNER-v1.0'
)

text = [
    "NuMind is an AI company based in Paris and USA.",
    "See other models from us on https://huggingface.co/numind"
]

# Tokenize the batch; padding/truncation keeps both sentences in one tensor.
encoded_input = tokenizer(
    text,
    return_tensors='pt',
    padding=True,
    truncation=True
)

# Inference only, so gradients are not needed.
with torch.no_grad():
    output = model(**encoded_input)

# Token-level embeddings with shape (batch_size, seq_len, hidden_size).
emb = output.last_hidden_state
```
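As noted above, the encoder can also be fine-tuned for NER. Below is a minimal, hypothetical sketch (the label set, toy sentence, and hyperparameters are placeholder assumptions, not the recipe from the paper): it attaches a standard token-classification head, aligns word-level tags to subword tokens, and runs one optimization step.

```python
import torch
import transformers

label_list = ["O", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]  # hypothetical label set

# A randomly initialized token-classification head is added on top of the encoder.
model = transformers.AutoModelForTokenClassification.from_pretrained(
    'numind/NuNER-v1.0',
    num_labels=len(label_list)
)
# add_prefix_space=True is required to feed pre-tokenized words to a RoBERTa tokenizer.
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/NuNER-v1.0',
    add_prefix_space=True
)

# Toy example: one sentence with word-level tags.
words = ["NuMind", "is", "based", "in", "Paris", "."]
word_tags = ["B-ORG", "O", "O", "O", "B-LOC", "O"]

enc = tokenizer(words, is_split_into_words=True, return_tensors='pt')

# Align labels to subword tokens: special tokens get -100 so the loss ignores them,
# and each subword inherits the tag of the word it comes from.
labels = [
    -100 if word_id is None else label_list.index(word_tags[word_id])
    for word_id in enc.word_ids(batch_index=0)
]
labels = torch.tensor([labels])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**enc, labels=labels).loss
loss.backward()
optimizer.step()
```

In practice the toy batch would be replaced by a real dataset and data loader (or transformers.Trainer); the label-alignment step is the part specific to token classification.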
Citation
```bibtex
@misc{bogdanov2024nuner,
      title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
      author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
      year={2024},
      eprint={2402.15343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```