Update README.md

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased).
It achieves the following results on the evaluation set:
- Loss: 0.0814

## Model description

bert-base-NER is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC).

Specifically, this model is a bert-base-cased model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset.
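
For reference, models fine-tuned on CoNLL-2003 typically predict the standard nine-tag IOB2 scheme sketched below; this is an assumption about the label set, and the exact mapping for this checkpoint can be read from its config rather than guessed:

```python
from transformers import AutoModelForTokenClassification

# Standard IOB2 tag set for CoNLL-2003 NER: "B-" opens an entity span,
# "I-" continues it, and "O" marks tokens outside any entity.
CONLL2003_TAGS = [
    "O",
    "B-PER", "I-PER",    # person names
    "B-ORG", "I-ORG",    # organizations
    "B-LOC", "I-LOC",    # locations
    "B-MISC", "I-MISC",  # miscellaneous (nationalities, events, ...)
]

# The mapping this checkpoint actually uses is stored in its config:
model = AutoModelForTokenClassification.from_pretrained("Hatman/bert-finetuned-ner")
print(model.config.id2label)
```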

If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a bert-large-NER version is also available.
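
The README does not name the exact model ID for that larger checkpoint. Assuming it refers to the widely used `dslim/bert-large-NER` card (a guess, not something this README states), loading it only changes the checkpoint name:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical ID: the README does not name the large checkpoint explicitly;
# dslim/bert-large-NER is the common BERT-large model fine-tuned on CoNLL-2003.
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-large-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-large-NER")
```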

## How to Use

You can use this model with the Transformers pipeline for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Hatman/bert-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Hatman/bert-finetuned-ner")

# Build a token-classification pipeline and run it on an example sentence
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"

ner_results = nlp(example)
print(ner_results)
```
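
Each item in `ner_results` is a token-level prediction dict (with `entity`, `score`, `word`, and character `start`/`end` fields). To get whole entities instead of sub-word pieces, the pipeline's `aggregation_strategy` argument can merge adjacent tokens; a minimal sketch continuing the snippet above:

```python
# "simple" merges consecutive tokens that share an entity type, so
# "Wolf" + "##gang" come back as one PER span. The output below is
# illustrative, not a measured result from this checkpoint.
nlp_grouped = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp_grouped(example))
# e.g. [{'entity_group': 'PER', 'score': 0.99, 'word': 'Wolfgang', 'start': 11, 'end': 19},
#       {'entity_group': 'LOC', 'score': 0.99, 'word': 'Berlin', 'start': 34, 'end': 40}]
```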

## Training procedure

### Training hyperparameters