webmichaelnosenko committed
Commit 50c6ccf · 1 Parent(s): 7a1d333

Update README.md

Files changed (1)
  1. README.md +2 -114
README.md CHANGED
@@ -4,117 +4,5 @@ datasets:
  - conll2003
  license: mit
  ---
- # bert-base-NER
-
- ## Model description
-
- **bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC).
-
- Specifically, this model is a *bert-base-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
-
- If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a [**bert-large-NER**](https://huggingface.co/dslim/bert-large-NER/) version is also available.
-
- ## Intended uses & limitations
-
- #### How to use
-
- You can use this model with the Transformers *pipeline* for NER.
-
- ```python
- from transformers import AutoTokenizer, AutoModelForTokenClassification
- from transformers import pipeline
-
- tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
- model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
-
- nlp = pipeline("ner", model=model, tokenizer=tokenizer)
- example = "My name is Wolfgang and I live in Berlin"
-
- ner_results = nlp(example)
- print(ner_results)
- ```
-
- #### Limitations and bias
-
- This model is limited by its training dataset of entity-annotated news articles from a specific span of time. It may not generalize well to all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of the results may be necessary to handle those cases.
-
- ## Training data
-
- This model was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
-
- The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
-
- Abbreviation|Description
- -|-
- O|Outside of a named entity
- B-MISC|Beginning of a miscellaneous entity right after another miscellaneous entity
- I-MISC|Miscellaneous entity
- B-PER|Beginning of a person's name right after another person's name
- I-PER|Person's name
- B-ORG|Beginning of an organization right after another organization
- I-ORG|Organization
- B-LOC|Beginning of a location right after another location
- I-LOC|Location
-
- ### CoNLL-2003 English Dataset Statistics
-
- This dataset was derived from the Reuters corpus, which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
-
- #### Number of training examples per entity type
-
- Dataset|LOC|MISC|ORG|PER
- -|-|-|-|-
- Train|7140|3438|6321|6600
- Dev|1837|922|1341|1842
- Test|1668|702|1661|1617
-
- #### Number of articles/sentences/tokens per dataset
-
- Dataset|Articles|Sentences|Tokens
- -|-|-|-
- Train|946|14,987|203,621
- Dev|216|3,466|51,362
- Test|231|3,684|46,435
-
- ## Training procedure
-
- This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805), which trained & evaluated the model on the CoNLL-2003 NER task.
-
- ## Eval results
-
- metric|dev|test
- -|-|-
- f1|95.1|91.3
- precision|95.0|90.7
- recall|95.3|91.9
-
- The test metrics are a little lower than the official Google BERT results, which encoded document context & experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).
-
- ### BibTeX entry and citation info
-
- ```
- @article{DBLP:journals/corr/abs-1810-04805,
-   author    = {Jacob Devlin and
-                Ming{-}Wei Chang and
-                Kenton Lee and
-                Kristina Toutanova},
-   title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
-                Understanding},
-   journal   = {CoRR},
-   volume    = {abs/1810.04805},
-   year      = {2018},
-   url       = {http://arxiv.org/abs/1810.04805},
-   archivePrefix = {arXiv},
-   eprint    = {1810.04805},
-   timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
-   biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
-   bibsource = {dblp computer science bibliography, https://dblp.org}
- }
- ```
- ```
- @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
-   title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
-   author = "Tjong Kim Sang, Erik F. and
-     De Meulder, Fien",
-   booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
-   year = "2003",
-   url = "https://www.aclweb.org/anthology/W03-0419",
-   pages = "142--147",
- }
- ```
 
+ This is a Rust-adapted, ready-to-use version of https://huggingface.co/dslim/bert-base-NER
+ I've generated the rust_model.ot file using the convert_model.py script provided by the rust_bert crate.
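For reference, the sketch below shows roughly how converted weights like these could be consumed from Rust via rust_bert's NER pipeline. It is a minimal, illustrative sketch rather than the exact setup used for this repository: it assumes a recent rust_bert release in which `NERModel::predict` returns one list of entities per input sentence, and it loads the crate's default token-classification resources; pointing it at the rust_model.ot hosted here would instead require building a custom `TokenClassificationConfig` from `LocalResource`/`RemoteResource` entries, which varies by rust_bert version.

```rust
// Illustrative sketch only: runs NER through rust_bert's pipeline.
// Requires the rust-bert and anyhow crates and a working libtorch/tch setup.
use rust_bert::pipelines::ner::NERModel;

fn main() -> anyhow::Result<()> {
    // The default config downloads (or reuses cached) token-classification
    // weights; swap in a custom TokenClassificationConfig to load a specific
    // rust_model.ot such as the one in this repository (assumption: the
    // default resources are compatible CoNLL-03 NER weights).
    let ner_model = NERModel::new(Default::default())?;

    let input = ["My name is Wolfgang and I live in Berlin"];

    // In recent rust_bert versions, predict returns one Vec<Entity> per
    // input sentence.
    let output = ner_model.predict(&input);
    for entities in output {
        for entity in entities {
            println!("{:?}", entity);
        }
    }
    Ok(())
}
```

As for producing rust_model.ot itself, the convert_model.py utility ships in the rust_bert source tree and converts a PyTorch pytorch_model.bin checkpoint into the .ot weight format that tch can load; the exact invocation and its Python dependencies depend on the rust_bert version being targeted.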