Datasets: swedish_medical_ner
Tasks: Token Classification
Modalities: Text
Formats: parquet
Sub-tasks: named-entity-recognition
Languages: Swedish
Size: 100K - 1M
License: cc-by-sa-4.0
Commit: c61eb12
Parent(s):
Update files from the datasets library (from 1.13.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.13.0
- .gitattributes +27 -0
- README.md +195 -0
- dataset_infos.json +1 -0
- dummy/1177/1.0.0/dummy_data.zip +3 -0
- dummy/lt/1.0.0/dummy_data.zip +3 -0
- dummy/wiki/1.0.0/dummy_data.zip +3 -0
- swedish_medical_ner.py +202 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,195 @@
---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
languages:
- sv-SE
licenses:
- cc-by-sa-4-0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
pretty_name: SwedMedNER
---

# Dataset Card for swedish_medical_ner

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/olofmogren/biomedical-ner-data-swedish
- **Paper:** [Named Entity Recognition in Swedish Health Records with Character-Based Deep Bidirectional LSTMs](https://aclanthology.org/W16-5104.pdf)
- **Point of Contact:** [Olof Mogren]([email protected])

### Dataset Summary

SwedMedNER is a named entity recognition dataset for medical text in Swedish. It consists of three subsets, each derived from a different source: the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt), and 1177 Vårdguiden (a.k.a. 1177). The Swedish Wikipedia and Läkartidningen subsets together contain over 790,000 sequences of 60 characters each, while the 1177 Vårdguiden subset is manually annotated and contains 927 sentences with 2,740 annotations, of which 1,574 are _disorder and finding_, 546 are _pharmaceutical drug_, and 620 are _body structure_.

Texts from the Swedish Wikipedia and Läkartidningen were automatically annotated using a list of medical seed terms. Sentences from 1177 Vårdguiden were manually annotated.

### Supported Tasks and Leaderboards

Medical named entity recognition (NER).

### Languages

Swedish (SV).

## Dataset Structure

### Data Instances

Annotated example sentences are shown below:

```
( Förstoppning ) är ett vanligt problem hos äldre.
[ Cox-hämmare ] finns även som gel och sprej.
[ Medicinen ] kan också göra att man blöder lättare eftersom den påverkar { blodets } förmåga att levra sig.
```

Tags are as follows (see the parsing sketch after this list):
- Parentheses, (): Disorder and Finding
- Brackets, []: Pharmaceutical Drug
- Curly brackets, {}: Body Structure
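
For illustration, here is one way to turn a bracket-annotated sentence like the ones above into entity spans. This is only a sketch: the regular expression mirrors the one in the `swedish_medical_ner.py` loading script shown later in this commit, and the helper name `extract_entities` is ours, not part of the dataset.

```python
import re

# Matches one annotated span: [..] drug, (..) disorder/finding, {..} body structure.
PATTERN = r"\[([^\[\]()]+)\]|\(([^\[\]()]+)\)|\{([^\[\]()]+)\}"
TYPE_BY_MARKER = {"(": "Disorder and Finding", "[": "Pharmaceutical Drug", "{": "Body Structure"}


def extract_entities(sentence):
    """Return (start, end, text, type) tuples; start/end include the markers."""
    entities = []
    for m in re.finditer(PATTERN, sentence):
        text = sentence[m.start() + 1 : m.end() - 1].strip()  # drop markers and padding spaces
        entities.append((m.start(), m.end(), text, TYPE_BY_MARKER[sentence[m.start()]]))
    return entities


print(extract_entities("( Förstoppning ) är ett vanligt problem hos äldre."))
# [(0, 16, 'Förstoppning', 'Disorder and Finding')]
```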

Data example:

```
In: data = load_dataset('./datasets/swedish_medical_ner', "wiki")
In: data['train']
Out:
Dataset({
    features: ['sid', 'sentence', 'entities'],
    num_rows: 48720
})

In: data['train'][0]['sentence']
Out: '{kropp} beskrivs i till exempel människokroppen, anatomi och f'
In: data['train'][0]['entities']
Out: {'start': [0], 'end': [7], 'text': ['kropp'], 'type': [2]}
```

### Data Fields

- `sid`: a sequence identifier (the configuration name plus the row index)
- `sentence`: the text sequence, with the annotation markers retained
- `entities`:
  - `start`: the start index
  - `end`: the end index
  - `text`: the text of the entity
  - `type`: the entity type: Disorder and Finding (0), Pharmaceutical Drug (1) or Body Structure (2) (the sketch below shows how to map these ids back to label names)
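
The `type` ids come from a `ClassLabel` feature, so the label names can be recovered with the standard `datasets` API. A minimal sketch, assuming the dataset is loaded from the same local script path as in the example above:

```python
from datasets import load_dataset

data = load_dataset('./datasets/swedish_medical_ner', "wiki")

# The ClassLabel feature backing the integer `type` ids.
type_label = data['train'].features['entities'].feature['type']

example = data['train'][0]
for start, end, text, type_id in zip(
    example['entities']['start'],
    example['entities']['end'],
    example['entities']['text'],
    example['entities']['type'],
):
    print(start, end, text, type_label.int2str(type_id))
# 0 7 kropp Body Structure
```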

### Data Splits

Each configuration ("wiki", "lt", "1177") ships a single `train` split. In the original paper, the authors used the text from Läkartidningen for model training, the Swedish Wikipedia subset for validation, and the 1177.se subset for the final model evaluation, as sketched below.
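
A sketch of reproducing that arrangement by relabelling the three single-split configurations (the `DatasetDict` keys are our choice, not part of the dataset):

```python
from datasets import DatasetDict, load_dataset

path = './datasets/swedish_medical_ner'  # same local script path as in the example above

splits = DatasetDict(
    {
        "train": load_dataset(path, "lt", split="train"),
        "validation": load_dataset(path, "wiki", split="train"),
        "test": load_dataset(path, "1177", split="train"),
    }
)
print({name: ds.num_rows for name, ds in splits.items()})
# {'train': 745753, 'validation': 48720, 'test': 927}
```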

## Dataset Creation

### Curation Rationale

### Source Data

- Swedish Wikipedia;
- Läkartidningen - contains articles from the Swedish journal for medical professionals;
- 1177.se - a web site provided by the Swedish public health care authorities, containing information, counselling, and other health-care services.

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

- A list of seed terms was extracted using SweMeSH and SNOMED CT;
- The following predefined categories were used for the extraction: disorder & finding (sjukdom & symtom), pharmaceutical drug (läkemedel) and body structure (kroppsdel);
- For the _Swedish Wikipedia_ subset, an initial list of medical-domain articles was selected manually. These source articles, as well as their linked articles, were downloaded and automatically annotated by matching the aforementioned seed terms within a context window of 60 characters;
- Articles from the _Läkartidningen_ corpus were downloaded and automatically annotated by matching the aforementioned seed terms within a context window of 60 characters;
- 15 documents from _1177.se_ were downloaded in May 2016 and then manually annotated with the seed terms as support, resulting in 2,740 annotations.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

- Simon Almgren, [email protected]
- Sean Pavlov, [email protected]
- Olof Mogren, [email protected]

Chalmers University of Technology

### Licensing Information

This dataset is released under the [Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)](http://creativecommons.org/licenses/by-sa/4.0/).

### Citation Information

```bibtex
@inproceedings{almgrenpavlovmogren2016bioner,
    title={Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},
    author={Simon Almgren, Sean Pavlov, Olof Mogren},
    booktitle={Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},
    pages={1},
    year={2016}
}
```

### Contributions

Thanks to [@bwang482](https://github.com/bwang482) for adding this dataset.
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"wiki": {"description": "SwedMedNER is a dataset for training and evaluating Named Entity Recognition systems on medical texts in Swedish.\nIt is derived from medical articles on the Swedish Wikipedia, L\u00e4kartidningen, and 1177 V\u00e5rdguiden.\n", "citation": "@inproceedings{almgrenpavlovmogren2016bioner,\n title={Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},\n author={Simon Almgren, Sean Pavlov, Olof Mogren},\n booktitle={Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},\n pages={1},\n year={2016}\n}\n", "homepage": "https://github.com/olofmogren/biomedical-ner-data-swedish", "license": "Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)\nSee http://creativecommons.org/licenses/by-sa/4.0/ for the summary of the license.\n", "features": {"sid": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "entities": {"feature": {"start": {"dtype": "int32", "id": null, "_type": "Value"}, "end": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"num_classes": 3, "names": ["Disorder and Finding", "Pharmaceutical Drug", "Body Structure"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "swedish_medical_ner", "config_name": "wiki", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7044714, "num_examples": 48720, "dataset_name": "swedish_medical_ner"}}, "download_checksums": {"https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/Wiki_annotated_60.txt": {"num_bytes": 3219144, "checksum": "734636dfd409c27539ca7fa57db35f04f9f6bdd8f0af4e385fb32f0c26a702f1"}, "https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/LT_annotated_60.txt": {"num_bytes": 48959042, "checksum": "76a8e0aa4a56039074a15bcd95122cdfdecc4d1c1ddf71d94b58a25d42577dc3"}, "https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/1177_annotated_sentences.txt": {"num_bytes": 94526, "checksum": "7c55f61f57cc1504e47b8b69d01ba763a13a3d3ebce4b0ca9851133392fd000a"}}, "download_size": 52272712, "post_processing_size": null, "dataset_size": 7044714, "size_in_bytes": 59317426}, "lt": {"description": "SwedMedNER is a dataset for training and evaluating Named Entity Recognition systems on medical texts in Swedish.\nIt is derived from medical articles on the Swedish Wikipedia, L\u00e4kartidningen, and 1177 V\u00e5rdguiden.\n", "citation": "@inproceedings{almgrenpavlovmogren2016bioner,\n title={Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},\n author={Simon Almgren, Sean Pavlov, Olof Mogren},\n booktitle={Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},\n pages={1},\n year={2016}\n}\n", "homepage": "https://github.com/olofmogren/biomedical-ner-data-swedish", "license": "Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)\nSee http://creativecommons.org/licenses/by-sa/4.0/ for the summary of the license.\n", "features": {"sid": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": 
"string", "id": null, "_type": "Value"}, "entities": {"feature": {"start": {"dtype": "int32", "id": null, "_type": "Value"}, "end": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"num_classes": 3, "names": ["Disorder and Finding", "Pharmaceutical Drug", "Body Structure"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "swedish_medical_ner", "config_name": "lt", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 97955287, "num_examples": 745753, "dataset_name": "swedish_medical_ner"}}, "download_checksums": {"https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/Wiki_annotated_60.txt": {"num_bytes": 3219144, "checksum": "734636dfd409c27539ca7fa57db35f04f9f6bdd8f0af4e385fb32f0c26a702f1"}, "https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/LT_annotated_60.txt": {"num_bytes": 48959042, "checksum": "76a8e0aa4a56039074a15bcd95122cdfdecc4d1c1ddf71d94b58a25d42577dc3"}, "https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/1177_annotated_sentences.txt": {"num_bytes": 94526, "checksum": "7c55f61f57cc1504e47b8b69d01ba763a13a3d3ebce4b0ca9851133392fd000a"}}, "download_size": 52272712, "post_processing_size": null, "dataset_size": 97955287, "size_in_bytes": 150227999}, "1177": {"description": "SwedMedNER is a dataset for training and evaluating Named Entity Recognition systems on medical texts in Swedish.\nIt is derived from medical articles on the Swedish Wikipedia, L\u00e4kartidningen, and 1177 V\u00e5rdguiden.\n", "citation": "@inproceedings{almgrenpavlovmogren2016bioner,\n title={Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},\n author={Simon Almgren, Sean Pavlov, Olof Mogren},\n booktitle={Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},\n pages={1},\n year={2016}\n}\n", "homepage": "https://github.com/olofmogren/biomedical-ner-data-swedish", "license": "Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)\nSee http://creativecommons.org/licenses/by-sa/4.0/ for the summary of the license.\n", "features": {"sid": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "entities": {"feature": {"start": {"dtype": "int32", "id": null, "_type": "Value"}, "end": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"num_classes": 3, "names": ["Disorder and Finding", "Pharmaceutical Drug", "Body Structure"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "swedish_medical_ner", "config_name": "1177", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 159007, "num_examples": 927, "dataset_name": "swedish_medical_ner"}}, "download_checksums": {"https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/Wiki_annotated_60.txt": {"num_bytes": 3219144, "checksum": 
"734636dfd409c27539ca7fa57db35f04f9f6bdd8f0af4e385fb32f0c26a702f1"}, "https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/LT_annotated_60.txt": {"num_bytes": 48959042, "checksum": "76a8e0aa4a56039074a15bcd95122cdfdecc4d1c1ddf71d94b58a25d42577dc3"}, "https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/1177_annotated_sentences.txt": {"num_bytes": 94526, "checksum": "7c55f61f57cc1504e47b8b69d01ba763a13a3d3ebce4b0ca9851133392fd000a"}}, "download_size": 52272712, "post_processing_size": null, "dataset_size": 159007, "size_in_bytes": 52431719}}
dummy/1177/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1fc1cd9c37e837eaf564a40c443989198aabe189f34dd1febe5b8e6d23c4f13d
size 1256
dummy/lt/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1fc1cd9c37e837eaf564a40c443989198aabe189f34dd1febe5b8e6d23c4f13d
size 1256
dummy/wiki/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1fc1cd9c37e837eaf564a40c443989198aabe189f34dd1febe5b8e6d23c4f13d
size 1256
swedish_medical_ner.py
ADDED
@@ -0,0 +1,202 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""SwedMedNER: A Named Entity Recognition Dataset on medical texts in Swedish"""


import re

import datasets


_CITATION = """\
@inproceedings{almgrenpavlovmogren2016bioner,
    title={Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},
    author={Simon Almgren, Sean Pavlov, Olof Mogren},
    booktitle={Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},
    pages={1},
    year={2016}
}
"""


_DESCRIPTION = """\
SwedMedNER is a dataset for training and evaluating Named Entity Recognition systems on medical texts in Swedish.
It is derived from medical articles on the Swedish Wikipedia, Läkartidningen, and 1177 Vårdguiden.
"""


_LICENSE = """\
Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)
See http://creativecommons.org/licenses/by-sa/4.0/ for the summary of the license.
"""


_URL = "https://github.com/olofmogren/biomedical-ner-data-swedish"


_DATA_URL = "https://raw.githubusercontent.com/olofmogren/biomedical-ner-data-swedish/master/"


class SwedishMedicalNerConfig(datasets.BuilderConfig):
    """BuilderConfig for SwedMedNER"""

    def __init__(self, **kwargs):
        """
        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(SwedishMedicalNerConfig, self).__init__(**kwargs)


class SwedishMedicalNer(datasets.GeneratorBasedBuilder):
    """SwedMedNER: A Named Entity Recognition Dataset on medical texts in Swedish"""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="wiki", version=VERSION, description="The Swedish Wikipedia part of the dataset"),
        datasets.BuilderConfig(name="lt", version=VERSION, description="The Läkartidningen part of the dataset"),
        datasets.BuilderConfig(name="1177", version=VERSION, description="The 1177 Vårdguiden part of the dataset"),
    ]

    def _info(self):
        if self.config.name == "wiki":
            features = datasets.Features(
                {
                    "sid": datasets.Value("string"),
                    "sentence": datasets.Value("string"),
                    "entities": datasets.Sequence(
                        {
                            "start": datasets.Value("int32"),
                            "end": datasets.Value("int32"),
                            "text": datasets.Value("string"),
                            "type": datasets.ClassLabel(
                                names=["Disorder and Finding", "Pharmaceutical Drug", "Body Structure"]
                            ),
                        }
                    ),
                }
            )
        elif self.config.name == "lt":
            features = datasets.Features(
                {
                    "sid": datasets.Value("string"),
                    "sentence": datasets.Value("string"),
                    "entities": datasets.Sequence(
                        {
                            "start": datasets.Value("int32"),
                            "end": datasets.Value("int32"),
                            "text": datasets.Value("string"),
                            "type": datasets.ClassLabel(
                                names=["Disorder and Finding", "Pharmaceutical Drug", "Body Structure"]
                            ),
                        }
                    ),
                }
            )
        elif self.config.name == "1177":
            features = datasets.Features(
                {
                    "sid": datasets.Value("string"),
                    "sentence": datasets.Value("string"),
                    "entities": datasets.Sequence(
                        {
                            "start": datasets.Value("int32"),
                            "end": datasets.Value("int32"),
                            "text": datasets.Value("string"),
                            "type": datasets.ClassLabel(
                                names=["Disorder and Finding", "Pharmaceutical Drug", "Body Structure"]
                            ),
                        }
                    ),
                }
            )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_URL,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "wiki": _DATA_URL + "Wiki_annotated_60.txt",
            "lt": _DATA_URL + "LT_annotated_60.txt",
            "1177": _DATA_URL + "1177_annotated_sentences.txt",
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        if self.config.name == "wiki":
            return [
                datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["wiki"]})
            ]
        elif self.config.name == "lt":
            return [
                datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["lt"]})
            ]
        elif self.config.name == "1177":
            return [
                datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["1177"]})
            ]

    def _generate_examples(self, filepath):
        """Yields examples as (key, example) tuples."""

        def find_type(s, e):
            if (s == "(") and (e == ")"):
                return "Disorder and Finding"
            elif (s == "[") and (e == "]"):
                return "Pharmaceutical Drug"
            elif (s == "{") and (e == "}"):
                return "Body Structure"

        pattern = r"\[([^\[\]()]+)\]|\(([^\[\]()]+)\)|\{([^\[\]()]+)\}"
        with open(filepath, encoding="utf-8") as f:
            for id_, row in enumerate(f):
                sentence = row.replace("\n", "")

                if self.config.name == "1177":
                    targets = [
                        {
                            "start": m.start(0),
                            "end": m.end(0),
                            "text": sentence[m.start(0) + 2 : m.end(0) - 2],
                            "type": find_type(sentence[m.start(0)], sentence[m.end(0) - 1]),
                        }
                        for m in re.finditer(pattern, sentence)
                    ]
                    yield id_, {
                        "sid": self.config.name + "_" + str(id_),
                        "sentence": sentence,
                        "entities": targets if targets else [],
                    }
                else:
                    targets = [
                        {
                            "start": m.start(0),
                            "end": m.end(0),
                            "text": sentence[m.start(0) + 1 : m.end(0) - 1],
                            "type": find_type(sentence[m.start(0)], sentence[m.end(0) - 1]),
                        }
                        for m in re.finditer(pattern, sentence)
                    ]
                    yield id_, {
                        "sid": self.config.name + "_" + str(id_),
                        "sentence": sentence,
                        "entities": targets if targets else [],
                    }
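
Note on the generator above: the stored `start`/`end` offsets span the annotation markers themselves, while `text` holds only the entity surface form (the 1177 branch strips one extra character per side because its annotations are padded with spaces inside the markers). A quick standalone check using the script's regex and a sentence from the dataset card:

```python
import re

# Same pattern as in _generate_examples above.
pattern = r"\[([^\[\]()]+)\]|\(([^\[\]()]+)\)|\{([^\[\]()]+)\}"

sentence = "{kropp} beskrivs i till exempel människokroppen, anatomi och f"
m = next(re.finditer(pattern, sentence))

print(m.start(0), m.end(0))                         # 0 7  -> offsets include the curly brackets
print(repr(sentence[m.start(0):m.end(0)]))          # '{kropp}'
print(repr(sentence[m.start(0) + 1:m.end(0) - 1]))  # 'kropp' -> stored as `text` for wiki/lt
```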