# Dataset Card for Text2Tech Curated Documents

## Dataset Summary

This dataset is the result of converting a UIMA CAS 0.4 JSON export from the Inception annotation tool into a simplified format suitable for Natural Language Processing tasks. Specifically, it provides configurations for Named Entity Recognition (NER), Entity Linking (EL), and Relation Extraction (RE).

The conversion used the `dkpro-cassis` library to load the original annotations and `spaCy` for tokenization. The final dataset is structured similarly to the DFKI-SLT/mobie dataset to ensure compatibility and ease of use with the Hugging Face ecosystem.

This version of the dataset loader provides configurations for:

* **Named Entity Recognition (ner)**: NER tags use spaCy's BILUO tagging scheme.
* **Entity Linking (el)**: Entity mentions are linked to external knowledge bases.
* **Relation Extraction (re)**: Relations between entities are annotated.
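
Each configuration can be loaded independently with the `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hub (the repository id below is a placeholder; substitute the actual path):

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub path of this dataset.
REPO_ID = "amsa02/text2tech-curated-documents"

# One configuration per task.
ner = load_dataset(REPO_ID, "ner", split="train")
el = load_dataset(REPO_ID, "el", split="train")
re_ds = load_dataset(REPO_ID, "re", split="train")

print(ner[0]["docid"], ner[0]["tokens"][:5])
```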

## Supported Tasks and Leaderboards

* **Tasks**: Named Entity Recognition, Entity Linking, Relation Extraction
* **Leaderboards**: More Information Needed

## Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

#### ner

An example of 'train' looks as follows.

```json
{
  "docid": "138",
  "tokens": ["\"", "Samsung", "takes", "aim", "at", "blood", "pressure", "monitoring", "with", "the", "Galaxy", "Watch", "Active", "..."],
  "ner_tags": [0, 1, 0, 0, 0, 2, 3, 4, 0, 0, 5, 6, 7, "..."]
}
```
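
The integer `ner_tags` can be mapped back to BILUO label strings through the dataset's features, assuming the loader defines `ner_tags` as a `Sequence` of `ClassLabel` (the convention the mapping tables below suggest):

```python
# Decode integer ner_tags back to BILUO label strings.
# Assumes `ner` was loaded as in the sketch above.
label_feature = ner.features["ner_tags"].feature  # a ClassLabel

example = ner[0]
labels = [label_feature.int2str(tag) for tag in example["ner_tags"]]
for token, label in zip(example["tokens"], labels):
    print(f"{token}\t{label}")
```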

#### el

An example of 'train' looks as follows.

```json
{
  "docid": "138",
  "tokens": ["\"", "Samsung", "takes", "aim", "at", "blood", "pressure", "monitoring", "with", "the", "Galaxy", "Watch", "Active", "..."],
  "ner_tags": [0, 1, 0, 0, 0, 2, 3, 4, 0, 0, 5, 6, 7, "..."],
  "entity_mentions": [
    {
      "text": "Samsung",
      "start": 1,
      "end": 2,
      "char_start": 1,
      "char_end": 8,
      "type": 0,
      "entity_id": "http://www.wikidata.org/entity/Q124989916"
    },
    "..."
  ]
}
```
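
Each mention carries token offsets, character offsets, and a knowledge-base URI. A small sketch that pulls the Wikidata QID out of each mention, assuming mentions come back as a list of dicts as shown above:

```python
# Extract Wikidata QIDs from entity mentions.
# Assumes `el` was loaded as in the earlier sketch.
example = el[0]
for mention in example["entity_mentions"]:
    entity_id = mention["entity_id"]
    # Wikidata URIs end in the QID, e.g. .../entity/Q124989916
    qid = entity_id.rsplit("/", 1)[-1] if entity_id else None
    print(mention["text"], qid)
```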

#### re

An example of 'train' looks as follows.

```json
{
  "docid": "138",
  "tokens": ["\"", "Samsung", "takes", "aim", "at", "blood", "pressure", "monitoring", "with", "the", "Galaxy", "Watch", "Active", "..."],
  "ner_tags": [0, 1, 0, 0, 0, 2, 3, 4, 0, 0, 5, 6, 7, "..."],
  "relations": [
    {
      "id": "138-0",
      "head_start": 706,
      "head_end": 708,
      "head_type": 2,
      "tail_start": 706,
      "tail_end": 708,
      "tail_type": 2,
      "type": 0
    },
    "..."
  ]
}
```
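
A sketch for resolving a relation's endpoints to token text, assuming `re_ds` was loaded as above, relations come back as a list of dicts, and token offsets are end-exclusive (consistent with "Samsung" spanning `start: 1`, `end: 2` in the el example):

```python
# Print each relation as (head tokens, relation type id, tail tokens).
example = re_ds[0]
tokens = example["tokens"]
for rel in example["relations"]:
    head = " ".join(tokens[rel["head_start"]:rel["head_end"]])
    tail = " ".join(tokens[rel["tail_start"]:rel["tail_end"]])
    print(head, "->", rel["type"], "->", tail)
```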

### Data Fields

#### ner

* `docid`: A `string` feature representing the document identifier.
* `tokens`: A `list` of `string` features representing the tokens in the document.
* `ner_tags`: A `list` of classification labels using spaCy's BILUO tagging scheme:
  - **B-** (Begin): First token of a multi-token entity
  - **I-** (Inside): Inner token of a multi-token entity
  - **L-** (Last): Final token of a multi-token entity
  - **U-** (Unit): Single-token entity
  - **O** (Outside): Non-entity token

The mapping from ID to tag is as follows:

```json
{
  "O": 0,
  "U-Organization": 1,
  "B-Method": 2,
  "I-Method": 3,
  "L-Method": 4,
  "B-Technological System": 5,
  "I-Technological System": 6,
  "L-Technological System": 7,
  "U-Technological System": 8,
  "U-Method": 9,
  "B-Material": 10,
  "L-Material": 11,
  "I-Material": 12,
  "B-Organization": 13,
  "L-Organization": 14,
  "I-Organization": 15,
  "U-Material": 16,
  "B-Technical Field": 17,
  "L-Technical Field": 18,
  "I-Technical Field": 19,
  "U-Technical Field": 20
}
```
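
Since the tags follow spaCy's BILUO scheme, spaCy's training utilities can convert a decoded tag sequence back into character-offset spans. A minimal sketch, with tokens and labels taken from the example above:

```python
import spacy
from spacy.tokens import Doc
from spacy.training import biluo_tags_to_offsets

nlp = spacy.blank("en")

# Rebuild a Doc from the pre-tokenized text, then turn the BILUO tag
# sequence into (start_char, end_char, label) spans.
tokens = ["Samsung", "takes", "aim", "at", "blood", "pressure", "monitoring"]
labels = ["U-Organization", "O", "O", "O", "B-Method", "I-Method", "L-Method"]

doc = Doc(nlp.vocab, words=tokens)
spans = biluo_tags_to_offsets(doc, labels)
print(spans)  # [(0, 7, 'Organization'), (21, 46, 'Method')]
```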

#### el

* `docid`: A `string` feature representing the document identifier.
* `tokens`: A `list` of `string` features representing the tokens in the document.
* `entity_mentions`: A `list` of `struct` features containing:
  * `text`: a `string` feature.
  * `start`: token offset start, an `int32` feature.
  * `end`: token offset end, an `int32` feature.
  * `char_start`: character offset start, an `int32` feature.
  * `char_end`: character offset end, an `int32` feature.
  * `type`: a classification label. The mapping from ID to entity type is as follows:

```json
{
  "Organization": 0,
  "Method": 1,
  "Technological System": 2,
  "Material": 3,
  "Technical Field": 4
}
```

* `entity_id`: a `string` feature representing the entity identifier from a knowledge base (Wikidata URIs in the example above).

#### re

* `docid`: A `string` feature representing the document identifier.
* `tokens`: A `list` of `string` features representing the tokens in the document.
* `ner_tags`: A `list` of classification labels, corresponding to the NER task.
* `relations`: A `list` of `struct` features containing:
  * `id`: a `string` feature representing the relation identifier.
  * `head_start`: token offset start of the head entity, an `int32` feature.
  * `head_end`: token offset end of the head entity, an `int32` feature.
  * `head_type`: a classification label for the head entity type.
  * `tail_start`: token offset start of the tail entity, an `int32` feature.
  * `tail_end`: token offset end of the tail entity, an `int32` feature.
  * `tail_type`: a classification label for the tail entity type.
  * `type`: a classification label for the relation type. The mapping from ID to relation type is as follows:

```json
{
  "ts:executes": 0,
  "org:develops_or_provides": 1,
  "ts:contains": 2,
  "ts:made_of": 3,
  "ts:uses": 4,
  "ts:supports": 5,
  "met:employs": 6,
  "met:processes": 7,
  "mat:transformed_to": 8,
  "org:collaborates": 9,
  "met:creates": 10,
  "met:applied_to": 11,
  "ts:processes": 12
}
```

### Data Splits

Please add information about your data splits here. For example:

* **train**: X samples
* **validation**: Y samples
* **test**: Z samples

## Dataset Creation

The dataset was created by converting JSON files exported from the Inception annotation tool. The `inception_converter.py` script was used to process these files. This script uses the `dkpro-cassis` library to load the UIMA CAS JSON data and `spaCy` for tokenization and for creating the BILUO tags used in the NER task. The data was then split into three separate files for the NER, EL, and RE tasks.
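
The converter script itself is not reproduced here; the following is only a rough sketch of the general approach, assuming `dkpro-cassis` >= 0.8 (which reads UIMA CAS JSON 0.4). The file path, layer name (`custom.Span`), and label attribute (`value`) are placeholders that depend on the Inception project's type system.

```python
import spacy
from cassis import load_cas_from_json
from spacy.training import offsets_to_biluo_tags

nlp = spacy.blank("en")

# Load a UIMA CAS 0.4 JSON export from Inception (path is illustrative).
with open("document_138.json", "rb") as f:
    cas = load_cas_from_json(f)

# Layer and attribute names are placeholders; Inception projects define
# their own span annotation types.
NER_LAYER = "custom.Span"

# Collect (char_start, char_end, label) triples from the CAS.
entities = [(ann.begin, ann.end, ann.value) for ann in cas.select(NER_LAYER)]

# Tokenize the document text and align the character spans to BILUO tags.
doc = nlp(cas.sofa_string)
tags = offsets_to_biluo_tags(doc, entities)
print(list(zip([t.text for t in doc], tags))[:10])
```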
291
+
292
+ ## Considerations for Using the Data
293
+
294
+ ### Social Impact of Dataset
295
+
296
+ More Information Needed
297
+
298
+ ### Discussion of Biases
299
+
300
+ More Information Needed
301
+
302
+ ### Other Known Limitations
303
+
304
+ More Information Needed
305
+
306
+ ## Additional Information
307
+
308
+ ### Dataset Curators
309
+
310
+ Amir Safari
311
+
312
+ ### Licensing Information
313
+
314
+ Please specify the license for this dataset.
315
+
316
+ ### Citation Information
317
+
318
+ Please provide a BibTeX citation for your dataset.

```bibtex
@misc{safari2025text2tech,
  author    = {Amir Safari},
  title     = {Text2Tech Curated Documents},
  year      = {2025},
  publisher = {Hugging Face}
}
```