---
license: apache-2.0
---
# 🏥 Taxonomy Med w/ DCLM

A high-quality medical dataset curated from web data using taxonomy-based filtering, containing **205 billion tokens** of medical content.

## 🎯 Dataset Overview

This dataset is part of the [**Essential-Web**](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) project, which introduces a new paradigm for dataset curation using expressive metadata and simple semantic filters. Unlike traditional medical datasets that require complex domain-specific pipelines, our approach leverages a 12-category taxonomy to efficiently identify and extract high-quality medical content.

**🔬 EAI-Taxonomy Med w/ DCLM** (205B tokens): Documents targeting scientific medical content that exhibit reasoning and are technically correct, further filtered with the DCLM classifier to select instruction-dense documents.

## 🏆 Performance

Our taxonomy-based approach achieves superior results with significantly less curation effort:

| Dataset | CareQA-en | MedMCQA | MedQA-USMLE | PubMedQA | MMLU-Med | Curation Complexity |
|---------|-----------|---------|-------------|----------|----------|---------------------|
| DCLM-baseline | 26.9% | 31.6% | 25.9% | **70.6%** | 31.0% | General web filtering |
| TheBlueScrubs-v1 | 25.1% | 32.2% | 25.3% | 69.2% | 25.7% | Complex domain pipeline |
| EAI-Taxonomy Med | 27.7% | 32.5% | 28.1% | 67.0% | 29.5% | Simple semantic filter |
| EAI-Taxonomy Med w/ DCLM | **31.5%** | **32.7%** | **30.1%** | 68.6% | **39.2%** | Semantic filter + DCLM classifier |

## 🔍 Key Findings

- **Robust Performance**: Achieves best or near-best results across all medical evaluations
- **Above-Chance Performance**: Performs above chance (~25%) on MedQA-USMLE, where baseline methods fail
- **Consistent Improvements**: +13.8% average improvement over existing specialized medical datasets
- **Efficiency**: Strong medical knowledge without complex domain-specific curation pipelines

# Dataset Schema Documentation

## Overview

This dataset contains web-crawled text data with comprehensive metadata, quality signals, and taxonomic classifications. Each record represents a document extracted from web archives with detailed provenance tracking and quality assessment metrics.

## Core Fields

| Field | Type | Description | Path |
|-------|------|-------------|------|
| `id` | `Int64` | Unique identifier based on document hash | `id` |
| `text` | `String` | The main textual content of the document | `text` |

## EAI Taxonomy Classification

A comprehensive hierarchical classification system with primary and secondary labels, and the most important feature of this dataset. The taxonomy is designed to provide detailed subject categorization, document type identification, content quality assessment, and extraction quality indicators.

<details>
<summary><strong>Free Decimal Correspondence (FDC)</strong></summary>

A Dewey Decimal-inspired classification system with 3-level hierarchical labels. The FDC provides nested categories where each successive level refines its parent category. It's designed to be compatible with the Dewey Decimal System for library cataloging.

**Level Structure:**
- **Level 1**: Top-level categories (0-9) covering broad subject areas like General works, Philosophy, Religion, Social Sciences, etc.
- **Level 2**: Sub-divisions (00-99) that refine Level 1 categories
- **Level 3**: Specific categories (000-999) that further refine Level 2 categories

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main classification code | `eai_taxonomy.free_decimal_correspondence.primary.code` |
| Primary Level 1 | Top-level category (0=General works, 1=Philosophy, 2=Religion, 3=Social Sciences, 4=Language, 5=Science, 6=Technology, 7=Arts, 8=Literature, 9=History/Geography) | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_1` |
| Primary Level 2 | Mid-level category | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_2` |
| Primary Level 3 | Specific category | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_3` |
| Secondary Code | Alternative classification code | `eai_taxonomy.free_decimal_correspondence.secondary.code` |
| Secondary Level 1 | Alternative top-level category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_1` |
| Secondary Level 2 | Alternative mid-level category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_2` |
| Secondary Level 3 | Alternative specific category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_3` |

We recommend this viewer for easily navigating the FDC categories when curating filters: https://www.librarything.com/mds

</details>
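
As a sketch of how the FDC fields can drive a filter, the snippet below streams the dataset and keeps documents whose primary FDC code falls in the Medicine range (Dewey-style codes 610-619). The string handling is an assumption; check the code field's type in your copy before relying on it.

```python
from datasets import load_dataset

# Stream so we can filter without downloading the full dataset
ds = load_dataset("EssentialAI/eai-taxonomy-medical-w-dclm", streaming=True)["train"]

def is_medicine(example):
    # Dewey-style "61x" codes correspond to Medicine; assumes the code
    # serializes to something str() renders sensibly (e.g., "616.1")
    code = example["eai_taxonomy"]["free_decimal_correspondence"]["primary"]["code"]
    return str(code).startswith("61")

for doc in ds.filter(is_medicine).take(3):
    print(doc["metadata"]["url"])
```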

<details>
<summary><strong>Bloom's Taxonomy Integration</strong></summary>

Based on Anderson and Krathwohl's 2001 revision of Bloom's Taxonomy of Educational Objectives, providing two complementary categorization dimensions for educational content analysis.

### Knowledge Domain
Categorizes the type of knowledge demonstrated in the document:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main knowledge domain code | `eai_taxonomy.bloom_knowledge_domain.primary.code` |
| Primary Label | Main knowledge domain label | `eai_taxonomy.bloom_knowledge_domain.primary.label` |
| Secondary Code | Alternative knowledge domain code | `eai_taxonomy.bloom_knowledge_domain.secondary.code` |
| Secondary Label | Alternative knowledge domain label | `eai_taxonomy.bloom_knowledge_domain.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Factual | Basic elements to learn or solve problems |
| `2` | Conceptual | Interrelationships between basic elements within larger context |
| `3` | Procedural | Methods and techniques in the discipline |
| `4` | Metacognitive | Awareness of how learning works in relation to oneself |

### Cognitive Processing Level
Assesses the learning and thinking skill levels demonstrated by the document author:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main cognitive process code | `eai_taxonomy.bloom_cognitive_process.primary.code` |
| Primary Label | Main cognitive process label | `eai_taxonomy.bloom_cognitive_process.primary.label` |
| Secondary Code | Alternative cognitive process code | `eai_taxonomy.bloom_cognitive_process.secondary.code` |
| Secondary Label | Alternative cognitive process label | `eai_taxonomy.bloom_cognitive_process.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Remember | Retrieve relevant knowledge from memory |
| `2` | Understand | Determine meaning of instructional messages |
| `3` | Apply | Use a procedure in a given situation |
| `4` | Analyze | Break materials into components and determine relationships |
| `5` | Evaluate | Make judgments based on criteria and standards |
| `6` | Create | Create new or original work |

</details>
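
The numeric codes make "higher-order" filters easy to express. Here is a minimal sketch, assuming the codes are stored as integers matching the tables above, that keeps documents demonstrating procedural knowledge at the Apply level or above:

```python
from datasets import load_dataset

ds = load_dataset("EssentialAI/eai-taxonomy-medical-w-dclm", streaming=True)["train"]

def higher_order(example):
    tax = example["eai_taxonomy"]
    knowledge = tax["bloom_knowledge_domain"]["primary"]["code"]  # 3 = Procedural
    process = tax["bloom_cognitive_process"]["primary"]["code"]   # 3 = Apply
    return knowledge == 3 and 3 <= process <= 6

higher_order_docs = ds.filter(higher_order)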

<details>
<summary><strong>Document Characteristics</strong></summary>

### Document Type v1
In-house classification of common web document types and formats:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main document type code | `eai_taxonomy.document_type_v1.primary.code` |
| Primary Label | Main document type label | `eai_taxonomy.document_type_v1.primary.label` |
| Secondary Code | Alternative document type code | `eai_taxonomy.document_type_v1.secondary.code` |
| Secondary Label | Alternative document type label | `eai_taxonomy.document_type_v1.secondary.label` |

**Possible Values:**
| Code | Label | Examples |
|------|-------|----------|
| `-1` | Abstain | Unable to classify |
| `1` | News/Editorial | CNN articles, opinion columns |
| `2` | Academic/Research | ArXiv papers, research articles |
| `3` | Reference/Encyclopedic/Educational | FAQs, Wikipedia entries |
| `4` | Code/Software | GitHub repos, code examples |
| `5` | Social/Forum | Conversation threads, Q&A boards |
| `6` | Promotional/Advertisement | Product pages, calls to action |
| `7` | Search/Directory/Bibliography | Link pages, search results |
| `8` | Adult/Pornographic | Adult content |
| `9` | Personal/Misc | Blogs, user profiles |
| `10` | Machine-Generated | Lorem ipsum, garbled text |
| `11` | Legal/Regulatory | Contracts, terms of service |
| `12` | Government/Political | Legislation, press releases |
| `13` | Literary/Creative | Poems, short stories |
| `14` | Reviews/Critiques | Film critiques, product reviews |
| `15` | E-Commerce/Marketplace | eBay listings, Amazon pages |
| `16` | Images/Videos/Audio | YouTube videos, Imgur pages |
| `17` | Other/Unclassified | Documents that resist classification |

### Document Type v2
Updated classification based on the WebOrganizer taxonomy, with refined categories for improved document classification accuracy:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main document type code (v2) | `eai_taxonomy.document_type_v2.primary.code` |
| Primary Label | Main document type label (v2) | `eai_taxonomy.document_type_v2.primary.label` |
| Secondary Code | Alternative document type code (v2) | `eai_taxonomy.document_type_v2.secondary.code` |
| Secondary Label | Alternative document type label (v2) | `eai_taxonomy.document_type_v2.secondary.label` |

**Complete Value Mapping:**
| Code | Label | Examples |
|------|-------|----------|
| `-1` | Abstain | Documents requiring human review |
| `1` | About (Org.) | Company about pages, mission statements |
| `2` | About (Personal) | Personal bios, LinkedIn profiles |
| `3` | Academic Writing | Research papers, abstracts, dissertations |
| `4` | Audio Transcript | Interview transcripts, court records, captions |
| `5` | Comment Section | Reddit threads, blog comments |
| `6` | Content Listing | Site maps, product catalogs, directory listings |
| `7` | Creative Writing | Song lyrics, novel excerpts, poetry |
| `8` | Documentation | API docs, README files, user manuals |
| `9` | FAQ | FAQ pages, Q&A lists |
| `10` | Knowledge Article | Wikipedia articles, Britannica entries |
| `11` | Legal Notices | Privacy policies, license agreements, terms of service |
| `12` | Listicle | Buzzfeed-style articles, "Top 10" lists |
| `13` | News (Org.) | Government blog posts, corporate announcements |
| `14` | News Article | Newspaper articles, CNN content, breaking news |
| `15` | Nonfiction Writing | Editorials, obituaries, memoirs, opinion pieces |
| `16` | Personal Blog | Personal journals, diary entries, lifestyle blogs |
| `17` | Product Page | Product descriptions, course offerings, sales pages |
| `18` | Q&A Forum | Quora posts, Stack Exchange discussions |
| `19` | Spam / Ads | SEO keyword stuffing, promotional spam |
| `20` | Structured Data | Datasheets, glossaries, JSON files, databases |
| `21` | Customer Support | Help articles, troubleshooting guides |
| `22` | Truncated | Paywalled sites, image galleries, partial content |
| `23` | Tutorial | Cooking recipes, WikiHow pages, step-by-step guides |
| `24` | User Review | Yelp reviews, TripAdvisor feedback, product reviews |
| `25` | Other/Unclassified | Miscellaneous documents not fitting other categories |

### Extraction Artifacts
Assessment of technical extraction quality, identifying issues from HTML-to-text conversion:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main extraction artifact code | `eai_taxonomy.extraction_artifacts.primary.code` |
| Primary Label | Main extraction artifact label | `eai_taxonomy.extraction_artifacts.primary.label` |
| Secondary Code | Alternative extraction artifact code | `eai_taxonomy.extraction_artifacts.secondary.code` |
| Secondary Label | Alternative extraction artifact label | `eai_taxonomy.extraction_artifacts.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `0` | No Artifacts | Clean text with no leftover HTML or irrelevant elements |
| `1` | Leftover HTML | HTML/code artifacts remaining after extraction |
| `2` | Text Extraction Errors | Broken math expressions, encoding errors, improperly parsed tables |
| `3` | Irrelevant Content | Headers, footers, nav menus extracted by mistake |
| `4` | Indeterminate | Insufficient content to judge |

### Missing Content
Assessment of content completeness and extraction success:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main missing content code | `eai_taxonomy.missing_content.primary.code` |
| Primary Label | Main missing content label | `eai_taxonomy.missing_content.primary.label` |
| Secondary Code | Alternative missing content code | `eai_taxonomy.missing_content.secondary.code` |
| Secondary Label | Alternative missing content label | `eai_taxonomy.missing_content.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `0` | No Missing Content | Complete and coherent text |
| `1` | Truncated Snippets | Obvious "...", incomplete paragraphs, cut-off text |
| `2` | Click Here References | "Download here", "Click here" without linked content |
| `3` | Incoherent Flow | Unreadable or illogical flow due to missing context |
| `4` | Missing Images or Figures | Placeholders or references to missing visual content |
| `5` | Missing Referenced Data | References to absent tables/datasets (e.g., "See Table 3") |
| `6` | Indeterminate | Insufficient content to judge |

### Text Structure Information

| Field | Type | Description | Path |
|-------|------|-------------|------|
| Line Start Indices | `List[Int32]` | Starting indices of each line | `line_start_n_end_idx.line_start_idx` |
| Line End Indices | `List[Int32]` | Ending indices of each line | `line_start_n_end_idx.line_end_idx` |

</details>
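
A common use of these labels is to keep only cleanly extracted, complete documents. A minimal sketch, under the same integer-code assumption as above:

```python
from datasets import load_dataset

ds = load_dataset("EssentialAI/eai-taxonomy-medical-w-dclm", streaming=True)["train"]

def is_clean(example):
    tax = example["eai_taxonomy"]
    # 0 = "No Artifacts" and 0 = "No Missing Content" in the tables above
    return (
        tax["extraction_artifacts"]["primary"]["code"] == 0
        and tax["missing_content"]["primary"]["code"] == 0
    )

clean_docs = ds.filter(is_clean)
```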

<details>
<summary><strong>Content Quality Dimensions</strong></summary>

Quality assessment inspired by the NaturalReasoning and FineWeb efforts to categorize web data by information sophistication.

### Reasoning Depth
Assesses the complexity and sophistication of logical reasoning in the document:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main reasoning depth code | `eai_taxonomy.reasoning_depth.primary.code` |
| Primary Label | Main reasoning depth label | `eai_taxonomy.reasoning_depth.primary.label` |
| Secondary Code | Alternative reasoning depth code | `eai_taxonomy.reasoning_depth.secondary.code` |
| Secondary Label | Alternative reasoning depth label | `eai_taxonomy.reasoning_depth.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | No Reasoning | Facts present but no evidence of reasoning |
| `2` | Basic Reasoning | Basic analysis with minimal explanation and summarization |
| `3` | Intermediate Reasoning | Some logical steps connecting ideas and structured thinking |
| `4` | Advanced Reasoning | Multi-step reasoning and thorough analysis with well-developed explanations |
| `5` | Exceptional Reasoning | Novel abstractions, theoretical frameworks, long chain-of-thought, original insights, or proofs |
| `6` | Indeterminate | Insufficient context to judge |

### Technical Correctness
Evaluates the accuracy and precision of technical information:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main technical correctness code | `eai_taxonomy.technical_correctness.primary.code` |
| Primary Label | Main technical correctness label | `eai_taxonomy.technical_correctness.primary.label` |
| Secondary Code | Alternative technical correctness code | `eai_taxonomy.technical_correctness.secondary.code` |
| Secondary Label | Alternative technical correctness label | `eai_taxonomy.technical_correctness.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Technically Flawed | Significant errors undermining content validity |
| `2` | Partially Correct | Some correctness but contains flaws, omissions, or errors |
| `3` | Mostly Correct | Technical correctness with minor flaws or incomplete explanations |
| `4` | Highly Correct | High technical correctness with precise definitions and clear explanations |
| `5` | Exceptionally Correct | Exceptional technical correctness with formal proofs and flawless content |
| `6` | Not Applicable/Indeterminate | No technical content or insufficient context |

### Education Level
Assesses the educational background required to comprehend the content:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main education level code | `eai_taxonomy.education_level.primary.code` |
| Primary Label | Main education level label | `eai_taxonomy.education_level.primary.label` |
| Secondary Code | Alternative education level code | `eai_taxonomy.education_level.secondary.code` |
| Secondary Label | Alternative education level label | `eai_taxonomy.education_level.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | General Audience | Accessible to anyone with basic literacy; simple terms |
| `2` | High School Level | Requires high school education; specialized terminology explained for non-experts |
| `3` | Undergraduate Level | Requires college education; uses specialized terminology and assumes background knowledge |
| `4` | Graduate/Expert Level | Requires graduate education or domain expertise; assumes deep background knowledge |
| `5` | Indeterminate | Insufficient content to judge educational level |

</details>
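
These two dimensions capture the "exhibit reasoning and are technically correct" criterion described in the overview. The sketch below shows that style of filter with illustrative thresholds; the exact cutoffs used to curate this dataset may differ:

```python
from datasets import load_dataset

ds = load_dataset("EssentialAI/eai-taxonomy-medical-w-dclm", streaming=True)["train"]

def reasoned_and_correct(example):
    tax = example["eai_taxonomy"]
    depth = tax["reasoning_depth"]["primary"]["code"]          # 3 = Intermediate
    correct = tax["technical_correctness"]["primary"]["code"]  # 3 = Mostly Correct
    # The upper bound excludes the Indeterminate / Not Applicable codes (6)
    return 3 <= depth <= 5 and 3 <= correct <= 5

reasoned_docs = ds.filter(reasoned_and_correct)
```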

<details>
<summary><strong>Metadata</strong></summary>

## Metadata Structure

The `metadata` field contains a nested structure with web archive information:

| Field | Type | Description | Path |
|-------|------|-------------|------|
| **URL Information** | | | |
| URL | `String` | Original URL of the document | `metadata.url` |
| Source Domain | `String` | Domain name of the source | `metadata.source_domain` |
| Snapshot ID | `String` | Identifier for the web archive snapshot | `metadata.snapshot_id` |
| **WARC Metadata** | | WARC (Web ARChive) format metadata | |
| Content Length | `String` | Size of the content | `metadata.warc_metadata.Content-Length` |
| Content Type | `String` | MIME type of the content | `metadata.warc_metadata.Content-Type` |
| Block Digest | `String` | Checksum of the WARC block | `metadata.warc_metadata.WARC-Block-Digest` |
| Concurrent To | `String` | Related WARC records | `metadata.warc_metadata.WARC-Concurrent-To` |
| Date | `String` | Timestamp of the crawl | `metadata.warc_metadata.WARC-Date` |
| IP Address | `String` | Source server IP address | `metadata.warc_metadata.WARC-IP-Address` |
| Payload Type | `String` | Identified content type | `metadata.warc_metadata.WARC-Identified-Payload-Type` |
| Payload Digest | `String` | Checksum of the payload | `metadata.warc_metadata.WARC-Payload-Digest` |
| Record ID | `String` | Unique WARC record identifier | `metadata.warc_metadata.WARC-Record-ID` |
| Target URI | `String` | Original target URL | `metadata.warc_metadata.WARC-Target-URI` |
| Truncated | `String` | Truncation status | `metadata.warc_metadata.WARC-Truncated` |
| Type | `String` | WARC record type | `metadata.warc_metadata.WARC-Type` |
| Warcinfo ID | `String` | Associated warcinfo record | `metadata.warc_metadata.WARC-Warcinfo-ID` |
| **Additional Info** | | | |
| WARC Info | `String` | Additional WARC information | `metadata.warc_info` |

</details>
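
As a quick sanity check on provenance, the nested fields can be read directly. For example, tallying source domains over a small streamed sample:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("EssentialAI/eai-taxonomy-medical-w-dclm", streaming=True)["train"]

# Count source domains over the first 1,000 streamed examples
domains = Counter(ex["metadata"]["source_domain"] for ex in ds.take(1000))
print(domains.most_common(10))
```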

<details>
<summary><strong>Quality Signals</strong></summary>

The dataset includes two comprehensive quality assessment frameworks:

## Red Pajama v2 Quality Metrics

Text quality indicators derived from the Red Pajama v2 filtering pipeline:

### Content Structure Metrics
| Metric | Description | Path |
|--------|-------------|------|
| Original Length | Original document length | `quality_signals.red_pajama_v2.ccnet_original_length` |
| Original Lines | Number of lines in original document | `quality_signals.red_pajama_v2.ccnet_original_nlines` |
| Sentence Count | Total sentence count | `quality_signals.red_pajama_v2.rps_doc_num_sentences` |
| Word Count | Total word count | `quality_signals.red_pajama_v2.rps_doc_word_count` |
| Mean Word Length | Average word length | `quality_signals.red_pajama_v2.rps_doc_mean_word_length` |

### Language Quality Metrics
| Metric | Description | Path |
|--------|-------------|------|
| Stop Word Fraction | Proportion of stop words | `quality_signals.red_pajama_v2.rps_doc_stop_word_fraction` |
| Unique Words Fraction | Fraction of unique words | `quality_signals.red_pajama_v2.rps_doc_frac_unique_words` |
| All Caps Words | Fraction of words in all capitals | `quality_signals.red_pajama_v2.rps_doc_frac_all_caps_words` |
| Non-Alphabetic Words | Fraction of non-alphabetic words | `quality_signals.red_pajama_v2.rps_doc_frac_no_alph_words` |
| Unigram Entropy | Entropy measure of word distribution | `quality_signals.red_pajama_v2.rps_doc_unigram_entropy` |

### Content Pattern Analysis
| Metric | Description | Path |
|--------|-------------|------|
| Curly Bracket Density | Curly bracket density (code indicator) | `quality_signals.red_pajama_v2.rps_doc_curly_bracket` |
| Symbol-to-Word Ratio | Symbol-to-word ratio | `quality_signals.red_pajama_v2.rps_doc_symbol_to_word_ratio` |
| Ellipsis Line Endings | Fraction of lines ending with an ellipsis | `quality_signals.red_pajama_v2.rps_doc_frac_lines_end_with_ellipsis` |
| Lorem Ipsum Detection | Lorem ipsum text detection | `quality_signals.red_pajama_v2.rps_doc_lorem_ipsum` |
| Offensive Content | Potentially offensive content detection | `quality_signals.red_pajama_v2.rps_doc_ldnoobw_words` |
| UT1 Blacklist | UT1 blacklist filtering score | `quality_signals.red_pajama_v2.rps_doc_ut1_blacklist` |

### Duplication Detection
| Metric | Description | Path |
|--------|-------------|------|
| 5-gram Duplication | Fraction of characters in duplicated 5-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_5grams` |
| 6-gram Duplication | Fraction of characters in duplicated 6-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_6grams` |
| 7-gram Duplication | Fraction of characters in duplicated 7-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_7grams` |
| 8-gram Duplication | Fraction of characters in duplicated 8-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_8grams` |
| 9-gram Duplication | Fraction of characters in duplicated 9-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_9grams` |
| 10-gram Duplication | Fraction of characters in duplicated 10-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_10grams` |
| Top 2-gram Coverage | Fraction of characters in the most frequent 2-gram | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_2gram` |
| Top 3-gram Coverage | Fraction of characters in the most frequent 3-gram | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_3gram` |
| Top 4-gram Coverage | Fraction of characters in the most frequent 4-gram | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_4gram` |

### Domain Importance Scores
| Metric | Description | Path |
|--------|-------------|------|
| Books Importance | Similarity to book content | `quality_signals.red_pajama_v2.rps_doc_books_importance` |
| Books Importance (Length Corrected) | Length-corrected books similarity | `quality_signals.red_pajama_v2.rps_doc_books_importance_length_correction` |
| OpenWebText Importance | Similarity to OpenWebText | `quality_signals.red_pajama_v2.rps_doc_openwebtext_importance` |
| OpenWebText Importance (Length Corrected) | Length-corrected OpenWebText similarity | `quality_signals.red_pajama_v2.rps_doc_openwebtext_importance_length_correction` |
| Wikipedia Importance | Similarity to Wikipedia | `quality_signals.red_pajama_v2.rps_doc_wikipedia_importance` |
| Wikipedia Importance (Length Corrected) | Length-corrected Wikipedia similarity | `quality_signals.red_pajama_v2.rps_doc_wikipedia_importance_length_correction` |

## FastText Classification Scores

Domain and content type classification probabilities:

| Metric | Description | Path |
|--------|-------------|------|
| DCLM Score | DataComp-LM classifier score | `quality_signals.fasttext.dclm` |
| English Confidence | English language confidence | `quality_signals.fasttext.english` |
| Educational Content | Educational content approximation | `quality_signals.fasttext.fineweb_edu_approx` |
| General Math | General mathematics content | `quality_signals.fasttext.eai_general_math` |
| Web Math | Web-based mathematics content (OpenWebMath) | `quality_signals.fasttext.eai_open_web_math` |
| Code Content | Code content detection | `quality_signals.fasttext.eai_web_code` |

</details>
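
The DCLM score is the signal this dataset already combines with the taxonomy filter, so it is a natural knob for further sub-selection. A minimal sketch; the 0.9 threshold is illustrative, not the value used during curation:

```python
from datasets import load_dataset

ds = load_dataset("EssentialAI/eai-taxonomy-medical-w-dclm", streaming=True)["train"]

# Keep only documents the DCLM fastText classifier scores highly
instruction_dense = ds.filter(
    lambda ex: ex["quality_signals"]["fasttext"]["dclm"] > 0.9
)
```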

## How to Load the Dataset

This section provides examples of how to load the `EssentialAI/eai-taxonomy-medical-w-dclm` dataset using different Python libraries and frameworks.

### Using Hugging Face Datasets (Standard Method)

The simplest way to load the dataset is using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("EssentialAI/eai-taxonomy-medical-w-dclm")

# View dataset structure
print(dataset)
print(f"Number of examples: {len(dataset['train'])}")
```

You can also load the dataset in streaming mode to avoid downloading the entire dataset at once:

```python
from datasets import load_dataset

# Load in streaming mode
dataset = load_dataset("EssentialAI/eai-taxonomy-medical-w-dclm", streaming=True)
data_stream = dataset["train"]

# Iterate through examples
for example in data_stream.take(5):
    print(example)
```

### Using PySpark

For large-scale distributed processing, you can load the dataset using PySpark with the `pyspark_huggingface` library:

```python
# First install the required library:
# pip install pyspark_huggingface

import pyspark_huggingface  # importing registers the "huggingface" data source
from pyspark.sql import SparkSession

# Initialize Spark session
spark = SparkSession.builder.appName("EAI-Taxonomy-Medical-w-DCLM").getOrCreate()

# Load the dataset using the "huggingface" data source
df = spark.read.format("huggingface").load("EssentialAI/eai-taxonomy-medical-w-dclm")

# Basic dataset exploration
print(f"Dataset shape: {df.count()} rows, {len(df.columns)} columns")
df.show(10)
df.printSchema()

# Load only specific columns for efficiency
df_subset = (
    spark.read.format("huggingface")
    .option("columns", '["id", "text"]')  # e.g., just the core fields
    .load("EssentialAI/eai-taxonomy-medical-w-dclm")
)

# Run SQL queries on the dataset
df.createOrReplaceTempView("eai_taxonomy_medical_w_dclm_dataset")
result = spark.sql("""
    SELECT COUNT(*) as total_examples
    FROM eai_taxonomy_medical_w_dclm_dataset
""")
result.show()
```
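
Spark should read the nested taxonomy and quality columns as structs, so they can be addressed with dotted paths in SQL. A hedged example, reusing the `spark` session and temp view from the snippet above and assuming the struct layout documented earlier, that tallies primary FDC codes:

```python
# Tally primary FDC codes (assumes nested struct columns as documented above)
fdc_counts = spark.sql("""
    SELECT eai_taxonomy.free_decimal_correspondence.primary.code AS fdc_code,
           COUNT(*) AS n_docs
    FROM eai_taxonomy_medical_w_dclm_dataset
    GROUP BY eai_taxonomy.free_decimal_correspondence.primary.code
    ORDER BY n_docs DESC
    LIMIT 10
""")
fdc_counts.show()
```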

### Using Daft

Daft is a modern DataFrame library optimized for machine learning workloads. You can load the dataset directly from Hugging Face:

```python
import daft

# Load the dataset directly from the Hub (Daft evaluates lazily)
df = daft.read_parquet("hf://datasets/EssentialAI/eai-taxonomy-medical-w-dclm")

# Basic exploration
print("Dataset schema:")
print(df.schema())

print("First 5 rows:")
df.show(5)
```

If you need to access private datasets or use authentication:

```python
import daft
from daft.io import IOConfig, HTTPConfig

io_config = IOConfig(http=HTTPConfig(bearer_token="your_token"))
df = daft.read_parquet("hf://datasets/EssentialAI/eai-taxonomy-medical-w-dclm", io_config=io_config)
```

### Installation Requirements

Make sure you have the required libraries installed:

```bash
# For Hugging Face datasets
pip install datasets

# For PySpark with Hugging Face integration
pip install pyspark_huggingface

# For Daft
pip install daft
```

## 📝 Citation

```bibtex
[Citation information to be added]
```