---
license: apache-2.0
---
# 🌐 Essential-Web: FDC Level-2 Partitioned Dataset

## 📋 Dataset Description

This dataset contains a 1-trillion-token sample from Essential-Web, partitioned by Free Decimal Correspondence (FDC) level-2 categories. Essential-Web is a 24-trillion-token web dataset with extensive document-level metadata, designed to enable rapid dataset curation through SQL-like filtering.

## 🔍 Free Decimal Correspondence (FDC)

The FDC taxonomy is an open classification system inspired by the Dewey Decimal System. Level-2 categories provide broad subject-matter classifications that let researchers quickly identify and filter relevant content domains.

For help navigating FDC codes, see: https://www.librarything.com/mds

## ⚙️ Dataset Creation

The source documents were classified with EAI-Taxonomy-0.5b, a classifier trained on synthetic labels generated by open-weight LLMs. Classification required inference over 23.6 billion web documents, consuming approximately 90,000 AMD MI300x GPU-hours.

## 🎯 Performance

Datasets curated from Essential-Web using simple metadata filters have demonstrated competitive performance:
- 🧮 Math domain: within 15.3% of state-of-the-art
- 💻 Web code: 14.3% above state-of-the-art
- 🔬 STEM: 24.5% above state-of-the-art
- 🩺 Medical: 8.6% above state-of-the-art

## 🏗️ Dataset Structure

The dataset is organized by FDC level-2 categories, a Dewey Decimal-inspired taxonomy for classifying web content by subject matter. Files live in the `data/` directory, with one partition per category:

```
data/fdc_level=02/
data/fdc_level=05/
data/fdc_level=10/
...
```

Each partition contains documents labeled with their corresponding FDC classification along with associated metadata.
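Because the partitions follow a Hive-style `fdc_level=XX` layout, a single category can be targeted on its own. A minimal sketch, assuming the directory layout shown above (the helper and the example code `"51"` are illustrative; adjust to the partitions actually present):

```python
def partition_glob(fdc_level_2: str) -> str:
    """Build the Hive-style glob for one FDC level-2 partition,
    following the data/fdc_level=XX layout shown above."""
    return f"data/fdc_level={fdc_level_2}/*"


print(partition_glob("51"))  # data/fdc_level=51/*

# Hypothetical usage with 🤗 datasets (not executed here):
# from datasets import load_dataset
# ds = load_dataset("Research-EAI/eai-taxonomy-math-w-fm",
#                   data_files={"train": partition_glob("51")})
```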


# Dataset Schema Documentation

## Overview

This dataset contains web-crawled text data with comprehensive metadata, quality signals, and taxonomic classifications. Each record represents a document extracted from web archives, with detailed provenance tracking and quality-assessment metrics.

## Core Fields

| Field | Type | Description | Path |
|-------|------|-------------|------|
| `id` | `Int64` | Unique identifier for each document | `id` |
| `text` | `String` | The main textual content of the document | `text` |

## EAI Taxonomy Classification

A comprehensive hierarchical classification system with primary and secondary labels; this is the most important feature of the dataset. The taxonomy provides detailed subject categorization, document type identification, content quality assessment, and extraction quality indicators.

<details>
<summary><strong>Free Decimal Correspondence (FDC)</strong></summary>

A Dewey Decimal-inspired classification system with 3-level hierarchical labels. The FDC provides nested categories in which each successive level refines its parent category. It is designed to be compatible with the Dewey Decimal System for library cataloging.

**Level Structure:**
- **Level 1**: Top-level categories (0-9) covering broad subject areas such as General works, Philosophy, Religion, Social Sciences, etc.
- **Level 2**: Sub-divisions (00-99) that refine Level 1 categories
- **Level 3**: Specific categories (000-999) that further refine Level 2 categories

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main classification code | `eai_taxonomy.free_decimal_correspondence.primary.code` |
| Primary Level 1 | Top-level category (0=General works, 1=Philosophy, 2=Religion, 3=Social Sciences, 4=Language, 5=Science, 6=Technology, 7=Arts, 8=Literature, 9=History/Geography) | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_1` |
| Primary Level 2 | Mid-level category | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_2` |
| Primary Level 3 | Specific category | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_3` |
| Secondary Code | Alternative classification code | `eai_taxonomy.free_decimal_correspondence.secondary.code` |
| Secondary Level 1 | Alternative top-level category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_1` |
| Secondary Level 2 | Alternative mid-level category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_2` |
| Secondary Level 3 | Alternative specific category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_3` |

We recommend this viewer for navigating the FDC categories when curating filters: https://www.librarything.com/mds

</details>
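When curating filters it is often enough to bucket documents by the leading digit of the primary FDC code. A small sketch using the level-1 labels from the table above (the lookup helper is illustrative, not part of the dataset):

```python
# Level-1 labels keyed by the leading digit of an FDC code,
# mirroring the Primary Level 1 row of the table above.
FDC_LEVEL_1 = {
    "0": "General works", "1": "Philosophy", "2": "Religion",
    "3": "Social Sciences", "4": "Language", "5": "Science",
    "6": "Technology", "7": "Arts", "8": "Literature",
    "9": "History/Geography",
}


def fdc_level_1(code: str) -> str:
    """Return the level-1 label for an FDC code such as '510'."""
    return FDC_LEVEL_1[str(code)[0]]


print(fdc_level_1("510"))  # Science
```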

<details>
<summary><strong>Bloom's Taxonomy Integration</strong></summary>

Based on Anderson and Krathwohl's 2001 revision of Bloom's Taxonomy of Educational Objectives, providing two complementary dimensions for analyzing educational content.

### Cognitive Process
Assesses the learning and thinking skill levels demonstrated by the document author:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main cognitive process code | `eai_taxonomy.bloom_cognitive_process.primary.code` |
| Primary Label | Main cognitive process label | `eai_taxonomy.bloom_cognitive_process.primary.label` |
| Secondary Code | Alternative cognitive process code | `eai_taxonomy.bloom_cognitive_process.secondary.code` |
| Secondary Label | Alternative cognitive process label | `eai_taxonomy.bloom_cognitive_process.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Remember | Retrieve relevant knowledge from memory |
| `2` | Understand | Determine the meaning of instructional messages |
| `3` | Apply | Use a procedure in a given situation |
| `4` | Analyze | Break materials into components and determine relationships |
| `5` | Evaluate | Make judgments based on criteria and standards |
| `6` | Create | Produce new or original work |

### Knowledge Domain
Categorizes the type of knowledge demonstrated in the document:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main knowledge domain code | `eai_taxonomy.bloom_knowledge_domain.primary.code` |
| Primary Label | Main knowledge domain label | `eai_taxonomy.bloom_knowledge_domain.primary.label` |
| Secondary Code | Alternative knowledge domain code | `eai_taxonomy.bloom_knowledge_domain.secondary.code` |
| Secondary Label | Alternative knowledge domain label | `eai_taxonomy.bloom_knowledge_domain.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Factual | Basic elements needed to learn or solve problems |
| `2` | Conceptual | Interrelationships between basic elements within a larger context |
| `3` | Procedural | Methods and techniques of the discipline |
| `4` | Metacognitive | Awareness of how learning works in relation to oneself |

</details>
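Because the cognitive-process codes are ordered, selecting for higher-order thinking reduces to a numeric comparison. A minimal sketch (the field nesting is assumed from the dotted paths in the tables above):

```python
ANALYZE = 4  # codes 4-6 cover Analyze, Evaluate, Create


def shows_higher_order_thinking(doc: dict) -> bool:
    """True if the primary Bloom cognitive-process code is Analyze or above.
    Abstain (-1) and lower codes are rejected."""
    code = doc["eai_taxonomy"]["bloom_cognitive_process"]["primary"]["code"]
    return code >= ANALYZE


doc = {"eai_taxonomy": {"bloom_cognitive_process": {"primary": {"code": 5}}}}
print(shows_higher_order_thinking(doc))  # True
```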

<details>
<summary><strong>Document Characteristics</strong></summary>

### Document Type v1
In-house classification of common web document types and formats:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main document type code | `eai_taxonomy.document_type_v1.primary.code` |
| Primary Label | Main document type label | `eai_taxonomy.document_type_v1.primary.label` |
| Secondary Code | Alternative document type code | `eai_taxonomy.document_type_v1.secondary.code` |
| Secondary Label | Alternative document type label | `eai_taxonomy.document_type_v1.secondary.label` |

**Possible Values:**
| Code | Label | Examples |
|------|-------|----------|
| `-1` | Abstain | Unable to classify |
| `1` | News/Editorial | CNN articles, opinion columns |
| `2` | Academic/Research | ArXiv papers, research articles |
| `3` | Reference/Encyclopedic/Educational | FAQs, Wikipedia entries |
| `4` | Code/Software | GitHub repos, code examples |
| `5` | Social/Forum | Conversation threads, Q&A boards |
| `6` | Promotional/Advertisement | Product pages, calls to action |
| `7` | Search/Directory/Bibliography | Link pages, search results |
| `8` | Adult/Pornographic | Adult content |
| `9` | Personal/Misc | Blogs, user profiles |
| `10` | Machine-Generated | Lorem ipsum, garbled text |
| `11` | Legal/Regulatory | Contracts, terms of service |
| `12` | Government/Political | Legislation, press releases |
| `13` | Literary/Creative | Poems, short stories |
| `14` | Reviews/Critiques | Film critiques, product reviews |
| `15` | E-Commerce/Marketplace | eBay listings, Amazon pages |
| `16` | Images/Videos/Audio | YouTube videos, Imgur pages |
| `17` | Other/Unclassified | Documents that resist classification |

### Document Type v2
Updated classification based on the WebOrganizer taxonomy, with refined categories:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main document type code (v2) | `eai_taxonomy.document_type_v2.primary.code` |
| Primary Label | Main document type label (v2) | `eai_taxonomy.document_type_v2.primary.label` |
| Secondary Code | Alternative document type code (v2) | `eai_taxonomy.document_type_v2.secondary.code` |
| Secondary Label | Alternative document type label (v2) | `eai_taxonomy.document_type_v2.secondary.label` |

**Selected Values:**
| Code | Label | Examples |
|------|-------|----------|
| `3` | Academic Writing | Research papers, abstracts |
| `7` | Creative Writing | Song lyrics, novel excerpts |
| `8` | Documentation | API docs, README files |
| `10` | Knowledge Article | Wikipedia, Britannica |
| `14` | News Article | Newspaper articles, CNN content |
| `18` | Q&A Forum | Quora, Stack Exchange |
| `23` | Tutorial | Cooking recipes, WikiHow pages |
| `25` | Other/Unclassified | Documents that resist classification |

### Extraction Artifacts
Assessment of technical extraction quality, identifying issues introduced by HTML-to-text conversion:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main extraction artifact code | `eai_taxonomy.extraction_artifacts.primary.code` |
| Primary Label | Main extraction artifact label | `eai_taxonomy.extraction_artifacts.primary.label` |
| Secondary Code | Alternative extraction artifact code | `eai_taxonomy.extraction_artifacts.secondary.code` |
| Secondary Label | Alternative extraction artifact label | `eai_taxonomy.extraction_artifacts.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `0` | No Artifacts | Clean text with no leftover HTML or irrelevant elements |
| `1` | Leftover HTML | HTML/code artifacts remaining after extraction |
| `2` | Text Extraction Errors | Broken math expressions, encoding errors, improperly parsed tables |
| `3` | Irrelevant Content | Headers, footers, nav menus extracted by mistake |
| `4` | Indeterminate | Insufficient content to judge |

### Missing Content
Assessment of content completeness and extraction success:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main missing content code | `eai_taxonomy.missing_content.primary.code` |
| Primary Label | Main missing content label | `eai_taxonomy.missing_content.primary.label` |
| Secondary Code | Alternative missing content code | `eai_taxonomy.missing_content.secondary.code` |
| Secondary Label | Alternative missing content label | `eai_taxonomy.missing_content.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `0` | No Missing Content | Complete and coherent text |
| `1` | Truncated Snippets | Obvious "...", incomplete paragraphs, cut-off text |
| `2` | Click Here References | "Download here", "Click here" without linked content |
| `3` | Incoherent Flow | Unreadable or illogical flow due to missing context |
| `4` | Missing Images or Figures | Placeholders or references to missing visual content |
| `5` | Missing Referenced Data | References to absent tables/datasets (e.g., "See Table 3") |
| `6` | Indeterminate | Insufficient content to judge |

</details>
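The two extraction-quality fields combine naturally into a cleanliness filter. A minimal sketch, assuming the nesting implied by the Path columns above (code `0` means clean in both fields):

```python
def is_clean(doc: dict) -> bool:
    """Keep documents with no extraction artifacts (code 0)
    and no missing content (code 0)."""
    tax = doc["eai_taxonomy"]
    return (
        tax["extraction_artifacts"]["primary"]["code"] == 0
        and tax["missing_content"]["primary"]["code"] == 0
    )


doc = {"eai_taxonomy": {
    "extraction_artifacts": {"primary": {"code": 0}},
    "missing_content": {"primary": {"code": 1}},  # truncated snippets
}}
print(is_clean(doc))  # False
```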

<details>
<summary><strong>Content Quality Dimensions</strong></summary>

Quality assessment inspired by the NaturalReasoning and FineWeb efforts to categorize web data by information sophistication.

### Reasoning Depth
Assesses the complexity and sophistication of logical reasoning in the document:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main reasoning depth code | `eai_taxonomy.reasoning_depth.primary.code` |
| Primary Label | Main reasoning depth label | `eai_taxonomy.reasoning_depth.primary.label` |
| Secondary Code | Alternative reasoning depth code | `eai_taxonomy.reasoning_depth.secondary.code` |
| Secondary Label | Alternative reasoning depth label | `eai_taxonomy.reasoning_depth.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | No Reasoning | Facts present but no evidence of reasoning |
| `2` | Basic Reasoning | Basic analysis with minimal explanation and summarization |
| `3` | Intermediate Reasoning | Some logical steps connecting ideas and structured thinking |
| `4` | Advanced Reasoning | Multi-step reasoning and thorough analysis with well-developed explanations |
| `5` | Exceptional Reasoning | Novel abstractions, theoretical frameworks, long chains of thought, original insights, or proofs |
| `6` | Indeterminate | Insufficient context to judge |

### Technical Correctness
Evaluates the accuracy and precision of technical information:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main technical correctness code | `eai_taxonomy.technical_correctness.primary.code` |
| Primary Label | Main technical correctness label | `eai_taxonomy.technical_correctness.primary.label` |
| Secondary Code | Alternative technical correctness code | `eai_taxonomy.technical_correctness.secondary.code` |
| Secondary Label | Alternative technical correctness label | `eai_taxonomy.technical_correctness.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | Technically Flawed | Significant errors undermining content validity |
| `2` | Partially Correct | Some correctness but contains flaws, omissions, or errors |
| `3` | Mostly Correct | Technically correct with minor flaws or incomplete explanations |
| `4` | Highly Correct | High technical correctness with precise definitions and clear explanations |
| `5` | Exceptionally Correct | Exceptional technical correctness with formal proofs and flawless content |
| `6` | Not Applicable/Indeterminate | No technical content or insufficient context |

### Education Level
Assesses the educational background required to comprehend the content:

| Component | Description | Path |
|-----------|-------------|------|
| Primary Code | Main education level code | `eai_taxonomy.education_level.primary.code` |
| Primary Label | Main education level label | `eai_taxonomy.education_level.primary.label` |
| Secondary Code | Alternative education level code | `eai_taxonomy.education_level.secondary.code` |
| Secondary Label | Alternative education level label | `eai_taxonomy.education_level.secondary.label` |

**Possible Values:**
| Code | Label | Description |
|------|-------|-------------|
| `-1` | Abstain | Unable to determine |
| `1` | General Audience | Accessible to anyone with basic literacy; simple terms |
| `2` | High School Level | Requires high school education; specialized terminology explained for non-experts |
| `3` | Undergraduate Level | Requires college education; uses specialized terminology and assumes background knowledge |
| `4` | Graduate/Expert Level | Requires graduate education or domain expertise; assumes deep background knowledge |
| `5` | Indeterminate | Insufficient content to judge educational level |

</details>
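These ordinal codes support range filters, but note that the "Indeterminate" codes sit at the top of each scale and must be excluded explicitly. A sketch selecting substantive, advanced documents (field nesting assumed from the Path columns; thresholds are illustrative):

```python
def is_advanced(doc: dict) -> bool:
    """Keep documents whose reasoning depth is Intermediate to Exceptional
    (codes 3-5; 6 is Indeterminate) and whose education level is
    Undergraduate or Graduate/Expert (codes 3-4; 5 is Indeterminate)."""
    tax = doc["eai_taxonomy"]
    depth = tax["reasoning_depth"]["primary"]["code"]
    level = tax["education_level"]["primary"]["code"]
    return 3 <= depth <= 5 and 3 <= level <= 4


doc = {"eai_taxonomy": {
    "reasoning_depth": {"primary": {"code": 4}},
    "education_level": {"primary": {"code": 3}},
}}
print(is_advanced(doc))  # True
```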

<details>
<summary><strong>Schema Structure</strong></summary>

## Metadata Structure

The `metadata` field contains a nested structure with web archive information:

| Field | Type | Description | Path |
|-------|------|-------------|------|
| **URL Information** | | | |
| URL | `String` | Original URL of the document | `metadata.url` |
| Source Domain | `String` | Domain name of the source | `metadata.source_domain` |
| Snapshot ID | `String` | Identifier for the web archive snapshot | `metadata.snapshot_id` |
| **WARC Metadata** | | WARC (Web ARChive) format metadata | |
| Content Length | `String` | Size of the content | `metadata.warc_metadata.Content-Length` |
| Content Type | `String` | MIME type of the content | `metadata.warc_metadata.Content-Type` |
| Block Digest | `String` | Checksum of the WARC block | `metadata.warc_metadata.WARC-Block-Digest` |
| Concurrent To | `String` | Related WARC records | `metadata.warc_metadata.WARC-Concurrent-To` |
| Date | `String` | Timestamp of the crawl | `metadata.warc_metadata.WARC-Date` |
| IP Address | `String` | Source server IP address | `metadata.warc_metadata.WARC-IP-Address` |
| Payload Type | `String` | Identified content type | `metadata.warc_metadata.WARC-Identified-Payload-Type` |
| Payload Digest | `String` | Checksum of the payload | `metadata.warc_metadata.WARC-Payload-Digest` |
| Record ID | `String` | Unique WARC record identifier | `metadata.warc_metadata.WARC-Record-ID` |
| Target URI | `String` | Original target URL | `metadata.warc_metadata.WARC-Target-URI` |
| Truncated | `String` | Truncation status | `metadata.warc_metadata.WARC-Truncated` |
| Type | `String` | WARC record type | `metadata.warc_metadata.WARC-Type` |
| Warcinfo ID | `String` | Associated warcinfo record | `metadata.warc_metadata.WARC-Warcinfo-ID` |
| **Additional Info** | | | |
| WARC Info | `String` | Additional WARC information | `metadata.warc_info` |

## Text Structure Information

| Field | Type | Description | Path |
|-------|------|-------------|------|
| Line Start Indices | `List[Int32]` | Starting index of each line | `line_start_n_end_idx.line_start_idx` |
| Line End Indices | `List[Int32]` | Ending index of each line | `line_start_n_end_idx.line_end_idx` |

</details>
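The line-index arrays let you slice a record's `text` back into lines without re-splitting it. A small sketch with a toy string, assuming the end indices are exclusive slice bounds (verify this convention against real records):

```python
# Toy record mimicking the text plus line_start_n_end_idx fields.
text = "alpha\nbeta\ngamma"
line_idx = {"line_start_idx": [0, 6, 11], "line_end_idx": [5, 10, 16]}

# Recover each line by slicing the original text with the stored offsets.
lines = [
    text[start:end]
    for start, end in zip(line_idx["line_start_idx"], line_idx["line_end_idx"])
]
print(lines)  # ['alpha', 'beta', 'gamma']
```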

<details>
<summary><strong>Quality Signals</strong></summary>

The dataset includes two comprehensive quality-assessment frameworks:

## Red Pajama v2 Quality Metrics

Text quality indicators derived from the Red Pajama v2 filtering pipeline:

### Content Structure Metrics
| Metric | Description | Path |
|--------|-------------|------|
| Original Length | Original document length | `quality_signals.red_pajama_v2.ccnet_original_length` |
| Original Lines | Number of lines in the original document | `quality_signals.red_pajama_v2.ccnet_original_nlines` |
| Sentence Count | Total sentence count | `quality_signals.red_pajama_v2.rps_doc_num_sentences` |
| Word Count | Total word count | `quality_signals.red_pajama_v2.rps_doc_word_count` |
| Mean Word Length | Average word length | `quality_signals.red_pajama_v2.rps_doc_mean_word_length` |

### Language Quality Metrics
| Metric | Description | Path |
|--------|-------------|------|
| Stop Word Fraction | Proportion of stop words | `quality_signals.red_pajama_v2.rps_doc_stop_word_fraction` |
| Unique Words Fraction | Fraction of unique words | `quality_signals.red_pajama_v2.rps_doc_frac_unique_words` |
| All Caps Words | Fraction of words in all capitals | `quality_signals.red_pajama_v2.rps_doc_frac_all_caps_words` |
| Non-Alphabetic Words | Fraction of non-alphabetic words | `quality_signals.red_pajama_v2.rps_doc_frac_no_alph_words` |
| Unigram Entropy | Entropy of the word distribution | `quality_signals.red_pajama_v2.rps_doc_unigram_entropy` |

### Content Pattern Analysis
| Metric | Description | Path |
|--------|-------------|------|
| Curly Bracket Density | Curly bracket density (code indicator) | `quality_signals.red_pajama_v2.rps_doc_curly_bracket` |
| Symbol-to-Word Ratio | Symbol-to-word ratio | `quality_signals.red_pajama_v2.rps_doc_symbol_to_word_ratio` |
| Ellipsis Line Endings | Fraction of lines ending with an ellipsis | `quality_signals.red_pajama_v2.rps_doc_frac_lines_end_with_ellipsis` |
| Lorem Ipsum Detection | Lorem ipsum text detection | `quality_signals.red_pajama_v2.rps_doc_lorem_ipsum` |
| Offensive Content | Potentially offensive content detection | `quality_signals.red_pajama_v2.rps_doc_ldnoobw_words` |
| UT1 Blacklist | UT1 blacklist filtering score | `quality_signals.red_pajama_v2.rps_doc_ut1_blacklist` |

### Duplication Detection
| Metric | Description | Path |
|--------|-------------|------|
| 5-gram Duplication | Character-level duplication for 5-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_5grams` |
| 6-gram Duplication | Character-level duplication for 6-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_6grams` |
| 7-gram Duplication | Character-level duplication for 7-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_7grams` |
| 8-gram Duplication | Character-level duplication for 8-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_8grams` |
| 9-gram Duplication | Character-level duplication for 9-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_9grams` |
| 10-gram Duplication | Character-level duplication for 10-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_10grams` |
| Top 2-gram Coverage | Most frequent 2-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_2gram` |
| Top 3-gram Coverage | Most frequent 3-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_3gram` |
| Top 4-gram Coverage | Most frequent 4-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_4gram` |

### Domain Importance Scores
| Metric | Description | Path |
|--------|-------------|------|
| Books Importance | Similarity to book content | `quality_signals.red_pajama_v2.rps_doc_books_importance` |
| Books Importance (Length Corrected) | Length-corrected books similarity | `quality_signals.red_pajama_v2.rps_doc_books_importance_length_correction` |
| OpenWebText Importance | Similarity to OpenWebText | `quality_signals.red_pajama_v2.rps_doc_openwebtext_importance` |
| OpenWebText Importance (Length Corrected) | Length-corrected OpenWebText similarity | `quality_signals.red_pajama_v2.rps_doc_openwebtext_importance_length_correction` |
| Wikipedia Importance | Similarity to Wikipedia | `quality_signals.red_pajama_v2.rps_doc_wikipedia_importance` |
| Wikipedia Importance (Length Corrected) | Length-corrected Wikipedia similarity | `quality_signals.red_pajama_v2.rps_doc_wikipedia_importance_length_correction` |

## FastText Classification Scores

Domain and content-type classification probabilities:

| Metric | Description | Path |
|--------|-------------|------|
| DCLM Score | DataComp-LM classifier score | `quality_signals.fasttext.dclm` |
| English Confidence | English language confidence | `quality_signals.fasttext.english` |
| Educational Content | Educational content approximation | `quality_signals.fasttext.fineweb_edu_approx` |
| General Math | General mathematics content | `quality_signals.fasttext.eai_general_math` |
| Web Math | Web-based mathematics content | `quality_signals.fasttext.eai_open_web_math` |
| Code Content | Code content detection | `quality_signals.fasttext.eai_web_code` |

</details>
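These signals are designed for threshold-style filtering. A sketch with illustrative thresholds (tune them for your own use case; the field names follow the Path columns above, with the `quality_signals.` prefix stripped because the helper receives that sub-record directly):

```python
def passes_quality(signals: dict) -> bool:
    """Example filter: reasonably long, lexically diverse,
    confidently English documents. Thresholds are illustrative."""
    rp = signals["red_pajama_v2"]
    ft = signals["fasttext"]
    return (
        rp["rps_doc_word_count"] >= 50
        and rp["rps_doc_frac_unique_words"] >= 0.3
        and ft["english"] >= 0.8
    )


signals = {
    "red_pajama_v2": {"rps_doc_word_count": 120, "rps_doc_frac_unique_words": 0.55},
    "fasttext": {"english": 0.97},
}
print(passes_quality(signals))  # True
```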

## How to Load the Dataset

This section shows how to load the `Research-EAI/eai-taxonomy-math-w-fm` dataset with different Python libraries and frameworks.

### Using Hugging Face Datasets (Standard Method)

The simplest way to load the dataset is with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("Research-EAI/eai-taxonomy-math-w-fm")

# View dataset structure
print(dataset)
print(f"Number of examples: {len(dataset['train'])}")
```

You can also load the dataset in streaming mode to avoid downloading it all at once:

```python
from datasets import load_dataset

# Load in streaming mode
dataset = load_dataset("Research-EAI/eai-taxonomy-math-w-fm", streaming=True)
data_stream = dataset["train"]

# Iterate through examples
for example in data_stream.take(5):
    print(example)
```

### Using PySpark

For large-scale distributed processing, load the dataset with PySpark and the `pyspark_huggingface` library:

```python
# First install the required library:
# pip install pyspark_huggingface

import pyspark_huggingface
from pyspark.sql import SparkSession

# Initialize Spark session
spark = SparkSession.builder.appName("EAI-Taxonomy-Math").getOrCreate()

# Load the dataset using the "huggingface" data source
df = spark.read.format("huggingface").load("Research-EAI/eai-taxonomy-math-w-fm")

# Basic dataset exploration
print(f"Dataset shape: {df.count()} rows, {len(df.columns)} columns")
df.show(10)
df.printSchema()

# Load only specific columns for efficiency
df_subset = (
    spark.read.format("huggingface")
    .option("columns", '["column1", "column2"]')  # Replace with actual column names
    .load("Research-EAI/eai-taxonomy-math-w-fm")
)

# Run SQL queries on the dataset
df.createOrReplaceTempView("eai_math_dataset")
result = spark.sql("""
    SELECT COUNT(*) AS total_examples
    FROM eai_math_dataset
""")
result.show()
```

### Using Daft

Daft is a modern DataFrame library optimized for machine learning workloads. You can load the dataset directly from Hugging Face:

```python
import daft

# Load the entire dataset
df = daft.read_parquet("hf://datasets/Research-EAI/eai-taxonomy-math-w-fm")

# Basic exploration
print("Dataset schema:")
print(df.schema())

print("First 5 rows:")
df.show(5)
```

If you need to access private datasets, authenticate with your Hugging Face token via an `IOConfig`:

```python
import daft
import os

# Pass the Hugging Face token as a bearer token for hf:// reads
io_config = daft.io.IOConfig(
    http=daft.io.HTTPConfig(bearer_token=os.environ["HF_TOKEN"])
)

df = daft.read_parquet(
    "hf://datasets/Research-EAI/eai-taxonomy-math-w-fm",
    io_config=io_config,
)
```

### Installation Requirements

Make sure you have the required libraries installed:

```bash
# For Hugging Face datasets
pip install datasets

# For PySpark with Hugging Face integration
pip install pyspark_huggingface

# For Daft
pip install daft
```

## 🎓 Citation

If you use this dataset, please cite the Essential-Web paper:

```bibtex
@article{essentialweb2025,
  title={Essential-Web: 24T tokens of organized web data},
  author={[Authors]},
  year={2025}
}
```