Update README.md
README.md CHANGED
@@ -2,13 +2,13 @@
 license: apache-2.0
 ---
 
-# 🔬 EAI-Taxonomy STEM w/ DCLM
+# 🔬 EAI-Taxonomy STEM w/ DCLM (100B sample)
 
 A high-quality STEM dataset curated from web data using taxonomy-based filtering, containing **100 billion tokens** of science, technology, engineering, and mathematics content.
 
 ## 🎯 Dataset Overview
 
-This dataset is part of the [**Essential-Web**](https://huggingface.co/datasets/EssentialAI/essential-web) project, which introduces a new paradigm for dataset curation using expressive metadata and simple semantic filters. Unlike traditional STEM datasets that require complex domain-specific pipelines, our approach leverages a 12-category taxonomy to efficiently identify and extract high-quality STEM content.
+This dataset is part of the [**Essential-Web**](https://huggingface.co/datasets/EssentialAI/essential-web-v1.0) project, which introduces a new paradigm for dataset curation using expressive metadata and simple semantic filters. Unlike traditional STEM datasets that require complex domain-specific pipelines, our approach leverages a 12-category taxonomy to efficiently identify and extract high-quality STEM content.
 
 **🧪 EAI-Taxonomy STEM w/ DCLM** (100B tokens): Documents targeting science, engineering, medical, and computer science content that exhibit reasoning, combined with the DCLM classifier to filter for instruction-dense documents.
 
@@ -425,7 +425,7 @@ Domain and content type classification probabilities:
 
 ## How to Load the Dataset
 
-This section provides examples of how to load the `EssentialAI/eai-taxonomy-stem-w-dclm` dataset using different Python libraries and frameworks.
+This section provides examples of how to load the `EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample` dataset using different Python libraries and frameworks.
 
 ### Using Hugging Face Datasets (Standard Method)
 
@@ -435,7 +435,7 @@ The simplest way to load the dataset is using the Hugging Face `datasets` librar
 from datasets import load_dataset
 
 # Load the entire dataset
-dataset = load_dataset("EssentialAI/eai-taxonomy-stem-w-dclm")
+dataset = load_dataset("EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample")
 
 # View dataset structure
 print(dataset)
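As a quick sanity check on the renamed repo ID, a minimal sketch of loading and peeking at one record; the `train` split name is an assumption here, so confirm it with `print(dataset)`:

```python
from datasets import load_dataset

# Load under the renamed repo ID
dataset = load_dataset("EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample")

# "train" is an assumed split name -- confirm with print(dataset)
print(dataset["train"][0])
```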
@@ -448,7 +448,7 @@ You can also load the dataset in streaming mode to avoid downloading the entire
 from datasets import load_dataset
 
 # Load in streaming mode
-dataset = load_dataset("EssentialAI/eai-taxonomy-stem-w-dclm", streaming=True)
+dataset = load_dataset("EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample", streaming=True)
 data_stream = dataset["train"]
 
 # Iterate through examples
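To make the `# Iterate through examples` step concrete, a bounded-iteration sketch using `itertools.islice` so only a handful of records are fetched (the `train` split name is assumed):

```python
from itertools import islice

from datasets import load_dataset

dataset = load_dataset(
    "EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample", streaming=True
)
data_stream = dataset["train"]  # assumed split name

# Pull only the first 5 examples; streaming fetches data lazily
for example in islice(data_stream, 5):
    print(example.keys())
```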
@@ -471,7 +471,7 @@ from pyspark.sql import SparkSession
 spark = SparkSession.builder.appName("EAI-Taxonomy-STEM-w-DCLM").getOrCreate()
 
 # Load the dataset using the "huggingface" data source
-df = spark.read.format("huggingface").load("EssentialAI/eai-taxonomy-stem-w-dclm")
+df = spark.read.format("huggingface").load("EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample")
 
 # Basic dataset exploration
 print(f"Dataset shape: {df.count()} rows, {len(df.columns)} columns")
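A possible next step after loading, sketched with plain Spark DataFrame operations; `text` is a hypothetical column name, so check `df.printSchema()` for the actual schema first:

```python
# Sketch only: "text" is a hypothetical column name --
# run df.printSchema() to see the real columns
df.select("text").show(5, truncate=80)
```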
@@ -482,7 +482,7 @@ df.printSchema()
 df_subset = (
     spark.read.format("huggingface")
     .option("columns", '["column1", "column2"]')  # Replace with actual column names
-    .load("EssentialAI/eai-taxonomy-stem-w-dclm")
+    .load("EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample")
 )
 
 # Run SQL queries on the dataset
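One way to act on the `# Run SQL queries on the dataset` comment, using standard Spark temp views (`column1` is the placeholder name from the snippet above, and `stem_docs` is an arbitrary view name):

```python
# Register the subset as a temp view and query it with Spark SQL
df_subset.createOrReplaceTempView("stem_docs")
spark.sql("SELECT column1 FROM stem_docs LIMIT 10").show()
```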
@@ -502,7 +502,7 @@ Daft provides a modern DataFrame library optimized for machine learning workload
 import daft
 
 # Load the entire dataset
-df = daft.read_parquet("hf://datasets/EssentialAI/eai-taxonomy-stem-w-dclm")
+df = daft.read_parquet("hf://datasets/EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample")
 
 # Basic exploration
 print("Dataset schema:")
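For a quick peek at the data after the schema print, a one-line sketch; Daft evaluates lazily, so this only materializes a small preview:

```python
# Preview the first few rows without scanning the full dataset
df.show(5)
```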
@@ -519,7 +519,7 @@ import daft
 from daft.io import IOConfig, HTTPConfig
 
 io_config = IOConfig(http=HTTPConfig(bearer_token="your_token"))
-df = daft.read_parquet("hf://datasets/EssentialAI/eai-taxonomy-stem-w-dclm", io_config=io_config)
+df = daft.read_parquet("hf://datasets/EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample", io_config=io_config)
 ```
 
 ### Installation Requirements
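Rather than hard-coding `"your_token"`, a sketch that pulls the token from the conventional `HF_TOKEN` environment variable (assuming it is set in the shell):

```python
import os

import daft
from daft.io import IOConfig, HTTPConfig

# Prefer the HF_TOKEN environment variable over a hard-coded token
io_config = IOConfig(http=HTTPConfig(bearer_token=os.environ["HF_TOKEN"]))
df = daft.read_parquet(
    "hf://datasets/EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample",
    io_config=io_config,
)
```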