---
license: apache-2.0
---
# EAI-Distill-0.5b

Website | Code | [Paper](https://arxiv.org/abs/2506.14111)

## Model Description
EAI-Distill-0.5b is a fine-tuned version of Qwen2.5-0.5B-Instruct designed for document classification across 12 taxonomic categories. This model is optimized for high-throughput classification of web documents and produces structured metadata for large-scale dataset curation.
The model classifies documents across the following dimensions:
- **Free Decimal Correspondence (FDC)**: Subject matter classification based on the Dewey Decimal System
- **Bloom's Taxonomy**: Cognitive process (Remember/Understand/Apply/Analyze/Evaluate/Create) and knowledge domain (Factual/Conceptual/Procedural/Metacognitive)
- **Document Type**: Web page categorization (News, Academic, Reference, Code, Social, etc.)
- **Content Quality**: Extraction artifacts, missing content detection
- **Educational Metadata**: Reasoning depth, technical correctness, educational level
## Training Details
- **Base Model**: Qwen2.5-0.5B-Instruct
- **Training Data**: 82B synthetic tokens generated by Qwen2.5-32B-Instruct (teacher model) on 104M Common Crawl documents
- **Optimizer**: AdamW (β₁=0.9, β₂=0.95, weight_decay=0.1)
- **Learning Rate**: 1×10⁻⁴ with linear warmup (2B tokens), cosine decay to 1×10⁻⁵, then linear anneal to 0 (see the sketch below)
- **Batch Size**: 2M tokens
- **Sequence Length**: 16,384 tokens
- **Hardware**: Trained on AMD MI300X GPUs
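
For reference, the learning-rate schedule above can be sketched as a function of tokens seen. This is an illustrative sketch only; the length of the final linear-anneal phase is not stated in this card and is assumed here to mirror the 2B-token warmup.

```python
import math

def learning_rate(tokens_seen, total_tokens=82e9, warmup_tokens=2e9,
                  peak_lr=1e-4, min_lr=1e-5, anneal_tokens=2e9):
    """Sketch of the schedule: linear warmup, cosine decay, final linear anneal.

    The anneal_tokens value is an assumption; the card only names the three phases.
    """
    decay_end = total_tokens - anneal_tokens
    if tokens_seen < warmup_tokens:
        # Linear warmup from 0 to the peak learning rate over the first 2B tokens
        return peak_lr * tokens_seen / warmup_tokens
    if tokens_seen < decay_end:
        # Cosine decay from the peak learning rate down to the minimum
        progress = (tokens_seen - warmup_tokens) / (decay_end - warmup_tokens)
        return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
    # Final linear anneal from the minimum learning rate to 0
    remaining = (total_tokens - tokens_seen) / anneal_tokens
    return min_lr * max(remaining, 0.0)
```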
## Performance
The model achieves an average Cohen's κ agreement of 0.71–0.74 with our golden annotators, GPT-4o and Claude 3.5 Sonnet, on held-out evaluation sets. This is within 3% of its teacher model, Qwen2.5-32B-Instruct, while being 64× smaller.
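
Cohen's κ measures agreement between two raters corrected for chance. As a minimal sketch of how agreement on a single taxonomy field could be checked (the labels below are made-up examples, not evaluation data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical primary labels for one taxonomy field (e.g. reasoning depth)
# from the distilled model and from a golden annotator on the same documents.
model_labels  = [3, 2, 4, 3, 1, 5, 2, 3]
golden_labels = [3, 2, 4, 2, 1, 5, 2, 4]

kappa = cohen_kappa_score(model_labels, golden_labels)
print(f"Cohen's kappa: {kappa:.2f}")
```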
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import random

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("EssentialAI/EAI-Distill-0.5b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("EssentialAI/EAI-Distill-0.5b")

def chunk_text(text, max_char_per_doc=30000):
    """Keep short documents intact; sample the beginning, middle, and end of long ones."""
    if len(text) <= max_char_per_doc:
        return text
    chunk_size = max_char_per_doc // 3
    start = text[:chunk_size]
    # Sample a random chunk_size window from the middle of the document
    middle_start = chunk_size
    middle_end = len(text) - chunk_size
    mid_point = random.randint(middle_start + chunk_size // 2, middle_end - chunk_size // 2)
    middle = text[mid_point - chunk_size // 2 : mid_point + chunk_size // 2]
    end = text[-chunk_size:]
    return f"[beginning]\n{start}\n[middle]\n{middle}\n[end]\n{end}"

def classify_document(text):
    chunked_text = chunk_text(text)
    # The model expects the literal system prompt "taxonomy"
    messages = [
        {"role": "system", "content": "taxonomy"},
        {"role": "user", "content": chunked_text},
    ]
    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    # Decode only the newly generated tokens, not the prompt
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example usage
document_text = "Your document content here..."
classification = classify_document(document_text)
print(classification)
```
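
For higher throughput, inputs can also be batched. The sketch below builds on the snippet above; left padding and falling back to the EOS token as the pad token are assumptions, not documented requirements of this model.

```python
def classify_documents(texts, batch_size=8):
    """Batched variant of classify_document (illustrative sketch)."""
    tokenizer.padding_side = "left"  # left-pad so generation continues directly from each prompt
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    results = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        prompts = [
            tokenizer.apply_chat_template(
                [{"role": "system", "content": "taxonomy"},
                 {"role": "user", "content": chunk_text(t)}],
                tokenize=False,
                add_generation_prompt=True,
            )
            for t in batch
        ]
        inputs = tokenizer(prompts, return_tensors="pt", padding=True)
        outputs = model.generate(**inputs, max_new_tokens=100)
        # Strip the (padded) prompts and decode only the generated continuations
        new_tokens = outputs[:, inputs["input_ids"].shape[1]:]
        results.extend(tokenizer.batch_decode(new_tokens, skip_special_tokens=True))
    return results
```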
## Output Format
The model outputs classifications in a condensed format:
```
{FDC primary},{FDC secondary or skip}
{Bloom cognitive process primary (1-6)},{Bloom cognitive process secondary (1-6) or skip}
{Bloom knowledge domain primary (1-4)},{Bloom knowledge domain secondary (1-4) or skip}
{Document type v1 primary (1-17)},{Document type v1 secondary (1-17) or skip}
{Extraction artifacts primary (0-4)},{Extraction artifacts secondary (0-4) or skip}
{Missing content primary (0-6)},{Missing content secondary (0-6) or skip}
{Document type v2 primary (1-25)},{Document type v2 secondary (1-25) or skip}
{Reasoning depth primary (1-6)},{Reasoning depth secondary (1-6) or skip}
{Technical correctness primary (1-6)},{Technical correctness secondary (1-6) or skip}
{Educational level primary (1-5)},{Educational level secondary (1-5) or skip}
```
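
A minimal sketch of parsing this condensed output into labeled fields. The field names and helper function below are illustrative; only the field order follows the template above.

```python
TAXONOMY_FIELDS = [
    "fdc", "bloom_cognitive_process", "bloom_knowledge_domain",
    "document_type_v1", "extraction_artifacts", "missing_content",
    "document_type_v2", "reasoning_depth", "technical_correctness",
    "educational_level",
]

def parse_classification(output: str) -> dict:
    """Map each line of the condensed output to its taxonomy field (illustrative sketch)."""
    parsed = {}
    lines = [line.strip() for line in output.strip().splitlines() if line.strip()]
    for field, line in zip(TAXONOMY_FIELDS, lines):
        primary, _, secondary = line.partition(",")
        parsed[field] = {
            "primary": primary.strip(),
            "secondary": secondary.strip() or None,  # secondary may be "skip" or absent
        }
    return parsed
```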
## Intended Use
This model is designed for:
- Large-scale web document classification and metadata generation
- Dataset curation through taxonomic filtering
- Content quality assessment for training data preparation
- Educational content analysis and organization
## Limitations
- Optimized for English web documents extracted using resiliparse
- Documents over 30k characters are automatically chunked, which may affect classification accuracy
- Performance may vary on content significantly different from Common Crawl web data
- Classification categories are based on web content patterns and may not generalize to other document types
## Citation
If you use this model, please cite:
```bibtex
@misc{ai2025essentialwebv1024ttokens,
      title={Essential-Web v1.0: 24T tokens of organized web data},
      author={Essential AI and Andrew Hojel and Michael Pust and Tim Romanski and Yash Vanjani and Ritvik Kapila and Mohit Parmar and Adarsh Chaluvaraju and Alok Tripathy and Anil Thomas and Ashish Tanwer and Darsh J Shah and Ishaan Shah and Karl Stratos and Khoi Nguyen and Kurt Smith and Michael Callahan and Peter Rushton and Philip Monk and Platon Mazarakis and Saad Jamal and Saurabh Srivastava and Somanshu Singla and Ashish Vaswani},
      year={2025},
      eprint={2506.14111},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.14111},
}
```