---
license: apache-2.0
---
# 🏷️ EAI-Distill-0.5b
[πŸ† Website](https://www.essential.ai/)  |  [πŸ–₯️ Code](https://github.com/Essential-AI/eai-taxonomy)  |  [πŸ“– Paper](https://huggingface.co/papers/2506.14111)

## 📋 Model Description

EAI-Distill-0.5b is a fine-tuned version of Qwen2.5-0.5B-Instruct designed for document classification across 12 taxonomic categories. This model is optimized for high-throughput classification of web documents and produces structured metadata for large-scale dataset curation.

The model classifies documents across the following dimensions:
- **📚 Free Decimal Correspondence (FDC)**: Subject matter classification based on the Dewey Decimal System
- **🧠 Bloom's Taxonomy**: Cognitive process (Remember/Understand/Apply/Analyze/Evaluate/Create) and knowledge domain (Factual/Conceptual/Procedural/Metacognitive)
- **📄 Document Type**: Web page categorization (News, Academic, Reference, Code, Social, etc.)
- **🔍 Content Quality**: Extraction artifacts, missing content detection
- **🎓 Educational Metadata**: Reasoning depth, technical correctness, educational level

## 🚀 Training Details

- **🤖 Base Model**: Qwen2.5-0.5B-Instruct
- **📊 Training Data**: 82B synthetic tokens generated by Qwen2.5-32B-Instruct (teacher model) on 104M Common Crawl documents
- **⚙️ Optimizer**: AdamW (β₁=0.9, β₂=0.95, weight_decay=0.1)
- **📈 Learning Rate**: 1×10⁻⁴ with linear warmup (2B tokens), cosine decay to 1×10⁻⁵, then linear anneal to 0
- **📦 Batch Size**: 2M tokens
- **📏 Sequence Length**: 16,384 tokens
- **💻 Hardware**: Trained on AMD MI300x GPUs
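
The schedule above (linear warmup, cosine decay, final linear anneal) can be sketched as a token-count function. This is an illustrative reconstruction, not the training code; the length of the final anneal phase (`anneal`) is an assumption, since the card does not state when the anneal begins.

```python
import math

def lr_schedule(tokens, peak=1e-4, floor=1e-5,
                warmup=2e9, total=82e9, anneal=8e9):
    """Learning rate as a function of tokens seen.

    Phases: linear warmup 0 -> peak over `warmup` tokens,
    cosine decay peak -> floor, then linear anneal floor -> 0
    over the final `anneal` tokens (anneal length is an assumption).
    """
    decay_end = total - anneal
    if tokens < warmup:                      # linear warmup to peak
        return peak * tokens / warmup
    if tokens < decay_end:                   # cosine decay peak -> floor
        t = (tokens - warmup) / (decay_end - warmup)
        return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * t))
    if tokens < total:                       # linear anneal floor -> 0
        return floor * (total - tokens) / anneal
    return 0.0
```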

## 📊 Performance

The model achieves an average Cohen's κ agreement of 0.71-0.74 with our golden annotators, GPT-4o and Claude 3.5 Sonnet, on held-out evaluation sets, which is within 3% of its teacher model Qwen2.5-32B-Instruct while being 64× smaller.
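
For reference, Cohen's κ measures agreement between two annotators corrected for chance. A minimal sketch (not our evaluation code) for two label sequences:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)   # agreement expected by chance
    return (po - pe) / (1 - pe)
```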

## 💻 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import random

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("EssentialAI/EAI-Distill-0.5b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("EssentialAI/EAI-Distill-0.5b")

def chunk_text(text, max_char_per_doc=30000):
    """Shorten long documents to [beginning]/[middle]/[end] excerpts."""
    if len(text) <= max_char_per_doc:
        return text

    chunk_size = max_char_per_doc // 3
    start = text[:chunk_size]
    end = text[-chunk_size:]

    # Sample a random window from the interior of the document
    middle_start = chunk_size
    middle_end = len(text) - chunk_size
    mid_point = random.randint(middle_start + chunk_size // 2, middle_end - chunk_size // 2)
    middle = text[mid_point - chunk_size // 2 : mid_point + chunk_size // 2]

    return f"[beginning]\n{start}\n[middle]\n{middle}\n[end]\n{end}"

def classify_document(text):
    chunked_text = chunk_text(text)

    messages = [
        {"role": "system", "content": "taxonomy"},
        {"role": "user", "content": chunked_text},
    ]

    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage
document_text = "Your document content here..."
classification = classify_document(document_text)
print(classification)
```

## 📤 Output Format

The model outputs classifications in a condensed format:
```
{FDC primary},{FDC secondary or skip}
{Bloom cognitive process primary (1-6)},{Bloom cognitive process secondary (1-6) or skip}
{Bloom knowledge domain primary (1-4)},{Bloom knowledge domain secondary (1-4) or skip}
{Document type v1 primary (1-17)},{Document type v1 secondary (1-17) or skip}
{Extraction artifacts primary (0-4)},{Extraction artifacts secondary (0-4) or skip}
{Missing content primary (0-6)},{Missing content secondary (0-6) or skip}
{Document type v2 primary (1-25)},{Document type v2 secondary (1-25) or skip}
{Reasoning depth primary (1-6)},{Reasoning depth secondary (1-6) or skip}
{Technical correctness primary (1-6)},{Technical correctness secondary (1-6) or skip}
{Educational level primary (1-5)},{Educational level secondary (1-5) or skip}
```
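
The ten output lines can be turned into structured metadata with a small parser. This is an illustrative sketch: the field names below are our own shorthand (not defined by the model), and `skip` marks an absent secondary label.

```python
# Shorthand field names for the ten output lines, in order (illustrative).
FIELDS = [
    "fdc", "bloom_cognitive_process", "bloom_knowledge_domain",
    "document_type_v1", "extraction_artifacts", "missing_content",
    "document_type_v2", "reasoning_depth", "technical_correctness",
    "educational_level",
]

def parse_classification(raw):
    """Parse the condensed 10-line output into a dict per dimension."""
    parsed = {}
    for field, line in zip(FIELDS, raw.strip().splitlines()):
        primary, _, secondary = line.partition(",")
        secondary = secondary.strip()
        parsed[field] = {
            "primary": primary.strip(),
            "secondary": None if secondary == "skip" else secondary,
        }
    return parsed
```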

## 🎯 Intended Use

This model is designed for:
- πŸ—οΈ Large-scale web document classification and metadata generation
- πŸ”§ Dataset curation through taxonomic filtering
- βœ… Content quality assessment for training data preparation
- πŸ“š Educational content analysis and organization

## ⚠️ Limitations

- Optimized for English web documents extracted using resiliparse
- Documents over 30k characters are automatically chunked, which may affect classification accuracy
- Performance may vary on content significantly different from Common Crawl web data
- Classification categories are based on web content patterns and may not generalize to other document types

## πŸ“ Citation

If you use this model, please cite:
```bibtex
@misc{ai2025essentialwebv1024ttokens,
      title={Essential-Web v1.0: 24T tokens of organized web data}, 
      author={Essential AI and : and Andrew Hojel and Michael Pust and Tim Romanski and Yash Vanjani and Ritvik Kapila and Mohit Parmar and Adarsh Chaluvaraju and Alok Tripathy and Anil Thomas and Ashish Tanwer and Darsh J Shah and Ishaan Shah and Karl Stratos and Khoi Nguyen and Kurt Smith and Michael Callahan and Peter Rushton and Philip Monk and Platon Mazarakis and Saad Jamal and Saurabh Srivastava and Somanshu Singla and Ashish Vaswani},
      year={2025},
      eprint={2506.14111},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.14111}, 
}
```