---
license: cc
task_categories:
- text-classification
- feature-extraction
language:
- en
---
# Text Quality Assessment Dataset

## Overview
This dataset is designed to assess text quality robustly across various domains for NLP and AI applications. It provides a composite quality score based on multiple classifiers, offering a more comprehensive evaluation of text quality beyond educational domains.
## Dataset Details
- Size: 100,000 sentences
- Source: 20,000 sentences from each of 5 different datasets
## Features

The quality score for each text was derived from the following components:
**Text Length:**
- Measured in characters
- Box-Cox transformed
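As a minimal sketch of this step (illustrative code, not the authors' pipeline), the Box-Cox transform can be applied to character counts with `scipy`; it requires strictly positive inputs, which lengths of non-empty texts satisfy:

```python
from scipy.stats import boxcox

# Hypothetical example texts, standing in for the dataset's rows
texts = [
    "A short sentence.",
    "A somewhat longer example sentence for the demo.",
    "Tiny.",
    "A mid-length sentence goes here.",
]
lengths = [len(t) for t in texts]     # length measured in characters
transformed, lmbda = boxcox(lengths)  # lambda fitted by maximum likelihood
```

`boxcox` returns both the transformed values and the fitted lambda, so the same transform can be reapplied to new texts.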
**Fineweb-edu Classifier Score:**
- Raw logits
- Yeo-Johnson transformed
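A sketch of the Yeo-Johnson step, assuming hypothetical logit values (not real classifier output): unlike Box-Cox, Yeo-Johnson also handles zero and negative inputs, which raw logits can take.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

# Made-up raw logits standing in for fineweb-edu classifier output
logits = np.array([-1.3, 0.2, 2.7, 0.9, -0.4]).reshape(-1, 1)

# PowerTransformer fits the Yeo-Johnson lambda and standardizes by default
pt = PowerTransformer(method="yeo-johnson")
transformed = pt.fit_transform(logits)
```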
**NVIDIA Quality Score:**
- Weighted average of quality levels:
  - "Low" (0)
  - "Medium" (0.5)
  - "High" (1)
- Weighted by predicted probabilities (result between 0 and 1)
- Logit transformed
- Yeo-Johnson transformed
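The steps above can be sketched as follows (illustrative code; the class probabilities are made up, not real classifier output):

```python
import numpy as np
from scipy.special import logit
from scipy.stats import yeojohnson

# Hypothetical predicted probabilities for ("Low", "Medium", "High"),
# one row per text
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.3, 0.6],
    [0.2, 0.5, 0.3],
])
levels = np.array([0.0, 0.5, 1.0])   # numeric values of the quality levels

score = probs @ levels               # weighted average, in (0, 1)
transformed, _ = yeojohnson(logit(score))  # logit, then Yeo-Johnson
```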
**Composite Quality Score:**
- First principal component of the fineweb-edu and NVIDIA scores
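A minimal sketch of extracting the first principal component with scikit-learn, assuming made-up transformed score values rather than the released data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Columns: hypothetical fineweb-edu and NVIDIA scores (after transforms)
scores = np.array([
    [ 0.3, -1.1],
    [ 1.2,  0.4],
    [-0.7,  0.9],
    [ 0.1, -0.2],
])
composite = PCA(n_components=1).fit_transform(scores).ravel()
```

Because PCA centers the data, the resulting composite scores are zero-mean; the first component captures the direction of greatest shared variance between the two classifier scores.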
All scores were adjusted for length using linear regression on the transformed text length.
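One common way to perform such a length adjustment (a sketch under the assumption that the residuals of the regression are kept as the adjusted scores; the data here is made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical Box-Cox-transformed lengths and one transformed score
length_t = np.array([0.5, 1.8, -0.3, 1.1]).reshape(-1, 1)
score = np.array([0.2, 1.4, -0.9, 0.6])

reg = LinearRegression().fit(length_t, score)
adjusted = score - reg.predict(length_t)  # length-adjusted residuals
```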
## Key Insights
- Fineweb-edu and NVIDIA scores show weak correlation
- Composite quality score correlates with both individual scores
- Clear quality differences observed across the 5 source datasets
*Figure 1: Correlation between the individual scores (fineweb-edu and NVIDIA) and the composite quality score. Each point represents a single row of text.*

*Figure 2: Distribution of quality scores across the five source datasets, highlighting the quality differences between them.*
## Applications
- Benchmarking text quality across various domains
- Training robust text quality assessment models
- Analyzing dataset quality for diverse NLP tasks
## Limitations
- The scores are derived from existing classifiers and may inherit their biases
- The current definition of quality may not capture all aspects of text quality
## Ethics and Privacy
- No personal information is included in the dataset
- Users should appropriately credit the source datasets when using this compilation