Commit 90dcda1 (verified), committed by meg (HF Staff) · 1 parent: 01108c1

1 of 2 TODOs


Noticed 2 TODOs here. I think one of them means to link to https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1? Not sure if the other does as well.

Files changed (1): README.md (+1 -1)
README.md CHANGED

@@ -544,7 +544,7 @@ We fine-tuned a Bert-like regression model using these annotations, based on [Sn
 The classifier is available at: [https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/ ](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/)
 
 ### Filtering and results
-**Note**: You can find more details about the ablations and results in the FineWeb blog post (TODO).
+**Note**: You can find more details about the ablations and results in [the FineWeb blog post](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1).
 
 We investigated the impact of using different thresholds for the filtering and found that threshold 3 gave the best overall results. Although using a threshold higher than 3 improves performance on knowledge and reasoning intensive benchmarks, it significantly degrades performance on HellaSwag and PIQA.
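The threshold-3 filtering described in the changed section can be sketched as follows. This is a minimal illustration, not code from the dataset card: the `to_int_score` clamp-and-round convention and the `keep_document` helper are assumptions about how the classifier's regression output is turned into an integer score.

```python
def to_int_score(score: float) -> int:
    """Clamp a raw regression score to [0, 5] and round to an integer.

    Assumed convention for turning the classifier's continuous output
    into an integer educational-quality score.
    """
    return int(round(max(0.0, min(score, 5.0))))


def keep_document(score: float, threshold: int = 3) -> bool:
    """Keep a document when its integer score meets the threshold.

    Threshold 3 is the value the ablations found to give the best
    overall results.
    """
    return to_int_score(score) >= threshold


# Example: filter a small batch of hypothetical classifier scores.
scores = [1.2, 2.9, 3.4, 4.8]
kept = [s for s in scores if keep_document(s)]
```

Raising the default `threshold` above 3 would mimic the stricter filtering that, per the ablations, helps knowledge- and reasoning-intensive benchmarks but hurts HellaSwag and PIQA.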