Inquiry Regarding Deduplication Strategies
I am reaching out to learn more about the deduplication strategies employed in Ultra-FineWeb, particularly regarding temporal dimensions and lookahead-bias prevention. Your insights would be invaluable to our research. Below are my specific questions:
Does Ultra-FineWeb perform deduplication across temporal dimensions? For example, if a webpage with the same URL (or identical content) is crawled in multiple years (e.g., 2013, 2014, 2015), how is this handled?
If deduplication is applied:
Which version is typically retained (e.g., the earliest, latest, or highest-quality snapshot)?
Are there specific tools or metrics used to assess content equivalence over time?
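For concreteness, here is a minimal sketch of the kind of pipeline we have in mind. Everything here is hypothetical on our side, not anything from your pipeline: the record fields `url`, `year`, and `text`, the `keep` policy, and the exact-hash equivalence check (which a near-duplicate method such as MinHash would replace in practice).

```python
import hashlib
from collections import defaultdict

def normalize(text: str) -> str:
    """Crude content normalization: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def dedup_temporal(snapshots, keep="latest"):
    """Keep one snapshot per URL across crawl years.

    `snapshots` is an iterable of dicts with hypothetical keys
    `url`, `year`, and `text`; `keep` selects whether the earliest
    or latest crawl of each URL is retained.
    """
    by_url = defaultdict(list)
    for snap in snapshots:
        by_url[snap["url"]].append(snap)

    kept = []
    for url, versions in by_url.items():
        versions.sort(key=lambda s: s["year"])
        kept.append(versions[-1] if keep == "latest" else versions[0])

    # Exact-content dedup across URLs via a hash of normalized text;
    # a near-duplicate metric (e.g., MinHash) would go here instead.
    seen, unique = set(), []
    for snap in kept:
        digest = hashlib.sha256(normalize(snap["text"]).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(snap)
    return unique
```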
We would greatly appreciate any advice, references, or examples from your own pipeline. Thank you for your time and expertise—please feel free to share only what you are comfortable disclosing.
Looking forward to your insights.
We do not perform any deduplication. We used the first version of FineWeb, without any additional processing, and applied our high-quality (HQ) classifier to filter the Ultra-FineWeb samples.
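As a rough illustration of that filtering step, here is a minimal sketch assuming a fastText-style binary quality classifier; the model path, the `__label__hq` label name, and the 0.5 threshold are hypothetical placeholders, not the actual Ultra-FineWeb implementation.

```python
# Sketch: keep only documents the quality classifier scores highly.
# Assumes a fastText binary model; all names/thresholds are placeholders.
import fasttext

model = fasttext.load_model("hq_classifier.bin")  # hypothetical model file

def is_high_quality(text: str, threshold: float = 0.5) -> bool:
    """Return True if the classifier labels the document HQ above threshold."""
    # fastText's predict expects a single line of text.
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__hq" and probs[0] >= threshold

def filter_samples(samples):
    """Yield only the documents that pass the quality classifier."""
    for doc in samples:
        if is_high_quality(doc["text"]):
            yield doc
```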