Modalities: Tabular, Text
Formats: parquet
Libraries: Datasets, Dask
Were the original PDFs saved?

#2
by staghado - opened

Quickly looking at the dataset, it doesn't look like it contains the original PDF files, only URLs!

FineData org • edited 13 days ago

It contains the text extracted from the PDFs. The actual PDFs would take an extremely large amount of storage. For the non-truncated PDFs, you can fetch them from the Common Crawl index using the offsets if you'd like.
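The offset-based fetch described above can be sketched roughly as follows. This is a minimal sketch, assuming each row records the WARC file, byte offset, and record length of its source; the field names in the usage comment are assumptions, not the dataset's actual schema:

```python
import gzip
import urllib.request

CC_DATA = "https://data.commoncrawl.org/"

def byte_range(offset, length):
    # Inclusive HTTP Range header value covering a single WARC record.
    return f"bytes={offset}-{offset + length - 1}"

def fetch_warc_record(warc_filename, offset, length):
    # Download just the bytes of one record from Common Crawl's data bucket.
    req = urllib.request.Request(
        CC_DATA + warc_filename,
        headers={"Range": byte_range(offset, length)},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        compressed = resp.read()
    # Each WARC record is stored as an independent gzip member,
    # so the fetched range can be decompressed on its own.
    return gzip.decompress(compressed)

# Hypothetical usage -- the column names below are assumptions,
# check the dataset's actual schema before using them:
# raw = fetch_warc_record(row["warc_filename"],
#                         row["warc_record_offset"],
#                         row["warc_record_length"])
# `raw` then holds WARC headers + HTTP headers + the PDF body.
```

The decompressed bytes still include the WARC and HTTP headers, so the PDF body has to be split off after the header section before saving it.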

Most URLs failed to download

FineData org

You should fetch them from Common Crawl (for the non-truncated ones) and not from the URL directly; many are no longer online.

Does anyone have a code snippet to download a specific subset?
Thanks in advance.

Yes @HaithemH , using a streaming dataset:

from datasets import load_dataset
dataset = load_dataset("HuggingFaceFW/finepdfs", name=subset_name, split=split_name, streaming=True)

and then iterate over it.

I mean the PDFs; I was not able to locate the PDFs corresponding to the text in Common Crawl.

I am also looking into downloading them from CC.
What issues are you getting, btw?

The problem is I'm not able to find the right PDFs corresponding to the text within the Common Crawl parquet files.
Could you share some code, if possible?
