ds.map() blues

#9 opened by eminorhan

I'm just curious whether anybody has actually managed to download the dataset from S3. I'm basically using the code snippet provided on the README page of this repo to download the file contents from S3, with num_proc>1 in ds.map() to speed up the download. It successfully downloads ~90% of the data, but then slows down and eventually grinds to a halt at ~99%, never managing to fetch the full dataset (a truly remarkable real-life example of Zeno's paradox!). I've seen this same ds.map() behavior reported in several other places before (e.g. here and here), so it seems to be a long-standing issue that was never successfully addressed. Have the dataset providers ever tried and succeeded in downloading the dataset from S3 using the code snippet they show on the README page?

I've tried lots of things, but nothing seems to fix this, and it's getting kind of frustrating. In my experience, the issue affects all versions of this dataset.

For posterity: the following script successfully downloads the data:

import boto3
import gzip
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError
from datasets import load_dataset


# anonymous (unsigned) client for the public Software Heritage S3 bucket
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
bucket_name = "softwareheritage"


def download_contents(files):
    # fetch and decompress the blob for each file in the row;
    # mark the whole row as failed if any blob can't be retrieved
    download_success = True
    for file in files:
        key = f"content/{file['blob_id']}"
        try:
            obj = s3.get_object(Bucket=bucket_name, Key=key)
            with gzip.GzipFile(fileobj=obj['Body']) as fin:
                file["text"] = fin.read().decode("utf-8", errors="ignore")
        except ClientError as e:
            if e.response['Error']['Code'] == 'NoSuchKey':
                print(f"File not found: {key}")
            else:
                print(f"Error downloading {key}: {e}")
            file["text"] = ""
            download_success = False
    return {"files": files, "download_success": download_success}


num_proc = 1000  # adjust this number based on your setup
ds = load_dataset("bigcode/the-stack-v2-train-full-ids", split="train", num_proc=num_proc, trust_remote_code=True)
ds = ds.map(lambda row: download_contents(row["files"]), num_proc=num_proc)
ds = ds.filter(lambda x: x['download_success'], num_proc=num_proc)  # drop rows with failed downloads

# print the first example to verify the data
print(ds[0])

# optionally, save the preprocessed data to disk
ds.save_to_disk('LOCAL_PATH', num_shards=3000)
print('Done!')

It still slows down toward the end as described in the previous comment, but the download finishes successfully. Make sure to adjust the number of processes (num_proc) and the optional local save path ('LOCAL_PATH') based on your setup.
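
As a quick sanity check after the run, something like the following should reload the saved shards (a minimal sketch, assuming the same 'LOCAL_PATH' placeholder and the nested row layout produced by the script above):

from datasets import load_from_disk

ds = load_from_disk('LOCAL_PATH')  # same placeholder path passed to save_to_disk above
print(ds)  # number of rows and column names
print(ds[0]['files'][0]['text'][:200])  # peek at the downloaded text of the first file in the first row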
