Dataset Viewer issue

#1
by ashraf-ali - opened

The dataset viewer is not working.

Error details:

Error code:   StreamingRowsError
Exception:    FileNotFoundError
Message:      https://huggingface.co/datasets/ashraf-ali/quran-data/resolve/dc85497e7d13d5e65af52e5dda727af8a8435713/data/audio/Imam/Abdullah_Basfar/002090_Abdullah_Basfar_64kbps.wav
Traceback:    Traceback (most recent call last):
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 407, in _info
                  await _file_info(
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 788, in _file_info
                  r = await session.get(url, allow_redirects=ar, **kwargs)
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/aiohttp/client.py", line 560, in _request
                  await resp.start(conn)
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py", line 899, in start
                  message, payload = await protocol.read()  # type: ignore[union-attr]
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/aiohttp/streams.py", line 616, in read
                  await self._waiter
              aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/workers/first_rows/src/first_rows/worker.py", line 484, in compute_first_rows_response
                  rows = get_rows(
                File "/src/workers/first_rows/src/first_rows/worker.py", line 119, in decorator
                  return func(*args, **kwargs)
                File "/src/workers/first_rows/src/first_rows/worker.py", line 175, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 751, in __iter__
                  yield _apply_feature_types(example, self.features, token_per_repo_id=self._token_per_repo_id)
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 635, in _apply_feature_types
                  decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1794, in decode_example
                  return {
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1795, in <dictcomp>
                  column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1262, in decode_nested_example
                  return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/datasets/features/audio.py", line 160, in decode_example
                  array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/datasets/features/audio.py", line 260, in _decode_non_mp3_path_like
                  with xopen(path, "rb", use_auth_token=use_auth_token) as f:
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 469, in xopen
                  file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/core.py", line 135, in open
                  return self.__enter__()
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/core.py", line 103, in __enter__
                  f = self.fs.open(self.path, mode=mode)
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 1106, in open
                  f = self._open(
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 346, in _open
                  size = size or self.info(path, **kwargs)["size"]
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/asyn.py", line 113, in wrapper
                  return sync(self.loop, func, *args, **kwargs)
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/asyn.py", line 98, in sync
                  raise return_result
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/asyn.py", line 53, in _runner
                  result[0] = await coro
                File "/src/workers/first_rows/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 420, in _info
                  raise FileNotFoundError(url) from exc
              FileNotFoundError: https://huggingface.co/datasets/ashraf-ali/quran-data/resolve/dc85497e7d13d5e65af52e5dda727af8a8435713/data/audio/Imam/Abdullah_Basfar/002090_Abdullah_Basfar_64kbps.wav

cc @albertvillanova @lhoestq @severo.

Thanks for reporting, @ashraf-ali.

Please note that a dataset of type "AudioFolder" has these requirements (see https://huggingface.co/docs/datasets/audio_dataset#audiofolder; a minimal example follows after the list):

  • The metadata file must be called either metadata.csv (comma-separated only) or metadata.jsonl
    • You have 2 metadata files instead of 1, neither is named metadata.csv/metadata.jsonl, and they are TSV (tab-separated) rather than CSV (comma-separated), which is not supported yet
  • The metadata file must have a column/field named file_name
    • Yours is called wav_filename instead
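
For concreteness, a layout that satisfies those requirements could look like the sketch below (the file names and the transcription column are placeholders, not values from your repo):

```python
# Expected AudioFolder layout (placeholder file names):
#
#   data/
#   ├── metadata.csv          <- a single, comma-separated metadata file
#   ├── 002090_Abdullah_Basfar_64kbps.wav
#   └── 002091_Abdullah_Basfar_64kbps.wav
#
# metadata.csv must use a `file_name` column, e.g.:
#
#   file_name,transcription
#   002090_Abdullah_Basfar_64kbps.wav,"<verse text>"
#   002091_Abdullah_Basfar_64kbps.wav,"<verse text>"

from datasets import load_dataset

# With that layout in place, the generic audiofolder loader works directly:
ds = load_dataset("audiofolder", data_dir="data")
```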

Thanks @albertvillanova. One suggestion I would make: before the dataset is parsed, the data could be validated with a tool like https://greatexpectations.io/, which would make it easier for end users to understand why their data fails.
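
A rough sketch of that idea, checking the metadata file before pushing the dataset, assuming the classic (pre-1.0) great_expectations Pandas API and a placeholder metadata path:

```python
import great_expectations as ge

# Load the metadata file as a validating DataFrame (classic GE Pandas API).
df = ge.read_csv("data/metadata.csv")

# Encode the structural requirements the viewer relies on as expectations.
df.expect_column_to_exist("file_name")
df.expect_column_values_to_not_be_null("file_name")

# Run all expectations and report whether the metadata passes.
results = df.validate()
print(results.success)
```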

albertvillanova changed discussion status to closed