Tokenized Level 5 Vital Wikipedia Articles Dataset
This dataset is a modified version of the Level 5 Vital Wikipedia Articles dataset. The primary difference in this version is that the text has been tokenized into sentences to facilitate sentence-level NLP tasks.
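The card does not state which tokenizer produced the sentence segmentation. Purely as a sketch of how such segmentation is commonly done, here is one approach using NLTK's Punkt tokenizer (an illustrative assumption, not the documented build process of this dataset):

import nltk
from nltk.tokenize import sent_tokenize

# sent_tokenize needs the Punkt model; download it once.
nltk.download("punkt")

article = (
    "Wikipedia is a free online encyclopedia. "
    "It is maintained by a community of volunteer editors."
)

# Split the article text into a list of sentence strings.
sentences = sent_tokenize(article)
print(sentences)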
How to Use
You can load the dataset with the Hugging Face datasets library:
from datasets import load_dataset

# Downloads and caches the dataset locally
dataset = load_dataset("michsethowusu/lvl_5_vital_wikipedia_articles_tokenised")

# Print the first example from the "train" split
print(dataset["train"][0])
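Given the dataset's size (about 620 MB of Parquet files and roughly 6.9 million rows; see the statistics below), you may prefer to stream it rather than download everything up front. A minimal sketch using the standard streaming mode of the datasets library:

from datasets import load_dataset

# Stream examples instead of downloading the full dataset first
stream = load_dataset(
    "michsethowusu/lvl_5_vital_wikipedia_articles_tokenised",
    split="train",
    streaming=True,
)

# Peek at the first three examples without materializing anything
for i, example in enumerate(stream):
    print(example)
    if i >= 2:
        break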
Citation
If you use this dataset, please also cite the original dataset:
@dataset{lvl5_vital_wikipedia_articles,
  author    = {AMead10},
  title     = {Level 5 Vital Wikipedia Articles Dataset},
  year      = {2023},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/AMead10/lvl_5_vital_wikipedia_articles}
}
License
This dataset is distributed under the same license as the original dataset; please refer to the original dataset's page for details.
- Source Dataset: AMead10/lvl_5_vital_wikipedia_articles
- Modifications: Tokenized into sentences
- Format: The dataset maintains the same structure as the original, but with sentence-level segmentation.
- Intended Use: Useful for NLP applications such as summarization, sentence classification, and language modeling at the sentence level (see the sketch after this list).
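As an illustration of a sentence-level workflow, the sketch below filters out very short sentences before downstream use, e.g. for language-model pretraining. The column name "text" is an assumption; the card does not document the schema, so inspect one row (or dataset.column_names) before relying on it.

from datasets import load_dataset

dataset = load_dataset(
    "michsethowusu/lvl_5_vital_wikipedia_articles_tokenised",
    split="train",
    streaming=True,
)

# ASSUMPTION: each row stores its sentence in a "text" column.
# Verify with print(next(iter(dataset))) before relying on this.
def long_enough(example):
    return len(example["text"].split()) >= 5

# Keep only sentences with at least five words
filtered = dataset.filter(long_enough)

for i, example in enumerate(filtered):
    print(example["text"])
    if i >= 4:
        break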
Dataset Statistics
- Size of downloaded dataset files: 620 MB
- Size of the auto-converted Parquet files: 620 MB
- Number of rows: 6,905,522