CPathPatchFeature: Pre-extracted WSI Features for Computational Pathology

Dataset Summary

This dataset provides a comprehensive collection of pre-extracted features from Whole Slide Images (WSIs) for various cancer types, designed to facilitate research in computational pathology. The features are extracted using multiple state-of-the-art encoders, offering a rich resource for developing and evaluating Multiple Instance Learning (MIL) models and other deep learning architectures.

The repository contains features for the following public datasets:

  • PANDA: Prostate cANcer graDe Assessment
  • TCGA-BRCA: Breast Cancer in TCGA
  • TCGA-NSCLC: Non-Small Cell Lung Cancer in TCGA
  • TCGA-BLCA: Bladder Cancer in TCGA
  • CAMELYON: Cancer Metastases in Lymph Nodes
  • CPTAC-NSCLC: Non-Small Cell Lung Cancer in CPTAC

Dataset Structure

The features for each WSI dataset are organized into subdirectories. Each subdirectory contains the features extracted by a specific encoder, along with the corresponding patch coordinates.
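
To get a quick view of this layout without downloading anything, the repository file listing can be queried with the huggingface_hub client. The sketch below assumes the repository id Dearcat/CPathPatchFeature (taken from the clone URL further down); the exact subdirectory names may differ from the dataset names listed above.

from huggingface_hub import list_repo_files

# List every file tracked in the dataset repository (metadata only, nothing is downloaded).
files = list_repo_files("Dearcat/CPathPatchFeature", repo_type="dataset")

# Group paths by their top-level directory to see how the WSI datasets are organized.
print(sorted({path.split("/")[0] for path in files if "/" in path}))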

Feature Encoders

The following encoders were used to generate the features:

  • UNI: A general-purpose self-supervised vision encoder for pathology (UNI by Chen et al.).
  • CHIEF: A feature extractor based on self-supervised learning for pathology (CHIEF by Wang et al.).
  • GIGAP: A Giga-Pixel vision model for pathology (GigaPath by Xu et al.).
  • R50: A ResNet-50 model pre-trained on ImageNet.

Some data may not be fully organized yet. If you have specific needs or questions, please feel free to open a discussion in the Community tab.

How to Use

You can load and access the dataset using the Hugging Face datasets library or by cloning the repository with Git LFS.

Using the datasets Library

To load the data, you can use the following Python code:

from datasets import load_dataset

# Load a specific subset (e.g., PANDA)
# Note: you may need to specify the data files manually depending on the configuration.
# Example for a hypothetical configuration named 'panda':
# ds = load_dataset("Dearcat/CPathPatchFeature", name="panda")

# For datasets with this structure, it's often easier to download and access files directly.
# We recommend using Git LFS for a complete download.

Note: Due to the heterogeneous structure (mixed zipped and unzipped files), direct loading with load_dataset might be complex. The recommended approach is to clone the repository.
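
If only a few feature files are needed, they can also be fetched one at a time with huggingface_hub and opened with h5py. The snippet below is a sketch: the example file path and the assumption that each HDF5 file stores feature and coordinate arrays should be checked against the actual repository contents.

from huggingface_hub import hf_hub_download
import h5py

# Download a single HDF5 file from the dataset repository.
local_path = hf_hub_download(
    repo_id="Dearcat/CPathPatchFeature",
    repo_type="dataset",
    filename="camelyon/patches/test_001.h5",  # illustrative path; replace with a file that exists
)

# Print the stored keys; names and shapes will depend on the extraction pipeline.
with h5py.File(local_path, "r") as h5:
    for key, item in h5.items():
        print(key, item)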

Using Git LFS

First, ensure you have Git LFS installed and configured:

git lfs install

Then, clone the dataset repository:

git clone https://huggingface.co/datasets/Dearcat/CPathPatchFeature
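
If a full clone is too large, a single subset can also be fetched with huggingface_hub's snapshot_download. This is a sketch; the "camelyon/*" pattern is only an example of a possible subdirectory name.

from huggingface_hub import snapshot_download

# Download only the files matching a glob pattern instead of the whole repository.
local_dir = snapshot_download(
    repo_id="Dearcat/CPathPatchFeature",
    repo_type="dataset",
    allow_patterns=["camelyon/*"],  # example pattern; adjust to the subset you need
)
print(local_dir)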

Citation

This dataset has been used in the following publications. If you find it useful for your research, please consider citing them:

@misc{tang2025revisitingdatachallengescomputational,
      title={Revisiting Data Challenges of Computational Pathology: A Pack-based Multiple Instance Learning Framework}, 
      author={Wenhao Tang and Heng Fang and Ge Wu and Xiang Li and Ming-Ming Cheng},
      year={2025},
      eprint={2509.20923},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.20923},
}

@misc{tang2025multipleinstancelearningframework,
      title={Multiple Instance Learning Framework with Masked Hard Instance Mining for Gigapixel Histopathology Image Analysis}, 
      author={Wenhao Tang and Sheng Huang and Heng Fang and Fengtao Zhou and Bo Liu and Qingshan Liu},
      year={2025},
      eprint={2509.11526},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.11526},
}

@misc{tang2025revisitingendtoendlearningslidelevel,
      title={Revisiting End-to-End Learning with Slide-level Supervision in Computational Pathology}, 
      author={Wenhao Tang and Rong Qin and Heng Fang and Fengtao Zhou and Hao Chen and Xiang Li and Ming-Ming Cheng},
      year={2025},
      eprint={2506.02408},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.02408},
}