# HistoPlexer-Ultivue Dataset
## Dataset Summary
The HistoPlexer-Ultivue dataset provides a collection of multimodal histological images for cancer research. It includes whole-slide images (WSIs) of hematoxylin and eosin (H&E) stained tissue, multiplexed immunofluorescence images from Ultivue panels (`immuno8` and `mdsc`), alignment matrices, exclusion masks, and nuclear segmentation outputs for 10 cancer samples from the Tumor Profiler Study. The dataset supports studies in tumor microenvironment analysis, immune profiling, and spatial biology.

It was generated in the study "Histopathology-based Protein Multiplex Generation using Deep Learning". The preprint is available on [medRxiv](https://doi.org/10.1101/2024.01.26.24301803).
## Dataset Description
The dataset includes the following modalities/data for each sample:
### Ultivue Immuno8 Panel (`ultivue_immuno8`)
- Description: Multiplexed immunofluorescence images with 10 channels: DAPI, PD-L1, CD68, CD8a, PD-1, DAPI2, FoxP3, SOX10, CD3, CD4.
- Subfolders:
  - `*Scene-1-stacked*`: Tonsil reference tissue data (generated with each sample as an experiment QC)
  - `*Scene-2-stacked*`: Tumor sample tissue data (most relevant for analysis/downstream tasks)
- Resolution: 0.325 μm/px
- Format: Image files (`.tif` for individual channels; `.afi` for loading the entire panel with all channels)
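For quick inspection, a single channel can be read with standard TIFF tooling. A minimal sketch using `tifffile` (the file path and name are hypothetical, and `.afi` files are typically small XML indexes pointing at the per-channel TIFFs; see the loading notebook referenced below for reading the full panel):

```python
import tifffile

# Read one Immuno8 channel image; path and file name are hypothetical.
# Whole-slide channels are large, so expect multi-GB arrays at full resolution.
channel = tifffile.imread("MAHEFOG/ultivue_immuno8/Scene-2-stacked/CD8a.tif")
print(channel.shape, channel.dtype)  # 2D array at 0.325 μm/px
```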
### Ultivue MDSC Panel (`ultivue_mdsc`)
- Description: Multiplexed immunofluorescence images with 5 channels: DAPI, CD11b, CD14, CD15, HLA-DR.
- Details: Each sample includes a `.czi` file containing both the tonsil reference and the tumor sample tissue side by side in the same image file.
- Resolution: 0.325 μm/px
- Format: `.czi`
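As a rough sketch, the `.czi` files can be read with the `czifile` package (the file name is hypothetical; axis order follows the CZI dimension string, so inspect it before slicing):

```python
import numpy as np
import czifile

# Load the MDSC panel; tonsil reference and tumor tissue sit side by side
with czifile.CziFile("MAHEFOG/ultivue_mdsc/sample.czi") as czi:
    print(czi.axes)        # CZI dimension string; check before indexing
    arr = czi.asarray()
arr = np.squeeze(arr)      # drop singleton dimensions
print(arr.shape)           # expect a channel axis of length 5
```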
### H&E Whole-Slide Images (`HE`)
- Description: High-resolution H&E-stained WSIs from consecutive tissue sections.
- Resolution: 0.23 μm/px
- Format: Image files (e.g., `.ndpi`)
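`.ndpi` slides open natively in OpenSlide. A minimal sketch (the file name is hypothetical):

```python
import openslide

slide = openslide.OpenSlide("MAHEFOG/HE/sample.ndpi")  # file name hypothetical
print(slide.dimensions, slide.level_count)

# Read a 1024x1024 patch at full resolution (level 0, 0.23 μm/px);
# read_region returns RGBA, so convert for downstream RGB pipelines
patch = slide.read_region((10000, 10000), 0, (1024, 1024)).convert("RGB")
```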
### Exclusion Masks (`exclusion_mask`)
- Description: Binary annotations marking low-quality or red regions in H&E images to exclude, preventing false positives in Ultivue images due to marker bleed-through.
- Format: Annotation files (e.g., `.annotations`)
### Alignment Immuno8 to H&E (`alignment_immuno8_HE`)
- Description: Transformation matrices aligning Ultivue `immuno8` images to the corresponding H&E images, generated using the DRMIME algorithm.
- Format: Matrix files (e.g., `.npz`)
### Alignment MDSC to Immuno8 (`alignment_mdsc_immuno8`)
- Description: Transformation matrices aligning Ultivue `mdsc` images to Ultivue `immuno8` images, with instructions to extract the tumor sample while excluding the tonsil reference.
- Format: Matrix files (e.g., `.npz`)
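This dataset card does not document the array keys inside the `.npz` archives, so the following is only a sketch of applying a stored transform: the file name, key layout, and matrix convention are assumptions, and the notebook referenced under "How to load data in Python" shows the intended workflow. The same pattern applies to both alignment folders:

```python
import numpy as np
import cv2

# Inspect the archive first; file name and key layout are assumptions
npz = np.load("MAHEFOG/alignment_immuno8_HE/transform.npz")
print(npz.files)                  # list stored arrays to find the matrix
M = npz[npz.files[0]].astype(np.float32)

# Dummy stand-in for an immuno8 channel; replace with a real image array
moving = np.zeros((2048, 2048), dtype=np.uint16)
out_size = (2048, 2048)           # (width, height) in the target frame

# 2x3 matrices are affine transforms, 3x3 matrices are homographies
if M.shape == (2, 3):
    aligned = cv2.warpAffine(moving, M, out_size)
elif M.shape == (3, 3):
    aligned = cv2.warpPerspective(moving, M, out_size)
```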
### HoverNet Output (`hovernet`)
- Description: Nuclear coordinates in H&E images, generated using the HoVerNet model for nuclear segmentation.
- Format: Coordinate files (e.g., `.csv.gz`)
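The compressed CSVs load directly with pandas (the file name below is hypothetical; inspect `df.columns` on a real file for the coordinate column names):

```python
import pandas as pd

# pandas infers gzip compression from the .csv.gz extension
df = pd.read_csv("MAHEFOG/hovernet/nuclei.csv.gz")
print(df.columns.tolist())      # check the actual column names
print(f"{len(df)} nuclei detected")
```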
## Dataset Structure
The dataset is organized by sample, with each sample (e.g., `MAHEFOG`, `MACEGEJ`, `MAJOFIJ`) containing subfolders for each modality/data. Below is the folder structure:
```
CTPLab-DBE-UniBas/HistoPlexer-Ultivue/
├── MAHEFOG/
│   ├── ultivue_immuno8/
│   │   ├── *Scene-1-stacked*/
│   │   │   ├── *.afi
│   │   │   └── ...
│   │   └── *Scene-2-stacked*/
│   │       ├── *.afi
│   │       └── ...
│   ├── ultivue_mdsc/
│   │   └── *.czi
│   ├── HE/
│   │   └── *.ndpi
│   ├── exclusion_mask/
│   │   └── *.annotations
│   ├── alignment_immuno8_HE/
│   │   └── *.npz
│   ├── alignment_mdsc_immuno8/
│   │   └── *.npz
│   └── hovernet/
│       └── *.csv.gz
├── MACEGEJ/
│   └── [same structure as MAHEFOG]
├── MAJOFIJ/
│   └── [same structure as MAHEFOG]
└── ...
```
## Usage
### Prerequisites
- Install the `huggingface_hub` library: `pip install huggingface_hub`
- Obtain a Hugging Face access token with permissions to access `CTPLab-DBE-UniBas/HistoPlexer-Ultivue`. Set it as an environment variable or pass it directly in the code: `export HF_TOKEN="your_hugging_face_token"`
- Ensure sufficient disk space, as the dataset contains high-resolution images.
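If the token is stored in `HF_TOKEN` as above, the snippets below can authenticate without hard-coding it; for example:

```python
import os
from huggingface_hub import login

# Authenticate using the HF_TOKEN environment variable set in the shell
login(token=os.environ["HF_TOKEN"])
```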
### Download Instructions
Below are Python code snippets for downloading the dataset using different approaches. Selective downloads (by sample or modality) pass `allow_patterns` to `snapshot_download`, which lists the repository contents and fetches only the matching files.
#### Getting the folder structure and samples in the dataset
```python
from huggingface_hub import HfApi, login
from huggingface_hub.utils import HFValidationError, HfHubHTTPError

token = "your_hugging_face_token"  # update as needed
login(token)

dataset_name = "CTPLab-DBE-UniBas/HistoPlexer-Ultivue"

try:
    api = HfApi()
    file_list = api.list_repo_files(repo_id=dataset_name, repo_type="dataset")
    print(f"Found {len(file_list)} files in dataset '{dataset_name}'.")

    # Group files into {sample: {modalities}} from the top two path levels
    samples_dict = {}
    for file in file_list:
        parts = file.split("/")
        if len(parts) >= 2:
            sample, modality = parts[0], parts[1]
            if sample not in samples_dict:
                samples_dict[sample] = set()
            samples_dict[sample].add(modality)

    print("Dataset structure (samples and their modalities):")
    for sample, modalities in samples_dict.items():
        print(f"  Sample: {sample}")
        print(f"    Modalities: {', '.join(sorted(modalities))}")
except (HFValidationError, HfHubHTTPError) as e:
    print(f"Error accessing dataset '{dataset_name}': {e}")
except Exception as e:
    print(f"Unexpected error listing dataset structure: {e}")
```
#### Downloading the Entire Dataset
Download the entire `HistoPlexer-Ultivue` dataset to a local directory.
```python
from huggingface_hub import login, snapshot_download

# Log in to Hugging Face
token = "your_hugging_face_token"  # update as needed
login(token)

# Download the entire dataset
dataset_name = "CTPLab-DBE-UniBas/HistoPlexer-Ultivue"
local_dir = "./dataset_download"
snapshot_download(
    repo_id=dataset_name,
    repo_type="dataset",
    local_dir=local_dir,
)
print(f"Entire dataset downloaded to {local_dir}")
```
#### Downloading by Sample
Download all data for a specific sample (e.g., `MAHEFOG`).
```python
from huggingface_hub import login, snapshot_download

# Log in to Hugging Face
token = "your_hugging_face_token"  # update as needed
login(token)

# Configuration
dataset_name = "CTPLab-DBE-UniBas/HistoPlexer-Ultivue"
sample_name = "MAHEFOG"
local_dir = "./dataset_download"

# Download only the files under this sample's folder
snapshot_download(
    repo_id=dataset_name,
    repo_type="dataset",
    allow_patterns=f"{sample_name}/*",
    local_dir=local_dir,
)
print(f"Sample '{sample_name}' downloaded to {local_dir}")
```
#### Downloading by Modality
Download a specific modality (e.g., `ultivue_immuno8`) for all samples.
```python
from huggingface_hub import HfApi, login, snapshot_download
from huggingface_hub.utils import HFValidationError, HfHubHTTPError

# Log in to Hugging Face
token = "your_hugging_face_token"  # update as needed
login(token)

# Configuration
dataset_name = "CTPLab-DBE-UniBas/HistoPlexer-Ultivue"
modality = "ultivue_immuno8"  # update to the data of interest
local_dir = "./dataset_download"

# Map each sample to its available modalities
api = HfApi()
file_list = api.list_repo_files(repo_id=dataset_name, repo_type="dataset")
print(f"Found {len(file_list)} files in dataset '{dataset_name}'.")
samples_dict = {}
for file in file_list:
    parts = file.split("/")
    if len(parts) >= 2:
        sample, mod = parts[0], parts[1]
        if sample not in samples_dict:
            samples_dict[sample] = set()
        samples_dict[sample].add(mod)

# Verify the modality name and list the samples that contain it
sample_folders = [s for s, mods in samples_dict.items() if modality in mods]
print(f"Samples with modality '{modality}': {sample_folders}")

if sample_folders:
    # Download the modality for each sample
    for sample in sample_folders:
        try:
            # A trailing slash matches all files within the subfolder
            pattern = f"{sample}/{modality}/"
            print(f"Attempting to download {sample}/{modality} with pattern '{pattern}'...")
            snapshot_download(
                repo_id=dataset_name,
                repo_type="dataset",
                allow_patterns=[pattern],
                local_dir=local_dir,
            )
            print(f"Successfully downloaded modality '{modality}' for sample '{sample}' to {local_dir}")
        except (HFValidationError, HfHubHTTPError) as e:
            print(f"Failed to download {sample}/{modality}: {e}")
        except Exception as e:
            print(f"Unexpected error downloading {sample}/{modality}: {e}")
else:
    print(f"No samples found for modality '{modality}'")
```
#### Downloading by Sample and Modality List
Download specific modalities (e.g., `HE`, `ultivue_immuno8`) for specific samples (e.g., `MAHEFOG`, `MACEGEJ`).
```python
from huggingface_hub import HfApi, login, snapshot_download
from huggingface_hub.utils import HFValidationError, HfHubHTTPError

# Log in to Hugging Face
token = "your_hugging_face_token"  # update as needed
login(token)

# Configuration
dataset_name = "CTPLab-DBE-UniBas/HistoPlexer-Ultivue"
modalities = ["HE", "ultivue_immuno8"]
samples = ["MAHEFOG", "MACEGEJ"]
local_dir = "./dataset_download"

# Map each sample to its available modalities
api = HfApi()
file_list = api.list_repo_files(repo_id=dataset_name, repo_type="dataset")
print(f"Found {len(file_list)} files in dataset '{dataset_name}'.")
samples_dict = {}
for file in file_list:
    parts = file.split("/")
    if len(parts) >= 2:
        sample, mod = parts[0], parts[1]
        if sample not in samples_dict:
            samples_dict[sample] = set()
        samples_dict[sample].add(mod)

# Validate the requested samples and modalities
valid_samples = [s for s in samples if s in samples_dict]
valid_modalities = []
for modality in modalities:
    found = any(
        mod.lower() == modality.lower()
        for mods in samples_dict.values()
        for mod in mods
    )
    if found:
        valid_modalities.append(modality)
    else:
        print(f"Warning: Modality '{modality}' not found in any sample.")

if valid_modalities and valid_samples:
    # Download files for the specified samples and modalities
    success = False
    for sample in valid_samples:
        for modality in valid_modalities:
            # Check files for this sample-modality pair
            target_files = [f for f in file_list if f.startswith(f"{sample}/{modality}/")]
            if not target_files:
                print(f"No files found for {sample}/{modality}/")
                continue
            print(f"Found {len(target_files)} files for {sample}/{modality}. First few:")
            for f in target_files[:3]:
                print(f"  - {f}")
            try:
                snapshot_download(
                    repo_id=dataset_name,
                    repo_type="dataset",
                    allow_patterns=[f"{sample}/{modality}/"],
                    local_dir=local_dir,
                )
                print(f"Downloaded modality '{modality}' for sample '{sample}' to {local_dir}")
                success = True
            except (HFValidationError, HfHubHTTPError) as e:
                print(f"Failed to download {sample}/{modality}: {e}")
            except Exception as e:
                print(f"Unexpected error downloading {sample}/{modality}: {e}")
    if success:
        print(f"Completed downloading files for samples {valid_samples} and modalities {valid_modalities} to {local_dir}")
    else:
        print("No files successfully downloaded for the specified samples and modalities.")
else:
    print("No valid samples or modalities to download.")
    print("Available samples:", list(samples_dict.keys()))
    print("Available modalities:", {mod for mods in samples_dict.values() for mod in mods})
```
## How to load data in Python
The multiplexed data requires specific packages to load in Python. Please refer to the Jupyter notebook in our GitHub repository... to see how to load the multiplexed images and use the alignment matrices to align modalities.
## How to cite
If you use this dataset in your research, please cite the following paper:
```bibtex
@article{Andani2024.01.26.24301803,
  author = {Andani, Sonali and Chen, Boqi and Ficek-Pascual, Joanna and Heinke, Simon and Casanova, Ruben and Hild, Bernard and Sobottka, Bettina and Bodenmiller, Bernd and Tumor Profiler Consortium and Koelzer, Viktor H. and R{\"a}tsch, Gunnar},
  title = {Histopathology-based Protein Multiplex Generation using Deep Learning},
  elocation-id = {2024.01.26.24301803},
  year = {2025},
  doi = {10.1101/2024.01.26.24301803},
  publisher = {Cold Spring Harbor Laboratory Press},
  eprint = {https://www.medrxiv.org/content/early/2025/05/28/2024.01.26.24301803.full.pdf},
  journal = {medRxiv}
}
```
## Contact
Sonali Andani, ETH Zurich and CTPLab ([email protected])
## License
The dataset is distributed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).