---
license: cc-by-4.0
tags:
- image
- video
- timeseries
- forecasting
- astrophysics
- heliophysics
- scientific-data
- climate
- weather
- space-weather
pretty_name: SDOML-lite
size_categories:
- 100K<n<1M
---

The following is another example showing how to put together a minimal PyTorch dataset and dataloader to train with SDOML-lite.

```python
import torch
from torch.utils.data import IterableDataset, DataLoader

# PyTorch dataset wrapping an iterator over SDOML-lite samples
class SDOMLlite(IterableDataset):
    def __init__(self, dataset_iter):
        self.dataset_iter = dataset_iter

    def __iter__(self):
        for sample in self.dataset_iter:
            # process() converts a raw sample into a (timestamp, image array)
            # pair, as shown in the earlier example
            t, d = process(sample)
            yield t, torch.tensor(d, dtype=torch.float32)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
batch_size = 8

# Create DataLoader
dataset_iter = iter(dataset['train'])
torch_dataset = SDOMLlite(dataset_iter)
loader = DataLoader(torch_dataset, batch_size=batch_size)

# Training loop
num_epochs = 1
for epoch in range(num_epochs):
    for batch in loader:
        times = batch[0]  # Timestamps
        images = batch[1].to(device)  # Images with shape (batch_size, 6, 512, 512)
        print(f"Batch with shape: {images.shape}")
        # ...
        # training code
        # ...
        break
    break
```

## Data Generation and Processing

The SDOML-lite dataset is generated using the pipeline detailed in the [sdoml-lite GitHub repository](https://github.com/oxai4science/sdoml-lite). The download and processing scripts were run in July 2024 using distributed computing resources provided by Google Cloud for FDL-X Heliolab 2024, a public-private partnership AI research initiative with NASA, Google Cloud, NVIDIA, and other leading research organizations.

## Data Splits

This dataset is provided as a collection of daily `.tar` files. No predefined training, validation, or test splits are provided. Users are encouraged to define their own splits according to their specific research requirements, such as by date ranges (e.g., specific years or months for training/validation/testing) or by solar events.
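For example, a simple chronological split over the daily archive files can be sketched as follows. This is a minimal illustration, not part of the dataset tooling; it assumes each file name begins with a `YYYYMMDD` date stamp (e.g. `20230115.tar`), so adapt the parsing to the actual archive layout.

```python
from pathlib import Path

def split_by_year(tar_paths, val_years, test_years):
    """Partition daily .tar files into train/val/test sets by year.

    Assumes each file name starts with a YYYYMMDD date stamp,
    e.g. '20230115.tar'; adjust the parsing if the naming differs.
    """
    splits = {'train': [], 'val': [], 'test': []}
    for p in map(Path, tar_paths):
        year = int(p.stem[:4])  # leading four digits of the date stamp
        if year in test_years:
            splits['test'].append(p)
        elif year in val_years:
            splits['val'].append(p)
        else:
            splits['train'].append(p)
    return splits

# Example: hold out 2022 for validation and 2023 for testing
files = ['20210101.tar', '20220101.tar', '20230101.tar']
splits = split_by_year(files, val_years={2022}, test_years={2023})
```

Splitting by whole years (or other contiguous date ranges) avoids leakage between temporally adjacent, highly correlated solar images, which a random per-sample split would not.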
## Data normalization

The data comes normalized within each image channel such that the pixel values are in the range [0, 1], making it ready for machine learning use out of the box. The HMI source we use is already normalized to the range [0, 1]. We normalize the AIA data based on statistics of the actual AIA data processed during the generation of the dataset, using a two-phase processing pipeline: the first phase computes the data statistics and the second phase applies the normalization.

## A note on data quality

The primary motivation for SDOML-lite is to provide a lightweight dataset suitable for use in machine learning pipelines, for example, as input to models that predict Sun-dependent quantities in domains such as space weather, thermospheric density, or radiation exposure. We believe the dataset is of sufficient quality to serve as input for a broad range of machine learning applications. However, it is not intended for detailed scientific analysis of the HMI or AIA instruments, for which users should consult the original calibrated data products.

## Acknowledgments

This work is supported by NASA under award #80NSSC24M0122 and is the research product of FDL-X Heliolab, a public/private partnership between NASA, Trillium Technologies Inc (trillium.tech) and commercial AI partners Google Cloud, NVIDIA and Pasteur Labs & ISI, developing open science for all Humankind.

## License

[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://choosealicense.com/licenses/cc-by-4.0/)