
NeurIPS Weather Dataset

Dataset Description and Motivation

The NeurIPS Weather Dataset is a benchmark designed to develop and evaluate robust object detection models for autonomous driving under adverse weather conditions. Safety-critical systems like self-driving cars often struggle when a model trained in clear weather is deployed in drastically different conditions (fog, rain, snow, or night), due to weather-induced domain shifts that degrade detector performance. This dataset addresses that challenge by providing paired real and simulated image data across a variety of difficult weather scenarios. The goal is to facilitate research on domain adaptation and generalization, allowing models to learn invariances to weather changes and maintain high detection accuracy even in poor visibility or unusual conditions.

Key motivations and features include:

  • Robust Object Detection in Adverse Conditions: The dataset was introduced in an IJCNN 2024 paper on all-weather object detection. It serves as a testbed for algorithms aimed at closing the performance gap between normal and harsh conditions. Researchers can quantify how much detection accuracy drops from clear weather to foggy, rainy, night-time, or snowy scenes and devise methods to mitigate this drop (e.g., data augmentation, domain adaptation, or image enhancement).
  • Real-World + Simulated Data Blend: Collecting large-scale real images for every extreme weather is often impractical or unsafe (e.g. heavy rain or snow storms are rare and hazardous to capture). Therefore, this dataset leverages both real photographs and high-fidelity simulation. Real driving scenes (sourced from the BDD100K dataset) are augmented with synthetic weather effects, and complementary simulated scenes from the CARLA simulator provide fully controllable weather scenarios. This combination offers a rich and diverse set of conditions while ensuring ground-truth annotations are available for all images.
  • Domain Shift Benchmark: By organizing data into different weather domains, the dataset enables controlled experiments on domain shift. For example, one can train a detector on one domain (say clear weather) and test on another (like fog or night) to evaluate generalization. The provided data splits (explained below) include standard baseline splits to replicate such scenarios, as well as configurations for augmentation experiments where mixed-weather training is used to improve robustness. Overall, the dataset is meant to drive progress in making object detectors invariant to real-world weather changes.

Dataset Structure

Figure: Dataset directory structure.

The NeurIPS Weather Dataset is structured into two main parts (or “frameworks”): a Real-World Data Framework and a Simulated Data Framework. Each framework contains an Images directory split into a Trainable Set (training images) and an Evaluation Set (validation/testing images), each of which holds one subfolder per weather condition. All images come with corresponding bounding box annotations for objects of interest (vehicles, pedestrians, etc.), stored separately in a Bounding Box Information directory. The high-level organization is outlined below:

  • Real-World Data Framework: This portion consists of real driving images (originally from the BDD100K dataset, a large-scale driving database) that have been augmented to simulate various weather conditions. A Python script bdd100k_weather_augmentation.py is included in the dataset to document the augmentation process applied to the clear-weather source images. Five weather categories are provided in separate subfolders:

    • default – Clear daytime images (baseline real-world conditions without added effects).
    • fog – The same scenes with synthetic fog/haze applied (reduced visibility).
    • night – Images adjusted to low-light/night-time settings (darkened conditions and headlights/lighting effects).
    • rain – Images with rain effects (rain streaks, wet appearance) overlaid.
    • snow – Images with snow effects (snowfall and possibly accumulation) added.

    Every image in the real-world set has one or more annotated bounding boxes for objects such as cars, buses, trucks, pedestrians, cyclists, and traffic lights, following the standard BDD100K labeling schema (10 classes of common road objects). Under Images, the Trainable Set Images and Evaluation Set Images directories each contain one subfolder per weather category, holding the training and evaluation splits respectively. For instance, Real-World Data Framework/Images/Trainable Set Images/fog/ holds training images under fog, and .../Evaluation Set Images/fog/ holds foggy images reserved for evaluation; every other weather condition is split the same way (a short code sketch for navigating this layout appears at the end of this section). This separation ensures that models can be trained and validated on disjoint sets of scenes. The exact file lists used in our experiments are provided (see Data Splits below), but users can also combine or resplit as needed for custom training regimes.

  • Simulated Data Framework: This part contains fully synthetic images generated using the CARLA autonomous driving simulator. CARLA’s built-in weather engine was used (via the carla_weather_augmentation.py script) to render the same virtual environments under different weather and lighting conditions. Four weather settings are included as subfolders:

    • default – Clear weather in the simulation (typically a daytime clear sky scenario).
    • fog – Foggy conditions in the simulator (reduced visibility distance, haze).
    • night – Night-time in the simulation (dark environment, possibly with street lighting or headlights).
    • rain – Rainy weather in CARLA (rainfall and wet road effects).

    (Note: CARLA did not simulate snow in this dataset, so there is no snow category in the simulated branch.) Each simulated image comes with ground-truth bounding boxes and labels for all rendered objects (e.g. vehicles, pedestrians) obtained directly from the simulator’s engine. The object classes correspond closely to the real data classes (car, truck, motorcycle, person, and so on), ensuring compatibility for cross-domain evaluation. The directory structure mirrors the real data: under Images, the Trainable Set Images and Evaluation Set Images directories each contain one subfolder per weather condition. The Bounding Box Information directory for simulated data contains the annotation files (in a similar format to the real data annotations), divided into Trainable Set Labels and Evaluation Set Labels. This simulated set provides a controlled environment for testing algorithms’ ability to transfer learning from synthetic to real, or for using simulation to supplement real training data.

  • Data Splits and Experiments: In addition to the organized image folders, the dataset includes a Data Splits directory with text files listing the image IDs or file names for various experimental configurations. Specifically, under Data Splits/Baseline Experiment/ you will find train.txt, val.txt, and test.txt, which delineate a recommended split of the data for a baseline evaluation (for example, a typical baseline might train on the Real-World/default images and validate on other conditions – the exact usage is described in the paper). Another subdirectory, Data Augmentation Experiment/, contains split files used when training with augmented data (e.g. mixing multiple weather conditions in training). These splits were used in the IJCNN paper to compare different training strategies:

    • Baseline experiment: training on a narrow domain (e.g. clear-only training set) and testing on dissimilar domains (fog, rain, etc.) to quantify the domain gap.
    • Augmentation experiment: training on an expanded training set that includes augmented weather images or combined real+simulated data, and then evaluating on held-out sets to measure robustness gains.

    Researchers can use these provided splits to reproduce the paper’s results or as a starting point for their own experiments. Of course, you are free to ignore these and create custom train/test splits using the raw image folders, but the provided configurations ensure consistency with the benchmark as originally proposed.
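
As a quick orientation to the raw layout, the sketch below walks the real-world image folders with pathlib, pairs a simulated image with its annotation file, and reads the baseline split lists. This is a minimal sketch only: the extraction root name, the image file extension, the per-weather label subfolders, and the one-entry-per-line split format are assumptions about the on-disk conventions, not documented guarantees.

from pathlib import Path

root = Path("neurips-weather-dataset")  # assumed local extraction root

# Count images per weather condition in the real-world branch.
real_images = root / "Real-World Data Framework" / "Images"
for split_dir in ("Trainable Set Images", "Evaluation Set Images"):
    for weather in sorted((real_images / split_dir).iterdir()):
        if weather.is_dir():
            count = sum(1 for _ in weather.glob("*.jpg"))  # image extension assumed
            print(f"{split_dir}/{weather.name}: {count} images")

# Pair one simulated image with its annotation file (naming convention assumed).
sim = root / "Simulated Data Framework"
img = next((sim / "Images" / "Trainable Set Images" / "fog").glob("*.jpg"))
label = sim / "Bounding Box Information" / "Trainable Set Labels" / "fog" / f"{img.stem}.txt"

# Read the baseline split lists (assumed: one image ID or file name per line).
baseline = root / "Data Splits" / "Baseline Experiment"
train_ids = (baseline / "train.txt").read_text().splitlines()
print(len(train_ids), "baseline training entries")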

Using the Dataset

Loading via Hugging Face: The dataset is hosted on Hugging Face Hub, which makes it straightforward to load using the datasets library in Python. Each image sample is packaged with its annotations for convenient access. For example, you can load the dataset as follows:

from datasets import load_dataset

# Load the entire NeurIPS Weather Dataset (all images and annotations)
dataset = load_dataset("neurips-weather-dataset")

This will download the dataset and prepare it for use. By default, the dataset may combine both real and simulated data; you can also load each subset separately if desired (depending on how the dataset is configured on the Hub). For instance:

# Load only the real-world subset
real_data = load_dataset("neurips-weather-dataset", name="real_world")

# Load only the simulated subset
sim_data = load_dataset("neurips-weather-dataset", name="simulated")

(Replace the dataset identifier with the correct namespace if applicable, e.g. "your-username/neurips-weather-dataset" in the code above, depending on the hosting.)

Each subset typically contains a training split and a validation/test split, accessible as real_data['train'], real_data['test'], etc. (or sim_data['validation'], depending on naming). You can iterate through the dataset like a regular PyTorch/TF dataset or convert it to Pandas, etc.
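
If you train with PyTorch, note that detection targets are ragged (each image has a different number of boxes), so PyTorch's default batch collation will fail. A minimal custom collate function along the lines below, assuming the field names listed under Data fields, keeps the annotations as per-image lists:

from torch.utils.data import DataLoader

def detection_collate(batch):
    # Keep ragged per-image annotations as plain lists instead of stacked tensors.
    return {
        "images": [ex["image"] for ex in batch],
        "bboxes": [ex["bboxes"] for ex in batch],
        "labels": [ex["labels"] for ex in batch],
    }

loader = DataLoader(real_data["train"], batch_size=4, shuffle=True, collate_fn=detection_collate)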

Data fields: Each data example is a dictionary with at least the following fields:

  • image: the input image (typically as a PIL image or NumPy array, depending on datasets settings) of a traffic scene.
  • bboxes: the bounding box coordinates for each object in the image (e.g., in [x_min, y_min, x_max, y_max] format, or as normalized coordinates if specified by the loader).
  • labels: the class labels corresponding to each bounding box (e.g., integers or category names like "car", "pedestrian", etc.). The set of possible labels includes common road users and objects (vehicles of various types, pedestrians, traffic signs, etc., matching the BDD100K annotation classes).
  • domain (if provided): which framework the image is from ("real" or "simulated"); if this field is absent, the domain is implied by which subset you loaded.
  • weather: the weather condition category for that image (e.g., "clear", "fog", "night", "rain", "snow"). In the real-world data, "snow" appears only in augmented form; in the simulated data, "snow" is not present.
  • Other metadata: There might be additional info like an image ID, or the original source of the image (especially for real images, an ID referencing the BDD100K source frame).

Using these fields, you can filter or group the data by condition. For example, you could take all fog images (across real and sim) to form a test set for a model, or use the weather label to apply condition-specific preprocessing in your pipeline.
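
For instance, the all-fog test set mentioned above could be assembled as follows (a sketch that assumes both subsets expose the same fields, use a split named "test", and have compatible schemas):

from datasets import concatenate_datasets

fog_real = real_data["test"].filter(lambda ex: ex["weather"] == "fog")
fog_sim = sim_data["test"].filter(lambda ex: ex["weather"] == "fog")
fog_test = concatenate_datasets([fog_real, fog_sim])  # requires matching features
print(len(fog_test), "fog evaluation images")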

Accessing images and labels: If using the datasets library, each dataset[split] is an iterable of examples. For instance:

example = dataset['train'][0]
img = example['image']
boxes = example['bboxes']
classes = example['labels']
print(example['weather'], example['domain'])

This would give you the first training image, its bounding boxes and labels, and print the weather condition and domain of that image. You can then visualize the image with boxes drawn, or feed it into a model. If you prefer to manually handle the data, you can also download the archive from Hugging Face and navigate the folder structure as described above (the folder names themselves indicate the domain and condition).
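
For a quick visualization with Pillow (assuming pixel-coordinate [x_min, y_min, x_max, y_max] boxes, as described under Data fields):

from PIL import ImageDraw

example = dataset["train"][0]
img = example["image"].copy()  # a PIL image, per the data fields above
draw = ImageDraw.Draw(img)
for box, label in zip(example["bboxes"], example["labels"]):
    x_min, y_min, x_max, y_max = box
    draw.rectangle((x_min, y_min, x_max, y_max), outline="red", width=2)
    draw.text((x_min, max(0, y_min - 12)), str(label), fill="red")
img.save("example_with_boxes.png")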

Example Use Cases

This dataset unlocks a variety of research and application possibilities in the field of autonomous driving and computer vision:

  • Weather Robustness Benchmarking: Evaluate how existing object detection models (e.g., YOLO, Faster R-CNN, SSD) trained on standard clear-weather data perform on foggy, rainy, nighttime, or snowy images. The NeurIPS Weather Dataset can be used to benchmark model robustness by reporting metrics (mAP, recall, etc.) separately on each weather condition; a minimal per-condition evaluation sketch is given after this list. This helps identify failure modes: for example, one might find that a detector's performance drops significantly in fog compared to clear weather, highlighting the need for improvement.
  • Domain Adaptation and Generalization: Use the dataset to develop and test domain adaptation techniques. For instance, train a model on the Simulated images and then test it on the Real-World images (cross-domain testing). Since the simulated data is labeled and abundant, one could apply unsupervised domain adaptation to adapt the model from the synthetic domain to the real domain (with weather shifts in both). Conversely, domain generalization methods can be evaluated by training on multiple domains (e.g. mixing real and simulated, or mixing several weather conditions) and checking if the model generalizes to a new unseen condition.
  • Data Augmentation Strategies: The dataset facilitates experiments with data augmentation for robustness. Researchers can try augmenting clear-weather training images with various filters (defocus blur, color jitter, adding artificial rain streaks, etc.) – some of which are similar to the provided augmented set – and measure the impact on detection performance in adverse weather. The provided augmentation experiment split can serve as an example: by including the synthetic fog/rain/snow images in the training set, does the model become more weather-invariant? Users can test techniques like style transfer (making images look like different weather) or GAN-generated weather effects and compare with the baseline results using this dataset.
  • All-Weather Model Development: Train new object detection models explicitly on the union of all conditions to create an all-weather detector. Because the dataset includes a variety of conditions, one can train a single model with images from clear, fog, rain, night (and snow in real) all together. Example use cases include training a robust perception model for an autonomous vehicle that must operate 24/7 in any weather. The real and simulated combination can also be used to expand the diversity – e.g., use real images for normal conditions and simulated images to cover rarer conditions like heavy fog or extreme rain that are not well-represented in existing real datasets.
  • Computer Vision Education and Demos: The clear organization of this dataset makes it a good teaching tool for illustrating the effects of domain shift. Students can visually inspect images across domains – e.g., see how a scene looks in clear vs. foggy conditions – and then run a pre-trained detector to observe failure cases. This can motivate discussions on why certain weather affects the model (e.g., fog reduces contrast, night reduces visible detail) and how multi-domain training can help. Moreover, the simulated data can be used to demonstrate synthetic data generation and its benefits in a simple way.
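
As an illustration of the benchmarking use case above, here is a minimal per-condition evaluation sketch. It uses torchmetrics (an external dependency, not part of this dataset) and assumes a detector callable that returns boxes, scores, and labels tensors for a single image, with integer class labels; none of these details are prescribed by the dataset itself.

import torch
from torchmetrics.detection import MeanAveragePrecision

def per_condition_map(split, model, conditions=("default", "fog", "night", "rain", "snow")):
    # Report mAP separately for each weather condition in a dataset split.
    results = {}
    for cond in conditions:
        subset = split.filter(lambda ex: ex["weather"] == cond)
        if len(subset) == 0:
            continue  # e.g. no snow in the simulated branch
        metric = MeanAveragePrecision()
        for ex in subset:
            # `model` is a hypothetical callable returning
            # {"boxes": (N, 4), "scores": (N,), "labels": (N,)} tensors.
            pred = model(ex["image"])
            target = {
                "boxes": torch.tensor(ex["bboxes"], dtype=torch.float),
                "labels": torch.tensor(ex["labels"]),
            }
            metric.update([pred], [target])
        results[cond] = metric.compute()["map"].item()
    return results

Comparing the per-condition numbers against the clear-weather ("default") score gives a direct measure of the domain gap.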

These are just a few examples. We anticipate that the NeurIPS Weather Dataset will be useful for any project that needs diverse driving images with annotations, especially where robustness to environmental conditions is a concern. Whether you are developing improved sensor fusion (combining camera with radar/LiDAR for bad weather), or trying out the latest domain generalization algorithm, this dataset provides a solid and realistic testbed.

License

Contact and Acknowledgments

For any questions, feedback, or requests related to the NeurIPS Weather Dataset, you can reach out to the maintainers via the Hugging Face discussions on the dataset page or by contacting the authors directly. (You may find contact emails in the paper or the repository; alternatively, opening an Issue/Discussion on Hugging Face is a good way to get a response.)

We hope this dataset enables fruitful research and innovation. If you use it or find it helpful, consider letting the authors know — and if you discover any issues or have suggestions for improvement, please share them! Together, we can advance the state of the art in all-weather, resilient object detection for autonomous systems.

Happy experimenting, and safe driving in all conditions!
