---
datasets:
- polymathic-ai/supernova_explosion_64
tags:
- physics
---
The models were trained for a fixed budget of 12 hours or 500 epochs, whichever came first. Training was performed on an NVIDIA H100 96GB GPU.
In the time dimension, the context length was set to 4. The batch size was chosen to maximize GPU memory usage. We experimented with 5 different learning rates for each model on each dataset.
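A time context length of 4 means each training sample is a window of 4 consecutive snapshots used to predict the next one. The sketch below illustrates this windowing on a dummy trajectory; the array shapes and the use of NumPy are assumptions for illustration, not the Well's actual data loader.

```python
import numpy as np

# One simulated trajectory: T snapshots of a 64x64 field.
# (Shapes are illustrative; the real datasets may differ.)
T, H, W = 10, 64, 64
trajectory = np.random.rand(T, H, W)

context = 4  # time context length, as in the benchmark setup
# Each input is a window of `context` consecutive snapshots...
inputs = np.stack([trajectory[i:i + context] for i in range(T - context)])
# ...and the target is the snapshot immediately after the window.
targets = trajectory[context:]

print(inputs.shape, targets.shape)  # (6, 4, 64, 64) (6, 64, 64)
```

With `T = 10` snapshots and a context of 4, this yields `T - 4 = 6` input/target pairs per trajectory.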
We report test set results for the model that performs best on the validation set.
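The selection protocol above can be sketched as follows: train one model per candidate learning rate, then keep the run with the lowest validation loss and evaluate it on the test set. This is a minimal illustration only; the learning-rate grid and the dummy loss function are placeholder assumptions, not the benchmark's actual values or training code.

```python
# Hypothetical grid of 5 candidate learning rates (assumed, not the
# benchmark's actual values).
learning_rates = [1e-2, 5e-3, 1e-3, 5e-4, 1e-4]

def train_and_validate(lr: float) -> float:
    """Placeholder for a full training run; returns a validation loss.

    In the real benchmark this would train for 12 hours or 500 epochs,
    whichever comes first, and evaluate on the validation split.
    """
    return abs(lr - 1e-3)  # dummy loss, minimized at lr = 1e-3

# One run per learning rate; the best run on validation is the one
# whose test-set results are reported.
val_losses = {lr: train_and_validate(lr) for lr in learning_rates}
best_lr = min(val_losses, key=val_losses.get)
print(best_lr)  # 0.001
```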
The reported results provide a simple baseline and should not be considered state-of-the-art. We hope the community will build upon them to develop better architectures for PDE surrogate modeling.
For benchmarking on the Well, we used the following parameters.
Below is the list of checkpoints available from training U-Net on the different datasets of the Well.