Description
The transition to GPU-accelerated computing has significantly improved the performance and interactivity of climate simulations. However, the enormous data volumes generated by high-resolution models remain a major bottleneck, particularly due to the limited VRAM capacity of GPUs. Efficient data compression is therefore essential to sustain scalability and performance.
Climate simulation data exhibits strong spatial correlations, making it well-suited for compression techniques that exploit such structure. For checkpointing purposes—saving and restarting simulation states—the compression scheme must be either strictly lossless or tightly error-bounded to prevent numerical divergence and ensure scientific reproducibility. Furthermore, the compression algorithm must be GPU-native, as transferring data to the CPU for encoding and decoding would introduce significant overhead given the scale of the datasets involved.
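To illustrate the error-bound idea only (this is not the project's chosen compressor, and references [1-3] define the actual candidates), the sketch below shows a naive absolute-error-bounded quantization kernel in CUDA, the core primitive used by error-bounded lossy compressors such as those in the SZ family. The field, grid size, and error bound are placeholders for the example; a real compressor adds prediction, entropy coding of the bin indices, and outlier handling.

    // Minimal, illustrative CUDA sketch of absolute-error-bounded quantization.
    // Placeholder data and parameters; not the project's target compressor.
    #include <cstdio>
    #include <cmath>
    #include <vector>
    #include <cuda_runtime.h>

    // Map each value to an integer bin of width 2*errBound; decoding the bin
    // index guarantees |reconstructed - original| <= errBound.
    __global__ void quantize(const float* in, int* bins, float* rec,
                             int n, float errBound)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float width = 2.0f * errBound;
        int bin = __float2int_rn(in[i] / width);  // nearest bin index
        bins[i] = bin;                            // what would be entropy-coded
        rec[i]  = bin * width;                    // decoded (lossy) value
    }

    int main()
    {
        const int n = 1 << 20;
        const float errBound = 1e-3f;             // absolute error bound

        std::vector<float> h(n), hRec(n);
        for (int i = 0; i < n; ++i) h[i] = sinf(0.001f * i);  // smooth test field

        float *dIn, *dRec; int *dBins;
        cudaMalloc(&dIn,  n * sizeof(float));
        cudaMalloc(&dRec, n * sizeof(float));
        cudaMalloc(&dBins, n * sizeof(int));
        cudaMemcpy(dIn, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        quantize<<<(n + 255) / 256, 256>>>(dIn, dBins, dRec, n, errBound);
        cudaMemcpy(hRec.data(), dRec, n * sizeof(float), cudaMemcpyDeviceToHost);

        float maxErr = 0.0f;
        for (int i = 0; i < n; ++i) maxErr = fmaxf(maxErr, fabsf(h[i] - hRec[i]));
        printf("max abs error = %g (bound %g)\n", maxErr, errBound);

        cudaFree(dIn); cudaFree(dRec); cudaFree(dBins);
        return 0;
    }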
Tasks
The objective of this project is to evaluate existing GPU-native compression algorithms [1-3], select the most suitable approach for our datasets, and integrate it into the GPU implementation of the PALM Model System, an advanced meteorological model for turbulence-resolving large-eddy simulations of atmospheric and oceanic boundary layers.
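For the evaluation step, candidate compressors are typically compared by device-side throughput and compression ratio. The sketch below shows only the timing harness (CUDA events around a device-to-device copy that stands in for the compressor call under test); the buffer size is a placeholder, and the copy would be replaced by the actual compression routine being evaluated.

    // Hedged sketch of a throughput-measurement harness for comparing
    // GPU-native compressors; the memcpy is a stand-in for the compressor call.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        const size_t bytes = size_t(1) << 28;   // 256 MiB test buffer (placeholder)
        void *dSrc, *dDst;
        cudaMalloc(&dSrc, bytes);
        cudaMalloc(&dDst, bytes);
        cudaMemset(dSrc, 0, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        // Placeholder for the compression kernel/library call under evaluation.
        cudaMemcpyAsync(dDst, dSrc, bytes, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        double gbps = (double)bytes / (ms * 1e-3) / 1e9;
        printf("processed %.0f MiB in %.3f ms -> %.2f GB/s\n",
               bytes / double(1 << 20), ms, gbps);
        // With a real compressor plugged in, the compression ratio
        // (uncompressed bytes / compressed bytes) is reported alongside throughput.

        cudaEventDestroy(start); cudaEventDestroy(stop);
        cudaFree(dSrc); cudaFree(dDst);
        return 0;
    }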
Requirements
- Knowledge of the English language (source code comments and the final report should be in English)
- Knowledge of CUDA
- Additional relevant knowledge is always advantageous