Feeding the data-hungry GPU beast
Partnered with NVIDIA
GPU-accelerated systems must store and process massive amounts of data, particularly when running HPC and AI workloads. To that end, a great deal of work has gone into ensuring that the file systems and storage servers feeding those systems keep pace with ever-more-powerful GPU compute components.
But as we enter the hybrid cloud era, other considerations around security, sharing and simplicity have become equally critical to performance. So how are new technologies like DPUs and NVMe over Fabrics helping to address these requirements?
This session will cover:
- The role of the programmable, SoC-based DPU in the evolution of data center HPC and AI systems.
- How organizations' differing priorities around performance, data types, and ease of installation and management influence their choice of storage file system.
- The extent to which GPUDirect Storage can boost the performance of both storage and AI/HPC applications.
- How to optimize network connectivity between the storage nodes and GPU clusters.
- The development of NVMe over Fabrics and how it is expected to change the way storage is architected for HPC and AI systems.
Timothy Prickett Morgan, The Register (Host/moderator)
Rob Davis, Vice President of Storage Technology, NVIDIA
Randy Kreiser, Storage Specialist/FAE, Supermicro