The boom in demand for artificial intelligence and machine learning (AI/ML) enabled applications and services is putting severe strain on existing datacenter infrastructure, which can struggle to meet the high-speed I/O and transmission requirements of extremely large datasets. GPUs and HPC are one part of the solution, but they can only do so much without faster, more scalable storage that keeps GPUs fully utilized during training and inferencing.
So how do companies get around those limitations and build a future-proof enterprise data architecture with the scale and performance to handle AI/GenAI/ML workloads, both today and in the years ahead?
To help answer that question, join our webinar, “A data architecture built for enterprise AI,” where we will:
Introduce the subject by discussing how and why AI/ML/DL and GPU computing are bringing HPC into more mainstream use, both on premises and in the cloud.
Explain how this has created new IT infrastructure requirements that are difficult to meet with existing storage architectures such as scale-out NAS and HPC file systems.
Outline the specs and capabilities of the Hammerspace Hyperscale NAS architecture, including GPUDirect support; discuss how it differs from other storage solutions; and explain why it is better suited than scale-out NAS and HPC file systems to modern HPC and AI requirements.
Talk about how Meta is now using Hammerspace Hyperscale NAS for LLM training of its Llama2/3 models.