It’s Time to Bring AI into the Broader Infrastructure Fold

Sponsored by VAST Data



Since its infancy, AI compute, especially for medium- to large-scale training systems, has been an outlier, often literally disconnected from other systems. However, as users move beyond experimentation into broader production, there is a growing need to mesh AI with the common tools, frameworks, and hardware that run the rest of the organization.

As part of that process, the focus has shifted from compute to data movement and storage. The CPU and GPU number crunching is the easy part; ensuring the results of large neural network runs are available to other systems, and on a common platform, is the new challenge.

Whether in the cloud or on-prem, this more mature view of AI demands integration, bringing once secluded AI systems into the wider organizational fold.

But this demands more than traditional NAS can handle. It demands the robustness of established principles and protocols rather than novel file systems or limited scale-out hardware, and a TCO that fits the bigger picture of system investments.

Join Jeff Denworth of VAST Data, Tony Paikeday of NVIDIA, and Nicole Hemsoth, co-editor of The Next Platform, as they discuss:

  • The shift from experimental to production AI: how these systems were conceived and built, how they became experimental silos, and what needs to happen, especially on the storage side, to bring them into the fold.
  • How TCO plays out when you take this integrated view of AI, both in the cloud and on-prem.
  • What you’ve been missing all along in unified, robust storage platforms that can handle all workloads (mixed file types, etc.).

The NVIDIA DGX™ A100 system features eight NVIDIA GPUs and two 2nd Gen AMD EPYC™ processors.