It’s Time to Bring AI into the Broader Infrastructure Fold
Sponsored by VAST Data
In its infancy, AI compute, especially for medium- to large-scale training systems, has been an outlier, often literally disconnected from the rest of an organization's systems. But as users move from AI experimentation to broader production, the need to mesh AI with the common tools, frameworks, and hardware that run the rest of the organization is stronger than ever.
In that maturation process, the AI focus has shifted from compute to data movement and storage. These days, the CPU and GPU number crunching is the easy part; making the results of large neural network runs available to other systems on a common platform is the new target.
Whether in the cloud or on-prem, this more mature view of AI systems focuses on integration: bringing once-secluded AI systems into the wider organizational fold.
Doing that takes more than traditional NAS can handle. It demands the robustness of established principles and protocols over novel file systems or limited scale-out hardware. What's more, the TCO has to be in tune with the big picture of system investments. Join Jeff Denworth of VAST, Tony Paikeday of NVIDIA, and Nicole Hemsoth, co-editor of The Next Platform, as they discuss:
The shift from experimental to production AI: how these systems were conceived and built, how they became experimental silos, and what needs to happen to bring them into the fold, especially on the storage front.
How TCO plays out when you take this integrated view of AI, in the cloud as well as on-prem.
What you’ve been missing all along when it comes to unified, robust storage platforms that can handle all workloads (mixed files, etc.).