In the early days of high performance computing, core technology developments, including clustering, parallel programming, and fast networking, quickly found their way into the enterprise. Once those developments were absorbed, supercomputing spun off into its own ecosystem while the broader enterprise world built up its own toolchains and technologies.
However, as AI/ML, accelerated databases and analytics platforms, and the need for near-real-time results put more demands on enterprise infrastructure, HPC is more important than ever. The years of government investment leading to exascale have value well beyond the Top500, and exascale-class capability is becoming easier for everyone to adopt.
HPC's reputation for relying on technologies that are out of reach and cannot be widely adopted rests on out-of-date assumptions. In this webcast, we will tackle those misconceptions and look at the many ways technologies developed for the world's fastest supercomputers have broad relevance in ordinary enterprises, as well as what it takes to make the leap.
Join Bill Mannel, Vice President and General Manager of HPE’s High Performance Computing division, and Steve Conway, Hyperion Research analyst for HPC Market Dynamics, along with moderator Nicole Hemsoth, for this vibrant conversation.
Among the topics of conversation:
How did HPC diverge from the rest of the enterprise world, and doesn’t a move to exascale make it even more esoteric?
What has changed in the last several years to make exascale-like computing capabilities practical and easy to adopt, even for organizations that don’t consider what they do to be HPC?
How does an organization quickly bootstrap a path to supercomputing-like technologies, and what does this mean for its productivity, performance, and cost-effectiveness?
With a broad line of server and services-related products, from its Apollo hardware line to the Cray technology assets to HPE GreenLake, how can HPE deliver on the promise?