HPE at ISC: “Perform like a supercomputer, run like a cloud”

Since we last saw HPE at ISC over a year ago, the company has embarked on a solid streak of successes in HPC and supercomputing – successes the company will no doubt be happy to discuss during the virtual ISC 2021.

This year being the year of exascale, HPE is likely to place its Frontier system at the top of its list of HPC achievements. Frontier is on schedule among the US Department of Energy's planned exascale installations and is expected to be the first US exascale system, slated for installation later this year at Oak Ridge National Laboratory. El Capitan, another exascale system (2 exaflops) for which HPE-Cray is the prime contractor, is expected to be installed at Lawrence Livermore National Laboratory in early 2023.

A third exascale system, Aurora at Argonne National Laboratory, for which Intel is the prime contractor, will have a strong HPE-Cray flavor, incorporating the HPE Cray EX supercomputer, formerly known as Cray "Shasta".

Moving away from the rarefied level of exascale, HPE has announced a series of customer wins over the past 12 months, including:

  • the US National Energy Research Scientific Computing Center recently unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed-precision performance, based on the HPE Cray Shasta platform architecture, including the HPE Slingshot HPC interconnect and a Cray ClusterStor E1000 all-flash storage system with a usable capacity of 35 petabytes;
  • in a 10-year deal worth more than £1 billion, Microsoft Azure will integrate HPE Cray EX supercomputers for the UK Met Office – a system expected to rank in the top 25 of the Top500 list of the world's fastest HPC systems, according to the weather service, and to be twice as powerful as any other in the UK;
  • an HPE Cray EX system for the KTH Royal Institute of Technology (KTH) in Stockholm, funded by the Swedish National Infrastructure for Computing (SNIC), targeting modeling and simulation in academic and industrial fields, including drug design and renewable energy;
  • the delivery of a 7.2-petaflop supercomputer to support weather forecasting and modeling for the US Air Force and Army, powered by two HPE Cray EX supercomputers operating at Oak Ridge National Laboratory, where it is managed by ORNL;
  • the expansion of NASA's "Aitken" supercomputer with HPE Apollo systems, which became operational in January 2021 and are designed for compute-intensive CFD modeling and simulation;
  • a contract worth more than $35 million to build a supercomputer for the National Center for Atmospheric Research (NCAR) for extreme weather research – a system 3.5 times faster and 6 times more energy efficient than NCAR's current machine;
  • in tandem with AMD, HPE announced last October that Australia's Pawsey Supercomputing Centre had awarded HPE a systems contract worth A$48 million, and that Los Alamos National Laboratory had deployed its "Chicoma" system, based on AMD processors and the HPE Cray EX supercomputer;
  • "Crossroads", a $105 million system built on the HPE Cray EX supercomputer for the National Nuclear Security Administration (NNSA) of the US Department of Energy, to be used by DOE's Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories; the system is due to be installed in spring 2022 with quadruple the performance of the existing system;
  • and on the Arm front, the Leibniz Supercomputing Centre (LRZ) in Munich announced last June that it would deploy HPE's Cray CS500 with Fujitsu A64FX chips based on the Arm architecture;
  • Microsoft has partnered with HPE to bring the Met Office's new HPE Cray EX supercomputer to the cloud, enabling the Met Office to derive and act on insights from its data using AI, modeling and simulation.

Additionally, this week HPE announced the acquisition of Determined AI, a San Francisco-based startup whose open-source machine learning (ML) platform is designed to train AI models faster.

We recently chatted with HPE's Joseph George, a long-time figure in the HPC community who joined the company when Cray was acquired by HPE in 2019. George, HPE vice president of alliance and global industry marketing, said that in a way, ISC is for HPE a continuation of its HPE Discover event, held June 22-24 – at least as far as HPC is concerned.

"Our biggest customer event is HPE Discover, and HPE Discover this year is a week before ISC, so it works out well, because a lot of the presentations and content that we publish are there," said George. "So we're going to link to that content – a lot of very rich and useful information for our customers and, potentially, ISC attendees."

"And much of that content," he said, "will support HPE's goal of delivering systems that perform like a supercomputer and run like a cloud. Our goal is to power all kinds of new workloads of any size for every data center… We build on this legacy, this history and this impact of exascale. You have heard a lot about this from us for a long time. But within exascale, in this methodology, there are workloads and data that flow smoothly between system architectures and different organizations. So we try to bring this concept to all data centers, regardless of their size."

This central message – "perform like a supercomputer, run like a cloud" – means merging the qualities of the two.

Joseph George of HPE

"If we look at what we've been delivering in the supercomputing space for a long time," George told us, "it's the scale we're reaching to meet some of these big challenges that require the power, capability and functionality of a supercomputer – through software, across the network, across storage – taking advantage of all those qualities. But when you get to the cloud, you think about the possibility of managing it in a more modern way: how can I manage the software, integrate the software with APIs? How do I run it at cloud scale, especially when you start to go into other data centers or the enterprise? Our customers are now interfacing with their compute, storage and infrastructure with things like APIs and more centralized consoles managed in cloud data centers, so how do we integrate those elements and tenets into how our supercomputing customers handle these systems? And then vice versa: how do we reach customers who know their cloud infrastructure but need the performance and power of a supercomputer? It's about bringing those capabilities together and serving our customers wherever they are in their journey."

At ISC, HPE will be involved in more than 10 sessions throughout the conference (see here and here), including Pankaj Goyal, HPE VP of HPC/AI & Compute Product Management, speaking on "Innovation Fuels the HPC and AI Journey"; Torsten Wilde, senior HPE system architect, on "Guidelines for Developing an HPC Data Center Monitoring and Analysis Framework"; and Jonathan Bill Sparks, HPE technologist, who will talk about "Containers in HPC".

Another session involving HPE and others will take place on Friday, July 2: "High Performance Computing Trends and Directions: HPE, Microsoft Azure, NEC, T-Systems", featuring Nicolas Dube, HPE Fellow and vice president, and Addison Snell, co-founder and CEO of HPC industry analyst firm Intersect360 Research.

These and other HPE activities at ISC are available at the company's virtual booth, located at https://www.isc-hpc.com/sponsor-exhibitor-listing.html.

About Mariel Baker
