More Details Shared on EuroHPC Systems Leonardo and Lumi as They Approach the Finish Line

About two years ago, the EuroHPC Joint Undertaking (JU) selected eight host countries for its first eight systems. Now those trees are bearing fruit – Slovenia's 6.8 peak petaflops Vega system, for example, is already operational. The stars of the show, however, are the three pre-exascale systems: Finland's 375 Linpack petaflops Lumi system, Italy's 249 Linpack petaflops Leonardo system and Spain's yet-to-be-detailed MareNostrum 5. At the HPC User Forum, representatives of CSC and CINECA provided updates on their respective pre-exascale systems as the installation and operational dates draw closer.

Leonardo

Sanzio Bassini, director of CINECA's Supercomputing Applications and Innovation department, provided an update on Leonardo, which is expected to cost up to 240 million euros. Bassini said the 3,456 Atos Sequana nodes in Leonardo's "booster module" will provide 240.5 Linpack petaflops, while the "data-centric, general-purpose module" – consisting of 1,536 nodes – will provide 8.97 Linpack petaflops. The hardware of these Atos Sequana XH2000 nodes, detailed in previous announcements, will include around 14,000 Nvidia A100 GPUs, Intel Ice Lake processors, Nvidia HGX baseboards and 200 Gb/s Nvidia InfiniBand networking.
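As a rough, back-of-the-envelope sanity check on those figures (a sketch only, assuming four A100s per node so that 3,456 nodes line up with the quoted ~14,000 GPUs; the actual node layout is not confirmed here), the booster module's Linpack number works out to roughly 17 teraflops of sustained FP64 per GPU, consistent with the A100's capabilities:

    # Illustrative arithmetic only; node count and GPU total are the article's
    # approximate figures, and 4 GPUs per node is an assumption.
    booster_nodes = 3456
    gpus_per_node = 4                       # assumed: 3,456 x 4 ~= 14,000 GPUs
    linpack_petaflops = 240.5

    total_gpus = booster_nodes * gpus_per_node          # ~13,824 GPUs
    tf_per_gpu = linpack_petaflops * 1000 / total_gpus  # teraflops per GPU
    print(f"~{total_gpus} GPUs, ~{tf_per_gpu:.1f} TF Linpack each")  # ~17.4 TF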

Bassini also indicated that the nodes will be 95 percent direct liquid cooled with hot water, and that the system will be accompanied by an exabyte-scale storage installation targeting a throughput of one terabyte of data per second.

Leonardo will be hosted in a new data center at the Tecnopolo di Bologna, a few kilometers from Casalecchio di Reno, the current headquarters of CINECA. This, Bassini said, poses some challenges for efficient cooling: the site, he explained, is "in southern Europe, and with a pretty hot summer." As a result, the data center – which will also house supercomputing resources for the European Centre for Medium-Range Weather Forecasts – anticipates a possible electrical footprint of 40 MW. Still, Bassini said, the Leonardo team aims to deliver the system with a power usage effectiveness (PUE) of less than 1.1.
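For context, PUE is simply the ratio of total facility power draw to the power consumed by the IT equipment itself. A minimal sketch of what a sub-1.1 target means in practice, using purely hypothetical numbers (these are not CINECA's figures):

    # Hypothetical illustration of the PUE metric; the figures are invented.
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power usage effectiveness = total facility power / IT power."""
        return total_facility_kw / it_equipment_kw

    # If the IT load were 9,000 kW, a PUE under 1.1 would mean everything else
    # (cooling, power distribution, lighting) adds less than 900 kW on top.
    print(pue(total_facility_kw=9_900, it_equipment_kw=9_000))  # -> 1.1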

Evangelos Floros, a program manager for the EuroHPC Joint Undertaking, had recently said that CINECA planned to install the booster module by August 2021 and complete configuration and testing later in 2021; the data-centric module, meanwhile, was to be installed in the first quarter of 2022 and finalized in the second quarter of 2022. According to Bassini, this is still the case: "more or less eight, nine months from now," he said of the booster module, followed by January 2022 for the data-centric module.

Lumi

Pekka Manninen, director of the Lumi Leadership Computing Facility, offered more details on the system's installation process. CSC, the Finnish national supercomputing center, has previously laid out Lumi's specifications in some detail. An HPE Cray EX-based system, Lumi will include AMD Epyc Milan processors and Instinct GPUs, with its primary GPU partition (Lumi-G) providing 550 peak petaflops, complemented by a set of additional partitions and storage systems: Lumi-C, a supplementary CPU partition with ~200,000 Milan cores; Lumi-D, a data analytics partition with 32 terabytes of memory and additional GPUs; and many petabytes of storage via Lumi-P, Lumi-O and Lumi-F. And, Manninen said at the HPC User Forum, "We aspire to add emerging technological capabilities for users to explore."

Concept art of the LUMI supercomputer. Image courtesy of CSC.

With most of the nodes' internals already laid bare, Manninen instead offered more details about Lumi's impressive data center. "It's in an old paper mill, [and the] paper production [had] already moved to Latin America some time ago," he said. "But the plant remains, and with it a capacity to house supercomputers of up to, say, 200 megawatts. The paper machines were using a lot of electricity, and that supply is still on site, so we can now use it for computing capacity – and we can run that computing load with carbon-neutral electricity."

Unlike Leonardo's uncomfortably hot summers, Finland offers a decidedly cool climate, providing free cooling for the entire system year-round, Manninen said. The energy used by the system itself will be 100 percent hydropower thanks to nearby hydropower plants (although, Manninen noted, the system can still tap into the grid if needed). He also touted the reliability of the local grid: in the past 38 years, he said, it has experienced only a single two-minute outage.

The design of the LUMI data center. Image courtesy of Synopsis Architects Ltd. and Geometria Architecture Ltd.

Even Lumi's waste heat will not go to waste. "Lumi will not waste its heat by releasing it, but will collect it and use it to heat the surrounding town of Kajaani," Manninen said. "In fact, 20 percent of the city will be heated by Lumi's excess heat. One's trash is another's treasure." All of these factors combined, he added, make Lumi not just carbon neutral but carbon negative, with an estimated negative CO2 footprint of 13,500 tonnes per year and a PUE of 1.03. (And, of course, the center will earn money from the sale of the waste heat.)

Despite the supercomputer's somewhat remote location – Kajaani is a good day's drive from Helsinki – Manninen assured the audience that the system will be connected to the Nordic network backbone, with multiple gigabit connections to GÉANT that he said could easily be upgraded to terabit connections. "Kajaani may be far away on a map," he said, "but it's very well connected."

According to a previous session, Lumi-C (and much of the rest of the system) is expected to be deployed by the third quarter of this year; the much more powerful Lumi-G module, however, will have to wait until the fourth quarter of 2021 or the first quarter of 2022. Lumi will host pilot-use periods for certain project areas (such as data-intensive computing and high-throughput computing) beginning in September (for the first phase) and at the end of December (for the second phase).

MareNostrum 5

Two down, one to go. The mysterious MareNostrum 5, which will be hosted by Spain's Barcelona Supercomputing Center, is the third and (for now) final pre-exascale system ordered by EuroHPC – but unlike the other seven systems, vendor details have yet to be shared. All we know so far is that, as of 2019, MareNostrum 5 was planned as a heterogeneous system with peak performance around 200 petaflops, including an experimental platform aimed at developing new technologies for future supercomputers.

The anticipation for the reveal is definitely building, with some “coming” teases dating back to late last year.

