By Jaya Jagadish
In recent years, we have seen a rapid increase in the adoption of technologies such as AI, ML, cloud computing, and data science. Several forces accelerated this during the Covid-19 pandemic: the large-scale adoption of virtual business models by companies; the expansion, maturation, and adoption of cloud ecosystems; the growth of broadband and of mobile and PC penetration; the shift to digital transactions by an increasingly tech-savvy consumer base; and the wider ecosystem's preparations for 5G networks. The resulting data explosion is so large that we can no longer rely on traditional supercomputers to process the incoming information.
We have entered the megacycle of high performance computing (HPC). An IDC study estimates that more than 59 zettabytes of data were generated in 2020. One zettabyte equals one trillion gigabytes. The volume of data created over the next three years will be much higher than the volume of the past thirty years. It is also estimated that the volume of data generated by in-vehicle devices, as well as the increase in metadata, will soon exceed all other types of data.
A new IT paradigm
While the performance of supercomputers doubled almost every year from 2002 to 2009, from 2009 to 2019 it doubled only every 2.3 years. This slowdown was due to several factors, including the slowing of Moore's Law and technical constraints such as the end of Dennard scaling. While these appeared to be significant hurdles, technologists have now found innovative ways to overcome them and usher in what is known as the Exascale computing age. Exascale systems are computing systems powerful enough to perform a billion billion, or one quintillion (10^18), operations per second.
To push the boundaries of performance and efficiency, engineers build heterogeneous systems that combine CPUs and GPUs, and practise co-design, that is, the iterative optimization of hardware and software in search of superior performance and efficiency at a lower cost. This approach comes to life with the launch of the Frontier supercomputer, powered by 3rd-generation EPYC processors and Radeon Instinct GPUs. Frontier is expected to be the world's fastest supercomputer and the world's first Exascale system. It is being installed at Oak Ridge National Laboratory (ORNL) in the United States, and will deliver over 1.5 exaflops of peak processing power.
Pushing back the frontiers of science
Researchers plan to use this immense computing power combined with the fusion of HPC and AI to tackle big challenges once thought to be out of reach. We saw a glimpse of this with the demand for high computing power during the global rush to develop the Covid-19 vaccine.
Regarding Frontier, ORNL is preparing eight key scientific applications, including one that will study astrophysics and galactic formation. Another interesting application is a plasma physics simulation system called PIConGPU. Its main relevance is for applications in cancer radiotherapy and for probing the structure of matter via X-rays in the materials and life sciences.
Another use case for Exascale computers is climate science. Exascale computers will allow climatologists to simulate the behavior of the world’s oceans and atmosphere, and model abrupt climate change, helping us understand what we need to do to keep the earth a hospitable place.
In conclusion, at over a quintillion operations per second, Exascale computers hold the promise of unveiling a new world of possibilities concerning the fundamental forces of the universe, and who knows, maybe even the origin of the universe itself!
The writer is Country Manager, AMD India