This is a contributed opinion piece. Semi14 reviews all submissions and publishes those we consider relevant to our readers. All contributors are encouraged to supply material in Swedish, but where that is not possible we publish the English original. The views expressed are the contributor's own.
Over the past few years, we have witnessed an unprecedented shift in the adoption of digital technologies, from the well-established fields of High Performance Computing (HPC) and Big Data analytics to artificial intelligence (AI) and machine learning (ML). Academia's long-standing demand to accelerate the pace of scientific discovery now coincides with competitive pressure in the commercial market to shorten product design cycles and time to decision.
Supercomputers, also called computational or parallel clusters, process complex simulations by splitting compute problems into smaller “jobs” that run simultaneously on multiple server nodes connected by a fast interconnect. The price-performance of such systems is constantly improving, making HPC ever more affordable. A compute problem that twenty years ago would have taken weeks on a multi-million-euro system can today be completed in just a few hours on a single server equipped with compute GPUs.
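As a rough, hypothetical sketch of how a problem is split into jobs across nodes (illustrative only, not code from any system mentioned here), the following MPI program in C has each rank compute its own slice of a numerical estimate of pi, with the partial results combined over the interconnect:

/* Minimal illustration of splitting one compute problem into smaller
 * "jobs": each MPI rank integrates its own slice of f(x) = 4/(1+x^2)
 * over [0,1], and the partial sums are combined into an estimate of pi.
 * Hypothetical example for illustration only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's "job" id  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of jobs     */

    const long n = 100000000;               /* total integration steps  */
    const double h = 1.0 / (double)n;
    double local_sum = 0.0;

    /* Each rank handles every size-th step, so the work is divided
     * evenly across the nodes and runs simultaneously on all of them. */
    for (long i = rank; i < n; i += size) {
        double x = ((double)i + 0.5) * h;
        local_sum += 4.0 / (1.0 + x * x);
    }
    local_sum *= h;

    /* Combine the partial results over the cluster interconnect. */
    double pi = 0.0;
    MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.12f (computed on %d ranks)\n", pi, size);

    MPI_Finalize();
    return 0;
}

The same program runs unchanged on a laptop or on thousands of nodes; only the number of ranks passed to the job launcher changes, which is what makes this style of decomposition scale.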
Once available only to educational institutions and major corporations, HPC has become more attainable as public clouds and community datacentres enable smaller organisations to book remote computational capacity on demand, paying only for the core-hours and related services they use.
HPC can enable commercial organisations to solve a wide spectrum of complex problems, including product optimisation and electronic design, credit analysis and fraud detection, drug discovery and human studies, oil and gas exploration, climate research and weather prediction, rendering and movie pre- and post-production, and so forth.
The multi-GPU hardware architecture used in today's HPC systems has much in common with AI and ML deployments, and the intersection of the two technologies is bringing more advanced AI solutions into the mainstream, with HPC potentially enabling models to be trained on ever larger datasets while making efficient use of the compute cluster.
Just as enterprise cloud computing gave businesses new ways to engage customers and to transition to new models of working, the next generation of supercomputing will open up new possibilities for breakthrough innovation by accelerating R&D and product development by orders of magnitude.
A new era of supercomputing
From 2002 to 2009, supercomputing performance doubled almost every 12 months. From 2009 to 2019, however, the rate dropped to a doubling roughly every 2.3 years, a slowdown attributed to several factors, including the slowing of Moore's Law and the end of Dennard scaling.
Yet technologists have found innovative ways to overcome these constraints, ushering in what is being called the exascale era of computing. An exascale system is one that can perform a quintillion floating-point operations per second (FLOPS) – a billion billion, or 1,000,000,000,000,000,000 – which means exascale machines can run calculations roughly five times faster than today's top supercomputers, and can also run more complex, higher-precision models.
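As a back-of-envelope illustration (my own arithmetic, not a figure from the sources above), one exaFLOPS corresponds to 10^18 operations per second, so a person performing one calculation per second would need tens of billions of years to match a single second of exascale work:

\[
1~\text{exaFLOPS} = 10^{18}~\text{FLOP/s},
\qquad
10^{18}~\text{s} \;\approx\; \frac{10^{18}}{3.15\times 10^{7}~\text{s/year}} \;\approx\; 3.2\times 10^{10}~\text{years}.
\]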
To reach these new performance highs, engineers are taking a heterogeneous approach: tightly integrated CPUs and GPUs, combined with iterative optimisation of both hardware and software, deliver new levels of performance and efficiency at a lower cost per FLOPS.
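To give a flavour of what this heterogeneous model means for the programmer, the sketch below (a hypothetical, vendor-neutral example, assuming a compiler with OpenMP target-offload support) keeps serial setup work on the CPU and hands the data-parallel loop to an attached GPU; production HPC codes apply the same pattern at far larger scale, typically through programming models such as HIP, CUDA, SYCL or OpenACC.

/* Heterogeneous CPU+GPU sketch: the host CPU prepares the data, then
 * offloads the throughput-heavy loop to the GPU via OpenMP target
 * offload. Hypothetical example; requires a compiler and runtime with
 * GPU offload support. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static float a[N], b[N], c[N];

    /* Serial setup stays on the CPU. */
    for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    /* The data-parallel work is mapped onto the GPU's many cores;
     * the map clauses copy inputs to, and results from, GPU memory. */
    #pragma omp target teams distribute parallel for map(to: a, b) map(from: c)
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]);
    return 0;
}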
Nowhere is this better demonstrated than with the Frontier supercomputer being developed at the Oak Ridge Leadership Computing Facility in the United States, which is set to make history as the world's first operational exascale supercomputer when it is switched on later this year. The machine, which will accelerate innovation in science and technology and help the US maintain leadership in high-performance computing and AI, is powered by 3rd-gen AMD EPYC™ CPUs and AMD Instinct™ GPUs and will deliver more than 1.5 exaflops of peak processing power. An even more powerful AMD-based exascale-class system, El Capitan, is anticipated at Lawrence Livermore National Laboratory in the United States in 2023.
Japan was first to market with its Fugaku supercomputer, rated at 1.42 exaflops of peak performance, and China is reportedly operating a less publicised Sunway “Oceanlite” system rated at 1.32 exaflops of peak performance. So where is Europe in this race?
Europe's exascale mission
Europe is known for taking its own route in almost every segment, and supercomputing is no different.
While China and the US are looking to become the leaders in the world of supercomputing, Europe is taking a more collaborative approach through the government-funded European High-Performance Computing Joint Undertaking (EuroHPC), initiated and driven forward by the Partnership for Advanced Computing in Europe (PRACE). The initiative pools resources to fund a world-class, integrated European HPC and data infrastructure and to support an innovative supercomputing ecosystem.
The continent's supercomputing efforts are also bolstered by Horizon Europe, a seven-year European Union scientific research framework that is investing around €95 billion to fuel discoveries and world-firsts, including the development of EU-based exascale machines.
This unique approach has led to a number of breakthroughs in the European supercomputing market, enabling researchers across the continent to tackle challenges once thought beyond reach.
Take the Hawk supercomputer, currently the 24th system on the TOP500 list of the world's fastest supercomputers, installed at the High-Performance Computing Center Stuttgart (HLRS) at the University of Stuttgart. This machine – an HPE Apollo 9000 system with 5,632 nodes spread across 44 cabinets, each node carrying AMD EPYC CPUs – delivers around 26 petaflops of peak performance and has enabled academic institutions and industrial customers to carry out cutting-edge research in a wide range of contexts. As a case in point, HLRS is enabling customers in the automotive segment to run structural analysis and fluid dynamics simulations.
There's also Lumi, a pre-exascale machine located at the IT Centre for Science (CSC) in Kajaani, Finland, that demonstrates the power of this next era of supercomputing. Lumi, which utilises similar technology to Frontier, with a custom AMD EPYC “Trento” CPU and four AMD Instinct MI250X GPU accelerators per node, will be capable of sustaining more than 375 petaflops – more than 375 million billion calculations per second – with a theoretical peak performance of more than 550 petaflops.
What makes pre- and exascale machines particularly interesting is memory coherency. This technology, not yet widely available in the general market, means there is a single copy of data accessed by both the CPU and the GPUs, without the need to keep separate copies for each. This, in turn, reduces programming overhead, improves performance, and frees up system resources, helping bleeding-edge systems like Lumi run more efficiently.
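To show what coherency changes in practice, here is a variant of the offload sketch above (again hypothetical, assuming an OpenMP 5.x toolchain and hardware that supports unified shared memory): the explicit copy/map clauses disappear, because the CPU and GPU work on the same single copy of the data.

/* Memory-coherency sketch: with unified shared memory there is one
 * allocation visible to both CPU and GPU, so no map/copy clauses are
 * needed. Hypothetical example; requires OpenMP 5.x and coherent
 * CPU-GPU memory. */
#include <stdio.h>
#include <stdlib.h>

#pragma omp requires unified_shared_memory

int main(void) {
    const int n = 1000000;
    double *x = malloc(n * sizeof *x);        /* single copy of the data   */

    for (int i = 0; i < n; i++) x[i] = 1.0;   /* written by the CPU ...    */

    #pragma omp target teams distribute parallel for
    for (int i = 0; i < n; i++) x[i] *= 2.0;  /* ... updated in place by the GPU */

    printf("x[0] = %f\n", x[0]);              /* read back by the CPU, no copy   */
    free(x);
    return 0;
}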
Lumi also boasts innovative ‘free cooling’ technology, which enables its waste heat to be utilised in the district heating network of Kajaani, further reducing costs and CO2 footprint. This is anticipated to cut the city's annual carbon footprint by 13,500 tonnes – an amount that equals the output of 4,000 passenger cars.
Thanks to this massive computational capacity, the machine – which already ranks amongst the world’s top supercomputers – is enabling European researchers to solve problems across different areas, from weather and cybersecurity through to drug discovery and personalised medicine. It’s making breakthroughs in the area of climate change too; the AMD-powered Lumi enables climate scientists to run high-resolution climate models, which can provide better insights for climate impact studies.
On its journey to exascale, Europe is already realising the new world of possibilities that this unmatched level of performance can provide; these systems will help solve the most complex scientific research questions, allow scientists to create more realistic Earth-system and climate models, and power new studies of the universe, from particle physics to the formation of stars.
The continent now wants hardware that exceeds the performance of the world's fastest supercomputer, Japan's Fugaku. It is an ambitious and complex project and will take time, which is why the current democratisation of HPC is so important. With access to technology developed by AMD and other x86 vendors that is already being used in some of the world's fastest machines, as well as access to an array of off-the-shelf open-source software tools to optimise and scale supercomputing workloads, Europe is already able to solve complex problems and realise the benefits of exascale computing. Yet it will require continued focus and investment from multiple European nations to develop home-grown hardware, tools and scalable software if Europe is serious about operating its own, unique exascale-class systems.
The contributor: Roger Benson, Senior Commercial Sales Director at AMD. Roger is responsible for AMD's Data Centre and Embedded Solutions business in Europe and has over 20 years' experience in improving market and customer capabilities through the application of new, high-performance technologies.