High performance computing is going mainstream

Once the domain of scientists and governments, HPC is now becoming critical for a broad spectrum of organisations.

There was a time when high performance computing (HPC) referred to supercomputing or parallel processing, the kind of raw number-crunching capability needed for applications like weather forecasting, medical research and other areas that require complex modelling. For decades, supercomputing or HPC was the domain of scientists and researchers who had to deal with large data sets, and it was limited to sectors like defence, large global banks, governments and the energy industry.

The changing contours of HPC
However, the situation has changed over the years, given the amount of data enterprises have to process today. With the Internet of Things (IoT) feeding in unstructured data from numerous sources, the size of the data sets enterprises must deal with has grown phenomenally. Going forward, as enterprises experiment with technologies like Artificial Intelligence (AI) and Machine Learning (ML), they will have to walk down the HPC path.

In the past, only researchers or scientists could assemble data sets large enough to justify parallel processing or supercomputing facilities. That is no longer the case. A data explosion is under way, thanks to the highly connected world we live in: Wi-Fi-enabled connected devices are generating massive quantities of data that need to be processed and analysed, and that requires the kind of computing power only HPC can provide. Enterprises no longer struggle to collect massive quantities of data. The focus has shifted from quantity to quality of data, and today quality data comes in massive quantities. The challenge is to crunch that data and derive actionable insights that help enterprises drive positive business outcomes. This is where HPC comes in handy.

HPC democratisation
There was a time when the United States restricted HPC exports to many countries, including India, fearing the technology would be used for military applications. But commercial computing underwent a revolution as companies like Intel kept doubling the processing capabilities of their microprocessors, until commercial computers went well beyond the supercomputers of the 1970s and 1980s. Supercomputing or HPC is no longer the privilege of a few. HPC has been democratised over the years and is today available to any enterprise that has to process massive data sets and make sense of them.

HPC on Cloud
Today HPC capabilities can be accessed through the cloud. While the bulk of HPC work is still done in-house on private clouds, service providers are increasingly offering a range of HPC options on the public cloud. At present, these attract traditional HPC users such as research institutes, government organisations and large banks. Going forward, given the large data sets enterprises will have to deal with, non-traditional HPC users are also likely to take up these services. Innovations in networking and bare metal computing will make HPC more accessible to enterprise users and help them solve problems that involve an avalanche of data.

GPU versus TPU
There was a time when graphics processing units, or GPUs, were built for high performance gaming. Today, GPUs have found a range of new applications and are used in almost everything from AI and ML to natural language processing and self-driving cars; they handle most HPC workloads. As GPUs become more powerful, they will be increasingly deployed for applications requiring HPC. Some time ago, Google launched the Tensor Processing Unit (TPU), an alternative to the GPU. The company has been using this chip in its data centres for ML applications such as neural networks; an individual TPU can reportedly process 100 million photos in a day. Google will also be offering TPU capabilities on the cloud, and the HPC market will see GPUs competing with TPUs as their respective computing capabilities evolve.
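To make the GPU/TPU point concrete, here is a minimal sketch using the open-source JAX library (an assumption for illustration; the article does not name a framework, and the matrix sizes are arbitrary). The same matrix-multiply code runs unchanged on a CPU, a GPU or a TPU, because the framework compiles it for whichever accelerator backend is available.

import jax
import jax.numpy as jnp

# Show which devices JAX found, e.g. CPU, CUDA GPUs or TPU cores.
print(jax.devices())

key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (2048, 2048), dtype=jnp.float32)
b = jax.random.normal(key, (2048, 2048), dtype=jnp.float32)

# jit compiles the computation for the target device, so the same function
# exercises a GPU's parallel cores or a TPU's matrix units without changes.
matmul = jax.jit(lambda x, y: x @ y)
result = matmul(a, b)
result.block_until_ready()  # wait for the asynchronous device computation
print(result.shape)

The point of the sketch is portability: the code expresses the workload once, and the accelerator underneath (GPU or TPU) is a deployment choice rather than a rewrite.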

Artificial Intelligence is another area that will exploit HPC in a major way. While AI itself is not new and has been around for almost six decades, it has evolved and now relies on humongous data sets to train ML models. Creating self-learning chatbots and conversational assistants like Siri is no longer considered bleeding edge, as many enterprises already deploy virtual conversational assistants. These will only generate ever larger data sets, requiring enterprises to deploy HPC at scale.

The impact of edge computing
The other factor that will increase HPC usage is the shift from centralised computing to decentralised models. Approaches like edge computing and fog computing require enterprises to process and analyse data at the edge of the network, close to the data source. In this scenario, modern applications need compute, storage and network capacity close at hand to respond faster to customer requirements, as in the sketch below. Edge and fog computing cut latency and increase speed, and such capabilities call for superior processing power; HPC is the only way to deliver it.
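A minimal, hypothetical sketch of the edge idea (the sensor pipeline and function names below are invented for illustration): instead of shipping every raw reading to a central cloud service, the edge node processes readings locally and forwards only a compact summary, cutting both latency and upstream bandwidth.

import random
import statistics

def read_sensor_batch(n: int) -> list[float]:
    """Simulate one second of raw readings from a connected device."""
    return [20.0 + random.gauss(0, 0.5) for _ in range(n)]

def summarise_at_edge(readings: list[float]) -> dict:
    """Reduce raw readings to the few statistics the central service needs."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "min": min(readings),
    }

raw = read_sensor_batch(1000)
summary = summarise_at_edge(raw)
print(f"raw payload: {len(raw)} values -> forwarded payload: {summary}")

The design choice is simply where the crunching happens: the heavy processing moves to the node nearest the data source, which is exactly where the demand for high performance compute at the edge comes from.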

These are the early days of HPC going mainstream. With cloud service providers offering HPC, everybody, including small and medium businesses, can access supercomputing capabilities. HPC will also change the way individual users access and use applications on devices ranging from desktops to mobiles. According to various research reports, the HPC market will see exponential growth over the next few years and touch almost USD 60 billion by 2025.

Photograph: Matt Howard/Argonne National Laboratory/Creative Commons
