How realistic is Graphcore’s 2024 timeline for ultra-smart supercomputers?

British AI chip designer Graphcore is building an ultra-intelligent AI computer slated for release in 2024. The company says the computer will exceed the parametric capacity of the human brain.

Graphcore has dubbed the ultra-intelligent AI computer “Good”, after computing pioneer Jack Good.

The company is developing next-generation IPU technology to power Good, which is expected to cost around $120 million apiece. The ultra-intelligent AI computer will deliver over 10 exaFLOPS of AI floating-point compute and up to 4 petabytes of memory with over 10 petabytes/second of bandwidth. Additionally, the supercomputer will support AI models of up to 500 trillion parameters and is fully supported by the company’s Poplar SDK.
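Some quick arithmetic (ours, not Graphcore’s) puts those headline numbers in context: at FP16 precision, the weights of a 500-trillion-parameter model alone would occupy a quarter of the claimed 4 petabytes, and full optimizer state would be tighter still.

```python
# Back-of-envelope memory check for a 500-trillion-parameter model
# (our arithmetic based on the quoted specs, not Graphcore's figures).
PARAMS = 500e12            # 500 trillion parameters
MEMORY_PB = 4              # claimed memory capacity, in petabytes

weights_pb = PARAMS * 2 / 1e15    # FP16 weights: 2 bytes per parameter
adam_pb = PARAMS * 16 / 1e15      # mixed-precision Adam: ~16 bytes/param
print(f"FP16 weights: {weights_pb:.0f} PB; full Adam state: ~{adam_pb:.0f} PB "
      f"of the {MEMORY_PB} PB budget")
# FP16 weights: 1 PB; full Adam state: ~8 PB -> training at that scale would
# need sharding, offload, or a leaner optimizer to fit within 4 PB.
```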

Earlier, Graphcore announced updated models of its IPU-POD multiprocessor computers built on the Bow chip. The company claimed the new models are five times faster than comparable NVIDIA DGX machines at half the price.

Bow Pods

Graphcore relies on 3D Wafer-on-Wafer technology, developed with TSMC, to provide higher bandwidth between silicon dies. As a result, Bow can speed up neural network training by up to 40% while using 16% less energy than its previous generation.
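The claim is ambiguous about whether the 16% figure refers to energy per training run or to power draw, and the two readings imply quite different efficiency gains. A minimal sketch of both, under our own assumptions:

```python
# Two readings of "up to 40% faster with 16% less energy" (our arithmetic,
# not Graphcore's spec sheet).
speedup = 1.40      # assumed: 40% higher training throughput
reduction = 0.16    # the "16% less" figure

# Reading 1: 16% less *energy per training run* -> the gain is direct.
eff_energy = 1 / (1 - reduction)        # ~1.19x work per joule

# Reading 2: 16% less *power draw* while also finishing 1.4x sooner.
eff_power = speedup / (1 - reduction)   # ~1.67x work per joule

print(f"energy reading: {eff_energy:.2f}x, power reading: {eff_power:.2f}x")
```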

The Bow Pod256 delivers over 89 PetaFLOPS of AI compute, and the full-scale Bow Pod1024 produces 350 PetaFLOPS. Bow Pods can deliver superior performance at scale for a wide range of AI applications, from GPT and BERT for natural language processing to EfficientNet and ResNet for computer vision, along with other neural network architectures, the company claimed.
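Taken at face value, Graphcore’s own figures imply near-linear scaling from Pod256 to Pod1024; a quick check (our arithmetic):

```python
# Scaling check on the quoted figures: 4x the IPUs should ideally
# give 4x the FLOPS.
pod256_pflops = 89.0
pod1024_pflops = 350.0

ideal = pod256_pflops * 4
efficiency = pod1024_pflops / ideal
print(f"Bow Pod1024 hits {efficiency:.0%} of linear scaling")  # ~98%
```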

Competition

Microsoft built a supercomputer for OpenAI to train huge AI models. The move is seen as a first step toward making the infrastructure needed to train massive AI models available as a platform. The supercomputer has over 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server.
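The announcement gives per-server bandwidth but not a server count, so any aggregate figure requires an assumption; the sketch below assumes a common 8-GPU server layout.

```python
# Rough aggregates from the published figures. The 8-GPUs-per-server
# layout is our assumption, not part of the announcement.
gpus = 10_000
gpus_per_server = 8                 # assumed, typical for GPU servers
gbps_per_server = 400

servers = gpus // gpus_per_server   # ~1,250 servers under this assumption
aggregate_tbps = servers * gbps_per_server / 1_000
print(f"~{servers} servers, ~{aggregate_tbps:.0f} Tb/s aggregate bandwidth")
```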

NVIDIA’s Selene ranks as the seventh-fastest supercomputer in the world. Built in three months, it is the fastest industrial system in the United States and the second most energy-efficient system ever.

Meanwhile, Meta recently announced the AI Research SuperCluster (RSC) supercomputer. RSC will accelerate Meta’s AI research and help build the metaverse; Meta researchers already use it to train large language models.

Graphcore’s competitors include Habana Labs, whose chips power AWS DL1 instances, and SambaNova, which has raised $676 million to develop AI training and inference chips.

Last year, Cerebras Systems embarked on a quest for brain-scale artificial intelligence. The company designed an external memory system that helps clusters of machines train neural networks with billions of parameters. Today, the most advanced AI clusters support a trillion parameters (the machine equivalent of synapses) and require megawatts of power to operate.

Last year, Cerebras Systems claimed it could support AI models with 120 trillion parameters. The company’s WSE-2 processor packs 850,000 AI cores and 2.6 trillion transistors onto a whopping 46,225 mm² of silicon. To put things into perspective, the second-largest AI processor on the market measures 826 mm² and has 0.054 trillion transistors. Cerebras also claims 1,000 times more on-board memory, with 40 GB of SRAM, and says its next-generation processors can “unlock brain-scale neural networks”.
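Put side by side, the quoted figures make the size gap concrete (our arithmetic; the 826 mm² part is the one the article cites as the second-largest AI processor):

```python
# Die-size and transistor-count comparison from the figures quoted above.
wse2_mm2, wse2_transistors = 46_225, 2.6e12
rival_mm2, rival_transistors = 826, 0.054e12   # "second-largest AI processor"

area_ratio = wse2_mm2 / rival_mm2                         # ~56x the area
transistor_ratio = wse2_transistors / rival_transistors   # ~48x the transistors
print(f"{area_ratio:.0f}x the silicon area, {transistor_ratio:.0f}x the transistors")
```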

According to Nick Bostrom, if Moore’s Law still applies, human-level intelligence could be achieved between 2015 and 2024. Ray Kurzweil has stated that computers can crack human-level intelligence by 2029. “I have predicted that in 2029 an AI will pass a valid Turing test and successfully reach human levels of intelligence,” he said.

Interestingly, Graphcore’s timeline more or less lines up with the predictions made by renowned scientists.
