Cerebras

World's largest AI chips for ultra-fast training.

COMPANY

Date: 2016

Category: Hardware & AI

About the partner

Cerebras Systems is one of the most technically audacious semiconductor companies ever created. It asked a question no one else had dared to ask seriously: what if, instead of connecting thousands of small chips together to create AI compute, you built a single chip the size of an entire silicon wafer, with every processing unit on one piece of silicon, connected by the shortest possible interconnects at the fastest possible speeds? That question led to the Wafer-Scale Engine, the largest computer chip ever manufactured, containing trillions of transistors and hundreds of thousands of AI-optimized cores on a single 300 mm wafer, and to a company that has demonstrated order-of-magnitude improvements in AI training and inference speed that fragmented multi-chip approaches cannot match.

The Cerebras CS-2 and CS-3 systems, built around successive generations of the Wafer-Scale Engine, compress weeks of training time on conventional GPU clusters into days or hours, a capability that fundamentally changes the economics and velocity of AI research. The architectural advantage derives from eliminating inter-chip communication overhead: on a conventional GPU cluster, chips must constantly exchange activations and gradients across high-speed interconnects that, despite their sophistication, remain a dominant bottleneck in training large neural networks. On a Cerebras system, those transfers happen within a single chip at on-wafer bandwidths that are orders of magnitude higher, sustaining training throughput as models grow in a way that multi-chip clusters struggle to replicate.
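The bandwidth argument above can be sketched as a toy cost model: per-step time is compute time plus the time to move gradients across the interconnect. All figures below are illustrative assumptions for the sketch, not published Cerebras or GPU specifications.

```python
# Toy cost model: how interconnect bandwidth shapes training-step time.
# Every constant here is an assumed, illustrative value.

def step_time(compute_s: float, bytes_exchanged: float, bandwidth_bps: float) -> float:
    """Per-step time = fixed compute time + time to move activations/gradients."""
    return compute_s + bytes_exchanged / bandwidth_bps

GRADIENT_BYTES = 4e9   # e.g. a 1B-parameter model's gradients in fp32 (assumed)
COMPUTE_S = 0.05       # math time per step, same on both systems (assumed)

cluster_bw = 100e9     # ~100 GB/s off-chip interconnect (assumed)
on_wafer_bw = 20e12    # ~20 TB/s on-wafer fabric (assumed)

t_cluster = step_time(COMPUTE_S, GRADIENT_BYTES, cluster_bw)   # 0.05 + 0.040 s
t_wafer = step_time(COMPUTE_S, GRADIENT_BYTES, on_wafer_bw)    # 0.05 + 0.0002 s

print(f"cluster step: {t_cluster:.4f}s, on-wafer step: {t_wafer:.4f}s")
```

Under these assumed numbers, communication is a large share of the cluster's step time but nearly vanishes on-wafer, which is the core of the argument: the same compute finishes more steps per second when the data movement happens at on-chip bandwidth.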
Cerebras's cloud service and on-premises deployment options give AI researchers and enterprises access to wafer-scale AI compute without the capital expenditure of purchasing the hardware outright — democratizing access to this unique computational capability for organizations ranging from national laboratories and pharmaceutical companies to AI-native startups seeking the fastest possible path from research to deployment. The company's technology is particularly well-suited to the training of large language models, scientific AI applications in drug discovery and genomics, and research workloads where training speed is the critical bottleneck. For any organization where AI training velocity is a strategic priority, Cerebras Systems offers a capability that exists nowhere else in the world.