
Invest in Cerebras

Artificial Intelligence

Founded: 2016

Headquarters: Sunnyvale, California



Last Round Valuation

$50 Billion

Share Price




About Cerebras

Cerebras Systems develops computing chips designed for the singular purpose of accelerating AI, and builds computer systems for complex artificial intelligence deep learning applications. With a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types, Cerebras creates chips the size of entire silicon wafers, which would ordinarily be diced into many individual chips, to train AI models faster. The Company’s flagship product, the CS-2 system, is powered by the world’s largest processor, the 850,000-core Cerebras WSE-2, and enables customers to accelerate their deep learning work by orders of magnitude over graphics processing units.

Key Facts

“At Cerebras Systems, our goal is to revolutionize computing. It’s thrilling to see some of the most respected supercomputing centers around the world deploying our CS-2 system to accelerate their AI workloads and achieving incredible scientific breakthroughs in climate research, precision medicine, computational fluid dynamics and more.” said Andrew Feldman, CEO and Co-Founder of Cerebras Systems.

With enterprises in all sectors realizing the importance of AI, adoption is increasing. However, most AI projects still fail because they are not cost-effective.

According to recent surveys, business leaders put the failure rate of AI projects between 83% and 92%, largely because AI is not cost-efficient. “As an industry, we’re worse than gambling in terms of producing financial returns,” says Aible CEO Arijit Sengupta.

"AI will be in every electronic product we buy and sell and use from toys to automobiles. And the ability to make AI useful depends on really good engineers, really good AI modelers, data scientists, as well as the hardware. But all that will be for naught if you're not running it on efficient hardware because you can't afford it. So this is all about making AI pervasive and affordable and – people overuse the term – but democratizing AI is still a goal. It's not where we are, but it's gonna take a lot of work to get there,” according to the Founder of Cambrian AI Research.

Cerebras Systems’ main offering is a chip, the Wafer-Scale Engine (WSE): a revolutionary central processor for the Company’s deep learning computer system. The second-generation WSE (WSE-2) powers the CS-2 system; it is the largest computer chip ever built and the fastest AI processor on Earth.

Unlike legacy, general-purpose processors, the WSE was built from the ground up to accelerate deep learning.

The WSE-2’s main competitive advantages are:

(1) Massive high-bandwidth on-chip memory;
(2) Speed, faster than a traditional cluster could possibly achieve;
(3) Cost and power efficiency, with one chip equivalent to a cluster of legacy machines;
(4) Ease of programming;
(5) Radically reduced programming complexity; and
(6) Reduced wall-clock compute time and time to solution.

Cerebras’ WSE-2 delivers unprecedented levels of computation, memory, and interconnect bandwidth on a single, wafer-scale piece of silicon, and sparsity harvesting further maximizes its computational capabilities. The outcome is enormous performance in an integrated chip without bottlenecks, in which every node is programmable and independent of the others. With this revolutionary approach to AI, companies can reduce the cost of curiosity.

The net result of the Company’s innovation to date is unmatched utilization, performance levels, and scaling properties that were previously unthinkable.

Cerebras offers a revolutionary AI infrastructure, with its CS-2 system. The CS-2 is designed from the ground up to power, cool, and deliver data to the revolutionary WSE-2 processor so that it can deliver unparalleled performance to users. The package is easy to deploy, operate, and maintain in client datacenters today. This means no compromises and peak performance with no large-scale cluster deployment complexity for customers.

Large-scale data centers use massive numbers of computers, whereas Cerebras delivers the same output with far fewer systems. This means customers can deploy datacenter-scale AI computers to unlock world-leading innovation in just a few days or weeks, delivering greater performance in a space- and power-efficient package built for the job.

Cerebras also offers the Cerebras Software Platform, CSoft. The Cerebras Software Development Kit allows researchers to extend the platform and develop custom kernels – empowering them to push the limits of AI and HPC innovation. The Cerebras Machine Learning Software integrates with the popular machine learning frameworks TensorFlow and PyTorch, so researchers can effortlessly bring their models to the CS-2 system.

Cerebras also offers Cerebras Cloud @ Cirrascale, which puts a blazing-fast AI solution right at customers’ fingertips. It provides access to the latest Cerebras technology hosted at Cirrascale Cloud Services, a specialist in deep learning infrastructure. This joint solution delivers the model training performance of many processors in a single instance, so users can develop new neural network architectures, machine learning methods, and algorithms that were previously impractical, all without worrying about infrastructure management.

Cerebras can double the computing its chips perform for only double the power, whereas current systems need more than twice as much power to double their computing capacity. Today’s AI systems draw tens of megawatts over months, the equivalent of a small city’s power, to train models; Cerebras delivers the same compute with far less energy at faster speeds.
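The difference between linear and superlinear power scaling can be made concrete with a back-of-the-envelope sketch. The scaling exponent used for the cluster case below is an illustrative assumption, not a Cerebras or GPU-cluster measurement:

```python
# Illustrative comparison of linear vs. superlinear power scaling.
# The cluster exponent (1.5) is an assumed figure for illustration only.

def power_needed(compute_factor: float, scaling_exponent: float) -> float:
    """Power multiplier required to scale compute by `compute_factor`,
    assuming power grows as compute ** scaling_exponent."""
    return compute_factor ** scaling_exponent

# Linear scaling (the behavior described for Cerebras): 2x compute -> 2x power.
linear = power_needed(2.0, 1.0)

# Superlinear scaling (as in clusters, where interconnect and coordination
# overhead grow with size): 2x compute -> ~2.8x power under this assumption.
superlinear = power_needed(2.0, 1.5)

print(f"linear:      {linear:.2f}x power for 2x compute")
print(f"superlinear: {superlinear:.2f}x power for 2x compute")
```

Under these assumed exponents, the cluster needs roughly 40% more power than the wafer-scale system to achieve the same doubling of compute, and the gap widens as the target compute factor grows.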

Cerebras allows researchers and organizations with tiny budgets, in the tens of thousands of dollars, to access AI training tools that were previously available only to much larger, better-funded organizations.

In November 2023, Cerebras announced the achievement of a 130x speedup over Nvidia A100 GPUs on a key nuclear energy HPC simulation kernel, developed by researchers at Argonne National Laboratory. This result demonstrates the performance and versatility of the Cerebras Wafer-Scale Engine (WSE-2) and ensures that the U.S. continues to be the global leader in supercomputing for energy and defense applications.

In November 2023, Cerebras announced its software release 2.0. R2.0 is the biggest software update to Cerebras' platform this year, bringing a generational leap in performance, programmability, and model support. Crucially, R2.0 moves the Cerebras software stack to PyTorch 2.0, making it easy to work with the latest models and training techniques on Cerebras hardware.

In November 2023, Cerebras announced that it hired industry expert Julie Shin Choi as its Senior Vice President and Chief Marketing Officer. Choi joins Cerebras from MosaicML, where she was Chief Marketing and Community Officer and the company’s first executive hire. In this role, Choi led marketing, communications, and community efforts for the company, helping it grow more than 10X to its $1.3 billion sale to Databricks in June 2023. Prior to that, she was the Vice President of AI Products & Research Marketing at Intel, where she created a full-stack marketing organization that scaled globally to support Intel’s AI business and product strategies. Previously, Julie led product marketing roles at HPE, Mozilla, and Yahoo, where she focused on enterprise customers and developers. She has produced more than 50 developer conferences and hackathons, including SheCodes, a one-day conference for women technologists that included Facebook, Google, GitHub, Hackbright, Mozilla, X, and Yahoo. Julie holds bachelor’s and master’s degrees from MIT and Stanford, respectively, both in management science. She currently serves as an advisor to Coactive AI and is a regular speaker on topics including AI, marketing, and diversity in technology.

In December 2023, Cerebras announced the Company's pivotal role as a founding member of the AI Alliance, an initiative orchestrated by IBM and Meta. Comprised of leading organizations spanning industry, startups, academia, research, and government, the AI Alliance is a beacon of open innovation and science in AI.

In January 2024, Cerebras announced that the Barcelona Supercomputing Center (BSC) has completed training FLOR-6.3B, the state-of-the-art English-Spanish-Catalan large language model. FLOR-6.3B was trained in just 2.5 days on Condor Galaxy (CG-1), the massive AI supercomputer built from 64 Cerebras CS-2s by Cerebras and G42. FLOR-6.3B continues Cerebras’ leading work on multilingual models, a trend that started with the introduction of Jais, the leading Arabic-English model.

In January 2024, Cerebras announced that the Company will support the U.S. National Science Foundation’s (NSF) National Artificial Intelligence Research Resource (NAIRR) Pilot with a remote access grant of up to 4 exaFLOPs of AI compute for large language and generative AI model training using CS-2 systems. This unprecedented compute capacity will enable NAIRR Pilot researchers to train AI models in days on Cerebras vs. months on traditional GPU clusters, significantly accelerating their ability to probe the frontiers of science and engineering in AI in the national interest.

In January 2024, Cerebras announced a collaboration with Mayo Clinic as its first generative AI collaborator for the development of large language models (LLMs) for medical applications. Unveiled at the JP Morgan Healthcare Conference in San Francisco, the multi-year collaboration is already producing pioneering new LLMs to improve patient outcomes and diagnoses, leveraging Mayo Clinic’s robust longitudinal data repository and Cerebras’ industry-leading generative AI compute to accelerate breakthrough insights.

In June 2024, according to The Information, Cerebras filed confidentially with securities regulators for an initial public offering.






Primary Vertical

Artificial Intelligence






Valuation Over Time


Last Round

Sept. 2019, Series B1



