Headquarters: Boston, MA
Lightmatter is delivering a new paradigm in semiconductor chip architecture and the next transition for large-scale computing. The company has developed photonic processors that are faster, more efficient, and cooler than any conventional processor in existence today, answering the call for increased compute speed, lower energy consumption, and reduced chip heating. Lightmatter is set to enable the continued rapid growth of artificial intelligence computing while minimizing its well-known and growing impact on the environment.
Artificial intelligence (AI) is prevalent in daily life, powering products ranging from online advertising to intelligent personal assistants. The AI models behind these products are “trained” using large datasets: a process that requires a lot of time and energy with existing computer processors. As AI algorithms continue to develop, current technologies will struggle to keep up with the increasing demand for computing power. Faster and more energy efficient computers will be essential. This is where Lightmatter comes in.
Lightmatter’s products essentially do two things: they make computing faster and more energy-efficient, two requirements critical to supporting the generative AI boom.
Lightmatter's chips execute certain complex calculations crucial to machine learning with remarkable speed—literally in a flash. Unlike traditional chips that employ charge, logic gates, and transistors to store and manipulate data, Lightmatter's chips utilize photonic circuits. These circuits carry out calculations by altering the path of light. While this concept has been around for some time, it's only recently that scaling it for practical and highly valuable purposes has been achieved.
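The "certain complex calculations" here are, above all, the matrix-vector multiplications that dominate neural network workloads; in a photonic core the weights are set by programmable optical elements and the multiply happens as light propagates through the chip. Numerically, though, it is the same familiar operation, shown below as a minimal pure-Python sketch (the function name and tiny example matrix are illustrative, not from Lightmatter):

```python
def matvec(weights, x):
    """Multiply a weight matrix (given as a list of rows) by an input vector.

    This is the core operation a photonic AI accelerator speeds up:
    every fully-connected neural-network layer computes y = Wx.
    """
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# A tiny hypothetical 2x3 "layer": y = Wx
W = [[1.0, 0.0, 2.0],
     [0.5, 1.0, 0.0]]
x = [3.0, 4.0, 5.0]
print(matvec(W, x))  # [13.0, 5.5]
```

On an electronic chip each multiply-accumulate costs transistor switching energy; in a photonic circuit the products and sums are formed by interference as the light travels, which is where the speed and efficiency claims come from.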
Through patented technology developed at MIT, Lightmatter has created a silicon chip that uses light signals, rather than electrical signals, for processing. This chip was used to demonstrate a new form of AI processing that promises orders of magnitude performance improvements over what’s feasible using existing technologies.
“For decades, electronic computers have been at the foundation of the computational progress that has ultimately enabled the AI revolution, but AI algorithms have a voracious appetite for computational power,” said Nick Harris, CEO of Lightmatter. “AI is really in its infancy, and to move forward, new enabling technologies are required. At Lightmatter, we are augmenting electronic computers with photonics to power a fundamentally new kind of computer that is efficient enough to propel the next generation of AI.”
Lightmatter’s suite of products consists of Envise, Idiom, and Passage, providing a full stack of hardware and software solutions to realize the benefits of photonic compute and interconnect technologies.
Envise is the world’s first general-purpose photonic artificial intelligence (AI) accelerator. Inference, the process of deploying trained AI models, is the fastest-growing and most energy-intensive workload in AI computing. Envise enables ultra-high-performance inference on the world’s most advanced AI models (GPT-3, Megatron) as well as staple neural networks (BERT-Large, DLRM, ResNet-50).
To enable customers to extract maximum performance from Envise, Lightmatter built a companion software stack: Idiom. Idiom interfaces with standard deep learning frameworks and model exchange formats, providing the transformations and tools required by deep learning model authors and deployers. It compiles and executes neural network models on Envise and offers tools for debugging, profiling, and model optimization. Idiom also provides automatic blade detection, enabling rapid deployment of large AI models across entire server racks.
Passage is a new, wafer-scale programmable photonic interconnect through which computer chips communicate at unprecedented speed. Passage, an 8-inch by 8-inch computer chip, integrates transistors and photonics (including lasers, modulators, and photodetectors). Heterogeneous computer chips are integrated with the platform using standard chip-on-wafer packaging technology.
Lightmatter believes Passage will enable a world where configurable-topology supercomputing systems are built on a single platform that captures the benefits of optics without the packaging cost and complexity, enabling a 100 Tbps chip-to-chip interconnect.
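To put 100 Tbps in perspective, here is a back-of-envelope sketch (assumptions mine, not Lightmatter figures) of how long it would take to stream the weights of a GPT-3-scale model between chips at that rate:

```python
# Back-of-envelope estimate; parameter count and precision are assumptions.
link_tbps = 100            # claimed Passage chip-to-chip bandwidth
model_params = 175e9       # GPT-3-scale parameter count
bytes_per_param = 2        # assuming 16-bit weights

bits_to_move = model_params * bytes_per_param * 8
seconds = bits_to_move / (link_tbps * 1e12)
print(f"{seconds * 1e3:.0f} ms")  # 28 ms
</tt>```

Under these assumptions, an entire 175-billion-parameter model could move between chips in under 30 milliseconds, which illustrates why a high-bandwidth interconnect matters for distributing large AI models across many processors.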
Lightmatter claims its technology delivers performance five times faster than an Nvidia A100 when running large transformer models like BERT, while consuming just 15% of the energy. This proposition is particularly enticing for AI giants like Google and Amazon, which are always in pursuit of greater computing power while grappling with hefty energy bills. The blend of enhanced performance and reduced energy costs makes Lightmatter's platform a highly attractive option for these industry leaders.
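The "15% of the energy" figure can be read two ways, so here is a hedged back-of-envelope sketch (the interpretations are mine, not Lightmatter's) of what the claim implies per workload:

```python
# Back-of-envelope sketch; both readings below are my interpretations.
speedup = 5.0     # claimed throughput vs. an Nvidia A100
fraction = 0.15   # the claimed "15% of the energy"

# Reading 1: 15% is total energy per workload, giving an 85% saving,
# roughly consistent with the ~80% data-center figure reported elsewhere.
saving_if_energy = 1.0 - fraction

# Reading 2: 15% is instantaneous power draw; finishing 5x sooner
# compounds it, leaving only 3% of the A100's energy per workload.
energy_per_workload_if_power = fraction / speedup

print(f"{saving_if_energy:.0%}, {energy_per_workload_if_power:.0%}")  # 85%, 3%
```

Either reading implies a large per-workload energy saving, which is the crux of the pitch to hyperscale buyers.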
According to TechCrunch, Lightmatter’s products are in beta testing, and mass production is planned for 2024, by which point the company expects to have gathered enough feedback and maturity to deploy in data centers.
Lightmatter’s products help companies reduce their data center energy consumption costs by around 80% when training and running machine-learning models, compared with providers like Nvidia, AMD, and Intel, whose chips move data over electrical wires. Lightmatter is also aiming to get companies such as Nvidia, AMD, and Intel to license its technology for use in their own chips.
In August 2022, Lightmatter named its new Vice President of Hardware Engineering, Richard Ho, who spent nearly nine years at Google leading the revolutionary Cloud Tensor Processing Unit (TPU) project. Ho was one of the earliest engineers on the Google Cloud TPU project, most recently serving as its Senior Director of Engineering. He helped bring the Google TPU from concept to reality, using silicon breakthroughs to power advanced machine learning applications that fuel top consumer products such as Google Translate, Gmail, and Assistant. Ho will spearhead Lightmatter’s chip engineering division, with a focus on developing and deploying Lightmatter’s state-of-the-art photonic AI accelerator and wafer-scale interconnect for faster and cleaner computing solutions at scale.
In August 2022, Lightmatter named its new Vice President of Engineering, Ritesh Jain. He joins Lightmatter after more than 21 years at Intel leading systems and packaging engineering for data center programs, where he served as Vice President of Intel’s Data Center and Artificial Intelligence group. Over the span of two decades, he built and led several cross-functional engineering teams globally and was part of several major technology transitions and initiatives for data center products, including the Ponte Vecchio GPU and the yet-to-be-released Aurora Supercomputer. At Lightmatter, Jain will leverage his at-scale expertise and passion for sustainability to lead packaging and systems development for Lightmatter’s next-generation compute and interconnect products.
In May 2023, Lightmatter announced it had raised a Series C investment round from SIP Global, Fidelity Management & Research Company, Viking Global Investors, GV (Google Ventures), HPE Pathfinder, and existing investors. The company will leverage this new financing to arm some of the largest cloud providers, semiconductor companies, and enterprises with the power of photonic technology to bring a new level of performance and energy savings to the most advanced AI and HPC workloads.
Since the company’s last funding announcement in May 2023, Lightmatter has grown its headcount more than 50% to meet client demand and product milestones. As a result of this expansion, Lightmatter plans to open a Toronto office in 2024. The company has also bolstered its leadership team, hiring Danner Stodolsky as Vice President of Data Center Architecture and Colin Sturt as General Counsel.