Twenty years ago, Duke University professor David R. Smith used artificial composites called “metamaterials” to build a real-life invisibility cloak. While the cloak didn’t work quite like Harry Potter’s, hiding objects only from light of a single wavelength, these developments in materials science eventually fueled further research in electromagnetics.
Today, Austin-based Neurophos, a photonics startup spun out of Duke University and Metacept (an incubator run by Smith), is taking that research further to solve what may be the biggest problem facing AI labs and hyperscalers: how to scale computing power while keeping power consumption in check.
The startup has developed a “metasurface shaper” whose optical properties let it act as a tensor core for matrix-vector multiplication — the math at the heart of many AI tasks (especially inference), currently performed by specialized GPUs and TPUs using traditional silicon gates and transistors. By putting thousands of these shapers on a chip, Neurophos claims, its “optical processing unit” is significantly faster than the silicon GPUs currently used en masse in AI data centers, and much more efficient at inference (running trained models), which can be an expensive workload.
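For readers unfamiliar with the operation in question, here is a minimal sketch of matrix-vector multiplication, the workload the article says the metasurface shapers accelerate. The weight values below are arbitrary placeholders, not anything from Neurophos; in a real model, W would hold a trained layer’s weights and x an input activation vector.

```python
# Matrix-vector multiplication: each output element is the dot product
# of one weight row with the input vector. AI inference chains many of
# these multiplies, which is why accelerating this one op matters.
def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Tiny illustrative example (values are arbitrary).
W = [[1.0, 2.0],
     [3.0, 4.0]]
x = [5.0, 6.0]
print(matvec(W, x))  # [17.0, 39.0]
```

A GPU tensor core performs this with silicon multiply-accumulate units; Neurophos’ pitch is to do the same math in the optical domain instead.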
To fund its chip development, Neurophos just raised $110 million in a Series A round led by Gates Frontier (Bill Gates’ venture capital firm), with participation from Microsoft’s M12, Carbon Direct, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital and others.
Now, photonic chips are nothing new. In theory, photonic chips offer higher efficiency than traditional silicon because light produces less heat than electricity, can travel faster, and is much less sensitive to temperature changes and electromagnetic fields.
But optical components tend to be much larger than their silicon counterparts and can be difficult to mass-produce. They also need converters to move data between the digital and analog domains, and those converters can be large and consume a lot of power.
Neurophos, however, claims that the metasurface it has developed solves all of these problems in one go because it is about “10,000 times” smaller than traditional optical transistors. That small size, the startup says, allows it to fit thousands of modules on a chip, enabling much higher performance than traditional silicon because the chip can perform many more calculations at once.
“When you shrink the optical transistor, you can do a lot more math in the optics domain before you have to make that conversion back to the electronics domain,” says Dr. Patrick Bowen, CEO and co-founder of Neurophos. “If you want to go fast, you have to solve the power efficiency problem first. Because if you’re going to take a chip and make it 100 times faster, it’s going to burn 100 times more power. So you’ll have the privilege of going fast after you solve the power efficiency problem.”
The result, Neurophos claims, is an optical processing unit that can far outperform Nvidia’s B200 AI GPU. The startup says its chip can run at 56 GHz, delivering up to 235 peta-operations per second (POPS) while consuming 675 watts, compared with the B200, which delivers 9 POPS at 1,000 watts.
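Taking the quoted figures at face value, a quick back-of-the-envelope calculation shows what the claimed advantage works out to in throughput and in operations per watt. These are Neurophos’ own numbers as reported here, not independent benchmarks.

```python
# Figures quoted in the article:
# Neurophos OPU: 235 POPS at 675 W; Nvidia B200: 9 POPS at 1,000 W.
opu_pops, opu_watts = 235, 675
b200_pops, b200_watts = 9, 1000

speedup = opu_pops / b200_pops                       # raw throughput ratio
efficiency_gain = (opu_pops / opu_watts) / (b200_pops / b200_watts)

print(f"raw speedup: {speedup:.1f}x")                # ~26.1x
print(f"ops-per-watt gain: {efficiency_gain:.1f}x")  # ~38.7x
```

In other words, the claimed chip would be roughly 26 times faster while drawing less power, for an efficiency gain approaching 40x.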
Bowen says Neurophos has already signed up several customers (though he declined to name any), and companies like Microsoft are “looking very closely” at the startup’s products.
But the startup is entering a crowded market dominated by Nvidia, the world’s most valuable public company, whose products have more or less underpinned the artificial intelligence boom. Other companies are working in photonics as well, though some, like Lightmatter, have pivoted to focus on interconnects. And Neurophos is still a few years from production, expecting its first chips to hit the market by mid-2028.
However, Bowen is confident that the performance and efficiency advances delivered by the company’s metasurface shapers will prove a sufficient moat.
“What everybody, including Nvidia, is doing in terms of the fundamental physics of silicon is really evolutionary rather than revolutionary, and it’s tied to TSMC’s progress. If you look at the improvement of TSMC nodes, on average, they improve in energy efficiency about 15%, and that takes a couple of years.”
“Even if we factor in Nvidia’s improvement in the architecture over the years, by the time we go to market in 2028, we still have huge advantages over everyone else in the market because we’re starting with 50x Blackwell in both power efficiency and raw speed.”
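Bowen’s two figures can be sanity-checked against each other: if each silicon process node delivers roughly 15% better energy efficiency every couple of years, as he says, how many node generations would it take incumbents to close a 50x gap? The percentages below are his quoted estimates, not measured data.

```python
import math

# Bowen's figures: ~15% efficiency gain per node, ~2 years per node,
# and a claimed 50x head start for Neurophos at launch.
per_node_gain = 1.15
target_gap = 50.0

# Solve 1.15 ** n >= 50 for n.
nodes_needed = math.log(target_gap) / math.log(per_node_gain)
print(f"node generations to close a 50x gap: {nodes_needed:.0f}")  # ~28
```

At roughly two years per node, that is on the order of half a century of process scaling, which is the crux of the company’s argument that incremental silicon improvements cannot catch up.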
And to address the mass-production problems that have traditionally faced optical chips, Neurophos says its chips can be made with standard silicon foundry materials, tooling and processes.
The new funding will be used to develop the company’s first integrated photonic computing system, including data center-ready OPUs, a full software stack and early access developer hardware. The company is also opening an engineering location in San Francisco and expanding its headquarters in Austin, Texas.
“Modern AI inference requires monumental amounts of power and computation,” said Dr. Marc Tremblay, corporate vice president and technical fellow for AI core infrastructure at Microsoft, in a statement. “We need a breakthrough in computation to keep pace with the leaps we’ve seen in the AI models themselves, which is what Neurophos’ deep-tech, talent-dense team is developing.”
