
Piris Labs
Inference at Light Speed
About
Piris Labs offers a full-stack inference service that eliminates the AI data movement bottleneck. By pairing proprietary photonic hardware with a vertically optimized software stack, we mitigate the "memory wall" that leaves expensive GPUs underutilized. This allows us to deliver the same performance as traditional clusters at a fraction of the cost. Our technology improves effective FLOP utilization and reduces latency, finally making the unit economics of trillion-parameter models sustainable. Founded by a team of MIT physicists and Meta AI experts.
Founders
Founder & CEO
Founder (YC W26), reimagining AI compute architectures | PhD EE (MIT → NASA → Stanford)
Founder & President
I am a technical leader with a decade of experience scaling AI/ML products. At Meta and X, I specialized in leading tiger teams to launch high-stakes 0-to-1 initiatives. Now, I am the Co-Founder of Piris Labs. We are building high-speed photonic interconnects to solve the energy and latency constraints in AI data centers. My current focus is delivering our E/O engine prototype to secure our seed round.
AI Research Report
Problem & Solution
Problem and Solution Report: Piris Labs
Piris Labs addresses one of the most critical bottlenecks in modern artificial intelligence: the 'memory wall' and the associated data movement inefficiencies within data centers. As AI models grow in size and complexity, the speed at which data can move between processors and memory has failed to keep pace with the raw computational power of GPUs. This imbalance leads to high latency, excessive power consumption, and poor utilization of expensive hardware, ultimately making the serving and training of large-scale models prohibitively expensive for many enterprises.
The core problem is that traditional electrical interconnects are reaching their physical limits in terms of bandwidth and energy efficiency. In a standard data center cluster, moving data between nodes often consumes more energy and time than the actual computation. This 'data movement bottleneck' results in a high 'cost per token' for inference workloads, limiting the scalability of real-time AI applications and increasing the carbon footprint of AI infrastructure.
Piris Labs' solution is a full-stack inference service that replaces traditional electrical pathways with proprietary optical interconnects. By pairing breakthrough photonic hardware with a vertically optimized software stack, the company treats the entire data center as a single, coherent compute node. A key component of their technology is the 'pi Conversion Engine,' which enables the direct and highly efficient conversion of electricity to light, significantly reducing the energy lost during data transmission.
The value proposition of this approach is substantial. Piris Labs claims its technology delivers 5x lower latency and 10x lower power consumption per bit compared to traditional electrical interconnects. For the end user, this translates to a 2x lower cost per token for AI inference. By eliminating the networking bottlenecks that throttle GPU performance, Piris Labs enables faster, more cost-effective AI deployment without requiring a total overhaul of existing compute architectures.
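To see how interconnect savings can flow through to per-token cost, consider a minimal sketch. All parameters below are illustrative assumptions, not company data: it assumes a baseline where data movement accounts for half of serving cost, applies the claimed 10x power-per-bit reduction to that share, and assumes reduced network stalling also trims effective compute cost.

```python
# Illustrative model (assumed parameters, not company data): per-token
# serving cost split into a compute share and a data-movement share.

def cost_per_token(compute_cost, movement_cost):
    """Total serving cost per token = compute share + data-movement share."""
    return compute_cost + movement_cost

# Assumed baseline: data movement is half of total serving cost.
baseline = cost_per_token(compute_cost=1.0, movement_cost=1.0)

# Assumed optical case: 10x cheaper data movement, plus modestly better
# GPU utilization (less stalling on the network) trimming compute cost.
optical = cost_per_token(compute_cost=0.8, movement_cost=0.1)

print(f"cost ratio: {baseline / optical:.2f}x")
```

Under these assumed shares the ratio lands a little above 2x; the point is only that a large movement-cost reduction plus a utilization gain can plausibly compose into the claimed 2x figure, not that these shares are Piris Labs' actual cost structure.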
Market & Competitors
Market and Competitors Report: Piris Labs
Piris Labs operates in the rapidly evolving AI infrastructure market, specifically targeting the networking and interconnect segment. The market is currently defined by a shift toward 'accelerated computing,' where traditional CPUs are supplemented by GPUs and specialized ASICs to handle the massive parallel processing requirements of Large Language Models (LLMs). As these clusters scale, the industry is moving toward 'co-packaged optics' and optical interconnects to maintain performance, a trend that Piris Labs is positioned to lead.
Competitive Landscape
The competitive environment for Piris Labs spans established incumbents, specialized accelerator companies, and fellow photonics startups:
- Incumbents: NVIDIA is the dominant player, providing not just the GPUs but also the InfiniBand and NVLink networking technologies that currently power most AI clusters. While dominant, its solutions are often criticized for high power consumption and cost.
- Specialized Accelerators: Companies like Groq, Cerebras, and SambaNova are developing new architectures to speed up AI. Groq, in particular, focuses on ultra-low latency inference. Piris Labs differentiates itself by focusing on the interconnect rather than just the compute chip, aiming to make the entire data center more efficient.
- Photonic Competitors: Startups such as Lightmatter and LightOn are also exploring the use of light for computing. However, Piris Labs' specific focus on a 'full-stack' approach—combining proprietary hardware with a vertically optimized software stack—is intended to provide a more seamless 'plug-and-play' efficiency gain for existing inference workloads.
Competitive Advantages and Disadvantages
Piris Labs' primary advantage lies in its leadership team's unique combination of optical physics and AI infrastructure scaling experience. The hire of Mohsen Moazami, former President of Groq, suggests a strong intent to compete on the commercial and execution front against established hardware players. The company's claimed '10x lower power per bit' is a significant differentiator in an era where data center power availability is a major constraint on growth.
A potential disadvantage is the high capital intensity and long development cycles associated with 'hard tech' and semiconductor-adjacent ventures. Competing against the massive R&D budgets of incumbents like NVIDIA requires not only technical superiority but also successful integration into the complex supply chains of hyperscale cloud providers (AWS, Azure, Google Cloud).
Total Addressable Market
Quantitative and TAM Report: Piris Labs
Piris Labs operates at the intersection of the optical interconnect market and the broader AI inference infrastructure market. To estimate the Total Addressable Market (TAM), a layered methodology is employed, looking at the specific hardware niche the company occupies as well as the massive AI spending environment that drives demand for their solution.
Optical Interconnect Market (Core TAM)
The most direct market for Piris Labs' hardware is the global optical interconnect market. According to Grand View Research, this market was valued at approximately $16.06 billion in 2024 and is projected to grow to $34.54 billion by 2030, representing a CAGR of 14.1%. Other analysts, such as Mordor Intelligence, provide even more aggressive estimates, forecasting the market to reach $40.03 billion in the same timeframe. This segment represents the immediate opportunity for Piris Labs' proprietary photonic hardware and 'pi Conversion Engine.'
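The growth figures above can be sanity-checked with the standard CAGR formula. The inputs are the analyst estimates quoted above; the small gap between the implied rate and the quoted 14.1% reflects rounding and base-year conventions in analyst reports, not an error in either figure.

```python
# Sketch: the CAGR arithmetic behind the cited market projections.

def implied_cagr(start, end, years):
    """Annualized growth rate taking `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Grand View Research: $16.06B (2024) -> $34.54B (2030)
rate = implied_cagr(16.06, 34.54, 6)
print(f"{rate:.1%}")  # -> 13.6%, close to the quoted 14.1% CAGR
```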
AI Inference and Infrastructure Market (Adjacent TAM)
The broader demand for Piris Labs' full-stack service is driven by the explosion in AI inference. MarketsandMarkets reports the AI inference market at $76.24 billion in 2024, with projections suggesting it will reach $254.98 billion by 2030. Furthermore, IDC forecasts that total AI infrastructure spending—which includes the servers and networking components Piris Labs aims to optimize—will reach a staggering $758 billion by 2029. In this context, Piris Labs' value proposition of reducing the 'cost per token' allows the company to capture value from a much larger pool of AI operational expenditure.
Methodology and Market Potential
The TAM for Piris Labs is calculated by aggregating the demand for high-speed, low-power data movement within data centers. While the 'Serviceable Obtainable Market' (SOM) would initially focus on high-end AI-as-a-service providers and hyperscalers, the total addressable opportunity exceeds $100 billion when considering the combined growth of optical networking and AI-specific compute. The company's claim of a 2x lower cost per token suggests disruptive potential: as data centers transition from electrical to optical interconnects to overcome the memory wall, Piris Labs could capture a significant share of the networking spend within the $758 billion AI infrastructure market.
Founder Analysis
Founders and Background Report: Piris Labs
Piris Labs is led by a founding team that combines deep academic expertise in optical physics with extensive industrial experience in AI infrastructure. The company was co-founded by Ali Khalatpour, who serves as CEO, and Keyvan Moghadam, who serves as President. Together, they bring a multi-disciplinary approach to solving the 'memory wall' in AI computing, leveraging backgrounds from top-tier research institutions and leading technology conglomerates.
Ali Khalatpour (CEO & Co-Founder) is an MIT-trained physicist and an optical scientist with over a decade of experience in the field. His academic and research pedigree includes affiliations with Harvard and Stanford. Notably, Khalatpour led the development of the optical engine for NASA's GUSTO (Galactic/Extragalactic ULDB Spectroscopic Terahertz Observatory) and is credited with developing the first high-temperature semiconductor terahertz (THz) laser. Before founding Piris Labs, he served as the Head of Metasurface Compute at Imagia Inc. and held lead scientist roles focusing on low-loss photonics and light engine development.
Keyvan Moghadam (President & Co-Founder) is an AI scientist and engineering leader with more than ten years of experience in the technology sector. His professional background is rooted in the AI leadership teams of major platforms, specifically Meta and X (formerly Twitter). Moghadam has a documented track record of building '0-to-1' AI infrastructure and scaling high-performing engineering teams from the ground up. His expertise lies in the software and systems architecture required to support large-scale AI workloads, complementing the hardware-centric innovations of his co-founder.
In addition to the founders, Piris Labs has successfully attracted high-level industry talent to its leadership team. In December 2025, the company announced that Mohsen Moazami, the former President of Groq, joined the firm. This strategic hire, alongside the company's acceptance into the Y Combinator Winter 2026 batch, underscores the team's credibility and the perceived potential of their technical approach in the competitive AI infrastructure market.