
Confluence Labs
We're an AI research lab focused on learning efficiency
About
While modern AI excels in any area where large amounts of data can be collected, it struggles where data is sparse or costly to obtain. Designing new molecules, discovering new physics, engineering new materials, and even developing more effective systems of governance are all domains where collecting data is extremely costly. We dream of a world where AI accelerates research in all of these domains and creates a more abundant future for humanity, but the current technology is not there. That’s why we started Confluence Labs. We are building AI that can design highly effective experiments in data-sparse domains and learn maximally from the data it already has.
Founders
AI Research Report
Problem & Solution
Problem and Solution Report: Confluence Labs
Confluence Labs addresses the critical bottleneck of 'learning efficiency' in modern artificial intelligence. Current machine learning models, particularly Large Language Models (LLMs), require vast amounts of data to achieve high performance. This dependency makes them ineffective in 'data-sparse' domains—such as hardware engineering, drug design, and materials science—where generating new data points requires expensive, time-consuming physical experiments. In these fields, the cost of failure is high, and the lack of massive datasets prevents traditional AI from providing meaningful acceleration.
The company's solution is a novel approach that combines LLM-driven Program Synthesis with discrete modeling. Instead of relying solely on pattern matching from large datasets, Confluence Labs uses LLMs to write code that describes transformations and logical rules. This allows the system to perform 'discrete program search,' enabling long-horizon reasoning and the ability to solve complex problems with very few examples. By shifting the focus from data volume to algorithmic efficiency, their models can assist humans in scientific discovery where data is a luxury.
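To make the idea of discrete program search concrete, here is a minimal sketch: given a few input/output examples, we enumerate compositions of primitives from a small domain-specific language until one reproduces every example. The DSL of grid transformations below (`flip_h`, `flip_v`, `rotate`, `identity`) is purely illustrative and is not Confluence Labs' actual system, which uses LLMs to generate candidate programs rather than brute-force enumeration.

```python
from itertools import product

# Toy DSL of grid transformations (illustrative primitives only).
def identity(g): return g
def flip_h(g): return [row[::-1] for row in g]
def flip_v(g): return g[::-1]
def rotate(g): return [list(r) for r in zip(*g[::-1])]

PRIMITIVES = [identity, flip_h, flip_v, rotate]

def search_program(examples, max_depth=3):
    """Brute-force discrete program search: try every composition
    of primitives up to max_depth until one maps each input grid
    to its paired output grid."""
    for depth in range(1, max_depth + 1):
        for ops in product(PRIMITIVES, repeat=depth):
            def run(g, ops=ops):
                for op in ops:
                    g = op(g)
                return g
            if all(run(inp) == out for inp, out in examples):
                return ops  # the synthesized "program"
    return None

# Two examples are enough to identify a horizontal flip.
examples = [([[1, 2], [3, 4]], [[2, 1], [4, 3]]),
            ([[5, 6]], [[6, 5]])]
program = search_program(examples)
print([op.__name__ for op in program])  # → ['flip_h']
```

The key property this sketch shares with the approach described above is sample efficiency: the program is pinned down by two examples, because the search exploits the discrete structure of the hypothesis space rather than fitting patterns over a large dataset.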
The effectiveness of this solution is demonstrated by the company's performance on the ARC-AGI-2 benchmark, which is designed to measure an AI's ability to learn new tasks efficiently. Confluence Labs achieved a state-of-the-art (SOTA) score of 97.9% on the public evaluation set, significantly outperforming previous methods. This technical milestone serves as a proof of concept for their value proposition: providing a tool that can reason through scientific problems and generate informative hypotheses with minimal data input, thereby accelerating the pace of frontier research.
Market & Competitors
Market and Competitors Report: Confluence Labs
Confluence Labs operates in the rapidly evolving 'AI for Science' market. This landscape is characterized by a shift away from general-purpose AI toward specialized models capable of handling the rigors of scientific R&D. The target audience includes pharmaceutical companies, materials science labs, and hardware engineering firms that require high-precision modeling in environments where experimental data is scarce and expensive to produce.
The competitive landscape is dense, featuring both well-funded startups and established tech giants. In the AI-driven drug discovery space, key competitors include Insilico Medicine, Recursion Pharmaceuticals, and Exscientia, all of which have integrated AI deeply into their biological research pipelines. In the broader AI research space, organizations like Google DeepMind and OpenAI are also formidable competitors, particularly as they develop models for code generation and scientific reasoning (e.g., AlphaFold). Additionally, companies like Benchling and Dotmatics provide the digital infrastructure that these AI tools often integrate with.
Confluence Labs distinguishes itself through its specific focus on 'learning efficiency' and its hybrid approach of LLM-driven program synthesis. While many competitors focus on scaling data and compute, Confluence Labs emphasizes the ability to solve problems with minimal data, as evidenced by their ARC-AGI-2 SOTA result. This focus on reasoning over rote pattern matching provides a competitive advantage in niche, high-value scientific domains where data cannot be easily scraped or synthesized. However, as a small, early-stage team, their primary challenge will be scaling their technology to meet the operational needs of large-scale industrial partners, a capability more established incumbents already have.
Total Addressable Market
Quantitative and TAM Report: Confluence Labs
Confluence Labs operates at the intersection of AI research and data-sparse scientific domains, specifically targeting drug discovery, materials science, and hardware engineering. Because the company's technology is a horizontal enabler for R&D, its Total Addressable Market (TAM) is a composite of several multi-billion-dollar sectors. The primary driver for this market is the increasing adoption of AI to reduce the time and cost associated with traditional laboratory experimentation.
In the pharmaceutical sector, the global AI in drug discovery market was estimated at approximately USD 2.35 billion in 2025. This market is projected to experience explosive growth, reaching an estimated USD 13.77 billion by 2033, representing a Compound Annual Growth Rate (CAGR) of 24.8%. McKinsey further supports this valuation, estimating that generative AI alone could produce between $60 billion and $110 billion in annual value across the pharmaceutical industry by optimizing R&D pipelines and improving success rates in clinical trials.
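The growth figures above are internally consistent, which can be sanity-checked by deriving the implied compound annual growth rate from the two endpoint estimates (USD 2.35B in 2025, USD 13.77B in 2033):

```python
start, end, years = 2.35, 13.77, 8  # USD billions, 2025 -> 2033

# Implied CAGR from the two endpoint estimates.
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # close to the cited 24.8%
```

The derived rate (about 24.7%) matches the cited 24.8% to within rounding of the endpoint figures.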
Beyond pharmaceuticals, the AI in materials discovery market represents another significant vertical. North America currently dominates this space, holding a 38% market share as of 2024. As industries seek more efficient ways to develop semiconductors, batteries, and sustainable materials, the demand for 'learning efficient' AI models—like those developed by Confluence Labs—is expected to grow. When aggregating these sectors, the total addressable opportunity for Confluence Labs' technology spans tens of billions of dollars globally, driven by the transition from empirical trial-and-error to AI-driven predictive modeling.
Founder Analysis
Founders and Background Report: Confluence Labs
Confluence Labs was founded by Brent Burdick and Niranjan Baskaran, two researchers and engineers with a focus on advancing AI learning efficiency. The team is currently based in San Francisco and is part of the Y Combinator Winter 2026 batch. Despite being a small team of two, they have already established a significant technical presence by achieving state-of-the-art (SOTA) results on the ARC-AGI-2 benchmark, a feat they have open-sourced for the research community.
Brent Burdick identifies as a self-taught engineer and researcher. According to his public profiles, he is a college dropout who moved to San Francisco to pursue high-impact technical projects. His background emphasizes a hands-on, non-traditional path to AI research, focusing on building complex systems and contributing to the 'learning efficiency' mission of Confluence Labs. He maintains a personal engineering blog and is active in the San Francisco tech scene.
Niranjan Baskaran brings a strong academic and competitive background to the venture. He is a Richmond Scholar, having received a full-ride scholarship to the University of Richmond, and has a history of success in high-stakes technical competitions. His achievements include winning first place at the Friends and Family Hackathon and the Vassar Innovation and Entrepreneurship Shark Tank Pitch Competition. Most recently, he returned to the Bay Area to begin a PhD in Statistics at UC Berkeley, further solidifying the company's grounding in rigorous mathematical and statistical methodologies.