Ishiki Labs

Building the Future of Multimodal AI

Winter 2026
Deep Learning
Generative AI
AI

About

Current multimodal models can see and hear, but they talk when they shouldn't, and they can't tell whether you're speaking to them or to someone else. We are building an AI that knows when to stay silent while still following your conversation, so it can assist in real time when you actually need it. Our first version, fern-0.1, provides real-time expert opinions on demand, instant task delegation, and zero interruptions, all as fast as ChatGPT Voice and Gemini Live.

Founders

Robert Xu

Founder

Co-founder & CTO of Ishiki Labs (W26). Previously worked on multimodal AI and the Orion AR glasses at Meta, and on research infrastructure at Citadel Securities.

Amit Yadav

Founder

Co-founder & CEO of Ishiki Labs (W26). Previously a Research Scientist at Meta, first on the LLaMA team training multimodal LLMs, then in Reality Labs training a video assistant for smart glasses. PhD from Purdue University with 20+ publications at top conferences including CVPR, NeurIPS, and ICASSP.
