
Human Archive
Multimodal data provider for robotics learning
About
We’re archiving the physical world for embodied intelligence by collecting and labeling aligned multimodal data. Building dexterous, perceptive robots that generalize robustly requires massive amounts of real-world data spanning multiple modalities and environments.

We have thought deeply about the fine line between biomimicry and its application to humanoid systems. Based on that research, we design and deploy custom hardware across residential and manufacturing settings, then process the resulting data through internal QA, anonymization, and annotation pipelines to deliver diverse, high-fidelity datasets at scale to frontier labs developing robotics foundation models and to general-purpose robotics companies.

We believe we are at a historic inflection point, with a unique opportunity to make a lasting mark on humanity and reshape physical labor markets forever. That's why our team dropped out of Stanford and Berkeley and moved to Asia to collect the world’s largest annotated multimodal dataset.
Founders
Founder
Building multimodal real-world datasets for robotics | prev. UC Berkeley MET (IEOR + Business)
Founder
Building the largest real-world multimodal robotics dataset. Berkeley dropout and former farmer (sold mangoes & planted trees)