
TL;DR
- Samsung Next backs Memories.ai, a startup tackling long-context video analysis.
- Platform processes up to 10 million hours of video, enabling deep tagging, indexing, and aggregation.
- Raised $8M in seed funding led by Susa Ventures.
- Use cases include marketing trend analysis and AI-powered security footage insights.
- Founders are ex-Meta researchers; tech stack includes on-device computing and video summarization.
Tackling the Limits of AI Video Comprehension
AI models today can summarize short clips, but they often struggle with multi-hour footage across multiple sources. This has hindered sectors like security surveillance and marketing analytics, where large-scale video data is essential.
Memories.ai aims to solve this gap with a scalable platform capable of analyzing up to 10 million hours of video, offering deep indexing, natural-language queries, and contextual tagging. The company’s approach involves layered processing: compressing noise, indexing data, and aggregating insights for end-users.
Co-founders Dr. Shawn Shen (formerly with Meta Reality Labs) and Enmin Zhou (ex–Meta machine learning engineer) built Memories.ai to enable contextual video intelligence far beyond what existing AI tools can manage.
“We wanted to build a solution to understand video across many hours better,” Shen told TechCrunch.
Funding Backed by Tech and Consumer Visionaries
Memories.ai recently closed an $8 million seed round, exceeding its original $4M goal due to strong investor demand. The round was led by Susa Ventures, with participation from Samsung Next and other investors.
Susa Ventures’ Misha Gordon-Rowe praised Shen’s technical vision and cited a market gap for “long-context visual intelligence” as a key investment motivator. Meanwhile, Samsung Next was attracted to Memories.ai’s on-device AI capabilities, citing potential consumer applications for privacy-respecting security systems.
“This can unlock better security applications for people who are apprehensive of putting security cameras in their house,” said Sam Campbell of Samsung Next.
A Unique Tech Stack and Clear Use Cases
The startup’s end-to-end pipeline includes:
- Noise reduction and compression for storage efficiency
- Searchable indexing layer using natural-language input
- Segmented tagging for video elements
- Data aggregation layer for generating reports
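The four stages above could be sketched as a toy pipeline. This is purely illustrative, assuming text transcripts stand in for video content; none of the class or function names reflect Memories.ai's actual API.

```python
# Illustrative sketch of a layered video-analysis pipeline:
# compress (filter noise), index, tag, then aggregate into a report.
# All names here are hypothetical, not the Memories.ai API.
from dataclasses import dataclass, field

@dataclass
class Clip:
    source: str
    transcript: str          # stand-in for extracted video content
    tags: list = field(default_factory=list)

def compress(clips):
    # Stage 1: discard "noise" -- here, clips with empty transcripts.
    return [c for c in clips if c.transcript.strip()]

def index(clips):
    # Stage 2: naive inverted index from words to clips, enabling
    # keyword lookups as a crude proxy for natural-language search.
    idx = {}
    for c in clips:
        for word in c.transcript.lower().split():
            idx.setdefault(word, []).append(c)
    return idx

def tag(clips, keywords):
    # Stage 3: segmented tagging -- label clips mentioning keywords.
    for c in clips:
        c.tags = [k for k in keywords if k in c.transcript.lower()]
    return clips

def aggregate(clips):
    # Stage 4: aggregate tags into a simple count report.
    report = {}
    for c in clips:
        for t in c.tags:
            report[t] = report.get(t, 0) + 1
    return report

clips = compress([
    Clip("cam1", "person enters the lobby"),
    Clip("cam2", ""),                         # dropped as noise
    Clip("cam3", "person leaves the lobby"),
])
clips = tag(clips, ["person", "lobby"])
idx = index(clips)
print(aggregate(clips))   # tag -> occurrence count
print(len(idx["lobby"]))  # number of clips mentioning "lobby"
```

A production system would replace the transcript stand-in with visual embeddings and the inverted index with semantic retrieval, but the layered compress/index/tag/aggregate structure is the same.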
This combination enables two core use cases:
- Marketing insights
  - Identify brand trends across social platforms
  - Recommend styles for upcoming video campaigns
- Security video analysis
  - Detect suspicious activity through pattern recognition
  - Train systems to flag potentially dangerous behavior
Shen notes that while current clients upload footage manually, the platform is evolving to sync content directly from shared drives for more seamless integration.
Data Snapshot: Memories.ai Platform
| Metric | Value |
| --- | --- |
| Max Video Hours Processed | 10 million |
| Funding Raised | $8 million |
| Lead Investor | Susa Ventures |
| Strategic Partner | Samsung Next |
| Key Use Cases | Marketing, Security AI |
| Founders' Background | Meta Reality Labs, ML at Meta |
Long-Term Vision: Contextual AI Assistants
Looking ahead, Memories.ai wants to enable AI agents that can remember user context via smart glasses, daily recordings, or robotic assistants. Shen envisions a future where one can query:
“Tell me all about people I interviewed last week.”
The AI would then index, summarize, and present the relevant footage, mimicking human-like visual memory. The same infrastructure could power training systems for humanoid robots or navigation for autonomous vehicles, since both depend on retaining visual context over time.
Competitive Landscape and Expansion Plans
While Memories.ai faces competition from startups like mem0 and Letta, its broader tech stack and long-context specialization offer differentiation. Established players like TwelveLabs and Google are also working on video AI, but Shen believes his platform’s horizontal design offers greater interoperability across models.
With a current team of 15 employees, the company plans to expand headcount and enhance search capabilities using the new capital.
“We’re building the memory layer for long-form video AI — and it must be contextual, searchable, and privacy-forward,” said Shen.