Comparison
Humalike vs Letta — different bets on what agent infrastructure is.
Letta — built on the MemGPT research from UC Berkeley's Sky Computing Lab — bets that memory is the central infrastructure problem for agents. Humalike sells HUMA — behavioral infrastructure that companies wrap around their AI products, where memory is one of four primitives alongside turn-taking, social norms, and emotional cue reading.
This page isn't a feature-by-feature comparison. It's an honest read on where each bet leads, and where the architectures might fit together.
What we both see
Stateless agents aren't real agents.
Both companies start from the same place: an LLM that forgets you the moment the context window closes isn't doing what an agent has to do. Persistent state — who this person is, what they said, what they want, how the relationship has evolved — is foundational, not optional. That part of the problem is shared.
Where we diverge is on how much of the agent infrastructure problem memory alone solves, and what other primitives belong in the same runtime.
Where we bet differently
Memory as the center, or memory as one of four primitives?
Letta's bet
Memory is the central problem. Solve it well and you have agents.
Letta's research line — MemGPT and what came after — argues that the cleanest way to build agents that learn and evolve is to design the memory architecture first. Core memory, archival memory, recall, background “dream agents” that transform context over time.
Their bet is that durable, well-designed memory is the unlock — agents that can be taught through language, transferred across models, and improved through experience.
Humalike's bet
Sell the layer that handles all four primitives — memory included.
Humalike sells HUMA: behavioral infrastructure that companies wrap around their AI products. Four primitives composed at runtime — turn-taking, social norms, emotions and social cues, and relational memory. Memory matters; so does deciding when to speak, who to address in a multi-party room, and how to read the emotional state of the people involved.
Our bet is that even a perfectly persistent memory still needs the rest of the social runtime around it.
Same terrain, different center. Letta is going deep on a single primitive that runs through everything the agent does. Humalike is going for breadth across the four behaviors that compose into a social agent. The architectures are composable: these aren't competing bets so much as complementary ones.
What each of us is optimizing for
Agents that learn. Agents that fit.
Letta's vocabulary is memory-shaped: core, archival, recall, dream agent, memory palace. The depth is in how the agent stores, transforms, and retrieves information about its user and its world over long-running sessions. The natural buyer is anyone building a long-lived stateful assistant.
Humalike's vocabulary is behavior-shaped: turn-taking, social norms, emotional cue, multi-party dynamics, relational memory. The depth is in how the agent participates in a real human social setting. The natural buyer is anyone shipping an agent that has to live inside a room with other humans — robots, companions, gaming teammates, classroom agents, community managers.
Where this leads
Composed, not chosen.
If you're building a long-running single-user assistant where the central question is what the agent remembers and how it evolves with the user, Letta is built for that shape of problem. If your problem also involves multi-party social rooms, voice rhythm, norms tuned per deployment, or the full social surface — HUMA fits that wider scope.
These aren't mutually exclusive bets. The most natural shape of the future is probably: deep memory infrastructure + behavioral runtime, composed together. We don't think it's either/or.
FAQ
Questions on the comparison itself.
Is Humalike a Letta alternative?
Adjacent more than alternative. Letta is memory-first agent infrastructure born from the MemGPT paper at UC Berkeley's Sky Computing Lab. Humalike sells HUMA — behavioral infrastructure that companies wrap around their AI products, where relational memory is one of four primitives, alongside turn-taking, social norms, and emotional cue reading. Same audience, tight technical adjacency, different center of gravity.
Can I use Letta's memory inside HUMA?
In principle, yes — both are designed as composable layers. HUMA's relational memory primitive is responsible for what an agent remembers about each person across sessions. If you wanted to back that with Letta's memory hierarchy (core, archival, recall, dream agents), the abstractions don't conflict. We haven't shipped a formal integration, but the architectural fit is real.
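A sketch of why the abstractions don't conflict: if the relational memory primitive only depends on a narrow storage protocol, any backend that satisfies it can slot in, including a tiered one loosely echoing a MemGPT-style core/archival hierarchy. Everything below is hypothetical illustration; neither HUMA nor Letta publishes this interface, and no formal integration exists.

```python
from typing import Protocol

# Hypothetical adapter sketch, not either product's real API.

class MemoryBackend(Protocol):
    """What a relational-memory primitive needs from any backend."""
    def store(self, person: str, fact: str) -> None: ...
    def retrieve(self, person: str, query: str) -> list[str]: ...

class InMemoryBackend:
    """Default backend: a plain dict, no persistence or tiering."""
    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = {}

    def store(self, person: str, fact: str) -> None:
        self._facts.setdefault(person, []).append(fact)

    def retrieve(self, person: str, query: str) -> list[str]:
        return [f for f in self._facts.get(person, []) if query.lower() in f.lower()]

class TieredBackend:
    """Recent facts stay in a small 'core' tier; older ones page out to
    'archival', loosely echoing a MemGPT-style hierarchy. Illustrative only."""
    def __init__(self, core_limit: int = 3) -> None:
        self.core: dict[str, list[str]] = {}
        self.archival: dict[str, list[str]] = {}
        self.core_limit = core_limit

    def store(self, person: str, fact: str) -> None:
        core = self.core.setdefault(person, [])
        core.append(fact)
        if len(core) > self.core_limit:
            # Evict the oldest core fact, like paging out of the context window.
            self.archival.setdefault(person, []).append(core.pop(0))

    def retrieve(self, person: str, query: str) -> list[str]:
        pool = self.core.get(person, []) + self.archival.get(person, [])
        return [f for f in pool if query.lower() in f.lower()]

def relational_recall(backend: MemoryBackend, person: str, query: str) -> list[str]:
    """The primitive sees only the protocol, so either backend slots in."""
    return backend.retrieve(person, query)

backend = TieredBackend(core_limit=2)
for fact in ("likes tea", "hates loud rooms", "works nights"):
    backend.store("ana", fact)
print(relational_recall(backend, "ana", "tea"))  # ['likes tea']
```

Because `relational_recall` is written against the protocol rather than a concrete store, swapping `InMemoryBackend` for `TieredBackend` (or, in principle, a real Letta-backed adapter) changes nothing above that line.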
Why isn't Humalike going as deep on memory as Letta?
Different read on where the gap is. We think memory is necessary but not sufficient — even a perfectly persistent memory of a user doesn't tell the agent when to speak, who to address in a multi-party room, what social norms apply, or how to read the emotional state of the conversation it's in. We're building infrastructure for all of those problems together, with memory as one part of it.
How do you decide which to use?
If your problem is fundamentally about agents that learn and evolve across sessions — long-running stateful assistants, agents that get better over time, single-user deep relationships — Letta is built for that. If your problem also involves multi-party social rooms, turn-taking, emotional cue reading, voice integration, or the full behavioral surface around social agents — HUMA is shaped for that. Many real systems probably want both.
Useful reads