Building Agents? Watch Memento
LLMs sound like humans – so we often end up instructing them as if they experience the world like us.
But there’s a subtle difference, especially when they’re used as agents.
👀 Humans experience a continuous stream of input and reasoning.
We build tiny hypotheses along the way:
“Let me hover over the tooltip to see what this button is for.”
It’s a loop of sense → reason → act, in continuity.
🧠 Agents, on the other hand, live in snapshots:
See screen → Decide → Act → See new screen.

They’re like a human who:
- Looks at the screen
- Writes a letter to a controller to perform an action
- Closes their eyes while it’s happening ← VERY IMPORTANT
- Opens their eyes to a new scene – with no memory of the past
The only continuity? 📝
A notepad on the table – a few scribbled notes before they "blacked out".
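
For the builders reading this, here's roughly what that snapshot loop looks like. A minimal sketch, with placeholder helpers (call_llm, get_screenshot, perform) standing in for your own model client, screen capture and action executor:

```python
# Minimal sketch of a stateless agent loop.
# call_llm, get_screenshot and perform are placeholders, not a real API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def get_screenshot() -> str:
    raise NotImplementedError("plug in your screen capture here")

def perform(action: str) -> None:
    raise NotImplementedError("plug in your action executor here")

def run_agent(task: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        screen = get_screenshot()          # see screen
        action = call_llm(                 # decide
            f"Task: {task}\n"
            f"Current screen: {screen}\n"
            "What single action should be taken next?"
        )
        perform(action)                    # act
        # The next iteration starts from scratch,
        # with no memory of why this action was chosen.
```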
So we asked ourselves:
“If this were me, how would I use that notepad?”
We’d been giving agents summaries of prior steps – but something was still missing.
So we made a small tweak to the prompt:
👉 “Write a note to your future self”
Result: the agent now jots down whatever it wants its future self to know, such as:
- What hypothesis it’s testing
- Why it chose this action
- What to look for in the new state
So when it wakes up in the next iteration, it can answer the question: “What was I thinking?”
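
In the loop itself, the tweak is tiny. Again just a sketch, reusing the placeholder helpers from the code above; the JSON format and field names are one illustrative way to do it, not a fixed recipe:

```python
import json

# Same loop, now carrying a "note to future self" between snapshots.
# Assumes the call_llm / get_screenshot / perform placeholders defined earlier.

def run_agent_with_notes(task: str, max_steps: int = 20) -> None:
    note = "First step, no notes yet."
    for _ in range(max_steps):
        screen = get_screenshot()                      # see screen
        reply = call_llm(                              # decide
            f"Task: {task}\n"
            f"Note from your past self: {note}\n"
            f"Current screen: {screen}\n"
            "Respond as JSON with two keys:\n"
            '  "action": the single next action to take\n'
            '  "note_to_future_self": what hypothesis you are testing, '
            "why you chose this action, and what to look for in the new state."
        )
        decision = json.loads(reply)
        perform(decision["action"])                    # act
        note = decision["note_to_future_self"]         # the notepad, carried forward
```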
That single line — “Write a note to your future self” —
gave our agent a memory-like thread.
A small change. A big leap in clarity and navigation. 🚀
#AI #Agents #LLM #StartUp #BuildInPublic #AgenticAI
