AI coding agents have a memory problem. Every new session starts from zero. The agent that spent 20 minutes yesterday figuring out your project’s quirky database connection string? Gone. The workaround for that Prisma edge case? Forgotten. The exact command to run tests with the right environment variables? It will rediscover it from scratch.

This isn’t a bug - it’s the nature of stateless LLM sessions. But it’s a productivity killer when you’re using an AI agent daily on the same codebase.

The Institutional Memory Problem

After a few weeks of using Claude Code on a production project, I noticed a pattern:

  1. Agent encounters a project-specific gotcha
  2. We debug together, find the solution
  3. Next session, same gotcha, same 10-minute detour

Some examples from real projects:

  • The database URL requires a specific query parameter that breaks psql but works for Prisma
  • Tests fail silently unless you source a specific env file first
  • The production deploy happens via git push, not a CLI command (despite the CLI being installed)
  • A certain API returns a 404 status but still contains valid data in the body

These aren’t bugs I’ll ever fix. They’re just… how the project works. Tribal knowledge that any long-term team member would internalize.

CLAUDE.md as Project Memory

Claude Code reads a CLAUDE.md file at the start of every session. It’s intended for project instructions, but it works equally well as a knowledge base. The insight: treat it like onboarding documentation that the AI maintains for itself.

Here’s the structure I’ve settled on:

## Learnings

- Schema changes: push to BOTH dev and prod databases
- `vercel link` overwrites `.env.local` - restore from git after
- DIRECT_DATABASE_URL with `?pool=true` breaks psql - param is Prisma-only
- Run `npm run build` before committing - catches type errors CI would reject
- Webhook returns 404 status but body contains valid data - don't check response.ok
- Background tasks: use `run_in_background` param, not shell `&`
- JSON fields in bash: avoid `->>` operators - fetch whole column instead

Each line is a compressed lesson learned. Imperative style, no fluff, one line per item.

What Qualifies as a Learning

The key is curation. Not everything belongs here.

Include:

  • Error solutions specific to this project’s setup
  • Non-obvious commands or workflows (the ones you’d forget and have to look up)
  • Gotchas that wasted time (especially if they’ll waste time again)
  • File locations that were hard to find
  • Workarounds for third-party quirks

Exclude:

  • Generic programming knowledge (“use async/await for promises”)
  • One-time issues unlikely to recur
  • Things already documented in README or official docs
  • Verbose explanations - if it needs a paragraph, it’s documentation, not a learning

The test: “Would this save 5+ minutes next time the agent encounters this situation?”

Curation Rules

Left unchecked, the Learnings section becomes a dumping ground. Every session adds more. Eventually it’s 200 lines of outdated advice, half of which contradicts the other half.

My rules:

  1. Max 30 items - if adding something new, remove something obsolete (a quick check is sketched after this list)
  2. Merge duplicates - two similar learnings become one
  3. Remove when fixed - bug workaround for a bug you fixed? Delete it
  4. One line per item - forces compression, prevents rambling
  5. Review monthly - scan for stale entries
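
Rule 1 is easy to enforce mechanically. Here's a minimal sketch in Python - assuming CLAUDE.md sits at the repo root and the section is headed `## Learnings` with one `- ` bullet per item, as in the structure above - that counts the entries and flags crude duplicates:

```python
#!/usr/bin/env python3
"""Check the CLAUDE.md Learnings section against the 30-item cap.

Assumes CLAUDE.md sits at the repo root and the section is headed
"## Learnings" with one "- " bullet per learning.
"""
from pathlib import Path

MAX_ITEMS = 30  # rule 1: cap the list

def learnings(path=Path("CLAUDE.md")):
    """Return the bullet lines under the ## Learnings heading."""
    items, in_section = [], False
    for line in path.read_text().splitlines():
        if line.startswith("## "):
            in_section = line.strip() == "## Learnings"
        elif in_section and line.startswith("- "):
            items.append(line[2:].strip())
    return items

if __name__ == "__main__":
    items = learnings()
    print(f"{len(items)} learnings (cap: {MAX_ITEMS})")
    if len(items) > MAX_ITEMS:
        print("Over the cap - merge or drop something before adding more.")
    seen = {}
    for item in items:
        key = " ".join(item.lower().split()[:3])  # crude duplicate signal
        if key in seen:
            print(f"Possible duplicate: {seen[key]!r} / {item!r}")
        seen.setdefault(key, item)
```

Anything it flags is a candidate for rules 2 and 3.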

The agent itself can help curate. At the end of a productive session:

“Capture what we learned about the webhook integration to CLAUDE.md. Check for duplicates first.”

It will add the new insight and often notice related items that can be merged or removed.

The Compounding Effect

After three months on a project with a maintained CLAUDE.md, the difference is stark. The agent:

  • Knows which database to use for which command
  • Remembers the exact test invocation that works
  • Avoids the deployment mistake it made in week one
  • Uses the project’s preferred patterns without being told

It’s not intelligence - it’s just reading a file. But the effect is an agent that feels like a team member who’s been on the project for months, not a contractor starting fresh every morning.

Practical Workflow

During a session: When you solve something tricky together, flag it mentally. After the fix is confirmed working:

Add to Learnings: Prisma Accelerate has 5MB response limit - use select not include

End of session: If the session was productive, ask for a learning capture:

Review this session and add any non-obvious findings to CLAUDE.md Learnings.
Only add if genuinely useful for future sessions.

Monthly: Skim the Learnings section. Delete anything that:

  • References fixed bugs
  • Duplicates other items
  • Hasn’t actually been needed again
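
The monthly pass doesn't have to rely on memory either. Since CLAUDE.md is tracked in git, a small script can surface entries that haven't been edited in a while - a minimal sketch, again assuming the `## Learnings` heading, with a 60-day threshold that's a prompt to review, not a verdict:

```python
#!/usr/bin/env python3
"""Flag Learnings entries in CLAUDE.md that haven't been edited recently.

Assumes CLAUDE.md is tracked in git and the section is headed "## Learnings".
"Stale" just means untouched for STALE_DAYS - review it, don't auto-delete.
"""
import subprocess
import time
from datetime import date

STALE_DAYS = 60

def blame_lines(path="CLAUDE.md"):
    """Yield (author_time, content) pairs via `git blame --line-porcelain`."""
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    author_time = None
    for raw in out.splitlines():
        if raw.startswith("author-time "):
            author_time = int(raw.split()[1])
        elif raw.startswith("\t"):  # the content line itself
            yield author_time, raw[1:]

if __name__ == "__main__":
    now = time.time()
    in_section = False
    for ts, content in blame_lines():
        if content.startswith("## "):
            in_section = content.strip() == "## Learnings"
            continue
        if in_section and content.startswith("- "):
            age = (now - ts) / 86400
            if age > STALE_DAYS:
                print(f"{date.fromtimestamp(ts)} ({age:.0f}d): {content[2:]}")
```

`--line-porcelain` repeats the commit metadata for every line, which makes the per-line timestamp easy to read without fragile parsing of the default blame output.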

What This Isn’t

This isn’t a replacement for documentation. Complex architectural decisions, API references, deployment procedures - those belong in proper docs that humans read too.

CLAUDE.md learnings are specifically for agent-to-agent knowledge transfer. The format is optimized for LLM consumption: terse, declarative, no context needed.

It’s also not a crutch for bad tooling. If your agent keeps forgetting how to run tests, maybe your test command is too complicated. Fix the root cause when possible; document the workaround when necessary.

Conclusion

  • AI coding agents lose context between sessions - every session starts fresh
  • A curated Learnings section in CLAUDE.md acts as persistent memory
  • Include: project-specific gotchas, non-obvious workflows, time-wasting bugs
  • Exclude: generic knowledge, one-time issues, anything in docs
  • Cap at 30 items, remove outdated entries, merge duplicates
  • The agent can help maintain its own memory with human approval
  • Compound effect: after months, the agent “knows” your codebase’s quirks

The effort is minimal - maybe 2 minutes per session when something noteworthy happens. The payoff is an agent that stops making the same mistakes and starts feeling like it actually learns.


Related: Debugging Random Reboots with Claude Code - See Claude Code’s systematic debugging approach in action.