How We Make Claude Remember: Learnings Over Skills
Skills don’t reliably auto-invoke. We built a three-layer system: searchable learnings files, a curation skill, and a post-commit hook that reminds you to document.
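To make the third layer concrete, here is a minimal sketch of such a post-commit hook, assuming it lives at `.git/hooks/post-commit` and that learnings are collected under a hypothetical `docs/learnings/` directory; the article's actual paths and wording will differ:

```python
#!/usr/bin/env python3
# .git/hooks/post-commit -- nudge yourself to document learnings.
# Sketch only: the learnings directory below is an assumption, not the article's.
import subprocess

LEARNINGS_DIR = "docs/learnings/"  # hypothetical location

# Files touched by the commit that just landed.
changed = subprocess.run(
    ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# If the commit changed files but no learnings file, print a reminder.
if changed and not any(path.startswith(LEARNINGS_DIR) for path in changed):
    print(f"Reminder: did this commit teach you anything worth adding to {LEARNINGS_DIR}?")
```

The hook only nudges; it never blocks the commit, so it stays cheap to ignore when a commit genuinely taught you nothing.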
An entire infrastructure layer (CSS, JS, frameworks) is becoming optional for a growing class of consumers. Here’s how to serve markdown via HTTP content negotiation.
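As a sketch of the idea rather than the article's implementation, a tiny standard-library server can branch on the Accept header; `post.md` and `post.html` below are placeholder files:

```python
#!/usr/bin/env python3
# Minimal content-negotiation sketch: browsers get HTML, clients that ask for
# markdown get the raw .md. File names are placeholders, not the article's.
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

class NegotiatingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        accept = self.headers.get("Accept", "")
        if "text/markdown" in accept:
            body = Path("post.md").read_bytes()       # hypothetical markdown source
            content_type = "text/markdown; charset=utf-8"
        else:
            body = Path("post.html").read_bytes()     # pre-rendered HTML fallback
            content_type = "text/html; charset=utf-8"
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Vary", "Accept")            # caches must key on Accept
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), NegotiatingHandler).serve_forever()
```

A request like `curl -H 'Accept: text/markdown' http://127.0.0.1:8000/` then gets the raw markdown, while a browser keeps getting HTML.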
Your competitor uses Claude or GPT to move faster. If you don’t, you fall behind. That’s the trap.
The Best Agent Architecture Is Already in Your Terminal
My project’s CLAUDE.md file had grown to 55KB: 242 learnings crammed into one massive file. The problem? Claude prepends this file to every single prompt, so a 55KB context file means less room for thinking and acting. Sessions hit context limits faster and compaction happens sooner. I noticed the degradation: sessions became noticeably shorter, context compaction triggered more frequently, and the agent seemed to lose track of longer conversations. ...
The iteration trap is really a clarity trap. You iterate because you don’t know what you want.
How an 87-line inline script became a reusable GitHub Action for AI-generated changelog summaries posted to Slack.
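Whatever the action around it looks like, it ends in one small step: posting the summary. A minimal sketch of just that step, assuming the already-generated summary is piped in on stdin and a Slack incoming webhook URL is provided via a hypothetical SLACK_WEBHOOK_URL secret:

```python
#!/usr/bin/env python3
# Core step such an action might run: post a pre-generated summary to Slack.
# SLACK_WEBHOOK_URL and the stdin contract are assumptions for illustration.
import json
import os
import sys
import urllib.request

def post_to_slack(summary: str) -> None:
    webhook_url = os.environ["SLACK_WEBHOOK_URL"]  # supplied as a repo secret
    payload = json.dumps({"text": summary}).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # Slack's incoming webhooks reply "ok" on success

if __name__ == "__main__":
    post_to_slack(sys.stdin.read().strip())  # summary is piped in by the workflow
```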