
Comments (28)

  • jackfranklyn
    The context rot concern is real, but I've found it's more about structure than size. My CLAUDE.md is ~500 lines, but it's organised into quick reference tables, critical rules (the stuff that MUST be followed), and project architecture notes. The key is frontloading the high-signal stuff.

    What's worked for me: having separate sections for "things Claude needs every session" vs "reference material it can look up when needed". The former stays concise. The latter can grow. (Rough skeleton below.)

    The anti-pattern I see is people treating it like a growing todo list or dumping every correction in there. That's when you get rot. If a correction matters, it should probably become a linting rule or a test anyway.
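    Roughly the shape I mean (a sketch; the section names, commands, and paths are mine, not anything standard):

        # CLAUDE.md

        ## Critical rules (needed every session)
        - Run `make test` before calling a task done. (example command)
        - Never hand-edit files under gen/. (example path)

        ## Quick reference
        | Command   | Purpose            |
        |-----------|--------------------|
        | make test | run the test suite |
        | make lint | run the linters    |

        ## Architecture notes (reference, look up when needed)
        - One directory per service under services/.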
  • bonsai_spool
    Are you seeing a benefit above doing this in the prompt / project structure?

    Currently, I have Claude set up a directory with CLAUDE.md, a roadmap, a frequently used commands guide, detailed by-phase plans, and a session log with per-session insights and next steps.

    After each phase is done, I have it update the documents and ask what could have been done to make the session easier; often the answer is clearer instructions, which eventually leads to topic-specific documents or a new Claude Skill.

    (edit) These reflection tasks are spelled out in the CLAUDE.md and in the docs directory, so I don't have to type them. Each new session, I paste a guide for how Claude should access the information from the last session.
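    For reference, the layout looks roughly like this (all names are mine; adapt to your project):

        CLAUDE.md
        docs/
          roadmap.md
          commands.md      # frequently used commands
          plans/
            phase-1.md
            phase-2.md
          session-log.md   # per-session insights and next steps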
  • vemv
    I won't lie, this sounds like a recipe for context rot. LLMs degrade as the context / prompt size grows. For that reason I don't even use a CLAUDE.md at all.

    There are very few bits that I do need to routinely repeat, because those are captured by linters/tests, or prevented by subdividing the tasks into small-enough chunks.

    Maybe at times I wish I could quickly add some frequently used text to prompts (e.g. "iterate using `make test TEST=foo`"), but otherwise I don't want to delegate context/prompt building to an AI - it would quickly snowball.
  • AlexCoventry
    I like the idea, but this [1] is fanciful:

        # Check for POSITIVE patterns (new in v3)
        elif echo "$PROMPT" | grep -qiE "perfect!|exactly right|that's exactly|that's what I wanted|great approach|keep doing this|love it|excellent|nailed it"; then

    [1] https://github.com/BayramAnnakov/claude-reflect/blob/main/sc...
  • gingersnap
    Lately I've been thinking the opposite of this. I use claude code extensively, and lately it does all the coding for me. I've been asking it to go through claude.md after making changes to see if it needs to be updated.

    I'm now thinking that I will let claude code write code, but never touch claude.md. It needs to stay as short as possible, and I need to have full control over it. As I lead the direction of the development, claude.md is my primary way of influencing it, in every single session.
  • marioftsbr
    I like the idea, but usually when something goes wrong I just "choose violence", so my conversation goes like "this approach is stupid because X, try harder", followed by a "Finally!" when it works. Also, add a trigger for "You are absolutely right...": that's the most common pattern I see when it screws up.
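    Something like this would cover my phrasing (an untested sketch of a pattern check in the style of the script linked above; the extra phrases and labels are made up, and catching Claude's own "You are absolutely right..." would need the assistant output rather than the user prompt):

        #!/bin/sh
        # Sketch: classify a user prompt as a correction or a hard-won win.
        # "$1" is the prompt text; the labels are illustrative only.
        PROMPT="$1"
        if echo "$PROMPT" | grep -qiE "this approach is stupid|try harder|that's wrong"; then
          echo "correction"
        elif echo "$PROMPT" | grep -qiE "finally!|it works now"; then
          echo "hard-won-success"
        fi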
  • ddltodata
    I recently learned about the hooks and skills features. This is cool.
  • erispoe
    I use the same motion to iterate on linter config and hooks (both claude code hooks and git hooks). So guardrails are actually enforced and iterated upon and do not clutter the context.
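    For example, a pre-commit hook is a one-time setup, and after that the rule is enforced on every commit instead of taking up prompt space (a minimal sketch; `make lint` stands in for whatever your linter invocation is):

        #!/bin/sh
        # .git/hooks/pre-commit: block the commit if the linter fails,
        # rather than repeating "run the linter" in CLAUDE.md
        make lint || {
          echo "lint failed; fix before committing" >&2
          exit 1
        }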
  • solarkraft
    I’ve been wondering why this isn’t more of a thing (knowledge extraction from the conversation in general). The big providers must already be training on this stuff anyway, no?
  • Bayram
    Released v2 that addresses some of the issues reported in comments + Windows compatibility.
  • guilhermecgs
    I would love to see this on Cursor as well.
  • batirch
    Nice to see you on HN, Bayram aga.