Comments (218)
- jedberg: The way I write code with AI is that I start with a project.md file, where I describe what I want done. I then ask it to make a plan.md file from that project.md to describe the changes it will make (or what it will create, if greenfield).

I then iterate on that plan.md with the AI until it's what I want. I then ask it to make a detailed todo list from the plan.md and attach it to the end of plan.md.

Once I'm fully satisfied, I tell it to execute the todo list at the end of plan.md, don't do anything else, don't ask me any questions, and work until it's complete.

I then commit the project.md and plan.md along with the code.

So my back and forth on getting the plan.md correct isn't in the logs, but that is much like intermediate commits before a merge/squash. The plan.md is basically the artifact an AI or another engineer can use to figure out what happened and repeat the process.

The main reason I do this is so that when the models get a lot better in a year, I can go back and ask them to modify plan.md based on project.md and the existing code, on the assumption it might find its own mistakes.
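For a rough idea of the shape (a made-up example; the real files are project-specific), a plan.md ends up something like:

    # plan.md (generated from project.md)

    ## Changes
    1. Extract the CSV import logic into its own module.
    2. Add input validation with clear error messages.
    3. Cover both with unit tests.

    ## Todo
    - [ ] Create importer module
    - [ ] Add validation + error types
    - [ ] Write tests; all must pass before done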
- 827a: IMO: This might be a contrarian opinion, but I don't think so. It's much the same problem as asking, for example, whether every single line you write, or every function, becomes a commit. The answer to this question of granularity is, like anything, that you have to think of the audience: who is served by persisting these sessions? I would suspect there is little reason why future engineers, or future LLMs, would need access to them; they likely contain a significant amount of noise, incorrect implementations, and red herrings. The product of the session is what matters.

I do think there's more value in ensuring that the initial spec, or the "first prompt" (which IME is usually much bigger and tries to get 80% of the way there), is stored. And maybe part of the product is an LLM summary of that spec, the changes we made to the spec within the session, and a summary of what was built. But... that could be the commit message? Or just a markdown file. Or in Notion or whatever.
- dang: I floated that idea a week ago: https://news.ycombinator.com/item?id=47096202, although I used the word "prompts", which users pointed out was obsolete. "Session" seems better for now.

The objections I heard, which seemed solid, are (1) there's no single input to the AI (i.e. no single session or prompt) from which such a project is generated; (2) the back-and-forth between human and AI isn't exactly like working with a compiler (the loop of source code -> object code) - it's also like a conversation between two engineers [1]. In the former case, you can make the source code into an artifact and treat that as "the project", but you can't really do that in the latter case. And (3) even if you could, the resulting artifact would be so noisy and complicated that saving it as part of the project wouldn't add much value.

At the same time, people have been submitting so many Show HNs of generated projects, often with nothing more than a generated repo with a generated readme. We need a better way of processing these, because treating them like old-fashioned Show HNs is overwhelming the system with noise right now [2].

I don't want to exclude these projects, because (1) some of them are good, (2) there's nothing wrong with more people being able to create and share things, (3) it's foolish to fight the future, and (4) there's no obvious way to exclude them anyhow.

But the status quo isn't great, because these projects, at the moment, are mostly not that interesting. What's needed is some kind of support to make them more interesting.

So, community: what should we do?

[1] This point came from seldrige at https://news.ycombinator.com/item?id=47096903 and https://news.ycombinator.com/item?id=47108653. YoumuChan makes a similar point at https://news.ycombinator.com/item?id=47213296, comparing it to Google search history. The analogy is different but the issue (signal/noise ratio) is the same.

[2] Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)
- rfw300: Why should it be? The agent session is a messy intermediate output, not an artifact that should be part of the final product. If the "why" of a code change is important, have your agent write a commit message or a documentation file that is polished and intended for consumption.
- onion2k: Conceptually this is very similar to the question of whether or not you should squash your commits. To the point that it's really the same question.

If you think you should squash commits, then you're only really interested in the final code change. The history of how the dev got there can go in the bin.

If you don't think you should squash commits, then you're interested in being able to look back at the journey that got the dev to the final code change.

Both approaches are valid for different reasons, but they're a source of long and furious debate on every team I've been on. Keeping a history of your AI sessions alongside the code could be useful for debugging (less code debugging, more thought-process debugging), but the 'prefer squash' developers usually prefer to look at the existing code rather than the history of changes to steer it back on course, so why would they start looking at AI sessions if they don't look at commits?

All that said, your AI's memory could easily be stored and managed somewhere separate from the repo history, and in a way that makes it more easily accessible to the LLM you choose, so probably not.
- weli: I agree so much
- D-Machine: Obviously yes; at least, if not the prompts in the session, then some simple/automated distillation of those prompts. Code generated by AI is already clearly not going to be reviewed as carefully as code produced by humans, and intentions/assumptions will only be documented in AI-generated comments to some limited degree, completely contingent on the prompt(s).

Otherwise, when fixing a bug, you just risk starting from scratch and wasting time using the same prompts and/or assumptions that led to the issue in the first place.

Much of the reason code review was/is worth the time is that it can teach people to improve and prevent future mistakes. Code review is not really about "correctness", beyond basic issues, because subtle logic errors are in general very hard to spot; that is covered by testing (or, unfortunately, deployment surprises).

With AI, at least as it is currently implemented, there is no learning as such, so this removes much of the value of code review. But if the goal is to prevent future mistakes, having some info about the prompts that led to the code at least brings some value back to the review process.

EDIT: Also, from a business standpoint, you still need to select for competent/incompetent prompters/AI users. It is hard to do so when you have no evidence of what the session looked like. Also, how can you teach juniors to improve their vibe-coding if you can't see anything about their sessions?
- kzahel: I would love to be able to share all my sessions automatically. But I would want to share a carefully PII/secrets-redacted session. I added a "session sharing" feature to my agent wrapper that just grabs innerHTML and uploads to Cloudflare, so I can share how I produced/vibe-coded an entire project from start to finish. For example: https://github.com/kzahel/PearSync/blob/main/sessions/sessio...

I think it's valuable to share that, so people who are interested can see how you interact with agents. Sharing raw JSONL is probably a waste; it contains too many absolute paths and too much potential for sharing unintentionally.

https://github.com/peteromallet/dataclaw?tab=readme-ov-file#... is one project I saw that makes an attempt to remove PII/secrets. But I certainly wouldn't share all my sessions right now; I just don't know what secrets accidentally got into them.
- vtemian: Git was designed for humans. Commits, branches, and the entire model work really well for human-to-human collaboration, but it starts to be too much for agent-to-human interactions.

Sharing the entire session in a human-readable way, offering a rich experience that lets other humans understand, is way better than having git annotations. That's why we built https://github.com/wunderlabs-dev/claudebin.com, a free and open-source Claude Code session-sharing tool which allows other humans to better understand decisions.

Those sessions can be shared in a PR (https://github.com/vtemian/blog.vtemian.com/pull/21), embedded (https://blog.vtemian.com/post/vibe-infer/), or just shared with other humans.
- jillesvangurp: I think that's covered by the YAGNI rule. It has very little value, and what value it has drops off rapidly as you commit more code. For some types of software you might want to store some of it for compliance/auditing reasons, but beyond that, I don't see what you would use it for.
- tototrains: I considered this and even built a Claude Code extension to bring history/chats into the project folder. Not once have I found it useful: if the intention isn't clear from the code and/or concise docs, the code is bad and needs to be polished.

Well-written code, written with intention, is instantly interpretable by an LLM. Sending a developer or LLM down a rabbit hole of drafts is a waste of cognition and context.
- Lerc: I would say not, because it would lead some to think that what was said to the model represented what output was desired. While there is quite a bit of correlation between describing what you want and the output you receive, the nature of models as they stand means you are not asking for what you want; you are crafting the text that elicits the response you want. That distinction is important, and it is model-specific. Without keeping an archive of the entire model used to generate the output, the conversation can be very misleading.

Conversations may also be very non-linear. You can take a path attempting something, roll back to a fork in the conversation, and take a different path using what you have learned from the model's output. I think trying to interpret someone else's branching flow would be more likely to create an inaccurate impression than understanding.
- YoumuChan: Should my Google search history be part of the commit? To that question my answer is no.
- abustamam: I don't think it should be. I think a distilled summary of what the agent did should be committed. This requires some dev discipline. But for example:

Make a button that does X when clicked.
Agent makes the button.
I tell it to make the button red.
Agent makes it red.
I test it; it is missing an edge case. I tell it to fix it.
It fixes it.
I don't like where the button is. I tell it to put it in the sidebar.
It does that.

I can go on and on, but we don't need to know all those intermediaries. We just need to know: "Red button that does X by Y mechanism is in the sidebar. Tests that include edge cases here. All tests passing. 2026-03-01." And that document is persisted.

If later the button gets deleted or moved again or something, we can instruct the agent to say why: "Button deleted because not used and was noisy. 2026-03-02."

This can be made trivial via skills, and I find it a good way to understand a bit more deeply than commit messages would allow me to. Of course, we can also just write (or instruct agents to write) better PRs, but AFAICT there's no easy way to know which PR introduced or deleted the button unless you spelunk in git blame.
- lionkor: Some of the best engineers I've seen use commit messages to explain their intent, sometimes even in many sentences below the summary line.

I bet, without trying to be snarky, that most AI users don't even know you can commit with an editor instead of -m "message" and write more detail.

It's good that AI fans are finding out that commits are important. Now don't reinvent the wheel; just spend a couple of minutes writing each commit message. You'll thank yourself later.
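For anyone who hasn't tried it: plain git commit, with no -m, opens your configured editor, and you can write a subject line plus a real body, something like this made-up example:

    Cap retry backoff at 30s

    The exponential backoff grew without bound, so one flaky
    dependency could stall the worker queue for minutes. A 30s cap
    keeps retries responsive without hammering the dependency.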
- raincole: I hope people start doing that. Not that it has any practical use for the repo itself, but if everyone did it, it'd probably make it much easier for open-weight models to catch up to the proprietary ones. It'd be like a huge crowdsourced project to collect proprietary models' output for future training.
- gingersnap: My instinct is to say that I don't want the session as part of the commit. For me that is like a Slack thread discussing the new feature, and that is not something I would commit. The split shouldn't be "is this done with a machine?" => commit; I think the split for AI should be the same as before. Is it code, or changes to code? Then it should be included. Is it discussion, going back and forth? That is not committed now. On the other hand, if you make a plan that is then implemented, I actually do think it makes sense to save the plan, either as a commit or saved back to the issue.
- brendanmc6: A few things really leveled up both my software quality and my productivity in the last few months. It wasn't session history, memory files, context management, or any of that.

1. Writing a spec with clear acceptance criteria.

2. Assigning IDs to my acceptance criteria. Sounds tedious, but actually the idea wasn't mine; at some point an agent went and did it without me asking. The references proved so useful for guiding my review that I formalized the process (and switched from .md to .yaml to make it easier). See the sketch below.

3. Giving my agents a source of truth to share implementation progress, so they can plan their own tasks and review more effectively.

Of course, I can't help myself: I had to formalize it into a spec standard and a toolkit. Gonna open source it all soon, but I really want feedback before I go too far down the rabbit hole: https://acai.sh
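To give a flavor of the ID scheme (a made-up example; the field names are just my current convention, not a standard):

    feature: password-reset
    criteria:
      - id: AC-1
        given: a user with a verified email
        when: they request a reset
        then: a single-use token is emailed, expiring in 15 minutes
      - id: AC-2
        given: an expired token
        when: it is submitted
        then: the reset is rejected with a clear error

Review then becomes "show me the diff and the tests for AC-2" instead of re-reading the whole spec.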
- bloomca: I don't think it's worth including the session; it would bloat the context too much anyway.

However, I do think that a higher-level description of every notable feature should be documented, along with the general implementation details. I use this approach for my side projects and it works fairly well.

The biggest question is whether it will scale. I suspect not, and I also suspect it is probably better to include nothing than a poor/disjointed/sporadic documentation of the sessions.
- D-Machine: An important consideration somewhat missing from the discussion in this thread: if we don't carefully document AI-assisted coding sessions, how can we ever hope to improve our use of AI coding tools? This applies both to future AI tools and to experts, and experts instructing novices.

To some degree, the lack of documentation of AI sessions is also at the core of much of the skepticism toward the value of AI coding in general: there are so many claims of successes/failures, but only a vanishingly small amount of actual detailed receipts.

Automating the documentation of some aspects of the sessions (skills + prompts, at least) is something both AI skeptics and proponents ought to be able to agree on.

EDIT: Heck, if you also automate documenting the time spent prompting and waiting for answers and/or code-gen, this would go a long way toward providing really concrete evidence for/against the various claims of productivity gains.
- brainlounge: The more fundamental question is: is there information in the AI-coding session that should be preserved? Only if the answer is "yes" does the next question arise: where do we store that data? Git is only one possible location.

I think there is very valuable information in session logs, like the prompts, the usage statistics at the end of the session, which model was used, etc. But git history and commit messages should focus on the outcome of the work, not on the process itself. This is why the whole issue discussion that precedes work in git is also typically kept separately, in tickets. Not in git itself, but close to it.

There are platforms like tulpal.com which move the whole local agent-supported process to the server and therefore have much better after-the-fact observability into what happened.
- ChicagoDave: The last 5 sessions. Beyond that, I archive them outside the repo. But I do save them for review and summaries.
- jumploops: I've been experimenting with a few ways to keep the "historical context" of the codebase relevant to future agent sessions.

First, I tried using simple inline comments, but the agents happily (and silently) removed them, even when prompted not to.

The next attempt was to have a parallel markdown file for every code file. This worked OK, but suffered from a few issues:

1. Understanding context beyond the current session

2. Tracking related files/invocations

3. The cold-start problem on an existing codebase

To solve 1 and 3, I built a simple "doc agent" that does a poor man's tree traversal of the codebase, noting any unknowns/TODOs, and running until "done."

To solve 2, I explored using the AST directly, but this made the human aspect of the codebase even less pronounced (not to mention a variety of complex edge cases), and I found the "doc agent" approach good enough for outlining related files/uses.

To improve the "doc agent" cold-start flow, I also added a folder-level spec/markdown file, which in retrospect seems obvious.

The main benefit of this system is that when the agent is working, it not only has to change the source code, it has to reckon with the explanation/rationale behind said source code. I haven't done any rigorous testing, but in my anecdotal experience, the models make fewer mistakes and cause fewer regressions overall.

I'm currently toying with a more formal way to mark something as a human decision vs. an agent decision (i.e. "this is very important" vs. "this was just the path of least resistance"), but the current approach seems to work well enough.

If anyone is curious what this looks like, I ran the cold start on OpenAI's Codex repo [0].

[0] https://github.com/jumploops/codex/blob/file-specs/codex-rs/...
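For a rough idea, a per-file spec ends up shaped something like this (simplified, and the headings and file names are made up for illustration):

    # session.rs

    ## Purpose
    Issue and validate session tokens.

    ## Related
    - middleware.rs (validation call site)

    ## Decisions
    - Sliding expiry chosen by a human; the agent defaulted to fixed.

    ## TODO
    - Signing-key rotation is unhandled.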
- dolebirchwood: I drop a lot of F-bombs and other unpleasantries when I talk to the robots, so I'd rather not.
- causal: If a car is used to get you somewhere, should you put the exhaust in bags to bring with you?
- hakanderyal: I created a system I call 'devlog'. The agent summarizes what it did and how in a concise file, and it gets committed along with the first prompt and the plan file, if any. Later, due to noise and volume, I started saving those in a database; nowadays I add only the devlog id to the commit.

Now, whenever I need to reason about what the agent did and why, the info is linked and ready on demand. If needed, the session is also saved. It helps a lot.
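Roughly, an entry looks like this (a made-up example; the shape is just my convention):

    devlog: 1482
    task: rate-limit the webhook endpoint
    prompt: <first prompt, verbatim>
    plan: plans/webhook-rate-limit.md
    summary: added token-bucket middleware; per-API-key buckets
      chosen over per-IP after the agent flagged shared-NAT issues

The commit itself then just carries a trailer like "Devlog: 1482".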
- micw: IMO it depends a bit, but in most cases: no!

If you do proper software development (planning, spec, task breakdown, test-case spec, implementation, unit test, acceptance test, ...), implementation is just a single step and the generated artifact is the source code. That's what needs to be checked in. All the other artifacts are usually stored elsewhere.

If you do spec and planning with AI, you should also commit the outcome, and maybe also the prompt and session (like meeting notes from a spec meeting). But it's a different artifact then.

But if you skip all the steps and put your idea directly to a coding agent in the hope that the result is final, tested, production-ready software, you should absolutely commit the whole chat session (or at least make the AI create a summary of it).
- mandel_xI’ve been thinking about a simple problem: We’re increasingly merging AI-assisted code into production, but we rarely preserve the thing that actually produced it — the session. Six months later, when debugging or reviewing history, the only artifact left is the diff. So I built git-memento. It attaches AI session transcripts to commits using Git notes.
- xhcuvuvyc: No? For the same reason I don't want to work 8 hours a day with the boss looking over my shoulder.
- alainrk: My complete reasoning, notes, and errors have never been part of the commit. I don't see a valid reason why the raw conversation must be included. Rather, I have hooks (or just "manually" invoked scripts) that process all of it and update the relevant documentation I've been putting under docs/.
- rhgraysonii: I think the decisions it made along the way are worth tracking. And it has some useful side effects with regard to actually going through the programming and architecture process. I made a tool that really helps with this and finds a pretty portable middle ground; it can be used by one person or a team, it's flexible: https://deciduous.dev/
- willbeddow: Increasingly, I'd like the code to live alongside a journal and research log. My workflow right now is spending most of my time in Obsidian writing design docs for features, and then manually managing Claude sessions that I paste them back and forth into. I have a page in Obsidian for each ongoing session, where I record my prompts, forked paths, thoughts on future directions, etc. It seems natural that at some point all of this (code, journal, LLM context) will be unified.
- umairnadeem123: IMO this is solving the wrong problem. The session log is just noise; it's like attaching your Google search history to a Stack Overflow answer to "prove" you did the research. Nobody wants to read 500 lines of an agent going back and forth debugging a race condition.

The actual problem is that AI produces MORE code, not better code, and most people using it aren't reviewing what comes out. If you understood the code well enough to review it properly, you wouldn't need the session log. And if you didn't understand it, the session log won't help you either, because you'll just see the agent confidently explaining its own mistakes.

> have your agent write a commit message or a documentation file that is polished and intended for consumption

This is the right take. Code review and commit messages matter more now than they ever did BECAUSE there's so much more code being generated. Adding another artifact nobody reads doesn't fix the underlying issue, which is that people skip the "understand what was built" step entirely.
- tezza: I put a link to the LLM session at the end of the commit, and prefix it with POH: if I wrote it by hand. POH = Plain Old Human. Easy to achieve.

Why NOT include a link back? Why deprive yourself of information?
- otar: In an ideal world, a specification file would be committed to the repository and then linked to the PR/commit. But that slows you down, and then it's no longer vibe coding?

Soon only the specification will matter: code can be generated from it again and again.
- kkarpkkarp: For my own projects in private repos, I would benefit from exporting the session. For example, if I need to return to the task, it would be great to give it as context.

For my work as one developer in a team, no. The way I prompt is my asset, an advantage over others in the team who always complain about AI not being able to provide correct solutions, and it secures my career.
- crossroadsguy: Goodness, no! Sometimes I literally SHOUT at these agents/chats and often stoop to using cuss words, which I am not proud of, but surprisingly it seems to work here and there. As real as that is, I'd not want it on record in a commit.
- fladrif: I think this is a lot of kicking the can down the road on not understanding what code the AI is writing. Once you give up on understanding the code that is written, there is no going back. You can add all the helper commit messages, architecture designs, and plans you like, but then you introduce the problem of having to read all of those once you run into an issue. We've left readability by the wayside at the altar of "writability".

The paradigm shift, which is a shift back, is to embrace the fact that you have to slow down and understand all the code the AI is writing.
- visarga: Yes, it should remain part of the commit, and the work plan too, including judgements/reviews done with other agents. The chat log encodes user intent in raw form, which justifies the tasks, which in turn justify the code and its tests. Bottom-up, we say the tests satisfy the code, which satisfies the plan, and finally the user intent. You can play the "satisfied/justified" game across the whole stack.

I only log my own user messages, not AI responses, in a chat_log.md file, which is maintained by a user-message hook in the repo.
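A minimal sketch of such a hook, assuming your agent pipes the submitted message to the hook as JSON on stdin (the .prompt field name is an assumption; check your tool's hook docs):

    #!/bin/sh
    # append each user message to chat_log.md with a UTC timestamp
    printf '\n## %s\n\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> chat_log.md
    jq -r '.prompt' >> chat_log.md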
- reflectt: The session-capture problem is harder than it looks, because you need to capture intent, not steps. A coding session has a lot of 'left turn, dead end, backtrack' noise that buries the decision that actually mattered. Committing the full session is like committing compiler output: technically complete, practically unreadable.

We've been experimenting with structured post-task reflections instead: after completing significant work, capture what you tried, what failed, what you'd do differently, and the actual decision reasoning. A few hundred tokens instead of tens of thousands. Commits carry a reflection pointer rather than an embedded session.

The result is more useful than raw logs. Future engineers (or future AI sessions) can understand intent without replaying the whole conversation. It's closer to how good commit messages work: not 'here's what changed' but 'here's why'.

Dang's point about there being no single session is also real. Our biggest tasks span multiple sessions and multiple contributors. 'Capture the session' doesn't compose. 'Capture the decision' does.
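Concretely, a reflection for us is on the order of (illustrative; the exact fields are still in flux):

    task: migrate sessions table to a composite key
    tried: online migration with dual writes
    failed: backfill deadlocked against the analytics job
    did_instead: batched the backfill off-peak, then cut over
    decision: composite key over surrogate id, since reads are join-heavy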
- phyzix5761: Have the AI explain the reasoning behind the PR. I don't think people really care about your step-by-step process, but reviewers might care about your approach, design choices, caveats, and trade-offs. That context could clarify the problem, why the solution was chosen, key assumptions, potential risks, and future work.
- segmondy: It's already bad enough that people are saying there's too much code to read and review; you want to add sessions to it? Running a session again might not yield the same output: these models are non-deterministic, and models are often changed and upgraded.
- root_axis: This seems wrong, like committing debug logs to the repo. There's also lots of research showing that models regularly produce incorrect trace tokens even with a correct solution, so there's questionable value even from a debugging perspective.
- daemonk: I did this in the beginning and realized I never went back to it. I think we have to learn to embrace the chaos. We can try to place a couple of anchors in the search space by having Claude summarize the code base every once in a while, but I am not sure if even that is necessary. The code it writes is git-versioned and is probably enough to go on.
- ramoz: We think so as well, with emphasis on the "why" for commits (i.e. intent provenance for all decisions). https://github.com/eqtylab/y is just a prototype, built at the Codex hackathon.

The barrier to entry is just including the complete sessions. It gets a little nuanced because of their sheer size, workflows around squash merging and whatnot, and deciding where you actually want to store the sessions. For instance, git notes is intuitive; however, there are complexities around it. A less elegant approach is just to keep all sessions in separate branches.

Beyond this, you could have agents summarize an intuitive data structure explaining why certain commits exist and how the code arrived there. I think this would be a general utility for human and AI code reviewers alike. That is what we built. Cost/utility need to make sense, and research needs to determine whether this is all actually better than proper comments in code.
- daxfohl: I think so. If nothing else, when you deploy and see a bug, you could have a script that revives the sessions behind the last N commits and asks each one, "would your change have caused this?" It probably wouldn't work, or wouldn't be any more efficient than a new debugging agent most of the time, but it might sometimes, and you'd have a fix PR ready before you even answered the pager, plus a postmortem that includes WHY it happened and a prompt to prevent that behavior in the future. And it's cheap, so why not?

Maybe not a permanent part of the commit, but something stored on the side for a few weeks at a time. Or even permanently: it could be useful to go back and ask, "why did you do it that way?", and realize that the reason is no longer relevant, so you can simplify the design without worrying you're breaking something.
- natex84: If the model in use is managed by a third party, can be updated at will, and gives different output each time it is interacted with, what is the main benefit?

If I chat with an agent, give an initial prompt, and it gets "aspect A" (some arbitrary aspect of the expected code) wrong, I'll iterate to get "aspect A" corrected. Other aspects of the output may have exactly matched my (potentially unstated) expectations.

If I feed the initial prompt into the agent at some later date, should I expect exactly "aspect A" to be incorrect again? It seems more likely the result will be different, maybe with some other aspects being "unexpected". Maybe these new problems weren't even discussed in the initial archived chat log, since at that time those aspects happened to be generated in alignment with the original engineer's expectations.
- saratogacx: I've gotten into the habit of having the LLM produce a description of its process and summarize the change. Then I add that, along with the model I used, after my own commit message. It lets me know where I used AI, what it said it did, and what I thought it did.

The entire prompt and process would be fine if my git history were subject to research, but really it is a tool for me, or anyone else who wants to know what happened at a given time.
- burntoutgray: YES! The session becomes the source code.

Back in the dark ages, you'd "cc -S hello.c" to check the assembler source. With time we stopped doing that, and hello.c became the originating artefact. On the same basis, the session becomes the originating artefact.
- DonThomasitos: Everything in git can and must be mergeable when merging branches. After all, git is a collaboration tool, not an undo-redo stack.
- heavyset_go: If you need LLM sessions included to understand or explain commits, you're doing something wrong.

Saving sessions is even more pointless without the full context the LLM uses that is hidden from the user. That's too noisy.
- jes5199: Instead of committing code, we should just save videos of all of the Zoom meetings about the code.
- Jach: In general, no, but sometimes, yes, or at least linked from the commit the same way user stories/issues are. Admittedly the 'sometimes' from my perspective is mostly when there's a need to educate fellow humans about what's possible, or about good prompt techniques and workarounds for the AI being dumb. It can also reveal more of the "x% by AI, y% by human" split, for example by diffing the outputs from the session against the final commits.
- grahar64: If AI could reliably write good code, then you shouldn't even need to commit the code, as the general rule is that you shouldn't commit generated code. Commit the session when you don't need to commit the code.
- genghisjahn: If you can, run several agents. They document their process: trade-offs considered, reasoning, etc. It's not a full log of the session, but a reasonable history of how the code came to be. Commit it with the code. Namespace it however you want.
- rcy: I haven't adopted this yet, but I have a feeling that something like this is the right level of recording the LLM contribution/session: https://blog.bryanl.dev/posts/change-intent-records
- travisgriggs: On our (small) team, we've taken to documenting/disclosing what part(s) of the process an LLM tool played in the proposed changes. We've all agreed that we like this better, both as submitters and reviewers. And though we've discussed why, none of us has pinned down exactly WHY we like this model better.
- darepublic: If a human writes code, should the Jira ticket be part of the commit? I am actually thinking about the potential merits.
- SamDc73: Pre-AI, if I had to include my Google search queries in a commit, I'd be so embarrassed I'd probably never commit code, like, ever.
- hirako2000: What's the value, given that answers are not deterministic?
- danhergir: One of the use cases I see for this tool is helping companies understand the output coming from the LLM black box and the process the employee took to complete a certain task.
- stubbiIsn’t that what entire.io, founded by former GitHub CEO, is doing?
- anishgupta: Isn't a similar thing done by the Entire CLI? The startup that raised a $60M seed recently.
- rclabs: Hell to the no. In between coding sessions, I go off on plenty of sidebars about random topics that help me, the prompter, understand the problem more. Prompts in this way are entirely tied to context (pre-knowledge) that is not available to the LLMs.
- jiveturkey: https://entire.io thinks so
- spion: A summary of the session should be part of the commit message.
- ares623: Maybe git isn't the right tool to track the sessions. Some kind of new Semi-Human Intelligence Tracking tool. It will need a clever and shorter name, though.
- est: Obligatory: git notes. Lots of comments mention this; for those who aren't aware, please check out "Git Notes: Git's coolest, most unloved feature" (2022): https://news.ycombinator.com/item?id=44345334

I think it's a perfect match for this case.
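For the unfamiliar, notes hang metadata off commits without touching them, and sessions can live under their own ref (the ref name here is arbitrary):

    # attach a transcript to the latest commit
    git notes --ref=sessions add -F session.jsonl HEAD

    # read it back later
    git notes --ref=sessions show <commit>

    # notes aren't pushed by default; share the ref explicitly
    git push origin refs/notes/sessions
    git fetch origin refs/notes/sessions:refs/notes/sessions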
- nautilus12: This would just record a lot of me cursing at and calling the AI an idiot.
- globular-toast: Like any discussion about AI, there are two things people are talking about here, and it's not always clear which:

1. Using LLMs as a tool but still very much crafting the software "by hand";

2. Just prompting LLMs, not reading or understanding the source code, and just running the software to verify the output.

A lot of comments here seem to be thinking of 1. But I'm pretty sure the OP is thinking of 2.
- igetspam: Yes. EOM
- mock-possumI’ve had the same thought, but after playing around with it, it just seems like adding noise. I never find myself looking at generated code and wondering “what prompt lead to that?” There’s no point, I won’t get any kind of useful response - I’m better off talking to the developer who committed it, that’s how code review works.
- lsc4719: Proof sketch is not proof.
- tayo42: I feel like publishing the session is like publishing a sketchbook. I don't need all of my mistakes and dumb questions recorded.

If that was important, why are we not already doing things like this? Should I have always been putting my browser history in commits?
- x3n0ph3n3: I include my "plans" and a link to my transcript on all my PRs that include AI-generated code. If nothing else, others on my team can learn from them.
- dboreham: I've thought about this, and I do save the sessions for educational purposes. But what I ended up doing is exactly what I ask developers to do: update the bug report with the analysis, plan, notes, etc. In the case where a single PR fixes one bug, GitHub and Claude tend to prefer that this information go in the PR description. That's OK for me, since it's one click from the bug.
- foamzou: No. A prompt-like document is enough (e.g. skills, AGENTS.md).
- hsuduebc2: I must say that would certainly show some funny conversations in the log.
- raggi: Nope. Someone's going to leak important private data using something like this. Consider:

"I got a bug report from this user: ... bunch of user PII ..."

The LLM will do the right thing with the code, and the developer reviewed the code and didn't see any mention of the original user or the bug report data. Now the notes thing they forgot about goes and makes it all public.