Comments (233)
- fastasucan: It makes my work suck, sadly. Team dynamics also contribute to that, admittedly.

  Last year I was working on implementing a pretty big feature in our codebase. It required a lot of focus to get the business logic right, and at the same time you had to be very creative to make it feasible to run without hogging too many resources.

  When I was nearly done and was working on catching bugs, team members grew tired of waiting and started taking my code from x weeks ago (I have no idea why), feeding it to Claude or whatever, and then coming back with a solution. So instead of finishing my code I had to go through their versions of it. Each one of the proposals had one or more business requirements wrong and several huge bugs. Not one was any closer to a solution than mine was.

  I would have appreciated any contribution to my code, but thinking that it would be so easy to just take my code and finish it by asking Claude was rather insulting.
- humbleharbinger: I'm an engineer at Amazon; we use Kiro (our own harness) with Opus 4.6 underneath. Most of my gripes are with the harness; CC is way better.

  In terms of productivity I'm definitely 2-4x more productive at work, and >10x more productive on my side business. I used to work overtime to deliver my features. Now I work 9-5 and am job hunting on the side while delivering relatively more features.

  I think a lot of people are missing that AI is not just good for writing code. It's good for data analysis and all sorts of other tasks like debugging and deploying. I regularly use it to manage deployment loops (e.g. make a code change, deploy it to gamma, and verify it works by making a sample request and checking the output in CloudWatch logs). I have built features in two weeks that would have taken me a month, just because I'd otherwise have to learn some nitty-gritty technical details that I'd never use again in my life.

  For data analysis I have an internal Glue catalog, so I can just tell it to query data and write a script that analyzes X for me.

  AI, and agents in particular, have been a huge boon for me. I'm really scared about automation, but it also doesn't make sense to me that SWE would be automated before other careers, since SWE itself is necessary to automate the others. I think there are some fundamental limitations on LLMs (without understanding the details too much), but whatever level of intelligence we've currently unlocked is fundamentally going to change the world, and is already changing how SWE looks.
- ecopoesis: I'm a manager at a large consumer website. My team and I have built a harness that uses headless Claudes (running Opus) to do ticket work, respond to and fix PR comments, and fix CI test failures. Our only interaction with the code is writing specs in Jira tickets (which we primarily do via local Claudes) and adding comments to GitHub PRs.

  The speed we can move at is astounding. We're going to finish our backlog next quarter. We're conservatively planning on launching 3x as many features next quarter.

  Claude is far from perfect: it's made us reassess our coding standards, since code is primarily for Claude now, not for humans. So much of what we did was to make code easier for the next dev, and that just doesn't matter anymore.
- greenpizza13: I work at a very prominent AI company. We have access to every tool under the sun. There are various levels of success at all levels: managers, PMs, engineers.

  We have Cursor with essentially unlimited Opus 4.6, and it's fundamentally changed my workflow as a senior engineer. I find I spend much more time designing and testing my software, and development time is almost entirely prompting and reviewing AI changes.

  I'm afraid my coding skills are atrophying (in fact I know they are), but I'm not sure the coding was the part of my job I truly enjoyed. I enjoy thinking at a higher level: architecture, connecting components, focusing on the user experience. But I think using these AI tools is a form of golden handcuffs. If I go work at a startup without the money to pay for these models, I think for the first time in my career I would be less able to successfully code a feature than I was last year.

  So professionally there are pros and cons. My design and architecture skills have greatly improved, as I am spending more time on them.

  Personally, it's so much fun. I've made several side projects I would never have done otherwise. Working with Claude Code on greenfield projects is a blast.
- hdhdhsjsbdhIt has made my job an awful slog, and my personal projects move faster.At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up. It is painful and time consuming, the code base is a mess. In one case I had to merge a feature from one team into the main code base, but the feature was AI coded so it did not obey the API design of the main project. It also included a ton of stuff you don’t need in the first pass - a ton of error checking and hand-rolled parsing, etc, that I had to spend over a week unrolling so that I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad because it took me forever compared to the team who originally churned it out almost instantly. AI tools are not good at this kind of design deconflicting task, so while it’s easy to get the initial concept out the gate almost instantly, you can’t just magically fit it into the bigger codebase without facing the technical debt you’ve generated.In my personal projects, I get to experience a bit of the fun I think others are having. You can very quickly build out new features, explore new ideas, etc. You have to be thoughtful about the design because the codebase can get messy and hard to build on. Often I design the APIs and then have Claude critique them and implement them.I think the future is bleak for people in my spot professionally – not junior, but also not leading the team. I think the middle will be hollowed out and replaced with principals who set direction, coordinate, and execute. A privileged few will be hired and developed to become leaders eventually (or strike gold with their own projects), but everyone in between is in trouble.
- Izkata: I don't use it. I know my mind fairly well, and I know my style of laziness will result in atrophying skills. Better not to risk it.

  One of my co-workers already admitted as much to me around six months ago: he was trying not to use AI for any code generation anymore, but it was really difficult to stop because it was so easy to reach for. Sounded kind of like a drug addiction to me. And I had the impression he only felt comfortable admitting it to me because I don't make it a secret that I don't use it.

  Another co-worker did stop using it to generate code because (if I'm remembering right) he can tell that what it generates is messy for long-term maintenance, even if it does work and even though he's new to React. He still uses it often for asking questions.

  A third (this one a junior) seemed to get dumber over the past year, opening merge requests that didn't solve the problem. In a couple of these cases my manager mentioned either seeing him use AI while they were pairing (it looked good enough, so the problems just slipped by) or seeing hints in the merge request in how the AI names or structures the code.
- INTPenis: I'm always skeptical of new tech. I don't like how AI companies have reserved all memory circuits for X years; that is definitely going to cause problems in society when regular health care sector businesses can't scale or repair their infra. The environmental impact is also a discussion that I am not qualified to get into.

  All I can say for sure is that it is absolutely useful; it has improved my quality of life without a doubt. I stick to the principle that it's here to make my work-life balance better, not to increase output for our owners. And that it has done, so far. I can do things that would have taken me weeks of stressful and hyperfocused work in just hours.

  I use it very carefully, and sparingly, as a helpful tool in my toolbox. I do not let it run every command and look into every system; just focused efforts to generate large amounts of boilerplate code that would require me to have a lot of docs open if I were to do it myself. I definitely don't let it read or write my e-mails, or write any text, because I have always loved writing and will never stop loving it.

  It's here to stay, because I'm not alone in feeling this way about it, so the staunch AI-deniers are just wasting their time. Just like any other tech, it's going to be used against humans, against the already oppressed.

  I definitely recognize that the tech has made some people lose their minds. Managers and product owners are now vibe coding, thinking they can replace all their developers. But their codebase will rot faster than they think.
- wk_end: Around a year ago I started a new position at a very large tech company that I won't name, working on a pre-existing web project there. The codebase isn't terrible (though not very good either, by and large), but it's absolutely massive, often over-engineered, pretty unorthodox, and definitely has some questionable design decisions; even after more than a year of working with it I still feel like a beginner much of the time.

  This year I grudgingly bit the bullet and began using AI tools, and to my dismay they've been a pretty big boon for me in this case. Not just for code generation: they're really good at probing the monolith and answering questions I have about how it works. Before, I'd spend days poring over code before starting work, trying to figure out the right way to build something or where to break in, pinging people over in India or eastern Europe with questions and hoping they'd reply overnight. AI has totally replaced that, and it works shockingly well.

  When I do fall back on it for code generation, it's mostly just to mitigate the tedium of writing boilerplate. The code it produces tends to be pretty poor, both in terms of style and robustness, and I'll usually need to take at least a couple of passes over it to get it up to snuff. I do find this faster than writing everything out by hand in the end, but not by a lot.

  For my personal projects I don't find it adds much, but I do enjoy rubber-ducking with ChatGPT.
- tryauuum: I had a couple of nice moments, like Claude helping me with Rust (which I don't understand) and Claude finding a bug in a Python library I was using. Also some not-so-nice moments: small Rust changes were OK, but with a big one Claude fumbled, and I couldn't really verify that it worked, so I didn't merge the code to master even though it seemingly worked.

  I think it really helps to break the ice, so to say. You no longer feel the tension, the pain of an empty page. You ask Claude to write something, and improving something existing is so much easier mentally.

  I also mostly use Claude as a spell checker / linter for the projects I'm too lazy to install proper tools for. Vim + Claude, what else would you need?

  Luckily my company pays for the subscription; spending personal money on LLMs (especially on US LLMs) would feel strange for some reason. Ideally I want to own an LLM and have it at home, but I am too lazy.
- onlyrealcuzzo: I work at a FAANG. Professionally, I have had almost no luck with it, outside of summarizing design docs or literally just finding something in the code that a simple search might not find, such as: which of this team's code does X?

  I have yet to successfully prompt it and get a working commit. Further, I'll add that I don't personally know any ICs who have successfully used it. There are endless posts of people talking about how they're now 10x more productive and how everyone needs to do X, Y, and Z now; I just don't know any of these people.

  Non-professionally, it's amazing how well it does on a small greenfield task, and I have seen that 10x improvement in velocity. But at work, close to zero so far. The posts I've seen at work typically tend to be teams doing something new / greenfield-ish, or a refactor, so I'm not surprised by their results.
- shmel: I got insanely more productive with Claude Code since Opus 4.5. Perhaps it helps that I work in AI research and keep all my projects in small prototype repos. I imagine that all models are more polished for the AI research workflow, because that's what frontier labs do. But yeah, I don't write code anymore. I don't even read most of it; I just ask Claude questions about the implementation, and sometimes ask it to show me the important bits verbatim.

  Obviously it makes mistakes sometimes, but so do I and everyone I have ever worked with. What scares me is that it makes fewer mistakes overall than I do. Plan mode helps tremendously; I skip it only for small things. Insisting on a strict verification suite is also important (kind of like an auto-research project).
- simonw: The majority of code I've written since November 2025 has been created using agents, as opposed to me typing code into a text editor. More than half of that has been done from my iPhone via Claude Code for web (bad name, great software).

  I'm enjoying myself so much. Projects I've been thinking about for years are now a couple of hours of hacking around. I'm readjusting my mental model of what's possible as a single developer. And I'm finally learning Go!

  The biggest challenge right now is keeping up with the review workload. For low-stakes projects (small single-purpose HTML+JS tools, for example) I'm comfortable not reviewing the code, but if it's software I plan to have other people use, I'm not willing to take that risk. I have a stack of neat prototypes and maybe-production-quality features that I can't ship yet because I've not done that review work.

  I mainly work as an individual or with one other person; I'm not working as part of a larger team.
- QuadrupleA: As a veteran freelance developer, aside from some occasional big wins, I'd say it's been net neutral or even net negative for my productivity. When I review AI-generated code carefully (and if I'm delivering it to clients, I feel that's my responsibility), I always find unnecessary complexity, conceptual errors, performance issues, looming maintainability problems, etc. If I were to let it run free, these would just compound.

  A couple of "win" examples: add in-text links to every term in this paragraph that appears elsewhere on the page, plus corresponding anchors in the relevant page parts. Or: replace any static text on this page with the corresponding dynamic elements from this reference URL.

  "Lose" examples: constant edit-format glitches (not matching the searched text; even the venerable Opus 4.6 regularly screws this up), unnecessary intermediate variables, ridiculously over-cautious exception handling, failing to see opportunities to isolate repeated code into a function, or to use an existing function that exactly implements said N lines of code, etc.
- turlockmike: I stopped writing code a year ago. Claude Code is a multiplier when you know how to use it.

  Treat it like an intern: give it feedback, have it build skills, review every session, make it do unit tests. Red-green-refactor. Spend time up front reviewing the plan. Clearly communicate your intent and the outcomes you want. If you say "do X", it has to guess what you want. If you say "I want this behavior and this behavior, 100% branch unit-tested, adhering to the contributing guidelines and best practices", it will take a few minutes longer, but the quality increases significantly.

  I uninstalled VS Code and built my own dashboard instead to organize my work. I get instant notifications, and a PR kicks off a highly opinionated review utilizing the Claude Code team features.

  If you aren't doing this level of work by now, you will be automated soon. Software engineering is a mostly solved problem at this point; you need to embed your best practices in your agent, keep an eye on it, and refine it over time.
- fandango1: I'm a software engineer at Microsoft, so we have been using GitHub Copilot very regularly. It gives us unlimited Opus credits.

  The good thing is that the work gets done much quicker than before; it's actually a boon for that. The issue is inflated expectations. For example: if a work item would ideally have taken two weeks before AI, it is now expected to be done in something like two days. So we still need to find a sweet spot so that the expectations are not unbelievable.

  MS is a mature place, so they're still working on it and take our feedback seriously. At least that's what I have seen.
- wg0: I foresee that AI blindness at the CEO/CFO level, plus the general hype (from the technical and non-technical press and media) that software engineering is over, will result in a severe talent shortage in 5-7 years, leading to bidding wars for talent and driving salaries up 3x or more.
- parasti: We're using Augment Code heavily on a "full rewrite of a legacy CRM with 30 years of business rules/data" Laravel project with a team size of 4. Augment kind of became impossible to avoid once we realized the new guy was outpacing the rest of us while possessing almost no knowledge of code and working fully in the business-requirements domain: extracting requirements from the customer and passing them to the AI, which was encoding them in tests and implementing them in code.

  I'm using `auggie`, which is their CLI-based agentic tool. (They also have a VS Code integration; that became too slow and hung often the more I used it.) I don't use any prompting tricks. I just kind of steer the agent to the desired outcome by chatting with it, and switch models as needed (Sonnet 4.6 for speed and execution, GPT 5.1 for comprehension and planning).

  My favorite recent interaction with Augment was to have one session write a small API and its specification within the old codebase, then have another session implement the API client entirely from the specification. As I discovered edge cases, I had the first agent document them in the spec and the second agent read the updated spec and adjust the implementation. That worked much, much better than the usual ad hoc back-and-forth directly between me and one agent, and it also produced a concise specification that can be tracked in the repo as documentation for humans and context for future agentic work.
- Ger_Onimo: I'm mostly really enjoying it! While it's not my main job, I've always been a tool builder for the teams I work on, so if I see a place where a little UI or utility would make people's lives easier, I'll usually hack something together in a few hours and evolve it over time if people find it useful. That process is easily 10x faster than before.

  My main work is training text-to-speech models, and the friction of experimenting with model features or ideas has dropped massively. If I want to add a new CFG implementation, or a conditioning vector, 90% of the time Opus can one-shot it. It generally does a good job of making the model, inference, and training changes simultaneously so everything plays nicely. I haven't had any major regressions or missed bugs yet, but we'll see!

  The downside is reviewing shitty PRs where it's clear the engineer doesn't fully understand what they're doing, and just a general attitude of "I dunno, Claude suggested it" that's getting pretty exhausting.
- piker: I am working on a sub-100-KLOC Rust application and can't productively use the agentic workflows to improve it. On the other hand, I have tried them a number of times in greenfield situations with Python and the web stack, and experienced the simultaneous joy and existential dread of others. They can really stand new projects up quickly.

  As a founder, this leaves me with what I describe as the "generation ship" problem. Is it possible that the architecture we have chosen for my project is so far out of the training data that it would be faster to ditch the project and reimplement it from scratch in a Claude-yolo style? So far I'm convinced not, because the code I've seen in somewhat novel circumstances is fairly mid, but it's hard to shake the thought.

  I do find chatting with the models incredibly helpful in all contexts. They are also excellent at configuring services.
- peab: It's been great. I work on a lot of projects that are essentially prototypes, to test out different ideas. It's amazing for this: I can create web apps in a day now, which in the past I would not have been able to create at all, since I spent most of my career on the backend.
- michelbI’m not a professional developer but I can find my way around several languages and deployment systems. I have used Claude to migrate a mediumsized Laravel 5 app to Laravel 11 in about 2-3 days. I would not have dared to touch it otherwise.In my day job I’m currently a PM/operations director at a small company. We don’t have programmers. I have used AI to build about 12 internal tools in the past year. They’re not very big, but provide huge productivity gains. And although I do not fully understand the codebase, I know what is where. Three of these tools I’m now recreating based on our usage and learnings.I have learned a ton about all kinds of development concepts in a ridiculously short timeframe.
- zmj: It's great. I'd guess 80-90% of my code has been produced in Copilot CLI sessions since the beginning of the year. Copilot CLI is worse than Claude Code, but not by a huge amount. This is mostly working in established 100k+ LOC codebases in C# and TypeScript, with a couple of greenfield new projects. I have to write more code by hand in the greenfield projects at their formative stage; LLMs do better following conventions in an existing codebase than being consistent in a new one.

  Important things I've figured out along the way:

  1. Enable the agent to debug and iterate. Whatever you'd do to test and verify after writing your first pass at an implementation, figure out a way for an agent to do it too. For example: every API call is instrumented with OpenTelemetry, and the agent has a local collector to query.

  2. Make scripts or skills to increase the reliability of fallible multi-step processes that need to be repeated often. For example: getting an OAuth token to call some API with the appropriate user scopes for the task.

  3. Continually revise your AGENTS.md. I'll often end a coding session by asking the agent whether there's anything from the session that should be captured there. That adds more than it removes, so every few days I'll compact it by having an agent reword the important stuff for conciseness and get rid of anything obvious from the implementation.
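Point 1 above can be sketched in miniature with plain Python: wrap each call so its arguments, result or error, and timing land in a local log that an agent can read back to verify its own change worked. (The commenter's real setup uses OpenTelemetry and a local collector; the `traced` decorator, `TRACE_LOG` list, and `fetch_user` function here are illustrative stand-ins, not the actual instrumentation.)

```python
import functools
import time

# Stands in for the local telemetry collector the agent queries.
TRACE_LOG = []

def traced(fn):
    """Record every call's name, arguments, outcome, and duration."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        record = {"call": fn.__name__, "args": repr(args)}
        try:
            result = fn(*args, **kwargs)
            record["ok"] = True
            return result
        except Exception as exc:
            record["ok"] = False
            record["error"] = repr(exc)
            raise
        finally:
            # Runs on both success and failure paths.
            record["ms"] = round((time.time() - start) * 1000, 2)
            TRACE_LOG.append(record)
    return wrapper

@traced
def fetch_user(user_id):
    """A hypothetical API call for the sketch."""
    if user_id < 0:
        raise ValueError("bad id")
    return {"id": user_id}

fetch_user(7)
try:
    fetch_user(-1)
except ValueError:
    pass
# TRACE_LOG now holds one success record and one failure record
# that an agent can inspect after making a change.
```

The design point is simply that verification data lives somewhere the agent can query programmatically, rather than only in a human-readable terminal scrollback.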
- ChrisMarshallNY: Define "professional." I write stuff for free. It's definitely "professional grade," and lots of people use the stuff I ship, but I don't earn anything for it.

  I use AI every day, but I don't think it's in the way that people here use it. I use it as a "coding partner" (chat interface). It has accelerated my work 100x. The quality seems OK. I have had to learn to step back and let the LLM write stuff the way that it wants, but if I do that, and make only minimal changes to what it gives me, the results are great.
- Kon5ole: Like many others, I started feeling it had legs during the past few months. Tools and models reached some level where it suddenly started living up to some of the hype. I'm still learning how to make the most of it, but my current state is one of total amazement. I can't believe how well this works now.

  One game-changer has been custom agents and agent orchestration, where you let agents kick off other agents, and each one is customized and keeps a memory log. This lets me build several 1,000-LOC features in large existing codebases without reaching context limits, and with documentation that lets me review the work with some confidence.

  I have delivered several features in large legacy codebases that were implemented while I attended meetings. Agents have created greenfield dashboards, admin consoles, and the like from scratch, during daily standups, that would have taken me days to do myself. If something turned out badly, I tweaked the request and made another attempt over lunch. Several useful tools have been made that save me hours per week but that I had never taken the time to build myself.

  For now, I love it. I do feel a bit of "mourning the craft," but I love seeing things realized in hours instead of days or weeks.
- sornaensis: I have had good success using Copilot to analyze problems for me, and I have used it in some narrow professional projects to do implementation. It's still a bit scary how far off track the models can go without vigilance.

  I worry a lot that I will eventually have to trudge through AI-generated nightmares, since the major projects at work are implemented in Java and TypeScript. I have very little confidence in the models' ability to generate good code in these (or most) languages without a lot of oversight, and even less confidence in the many people I see who are happy to hand over all control to them.

  In my personal projects, however, I have been able to get what feels like a huge amount of work done very quickly. I just treat the model as an abstracted keyboard: telling it what to write, or more importantly what to rewrite and build out for me, while I revise the design plans or test things myself. It feels like a proper force multiplier.

  The main benefit is actually parallelizing the process of creating the code, NOT coming up with ideas about how the code should be made, or really any ideas at all. I instruct the models like a real micro-manager, giving very specific and narrow tasks all the time.
- bgdkbtv: For professional work, I like to offload some annoying bug fixes to Claude and let it figure them out, then peruse the changes to make sure nothing silly is being added to the codebase. Sometimes it works pretty well; other times, for complicated things, I need to step in and patch manually. Overall, I'm a lot less stressed about meeting deadlines and being productive at work. On the other hand, I'm more stressed about losing my employment due to AI hype and its effectiveness.

  For my side projects, I like to offload tedious steps like setup, scaffolding, or updating tasks to Claude. Things like weird build or compile errors that I would usually have to spend hours Googling, I can get sorted in a matter of minutes. Other than that, I still like to write my own code, as I enjoy doing it.

  Overall, I like it as a tool to assist in my work. What I dislike is how much peddling is being done to shove AI into everything.
- jebarker: I work in an R&D team as a research scientist/engineer. Cursor and Claude Code have undoubtedly accelerated certain aspects of my technical execution. In particular, root-causing difficult bugs in a complicated codebase has been accelerated through the ability to generate throwaway targeted logging code, and just generally having an assistant that can help me navigate and understand complex code.

  However, overall I would say that AI coding tools have made my job harder in two other ways:

  1. There's an increased volume of code that requires more thorough review and/or testing, or is just generally not in keeping with the overall repo design.

  2. The cost of prototyping ideas is lower, so the competitive aspect of deciding what to build or which experiment to run has ramped up. I basically need to think faster and with more clarity to perform the same as I did before, because the friction of implementation time has been drastically reduced.
- nzoschkeIt’s going very well.Experience level: very senior, programming for 25 years, have managed platform teams at Heroku and Segment.Project type: new startup started Jan ‘26 at https://housecat.com. Pitch is “dev tools for non developers”Team size: currently 2.Stack: Go, vanilla HTML/CSS/JS, Postgres, SQLite, GCP and exe.dev.Claude code and other coding harnesses fully replaced typing code in an IDE over the past year for me.I’ve tried so many tools. Cursor, Claude and Codex, open source coding agents, Conductor, building my own CLIs and online dev environments. Tool churn is a challenge but it pays dividends to keep trying things as there have been major step functions in productivity and multi tasking. I value the HN community for helping me discover and cut through the space.Multiple VMs available over with SSH with an LLM pre-configured has been the latest level up.Coding is still hard work designing tests, steering agents, reviewing code, and splitting up PRs. I still use every bit of my experience every day and feel tired at end of day.My non-programmer co-founder, more of a product manager and biz ops person, has challenges all the time. He generally can only write functional prototypes. We solve this by embracing the functional prototype and doing a lot of pair programming. It is much more productive than design docs or Figma wireframes.In general the game changer is how much a couple of people can get done. We’re able to prototype ideas, build the real app, manage SOC2 infra, marketing and go to market better than ever thanks to the “willing interns” we have. I’ve done all this before and the AI helps with so much of the boilerplate and busywork.I’m looking for beta testers and security researchers for the product, as well as a full time engineer if anyone is interested in seeing what a “greenfield” product, engineering culture and business looks like in 2026. Contact info in my profile.
- drrob: I've only recently begun using Copilot auto-complete in Visual Studio using Claude (doing C# development/maintenance of three SaaS products). I've been a coder since 1999.

  The suggestions are correct about 40% of the time, so I'm actually surprised when they're right, rather than becoming reliant on them. It saves me maybe 10 minutes a day.
- HorizonXP: I am having the greatest time professionally with AI coding. I now have the engineering team I've always dreamed of. In the last 2 months I have:

  - created a web-based app for a F500 client for a workflow they'd been trying to build for 2 years; won the contract
  - built an iPad app for the same client for their sales teams to use
  - built the engineering agent platform that I'm going to raise funding for
  - built a side project to do rough cuts of family travel videos (https://usefirstcut.com, soft launch video: https://x.com/xitijpatel/status/2026025051573686429)

  I see a lot of people in this thread struggling with AI coding at work. I think my platform is going to save you. The existing tools don't work anymore; we need to think differently. That said, the old engineering principles still work; heck, they work even better now.
- max_: I use it as a research tool. It has replaced my Googling, asking people, and looking up stuff on Stack Overflow. It's also good for generating small bits of boilerplate code.

  I don't use the whole agents thing; there are so many edge cases that I always need to understand and be aware of that I honestly think the AI cannot capture.
- sgillen: I use it all the time now, switching between Claude Code, Codex, and Cursor. I prefer CC and Codex for now, but everyone is copying everyone else's homework.

  I do a lot of greenfield, research-adjacent work, or work directly with messy code from our researchers. It's been excellent at building small tools from scratch, and at essentially brute-forcing undocumented code. I can give it a prompt like "Here is this code we got from research, the docs are 3 months out of date and don't work, keep trying things until you manage to get $THING running."

  Even for more production- and engineering-related tasks, I'm finding it speeds up velocity. But my engineering is still closer to greenfield than a lot of people's here.

  I do, however, feel less connected to the code. Even when reviewing thoroughly, I feel like I internalize things at a high level rather than knowing every implementation detail off the dome. The other downside is that I get bigger and more frequent code-review requests from colleagues. No one is handing me straight-up slop (yet...).
- aliljet: As crazy as this seems, it's unlocking another variation of software engineering I didn't think was accessible. Previously super entrenched and wickedly expensive systems, which might have taken years of engineering effort, suddenly appear ripe for disruption. The era of software systems with deeply engineered connectivity seems to be on the outs...
- crsl: At work I mostly use Claude Code and ChatGPT web for general queries, but Cursor is probably the most popular in our company. I don't think we are "cooked" but it definitely changes how development will be done. I think the process of coming up with solutions will still be there, but implementation is much faster now. My observations:
  1. What works for me is the usual: work iteratively on a plan, then implement and review. The more constraints I put into the plan, the better.
  2. The biggest problem for me is the LLM assuming something wrong and then having to steer it back or redo the plan.
  3. Exploring and onboarding to new codebases is much faster.
  4. I don't see the 10x speedup, but I do see that I can now discard and prototype ideas quickly. For example, I don't spend 20-30 minutes writing something just to revert it if I don't like how it looks or works.
  5. Mental exhaustion when working on multiple different projects/agent sessions is real, so I tend to only have one. Having to constantly switch the mental model of a problem is much more draining than the "old" way of working on a single problem. Basically, the more I give in to vibing, the harder it is to review and understand.
- jantb: Using Claude Code for fixing bugs in a rather huge codebase. I review the fixes, and if I think it wrote something I would make a PR of myself, I use it. Understanding is key, I think, as is giving it the right context. I have about 20 years of programming experience, and I'm letting it code in a domain and language I know very well. It saves me a lot when the bug requires finding a needle in a haystack.
- throwawayFanta: At my FAANG, there was a team of experienced engineers that proved they could deliver faster and more performant code than the complete org that was responsible for it earlier. So now a lot of different parts of the company are trying to replicate their workflow. The process is showing what works: you need AI-first documentation (a readme with one line per file to help manage context), skills and steering docs for your codebase, code style, etc. And it mostly works! For me personally, it has drastically increased productivity. I can pick up something from our infinitely huge backlog, provide some context, and let the agent go ham on fixing it while I do whatever other stuff is assigned to me.
- stephbook: I develop prototypes using Claude Code. The dead boring stuff. "Implement JWT token verification and role checking in Spring Boot. Secure some endpoints with OAuth2, some with API key, some public." C# and Java are so old that whatever solutions you find are 5 years out of date. Having an agent implement and verify the foundation is the perfect fit. There's no design, just ever-changing framework magic. I'd do the same "Google and debug" cycle, but 10 times slower.
- dondraper36: It's a fantastic performance booster for a lot of mundane tasks like writing and revising design docs, tests, debugging (using it like a super smart and active rubber duck), and system design discussions. I also use it as a final check on all my manually written code before sending it for code review. With all that said, I have this weird feeling that my ability to quickly understand and write code is no longer noticeable, nor necessary. Everyone now ships tons of code, and even if I do the same without any LLM, the default perception will be that it has been generated. I am not depressed about it yet, but it will surely take a while to embrace the new reality in its entirety.
- block_dagger: I am having a blast at work. I've been leaning hard into AI (as directed by leadership) while others are falling far, far behind. I am building new production features, often solo or with one or two other engineers, at lightning speed, and being recognized across the org for it. This is an incredible opportunity for many engineers that won't last. I'm trying to make the most of it. It will be sad when software is no longer a useful pastime for humans. I'm thinking another three years and most of us will be unemployed, or our jobs will have been transformed into something that would have been unrecognizable a few short years ago.
- makerofthings: I am required to maximise my use of AI at work, and so I do. It's good enough at simple, common stuff. Throw up a web page, write some Python, munge some data in C++: all great as long as the scale is small. If I'm working on anything cutting-edge or niche (which I usually am), it makes a huge mess and wastes my time. If you have a really big codebase, in the ~50 million LOC range, it makes a huge mess. I really liked writing code, so this is all a big negative for me. I genuinely think we have built a really bad thing that will take away jobs that people love and leave nothing but mediocrity. This thing is going to make the human race dumber, and it's going to hold us back.
- mentalgear: I use it mostly to explore the information space of architectural problems, but the constant "positive engagement feedback" (the first line of each generation: "brilliant insight") starts to feel deeply insincere, and also false, as it regularly claims "this is the mathematically best solution - ready to implement?" when, considered properly, it isn't. I have moved away from using an LLM before I have figured out the specifications; otherwise it's very, very risky to go down a wrong rabbit hole the LLM seduced you into via its "user engagement" training.
- florilegiumson: It has reignited my passion for coding by making it so I don't have to use my coding muscle as much during the day to improve our technologically boring product.
- amelius: AI is great for getting stuff to work on technologies you're not familiar with, e.g. writing an Android or iOS app, an OpenGL shader, or even a Linux driver. It's also great for sysadmin work, such as getting an ethernet connection up or installing a Docker container. For main coding tasks it is, IMHO, not suitable, because you still have to read the code, and I hate reading other people's code. And also, the AI is still slow, so it is hard to stay focused on a task.
- TrueSlacker0: I am no longer in software as a day job, so I am not sure my input applies. I traded that world for opening a small brewery back in 2013, so I am a bit outdated on many modern trends, but I still enjoy programming. In the last few months, using both Gemini and now moving over to Claude, I have created at least 5 (and growing) small apps that have radically transformed what I am able to do at the business. I improved automation of my bookkeeping (from ~16 hours a month categorizing everything down to 3ish), created far better reports on production, sales, and predictions from a system I had already been slowly writing all these years, created a run club rewards tracking app instead of relying on our paper method, improved upon a previously written full TV menu display system that syncs with our website and on-premises TVs, and now I am working on a full productive-maintenance trigger system and a personal phone app to trigger each of these more easily. It's been a game changer for me. I have so many more ideas planned, and each one frees up more of my wasted time to create more.
- stainlu: The biggest win for me has been cross-stack context switching. I maintain services in TypeScript, Python, and some Go, and the cost of switching between them used to be brutal - remembering idioms, library APIs, error handling patterns. Now I describe what I need and get idiomatic code in whichever language I'm in. That alone probably saves me 30-40 minutes on a typical day. Where it consistently fails: anything involving the interaction between systems. If a bug spans a queue producer and its consumer, or the fix requires understanding how a frontend state change propagates through API calls to a cache invalidation, the model gives you a confident answer that addresses one layer and quietly ignores the rest. You end up debugging its fix instead of the original issue. My stack: Claude Code (Opus) for investigation and bug triage in a ~60k LOC codebase, Cursor for greenfield work. Dropped autocomplete entirely after a month - it interrupted my thinking more than it helped.
- cloud8421: At $WORK, my team is relatively small (< 10 people), and a few people really invested in getting the codebase (a large Elixir application with > 3000 modules) in shape for AI-assisted development, with a very comprehensive set of skills and some additional tooling. It works really well (using Claude Code and Opus 4.6 primarily). Incremental changes tend to be well done and mostly one-shotted provided I use plan mode first, and larger changes are achievable by careful planning with split phases. We have skills that map to different team roles, and 5 different skills used for code review. This usually gets you 90% there before opening a PR. Adopting the tool made me more ambitious, in the sense that it lets me try approaches I would normally discard because of gaps in my knowledge and expertise. This doesn't mean blindly offloading work, but rather isolating parts where I can confidently assess risk, and then proceeding with radically different implementations guided by metrics. For example, we needed a way to extract redlines from PDF documents, and in a couple of days went from a prototype with embedded Python to an embedded Rust version with a robust test oracle against hundreds of documents. I don't have multiple agents running at the same time working on different worktrees, as I find that distracting. When the agent is implementing, I usually still think about the problem at hand and consider other angles that end up in subsequent revisions. Other things I've tried which work well: share an Obsidian note with the agent, and collaboratively iterate on it while working on a bug investigation. I still write a percentage of code by hand when I need to clearly visualise the implementation in my head (e.g. if I'm working on some algo improvement), or if the agent loses its way halfway through because it's just spitballing ideas without much grounding (a rare occurrence). I find Elixir very well suited for AI-assisted development because it's a relatively small language with strong idioms.
- eleventhborn: > If you've recently used AI tools for professional coding work, tell us about it.
POCC (Plain Old Claude Code). Since the 4.5 models, it does 90% of the work. I do a final tinkering and polishing for the PR, because by this point it is easier for me to fix the code myself than to ask the model to fix it. The work: fairly straightforward UI + backend work on a website. We have designers producing Figma, and we use the Figma MCP to convert that to web pages. POCC reduces the time taken to complete the work by at least 50%. The last-mile problem exists; it's not a one-shot, story-to-PR prompt. There are a few back-and-forths with the model, some direct IDE edits, offline tests, etc. I can see how having subagents/skills/hooks/memory can reduce the manual effort further. Challenges:
  1. AI-first documentation: stories have to be written with greater detail and acceptance criteria.
  2. Code reviews: Copilot reviews on GitHub are surprisingly insightful, but waiting on human reviews is still a bottleneck.
  3. AI-first thinking: some of the lead devs are still hung up on certain best practices that are not relevant in a world where the machine generates most of the code. There is friction between the code the LLM is good at and the standards expected from an experienced developer. This creates busy work at best, frustration at worst.
  4. Anti-AI sentiment: there is a vocal minority who oppose AI for reasons ranging from craftsmanship to capitalism to the global environmental crisis. It is a bit political, and Slack channels are getting interesting.
  5. Prompt engineering: I'm in the EU; when the team is multilingual and English is adopted as the language of communication, some members struggle more than others.
  6. Losing the will to code: I can't seem to make up my mind whether the tech is like the invention of the calculator or the creation of social media. We don't know its long-term impact on producing developers who can code for a living.
Personally, I love it. I mourn the loss of the 10x engineer, but those 10x guys have already boarded the LLM ship.
- spprashant: I only just started using it at work in the last month. I am a data engineer maintaining a big-data Spark cluster as well as a dozen Postgres instances, all self-hosted. I must confess it has made me extremely productive if we measure in terms of writing code. I don't even do a lot of special AGENTS.md/CLAUDE.md shenanigans; I just prompt CC, work on a plan, and then manually review the changes as it implements them. Needless to say, this process only works well because: A) I understand my codebase. B) I have a mental structure of how I want to implement it. Hence it is easy to keep the model and me in sync about what's happening. For other aspects of my job I occasionally run questions by GPT/Gemini as a brainstorming partner, but it seems a lot less reliable. I only use it as a sounding board. It does not seem to make me any more effective at my job than simply reading documents or browsing GitHub issues/Stack Overflow myself.
- daringrain32781: Like a lot of things, it's neither, and somewhere in the middle. It's net useful, even if just for code reviews that make you think about something you may have missed. I personally also use it to assist in feature development, but it's not allowed to write or change anything unless I approve it (and I like to look at the diff for everything).
- mellosouls: It churns through boring stuff, but at times it's like what I imagine is the intellectual equivalent of breaking in a wild horse: so capable, so fast, but easy to find yourself in a pile on the floor. I'm learning all the time, and it's fun, exasperating, tremendously empowering, and very definitely a new world.
- al_borland: For asking quick questions that would normally send me to a search engine, it's pretty helpful. It's also decent (most of the time) at throwing together some regex. For throwaway code, I might let the agent do some stuff. For example, we needed to test timing on DNS name resolution on a large number of systems, to try to track down whether that was causing our intermittent failures. I let an agent write that and was able to get results faster than if I did it myself, and I ultimately didn't have to care about the how... I just needed something to show to the network team to prove it was their problem. For larger projects that need to plug into the legacy codebase, which I'll need to maintain for years, I still prefer to do things myself, using AI here and there as previously mentioned to help with little things. It can also help find bugs more quickly (no more spending hours looking for a comma). I had an agent refactor something I was making for a larger project. It did it, and it worked, but it didn't write it in a way that made sense to my brain. I think others on my team would have also had trouble supporting it. It took something relatively simple and added so many layers to it that it was hard to keep all the context in my head to make simple edits or explain to someone else how it worked. I might borrow some of the ideas it had, but will ultimately write my own solution that I think will be easier for other people to read and maintain. Borrowing some of these ideas and doing it myself also allows me to continue to learn and grow, so I have more tools in my tool belt. With the DNS thing that was totally vibe-coded, there were some new things in there I hadn't done before. While the code made sense when I skimmed through it, I didn't learn anything from that effort. I couldn't do anything it did again without asking AI to do it again. Long-term, I think this would be a problem. Other people on my team have been using AI to write their docs. This has been awful. Usually they don't write anything at all, but at least then I know they didn't write anything. The AI docs are simply wrong, 100% hallucinations. I have to waste time checking the doc against the code to figure that out, and then go to the person that did it to make them fix it. Sometimes no doc is better than a bad doc.
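The throwaway DNS-timing script described above is the kind of thing a few lines of stdlib Python can cover. A minimal sketch, assuming the goal is just to time name resolution per host and flag the slow ones; the host list and threshold here are made-up placeholders, not details from the comment:

```python
import socket
import time

HOSTS = ["example.com", "example.org", "example.net"]  # stand-in targets
SLOW_MS = 500  # arbitrary threshold for flagging a lookup as slow

def time_lookup(host: str) -> float:
    """Return DNS resolution time for `host` in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(host, None)  # triggers a DNS lookup
    return (time.perf_counter() - start) * 1000

for host in HOSTS:
    try:
        ms = time_lookup(host)
        flag = "SLOW" if ms > SLOW_MS else "ok"
        print(f"{host}: {ms:.1f} ms [{flag}]")
    except socket.gaierror as err:
        print(f"{host}: resolution failed ({err})")
```

In practice you would fan this out across the affected systems and hand the aggregated numbers to the network team, which matches the "prove it was their problem" use the commenter describes.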
- wrs: On two greenfield web apps using straightforward stuff (Preact, Go, PostgreSQL), Claude Code has been very helpful. Especially with Claude Code and Opus >= 4.5, adding an incremental feature mostly just works. One of these is sort of a weird IDE, and Opus even does OK with obscure-ish things like CodeMirror grammars. I literally just write a little paragraph describing what I want, have it write the code and tests, give it a quick review, and 80% of the time it's like, great, no notes. To be clear, this is not vibecoding. I have a strong sense of the architecture I want, and explicitly keep Claude on the desired path, much like I would a junior programmer. I also insist on sensible unit and E2E test coverage with every incremental commit. I will say that after several months of this, the signalling between UI components is getting a bit spaghetti-like, but that would've happened anyway, and I bet Claude will be good at restructuring it when I get around to that. I also work in a giant Rails monolith with 15 years of accumulated cruft. In that area I don't write a whole lot, but CC Opus 4.6 is fantastic for reading the code. Like, ask "what are all the ways you can authenticate an API endpoint?" and it churns away for 5 minutes and writes a nice summary of all four that it found, what uses them, where they're implemented, etc.
- mc-0: I just moved to a new team in my company that prides itself on being "AI-First". The work is a relatively new project that was stood up by a small team of two developers (both of whom seem pretty smart) in the last 4 months. Both acknowledged that there are parts of their tech stack they just don't understand at all (the Next.js frontend). The backend is a gigantic monorepo of services glued together. The manager and a senior dev on my first day told me, "Don't try to write code yourself, you should be using AI". I was encouraged to use spec-driven development and frameworks like superpowers, gsd, etc. I'm definitely moving faster using AI in this way, but I legitimately have no idea what the fuck I am doing. I'm making PRs I don't know shit about. I don't understand how it works, because there is an emphasis on speed; instead of ramping up in languages/technologies I've never used, I'm just shipping a ton of code I didn't write and have no real way to vet like someone who has been working with it regularly and actually has mastered it. This time last year, I was still using AI, but as a pair-programming utility, where it helped me learn things I didn't know, probe topics/concepts I needed exposure to, and reason through problems that arose. I can't control the direction of how these tools are going to evolve and be used, but I would love if someone could explain to me how I can continue to grow if this actually is the future of development. Because while I am faster, the hope seems to be that AI/agents/LLMs will only ever get better, and I will never need to have an original thought or use critical thinking. I have just about 4 years of professional experience. I had about 10-12 months at the start of my career where I used Google to learn things before LLMs became the singular focus. I wake up every day with existential dread of what the future looks like.
- spicyusername: Originally my workflow was:
  - Think about requirement
  - Spend 0-360 minutes looking through the code
  - Start writing code
  - Realize I didn't think about it quite enough and fix the design
  - Finish writing code
  - Write unit tests
  - Submit MR
  - Fix MR feedback
Until recently no LLM was able to properly disrupt that; however, the release of Opus 4.5 changed that. Now my workflow is:
  - Throw as much context into Opus as possible about what I want, in plan mode
  - Spend 0-60 minutes refining the plan
  - Have Opus do the implementation
  - Review all the code and nitpick small things
  - Submit MR
  - Implement MR feedback
- shockwaverider: One thing I use Claude for is diagramming system architecture stuff in LaTeX, and it's great. I just describe what I am visualizing and kaboom, I get perfect output I can paste into Overleaf.
- balls187: Very powerful tool. In the right hands.
- its-kostya: The output the agent creates falls into one of these categories:
  1. Correct, maintainable changes
  2. Correct, not maintainable changes
  3. Correct diff, maintains expected system interaction
  4. Correct diff, breaks system interaction
In no way are they consistent or deterministic, but they are _always_ convincing that they are correct.
- cedws: For my job, which is mostly YAML engineering with some light Go coding (platform), I'm finding it useful. We're DRY-ing out a bunch of YAML with CUE at the moment, and it's sped that work up tremendously. When it comes to personal projects I'm feeling extremely unmotivated. Things feel more in reach, and I've probably built ten times the number of throwaway projects in the past year than I have in previous years. Yet I feel no inspiration to see those projects through to the end. I feel no connection to them because I didn't build them. I have a feeling of 'what's the point' publishing these projects when the same code is only a few prompts away for someone else too. And publishing them under my name only cheapens the rest of my work, which I put real cognitive effort into. I think I want to focus more on developing knowledge and skills moving forward. Whatever I can produce with an LLM in a few hours is not actually valuable unless I'm providing some special insight, and I think I'm coming to terms with that at the moment.
- lazystar: my team is anti-AI. my code review requests are ignored, or are treated more strictly than others. it feels coordinated - i will have to push back the launch date of my project as a result. another teammate added a length check to an input field, and his request was merged near instantly, even though it had zero unit testing. this team is incredibly cooked in the long term; i just need to ensure that i survive the short term somehow.
- __mp: I enjoy Opus on personal projects. I don't even bother to check the code. Go/JavaScript/TypeScript/CSS works very well for me. Swift, not so much. I haven't tried C/C++ yet. Scala was OK. Professionally I hardly use the tools for coding, since I'm in an architecture role and mostly write design docs and do reviews. And I write the occasional prototype. I have started building tools to integrate Copilot (Opus) better with $CORP. This way I can ask it questions across Confluence and GitHub. Leveraging Claude for a project feels very addictive to me. I have to make a conscious effort to stop, and I end up working on multiple projects at the same time.
- throwatdem12311: I am forced to use it. They want us to only have code written by Claude. We are forced to use spec-kit for everything, so every PR has hundreds, if not thousands, of lines of markdown committed to the repo per ticket. I basically only review code now. It changes so fast it is impossible to have a stable mental model of the application. My job is now to go to meetings and go through the motions of reviewing thousands of lines of slop per day while sending thousands of lines of slop to others. Everything I liked about the job has been stolen from me; only things I disliked or was indifferent to are left. If this is what the industry is now... this will be my last job in it. Curse everyone involved with creating this nightmare.
- seanmcdirmid: I'm transitioning from AI-assisted (human in the loop) to AI-driven (human on the loop) development. But my problems are pretty niche; I'm doing analytics right now, where AI-driven is much more accessible. I'm in a team of three, but so far I'm the only one doing the AI-driven stuff. It basically means focusing on your specification, since you are handing development off to the AI afterwards (and then a review of functionality/test coverage before deploying). Mostly using Gemini Flash 3 at a FAANG.
- Tade0: Daily Claude user via Cursor. What works:
  - Just pasting the error and asking "what's going on here?"
  - "How do I X in Y considering Z?"
  - Single-use scripts.
  - Tab (most of the time), although that doesn't seem to be Claude.
What doesn't:
  - Asking it to actually code. It's not going to do the whole thing, and even if it does, it will take shortcuts, occasionally removing legitimate parts of the application.
  - Tests. Obvious cases it can handle, but once you reach a certain threshold of coverage, it starts producing nonsense.
Overall, it's amazing at pattern matching, but doesn't actually understand what it's doing. I had a coworker like this - same vibe.
- theturtletalks: 5 years ago, I set out to build an open-source, interoperable marketplace. It was very ambitious, because it required me to build an open-source Shopify for not only e-commerce, but restaurants, gyms, hotels, etc. that this marketplace could tap into. Without AI, this vision would've faltered, but with AI, this vision is in reach. I see so many takes about AI killing open source, but I truly think it will liberate us from proprietary SaaS and marketplaces given enough time.
- bcrosby95: I've been working on a client-server Unity-based game the last couple of years. It's pretty bad at handling that use case. It misses tons of corner cases that span the client-server divide.
- jwr: Measured a 12x increase in issues fixed/implemented. Solo founder business, so these are real numbers (over 2 months), not corporate fakery. And no, I am not interested in convincing you; I hope all my competitors believe that AI is junk :-)
- gooseyard: I haven't actively looked into it, but on a couple of occasions after Google began inserting Gemini results at the top of the list, I decided to try using some of the generated code samples when the search didn't turn up anything useful. The results were a mixed bag: the libraries I'd been searching for examples from were not very broadly used, and their interfaces volatile enough that in some cases the model was returning results for obsolete versions. Not a huge deal, since the canonical docs had some recommendations. In at least a couple of cases, though, the results included references to functions that had never been in the library at all, even though they sounded not only plausible but would have been useful if they did in fact exist. In the end, I am generally using the search engine to find examples because I am too lazy to look at the source for the library I'm using, but if the choice is between an LLM that fabricates stuff some percentage of the time and just reading the fucking code like I've been doing for decades, I'd rather take my chances with the search engine. If I'm unable to understand the code I'm reading enough to make it work, it's a good signal that maybe I shouldn't be using it at all, since ultimately I'm going to be on the hook to straighten things out if stuff goes sideways. Ultimately that's what this is all about: writing code is a big part of my career, but the thing that has kept me employed is being able to figure out what to do when some code that I assembled (through some combination of experimentation, documentation, or duplication) is not behaving the way I had hoped. If I don't understand my own code, chances are I'll have zero intuition about why it's not working correctly, and so the idea of introducing a bunch of random shit thrown together by some service which may or may not be able to explain it to me would be a disservice to my employers, who trust me on the basis of my history of being careful. I also just enjoy figuring shit out on my own.
- fleventynine: At the end of the day, I'm being paid to ensure that the code deployed to production meets a particular bar of quality. Regardless of whether I'm reviewing code or writing it, if I let a commit be merged, I have to be convinced that it is a net positive to the codebase. People having easy access to LLMs makes this job much harder. LLMs can create what looks at the surface like expert-written code, but suffers from below-the-surface issues that will reveal themselves as intermittent issues or subtle bugs after being deployed. Inexperienced devs create huge commits full of such code, and then expect me to waste an entire day searching for such issues, which is miserable. If the models don't improve significantly in the future, I expect that most high-stakes software teams will fire all the inexperienced devs and have super-experienced engineers work with the bots directly.
- pgt: I am getting disproportionately good results with the models by following a process: spec -> plan -> critique -> improve plan -> implement plan.
- owenpalmer: It's a game changer for reading large codebases and debugging. Error messages were the "slop" of the pre-LLM era. This is where an LLM shines, filling in the gaps where software engineering was neglected. As for writing code, I don't let it generate anything that I couldn't have written myself, or anything that I can't keep in my brain at once. Otherwise I get really nervous about committing. The job of a software engineer does and always has relied upon taking responsibility for the quality of one's work. Whether it's auto-complete or a fancier auto-complete, the responsibility should rest on your shoulders.
- dxxmxnd: I also work at big tech. Claude Code is very good, and I have not written code by hand in months. These are very messy codebases as well. I have to very much be in the loop, constantly guiding it with clarifying questions, but it has made running multiple projects in parallel much easier, and it has handled many tedious tasks.
- synthc: Very hit or miss. Stack: Go, Python. Team size: 8. Experience: mixed. I'm using a code review agent which sometimes catches a critical bug humans miss, so that is very useful. Using it to get to know a codebase is also very useful. Questions like 'which functions touch this table' or 'describe the flow of this API endpoint' are usually answered correctly. This is a huge time saver when I need to work on a codebase I'm less familiar with. For coding, agents are fine for simple, straightforward tasks, but I find the tools are very myopic: they prefer very local changes (adding new helper functions all over the place, even when such helpers already exist). For harder problems I find agents get stuck in loops, and coming up with the right prompts and guardrails can be slower than just writing the code. I also hate how slow and unpredictable the agents can be. At times it feels like gambling. Will the agents actually fix my tests, or fuck up the codebase? Who knows; let's check in 5 minutes. IMO the worst thing is that juniors can now come up with large change sets that seem good at a glance but then turn out to be fundamentally flawed, and it takes tons of time to review.
- jansan: It is great for solving "puzzle" problems and removing roadblocks. In the past, whenever I got stuck, I often got frustrated and gave up. In so many cases AI hints at the correct solution and helps me continue. It's like a knowledgeable colleague that you can ask for help when you get stuck. Another thing is auditing and code polishing. I asked Claude to polish a working but still rough browser plugin, consisting of two simple JavaScript files. It took ten iterations and a full day of highly intensive work to get the quality I wanted. I would say the result is good, but I could not do this process very often without going insane. And I do not want to do this with a more complex project, yet. So, yes, I am using it. For me it's a tool, knowledge resource, puzzle solver, code reviewer, and source of inspiration. It's not a robot to write my code. And never trust it blindly!
- the__alchemistHere's my anecdote: I use ChatGPT, Gemini (web chat UI) and Claude. Claude is a bit more convenient in that it has access to my code bases, but this comes at the cost that I have to be careful I'm steering it correctly, while with the chat bots, I can feed it only the correct context.They simplify discrete tasks. Feature additions, bug fixes, augmenting functionality.They are incapable of creating good quality (easily expandable etc) architecture or overall design, but that's OK. I write the structs, module layout etc, and let it work on one thing at a time. In the past few days, I've had it: - Add a ribbon/cartoon mesh creator - Fix a logical vs physical pixel error, on devices where they differ, for positioning text and setting window size - Fix a bug with selecting things with the mouse under specific conditions - Fix the BLE advertisement payload format when integrating with a service - Input tax documents for stock sales from the PDF my broker gives me into the CSV format the tax software uses. Overall, great tool! But I think a lot of people are lying about its capabilities.
- mono442I almost don't write any code by hand anymore and was able to take up a second part-time job thanks to genai.
- saadn92Exceptionally well. I’ve been using it for my side project for the last 7 months and have learned how to use it really well on a rather large codebase now. My side project has about 100k LOC and most of it is AI generated, though I do heavily review and edit.
- anyonecancodeSomewhat against the common sentiment, I find it's very helpful on a large legacy project. At work, our main product is a very old, very large code base. This means it's difficult to build up a good understanding of it -- documentation is often out of date, or makes assumptions about prior knowledge. Tracking down the team or teams that can help requires being very skilled at navigating a large corporate hierarchy. But at the end of the day, the answers for how the code works are mostly in the code itself, and this is where AI assistance has really been shining for me. It can explore the code base and find and explain patterns and available methods far faster than I can.My prompts tend to be in the pattern of "I am looking to implement <X>. <Detailed description of what I expect X to do.> Review the code base to find similar examples of how this is currently done, and propose a plan for how to implement this."These days I'm on Claude Code, and I do that first part in Plan mode, though even a few months ago on earlier, not-as-performant models and tools, I was still finding value with this approach. It's just getting better, as the company is investing in shared skills/tools/plugins/whatever the current terminology is that are specific to various use cases within the code base.I haven't been writing so much code directly, but I do still very much feel that this is my code. My sessions are very interactive -- I ask the agent to explain decisions, question its plans, review the produced code and often revise it. I find it frees me up to spend more time thinking through and applying higher level architecture instead of spending frustrating hours hunting down more basic "how does this work" information.I think it might have been an article by Simon Willison that made the case that there is a way to use AI tooling to make you smarter, or to make you dumber. 
Pointing, shooting, and blindly accepting output makes you dumber -- it places more distance between you and your code base. Using AI tools to automate away a lot of the toil gives you energy and time to dive deeper into your code base and develop a stronger mental model of how it works -- it makes you smarter. I keep in mind that at the end of the day, it's my name on the PR, regardless of how much Claude directly created or edited the files.
- fantasizrI still like using it for quick references, autocomplete, and boilerplate functions. It's funny that text completion with tab is now seen as totally obsolete to some folks.
- jv22222I feel bad saying this because so many folks have not had the best of luck, but it's changed the game for me.I'm building out large multi-repo features in a 60-repo microservice system for my day job. The AI is very good at exploring all the repos and creating plans that cut across them to build the new feature or service. I've built out legacy features and also completely new web systems, and also done refactoring. Most things I make involve 6-8 repos. Everything goes through code review and QA. Code being created is not slop. It's high quality code and passes reviews as such. Any pushback I get goes back into the docs, and next time round those mistakes aren't made.I did a demo of how I work in AI to the dev team at Math Academy, who were complete skeptics before the call; 2 hours later they were converts.
- suzzer99I work for a university. We've got a dedicated ChatGPT instance but haven't been approved to use a harness yet. Some devs did a pilot project and approval/licenses are supposedly coming soon.
- truscheI've been using opencode and oh-my-opencode with Claude's models (via GitHub Copilot). The last two or three months feel like they have been the most productive of my 28-year career. It's very good indeed with Rails code; I suspect it has something to do with the intentional expressiveness of Ruby plus perhaps some above-average content that it would be trained on for this language and framework. Or maybe that's just my bias.It takes a bit of hand holding and multiple loops to get things right sometimes, but even with that, it's pretty damn good. I don't usually walk away from it, I actively monitor what it's doing, peek in on the sub-agents, and interject when it goes down a wrong path or writes messy code. But more often than not, it goes like this: - Point at a GH issue or briefly describe the task - Either ask it to come up with a plan, or just go straight to implementation - When done, run *multiple* code review loops with several dedicated code review agents - one for idiomatic Rails code, one for maintainability, one for security, and others as needed. These review loops are essential; they help clean up the code into something coherent most times. It really mirrors how I tend to approach tasks myself: write something quickly that works, make it robust by adding tests, and then make it maintainable by refactoring. Just way faster.I've been using this approach on a side project, and even though it's only nights and weekends, it's probably the most robust, well-tested and polished solo project I've ever built. All those little nice-to-have and good-to-great things that normally fall by the wayside if you only have nights and weekends - all included now.And the funny thing is - I feel coding with AI like this gets me in the zone more than hand-coding. 
I suspect it's the absence of all those pesky rabbit holes that tend to be thrown up by any non-trivial code base and tool chain, which can easily distract us from thinking about the problem domain and instead have us solving problems of our tools. Claude deals with all that almost as a side effect. So while it does its thing, I read through its self-talk while thinking along about the task at hand, intervening if I disagree, but I stay at the higher level of abstraction, more or less. Only when the task is basically done do I dive a level deeper into code organisation, maintainability, security, edge cases, etc.Needless to say, very good test coverage is essential to this approach.Now, I'm very ambivalent about the AI bubble. I believe very firmly that it is one, but for coding specifically, it's a paradigm shift, and I hope it's here to stay.
- ruduhudiI have the freedom to work with AI tools as much as I want and kind of lead the team in the direction I see fit.It's a lot of fun for exploring ideas. I've built things very fast that I would not have done at all otherwise. I have rewritten a huge chunk of semi-outdated docs into something useful with a couple of prompts in a day. Claude does all the annoying "dependency update breaks the build" kinds of things. And the reviews are extremely useful and a perfect complement to human review, as they catch things extremely well that humans are bad at catching.But in the production codebase changes must be made with much more consideration. Claude tends to perform well on some tasks, but for others I end up wasting time because I just don't know up front how the feature must look, so I cannot write a spec at the level of precision that Claude needs, and changing code manually is more efficient for this kind of discovery for me than dealing with large chunks of constantly changing code.And then there's the fact that Claude produces things that work and do the thing described in the prompt extremely well, but they are always also wrong in some way. When I let AI build a large chunk of code and actually go through the code, there's always a mess somewhere that AI review doesn't see because it looks completely plausible but contains some horrible security issue, or complete inconsistency with the rest of the codebase, or, you know, that custom yaml parser nobody asked for and that you don't want your day job to depend on.
- eranationWhat you said about "we're all cooked" and "AI is useless" is literally me and everyone I know switching between the two on an hourly basis...I find it the most exciting time for me as a builder, I can just get more things done.Professionally, I'm dreading our future, but I'm sure it will be better than I fear, worse than I hope.From a toolset, I use the usual: Cursor (super expensive if you go with Opus 4.6 max, but their computer use is game changing, although soon it will become a commodity), Claude Code (pro max plan) - my new favorite. Trying out Codex, and even Copilot as it's practically free if you have enterprise GitHub. I'm going to probably move to Claude Code; I'm paying way too much for Cursor, and I don't really need tab completion anymore... once Claude Code has a decent computer use environment, I'll probably cancel my Cursor account. Or... I'll just use my own with OpenClaw, but I'm not going to give it any work / personal access, only access to stuff that is publicly available (e.g. run sanity checks as a regular user). Playing with skills, subagents, agent teams, etc... it's all just markdowns and json files all the way down...About our professional future:I'm not going to start learning to be a plumber / electrician / A/C repair tech etc, and I am not going to recommend my children do so either, but I am not sure I will push them to learn Computer Science, unless they really want to do Computer Science.What excites me the most right now is my experiments with OpenClaw / NanoClaw, I'm just having a blast.tl;dr most exciting yet terrifying times of my life.
- lnrdI work at a unicorn in the EU. Claude Code has been rolled out to all of engineering with strict cost control policies; even with these in place we burn through tens of thousands of euro per month, which I think could translate into 15-20 hires easily. Are we more productive than adding people to the headcount? That's a good question that I cannot answer.Some senior people who were in the AI pilot, have been using this for a while, and are very into it claimed that it can open PRs autonomously with minimum input or supervision (with a ton of MD files and skills in repos with clear architecture standards). I couldn't replicate this yet.I'm objectively happy to have access to this tool; it feels like a cheat code sometimes. I can research things in the codebase so fast, or update tests and glue code so quickly, that my life is objectively better. If the change is small or a simple bugfix it can truly do it autonomously quicker than me. It does make me lazier though; sometimes it's just easier to fire up Claude than to focus and do it by myself.I'm careful not to overuse it, mostly so I don't reach the monthly cap, so that I can "keep it" if something urgent or complex comes my way. Also I still like to do things by hand just because I still want to learn and maintain my skills. I feel that I'm not learning anything by using Claude; that's a real thing.In the end I feel it's a powerful tool that is here to stay, and I would be upset if I no longer had access to it; it's very good. I recently subscribed to it and use it in my free time just because it's a very fun technology to play with. But it's a tool. I'm paid because I take responsibility that my work will be delivered on time, working, tested, with code on par with the org quality standards. Whether I do it by hand or with Claude is irrelevant. If I can do it faster it will likely mean I will receive more work to do. 
Somebody still has to operate Claude, and it's not going to be non-technical people, for sure.I genuinely think that if anyone still believes today that this technology is only hype or a slop machine, they are in denial or haven't tried to use a recent frontier model with the correct setup (mostly giving the agent a way to autonomously validate its changes).
- devldI'm repeating this for the third time, but a non-technical client of mine has whipped up an impressive SaaS prototype with tons of features. They still need help with the cleanup, it's all slop, but I was doing many small coding requests for that client. Those gigs will simply disappear.I just got started using Claude very recently. I had not been in the loop on how much better it got. Now it's obvious that no one will write code by hand. I genuinely fear for my ability to make a living as soon as 2 years from now, if not sooner. I figure the only way is to enter the red queen race and ship some good products. This is the positive I see. If I put 30h/week into something, I have the productivity of 3 people. If it's a weekend project at 10h/week, I now have what used to be a full week of productivity. The economics of developing products solo have vastly changed for the better.
- devmorIt’s not really that useful for what people tell me it will be useful for, most of the time. For context, I am a senior engineer that works in fintech, mostly doing backend work on APIs and payment rail microservices.I find the most use from it as a search engine the same way I’d google “x problem stackoverflow”.When I was first tasked with evaluating it for programming assistance, I thought it was a good “rubber duck” - but my opinion has since changed. I found that if I documented my goals and steps, using it as a rubber duck tended to lead me away from my goals rather than refine them.Outside of my role they can be a bit more useful and generally impressive when it comes to prompting small proof of concept applications or tools.My general take on the current state of LLMs for programming in my role is that they are like having a junior engineer that does not learn and has a severe memory disorder.
- AdeptusAquinasVery much mixed. I've used Claude to generate small changes to an existing repo, asking it to create functions or react templates in the style of the rest of the file it's working in, and that's worked great. I still do a lot of the fine tuning there, but if the codebase isn't one I am overly familiar with, this is a good way to ensure my work doesn't conflict with the core team's.I have also done the agentic thing and built a full CLI tool via back-and-forth engagement with Claude, and that worked great - I didn't write a single line of code. Because the CLI tool was calling an API, I could ask Claude to run the requests it was generating and adjust based on the result - errors, bad requests etc - and it would fairly rapidly fix things and coalesce on a working solution.After I was done though, I reckon that if I had just done the work myself instead, I would have had a much smaller, more reliable project. Less error handling, no unit tests, no documentation, sure, but it would have worked and worked better - I wouldn't have needed to iterate off the API responses because I would have started with a better contract-based approach. But all of that would have been hard, would have required more 'slow thinking'. So... I didn't really draw a clean conclusion from the effort.Continuing to experiment, not giving up on anything yet.
- morkalorkMore work, shorter deadlines, smaller headcount, higher expectations in terms of adaptability/transferability of people between projects (who needs knowledge transfer, ask the AI!).But in the end, the thing that pisses me off was a manager who used it to write tickets. If the product owner doesn't give a shit about the product enough to think and write about what they want, you'll never be successful as a developer.Otherwise it's pretty cool stuff.
- tonymetAI has helped me break through procrastination by taking care of tedious tasks that beforehand had a low ROI (boilerplate, CI/CD configs, test coverage, shell scripting/automation).- create unit tests and benchmark tests that required lots of boilerplate, fixtures- add CI/CD to a few projects that I didn't have motivation to- freshen up old projects to modern standards (testing, CI/CD, update deps, migrations/deprecations)- add monitoring / alerting to 2 projects that I had been neglecting. One was a custom DNS config uptime monitor.- automated backup tools along with scripts for verifying recovery procedure- moderate migrations for deprecated APIs and refactors within CLI and REST API services- auditing GCP project resources for billing and security breaches- frontend, backend and offline tiers for a cloud storage management app
- HamukoI wrote a blog post over the New Year's of my experience at a company where AI coding was widely used and developer skill wasn't particularly high.https://burakku.com/blog/tired-of-ai-coders/I think the addendum to that is that I've since left.
- x3n0ph3n3I use Cursor and have been pretty happy with the Plan -> Revise -> Build -> Correct flow. I don't write everything with it, but the planning step does help me clarify my thoughts at times.One of the things that has helped the most is all the documentation I wrote inside the repository before I started using AI. It was intended for consumption by other engineers, but I think Cursor has consumed it more than any human. I've even managed to make improvements not by having AI update it, but by asking AI "What unanswered questions do you have based on reading the documentation?" It has helped me fill in gaps and add clarity.Another thing I've gotten a ton of value from is having it author diagrams. I've had it create diagrams with both the mermaid syntax and AWSDAC (Diagram-as-Code). I've always found crafting diagrams a painstaking process. I have it make a first pass by analyzing my code + configuration, then make corrections and adjustments by explaining the changes I want.In my own PRs, I have been in the habit of posting my Cursor Plan document and transcript so that others can learn from it. I've also encouraged other team members to do the same.I feel bad for any teams that are being mandated to use a certain amount of AI. It seems to me that the only way to make it work is by having teams experiment with it and figure out how to best use it given their product and the team's capacity. AI is like a pair of Wile E. Coyote rocket skates. It'll get you somewhere fast, but unless you've cleared the road of debris and pointed in exactly the right direction, you're going to careen off a cliff or into a wall.
- gnerd00FAANG colleague writes this week -- "I am currently being eaten alive by AI stuff for my non-(foss-project) work. I spend most of my day slogging through AI generated comments and code trying to figure out what is good, not good, or needs my help to become good. Or I'm trying to figure out how to prompt the tools to do what I want them to do."This fellow is one of the few mature software engineers I have ever met who is rigorously and consistently productive in a very challenging mature code base year in and year out. Or WAS... yes, this is from coughgooglecough in California.
- vcryanLove it. I can create closer to the speed of decision making.
- alexmuroAI has definitely shifted my work in a positive way, but I am still playing the same game.I run a small lab that does large data analytics and web products for a couple of large clients. I have 5 developers who I manage directly, I write a lot of code myself and I interact directly with my clients. I have been a web developer for long enough to have written code in coldfusion, php, asp, asp.net, rails, node and javascript, from microsoft frontpage exports, to jquery, to backbone, angular and react, and in a lot of different frameworks. I feel this breadth of watching the internet develop in stages has given me a decent if imperfect understanding of many of the tradeoffs that can be made in developing for the web.My work lately is on an analytics / cms / data management / gis platform that is used by a couple of our clients and that we'd been developing for a couple of years before any AI was used on it at all. It's a react front end built on react-router-7 that can be SPA or SSR, and a node api server.I had tried AI coding a couple of times over the past few years, both for small toy projects and in my work, and it felt to me less productive than writing code by hand, until this January when I tried Claude Code with Opus 4.5. Since then I have written very few features by hand, although I am often actively writing parts of them, or debugging by hand.I am maybe in a slightly unique place in that part of my job is coming up with tasks for other developers and making sure their code integrates back. I've been doing this for 10 years plus, and personally my success rate with getting someone to write a new feature that will get use is maybe a bit over 50%; that is maybe generous? 
Figuring out what to do next in a project that will create value for users is the hard part of my job, whether I am delegating to developers or to an AI, and that hasn't changed.That being said, I can move through things significantly faster and more consistently using AI, and get them out to clients for testing to see if they are going to work. It's also been great for tasks which I know my developers will groan at if I assign them. In the last couple of months I've been able to:- create a new version of our server that is free from years of cruft of the monorepo api we use across all our projects - implement sqlite compatibility for the server (in addition to original postgres support) - implement local-first sync from scratch for the project - test out a large number of optimization strategies, not all of which worked out, but which would have taken me so much longer and been so much more onerous that the cost-benefit ratio of engaging them would have been not worth it - tons of small features I would have assigned to someone else but are now less effort to just have the AI implement. I think the biggest plus though is the amount of documentation that has accrued in our repo since starting to use these tools. I find AI is pretty great at summarizing different sections of the code, and with a little bit of conversation I can get it more or less exactly how I want it. This has been hugely useful to me on a number of occasions and something I would have always liked to have been doing, but on a small team that is always under pressure to create results for our clients, it's something that didn't cross the immediate threshold of the cost-benefit ratio. In my own use of AI, I keep the bottleneck at my own understanding of the code; it's important to me that I maintain a thorough understanding of the codebase. 
I could possibly go faster by giving it a longer leash, but that trade-off doesn't seem wise to me at this point, first because I'm already moving so much faster than I was very recently, and secondly because it doesn't seem very far from the next bottleneck, which is deciding what is the next useful thing to implement. For the most part, I find the AI has me moving in the right direction almost all the time, but I think this is partly because I am already practiced in communicating to programmers what to implement next and I have a deep understanding of the code base, but also because I spend more than half of the time using AI adding context, plans and documentation to the repo.I have encouraged my team to use these tools, but I am not forcing it down anyone's throat, although it's interesting to give people tasks that I am confident I could finish much quicker and much more to my personal taste than assigning them. The reactions from my team are pretty mixed: one of the strongest contributors doesn't find a lot of gains from it, one has found similar productivity gains to myself, and others are very against it and hate it.I think one of the things it will change for me is that I can no longer just create the stories for everyone. Learning how to choose what to work on is going to be the most important skill in my opinion, so over the next couple of months I am going to be shifting so everyone on my team has direct client interactions, and I am going to try to shift away from writing stories to having meetings where I help them decide on their own what to work on. 
Still, part of the reason that I can afford to do this is because I can now get as much or more work done than I was able to with my whole team at this time last year.That's a big difference in one way, and I am optimistic that the platform I am working on will be a lot better and able to compete with large legacy platforms that it wouldn't have been able to compete with in the past. But still, it just tightens the loop of trying new things and getting feedback, and the hardest part of the business is still communication with clients and building relationships that create value.
- syngrog66It solves no actual problem I have. Yet it introduces many new ones. It's a trap. So I don't use it and have a strong policy against using it or allowing others to use it on things I work on. BATNA is key.
- jacquesmFor those of you for whom it is working: show your code, please.