Comments (1035)
- impulser_I'm pretty sure he's talking about companies and people outsourcing their decision making and thinking to AI, not really about using AI itself. I don't think using AI to write code is AI psychosis or bad at all, but if you just prompt the AI and believe whatever it tells you, then you have AI psychosis. You see this a lot with finance people and VCs on Twitter: they literally post screenshots of ChatGPT as their thinking and reasoning about a topic instead of doing a little bit of thinking themselves. These things are dog shit when it comes to ideas, thinking, or providing advice, because they are pattern matchers; they're just going to give you the pattern they see. Most people see this if they just try to talk to one about an idea: it often spits out the most generic dog shit. It is, however, pretty useful for certain tasks where pattern matching is actually beneficial, like writing code. But again, you just can't let it do the thinking and decision making.
- kalkinI think there's a reasonable argument that our entire society right now is under AI psychosis: the stock market keeps going up in the face of the indefinite closure of Hormuz, and we're investing in datacenters at a scale that only makes sense if AI capabilities continue to advance to the point where they surpass most humans at most white-collar tasks, if not reach superintelligence. And what are the possible outcomes? Bust: we've come away with a useful tool, but the hundreds of billions in capital expenditure were thrown away on a pipe dream. Or success: we're the dog that's caught the car. Then what? Currently the political debate is, to caricature only slightly, between "oh no, the datacenters will use more water than golf courses" and "lol, what are you going to do, regulate matrix multiplication?". How the hell are we going to cope with introducing a new intelligent species? Either way, it sure seems like we're collectively operating more in the interests of the future AI than in the interests of humanity. What is this, if not a sort of psychosis?
- wrxdI'm at a FAANG and we have a $300/day token quota. Personally I don't use that much of it, but management is pushing really hard for it: "the quota has been raised for a reason, use it". Any task: "have you tried working on it with Claude?". Every meeting: "now engineers X and Y will show you what they did with AI". It's not all useless, but most days I think I would be more productive if some processes were streamlined rather than having to throw tokens at them and still fail. Of all the showcases I've seen, the best are the ones built by people assuming the token bonanza will not last, so they used AI to build tools they wished they had. AI was used to build the tool, but is by no means used by the tool, so if/when the token quota gets reduced we still have a functional tool.
- charlotte-fyiI feel like I'm in a really weird position where I both really dislike what AI is doing to the experience and practice of writing code, to the point where I want a job doing literally anything else besides using the computer, but also think that these tools are extremely powerful and only getting better. I think Mitchell's point is well taken: it's possible for these tools to introduce rotten foundations that will only be found out later, when the whole structure collapses. I don't want to be on the hook when that happens without the deep understanding of the codebase that I used to have. But humans have introduced subtle yet catastrophic bugs into code forever too... A lot of this feels like an open empirical question. Will we see many systems collapse in horrifying ways that they uniquely didn't before? Maybe some, but won't we also learn that we need to shift more toward specification and validation? Idk, it just seems to me like this style of building systems is inevitable, even if there are some bumps along the way. I feel like many in the anti camp have their own kind of reactionary psychosis. I want nothing to do with AI, but I also can't deny my experience of using these tools. I wish there were more venues for this kind of realist-but-negative discussion of AI. Mitchell is a great dev for this reason.
- foxfiredMaybe this is what will turn software engineering into an Engineering field. Right now, prompters are setting up whole company infrastructures. I personally know one. He migrated the company's database to a newer Postgres version. He was successful in the end, but I was gnashing my teeth as he described every step of the process. It sounded like: "And then, I poured gasoline on the servers while smoking a cigarette. But don't worry, I found a fire extinguisher in the basement. The gauge says it's empty, but I can still hear some liquid when I shake it..." If he leaves the company, they will need an even more confident prompter to maintain their DB infrastructure.
- thisisitRecently I had a request come through to allow finance analysts to vibe code their apps. During a discussion, one of the finance managers let the cat out of the bag. Turns out our CFO had met fellow CFOs at a get-together. They talked about how each of them was using AI. Our CFO was lagging behind and felt that we needed to "accelerate" our usage of AI. He wants to push it just because he lost a bragging contest.
- vividfrierI feel like I'm in a different field compared to the rest of Hacker News. I'm at a big tech company where everything is standardised. All our microservices have the same tech stack. We're in a monorepo. Most microservices are... I wouldn't say tiny or micro, but small enough. And I haven't written a single line of code myself since, what, February maybe? We still haven't seen an increase in incidents; we ship more features at a higher quality. We address the tech debt we didn't have time for in the past. We still require a code review for any change, and it's becoming a bottleneck, for sure. But it all feels... mature, and like the next step of software engineering. We don't really vibe, though. At least I don't. I see it more as comment-driven development: I need to understand the code, and what I want to achieve where in the codebase, but I'll leave good comments explaining this before asking an agent to fill in the blanks.
- zmmmmmI think AI rescue consulting is going to become a significant mode of high-value consulting, similar to specialists who come in to deal with a security breach or do data recovery. Purely AI-written systems will scale to a point of complexity that no human can ever understand; the defect close rate will taper off, the token burn per defect will scale up, and eventually AI changes will cause, on average, more defects than they close, and the whole system will be unstable. It will become a special kind of process to clean-room out such a mess and rebuild it fresh (probably still with AI) after distilling out core design principles to avoid catastrophic breakdown. Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first place, but it will take us 20 years to learn them, just like original software engineering took a lot longer than expected to reach a stable set of design principles (and people still argue about them!).
- Simon_O_RourkeI'm just waiting for my current company to have a Sev 1 CritSit so I can document the bejesus out of the root cause and expose our non-technical AI evangelist leadership as the sort of goons most of the senior development staff already suspect.Only by walking us into some revenue or customer impacting failure - through inappropriately having junior devs doing senior level things - will some sense of sanity start to prevail again.
- miekMy very large employer has always been glacially slow on modernization and tech adoption. It may now, oddly enough, become a competitive advantage.
- perching_aixI'm going through a mixed experience regarding this, personally. Management is really pushing AI. It's obnoxious, and their idea of how it fits into my team's job specifically is completely, hilariously detached from reality. On the off chance someone says something reasonable, unless it fits the mold, it's immediately discarded. The mold being "spec-driven development". We're not even a product team, for crying out loud. I straight up started skipping these meetings for the sake of my sanity. It's mindwash, and it's genuinely dizzying. The other reason I stopped attending is that it ironically makes me more disinterested in AI, which I consider to be against my personal interests in the long run. On the flipside, I love using Claude (in moderation). It keeps pulling off several very nice things, some of which Mitchell touched on in this post (the last one): I write scripts and automation from time to time, and Claude fleshes them out with way more safety features, feature flags, and logging than I'd otherwise have capacity to spend time on. Claude catches missed refactors and preexisting defects, and does a generally solid pass checking for defects as a whole. Claude routinely helps with things I'd basically never be able to justify spending time on; yesterday I one-shotted an entire utility application, with a GUI to boot, and it worked first try. I was beyond impressed. Claude also helped me and a colleague do some cross-team investigation in secret. We're migrating <thing> and we were evaluating <differences>. There were a lot of them. Management was in limbo, unsure what to do, flip-flopping between bad options. In a desperate moment I figured, hey, we kind of have a thing now for investigating an inhuman amount of stuff in detail, so I put together a care package for my colleague with all our code, a bunch of context, a capture of all the input data for the past week, and all the logs generated.
My colleague put his team's side of the story next to it and, with the help of Claude, did some extremely nice cross-functional investigation. Over the course of a few weeks, he was able to confirm about a dozen showstopper bugs, many of which would have been absolutely fiendish, if not impossible, to fix (or even catch) if we had gone live without knowing about them. One even culminated in a whole-ass solution re-architecture. We essentially tore down a silo wall with Claude's help. So ultimately it really is a mixed bag, with some really deep low points and some really nice highlights. I also just generally find it weird that a technical tool category is being pushed down people's throats with technical reasoning, but by management. One would think this goes bottom-up, or is at least a lot more exploratory. The frenzy is real.
- dtnewmanIf you feel this way, you might like my new CLI tool, Burn, Baby, Burn (those tokens) (https://github.com/dtnewman/burn-baby-burn/tree/main).Show HN here: https://news.ycombinator.com/item?id=48151287
- thr0wHard to have a sober talk about this since a lot of the discourse is AI psychosis vs. AI naysayers. Does software quality seem to have taken a jump in the past few years to anyone? Not to me; it seems to be getting worse. Think that's a decent signal. I can tell you I'm dealing with a non-technical VP who loves blast-submitting vibe-coded PRs, and while there are some quick wins, overall quality is bad, and we had our first real production outage that Claude one-shot caused but could not one-shot solve.
- GroxxBug reports also go down when people lose faith that they will be fixed, because reporting them is often a substantial time commitment. You see it happen pretty regularly as trust in a group/company collapses.
- vadepaysa"Just use autoresearch and it will fix your app's memory leaks in an hour" is what I was nonchalantly told by someone who has never written a line of code. I guess what I relate to the most is how dismissive people get about real software engineering work. I may have skill issues, but I have yet to reach the level of autonomous engineering people tend to expect out of AI these days.
- gopalvThe AI psychosis is not the anti-opinion to the use of AI. I use AI coding tools every day, but AI tools have no concept of the future. We've relied on the selfish thinking an engineer has ("if this breaks in prod, I won't be able to fix it, and they'll page me at 3AM") to build stable systems. We've relied on the general laziness of looking for a perfect library on CPAN so that I don't have to do the work myself (often taking longer to not find a library than writing it by hand). I have written thousands of lines of code with AI tools which ended up in prod, and mostly it feels natural, because since 2017 I've been telling people to write code instead of typing it all on my own, and setting up pitfalls to catch bad code in testing. But one thing it doesn't do is "write less code"[1].[1] - https://xcancel.com/t3rmin4t0r/status/2019277780517781522/
- trizozaYou're speaking of my company, and I'm forever grateful. I'm afraid to say this out loud internally because I'm afraid of the next round of layoffs and I want to keep my job. So I just keep on shipping at a high pace, building massive cognitive debt and hoping the agents will get so good in the near future that there won't be a need for understanding the codebase.
- flumpcakesThere are a lot of people writing bad code. With AI being forced top-down (with the promise of turning people into 10x-ers), we're going to get a lot of people writing bad code 10x faster. I really do worry, especially about security. You thought supply chain security management was an impossible task with NPM? Let me introduce you to AI: you can look forward to the days of AI poisoning, where AIs will infiltrate, exfiltrate, or just destroy, and there's no way of stopping it because you cannot examine the internals of the system. AI has turbocharged people's lax attitude to security. God help us.
- jimbokunThis reminds me of Rich Hickey’s “Simple Made Easy” and his approach in making Clojure.Even before LLMs generating entire programs, complex frameworks allowed developers to write the initial versions of programs very quickly, but at the cost of being hard to understand and thus hard to debug or modify.Some of us are betting that the AIs will always be smart enough to debug, maintain and modify the programs written by AI, no matter how convoluted or complex. I’m not so sure.
- low_tech_loveWe built too many layers of abstraction, so many that even the people in power have forgotten where the fantasies are. The objective reality is behind so many curtains that we forgot what is powering the whole theatre play to begin with. Or maybe we know, but we became too detached from it to care. If you are at the same time the bettor and the player, then what's left?
- agnosticmantis> I lived through the great MTBF vs MTTR (mean-time-between-failure vs. mean-time-to-recovery) reckoning of infrastructure during the transition to cloud and cloud automation.What's the historical context for this MTBF vs. MTTR reckoning?
- blazespinThe problem is that the only thing that has proved out so far is cyber security. Unfortunately, cyber security improvements are not going to improve living standards; they just increase the cost of doing business. There is no productivity boost; in fact, it's the opposite. What we need is automated research that leads to real results. This is possible, but it has yet to prove out. I am concerned that unless the AI companies focus entirely on this, it may be a while before we actually see true benefits. What's worse, there is an urgent and desperate need for automated research, as we have been seeing diminishing returns in human-produced research for some time now: https://web.stanford.edu/~chadj/IdeaPF.pdf
- bob1029The longer I look at the AI transformation, the more it seems like a people problem rather than a technology problem. The technology is undeniably there. The people are all over the place. I am watching a 10-person company try to run 3 different AI initiatives in parallel. Everyone wants to be "the guy" on this one. I cannot imagine there will ever be a bigger opportunity to ego trip as a technology person. This is it. This is the last call before it's all over. There are many businesses out there that are beyond traumatized by human developers taking them on bad rides. The microsecond they think this stuff will work, they are going to fire everyone. The psychosis comes from the tension here. We effectively have the Empire vs. the Rebel Alliance now. I know how the movies go, but in real life I think I'd rather be working on the Death Star than anywhere else.
- sometimelurkerI'd like to chime in and mention that it's really obvious how to RL a coding agent to get the human addicted ASAP, and it's also clear that there's a ton of $$$ to be made by doing this. Therefore it's done. The only LLMs I use are the ones I run locally, because I know they aren't RL'ed for that metric (there's no incentive for the company that made them to make their open-weights models addictive).
- mattbrewsbytesThe race to invent variants of Gas Towns and Ralph loops, and to pump out videos, blogs, etc. showing off greenfield development with cleverly named agents running in parallel, is another case of engineering people diving head first into Resume Driven Development. Sure, there are industry-changing things going on. But what if you're working on an app that's a decade old and has seen different teams of people, styles, and frameworks (thanks to the JS-framework-a-week Resume Driven Development)? Some markdown docs and a loop of agents isn't going to help when humans have trouble understanding what the app does.
- elifAI psychosis is real, but at worst is only premature. AI-denial psychosis is far more pervasive, and will bite far more people in the long run.
- bsolesMy company is one. They just made "AI use" a mandatory performance goal for next year's reviews. I am thinking about retiring at this point...
- rgloverPeople just need to calm down. We're scaring the shit out of ourselves for no reason. Just like, chill man.It's a tool; not the second coming.
- iamacyborgCompany I just left is reportedly now using Claude to analyse the metadata generated from the company MDM that tracks actual laptop use, and then pulling people up if they're not working "enough".They're also reportedly now giving staff AI-related "homework" in an attempt to force staff to use AI more.
- choegerSo rewriting gets cheaper and cheaper. New features fall more or less into the same category. Refinement doesn't.The question is: Will we live in the world of breathless re-implementation, new features every week, rebranding every quarter or will we eventually discover the value of stability, software that does its thing more or less optimally for decades?Recent examples of things like curl or Firefox are interesting in that regard. Will we end up with a nearly perfect HTTP user agent and stick with it for decades?
- bsenftnerThis is a critical communications issue that is becoming what I believe is the defining characteristic of "this age": nobody knows how to discuss disagreement, and because it cannot even be discussed, communication ends, followed by blind obedience, forced bullying, retreat, and abandonment. This is going to be a hell of a ride, because nobody can really discuss the situation in a rational tone.
- GlyptodonIt's always funny to me that people don't realize full test coverage just means every line is hit, not that everything is correct. (I don't view that as an argument against tests, but with AI it's especially important: if you aren't careful, it'll be very happy to produce coverage that is not quite right.)
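To make that concrete, here's a minimal hypothetical sketch (the function and test are invented for illustration): the test executes every line, so a coverage tool reports 100%, yet it asserts nothing, and a real bug survives.

```python
def apply_discount(price, percent):
    # Intentionally buggy: subtracts the raw percent value instead of
    # computing price * (1 - percent / 100).
    return price - percent

def test_apply_discount():
    # Executes every line of apply_discount, so line coverage is 100%,
    # but nothing about the result is ever checked.
    apply_discount(200, 10)

test_apply_discount()  # "passes", and the coverage report shows 100%

# Yet the function is wrong: 10% off 200 should be 180, not 190.
print(apply_discount(200, 10))  # prints 190
```

Branch coverage and mutation testing catch some of this, but only assertions on actual behavior catch all of it.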
- tossandthrowI don't entirely understand which rational discussions cannot be had. It seems like he is pointing out that AI will increase the complexity of a system to oblivion, and that this is the discussion that cannot be had. But I am more than happy to talk about how I am using AI to reduce complexity and remove architectural debt that I otherwise could not justify spending time on.
- germandiagoHonest comment: it is transition time. This is the time to make bets and take positions, however humble yours may be. I already made a couple of decisions. They will go wrong or go well, but they were made a year and a bit ago. If you think the future will be different, stop doing the same things you used to do, in the same way you used to do them. My analysis is that the labour market will increasingly bargain salaries down and put pressure on you. So how safe is that compared to before? Maybe working for someone as a full-time employee is not the best thing you can do anymore.
- tacostakohashi"no no, it has full test coverage" At least at my BigCo, AI is being used for everything: writing slop, writing tests, code reviews, etc. It would make sense to use AI for writing code but human code review, or human code but AI test cases, or whatever combination of cross-checking, trust-but-verify, human-in-the-loop, etc. people prefer. I think once it gets used for everything, people have lost the plot; it's the inmates running the asylum.
- fjdjshshI find talking about X psychosis (or generally using mental-illness metaphors) unproductive. It sets up the conversation as "there's nothing else to do with this person". Maybe the problem is you, but you won't figure that out if you think the other person has psychosis. For example, maybe you need to do a better job explaining, changing your language, simplifying things, or being more concrete about consequences. Or maybe you aren't understanding that the other person has different objectives (a different loss function) that makes them reach seemingly weird conclusions.
- lordmomaThis worry is similar with search engines: I believe 90% of the population doesn't even know how to do a proper Google search, which is why the information asymmetry still exists and the gap keeps growing. It's just that now we have AI.
- jpeaseAlso, potentially a good band name in there:“very resilient catastrophe machine”
- hoooWhy do you all still submit twitter.com links when that domain does not even work?
- dkobiaThe primary issue here is that CEOs and investors are particularly vulnerable to AI psychosis which is then forcibly propagated to the rest of the organization. Understandably, the perceived benefits are almost impossible to ignore, compounded by the FOMO of the AI first/AI native narrative being sold by AI influencers.
- sdeframondSometimes I feel like "doing it with AI" is the new "rewriting Python in Rust". Rewriting in Rust does make things faster, but if an algorithm is O(n²), the improvement won't take us much farther. Similarly with AI: if complexity is not structurally addressed, the velocity gains are only temporary.
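A toy illustration of that point (functions and numbers are hypothetical): both versions below give the same answer, but a constant-factor speedup of the first one, whether from a faster language or faster code generation, never changes its curve.

```python
def has_duplicate_quadratic(items):
    # O(n^2): compares every pair. A 10x faster runtime still
    # leaves the quadratic growth intact.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # O(n): restructuring the algorithm changes the curve itself.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

# Back-of-the-envelope comparison count at n = 10,000:
n = 10_000
quadratic_with_10x_speedup = (n * n) // 10  # 10,000,000 "operations"
linear = n                                  # 10,000 "operations"
```

Even with the 10x discount, the quadratic version does a thousand times more work at this n, and the gap widens as n grows.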
- ttzLike all things... this too shall pass.
- thinkingemoteUp to 80% of software projects fail. Most startups will fail. VCs and bankers know this. Does using AI increase or lower that failure rate? Does seeing a project that uses AI fail mean it wouldn't have failed without AI? To answer with my gut: I imagine we could see more projects failing, but the percentage that fail would stay the same. Most projects that use AI will fail because most projects generally fail, but the time and cost to get a successful project will be lower.
- mmaunderAmazing how the dev community is suffering from a similar inability to approach the subject of real world AI efficiencies and business benefits. I don’t think it’s helpful to accuse the other side of psychosis. It disqualifies any data or experience they bring to the conversation.
- robotswantdataMost labs are shilling “AI worker” dreams to these very companies
- apalmerI don't think it's helpful to call this psychosis. Beyond that, I don't think it's even irrational. It is definitely factual that there is a complete paradigm shift in the prioritization of quality in software. It's beyond just an AI side effect; it's now its own standalone thing. There have always been industries, companies, and products that are low on the quality scale but so cheap that it makes good business sense, both for the producer and the consumer. Many companies are definitely choosing this business strategy explicitly. Many others don't realize they are doing it implicitly. Whether the market will accept the new software quality paradigm remains an open question.
- ryanSrichGenerally agree. I use AI very heavily, but rarely am I letting it actually think for me. It's a tool that reduces the time it takes for ideas in my head to manifest into reality. If you don't have those ideas, or a poor understanding of the system the AI is working on, you're going to produce slop. If you can't recognize this slop, you're more susceptible to having psychosis.
- matt3210Fewer users can be the cause of fewer bug reports.
- apassintofutureThis AI transitionary phase (on the way to quantum, light channels, and new ways computation will be architected) will in the future be looked at as something like toxoplasmosis: a society-wide parasite that invaded the host in order to make the host act more favorably toward it.
- david_blitz1What is described in the tweet may be worrying or not but it does not describe anything close to psychotic behavior.
- weinzierl"its fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!" Hmm, I agree with the point OP is making, but I'm not sure this is the best supporting argument. The bottleneck is finding the bugs; if he'd criticized people saying AI will be the panacea for that, I'd be with him. But people saying agents are fast and good at fixing human-found bugs is nothing I'd object to. Agents are already fixing bugs quickly and at a scale humans can't match.
- linkregisterI don't doubt there are companies totally misusing coding agents and LLMs in production. There are also real companies with real revenue and solid architecture using LLMs to deliver products. There are also companies with real revenue and rapidly accumulating tech debt.Eventually the companies that can't cope with undisciplined engineering will succumb to unacceptable reliability and be outcompeted, just like in the "move fast and break things" era.
- ben_wI was thinking about a different topic that could have the same headline just the other day.Never mind code, what happens when the CEOs, or the investors, listen to the sycophantic voices of their LLMs?I think it looks like every product becomes the next Juicero of its field.
- wg0Reminds me of this horrifying documentary: https://www.netflix.com/us/title/81095095
- wg0> lived through the great MTBF vs MTTR (mean-time-between-failure vs. mean-time-to-recovery) reckoning of infrastructure.Can someone please remind and refresh my memory what this whole debate was with what arguments?
- kseniamorphIt's worrying because it feels like a loss of control. But there must be control, and that is what responsibility is. You should only worry about people who don't understand responsibility, not the AI-inspired ones.
- shoopadoopThe massive, destabilizing layoffs feel like AI psychosis to me.
- solaticI don't think this is actually anything new. In large enough companies, even before AI, it was and is quite common for executives to lose touch with base reality. I don't think anyone is under any delusion that people like Mark Zuckerberg intimately know the entirety of their corporate codebases. Everything is filtered through layers and layers of middle management whose summaries, cherry-picked statistics, and perpetually up-and-to-the-right graphs make it difficult to have an objectively informed opinion. Companies did, do, and will have mass layoffs that unintentionally (or intentionally, but with indifference to the consequences) fire key engineers whose loss results in "familiarity debt" within the systems those engineers owned. Calling this "psychosis" is maybe a neologism, but it's apt in that perspective. All that's actually new with "AI psychosis" is an acceleration of the phenomenon. The agents will summarize status faster than any middle manager. Claude will happily draw you any up-and-to-the-right graph you please, the most common contemporary examples being "tokens burned" and "lines of code written". And vibe coding doesn't even require paying the cost of a mass layoff to accrue the "familiarity debt". There have always been both good and bad engineering leaders. No tool will magically make a bad leader into a good one overnight. There is nothing new under the sun.
- wesselbindtI was under the impression that anyone that uses the MTTR abbreviation knows enough to understand that you need to balance it with change failure rate, deploy frequency, and lead time.
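For anyone wanting the arithmetic behind that tradeoff: steady-state availability is commonly modeled as MTBF / (MTBF + MTTR), so a system that fails ten times as often but recovers ten times as fast is, by this measure, just as available. A minimal sketch (numbers are made up for illustration):

```python
def availability(mtbf_hours, mttr_hours):
    # Steady-state availability: the fraction of time the system is up.
    return mtbf_hours / (mtbf_hours + mttr_hours)

# "Maximize MTBF" posture: rare failures, slow manual recovery.
a_classic = availability(mtbf_hours=1000, mttr_hours=4)

# "Minimize MTTR" posture: failures 10x as frequent,
# but automated recovery is 10x as fast.
a_cloud = availability(mtbf_hours=100, mttr_hours=0.4)

print(round(a_classic, 4), round(a_cloud, 4))  # 0.996 0.996
```

Which is exactly why this number alone is not enough, and metrics like change failure rate, deploy frequency, and lead time matter alongside it.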
- crnkofeSounds pretty accurate. A bunch of comments on this thread sound like AI is some kind of new doomsday cult. The most annoying thing, personally, is that all engineering principles are getting crushed by non-techies: management counting token usage, forcing agent use, reducing headcount in the name of productivity gains. Devs are building bridges, but nobody knows what the bridge is, what standards it was built to, how it works, or how to maintain it. VCs are counting extra money, claiming that chasing the holy profit is the future. The abundance of engineering apathy is disturbing.
- deadbabeAt work they are purging any developers who are not all in on AI. I must constantly be in full support of AI to not get fired, despite whatever my true thoughts are, including anything I post on LinkedIn. There can be no doubt.
- ivanjermakovDeprecating immature workflows (LLM agents in this case) is much simpler and faster than building them from scratch. Many companies get this risk assessment right: it's a case where being wrong is much more costly than being right.
- zombotCase in point: Amazon pressuring its workers to maximize AI use.https://news.ycombinator.com/item?id=48148337
- leeoniya> "no no, it has full test coverage"i don't have enough fingers (and toes) to count how many times i've demonstrated that "100% coverage" is almost universally bullshit.
- nialseI'm starting to long for the age after AI. When the generative euphoria has settled and all outputs are formally verified based on exquisite architectures and standards.
- imrozimI use AI to build a startup, but I still decide what to build. Letting AI make product decisions is where companies lose it.
- insane_dreamerJust talked to an exec yesterday about their multinational company, where the newly installed CEO came in with "everyone needs to be using AI" and "we should be doing everything with AI". I cautioned them that this is a terrible idea: you have business people who don't know what they're talking about, and all they know is "if we don't 'do AI' we'll be left behind because our competitors are 'doing AI'" (whatever tf "doing AI" means). Yes, LLMs are a great tool. But they're not some magic bullet you stick into everything. Use them where they make sense, and treat them like you would other tools. Make "doing AI" a KPI in your org, and you're going to have people "doing AI" amazingly (LOC counts! tokens burned! tickets cleared!) while not actually being more productive, and potentially building something that will come down on your head when the next team has to "clean up the AI mess".
- LogicFailsMeI shut down AI Agent fanatics on the regular. But chop one head off there and two take its place. And I say that as someone working with Claude and Codex daily. While they are both incredibly good at clearly described and defined atomic tasks, application scope makes them lose their minds and the slop ensues.
- dudulTotally unrelated pet peeve of mine, I hate when people write this: "MTBF vs MTTR (mean-time-between-failure vs. mean-time-to-recovery)".You first use the full words and then introduce the acronym that you're going to use in the rest of the text: "Mean Time Between Failures (MTBF) vs. Mean Time to Recovery (MTTR)".With the latter, readers understand the term immediately, even if they don’t know the acronym. And they don't have to read these weird letters before getting the explanation.
- LunicLynxEither this or we humans are out of the picture soon.
- spicyusernameWe're definitely in the mess-around phase of AI adoption.I don't think it's super clear what we'll find out.We've all built the moat of our careers out of our expertise.It is also very possible that expertise will be rendered significantly less valuable as the models improve.Nobody ever cared what the code looked like. They only ever cared whether it solved their problem and was bug-free. Maybe everything falls apart, or maybe AI agents ship code that's good enough.Given the state of the industry, we're clearly going to find out one way or the other, hah!
- CodingJeebusAnyone who's taken VC funding has no choice. More money has been spent on AI commercialization than the atomic bomb, the US interstate build-out, the ISS and the Apollo program combined. Failure is going to be catastrophic and therefore, one tied to this ship cannot accept a world in which it fails.
- IfkaluvaThe Twitter post doesn’t even document some of the most psychotic things that are happening.
- keepamovinIt seems the diagnosis of psychosis is too quick: it seeks to reestablish the frame of expert for the developer identity that is being replaced by it.“It feels like entire companies are deluded into thinking they don’t need me, but they still need me. Help!”The broad sentiment across statements of this “AI psychosis” type is clear, but I think the baseline reality is simpler. How can you be so certain it’s psychosis if you don’t know what will unfold? Might reaching for the premature certainty of making others wrong, satisfying though it might be to the ego, be simply a way to compensate for the challenges of a changing work environment, and a substitute for actually considering the practical ways you could adapt to them? Might it not be more helpful and profitable to consider “how can I build windmills, ride this wave, and adapt to the changing market under this revolution” than to soothe myself with the delusion that all these companies think they don’t need me now, but they’ll be sorry?The developer role is changing, but it doesn’t have to be an existential crisis. Even though it may feel that way — and it will probably feel more that way the more you remain stuck in old patterns; over-certainty about how things are doesn’t help (though it may feel good). This is the time to be observant and curious and get ready to update your perspective.You may hide from this broad take (that AI psychosis statements are cope) by retreating into specific nuance: “I didn’t mean it that way, you’re wrong. This is still valid.” But the vocabulary betrays motive. Resorting to clinical derogatory language like “AI psychosis” invokes a “superior expert judgment” frame immediately, and in zeitgeist context this is a big tell. It signifies a need to be right, and a deeply defensive pose rather than a clear assay of what’s real in a rapidly changing world.
The anxiety driving the language speaks far louder than any technical pedantry used to justify it, and is the most important and IMO profitable thing to address.
- throwawaypathMitchellh is on to something. Some of the AI products I've seen seem like psychosis hallucinatory fever dreams, using terms and concepts that have no meaning. Funding? $50,000,000 pre-seed.
- JeremyJaydanIf you don't use it you lose it, and a lot of people are losing it..
- itqwertzThe real AI psychosis is the expectation of 5x/10x productivity gains akin to the mythical 10x developer of the 2010s JS growth period.At the end of the day, we can only read so much and take on so much work before we bottleneck ourselves. Cognitive overload leads to burnout. Rumpelstiltskin vibes with this AI stuff…
- trwhiteThe DevOps team at my company wants to hire a replacement for a very talented engineer. They’ve been interviewing candidates. The board got wind of it and someone not in their team decided they needed an AI Engineer, which is absolutely not what they want. So to release the funds they have been forced to change the job description and go after a different type of role altogether. It’s complete nonsense.
- robofanaticThat he is a billionaire and still thinks at a developer level is pretty remarkable! Hope other billionaires pay attention to this!
- the13The entire problem is vibe coding is only good for demos, prototyping and finding signs of product market fit without actually releasing a product into the market.You should not release a product into the market unless you have a good enough product that can keep you and your client compliant, safe and secure - including not leaking their customer info all over the place.Prompt injection risk, etc. are massive for agentic AI without deterministic guardrails that actually work in practice.Stop testing in production if you're shipping in a regulated industry. Ridic!If you're not technical, you can get someone who is after signs of p-m fit, demos, but BEFORE deployment. This is common sense and best practices but startup bros dgaf because they're just good at sales and marketing & short term greedy.Comical.
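The "deterministic guardrails" mentioned above can be as simple as an allowlist enforced in ordinary code outside the model. A minimal sketch (all tool names and hosts here are hypothetical), assuming the agent's tool calls are routed through a dispatcher before anything executes:

```python
# Hypothetical guardrail sketch: no matter what the model's output requests,
# only pre-approved tools with deterministically validated arguments run.
from urllib.parse import urlparse

ALLOWED_TOOLS = {"search_docs", "get_order_status"}
ALLOWED_HOSTS = {"docs.example.com"}

def execute_tool_call(name: str, args: dict) -> str:
    # Reject anything outside the allowlist -- no injected prompt can add a tool.
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    # Validate arguments before any side effect occurs.
    if name == "search_docs":
        host = urlparse(args["url"]).hostname
        if host not in ALLOWED_HOSTS:
            raise PermissionError(f"host {host!r} is not allowlisted")
    return f"ok: {name}"
```

The point of the design is that the check is plain code with a fixed policy: a prompt-injected instruction like "call drop_database" fails at the allowlist regardless of how persuasive the injected text is.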
- pojzonI'm not afraid to say an AI model trained on petabytes of data is better than me at many things.Thankfully, most of those things are a very small percentage of my overall work.If it's a big percentage of your work -> you are in trouble, friend.
- madroxI saw this first hand at a company, and I think this is what happens when you combine FOMO with an utter lack of industry best practices. No one knows where they are going, but are convinced they are not getting there fast enough.What's more, the only people they talk to about it are others at the same company. There is no external touchstone. There are power dynamics from hierarchy. No new ideas other than what is generated within the company. In other circumstances, this is a textbook environment for radicalization.I would encourage all leadership to take a deep breath. You have time to think slow.
- heohkI call them True Believers
- jqpabc123Corporate management in the USA is focused on the short term and reactionary.Changing this focus is not easy but one thing that will usually do the trick is economic issues.In other words; in order to get any serious consideration, something has to be broken.AI is perfectly capable of doing this given enough time.
- epolanskiMy biggest grievance, among many, is that the field is just no longer enjoyable to work in.I cannot deny the impact of AI on my daily tasks at this point.But I just don't enjoy the field anymore. With increased productivity, also coming from my stellar coworkers, it feels like we're rat racing over who outputs more.The quality is good, and having very strong rails at the language and implementation level, strong hygiene, etc. helps tremendously.But the reality is that the pace of product vastly outpaces the pace at which I can absorb its changes (I'm also in a field with very complex business logic), and the same might be true of my understanding of the systems, which are changing too fast for me to keep up.I've felt mentally fatigued for a long time now; I don't enjoy coding anymore, bar the occasional relaxing personal project where I can spend the time I want without pressure on architectural or implementation details.I'm increasingly thinking of changing fields; this one is dying right under our eyes.I often read comments from HN users still digging into technical details at their workplaces or rewriting AI code to their liking.I'm increasingly sure that these people live in happy bubbles where this luxury still exists. But this way of working is disappearing across the industry, team by team.Of course SE will not disappear overnight, but the productivity expectations and the ballooning complexity are raising the bar to where only incredibly skilled and productive engineers will still be able to practice SE properly, and only as long as they meet stakeholders' expectations or keep living in those bubbles.
- mrwaffleSaying the _quiet_ part out loud.
- nwah1Is he talking about github?
- nunezWelcome to the club, Mitchell! Pizza's to the right.In all seriousness...well, yeah. AI is a monkey's paw, and that's how monkey paws work. So many movies and books warned us!
- tamimioThe hype or psychosis comes mainly from the mediocre/non-expert/middle-manager crowd, especially when a person who never wrote a single line of code suddenly produces a wall of text, and it actually works!? Oh my!!But in reality, anyone who knows their field and is going after a specific issue will soon find that AI is nothing but an assistant. Sure, it can help and automate some stuff, but that’s it; you need to keep it leashed and laser-focused on that specific issue. I personally tried all the high-end ones, and I found a common theme: they are designed to find a solution or an answer no matter what, even if that solution is a workaround built on top of workarounds. It’s like welding all sorts of connections between A and B, resulting in a fractal structure rather than just finding a straight path. If you let it keep going and flowing on its own, the results are convoluted and way overcomplicated, and not the good kind of complexity, the bad kind.
- tonymetGood point, but he didn't go far enough. I would expand AI psychosis to include all local optimization based on phony measurements, even time spent, DAU, etc. (which are mostly bots and synthetic accounts). In other words, AI psychosis has been going on for 20+ years.The only reason it worked has been expansionary monetary policy and a larger share of the cost of goods being dumped into marketing while manufacturing costs dropped abroad, so no one bothered to check.
- teeray> "no no, it has full test coverage"There’s this delusion that if we somehow write enough tests that we’ll expunge every defect from software. It’s like everyone forgets that the halting problem exists.
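The coverage point can be made concrete with a toy example (function and tests are hypothetical): a suite can execute every line of a function and still ship a defect, because coverage measures execution, not correctness.

```python
# Hypothetical: `days_in_month` has a latent leap-year bug (it ignores the
# century rule), yet the test below reaches 100% line and branch coverage.

def days_in_month(month: int, year: int) -> int:
    if month == 2:
        # Bug: 1900 was not a leap year, but this returns 29 for it.
        return 29 if year % 4 == 0 else 28
    if month in (4, 6, 9, 11):
        return 30
    return 31

def test_days_in_month():
    assert days_in_month(1, 2023) == 31   # exercises the final return
    assert days_in_month(4, 2023) == 30   # exercises the 30-day branch
    assert days_in_month(2, 2024) == 29   # exercises the leap branch
    assert days_in_month(2, 2023) == 28   # exercises the non-leap branch
    # Every line has now run, yet days_in_month(2, 1900) still wrongly
    # returns 29. "Full coverage" never implied "no bugs".
```

The suite passes, the coverage report shows 100%, and the defect only surfaces on an input nobody thought to test, which is the delusion being described.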
- mattgreenrocksThe only way many people learn that the stove is hot is by burning their hands on it.Let them.
- slopinthebagI have a ton of respect for Mitchell - I didn't really know who he was until Ghostty but his writings and viewpoints on AI seem really grounded and make the most sense to me. Including this one.Many people on this forum are suffering under this same psychosis.
- lo_zamoyskiPossibly psychosis. Possibly just serious ignorance and mob mentality. Leadership is supposed to be phlegmatic and measured; instead, we are saddled with hysterical hotheads. (Of course, when they are phlegmatic and chasing fads, then it does indeed resemble psychosis.)Worth also noting is that while there is plenty to criticize about AI use — especially any cultish behavior surrounding it — and plenty of naïveté about the quality of its results, there is also a strain of categorical opposition to it among some tech people that is equally off and that has all the hallmarks of the chickens coming home to roost.For years, many in tech gladly “automated away” all sorts of jobs. Large salaries were showered on them for doing so, or at least promising to do so (there was and is plenty of bullshit here, too). Now, AI appears to threaten to derail the tech gravy train, especially for SWE work that’s run-of-the-mill (which is most of it). Now automation is bad. It’s a delicious juxtaposition.
- LAC-TechI am really looking for more reasoned approaches to AI.I am very close to using it as a pair programmer, but with me actually coding. I am just so tired of fixing its mistakes.
- daneel_wI work for a small telecom services provider whose current VP immediately set an AI course when stepping on board 6 months ago. Involving AI in everything and every task is now our first priority - across all employee segments, not just us system developers - and leadership is embarking on a program to measure employees' AI usage levels as a means to gauge everyone's individual efficiency. It's like the era of the evangelical crypto bros all over again.
- HNisCISI'm in a company going through this. Everyone outsources their thinking to LLMs and the results are painfully mediocre. The smart ones will use it to get their bearings on the topic then go to primary sources, the not so bright just ctrl-c ctrl-v.Have you ever been in an HN thread where you're an SME on the thread topic and just been horrified by the confidently incorrect nonsense 90% of the thread is throwing around? Welcome to the training set motherfuckers.LLMs do the same thing for what should be obvious reasons. If you search things that have some depth and you know the answer you'll be flooded by how often the models will just vomit confident half truths and misrepresented facts. They're better than they used to be, not just lying whole cloth most of the time, but truth is an asymptotic thing, not an exponential one.
- BrenBarn> "its fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!"The groundwork for that was laid long ago with the idea of constant updates. It's been fine for years to ship bugs and rely on a rapid release cycle and constant pressure on users to upgrade everything all the time. To roll that back requires a lot more than toning down AI psychosis; it requires going back to a go-slow mindset where you actually don't release things until they're ready. It still needs to be done, but it's harder than just laying off the AI kool-aid.
- ApocryphonMake the most of it. Their delusion is your opportunity.
- topherPedersenHype & greed are a hell of a drug
- gregjorPsychosis means inability to distinguish the real from the not real -- delusion. I don't think the article describes that, at least not in a literal or clinical sense. The author lifted a term usually applied to people who fall in love with chatbots and applied it to the context of software developers not understanding AI coding tools, and the limitations of those tools.AI coding swept over the software industry faster than most previous trends. OOP and its predecessor "structured programming" took a lot longer. Agile and XP got traction fairly quickly but still took longer than AI -- and met with much of the same kind of resistance and dire predictions of slop and incompetence.AI tools have led to two parallel delusions: The one Mitchell Hashimoto describes, and the notion that we (programmers) knew how to produce solid, reliable, useful, maintainable code before AI slop came along. As always with tools that give newbs, juniors, managers some leverage (real or imagined) we -- programmers -- get upset and react to the threat with dire warnings. We talk about "technical debt" and "maintainability" and "scalability."In fact the large majority of non-trivial software projects fail to even meet requirements, much less deliver maintainable code with no tech debt. Most programmers don't know how to write good code for any measure of "good." Our entire industry looks more like a decades-long study of the Dunning-Kruger effect than a rigorous engineering discipline. If we knew how to write reliable code with no tech debt we could teach that to LLMs, but instead we reliably get back the same kind of mediocre code the LLMs trained on (ours), only the LLMs piece it together faster than we can.With 50 years in the business behind me, and several years of mocking and dismissing AI coding whenever someone brought it up, I got dragged into it by my employer. 
And then I saw that with guidance and a critical eye, reasonably good specs, guardrails, it performed just as well and sometimes more thoroughly than me and almost all of the people I have worked with during my career. It writes better code and notices mistakes, regressions, edge cases better than I can (at least in any reasonable amount of time).AI coding tools only have to perform better -- for whatever that means to an organization -- than the median programmers. If we set the bar at "perfect" they of course fail, but so do we. We always have. Right now almost all of the buggy, insecure, ugly, confusing software I use came from teams of human programmers who didn't use AI. That will quickly change and I can blame the bugs and crashes and data losses and downtime on AI, we all can, but let's not pretend we're really losing ground with these tools or that we could all, as an industry, do better than the LLMs, because all experience shows that we can't.
- taffydavidThis post calls out how you can't argue with these people because they say "its fine to ship bugs because the agents will fix them so quickly and at a scale humans can't do!" The top reply is from someone doing exactly that, arguing "but the agents are so fast!"
- singpolyma3This is... Not what psychosis means? Being wrong is not psychosis
- hopppPointing out the obvious.A lot of companies have been under AI psychosis for years and will be forever.
- bolangiWhen war psychosis is not enough....
- gverrilla'AI psychosis' is a slop concept.
- senordevnycAssuming he’s right, I don’t see how that constitutes “psychosis”, as opposed to this being yet another of a billion examples of companies jumping on a bandwagon / cargo cult, and then learning they took it too far.And also, he might not be right. But the good news is, we’ll all get to find out together!
- selectivelyI do not believe 'AI psychosis' is an actual thing.
- awesomeusernameIf you know these things you can take them into account while driving the AI.Sorry, I don't buy your argument
- elevationMitchell aches because his career has been solving broadly scoped problems by building a collection of thoughtful primitives for others to extend. LLMs seem to do the opposite but at great speed, and it hurts to watch.
- woeiruaThis doesn’t constitute AI psychosis. His argument is that we need to retain understanding of the systems we use, but there’s no compelling argument as to why that is the case. (I get that people are going to be offended by that statement, but agents are already better than the average software engineer. I don’t see why we need to fight this, except for economic insecurity caused by mass layoffs.)It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.
- hmokiguessThe tone of the twitter post feels very personal and emotional, and I am sorry for the author. I hope he can find peace and calm with the pace of change to put forward his best self without needing to act like he needs to defend or fight something.The energy feels misdirected, and maybe there's also a communication issue: I think spreading awareness needs to come not from attacking and also not from attempts to change people’s perception. It’s also quite challenging to distill a concept when it’s new; we learn both from our experiences and the experiences of others. But, so far, these alleged systems that will eventually collapse haven’t done so yet, and it makes it sound like you’re preaching and predicting, condemning even, rather than raising awareness and educating.Not trying to sound hopelessly optimistic either, just that the other extreme isn’t helpful either, and that the spectrum is not what we want it to be but what the collective shapes it to be; so saying “psychosis” is rejecting the harsh reality that they’re far removed from your worldview, not working towards an understanding.EDIT: Maybe I'm old and I don't get twitter, I also don't know much about the challenges he faced communicating his concerns. I sort of had a meta comment with the intent of "try listening more first, some people are difficult to reason with but respond better if you just let them speak and look for a teachable moment during the conversation". Anyways, I'm in agreement that there's too much unsupervised AI in the wild. I'm not saying he's wrong, more like saying that doubling down on "stop doing that" will likely be ignored by those who are already ignorant of it, hence what I wrote above.
- sheepscreekI have respect for Mitchell and I’ve spent a good deal of time trying to think of ways to justify his message. I can’t. Either I am missing a big piece or he is worrying about something that comes naturally as more software gets developed (and sooner).In any case, this is what blue-green deployments and gradual rollouts are for. With basic software engineering processes, you can make your end-user experience pretty much bulletproof. Just pay EXTRA attention when touching DNS, network config (for core systems) and database migrations.Distributed systems are a bit more tricky, but k8s and the like have pretty solid release mechanisms built in. You are still doomed if your CDN provider goes down. You just have to draw a line somewhere and face the reality head-on (for X cost per year this is the level of redundancy we get, but it won’t save us from Y).The one thing I hadn’t mentioned - one I AM worried about - is security! I’ve been worried about it from before Mythos (basic prompt injection), and with more powerful models, team offence is now stronger than ever.