
Comments (193)

  • losvedir
I really like Oxide's take on AI for prose: https://rfd.shared.oxide.computer/rfd/0576 and how it breaks the "social contract" where usually it takes more effort to write than to read, and so you have a sense that it's worth it to read. So I get the frustration that "ai;dr" captures. On the other hand, I've also seen human writing incorrectly labeled AI. I wrote (using AI!) https://seeitwritten.com as a bit of an experiment on that front. It's basically a little keylogger that records your composition of the comment, so someone can replay it and see that it was written by a human (or a very sophisticated agent!). I've found it to be a little unsettling, though, having your rewrites and false starts available for all to see, so I'm not sure if I like it.
  • tomsyouruncle
I think this article overlooks the act of engaging in problem solving with an agent. Personally I find it super helpful to discuss stuff back and forth: it takes a view, explores the code and brings some insight. You take a view and steer the analysis. And together you arrive at a conclusion. By that point the AI's got so much context it typically does a great job summarising the thought process for wider discussion, so I can tweak and polish and share.
  • written-beyond
I've had friendships broken because people couldn't understand why I didn't appreciate their "hard work" on a series of 10 articles they wrote with Claude. Mind you, this person is an excellent writer; they had great success with ghostwriting and running a small news website where they wrote and curated articles. But for some reason the opportunity for Claude to write stuff they could never find the time for is too great for them to ignore. I don't care if you used AI for 99.99% of your research, but when I read your content it should be written by you. It's why I never take any article on LinkedIn seriously, even before AI; they all lack any personalization.
  • raincole
> AI-generated code feels like progress and efficiency, while AI-generated articles and posts feel low-effort
    I've noticed that attitude a lot. Everyone thinks their use of AI is perfectly justified while the others are generating slop. In gamedev it's especially prominent: artists think generating code is perfectly OK but have an acute stress response when someone suggests generating art assets.
  • dontwannahearit
It's pretty much over for the human-internet. Search was gamed, its usefulness has plummeted, so humans will increasingly ask their LLM of choice, and that LLM will have been trained on the content of the internet. So when someone wants to know something about the topic my website is focused on, chances are they will not see the material from the website directly, but a summary of what the LLM learned from my website. Ergo, if I want to get my message across I have to write for the LLM. It's the only reader that really matters, and it is going to have its stylistic preferences (I suspect bland, corporate, factual, authoritative, avoiding controversy; this will be the new SEO). We meatbags are not the audience.
  • ravirajx7
AI has kind of ruined the internet for me. I no longer feel joy in reading things, as most of the writing seems the same and pale to me, as if everyone is putting thoughts down in the same way. Having your own way of writing always felt personal; it's how you expressed your feelings most of the time. The saddest part for me is that I'm no longer able to understand someone's true feelings (which were hard to express in writing anyway, as articulation is hard). We see it being used by our favourite sports people in their retirement posts, or by someone who has lost a loved one, or by someone who just got their first job, and it's just sad that we can no longer have those old pre-AI days back.
  • petetnt
    I agree with the general statement, if you didn’t spend time on writing it, I am not going to spend time reading it. That includes situations where the writer decides to strip all personality by letting AI format the end product. There’s irony in not wanting to read AI content, but still using it for code and especially documentation though, where the same principle should apply.
  • dematz
> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.
    Doesn't ai;dr kind of contradict AI-generated documentation? If I want to know what Claude thinks about your code I can just ask it. IMO documentation is the least amenable thing to AI. As the article itself says, I want to read some intention and see how you shape whatever you're documenting. (AI adding tests seems like a good use; not sure what's meant by scaffolding.)
  • numbers
I remember this was back in 2023, when ChatGPT had first launched, and I had a manager whose English was not very good. He started sending emails that felt like they were written by a copywriter, and the messaging was so hard to parse because there was so much ChatGPT fluff around it. Very quickly we realized that what he was saying was usually somewhere in the middle, but we'd have to read through the intro and the ending of the emails so that we didn't miss anything. It felt like wasting 2-3 extra minutes per team member.
  • trollbridge
I would be glad to read anyone's prompts they use to generate AI text. I don't see why I need to necessarily read the output, though. I can take the other person's prompt, run it through an LLM myself and proceed from there.
  • mikemarsh
> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding
    > Why should I bother to read something someone else couldn't be bothered to write?
    Interesting mix of sentiments. Is this code you're generating primarily as part of a solo operation? If not, how do coworkers/code reviewers feel about it?
  • 9999gold
> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding
    Shouldn't we bother to write these things?
  • furyofantares
I call out articles on here constantly and have gotten kind of tired of it. Well, very tired of it. I am in full agreement with this post. I don't have any solutions though. Sometimes I don't call out an article - like the Hashline post today - because it genuinely contains some interesting content. There is no doubt in my mind that I would have greatly preferred the post if it were just whatever the author prompted the LLM with rather than the LLM output, and it would have better communicated their thoughts to me. But it also would have died on /new and I never would have seen it.
  • ecshafer
> When it comes to content..
    This is the root cause of the problem: labeling everything as just "content". "Content" entering the lexicon marks a mind shift in people. People are not looking for information, or art, just content. If all you want is content, then AI is acceptable. If you want art, then it falls short.
  • weinzierl
"Growing up, typos and grammatical errors were a negative signal. Funnily enough, that's completely flipped for me."
    For me too, and for writing it has the upside that it's sooo relaxing to just type away and not worry much about the small errors anymore.
  • brikym
    I've been throwing in typos and shitty grammar for a while just to seem authentic. I suppose now that will be copied.
  • yrds96
Good article. Nothing is more frustrating than reading a text when I don't even know if the person who "wrote" it actually read it. If someone wants me to read a giant text generated from a small, poor prompt, I don't wanna read it. If someone wants to fix that by increasing the effort, doing a better prompt and expressing the ideas better, I'd rather read that prompt than the LLM output.
  • Starlevel004
    I laugh every time somebody qualifies their anti-AI comments with "Actually I really like AI, I use it for everything else". The problem is bad, but the cause of the problem (and especially paying for the cause of the problem)? That's good!
  • giancarlostoro
The correct way to use AI for writing is to ask for feedback, not the entire output. This is my personal opinion. English is not my first language, so sometimes I miss what's obvious to a native speaker. I've always used tools that tell me what's wrong with my writing as an opportunity to learn to do better next time. When I finally had Firefox on my computer and it corrected my spelling, it helped me to improve my spelling 100-fold. I still have weird grammar issues with punctuation here and there, and don't ask me where to put a coma (comma?) - that's another one, because I always forget. I think using AI for writing feedback is fine, but if you're going to have it write for you, don't call it your writing.
  • stormed
My issue is that I don't necessarily trust content if it looks generated. I think I might've lost the link, but when I was helping my company with integrating Microsoft Entra with Ubuntu, I noticed the documentation from both Microsoft & Canonical was heavily generated and flat-out wrong, and it had me going in loops through unnecessary steps that were seemingly hallucinated. ai;dr is what I'm going to start saying; it's just frustrating to see.
  • hashstring
Agree, but I'm also getting tired of all these blogs that say more or less the same thing about LLMs. I've read this before.
  • TheChelsUK
Thoughts are with the people who use AI to help construct their thoughts because cognitive decline impacts their ability to construct words and sentences, but who still enjoy producing content, blogging and the indieweb. These blanket binary takes are tiresome. There is nuance, and there are rough edges.
  • cgriswald
> For me, writing is the most direct window into how someone thinks, perceives, and groks the world. Once you outsource that to an LLM, I'm not sure what we're even doing here. Why should I bother to read something someone else couldn't be bothered to write?
    Because writing is a dirty, scratched window with liquid between the panes, and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer. Outsourcing thinking is bad. Using an LLM to assist in communicating thought is, or at least can be, good. The real problem I think the author has is that it can be difficult to tell the difference, and therefore difficult to judge whether something is worth your time. However, I think author/publisher reputation is a far better signal than looking for AI tells.
  • logicprog
    Yeah, I use LLM agents extensively for coding, but I have never once allowed an LLM to write anything for me. In the past month, I literally wrote 40,000 words of researched essays on various topics, and every single word was manually written, and every source manually read, myself. Writing is how I think, how I process information, and it's also an activity where efficiency is really not the goal.
  • phito
I roll my eyes every time I see a coworker post a very long message full of emojis, obviously generated by an LLM with zero post-editing. Even worse when it's for social communication, such as welcoming a new member to the team. It just feels so fake and disingenuous; I might even say gross. I don't understand how they can think it's a good idea. I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their head than this slop.
  • pwillia7
    I am 100% right there with you. Writing in my voice is maybe the last thing I have that I can do differently and 'better' than an LLM in a couple years time, or even right now if I'm really being honest.I haven't even really tried to use LLMs to write anything from a work context because of the ideas you talk about here.
  • elischleifer
    "The less polished and coherent something is, the more value I assign to it." - maybe a bit of an overstatement ;)
  • esafak
    The purpose of communication is to reduce the cost of obtaining information; I tell you what I have already figured out and vice versa. If we're both querying the same oracle, there is nothing gained beyond the prompt itself (which can be valuable).
  • Tycho
When people put together memos or decks in the past, even if they weren't read very carefully, at least they reassured management that someone had actually thought things through. But that is no longer a reliable signal.
  • andrewdb
We are getting to a point where AI will be able to construct sound arguments in prose. They will make logical sense. Dismissing them only because of their origin is fallacious thinking.
    Conclusion: Dismissing arguments solely because they are AI-generated constitutes a class of genetic fallacy, which should be called "argumentum ad machina".
    Premises:
    1. The validity of a logical argument is determined by the truth of its premises and the soundness of its inferences, not by the identity of the entity presenting it.
    2. Dismissing an argument based on its source rather than its content constitutes a genetic fallacy.
    3. The phrase "that's AI-generated" functions as a dismissal based on source rather than content.
    Assumptions:
    1. AI-generated arguments can have true premises and sound inferences.
    2. The genetic fallacy is a legitimate logical error to avoid.
    3. Source-based dismissals are categorically inappropriate in logical evaluation.
    4. AI should be treated as equivalent to any other source when evaluating arguments.
  • soperj
> specially documentation
    How we can tell this wasn't written by an LLM.
  • nate
    i am absolutely on the fence here. I do like the ai cleanup of my rambling can do. but yes, i'm tempted to just leave it rambly, misspelled, etc. i find myself swearing more in my writing, just to give it more signal that: yeah, this probably aint an ai talking (writing) like this to you :) and yes, caps, barely.
  • smithza
Please read through this incredible book review (the book is All Things Are Full of Gods by David Bentley Hart). It is the kind of philosophy that everyone is looking past: syntactic vs. informational determinacy. LLMs are designed to create copy that is syntactically determinate (an LLM is a complex set of statistical functions), whereas the best human prose is the opposite: it does not converge on syntactic determinacy (see the quote below) but instead converges on informational determinacy. The plot resolves as the reader's knowledge grows from abstraction and ignorance to empathy, insight and anticipation. https://www.thenewatlantis.com/publications/one-to-zero
    "Semantic information, you see, obeys a contrary calculus to that of physical bits. As it increases in determinacy, so its syntactical form increases in indeterminacy; the more exact and intentionally informed semantic information is, the more aperiodic and syntactically random its physical transmission becomes, and the more it eludes compression. I mean, the text of Anna Karenina is, from a purely quantitative vantage of its alphabetic sequences, utterly random; no algorithm could possibly be generated — at least, none that's conceivable — that could reproduce it. And yet, at the semantic level, the richness and determinacy of the content of the book increases with each aperiodic arrangement of letters and words into coherent meaning."
    Edit (add-on): In other words, it is impossible for an LLM (or monkeys at keyboards [0]) to recreate Tolstoy, because of the unique role our minds play in writing. The verb "writing" hardly appears to apply to an LLM when we consider the function it is actually doing. [0] https://libraryofbabel.info
  • phyzome
    Seems pretty silly to me to rail against AI-generated writing and then say it's good for documentation.
  • keybored
> Before you get your pitchforks out..
    I know it's just the modern writing style to preempt all responses, but can't you just plainly state your business without professing your appreciation? People who waste others' time with bullshit are aholes. I don't care if it's My Great Friend And Partner in Crime, Anthropic's LLM, or a tedious template written in PHP with just enough substitutions and variations to waste five sentences on before closing it. Actually, saying it's the same thing is a bit like saying "guns don't shoot people". At least you had to copy-paste that PHP template from somewhere and adapt it to your spam. Back in the day.
  • smallerfish
    I've been doing a lot of AI writing for a site - to do it well takes effort. I have a research agent, a fact check agent, a logical flow agent, a narrative arc analyzing agent, etc etc. Once I beat the article roughly into the shape I want it to be, I then read through end to end, either making edits myself or instructing an editor agent to do it. You can create some high quality writing with it, and it is still quicker than doing it the human-only way. One thing I like (which is not reason enough by itself) is that it gives you a little distance from the writing, making it easier to be ruthless about editing...it's much harder to cut a few paragraphs of precious prose that you spent an hour perfecting by hand. Another bonus is that you have fewer typos and grammatical issues.But of course, like producing code with AI, it's very easy to produce cheap slop with it if you don't put in the time. And, unlike code, the recipient of your work will be reading it word by word and line by line, so you can't just write tests and make sure "it works" - it has to pass the meaningfulness test.
  • dizhn
    Short and sweet to have coined the term? Or did it exist already?
  • grishka
> Before you get your pitchforks out..
    > ..and call me an AI luddite
    Oh please do call me an AI luddite. It's an honor for me.
  • BobAliceInATree
> I'm having a hard time articulating this but AI-generated code feels like progress and efficiency, while AI-generated articles and posts feel low-effort and make the dead internet theory harder to dismiss.
    I think the size of the audience the AI-generated content is for is what makes the difference. AI code is generally for a small team (often one person), and AI prose for one person (an email) or a team (an internal doc) is often fine, as it's hopefully intentional and tailored. But what's even the point of AI content (prose or code) for a wide audience? If you can just give me the prompt and I can generate it myself, there's no value there.
  • 0gs
    all writing is developer documentation.
  • ef2k
    I really liked this post. It's concise and gets straight to the point. When it comes to presenting ideas, I think this is the best way to counter AI slop.
  • benatkin
Don't worry, author, I don't think you're a luddite. You make that quite clear with this:
    > I can't imaging writing code by myself again
    After that, you say that you need to know the intention behind "content". I think it's pretty inconsistent: you have a strict rule in one direction for code and a strict rule in the opposite direction for "content". I don't think writing code unassisted should be taken for granted; Addy Osmani covered that in this talk: https://www.youtube.com/watch?v=FoXHScf1mjA I also don't think all "content" is the sort where you need to know the intention. I'll grant that some of it is, for sure.
    Edit: I do like intentional writing. However, when AI is generating something high quality, it often seems like it has developed an intention for what it's building, whether one that was conceived and communicated clearly by the person working with the AI or one that emerged unexpectedly through the interaction. And this applies not just to prose but to code.
  • xpe
> Why should I bother to read something someone else couldn't be bothered to write?
    This is an easy but not very insightful framing. I want to read intelligent, thoughtful text that is useful in some way: to me, to society, to humanity. Ceteris paribus, the source of the information does not matter in itself; it matters only by association. To put it another way, "human" vs. "machine" is not the core driving factor for me. All other things equal, I would rather read A over B: (A) high-quality AI content, even if it is "only" the result of 6 minutes of human question framing and light editing [1]; (B) low-quality purely human content, even if it was the result of 60 minutes of effort. There is increasingly less ability to distinguish "human" writing from "AI" writing, and some people fool themselves about their AI-detection prowess. To be direct: I want meaningful and satisfying lives for humans. If we want to reward humans for writing more, we had better reflect on why, and if we still really want that, we had better find ways that work. I don't think "buy local" as a PR campaign will transfer easily to a "read human" movement.
    [1]: Of course AI training data is drawn from humans, so I do not discount the human factor. My point is that quantifying the effort put into a piece is not simple.
  • unconed
> Before you get your pitchforks out and call me an AI luddite, I use LLMs pretty extensively for work.
    Chicken. Seriously, the degree to which supposed engineering professionals have jumped on a tool that lets them outsource their work and their thinking to a bot astounds me. Have they no shame?
  • AnimalMuppet
Does anyone remember the Cluetrain Manifesto? It complained about corporate-speak, saying that it sounded "literally inhuman". Well, AIs are at least that bad. AIs were trained on all those corporate statements and learned to write that way, and we hate it just like we hate corporate PR-speak.
  • fleebee
I wasn't about to call them a luddite. This is a pretty poorly veiled attempt at drumming up the inevitability of AI coding. Did they really need to defend their preference for not reading LLM prose with "I will never write code manually again"?
  • noonker
I love that we came to almost the same conclusion regarding grammar. I wrote a very similar article you might enjoy: https://noonker.github.io/posts/2024-07-25-i-respect-our-sha...
  • Der_Einzige
Just use our antislop techniques and no one will ever know you used an LLM: https://arxiv.org/abs/2510.15061 (ICLR 2026). Also, you have long been able to use "logit_bias" in the API of models that supported it to ban the em dash, ban the word "not", ban semicolons, and ban the "fancy quotes" that were clearly added by "those who need to watch" to make sure they can figure out whether or not you used an LLM.
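The logit_bias trick mentioned above can be sketched roughly as follows. This is a minimal sketch, not the paper's method: the token IDs are invented for illustration (real IDs come from the model's tokenizer, e.g. tiktoken for OpenAI models), and the payload shape assumes an OpenAI-style chat completions API.

```python
# Sketch: suppressing stylistic "tells" (em dash, semicolon, fancy quote)
# via the logit_bias request parameter.
# NOTE: the token IDs below are made up for illustration only; real IDs
# must be looked up with the model's tokenizer.
BANNED_TOKEN_IDS = {
    "\u2014": 2345,   # em dash (assumed ID)
    ";": 26,          # semicolon (assumed ID)
    "\u201c": 9999,   # left "fancy" quote (assumed ID)
}

def build_logit_bias(banned_ids, bias=-100):
    """Map each banned token ID to a strong negative bias.

    In OpenAI-style APIs the keys are token IDs as strings, and a bias
    of -100 effectively forbids the token from being sampled."""
    return {str(tid): bias for tid in banned_ids.values()}

# Hypothetical request payload for a chat completions endpoint.
payload = {
    "model": "gpt-4o",  # any model whose API supports logit_bias
    "logit_bias": build_logit_bias(BANNED_TOKEN_IDS),
    "messages": [{"role": "user", "content": "Write a short paragraph."}],
}
```

Note that banning single tokens is blunt: a multi-character "tell" may tokenize into several IDs depending on context, so in practice you would enumerate every variant the tokenizer produces.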
  • martythemaniak
I use a technique where LLMs help me write, but the final output is manual and entirely mine. It's a bit of a heavy process, but I think it blends the power of LLMs and the authenticity of my thoughts fairly well. I'll paste in my blog post below (which wasn't produced using this method, hence its rambly nature):
    If you care about your voice, don't let LLMs write your words. But that doesn't mean you can't use AI to think, critique and draft lots of words for you. It depends on what purpose you're writing for. If you're writing an impersonal document, like a design document, a briefing, etc., then who cares; in some cases you already have to write those in a voice that is not your own. Go ahead and write those with AI. But if you're trying to say something more personal, then the words should be your own. AI will always try to 'smooth' out your voice, and if you care about it, you gotta write it yourself.
    Now, how do you use AI effectively and still retain your voice? Here's one technique that works well: start with a voice memo. Just record yourself, maybe during a walk, and talk about a subject free form; skip around, jump between sentences, just get it all out of your brain. Then open up a chat, add the recording or transcript, clearly state your intent in one sentence, and ask the AI to consider your thoughts and your intent and to ask clarifying questions: what does the AI not understand about how your thoughts support the clearly stated intent of what you want to say? That'll produce a first draft, which will be bad. Then tell the AI all the things that don't make sense to you, that you don't like; comment on the whole doc and get a second draft. Ask the AI if it has more questions for you; live chat makes this conversation go smoother, since when the AI is asking you questions you can answer freely by voice. Repeat this one or two more times, and a much finer draft will take shape that is closer to what you want to say. During this drafting stage the AI will always try to smooth or average out your ideas, so it is important to keep pointing out all the ways in which it is wrong.
    This process front-loads all the thinking. Once you've read and critiqued several drafts, all your ideas will be much clearer and sort of 'cached', ready to be used in your head. Then sit down and write your own words from scratch; they will come much more easily after all your thoughts have been exercised during the drafting process.
  • Handy-Man
OP took it from here without credit: https://www.threads.com/@raytray4/post/DUmB657FR4P
  • alontorres
I think that this requires some nuance. Was the post generated with a simple short prompt that contributed little? Sure, it's probably slop. But if the post was generated through a long process of back-and-forth with the model, where significant modifications/additions were made by a human? I don't think there's anything wrong with that.
  • extra__tofu
    said “groks the world”; didn’t read
  • nubg
    ai;dr
  • exe34
    I tell all my friends: send me your prompts. Don't send me the resulting slop.
  • xvector
    Many engineers suck at writing. I'm fine with AI prose if it's more organized and information-dense than human prose. I'm sick of reading 6 page eng blogs to find a paragraph's worth of information.
  • FrankRay78
Pop quiz: how much of the following article is AI-generated versus hand-written with intention? Come on, tell me if you can actually tell anymore. https://bettersoftware.uk/2026/01/31/the-business-analyst-ro...
  • dsign
Ever worried that ChatGPT would rat you out to the authorities because there is such a thing as thoughtcrime? For that reason, there is a vast, unexplored territory where abhorrent ideas and pornographic vulgarity combine with literary prose (or convoluted, defective, god-awful prose, like the one I'm using right now) and entertaining storytelling, and that territory will remain human-only for a while. May we all find a next read that we love. Also, we may all need to (re-)learn to draw phalli.
  • charcircuit
> Why should I bother to read something someone else couldn't be bothered to write?
    This take is baffling to me when I see it repeated. It's like saying: why should people use Windows if Bill Gates did not write every line of it himself? We won't be able to see into Bill's mind. Why should you read a book if the author couldn't be bothered to write it "properly" and had an editor come in and fix things? The main purpose of a creative work is not seeing intimately into the creator's mind, and the idea that only people who don't care use LLMs is wrong.