
Comments (186)

  • com2kid
    People demand free support. When I worked at Microsoft, it cost over $20 to have a human customer support agent pick up the phone when someone called in for help. That was greater than our product margin. Every time someone called for help, we basically lost the entire profit on that sale, and then some.
    Most common support calls were for things that were explained in the manual, the out-of-box experience, tutorial documents, FAQ pages, and so on and so forth.
    Did we have actual support issues that needed fixing? Yes, of course. And the insanely high cost of customer support drove us to improve our first-use experience. But holy cow, people don't realize how expensive support calls are.
    Edit: To explain some of the costs - This was back when people worked in physical call centers, so first off we were paying for physical office space. Next up, training: each CSR had to be trained on our product. This took time and we had to pay for that training time. We also had to write support material, and update that support material for each new version that came out. All of this gets amortized into the cost of support. Because workers tend not to stay long, you pay for a lot of training.
    Add in all the other costs associated with running a call center, and the cost per call, even for offshore call centers, is not cheap.
    In a reasonable world we'd just raise the price of the product by $x based on what % of people we expect to call in for support (ignore for a minute that estimating that number is hard), but the world isn't reasonable. Downward price pressure comes from all sides, primarily VC-backed competitors who are OK burning $$ to gain market share, and competitors at other FAANGs that are OK burning money to gain market share.
    The result is that everyone is going to try and reduce support costs, because holy cow, per-user margins are low nowadays for huge swaths of product categories (Apple's iPhone being a notable exception...)
  • SaberTail
    The "figure out what you want to say" is key. I've started to think of LLMs, at least in a business setting, as misunderstanding amplifiers.
    How many times at work have you been talking to someone else where they're using common words as jargon? Maybe it's something like "the online system" or "the platform". And it's perfectly clear to them what they mean, but everyone else in the company either doesn't know what that actually is, or they have a distorted idea based on the conventional definitions of the words. Even without LLMs in the mix, this can lead to people coming out of meetings with completely different understandings of what's going on.
    My experience is few people are actually providing the relevant context to the LLM to explain what they mean in situations like this. Or they don't have the actual knowledge and are using the LLM in the hopes it'll fill in for their ignorance. The LLMs are RLHFed to sound confident, so they won't convey that they don't know what a piece of jargon means. Instead they'll use a combination of the common meaning and the rest of the context to invent something. When this gets copy/pasted and sent around, it causes everyone who isn't familiar to get the wrong idea. Hence "misunderstanding amplifier".
    To the point of the article, this is soluble if people take the time to actually figure out what they are trying to convey. But if they did that, they wouldn't need the LLM in the first place.
  • hidelooktropic
    It matters less to me that the helper is an AI/human than the kind of help I'm getting.
    The bigger problem to me is that "help" is always framed as my needing to be educated, not as a problem with the service. This is especially prevalent for technical customers who are legitimately trying to draw attention to a bug in the platform, only to get how-to help articles pasted back to them.
  • appreciatorBus
    This submission might be an HN record for highest % of commenters who skipped reading the article. I'm sure it's always high but so far there are 125 comments and maybe 3 or 4 referencing what was in the actual article.
  • hatthew
    Writing is fundamentally the transfer of information from your brain to my brain. If you have 1000 bits of information you want to transfer, you can't give 300 bits of information to an LLM and have it fill in the remaining 700, because it doesn't know what those 700 bits are. If it's able to guess those 700 bits correctly, then they aren't true information, and you really only have 300 bits you want to transfer. You might as well transfer those bits to me directly, rather than having the LLM add on an extra superfluous 700 bits that I then have to filter out.
  • senko
    I thought this was going to be about (customer support) chatbots, which can be a good thing. "Don't make me talk to your [customer support] chatbot" reads like "Don't make me go to an ATM for a cash withdrawal". If I can solve a thing quickly and effectively without waiting forever to speak to an overworked customer support agent on another continent, I would very much like that!
    Well, anyway, the post is not about that. It's about posting AI-generated text (blog posts, PR summaries). Which I agree with, although there are a bunch of holes in the argument, such as:
    > 1. Figure out what you want to say. 2. Say it. That first figuring-out part is important.
    Well, yeah, I can figure out what I want to say, then have the chatbot say it. So it looks like the second part is important, too.
  • jfreds
    AI pull request descriptions are my current pet peeve. The ones I have seen are verbose and filled with meaningless fluff words (“optimized”, “performant” - for what? In terms of what?), they leak details about the CoT that didn’t make it into the final solution (“removed the SQLite implementation” - what SQLite implementation? There isn’t one on main…), and are devoid of context about _why_ the work is even being done, what alternatives were considered, etc.
    My first round of code review has become a back and forth with the original author, just asking them questions about their description, before I even bother to look at code. At first I decided I’d just be a stick in the mud and juniors would learn to get it right the first time, but it turns out I’m just burning myself out due to spite instead.
  • Juminuvi
    100% agree. Hopefully etiquette will catch up if enough folks talk about this.
    Side note: the number of comments here from people who clearly didn’t read the article is impressive.
  • kokanee
    I view the complaint about inefficient communication as a problem that will wane as LLMs progress, and as a bit idealistic about the efficiency of most human-to-human communication. I feel strongly that we shouldn't be forced to interact with chatbots for a much simpler reason: it's rude. It's dismissive of the time and attention of the person on the other end; it demonstrates laziness or an inability to succeed without cutting corners; and it is an affront to the value of human interaction (regardless of efficiency).
  • Molitor5901
    Related: Please don't make me talk to your AI pretend-human, complete with Asian accent and background call center sounds. That's even more insulting than a chatbot.
  • daft_pink
    I just signed up with Gusto for one of my companies. They charged me for premium support automatically and when I tried to dispute it I had to talk in circles with their AI named Gus. Why am I paying through the nose for premium support just to chat with an AI?
  • mrandish
    As a customer, I just want the information I need. While I don't want to talk to a chatbot, I also don't want to talk to a human - and for the same reason: they usually don't have the info I need.
    That's the aspect I don't understand. The information I want is almost always something some other customers have asked already. I'd much prefer to avoid their customer support maze entirely and help myself on a searchable wiki. Unfortunately, most companies' online product support FAQs usually only contain answers to obvious shit on the order of RTFM and "is it plugged in." Why not just post the doc their advanced tier 3 support people share amongst themselves? It can be under a warning label like 'preliminary advanced info for engineers'.
    I realize people like me represent only around 2-3% of the customers seeking support, but it's 2-3% that is able to self-serve, and we take more time than average because we invariably have to work through front-line support to get escalated to someone with the non-obvious info that's still been asked many times before. So maybe we're only ~2%, but we suck up 4% of support bandwidth, and we probably take up closer to ~20% of Tier 3 support - the most expensive, scarcest type.
  • red75prime
    I have a shorter, more cynical version of this: if a person doesn’t provide enough input to a chatbot, I’d be better off talking to the chatbot directly.
  • hungryhobbit
    I find chatbot conversations to be incredibly similar to dreams. It's human nature to want to share your dreams, because they are fascinating to you. However, it's also human nature to want to punch someone in the face when they start talking about this crazy dream they had last night ... because it has nothing to do with you, and doesn't interest you at all.
    Similarly, when an AI says something useful to you, in response to your prompts, it's very particular to you. When you try to share it with others ... you get the article.
  • aprentic
    People want to spend as little as possible while getting support for their product for as long as possible. Companies want people to spend as much as possible while doing the minimum work on the product. Chatbots let companies spend almost nothing while pretending to provide long-term support.
    I wonder if something similar to a copyleft license could help. What if there was a contractual "fair business" pledge that companies could add? I imagine that good enough lawyers could craft something that essentially said, "You can only display this contract if you legally guarantee that you do X, Y, Z and do not do A, B, C."
  • pizzathyme
    The key thing here is not whether it's AI. The key thing is quality and signal. No one wants to read a low-quality human comment either.
    If the AI output were actually better than talking to a real human - more useful, more concise, serving the job to be done - then no one would have a problem with it. In fact they would appreciate it. That future is not here in many areas.
    The problem is that people are wielding AI right now and either [a] the models they are using are not good enough, [b] they aren't being given enough context, or [c] they are deployed in a way that makes the output sloppy.
    (Insert joke about whether this comment is AI. It's not, but joke away.)
  • avatardeejay
    I definitely lean pro-AI, and I feel an air of condescension here that doesn't thrill me. But it wasn't overwhelming, and the point does kind of resonate.
    I see it in a reddit post, or a twitter comment; I've suspected it in text messages. And I like that angle, the "you're a human. can you please, just" - and feeling a little out there for pouring my soul into every word I write, wherever it is. That idea resonates. The frustration of reading a lengthy blurb in what's become an over-saturated style, where I have to work even harder to discern their real meaning than if they were actually that verbose to begin with.
  • shubhamintech
    The worst part isn't that chatbots are bad at their job. It's that teams shipping them genuinely don't know how bad they are. Nobody's reading the conversations, so nobody catches users hitting dead ends, rage-clicking, or dropping off right after a failed session. The data exists, it just sits there unread.
  • geauxvirtual
    The only chatbot I've semi-enjoyed interacting with was SiriusXM's, during my latest yearly try-to-cancel dance they make you go through. Usually this was a 45-minute to an hour phone call with customer support to eventually get a cheaper rate and continue service.
    This last time, it sent me to a chatbot. In five minutes, I got a cheaper rate than I was previously paying. I'm sort of looking forward to the next interaction to see if I can get even cheaper rates or finally cancel the service.
  • tombert
    I guess part of the advantage of being an extremely long-winded writer who makes lots of typos is that people know that what I'm writing is probably written by a human.Though maybe people will start supplying context like "no em dashes, and occasionally misspell a word or two", and soon you won't even be able to tell that.
  • tl2do
    The article doesn't address where human oversight is actually necessary. I sometimes use AI for simple spell checking—requiring human review for that would be over-complication. In some more difficult tasks, having AI review AI output works fine for me.
  • guerython
    On my team we always ship an agent draft with a short human anchor first. Two sentences that explain the motivation and the checks we ran, then the bot block with a label like “agent draft” for anyone who wants the raw output. That way readers know what we actually think and don’t have to guess whether the chat log is the human opinion. Do you have a checklist for when that human intro is enough versus when the whole thing needs to stay private?
  • mazone
    How about the trend of people just copy-pasting AI responses back to you in Slack.
  • this_user
    I don't know, occasionally there are some funny results. For instance, I have managed to get AWS' support bot to start smack-talking their platform and criticising its often needlessly complex and sometimes incoherent design, before cheekily offering to help me make my relatively simple setup even more complex and enterprise-ready.
  • TimFogarty
    I have noticed that my writing ability has atrophied since I was writing essays in school. Now at work most of my writing is Slack messages. Writing longer, more thoughtful pieces about strategy or performance reviews has become a slog. I suspect that a lot of people have had a similar experience, so offloading the pain of writing to an LLM is appealing.
    But frankly, LLMs suck at writing. It's not only formulaic, it's uninspired!! So I worry that we're entering an era of mediocre writing. I like the "Have you considered writing?" suggestion. I've been trying to make a habit of writing book reviews so I can counter some of the writing atrophy I've developed. Hopefully it will help me become a better thinker too. As Ray says here: "Understanding your own point of view is an enriching exercise."
  • adamtaylor_13
    I agree in principle, but, for me, it all comes down to execution.
    I used a product that implemented a VERY good AI chatbot as part of their email support, and it was better than human support. It was nearly instant in its response time and answered all of my questions perfectly.
    In fact, it wasn't until after the interaction that I realized it was an AI bot! Pretty good IMO, and I'd prefer that interaction over holding "...because your call IS important to us."
  • ifokiedoke
    Reading almost all the comments gives me the sad validation that people truly do not read the article before commenting...
    This article is not about support chatbots. It's about clearing up your writing/thoughts and communicating clearly even if you used a chatbot to get there.
  • kazinator
    I don't mind talking to a chatbot if it solves problems and doesn't go in circles.
    Don't make me talk to a chatbot while there is zero forward progress in solving the problem.
  • namegulf
    To start with, it's okay for a chatbot to ask some quick qualifying questions so that it can connect you with the right person. Forcing anything beyond that on a customer is RUDE!
  • lemoncookiechip
    I don't care if it's a human, a chatbot, or a dog, as long as they fix my problem.
    I don't want to contact customer support in the first place; if I'm forced to, it's because something is very wrong, and in that case I don't want to be listening to elevator music and "your call is important to us, please hold" for an hour, then get my call disconnected, forcing me to call again.
    The issue is that I've yet to have a chatbot actually fix my issues - or most first-contact human operators, for that matter.
  • godelski
    > The only acceptable pro-AI response to the accusation of AI Slop is to join team Anti-Slop.
    I'll never understand why this is controversial, especially in techie and engineering communities. We of all people love to be grumpy. It's in our nature, because the first step to solving problems is recognizing them. Sweeping shit under the rug is for business people whose only interest is money and who have no care for the product.
  • ivarv
    For a similar take see Cory Doctorow's recent "No one wants to read your AI slop" - https://pluralistic.net/2026/03/02/nonconsensual-slopping/#r...
  • cerved
    I instruct Claude to write like peff; it writes much better commit messages now.
  • arewethereyeta
    Amen! All our banks introduced this; we cannot talk to a human unless it's fraud.
  • jackyli02
    To avoid "AI barging into human conversations unsolicited", you can either stop the AI from barging in, or remove the premise that this is a "human conversation". The latter might be easier.
  • foxglacier
    Some people deserve it because they already write in a vague, overly diplomatic, bloated way that's hard to extract the core meaning from - or has no meaning underneath all the words. Schoolteachers, I mean you. I'm happy using AI to email teachers because they might as well be human-powered AIs themselves.
    We do need to be more tolerant of AI writing, though. Some people need it because they can't express their ideas well themselves. You wouldn't say "no wheeled vehicles allowed inside" because that would exclude handicapped people who need wheelchairs.
  • einpoklum
    > We increasingly use coding agents to create PRs.No we don't, and neither should you. Don't make me read your chatbot's PR.
  • ares623
    Day by day i'm starting to lean towards this take https://anthonymoser.github.io/writing/ai/haterdom/2025/08/2...
  • aiwrita
    Nice
  • data-ottawa
    Topical, but not related to the article: I just had the worst experience with a corporate AI chatbot.
    Apparently my mobile provider switched to a voice-chatbot-only phone system in September. I called them today because of a price increase and some weird long distance charges on my bill.
    I call, the chatbot answers, I confirm my account info and enter my PIN, then ask it "why did my price go up". "Your price went up because we made a change to your account." Wow, super. I ask it if it can reduce the price: no. I ask it about the long distance charges and it tells me to check my account statement online. I ask to be transferred to a human and it asks why, but it does transfer me over to their callback system. As a first line of support defence, that wasn't so bad.
    The timing of the transfer is off, so I only hear half of what the callback system says. It requires me to verify the number I want to be called back on, then it tells me I can type in a time I want the callback. I think to myself: is this 24-hour time, how does AM/PM work, what time slots are available, how do those work?
    While I'm thinking about all of this it repeats the instructions. I type in 1 minute into the future because I don't want to waste my own time waiting - just please call me back ASAP. "That time was unavailable". I guess that makes sense; it's very soon and maybe slots are on 5-minute windows. I have to confirm my phone number again and I can enter a new time. I try 10 minutes from now, which is on a clean 5-minute boundary. Nope. Confirm again and try the nearest 15-minute boundary. Nope - and this time it hangs up and I have to call back and start from zero.
    I call back and explain to the bot (again) that I need to talk to a human, it's a billing issue, thanks. It fails to understand me and that seems to get it stuck in a loop. After two more turns of it asking me to repeat myself, I hang up and call back again.
    This time the bot does understand me, and I try 3 more callback times for today, all of which fail, and it hangs up. I've just spent 10 minutes talking to a wall and punching in numbers. The phone wall is clearly unassailable - all paths lead to the broken callback subsystem.
    I try the text chat bot on the site. I convince it to put me in contact with a human chat operator, whom I then have to convince it's worth it for them to call me, because he couldn't help me and it took over 1 minute per chat turn.
    Finally a human calls me! Before we can talk I need to open a text, click the link, then enter the code she tells me, then enter my account PIN. It felt like I might be getting phished; this required such a weird chain of info. She tells me they put a notice of the price increase in my December bill (which was the right amount, so I didn't read it), so this is all above board. She says if I want a cheaper plan I should check the app, and she won't even tell me what the options are. I ask if they'll price match competitors and she says no.
    At this point I told her I was considering leaving if they won't price match (and also that the new support service was very bad). She says she's sorry to hear that, to check the app, then pauses and asks me if I'm happy with my internet services, as if anything about our interaction says "please sign me up for more services!" I know they have to ask about the upsell because they always do, but wow.
    The entire reason I've been a customer so long was that for a decade I could call and within 5 minutes update my account to a new plan. Usually the support staff were nice and could offer me some loyalty discount, and I was happy.
    I just (as in just) finished cancelling my entire account, then checked HN, and "Don't make me talk to your chatbot" is the top article. Serendipity at its finest.
  • jmyeet
    I'm reminded of the Air Canada customer service chatbot. It completely made up a refund policy (and there are still people on HN who insist LLMs don't hallucinate), and a court ruled the company had to honor it [1].
    The only way to deal with this is to make the implementation not worth it, by constantly bypassing it to speak to a human and/or making it cost money by getting it to give you things you're not otherwise entitled to.
    I really wonder how these things will handle prompt injection and similar attacks. I have no confidence any of this is secure.
    Wait until this comes to healthcare, and it'll be chatbots handling appeals of prior authorization denials, wasting even more physician time.
    [1]: https://www.wired.com/story/air-canada-chatbot-refund-policy...
  • onion2k
    There's been a lot of "the world doesn't work the way I want it to" on HN recently. I suspect this is a function of an aging readership more than anything particularly groundbreaking about hot takes on the up-and-coming tech.
    "Anything invented after you're thirty-five is against the natural order of things." - Douglas Adams
  • causal
    I have found that chatbots embedded in some spaces can be useful, e.g. docs. Stagehand for example embeds a little query form at the bottom of their docs page, and I've found the chatbot it engages can quickly direct me to the documentation I'm looking for: https://docs.stagehand.dev/v3/first-steps/introduction