
Comments (201)

  • boh
    I think the big secret is that AI is just software. In the same way that a financial firm doesn't all of a sudden make a bunch of money because Microsoft shipped an update to Excel, AI is inert without intention. If there are any major successes in AI output, it's because a person got it to do that. Claude Code is great, but it will also wipe out a database even though it's instructed not to (I can confirm from experience). The idea that some secret innovation will come out any minute doesn't change the fact that it's software that requires human interaction to work.
  • firefoxd
    "you will all lose your jobs and it will wipe out half of humanity."If you lead with this, people will stop questioning why their sprint velocity hasn't increased 10 fold. Managers start asking leads, instead of hiring more devs can we add Agent.md to our repos?The Apocalypse sells. They are afraid that you'll find out that AI is just another useful tool. That's the real threat, not to humanity, but to their hype.Edit: i made a video about this recently: https://youtu.be/nB0Vz-fh8EI
  • deepsquirrelnet
    This is my own take, directly related to this, that I posted a little while back. The one thing I think the article missed is the geopolitical angle they're also working:

    * We need to completely deregulate these US companies so China doesn't win and take us over
    * We need to heavily regulate anybody who is not following the rules that make us the de-facto winner
    * This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)
    * If you don't use AI, you will not be able to function in a future job
    * We need to line up an excuse to call our friends in government and turn off the open source spigot when the time is right

    They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new, and then flip the narrative once people are more familiar with it than to go the other direction. Companies are not just telling a story to hype their product, but also a story about why they alone should be entrusted to build it.
  • Imnimo
    My read is not so much "if we say this is dangerously powerful, it will make people want to buy our product," but rather that there is a significant segment of AI researchers for whom x-risk, AI alignment, etc. is a deal-breaker issue. And so the Sam Altmans of the world have to treat these concerns as serious to attract and retain talent. See, for example, OpenAI's pledge to dedicate 20% of their compute to safety research. I don't get the sense that Sam ever intended to follow through on that, but it was very important to a segment of his employees. And it seems like trying to play both sides of this at least contributed to Ilya's departure.

    On the other hand, it seems like Dario is himself a bit more of a true believer.
  • staminade
    AI company leaders didn't invent this concern about the potential dangers of AI, either as a cause of economic disruption or as a potential extinction risk. Superintelligence was published in 2014, and even then it wasn't a new topic. Technologists, philosophers and science fiction authors have been discussing and speculating about AI risk for decades.

    Also, the idea that AI leadership seized on and amplified these concerns purely for marketing purposes isn't plausible. If you're attempting to market a new product to a mass audience, talking about how dangerous and potentially world-ending it is is the most insane strategy you could choose. Any advantage in getting people's attention is going to be totally outweighed by the huge negative associations you are creating in the minds of the people you want to use your product, and by the likelihood of bringing unwanted scrutiny and regulation to your nascent industry.

    (Can you imagine the entire railroad industry saying, "Our new trains are so fast, if they crash everybody on board will die! And all the people in the surrounding area will die! It'll be a catastrophe!" They would not do this. The rational strategy is to underplay the risks and attempt to reassure people. Even more so if you genuinely believe the risks are being overstated.)

    Occam's razor suggests that when the AI industry warned about AI risk, they believed what they were saying. They had a new, rapidly advancing technology, and absent practical experience of its dangers they referred to pre-existing discussions on the topic and concluded it was potentially very risky. And so they talked about the risks in order to prepare the ground in case they turned out to be real. If you warn about AI causing mass unemployment, and then it actually does so, perhaps you can shift the blame to the governments who didn't pay attention and implement social policies to mitigate the effects.

    I don't think the AI industry deserves too much of our sympathy, but there is a definite "damned if you do, damned if you don't" aspect to AI safety. If they underplay it, they will get accused of ignoring the risks, and if they talk about it, they get accused of scaremongering if the worst doesn't happen.
  • tptacek
    I have never heard of "Heidy Khlaaf, chief AI scientist at the AI Now Institute," but the sentiment in this article is diametrically opposed to that of the vulnerability research scene.

    There is contention among vulnerability researchers about the impact of Mythos! But it's not "are frontier models going to shake up vulnerability research and let loose a deluge of critical vulnerabilities" --- software security people overwhelmingly believe that to be true. Rather, it's whether Mythos is truly a step change from 4.7 and 5.5.

    For vulnerability researchers, the big "news" wasn't Mythos, but rather Carlini's talk from Unprompted, where he got on stage and showed his dumb-seeming "find me zero days" prompt, which actually worked.

    The big question for vulnerability people now isn't "AI or no AI"; it's "running directly off the model, or building fun and interesting harnesses."

    Later: I spoke with someone who has been professionally acquainted with Khlaaf. Khlaaf is a serious researcher, but not a software security researcher; it's not their field. I think what's happening here is that the BBC doesn't know the difference between AI safety prognosis and software security prognosis, or who to talk to for each topic.
  • Micanthus
    > According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they're already doing to the world.

    Am I not allowed to be concerned about _both_?

    I do not believe that Sam Altman and other AI company execs believe that the singularity is imminent. If they did, they wouldn't behave so recklessly. Even if they don't care about the rest of humanity, there's too much risk to themselves if they actually believe what they're saying.

    But I think it's correct to be worried about a potential future AI apocalypse. Personally I doubt that LLMs will scale to full sentience, but I believe we'll get there eventually. And whether it's in 2 years or 200 years, I'm worried about it. Plenty of smart people who aren't working for AI companies (and thus have no motive to use it as hype or distraction) hold this belief, and it really doesn't seem that crazy.

    But yeah, obviously let's focus primarily on the real harms AI is causing in our society right now.
  • bryan0
    > Why do AI companies want us to be afraid of them? ... According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they're already doing to the world.

    People seem unable to make up their minds about whether AI is very dangerous or not. I think what the AI companies and this author agree on is that this technology is potentially extremely dangerous. AI impacts labor markets, the environment, warfare, mental health, etc. It's harder now to find things it will not impact.

    So if we agree that AI is potentially dangerous, the title question becomes moot: both AI companies and this author want people to be aware of the dangers that AI poses to society. The real question is what we do about it.

    The nuance here is that AI can be incredibly positive as well. It's like the invention of fire: you can use it for good or bad, and there will be many unintended consequences along the way.

    We could legislate and ban AI tech. People have proposed this seriously, yet it feels completely unrealistic. If the US bans AI research, the research will move elsewhere. It's like trying to ban fire because it's dangerous: some groups will learn to work with fire and gain an extreme advantage over those that don't (or they will destroy themselves in the process).

    So maybe instead of demonizing the AI companies, we should have a nuanced debate about this tech and propose solutions that are best for our society?
  • tangotaylor
    Finally the media is catching on.

    Lee Vinsel's criti-hype article nailed this 5 years ago, before we even had the chatbot economy we do now: https://sts-news.medium.com/youre-doing-it-wrong-notes-on-cr...
  • InputName
    In lieu of a technological moat, companies search for regulatory capture.
  • DalasNoin
    Quote from the article: "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies," Altman said in 2015.

    Altman wasn't even at OpenAI at that point, so why would that be marketing?
  • dimva
    They say AI will destroy humanity because they believe it. OpenAI and Anthropic were created by people who believed this. There's nothing nefarious about them saying this.

    Why are they still building it? Because each team thinks that THEY are the ones who can prevent it from destroying humanity, but they have to get to AGI first, before the other teams make an AI that does destroy humanity.

    But also, if AGI doesn't destroy humanity, it would be the most powerful weapon in the world, and they want to be the ones in control of it. Keeping the focus on Armageddon distracts from the real and severe problems that arise if a single person, or even a small group, controls an AGI.
  • baggachipz
    Why wouldn't they continue crying wolf when it always gets them free advertising from a gullible/complicit press?
  • skybrian
    It’s an extraordinary situation and I’m wondering what sort of analogies make sense.

    If there were tobacco companies warning everyone who would listen in the 1950s that cigarettes cause cancer, it would be, like, points for honesty, but why don’t you stop selling them then? The difference being that there are a lot of good uses for AI chat and it doesn’t directly harm most people.

    It seems like the customers who would misuse AI are getting left out of the discussion? It’s as if arms dealers were being solely blamed for war, or as if arms dealers were expected to stop wars. The difference being that a single, general-purpose product that can do such a wide variety of things isn’t really comparable to making weapons that are only good for one thing.

    Maybe it’s as if car manufacturers in the early 20th century were predicting highways, traffic, and pollution. Or imagine if early dot-com companies had predicted the various dangers of social networks?
  • afh1
    They want regulation for others, but not for themselves. Otherwise there might be competition.
  • FiberBundle
    Another potential reason, not mentioned in the article, is that open source models obviously pose the biggest threat to the labs' ability to monetize their tech. Anthropic especially seems to be very anti-open-source. If frontier models start to plateau and don't have capabilities that truly differentiate them, nobody will pay what the labs would want to charge. Framing the tech as a danger is a way for them to get the government to regulate open source models.
  • netcan
    I remember thinking Altman seemed to be over-reacting and fanning the flames about "AI bias" circa GPT-3. There was a little panic about the fear of bigoted computers at that point.

    But... it got a lot of earned advertising, and they also sort of did a "pre-burn": they saturated the space with "bigoted AI concern" for a while, and now I don't ever see it come up. There's a "get ahead of the inevitable" thing going on. Also, obviously, prospectus hype.

    Besides all that, these are geeks and they're excited. This is what an excited geek looks like.
  • twobitshifter
    I’m afraid of AI, but not because I think it’s going to become Skynet tomorrow; it’s because of all the social ills that are already clearly attached to it:

    - Spam
    - Deep fakes
    - Porn
    - Buggy software
    - Economic bubbles
    - Degradation in people’s abilities and learned dependence on ChatGPT for basic functions
    - Job loss through enshittification, a la AI interviews and telemarketers
    - Climate change, noise pollution, etc.
    - Mass surveillance

    It’s much more an Idiocracy AI than Terminator.
  • mrwh
    What happens when unemployment hits 25%, and youth unemployment hits 50%, in a democracy? That's the real terror here, not hacking.
  • gdiamos
    I originally thought evil-killer-robot discussions in AI labs were an idea out of Hollywood.

    Then I saw how effective they were at raising money.
  • andai
    Beating the drum of utopia and apocalypse was a suspiciously common tactic in the last century. Also, in a slightly different way, the twenty before that.
  • sixtyj
    The article mentions a book, "The AI Con," which argues that much of what is labeled "artificial intelligence" is a misleading term that obscures ordinary automation while concentrating power in a small number of technology firms.

    So fear-mongering seems to be just a tool to get attention and more customers. Hey ma, I use a very dangerous tool now. I am OG.
  • throwaway132448
    The same reason Palantir does: it's their brand - it's just marketing.

    Glad people are finally catching on.
  • SpicyLemonZest
    > It's a strange way for any company to talk about its own work. You don't hear McDonald's announcing that it's created a burger so terrifyingly delicious that it would be unethical to grill it for the public.

    > Here's one theory.

    But the author never gets back to this! It's the main observation the theory has to account for; why don't we see other companies speak this way, if it's such an effective strategy for deflecting non-apocalyptic concerns?
  • api
    I’d say the same thing about Palantir. It’s very clear that they are playing into the hatred and speculation about them to puff themselves up and get attention in the “any attention is good attention” era. Being a literal comic book villain syndicate is sexier than being Millennial/GenZ TRW.

    (I am not saying I approve of all the stuff they are being used for or all the statements of its management.)
  • 7777777phil
    If your model converges to the same outputs as everyone else's because everyone trained on the same data, the only thing left to differentiate on is brand, and fear is great at building brand.

    "We are too dangerous to commoditize" pitches better than "we mostly return the internet's median answer," even though those are kind of the same statement.
  • p0w3n3d
    FUD: Fear, Uncertainty, Doubt
  • scratchyone
    Honestly, we should have learned that these claims from AI companies were pure fear-mongering back when GPT-2 was "too dangerous to release".
  • dicksent
    gotta bring up the hype
  • registeredcorn
    Are you telling me that it's sexier to say, "In its current form, we cannot contain its power" rather than, "We're working out the last set of bugs before the start of Q3"?
  • Sol-
    You can gauge the quality of the article by the fact that it quotes Emily Bender, who will insist on stochastic parrots even as AI does billions of dollars of economically useful work.
  • nyc_data_geek1
    Because your fear is their marketing, is their valuation. There, saved you a click.
  • throwaway911282
    Dario is the biggest proponent of the fear-mongering marketing playbook.
  • feverzsj
    That's exactly how religion works.
  • yuhmahp
    Assuming this article isn't written by an AI