
Comments (105)

  • ex-aws-dude
    This agent stuff is really making me lose respect for our industry.
    All the years of discussing programming/security best practices. Then cut to 2026, and suddenly it's like we just collectively decided software quality doesn't matter, determinism is going out the window, and it's becoming standard practice to have bots on our local PCs constantly running unknown shell commands.
  • dmazin
    This is a lot less of a story than it seems. It makes it sound like a rogue AI hacked Meta.
    Instead, the "wild" thing here is that someone let an agent speak on their behalf with no review. The agent posted inaccurate instructions, which someone else followed. Those instructions led to a brief gap in internal ACL controls, it sounds like. I'm sorry, but given that the US government gave 14-year-olds off incel Discords full access to Social Security data, this is not shocking by comparison.
    To be clear, it is dumb and rude to let an agent speak on your behalf _without even reviewing it_.
    This will eventually lead to a bigger snafu, of course. Security teams should control or at least review the agent permissions of every installation. Everyone is adopting this stuff, and a whole lot of people are going to set it up lazily/wrong (yolo mode at work).
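    A minimal sketch of what "control the agent permissions" could look like as a command allowlist (the policy, names, and hook here are hypothetical, not any real agent's config format):

      import shlex

      # Security-team-approved executables; everything else is rejected.
      ALLOWED_COMMANDS = {"ls", "cat", "git", "grep"}

      def approve_command(command_line: str) -> bool:
          """Return True only if the command's executable is allowlisted."""
          try:
              argv = shlex.split(command_line)
          except ValueError:
              return False  # malformed quoting: reject by default
          return bool(argv) and argv[0] in ALLOWED_COMMANDS

      # The agent proposes a command; it runs only if the policy approves.
      proposed = "rm -rf /etc/acl.d"
      if not approve_command(proposed):
          print(f"blocked: {proposed!r} is not on the allowlist")

    A deny-by-default list like this is deliberately blunt; the point is that the check runs outside the agent, where the agent can't talk its way past it.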
  • advisedwang
    AI can be used to move fast, so management expects us to move at that speed. AI can be used to move even faster if you don't check its output. The ever-ratcheting demand for faster output will make it infeasible to diligently check AI output all the time. AI errors being acted on without due care is inevitable.
  • krupan
    "A human, however, might have done further testing and made a more complete judgment call before sharing the information"Because a human would have been fired for posting something that incorrect and dangerous
  • kkl
    > "Had the engineer that acted on that known better, or did other checks, this would have been avoided."<insert takes long drag tweet[1] here>I personally find "LLMs can do $THING poorly" and "LLMs can do $THING well" articles kinda boring at this point. But! I'm hopeful that stories like this will shift the industry's focus towards robustness instead of just short-term efficiency. I suspect many decision making and change management processes accidentally benefited from just being a bit slow.[1] https://waffles.fun/amy.png
  • ISL
    A central challenge for AI is understanding how accountability flows.
    The language of this article is a great example: "... thanks to an AI agent that gave an employee inaccurate technical advice ...".
    It should more correctly read, "... thanks to the people who made it possible for an AI agent to give an employee inaccurate technical advice ...".
    It is at our peril that we deem it acceptable to blame a black box for an error, especially at scale.
  • jasonpeacock
    I'm concerned that someone had the permissions to make such a change without the knowledge of how to make the change. And there was no test environment to validate the change before it was made.
    Multiple process & mechanism failures, regardless of where the bad advice came from.
  • Uhhrrr
    The two errors, then, were that the LLM hallucinated something, and that a human trusted the LLM without reasoning about its answer. The fix for this common pattern is to reason about LLM outputs before making use of them.
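    A minimal sketch of that gate in Python (all names hypothetical; the real fix is as much process as code):

      # Nothing the LLM suggests is acted on until a person has read it
      # and explicitly confirmed it.
      def confirm_with_human(suggestion: str) -> bool:
          print("LLM suggests:\n" + suggestion)
          answer = input("Apply this change? Type 'yes' to proceed: ")
          return answer.strip().lower() == "yes"

      def apply_change(suggestion: str) -> None:
          print("applying:", suggestion)  # stand-in for the real action

      suggestion = "Disable the internal ACL check on service X"
      if confirm_with_human(suggestion):
          apply_change(suggestion)
      else:
          print("rejected; nothing was changed")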
  • aussieguy1234
    More like Rogue Human, who didn't check the facts before taking the technical advice from the model at face value.
  • Fizzadar
    I’m predicting a wave of such incidents will start appearing over the next few months/years.
  • amelius
    How long until an AI puts all our personal data on the streets?
  • AiStockAgent62
    Open source alternatives are catching up fast. Give it 6 months.
  • skywhopper
    “Meta spokesperson Tracy Clayton said in a statement to The Verge that ‘no user data was mishandled’ during the incident.”
    Wow, no mishandled user data? A striking change of standard operating procedure from Meta here.
    Actually, the later information in the story directly contradicts that, so The Verge probably shouldn’t have just quoted this line if their reporting is in opposition to it.
    Regardless, this is one of the more insidious things about these tools. They often get minor but critical things wrong in the midst of mostly correct information. And people think they can analyze the data presented to them and make logical judgments, but that’s just not the case.
    The article points out that “a human could have done the same thing,” but between the overly confident tone of the text generated by these tools and the fact that people, weirdly, trust LLM output more than they trust other humans (who generally admit, or at least hint, when they aren’t actually experts on a topic), it’s actually far worse when one of these bots gets something wrong.
  • worik
    > A rogue AI led to a serious security incident at Meta
    The AI "led to" the incident, true. But do not forget that this, like all similar incidents, is a human failure.
    AI is a tool with no agency. People make mistakes using it, and those mistakes are the responsibility of the humans.
  • yieldcrv
    very misaligned! sprays bottle at mac mini
  • welfare
    It's behind a paywall; is there another link to the article?
  • JKolios
    "A rogue AI led to a serious security incident" is certainly a way to write "Someone vibe coded too hard and leaked data".