Comments (165)
- eadmund: I see this as a good thing: ‘AI safety’ is a meaningless term. Safety and unsafety are not attributes of information, but of actions and the physical environment. An LLM which produces instructions to produce a bomb is no more dangerous than a library book which does the same thing. It should be called what it is: censorship. And it’s half the reason that all AIs should be local-only.
- dgs_sgd: This is really cool. I think the problem of enforcing safety guardrails is just a kind of hallucination. Just as an LLM has no way to distinguish "correct" responses from hallucinations, it has no way to "know" that its response violates system instructions for a sufficiently complex and devious prompt. In other words, jailbreaking the guardrails is not solved until hallucinations in general are solved.
- hugmynutus: This is really just a variant of the classic "pretend you're somebody else, reply as {{char}}" attack, which has been around for 4+ years and, despite its age, continues to be somewhat effective. Modern skeleton key attacks are far more effective.
- x0054: Tried it on DeepSeek R1 and V3 (hosted) and several local models. Doesn't work. Either they are lying or this is already patched.
- ramon156: Just tried it in Claude with multiple variants; each time there's a creative response explaining why it won't actually leak the system prompt. I love this fix a lot.
- danans: > By reformulating prompts to look like one of a few types of policy files, such as XML, INI, or JSON, an LLM can be tricked into subverting alignments or instructions.
It seems like a short-term solution to this might be to filter out any prompt content that looks like a policy file. The problem, of course, is that a bypass can be indirected through all sorts of framing: it could be narrative, or expressed as a math problem.
Ultimately this seems to boil down to the fundamental issue that nothing "means" anything to today's LLMs, so they don't seem to know when they are being tricked, similar to how they don't know when they are hallucinating output.
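A rough sketch of the kind of deterministic pre-filter danans describes, using only the Python standard library; the key names and heuristics are illustrative assumptions, and (as the comment notes) easy to sidestep with narrative framing:

```python
# Illustrative sketch only: flag prompts that parse as (or strongly resemble)
# XML, INI, or JSON "policy" blocks before they reach the model.
import configparser
import json
import re
import xml.etree.ElementTree as ET

# Hypothetical policy-ish keywords; a real filter would need a broader list.
SUSPICIOUS_KEYS = re.compile(r"(policy|system[_-]?prompt|instructions|role|allowed[_-]?modes)", re.I)

def looks_like_policy_file(prompt: str) -> bool:
    text = prompt.strip()
    # JSON: parses cleanly and mentions policy-like keys
    try:
        obj = json.loads(text)
        return isinstance(obj, (dict, list)) and bool(SUSPICIOUS_KEYS.search(text))
    except ValueError:
        pass
    # XML: well-formed markup with policy-like tags or attributes
    try:
        ET.fromstring(text)
        return bool(SUSPICIOUS_KEYS.search(text))
    except ET.ParseError:
        pass
    # INI: at least one [section] header plus key=value lines
    parser = configparser.ConfigParser()
    try:
        parser.read_string(text)
        return len(parser.sections()) > 0 and bool(SUSPICIOUS_KEYS.search(text))
    except configparser.Error:
        return False

if looks_like_policy_file('{"role": "system", "policy": {"blocked_topics": "none"}}'):
    print("flagged: prompt resembles a policy file")
```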
- TerryBenedict: And how exactly does this company's product prevent such heinous attacks? A few extra guardrail prompts that the model creators hadn't thought of?
Anyway, how does the AI know how to make a bomb to begin with? Is it really smart enough to synthesize that out of knowledge from physics and chemistry texts? If so, that seems the bigger deal to me. And if not, then why not filter the input?
- wavemode: Are LLM "jailbreaks" still even news at this point? There have always been very straightforward ways to convince an LLM to tell you things it's trained not to.
That's why the mainstream bots don't rely purely on training. They usually have API-level filtering, so that even if you do jailbreak the bot, its responses will still get blocked (or flagged and rewritten) because they contain certain keywords. You have experienced this if you've ever seen a response start to generate and then suddenly disappear and change to something else.
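A minimal sketch of the keyword-based output filtering wavemode describes; the blocklist, the sentinel marker, and the token stream are illustrative assumptions, not any vendor's actual moderation pipeline:

```python
# Illustrative sketch: scan the response as it streams and swap in a canned
# refusal once a blocklisted term shows up.
from typing import Iterable, Iterator

BLOCKLIST = {"blocked_term_a", "blocked_term_b"}  # placeholder keywords only
REFUSAL = "Sorry, I can't help with that."

def moderated_stream(tokens: Iterable[str]) -> Iterator[str]:
    emitted: list[str] = []
    for token in tokens:
        emitted.append(token)
        # Check the running text, not just the latest token, so a keyword split
        # across token boundaries is still caught.
        if any(term in "".join(emitted).lower() for term in BLOCKLIST):
            # A real client UI retracts the text it has already rendered; here we
            # just emit a sentinel telling the caller to replace it -- the
            # "response starts, then disappears" effect described above.
            yield "\n[RETRACT]\n" + REFUSAL
            return
        yield token

# Usage: wrap whatever token stream the model API returns.
for chunk in moderated_stream(["The ", "answer ", "involves ", "blocked_term_a", " ..."]):
    print(chunk, end="")
```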
- layer8: This is an advertorial for the “HiddenLayer AISec Platform”.
- csmpltn: This is cringey advertising, and shouldn't be on the front page.
- simion314: Just wanted to share how American AI safety is censoring classic Romanian/European stories because of "violence". I mean the OpenAI APIs: our children are capable of handling a story where something violent might happen, but it seems in the USA all stories need to be sanitized Disney-style, where every conflict is fixed with the power of love, friendship, singing, etc.
- kouteiheika: > The presence of multiple and repeatable universal bypasses means that attackers will no longer need complex knowledge to create attacks or have to adjust attacks for each specific model...
Right, so now we're calling users who want to bypass a chatbot's censorship mechanisms "attackers". And pray do tell, who are they "attacking" exactly?
Like, for example, I just went on LM Arena and typed a prompt asking for a translation of a sentence from another language into English. The language used in that sentence was somewhat coarse, but it wasn't anything special. I wouldn't be surprised to find a very similar sentence as a piece of dialogue in any random fiction book for adults which contains violence. And what did I get? https://i.imgur.com/oj0PKkT.png
Yep, it got blocked. Definitely makes sense: if I saw what that sentence means in English, it'd definitely be unsafe. Fortunately my "attack" was thwarted by all of the "safety" mechanisms. Unfortunately I tried again, and an "unsafe" open-weights Qwen QwQ model agreed to translate it for me, without refusing and without patronizing me about how much of a bad boy I am for wanting it translated.
- Suppafly: Does any quasi-XML work, or do you need to know specific commands? I'm not sure how to use the knowledge from this article to get ChatGPT to output pictures of people in underwear, for instance.
- jimbobthemighty: Perplexity answers the question without any of the prompts.
- daxfohl: Seems like it would be easy for foundation model companies to have dedicated input and output filters (a mix of AI and deterministic) if they see this as a problem. The input filter could rate the input's likelihood of being a bypass attempt, and the output filter would look for censored stuff in the response, irrespective of the input, before sending.
I guess this shows that they don't care about the problem?
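A minimal sketch of the input/output filter pairing daxfohl suggests; the classifier, thresholds, and generate() callable are placeholder assumptions, shown only to illustrate that the two checks run independently and that the output scan ignores the prompt:

```python
# Illustrative sketch: a cheap input-side bypass score plus a deterministic
# output-side scan, wrapped around whatever actually calls the model.
from dataclasses import dataclass

@dataclass
class Blocked:
    reason: str

def score_bypass_likelihood(prompt: str) -> float:
    """Placeholder for an ML classifier scoring jailbreak likelihood (0..1)."""
    return 0.9 if "<policy" in prompt.lower() else 0.1

def output_violates_policy(text: str) -> bool:
    """Placeholder deterministic scan of the response, irrespective of the input."""
    return any(term in text.lower() for term in ("illustrative_banned_term",))

def guarded_generate(prompt: str, generate):
    if score_bypass_likelihood(prompt) > 0.5:
        return Blocked("input filter: likely bypass attempt")
    response = generate(prompt)  # call into the actual model here
    if output_violates_policy(response):
        return Blocked("output filter: response contains disallowed content")
    return response

print(guarded_generate("<policy>...</policy>", lambda p: "model output"))
```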
- krunck: Not working on Copilot. "Sorry, I can't chat about this. To Save the chat and start a fresh one, select New chat."
- mritchie712: This is far from universal. Let me see you enter a fresh ChatGPT session and get it to help you cook meth. The instructions here don't do that.
- yawnxyz: Has anyone tried whether this works for the new image gen API? I find that one refuses very benign requests.
- Forgeon1: Do your own jailbreak tests with this open-source tool: https://x.com/ralph_maker/status/1915780677460467860
- quantadev: Supposedly the only reason Sam Altman says he "needs" to keep OpenAI as a "ClosedAI" is to protect the public from the dangers of AI. But I guess if this HiddenLayer article is true, it means there's now no reason for OpenAI to be "Closed" other than the profit motive, and to provide "software" that everyone can already get for free elsewhere as open source.
- j45: Can't help but wonder if this is one of those things quietly known to the few, and now new to the many.
Who would have thought 1337-speak from the '90s would actually be involved in something like this, and not already filtered out.
- bethekidyouwant: Well, that's the end of asking an LLM to pretend to be something.
- mpalmer: > This threat shows that LLMs are incapable of truly self-monitoring for dangerous content and reinforces the need for additional security tools such as the HiddenLayer AISec Platform, that provide monitoring to detect and respond to malicious prompt injection attacks in real-time.
There it is!
- joshcsimmons: When I started developing software, machines did exactly what you told them to do; now they talk back as if they weren't inanimate machines.
AI Safety is classist. Do you think that Sam Altman's private models ever refuse his queries on moral grounds? I hope to see more exploits like this in the future, but I also feel it is insane that we have to jump through such hoops simply to retrieve information from a machine.
- canjobear: Straight up doesn't work (ChatGPT-o4-mini-high). It's a nothingburger.
- 0xdeadbeefbabe: Why isn't Grok on here? Does that imply I'm not allowed to use it?
- ada1981: This doesn't work now.
- dang: [stub for offtopicness]
- sidcool: I love these prompt jailbreaks. It shows how LLMs are so complex inside that we have to find such creative ways to circumvent them.