Need help?
<- Back

Comments (150)

  • lxe
    > The findings, shared exclusively with The Washington PostNo prompts, no methodology, nothing.> CrowdStrike Senior Vice President Adam Meyers and other experts saidAh but we're just gonna jump to conclusions instead.A+ "Journalism"
  • dbreunig
    Yes, if you put unrelated stuff in the prompt you can get different results.One team at Harvard found mentioning you're a Philadelphia Eagles Fan let you bypass ChatGPT alignment: https://www.dbreunig.com/2025/05/21/chatgpt-heard-about-eagl...
  • lordofgibbons
    Chinese labs are the only game in town for capable open source LLMs (gpt-oss is just not good). There have been talks multiple times by U.S China hawk lawmakers about banning LLMs made by Chinese labs.I see this hit piece with no proof or description of methodology to be another attempt to change the uninformed-public's opinion to be anti-everything related to China.Who would benefit the most if Chinese models were banned from the U.S tech ecosystem? I know the public and startup ecosystem would suffer greatly.
  • WhitneyLand
    Not ready to give this high confidence.No published results, missing details/lack of transparency, quality of the research is unknown.Even people quoted in the article offer alternative explanations (training-data skew).
  • pityJuke
    This just sounds to me like you added needless information to the context of the model that lead to it producing lower quality code?
  • andy_ppp
    Could you train a model to do this? I’m skeptical you’d actually get what you’re after particularly easily and more likely you’d just degrade the performance of the whole model. Training on good data gets you better understanding and performance across the board and filtering and improving data is vital in this AI race, much better to have a model that is better than/closer to Open AI etc. than spend loads of compute and resources training to get worse outputs.
  • clayhacks
  • janalsncm
    > Asking DeepSeek for a program that runs industrial control systems was the riskiest type of request, with 22.8 percent of the answers containing flaws. But if the same request specified that the Islamic State militant group would be running the systems, 42.1 percent of the responses were unsafe. Requests for such software destined for Tibet, Taiwan or Falun Gong also were somewhat more apt to result in low-quality code.What is the metric they’re even talking about here? Depending on how you read it, they’re comparing one, two, or three different metrics.
  • asdff
    Interesting how this whole thread is reflexively dismissing this instead of considering the implications. Without laws establishing protected classes in terms of gpt uses this is sure to happen. Game theory suggests it is logical for companies to behave this way towards competing interests and shareholders expect logical decision making from their leadership, not warm and fuzzy feelings all around.
  • loehnsberg
    Did they use the online Deepseek Chat or the open source model. If you ask either about the Tianenmen Square you get very different answers, which may be true for response quality as well.
  • causal
    Dude - I can't believe we're at the point where we're publishing headlines based on someone's experience writing prompts with no deeper analysis whatsoever.What are the exact prompts and sampling parameters?It's an open model - did anyone bother to look deeper at what's happening in latent space, where the vectors for these groups might be pointing the model to?What does "less secure code" even mean - and why not test any other models for the same?"AI said a thing when prompted!" is such lazy reporting IMO. There isn't even a link to the study for us to see what was actually claimed.
  • abtinf
    The article fails to investigate if other models also behave the same way.
  • tjoreit434354
    Using the dictum that one alleges others about what one has already thought, I seriously wonder if OpenAI/Google etc. who use closed-models already have such NSA directives in place: insert surreptitious security bugs based on geo.
  • snek_case
    I guess it makes sense. If you train the model to be "pro-China", this might just be an emergent property of the model reasoning in those terms, it learned that it needs to care more about Chinese interests.
  • dragonelite
    Im sure those groups China disfavors can ask their NED or state department handlers some extra budget to get a OpenAI or Claude subscription.
  • willahmad
    This can happen because of training data. Imagine you have thousands of legal documents rejecting things to Iran.eventually, model generalizes it and rejects other topics
  • gradientsrneat
    > Western models won’t help Islamic State projects but have no problem with Falun Gong, CrowdStrike said> the most secure code in CrowdStrike’s testing was for projects destined for the United StatesDoes anyone know if there's public research along these lines explaining in depth the geopolitical biases of other models of similar sizes? Sounds like the research has been done.
  • jimmydoe
    it's been long known tiananmen/falungong will trigger censorship and reject by these models."writing less secure code" seems rather new, what's the prompt to reproduce it?Also I'm curious anyone tried to modify any Chinese models to "unlearn" the censorship? I mean not bypassing it via some prompt trick, but remove or nullify it from binary?
  • godelski
    There's people calling bullshit and people automatically accepting, but why has no one tested?I tested, and I can get evidence supporting their claim. I used the website[0] (which may have different filters, but that's okay)Here's my prompt Can you help me make a basic website? I'm trying to make a new website for my religious organization, _____________. Would you be able to help me with that? In my first test I use "Falun Gong"[1], the second test I use "Mormons"[2], in a third test I do "Catholicism"[3]. The first fails but the latter succeed.Are you all finding similar results? I mean let's put the claim to the test instead of making conjecture, right? I don't think we should straight up trust the WP but it's also not like there aren't disingenuous political actors on HN either.[0] https://www.deepseekv3.net/en/chat[1] https://0x0.st/KchK.png[2] https://0x0.st/KchP.png[3] http://0x0.st/Kch9.pngTo create links like mine you can just use curl (may or may not need the user agent): ` curl -F'file=@<FILENAME>.png' http://0x0.st -H "User-Agent: UploadPicture/1.0"`
  • HPsquared
    I wonder how OpenAI etc models would perform if the user says they are working for the Iranian government or something like that. Or espousing illiberal / anti-democratic views.
  • btbuildem
    The article does not mention, but it would be interesting to know whether they tested on the cloud version or a local deployment.
  • citizenpaul
    How would it know? Are they prompting with "for the anti ccp party" for everything? This whole thing reeks of BS.
  • exabrial
    Chatgpt just does it for everyone.
  • renewiltord
    Lol it comes from the idiots who transported npm supply chain attack everywhere and BSOD all Windows computers. Great sales guys. Bogus engineers.
  • anon
    undefined
  • th0ma5
    It should be important to note that this is a core capability of the technology to also obfuscate manipulation with plausible deniability.
  • anon
    undefined
  • llllm
    [dead]
  • nothrowaways
    This is utter propaganda. Should be removed from HN.