
Comments (296)

  • lebovic
    I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2]. I think many people on HN have a cynical reaction to Anthropic's actions because of their own lived experiences with tech companies. Sometimes, that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions. I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making it at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
    [1]: https://news.ycombinator.com/item?id=47174423
    [2]: https://news.ycombinator.com/item?id=47149908
  • parl_match
    Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.
  • hank2000
    Stay strong Anthropic. We just like you more for this.
  • steve_adams_86
    Anthropic is welcome to set up shop here in Canada! I hear Victoria, BC is great. Absolutely brimming, overflowing with technology talent.
  • silisili
    Not to intentionally sidetrack the conversation, but when did we start calling service members 'warfighters'? I've been seeing it a lot lately, but I don't remember ever really seeing it before. Do members of the military prefer this title?
  • seizethecheese
    This part stood out to me: “To the best of our knowledge, these exceptions have not affected a single government mission to date.” I had assumed these exceptions (on domestic surveillance and autonomous drones) were more than presuppositions.
  • egonschiele
    Heck yeah, so happy to see Anthropic fighting. This is what real leadership looks like. I'd love to see the same from Google and OpenAI.
  • soared
    Is this the first company to actually stand up, face to face, to the current administration?
  • byang364
    I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, then had it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.
  • mythz
    I had cancelled my Claude sub after they banned OAuth in external tools, but just renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits; happy to support them as a customer whilst they keep them.
  • netinstructions
    This is kind of crazy. Instead of just cancelling a mutually agreed-upon contract when Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company as a "supply chain risk," a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).
    It's also incoherent that the DoD/DoW was threatening to invoke the Defense Production Act OR to classify them as a "supply chain risk". They're either too uniquely critical to national defense, OR they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use.
    How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms are suddenly questioned months later and can be used in such devastating ways against them? Setting morals/principles aside, how does it make rational business sense to work with a counterparty that behaves this way?
  • Intermernet
    What's stopping the government from using the usual nasty tricks the world has known about for decades? The DPA? The All Writs Act? Force them to comply and then prevent them from talking about it with NSLs?
    I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic doesn't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're OK with those use cases so long as the "safety barriers" are still up. Not really the best look, IMHO.
    So what happens when we all get rosy-eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the government uses the various instruments at its disposal to just force Anthropic to do what it wants, and then forces them to never disclose it?
    Did the world learn nothing from Snowden?
  • zmmmmm
    Congrats Anthropic, you deserve to be applauded for this. Seeing a company being willing to stand up to authoritarianism in this time is a rarity. Stay strong.
  • mkl
    > we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights
    Mass surveillance of people constitutes a violation of fundamental rights. The red line is in the wrong place.
  • jleyank
    Just don’t help Big Brother see more. If your job leads to such results, think hard about whether that’s what you should be doing. Perhaps it’s time, or even past time, to think of ways of screwing up their training sets.
  • rglover
    Was bracing for another rug pull around all this, but kudos to Dario and co for their continued vigilance. Refreshing to see.
  • ndgold
    Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.
  • lovehashbrowns
    Happy to be a paying Anthropic customer right now.
  • wewewedxfgdf
    Remember "small government"?
  • Jordan-117
    Why is DoD contracting with Anthropic exclusively rather than OpenAI or Google? Their models are all roughly as powerful and they seem both more capable and more willing to cozy up with the military (and this administration) than a relatively scrappy startup focused on model sentience and well-being. Hell, even Grok would be a better fit ideologically and temperamentally.
  • stego-tech
    I am genuinely shocked that a tech company actually stood on principle. My doubts about AI, Anthropic, and Mr. Amodei remain, but man, I got the warm and fuzzies seeing them stick to their principles on this - even if one clause (autonomous weapons) is less principled and more, “it’s not ready yet”.
  • astrolx
    Generally, I am supportive of this move. One thing leaves me nonplussed as a non-US citizen: the "mass domestic surveillance of Americans" exception. That means Claude can still be used for mass surveillance of everybody else on the planet, right?
  • sneilan1
    I'm a lot happier now being an anthropic customer.
  • jryio
    This is an appropriate response to unreasonable behavior. I applaud Anthropic's candor in the public sphere. Unfortunately, the counterparty is unworthy of such applause.
  • jkells
    But of course, wholesale surveillance of the rest of the world is fine. I guess our democracies don't count and we don't have any rights.
  • tlogan
    This basically means that the government is already using OpenAI, Gemini, and other AI systems for large-scale surveillance. They just wanted to add Anthropic to the list, and Anthropic said no. The most important point of this story is that this is already happening. And it will likely continue regardless of who is elected.
  • throw310822
    From the statement:
    "Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
    In practice, this means: If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
  • seydor
    This has been an exceptional publicity campaign for Anthropic, among others.
  • monomyth
    Based on the replies so far, Hacker News is ideologically captured.
  • tw04
    I just want to point out how much like a 1984 fascist dictatorship it still feels to call it “the Department of War”. That’s not normal. None of this is normal.
  • ParentiSoundSys
    Many conservative commentators and Palmer Luckey have been all over Twitter saying, "it's not Anthropic's job to set policy," which reminds me of the classic tune from Tom Lehrer:"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
  • wjessup
    Any commentary about how adversaries won't have regulations?
  • markvdb
    Hours ago, OpenAI raised $110B.
  • rorylawless
    Could this escalate to the point that Anthropic exits the US and sets up shop elsewhere? Or would the company cease to exist before it got to that point?
  • solenoid0937
    The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.
  • Waterluvian
    > Allowing current models to be used in this way would endanger America’s warfighters and civilians.That’s okay! The use of autonomous weapons is only risky for the civilians of the country you’re destabilizing this week!
  • mikeyouse
    Remember when A16Z and a bunch of other muppets insisted they had to back Trump because Biden was too hostile to private companies, especially AI ones? Incredible.
  • ok_dad
    Don't worry, OpenAI will kneel for the king:
    > Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
    https://news.ycombinator.com/item?id=47188698
    Fuck this authoritarian bullshit.
  • engineer_22
    > If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.
    In theory. In practice, if your biggest customer tells you to drop Anthropic, you listen to them.
  • eaglelamp
    Anthropic knew they were going to lose this contract to OpenAI, and this is an attempt to salvage publicity from the loss. This administration is comfortable with blatantly picking winners, and OpenAI is better connected with the admin than Anthropic.
  • Justitia
    I'm not sure if OpenAI knows that scooping this might hurt their brand by a lot.
  • tushar-r
    This makes it seem like they really like the Anthropic product and are using it quite a bit more than the others? Or is it just me making random connections?
  • jackyli02
    People can still brush this off by saying Anthropic is doing this to create more buzz for its next round. But they are taking unpopular stances and could be burning bridges. Simply look at PLTR: it's obviously more lucrative to lean the other way.
  • tehjoker
    You know what? I have not seen an American company take a stand like this… uh, ever. I don’t think there should be any engagement with the military whatsoever, but I will offer kudos to Anthropic. I don’t really expect this to last, but if it does I will happily continue to offer this kudos on an indefinite basis.
  • 50208
    This is what fighting early-stage fascism looks like.
  • bawolff
    I'm of the opinion that Anthropic's "moral" stances are bullshit, not particularly coherent when you dig deep, and more about branding. If so, this is grade-A marketing. They want to present themselves as moral. What better endorsement than being rejected by the US military under Trump? You get the people who hate Trump and the people who hate the military in one swoop.
    At the same time, it's kind of a non-story. Anthropic says it doesn't want its products used in certain ways; the US military says fine, you can't be part of the project where we're going to make the AI do those things. Isn't that a win for both sides? What's the problem? It would be like someone in a boycott movement being surprised that the company they are boycotting doesn't want to hire them.
  • nseggs
    There is literally no world where I take seriously any organization that has been strong-armed by fucking Pete Hegseth, lmao. Thank you, Anthropic, both for building the best models for general engineering and for having a fucking backbone.
  • lazzlazzlazz
    Turns out that Dario was lying about not having heard from the Department of War, as reported by Undersecretary Emil Michael:https://x.com/USWREMichael/status/2027568070034608173
  • vcryan
    Amazing that Pete Hegseth is even a person anyone would ever need to take seriously.
  • dbg31415
    ChatGPT wasted no time bending over backwards to appease Trump. "We'd sure love to turn our AI into a mass surveillance tool! Please, aim it at the American population! And kill bots, we can't wait!"
  • SilverElfin
    This is what real leadership looks like. Not the silence and complicity that you see from big tech, who regularly bend the knee and bestow bribes and gifts onto the Trump administration.
  • Rapzid
    Hegseth is the ultra-unqualified Secretary of Defense. Defense. JFC, even when "pushing back," everyone is capitulating.
  • joeross
    Hegseth is so pathetic.
  • verdverm
    Title is off: "Statement on the comments from Secretary of War Pete Hegseth". This is another statement, to their customers, about Hegseth's social post, but it will perhaps result in further escalation, because you know the other side doesn't like having its weaknesses pointed out.
  • collinmcnulty
    This is an extremely polite “fuck you, make me”. It’s good to see that they have principles, and I strongly suspect that Anthropic will come out on top here if they stand firm.
  • water9
    I fundamentally do not like the idea of one adult determining what knowledge another adult is entitled to. It’s the Library of Alexandria all over again.
  • chirau
    Doesn't the NSA have a backdoor into all these companies by default? I could have sworn I read somewhere, years ago, that the government demands a backdoor into all US companies if it can't get in on its own.
  • erelong
    I think Anthropic sounds well-intentioned but is blundering this incident in a big way. They really needed to work toward a deal instead of isolating themselves with a "principled stance" that sets up a competitor to swoop in and take the contracts they had.