Comments (495)
- grey-area: This has much broader implications for the US economy and rule of law in the US. If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for perceived lack of loyalty, why not any other company that displeases the administration, like Apple or Amazon? This marks an important turning point for the US.
- 5o1ecist:
  > We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
  This is a trap. Two, I guess, but let's take the first one: domestic mass surveillance. Domestic. Remember the Eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...
  Expanding:
  > These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.
  Banning domestic mass surveillance is irrelevant. The Eyes agreements allow the participating countries to share data with each other. Every country spies on every other country, and every country tells every other country what it has gathered. This renders laws that prevent the state from spying on its own citizens irrelevant; they serve only as evidence of mass manipulation.
- doodlebugging:
  The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.
  All of this should remain a bridge too far, forever.
  EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans, as happened not long ago. A few years back, the OPM hack gave them all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel, or to hold them hostage with threats against the things they value most, so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.
  Of course, we already know what happens when an adversary employs these techniques, and that is why we are where we are right now.
- thimabi: The problem with forcing public policy on companies is that companies are ultimately made of individuals, and surely you can't force public policy down people's throats. I'm sure nothing good can come of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent, trying to make them bend to the government's wishes instead of actually fostering innovation in the very competitive AI industry.
- dang:
  Here's the sequence (so far) in reverse order - did I miss any important threads?
  Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)
  I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)
  President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)
  Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)
  Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)
  The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)
  Tech companies shouldn't be bullied into doing surveillance - https://news.ycombinator.com/item?id=47160226 - Feb 2026 (157 comments)
  The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)
  US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)
  Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)
- kace91: Among other consequences, if Anthropic ends up being killed, it's going to be just another nail in the coffin of trust in America. Companies that subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it's safe to depend on other American companies.
- ArchieScrivener: The USA showed itself to be a command economy that uses 'private enterprise' as a facade of legitimacy during Covid. Without government spending, employment, and contracts, the USA would have net negative growth. Now the DoD, by far the largest budgetary expense for the taxpayer, wants us to believe they don't have a better AI than current industry? That is a double-edged admission: either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war-fighting capabilities. Either way, it is beyond time to reform the military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains given military needs in various countries (Taiwan and Thailand).
- davidw: "We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
- largbae: The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's box. If you don't want it opened, stop making it?
- Meekro:
  I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.
  Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
  What, then, is this really about?
- foota: Well, that aged poorly.
- celltalk: Wouldn't it be ironic if the US used open-source Chinese models for domestic mass surveillance and autonomously killing people without human oversight... democracy at its best.
- dataflow:
  Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?
  Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even just loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.
  Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.
  P.S. I fully realize that raising these concerns might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.
- culi: Before you leave a comment about how meaningless this is unless they do XYZ, please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you, as an outsider, can help support these organizers.
- GaryBluto: If the DoW/DoD wants Anthropic, they'll get Anthropic, whether we know about it publicly or not. It's not unlikely that they're already working together and just putting on a show for the public.
- rabbitlord: I am not a fan of the Anthropic guys, but this time I stand with them. We all should.
- jurschreuder: They always wanted it to be Grok; Grok is the only, as they call it, "not woke" AI.
- david_shaw:
  I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.
  I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
  It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
- codepoet80: Nicely done. Hold this line — there’s got to be one somewhere.
- lightyrs:
  » Have there been any mistakes in signature verification for this letter?
  » We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.
- sourcegrift: Cute. I will also sign this, since there are only upsides of good optics and no downsides. Let me know when any of them resigns after the companies inevitably take the million-dollar contracts.
- txrx0000:
  This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
  It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
  Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
  Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are Machiavellian out of necessity). This is humanity's best chance at survival.
- motbus3: The important thing to know is that no one wants a conflict. Don't be used for that. Don't accept that. We protect our families when we are home. That's all everybody wants.
- _aavaa_: Yes, take disparate sets of employees and, like, oh I don't know, unionize while you still have power.
- conductr: You can't be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of "please don't use it like that". You invented a genie and let him out of the bottle.
- fschuett: Ted Kaczynski was right about technology.
- mitch-flindell: The primary purpose of these products is mass surveillance; why else would they be allowed to be built?
- tomcam: Please take this question at face value. I tend to be slightly pro Defense Department in this context, but it is not a strongly held belief. What I have understood is that since its very inception, Google has been doing massive amounts of business with the war department. What makes this particular contract different? I really am trying to understand why these sentiments now.
- hedayet: Just one thing - unless you're at a principal level or higher, don't quit as long as your conscience can take it. You'll be replaced by 10 other people overnight.
- abhijitr:
  The book "On Tyranny: 20 Lessons from the 20th Century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance; people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.
  I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.
- zahlman: Is there a particular reason why the actual letter content requires JavaScript to load while everything else is readable?
- PostOnce:
  My take is that none of the AI companies really care (companies can't care); they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.
  Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true god in this world.
- mythz: These two exceptions shouldn't have to be disputed. At this point I'd go so far as to say I wouldn't trust my AI history to any company that caves to DoD demands for mass domestic surveillance or fully autonomous weapons. Your AI will know more about you than any other company does; I'm not going to trust that to anyone who trades ethics for profit.
- MattDaEskimo: This was a brave, heartwarming read. Thank you to the teams.
- latencyhawk: Well, I think I will get the 200 sub.
- mortsnort: Kneecapping the country's best AI lab seems like a bad way to win at the cyber.
- rayiner:
  This seems squarely within the purpose of the Defense Production Act: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
  "Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."
  If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?
- driverdan: This is a nice gesture but completely meaningless. There is absolutely no commitment in this. "We hope our leaders..." has no conditions, no effects. If you're an employee and actually believe in this, you need to commit to something, like resigning.
- Quarrel: I know it is a serious topic, but before I clicked on it, I assumed this was going to be about prime numbers... Maybe the domain can get reused after this stuff is over.
- bcooke: I'd love to see this extended to any American, regardless of past/present employment with Google or OpenAI.
- siliconc0w: We need key AI researchers at these companies to speak out - execs will not care otherwise. If Jeff Dean made this a red line, it might matter.
- focusgroup0:
  > domestic mass surveillance and autonomously killing people without human oversight
  Spoiler alert: this is already happening. Do labs in China have a choice in the matter?
- himata4113: Does this mean there is a non-zero chance we will get some kind of Grok + Chinese model mix that's used across the entire US military? Ironic, isn't it?
- bottlepalm:
  We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR, then you already know this.
  The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.
- tgv: So now they suddenly develop a conscience? Killing education (and by implication actively dumbing down the future world), putting large parts of the labor market at risk, porn fakes, and destroying artistic creation are all acceptable in the name of profit, apparently.
- mftb: Stand your ground.
- gcanyon: No problem! The DoD^HW will just use DeepSeek! (I wish this were a joke.)
- trinsic2: I'm missing the actual letter. I think that part of the content is hidden behind some JavaScript. Can someone post it?
- spuz: They should be collecting signatures from employees at xAI. I think they're the most likely to fill the space left by Anthropic.
- ipaddr: And people were wondering how OpenAI will find profitability.
- paradoxyl: More Far Left treason, documented.
- theahura: OpenAI is nothing without its people.
- yayr: It's good that there are still empathic humans in the decision and build chain when it comes to AI systems...
- dmix: Not using Claude only weakens the state. Just don't oblige.
- ripped_britches: No surprise to have not heard anything from xAI.
- snickerbockers:
  > We are the employees of Google and OpenAI, two of the top AI companies in the world.
  Does this mean you dipshits are going to stop your own domestic surveillance programs? You sold your souls to the devil decades ago; don't pretend like you have principles now.
- qup: Hegseth shared a Trump tweet a few hours ago saying they're going to quit doing business with Anthropic. https://x.com/i/status/2027487514395832410
- anonnon:
  > Signed,
  The people who:
  > steal any bit of code you put on the internet regardless of the license you use or its terms, then use it to train their models, then turn around and try to sell it to you
  > made it so you can't afford new, more powerful computers or smartphones anymore, or perhaps even just replacements for the ones you already have, thanks to massive GPU, DRAM, SSD, and now even HDD shortages
  > flood the internet with artificial, superficial content
  > aggressively DDoS your website
  Real pillars of society.
- anigbrowl:
  > We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
  [90 minutes later] Ah! Well, nevertheless.
  OK, this is a cheap shot on my part. But still: we hope? What kind of milquetoast martyrdom is this? Nobody gives a shit about tech workers as living, breathing, human moral agents. You (a putative moral actor signed onto this worthy undertaking) might be a person of deep feeling and high principle, and I sincerely admire you for that. But to the world at large, you're an effete button pusher who gets paid mid-six figures to automate society in accordance with billionaires' preferences, and your expressions of social piety are about as meaningful as changing the flowers in the window box high up on the side of an ivory tower. The fact that ~80% of the signatories are anonymous only reinforces this perception.
  If you want this to be more than a futile gesture followed by structural regret while you actively or passively contribute to whatever technologically accelerated Bad Things come to pass in the near and medium term, a large proportion of you (> 500 of the 648 current signatories) need to follow through and resign over the weekend. Doing so likely won't have that much direct impact, but it will slow things down a little (for the corporate and governmental bad actors who will find deployment of the new tech a little bit harder) and accelerate opposition a little (market price adjustments of elevated risk, increased debate and public rejection of the militaristic use of AI).
  Hope, like other noble feelings, doesn't change anything. Actions, however poorly coordinated and incoherent, change things a little. If your principles are to have meaning, act on them during the short window of attention you have available.
- jfengel: Good luck with that. I just don't see either Google or OpenAI listening to their employees on this. They might have their own reasons for not wanting to help build Skynet, but if they don't, I'm sure those employees can readily be replaced with somebody more compliant.
- ReptileMan: It is really nice to see employees creating the lists for the next round of layoffs themselves.
- renewiltord:
  Well, it looks like OpenAI will be working with the Pentagon: https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...
  My personal guess is that Sam Altman said he'd let policy violations go without a complaint and Dario Amodei said he wouldn't.
- HardCodedBias:
  So much insanity.
  Anthropic wanted a veto on the use of force by the USG. That is intolerable; no private party can have a veto over the sovereign. It is that simple.
  Anthropic should have just walked away (and taken the settlement lumps) when they realized that the USG knew. But no, they started some crazy campaign.
  This is so irrational of Anthropic. Purchasing managers across the US (and the world) now have to understand that while Anthropic has the best model on the planet, it is not rational (or, if you prefer, it is not rational in commonly understood ways). It is a risk and must be treated as such.
- chkaloon: Too late.
- dluan: Oops, turns out you will all be divided.
- lazzlazzlazz:
  The signatories of this site are leaping at a misguided opportunity for moral credit, when the reality is that they're getting whipped into a left-leaning frenzy.
  As Undersecretary Jeremy Lewin clarified today[1], these weighty decisions should not be made by activists inside companies, but by laws and legitimate government.
  [1]: https://x.com/UnderSecretaryF/status/2027594072811098230
- csneeky: Claude is better than GPT at much of this, at the moment. You really think the government is going to hamstring the engineering of weapons and intelligence capabilities by not using it?
- love2read: How is posting on this website with your full name not career suicide?
- yoyohello13: I hope Anthropic will survive this. If they don't, it will just be perfect proof that you cannot be both moral and successful in the US.
- ChrisArchitect: Previously: https://news.ycombinator.com/item?id=47175931
- imiric:
  The levels of irony in this case are staggering.
  The employees of these companies are complicit in creating the greatest data harvesting and manipulation machine ever built, whose use cases have yet to be fully realized. Yet when the government wants to use it for what governments do best - which was reasonable to expect given the corporate-government symbiosis we've been living in for decades - then it's a step too far?
  Give me a fucking break. Stop the performative outrage, and go enjoy the fruits of your labor like the rest of the elites you're destroying the world with.
- senderista: "We hope our leaders will put aside their differences and stand together"
- lovich: You're kinda already conceding some of your opponents' points when you use legally invalid names like "Department of War". I appreciate the sentiment, but don't pre-concede to your opposition by using their framing.
- blaze998:
  December 14, 2024:
  > After famed investor Marc Andreessen met with government officials about the future of tech last May, he was "very scared" and described the meetings as "absolutely horrifying." These meetings played a key role in why he endorsed Trump, he told journalist Bari Weiss this week on her podcast.
  > What scared him most was what some said about the government's role in AI, and what he described as a young staff who were "radicalized" and "out for blood" and whose policy ideas would be "damaging" to his and Silicon Valley's interests.
  > He walked away believing they endorsed having the government control AI to the point of being market makers, allowing only a couple of companies who cooperated with the government to thrive. He felt they discouraged his investments in AI. "They actually said flat out to us, 'don't do AI startups, like, don't fund AI startups,'" he said.
  ...keep making petitions, watch the whole thing burn to the ground when Trump decides to channel the Biden ideas in this field.
- moogly:
  We have international laws and rules of war. We have weapons treaties (well, some of them are expiring). Sure, not everyone is a signatory, or even follows the conventions they have ratified, but at least having these things in place makes it even remotely possible to categorize and document violations and start processes against rulebreakers and antihumanist actions.
  So I looked into what they cooked up in 2023, plus which countries signed it (scroll down for a link to the actual text). It's an extraordinarily pathetic text. Insulting, even.
  https://www.state.gov/bureau-of-arms-control-deterrence-and-...
- HWNDUS7: Sweet. Looking forward to another CTF season of He Will Not Divide Us. I love performative acts by wealthy Silicon Valley drags.
- verdverm: Use the feedback forms within their platforms to let the companies know your thoughts.
- nullbyte: "He will not divide us!"
- verisimi:
  It's great that people are taking a moral position regarding their work, and are seemingly prepared to take a bit of a risk in expressing themselves.
  However, if we're honest, Google has a long history of selling 'the people' out on domestic surveillance. There is even a good argument that this is what it was created for in the first place, given it was seeded with money from In-Q-Tel, the CIA venture capital fund. So, while I commend acting with your conscience in this (rather minor) case, and I'm glad to see people attempt to draw a line somewhere, what will this really come to? I strongly suspect this event itself is just theater for the masses, where corporates and their employees get to stand up to government (yay!). The reality is probably that all that is being complained about, and far worse, has been going on for years.
  How far would these signatories go? Would they be prepared to walk away from all that money? Will they stop the rest of the dystopian coding/legislation writing, or is that stuff still OK (not that evil)?
  Ultimately, is gaining the money worth the loss of one's soul? If you know better, and know that it is wrong to assist corporations and governments in cleaving people open for profit and control, but do it anyway for the house, private schools, holidays, Ferrari - only taking a stand in these performative, semi-sanctioned events - is this really the standard you accept for yourself? If so, then no problem. If not, what exactly are you doing the rest of the time? Are you able to switch your morality/heart/soul off? Judge yourself. If you find you are not acting in accord with yourself, everything is already lost.
- fzeroracer:
  It's rather amusing that this is the proverbial 'red line', not, y'know, everything else this administration has been tearing up and running roughshod over. Maybe this would've been less of an issue if companies had been more proactive about this bullshit in the first place?
  That's why it's hard for me to feel bad about companies suddenly finding themselves on the receiving end. They dug their grave inch by inch and are suddenly surprised when they get shoved into it.
- nilespotter:
  These models are weapons, whether the frontier providers' founders and their trite and lofty mission statements like it or not. Private individuals and private companies do not get to create a defensive weapon with unprecedented power in a new category in the US and not share it with the US military. You guys are batshit insane.
- alfiedotwtf: It would be funny if, in the end, the only ones left who didn't say no to Trump were Alibaba.
- krautburglar:
  You have 1) stolen everybody's shit and put it behind a paywall, 2) cornered the hardware market in some RICO-worthy offensive that has priced one of the few affordable pastimes for young people out of reach, and 3) changed your climate story (lie) on a dime and started putting horrible power-guzzling data centers on any strip of land within spitting distance of a power plant. I hope you all go out of business, and I hope it happens French Revolution style.
  Of course they were going to use it for military purposes, you spiritual abortions, and there is nothing your keyboard-soft hands can do about it.
- duped: The Department of War doesn't exist; don't meet the fascists on their own terms at any level. They don't debate or operate in good faith.
- nobodywillobsrv:
  I am really no longer impressed with Anthropic's safety stance. Do they have even a basic understanding of the different regimes of safety and of what allegiance to one's own state means? It would be fine to say they are opting out of all forms of protection against adversaries. But it feels like just more insanely naive tech-bro stuff.
  As someone outside the tech-bro bubble, in fintech in London, can somebody explain this in a way that doesn't indicate these are kids in a playground who think there is no such thing as the wolf? Again, opting out and specializing in tech that you are going to provide to your enemies AND friends alike is fine. That is a good specialization. But this is not what I hear. I hear protest songs, not the deep thinking of a thousand-year mindset.
- jackblemming: So big tech wants to court Trump with millions in donations, and now that the big bully they supported is bullying them... we're supposed to feel some kind of sympathy? Am I missing something here? Why did Anthropic get involved with the military in the first place?
- remarkEon:
  This whole episode is very bizarre.
  Anthropic appears to be situating themselves as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:
  > Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.
  So, the "any lawful use" language makes me think that Dario et al. have a basket of uses in mind that they feel should be illegal, but are not currently, and they want to condition further participation in this defense program on not being required to engage in such activity.
  It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this as terms of the contract itself. Perhaps the best place for that is Congress (good luck), but to remind: the adversary that these people are all thinking about here is the PRC, which does not give a single shit about anyone's feelings on whether it's ethical to allow a drone system to drop ordnance on its own.
  [1] https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART...
- politician:
  I simply do not understand why American tech companies and their employees raise such a hue and cry about supporting the military. For those of you who support their position: have you ever stopped to consider that your safe, comfortable lives of free speech and protests and TikTok and food and gas and Amazon next-day deliveries are enabled by a massive nuclear deterrent operated by the very military you oppose?
  It is just so disappointing to come here and read these naive takes. Yes, Anthropic should be compelled to support the military, using the DPA if necessary.
- hakrgrl:
  1.5 hours after this was posted, Sam Altman stated OpenAI will work with the DoW. So much for this waste of a domain name.
  https://x.com/sama/status/2027578652477821175
  "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network."
- mrcwinn: OpenAI employees, lol. You've lost utterly and completely. Even if you, as an individual, are a good person.
- know-how: [dead]
- techreader2: [dead]
- THESMOKINGUN: [dead]
- huflungdung: [dead]
- drsalt: [flagged]
- kledru: [flagged]
- civcounter: [flagged]
- piskov: [flagged]
- nemo44x: Correct. You will not be divided. You will likely be subtracted.
- kopirgan: We will not be divided! United in obeying only orders from woke governments, be it on gender ideology, "misinformation", "fact checking" or takedowns, cancellations, blackouts and bans.
- charcircuit:
  Imagine if a gun manufacturer sold a gun that you couldn't use against country X or Y. Private companies imposing such demands on our military should not be respected. Having weapons that can randomly detect a false positive and shut themselves down because they think you are using them wrong is a feature I would never want built in.
  I have also been against these terms of service restricting usage of AI models. It is ridiculous that these private companies get to dictate what I can or can't do with the tools. No other tools work like this. Every other tool is governed by the legal system which the people of the country have established.
- infamouscow: [flagged]
- hakrgrl: How cute, they bought a domain and everything.
- angusik: I'm here to support the Pentagon (: