<h1>Anthropic vs the Pentagon: Why AI firm is taking on Trump administration | Government News</h1>

<p>A row is simmering between the United States government and Anthropic, one of the tech companies that develop artificial intelligence (AI) tools for defence and civilian uses.</p>

<p>According to recent reports, Anthropic's Claude software was used in a US military operation that resulted in the abduction of Venezuelan President Nicolas Maduro in January this year.</p>

<p>US Defense Secretary Pete Hegseth has given the company until Friday to loosen its rules about how the Pentagon can use its AI tools, or risk losing its government contract, The Associated Press and Reuters news agencies reported on Tuesday, citing unnamed sources.</p>

<p>But Anthropic is refusing to back down over safeguards that prevent its technology from being used for US domestic surveillance or to programme autonomous weapons that can hit targets without human intervention.</p>

<h2 id="what-is-anthropic">What is Anthropic?</h2>

<p>Anthropic is an AI company founded in 2021 by former OpenAI executives.</p>

<p>It was the first AI developer whose tools were used in classified operations by the US Defense Department, which is housed at the Pentagon in Washington, DC.</p>

<p>Anthropic is best known for building Claude, a popular large language model (LLM), and has rapidly become one of the most prominent AI development companies.</p>

<p>An LLM is a type of AI technology that generates text, visual or audio output similar to content created by humans after analysing massive datasets such as books, archives, websites, pictures and videos.</p>

<p>For military and defence use, LLMs can summarise large volumes of text, analyse data, translate, transcribe and draft memos. In theory, they can also be used to support autonomous or semi-autonomous weapons systems, which can identify and hit targets without the need for human instruction. However, most AI companies have terms of use that prohibit this.</p>

<p>Anthropic positions itself as a "responsible" developer in the AI landscape. On its website, the company describes itself as a "Public Benefit Corporation" committed to the "responsible development and maintenance of advanced AI for the long-term benefit of humanity".</p>

<p>In November, the company alleged that a Chinese state-sponsored hacking group had manipulated Claude Code in an attempt to infiltrate about 30 targets globally, including government agencies, chemical companies, financial institutions and tech giants. Some of these attempts were successful.</p>

<p>Earlier this month, Mrinank Sharma, an AI safety researcher at Anthropic, resigned from his position over concerns about the use of AI.</p>

<p>In a statement posted on his X account on February 9, Sharma wrote: "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."</p>

<p>"Moreover, throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too," he added.</p>

<h2 id="which-other-ai-companies-does-the-us-military-work-with">Which other AI companies does the US military work with?</h2>

<p>The Pentagon announced last summer that it was awarding defence contracts to four AI companies – Anthropic, Google, OpenAI and xAI. Each contract is worth up to $200m.</p>

<p>Anthropic was the first AI company to be approved for classified military networks, on which it reportedly works with partners such as US software company Palantir Technologies, which has been criticised for its links to the Israeli military. Elon Musk's xAI, which operates the Grok chatbot, says Grok is also ready for use in classified settings, according to an unnamed senior Pentagon official, AP reported.</p>

<p>But the Trump administration wants to be able to use these companies' products without restrictions. Hegseth said his vision for military AI systems is that they operate "without ideological constraints that limit lawful military applications", before adding that the Pentagon's "AI will not be woke".</p>

<h2 id="why-is-anthropic-at-odds-with-the-pentagon">Why is Anthropic at odds with the Pentagon?</h2>

<p>Sources reported that at a meeting on Tuesday, Hegseth gave Anthropic CEO Dario Amodei until 5pm (22:00 GMT) on Friday to agree to provide Anthropic's AI models for use on the Pentagon's new internal network with fewer restrictions.</p>

<p>Officials at the US Defense Department warned they could designate Anthropic a supply chain risk or use the Defense Production Act to give the military more authority to use the company's products even if Anthropic does not approve of how they are used, according to a person familiar with the meeting and a senior Pentagon official, neither of whom was authorised to comment publicly and both of whom spoke on condition of anonymity, AP reported.</p>

<p>Amodei has also previously raised ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent.</p>

<p>"A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow," he wrote in an essay last month.</p>

<p>The person familiar with the meeting called its tone "cordial" but said Amodei refused to budge on two key issues – fully autonomous military targeting operations and domestic surveillance of US citizens.</p>

<p>In a podcast appearance on Tuesday in which he explained his refusal to give in to the Pentagon's demands, Amodei reiterated his concerns about "autonomous drone swarms" – drones that can attack targets without human input – and mass surveillance.</p>

<p>"The constitutional protections in our military structures depend on the idea that there are humans who would disobey illegal orders with fully autonomous weapons," Amodei said, noting that autonomous drones would not be able to make such a distinction.</p>

<p>The Pentagon objects to Anthropic's ethical restrictions because military operations require tools that do not have built-in limitations, the senior Pentagon official said. The official argued that the Pentagon has issued only lawful orders and stressed that ensuring Anthropic's tools are used legally would be the military's responsibility.</p>

<h2 id="how-was-claude-used-in-venezuela">How was Claude used in Venezuela?</h2>

<p>On January 3, US special forces abducted Maduro, who remains in US custody and faces trial on drugs and weapons charges in New York.</p>

<p>US media reports revealed on February 14 that Anthropic's Claude had been used in the operation to strike Caracas and capture Maduro.</p>

<p>An unnamed Anthropic official approached by The Wall Street Journal declined to comment on whether Claude, or any other AI model, was used in any operation. However, the official did say that any use of Claude in the private sector or by the government would need to comply with Claude's usage policies.</p>

<p>According to the usage policies listed on Anthropic's website, Claude cannot be used for surveillance, the development of weapons or "inciting violence".</p>

<p>A total of 83 people, including 47 Venezuelan soldiers, were killed during the US special operation in Venezuela.</p>

<p>US media have also reported that Anthropic has partnered with Palantir Technologies, whose tools are also used by the Defense Department and by federal law enforcement agencies.</p>

<p>It is unclear exactly how Claude was used during the raid on Caracas in January, but AI tools can be used to control drones, analyse images and summarise intercepted communications.</p>

<p>In July 2025, Francesca Albanese, the United Nations special rapporteur on human rights in the occupied Palestinian territory, released a <a href="https://www.ohchr.org/en/documents/country-reports/ahrc5923-economy-occupation-economy-genocide-report-special-rapporteur" target="_blank" rel="noopener">report</a> mapping the corporations aiding Israel in the displacement of Palestinians and its genocidal war on Gaza, in breach of international law.</p>

<p>The report found that Palantir had expanded its support to the Israeli military since the start of the war on Gaza in October 2023.</p>