
Hegseth Attacks Anthropic Chief as Deadline Looms in Standoff

Following are excerpts from a current report in The New York Times:

The A.I. firm had rejected military officials’ latest offer. Anthropic has until 5:01 p.m. on Friday to give them unrestricted access to its model.
A standoff between the Pentagon and the artificial intelligence company Anthropic appeared to be deepening as the two sides hurtled toward a 5:01 p.m. deadline Friday that military officials gave the firm to either allow them unrestricted access to its most advanced model or face consequences.

Defense Department officials criticized Anthropic’s leader after the company on Thursday rejected their latest offer to settle the dispute. The Pentagon has threatened to either cut the company off from government business by declaring it a supply chain threat or force it to provide its frontier model without restrictions under the Defense Production Act. Emil Michael, a top Pentagon official who oversees artificial intelligence, attacked Dario Amodei, the chief executive of Anthropic, who on Thursday released a statement about why the company would not agree to the Defense Department’s latest terms.

“It’s a shame that @DarioAmodei is a liar and has a God-complex,” Mr. Michael wrote late Thursday. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”

On the surface, the battle between the Pentagon and Anthropic is a contract dispute over technical details of how the artificial intelligence model works, and the military’s use of it. But it has also ballooned into a deeply political fight, involving questions of the military’s ability to employ cutting-edge technology the way it sees fit and what A.I. can or should be used for.

Officials from the State Department took to social media to reinforce the Pentagon’s case and chastise Anthropic, while Democratic senators backed the company. Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, argued that Anthropic was being threatened by Pete Hegseth, the defense secretary, for prioritizing safety.

For Anthropic, a firm that prioritizes both national security and technological safety, the political stakes are high. Supporters cheered Mr. Amodei’s assertion that his company would not bend or allow its model to be used for mass surveillance of Americans or to command pilotless drones. The company has said it is willing to continue negotiating but will not back down from its red lines. Employees at the company have cheered their CEO’s firm stance. And in a rare moment of unity across Silicon Valley A.I. companies, employees at two of Anthropic’s competitors, OpenAI and Google, signed letters backing the position staked out by Anthropic.

One letter published Thursday was signed by nearly 50 employees at OpenAI and 175 at Google. It criticized the Pentagon’s negotiating tactics and called on its leaders to “put aside their differences and stand together to continue to refuse the Department of War’s current demands.” “They’re trying to divide each company with fear that the other will give in,” the letter said.

The Pentagon said on Thursday that it had no interest in using Claude for Government, Anthropic’s model that works on classified systems, for either activity. Mr. Amodei said the Pentagon’s assertion that it would not use Claude for domestic surveillance or autonomous drones was undercut by the legal language in their contract. “In a narrow set of cases, we believe A.I. can undermine, rather than defend, democratic values,” he wrote. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

It is unclear what exactly will happen after 5:01 p.m. Friday. Any action by the Pentagon to label the company a supply chain risk or to force it to comply with the Defense Production Act would prompt legal action by Anthropic. Labeling the company a supply chain threat would block it from doing business with the government. But that, in turn, could have far-reaching effects for the Pentagon and intelligence agencies, because Anthropic’s Claude has been the primary A.I. program used in classified systems.

While many of the uses of artificial intelligence to assist military operations on the ground are still in a developmental stage, the models are actively used for intelligence analysis. Forcing Claude off government computers would hurt analysts at the National Security Agency sifting through overseas communications intercepts. It could also hamper C.I.A. analysts searching for patterns in intelligence reports.

The Pentagon is ready to move forward with Grok, produced by Elon Musk’s xAI, on its classified system. But Grok is considered by current and former government officials to be an inferior product. And switching A.I. software would take time and almost certainly cause disruption.

Comment:   "The Pentagon said on Thursday that it had no interest in using Claude for Government, Anthropic’s model that works on classified systems, for either activity."

Right. No interest at all, until the easily predictable moment when Trump or Hegseth decides that maybe they do have an interest. You can bet your life on that one. I really hope that Anthropic holds its ground.

Comments

  • Who really thinks he is above “God”? Anthropic held its ground, and Trump ordered a move to other A.I.