5 key insights from Sam Altman's AMA on OpenAI's Pentagon agreement
Sam Altman joined X on Saturday evening and encouraged users to inquire about OpenAI's agreement with the Pentagon.
On Friday evening, Altman revealed that his company had secured a deal with the Department of War to use its AI models.
OpenAI's agreement came after Anthropic rejected an ultimatum over the terms for using its advanced model, Claude, for extensive domestic surveillance and fully autonomous weaponry.
Here are 5 key insights from Altman's AMA.
The OpenAI-Pentagon deal was rushed, and Altman admits it doesn't look good
The agreement with the Pentagon was finalized quickly "to reduce tensions," Altman shared on X, adding that the deal was "hurried."
He conceded, however, that the "appearance doesn't look favorable" for OpenAI.
"If we are correct and this results in easing the tension between the DoW and the industry, we will seem insightful, and like a company that endured considerable difficulty to support the industry," he expressed.
"If not, we will remain perceived as rushed and reckless," he added.
Altman also mentioned noticing "positive indicators" for OpenAI's future in this matter.
OpenAI accepted the Pentagon deal because it was comfortable with the contract's terms
Altman was questioned about why the Department of War chose OpenAI over Anthropic. He mentioned he couldn't speak for his competitor but speculated on why OpenAI secured the contract first.
"Initially, I noticed reports suggesting both parties were very close to an agreement, with both sides keen to finalize one," Altman wrote. "I've observed how high-stress negotiations can rapidly unravel, and that was likely a significant factor here."
He noted that OpenAI and the Department of War "found comfort in the contractual terms" as well.
"I suspect Anthropic desired more operational control than we did," he added.
OpenAI has three red lines but is willing to adjust them as technology progresses
Altman mentioned that OpenAI has "three red lines." However, these could adapt — and more red lines could be introduced — as technology evolves and "new risks" emerge.
"However, it is crucial to note: we are not elected. We have a democratic process where we choose our leaders through elections," Altman wrote. "We possess expertise in the technology and comprehend its boundaries, but it is concerning for a private company to dictate ethics in critical areas."
"It seems reasonable for us to determine how ChatGPT should address a sensitive question," he continued. "But I am reluctant to be the one deciding the course of action if a nuclear threat approaches the US," he added.
Altman mentioned that OpenAI had been engaging with the Department of War for "several months" on non-classified projects, before "matters intensified on the classified side."
"We found the DoW accommodating in meeting our needs, and we aim to support them in their critical mission," he expressed.
"I believe the current trajectory poses a risk for Anthropic, healthy competition, and the US," Altman wrote on X as well. "We negotiated to ensure comparable terms would be extended to all other AI laboratories."
He also appealed for "understanding" for the Department of War, given their "vital mission."
