AI leaders split from Elon Musk over safety and strategy
A growing divide is emerging in the artificial intelligence (AI) industry, with leading figures increasingly positioning themselves in opposition to Elon Musk’s aggressive approach to innovation, according to former Tesla president Jon McNeill.
Speaking to Axios, McNeill said Musk’s strategy — characterised by rapid execution, minimal constraints, and a willingness to challenge established safeguards — is seen by many AI leaders as a potential risk rather than just a competitive edge.
McNeill, now co-founder of DVx Ventures, described a widening philosophical split in the industry, particularly involving Sam Altman of OpenAI and Dario Amodei of Anthropic.
According to McNeill, Altman’s relationship with Musk has deteriorated to the point that Altman now deliberately steers OpenAI’s approach away from Musk’s.
The tension reflects deeper disagreements over AI safety and governance. While companies like OpenAI and Anthropic emphasise guardrails and cautious deployment, Musk — through his startup xAI — has pushed for fewer restrictions, promoting its chatbot Grok as more open and less filtered.
Critics argue that such an approach risks harmful outputs, pointing to past issues with Grok generating controversial content.
Musk has attributed these incidents to user manipulation and said fixes have been implemented.
Despite concerns, McNeill noted that Musk’s boundary-pushing style has historically driven results, from streamlining Tesla’s purchasing process to unconventional campaign tactics in the 2024 US election cycle.
The rift also has deep roots: Musk co-founded OpenAI alongside Altman in 2015 before departing in 2018, and has since become one of its most vocal critics.
As the race toward advanced AI intensifies, McNeill suggests the competition is no longer just about technology — but about fundamentally different visions for how it should be built and controlled.
