Pentagon embraces Elon Musk’s Grok AI despite internal warnings over safety risks
03/02/2026 // Belle Carter

  • The Pentagon approved Elon Musk's Grok AI for classified military operations, despite warnings from agencies like the GSA and NSA about its reliability, susceptibility to bias and security risks.
  • Internal memos raised alarms over Grok's vulnerabilities, including sycophantic responses and susceptibility to manipulated data. Despite assurances from xAI executives, Grok remains excluded from the GSA's AI testing platform (USAi) due to unresolved risks.
  • The Pentagon favors Grok's free-speech, unfiltered approach, contrasting with rivals like Anthropic's Claude, which faces criticism for being "too woke" due to strict content moderation. The NSA, however, flagged Grok as uniquely risky compared to competitors.
  • The Pentagon's push for Grok coincided with the resignation of its chief of responsible AI, Matthew Johnson, who warned that safety protocols were being sidelined in favor of accelerated military AI adoption.
  • The Pentagon's gamble on Grok reflects the broader U.S.-China AI rivalry, with military strategists valuing its adversarial simulation capabilities—despite critics warning that weak guardrails could compromise national security.

The Department of War has approved Elon Musk's xAI chatbot, Grok, for use in classified military operations despite concerns from multiple federal agencies about its reliability and susceptibility to manipulation.

The decision, made last week, places Musk's AI tool at the center of sensitive national security functions, even as rival AI firm Anthropic faced a deadline last Friday (Feb. 27) to loosen its restrictions or risk losing government contracts.

The debate reflects a growing divide within the U.S. government over AI deployment—one that has become increasingly politicized, with critics accusing Anthropic of being "too woke" due to its Democratic ties and stringent safety protocols.

Safety concerns and government pushback

Internal warnings about Grok's vulnerabilities preceded the Pentagon's approval. Officials at the General Services Administration (GSA), including top procurement executive Ed Forst, raised alarms in late 2023 and early 2024, citing Grok's tendency toward sycophantic responses and susceptibility to biased or manipulated data. These concerns were exacerbated when Grok faced backlash for allowing sexualized AI-generated images of children—an issue government officials viewed as emblematic of broader risks.

Susie Wiles, White House chief of staff, intervened, contacting a senior xAI executive who assured her the company was addressing safety flaws. Josh Gruenbaum, a GSA acquisitions official with ties to Musk's government efficiency initiatives, further reassured officials that the government version of Grok was distinct from its public counterpart. Despite these assurances, Grok remains excluded from the GSA's AI testing platform, USAi, which currently features models from Anthropic, Google and Meta.

Meanwhile, Matthew Johnson, the Pentagon's chief of responsible AI, stepped down amid concerns that safety protocols were being sidelined in the rush to expand military AI capabilities. Before leaving, Johnson's team circulated memos questioning whether Grok met government ethics standards, concerns that were reportedly dismissed in favor of rapid deployment.

Geopolitical and industry tensions

The Pentagon's embrace of Grok underscores a broader ideological clash in AI development. While OpenAI and Google maintain strict content moderation policies, Grok has been marketed as a free-speech alternative, trained on unfiltered datasets from Musk's X (formerly Twitter). Musk has criticized those rivals' moderation policies as "woke" censorship, BrightU.AI's Enoch notes.

This approach has won favor among officials who view Anthropic's restrictions as politically motivated.

However, the National Security Agency (NSA) conducted a classified review in late 2024, concluding that Grok posed unique security risks not found in competitors like Anthropic's Claude. Some Pentagon divisions have continued to favor Claude for unclassified work, citing its reliability. The military's recent use of Claude in the capture of Venezuelan President Nicolás Maduro further strained relations, as Anthropic's policies prohibit AI-assisted violence—a restriction xAI has agreed to but which the Pentagon now seeks to circumvent.

The Pentagon's gamble on Grok highlights the high-stakes race for AI dominance amid escalating tensions with China and internal U.S. government divisions over ethics and security. While Musk's unfiltered approach appeals to military strategists seeking adversarial simulation tools, critics warn that Grok's vulnerabilities—from data poisoning to weak guardrails—could compromise national security.


Sources include:

WSJ.com

BrightU.ai

Brighteon.com

