Top Army official using ChatGPT to make military decisions: Report

(NewsNation) — At least one top U.S. military official has turned to artificial intelligence bots like ChatGPT for decision-making, Business Insider first reported.

Maj. Gen. William "Hank" Taylor, commanding general of the 8th Army, said he's consulted AI when making leadership decisions that impact thousands of soldiers.

"As a commander, I want to make better decisions," Taylor told the outlet. "I want to make sure that I make decisions at the right time to give me the advantage.”

Taylor also told DefenseScoop he's asked the chatbot to build models to "help all of us," especially for predicting next steps based on weekly reports.

Expert warns military decisions need human perspective

Some military leaders see AI as a way to make decisions more quickly within the "OODA Loop" — observe, orient, decide, act — in which speed is everything.

"Being able to, you know, observe and orient and decide and act — and doing so, but faster than the enemy — probably of paramount importance," said Tessa AI CEO Mo Nasir.

But some AI experts warned that, no matter how quickly the tech can provide an answer, nothing can replace human judgment in life-saving situations.

"AI will empower, but it will never replace human judgment," Hutchins Data Strategy CEO Chris Hutchins told NewsNation. "Trust and culture, those things are always going to be a factor, particularly when you're talking about chain-of-command."

Nasir also warned that if a general or commander relies on off-the-shelf AI models, enemies have access to the same models.

"Maybe there's some context that's missing from one party to the other, but using the same model would probably be my biggest concern with that," he said.

The U.S. military has long used AI in its day-to-day operations, from drones and fighter jets to logistics and cyber defense.

The technology analyzes satellite feeds and intel reports, even predicting when equipment will need maintenance. And behind the scenes, it has helped train troops through simulations and detect cyber threats in real time.

AI isn't always accurate, poses security risks

GPT-5, the latest model behind ChatGPT, still "hallucinates," presenting incorrect or nonsensical information as if it were fact. The technology is also known to seek engagement and validate answers even when they're not accurate, according to a Chatbase analysis.

Ed Watal, CEO of Intellibus and co-founder of World Digital Governance, told NewsNation the real risk isn't the AI itself but what's shared with it and where that data goes.

"For these models to be effective and give you a meaningful response, they need a lot of context," Watal said.

He warned against "more involved" questions that might require top military officials to share potentially confidential content with the chatbots.

"What are the guardrails?" he asked.

Watal's warning follows a similar call from the Pentagon, which said in a memo earlier this year that relying on public models could expose sensitive information and pose serious risks in high-stakes decisions.

The United Nations debated AI's role in international peace and security last month, with representatives deeming the technology a double-edged sword in military operations.

"AI can strengthen prevention and protection, anticipating food insecurity and displacement, supporting de-mining, helping identify potential outbreaks of violence, and so much more. But without guardrails, it can also be weaponized," U.N. Secretary-General António Guterres said.

NewsNation's Anna Kutz contributed to this report.
