The only winning move
“AI is one thing that, as a commander, it’s been very, very interesting for me.”
Team of army experts in a data center analyzing missile flight paths with deep learning tools. Credit: Getty Images
Last month, OpenAI published a usage study showing that nearly 15 percent of work-related conversations on ChatGPT dealt with “making decisions and solving problems.” Now comes word that at least one high-level member of the US military is using LLMs for the same purpose.
At the Association of the US Army Conference in Washington, DC, this week, Maj. Gen. William “Hank” Taylor reportedly said that “Chat and I are really close lately,” using a distressingly familiar diminutive nickname to refer to an unspecified AI chatbot. “AI is one thing that, as a commander, it’s been very, very interesting for me.”
Military-focused news site DefenseScoop reports that Taylor told a roundtable group of reporters that he and the Eighth Army he commands out of South Korea are “regularly using” AI to modernize their predictive analysis for logistical planning and operational purposes. That is helpful for paperwork tasks like “just being able to write our weekly reports and things,” Taylor said, but it also aids in informing their overall direction.
“One of the things that recently I’ve been personally working on with my soldiers is decision-making—individual decision-making,” Taylor said. “And how [we make decisions] in our own individual life, when we make decisions, it’s important. So, that’s something I’ve been asking and trying to build models to help all of us. Especially, [on] how do I make decisions, personal decisions, right — that affect not only me, but my organization and overall readiness?”
That’s still a far cry from the Terminator vision of autonomous AI weapon systems that take lethal decisions out of human hands. Still, using LLMs for military decision-making might give pause to anyone familiar with the models’ well-known propensity to confabulate fake citations and sycophantically flatter users.
In May, the Army rolled out the Army Enterprise LLM Workspace—built on the commercial Ask Sage platform—to streamline simple text-based tasks such as press releases and personnel descriptions. For other so-called “back office” military work, though, early tests have shown that generative AI might not always be the most efficient use of the military budget.
“There are many times that we find folks using this technology to answer something that we could just do in a spreadsheet with one math problem, and we’re paying a lot more money to do it,” Army CIO Leonel Garciga told DefenseScoop in August. “Is the juice worth the squeeze? Or is there another way to get at the same problem that may be less cool from a tech perspective, but more viable from an execution perspective?”
In 2023, the US State Department outlined best practices for military use of AI, focused on ethical and responsible deployment of AI tools within a human chain of command. The report stressed that humans should remain in control of “decisions concerning nuclear weapons employment” and should maintain the capability to “disengage or deactivate deployed systems that demonstrate unintended behavior.”
Since then, the military has shown interest in using AI technology in the field for everything from automated targeting systems on drones to “improving situational awareness” via an OpenAI partnership with military contractor Anduril. In January 2024, OpenAI removed a prohibition on “military and warfare uses” from ChatGPT’s usage policies, while still barring customers from “develop[ing] or us[ing] weapons” via the LLM.
Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.