Diego Argüello, Contributing Editor, News, GameDeveloper.com
November 6, 2025
3 Min Read
One of Square Enix’s goals for the next few years is to use generative AI to “automate 70 percent of QA and debugging tasks in game development,” which the company aims to do by the end of 2027.
The goal, spotted by VGC, was detailed in the company’s progress report on its medium-term business plan, which was released alongside the financial results for the six-month period ended September 30, 2025. Classified as “promoting AI utilization in Japan,” the goal is part of the company’s plan to “roll out initiatives to create additional foundational stability.”
As explained in the report, the company has initiated joint research with the Matsuo Laboratory at the University of Tokyo, and it held a company-wide business idea contest under the theme of AI, with "several selected ideas developed into projects and currently being promoted internally," though the nature of these projects is currently unknown.
The aim of the joint research is to improve the “efficiency of game development processes through AI technologies,” the company said. Leaning on gen AI specifically, Square Enix wants to “improve the efficiency of QA operations and establish a competitive advantage in game development.”
In terms of structure, the joint research team includes more than ten members: researchers from the Matsuo-Iwasawa Laboratory at the University of Tokyo and engineers from the Square Enix Group, who together are advancing the "collaborative development."
Days ago, multiple Japanese studios, including Square, told OpenAI to stop pilfering their work
This week, the Content Overseas Distribution Association (CODA) submitted a written request to OpenAI to demand the U.S. company stop using its members’ content to train models such as Sora 2.
The Japanese anti-piracy organization represents numerous media and video game companies, including Square Enix, Cygames, Bandai Namco, and FromSoftware owner Kadokawa Corporation.
CODA claimed OpenAI might be committing copyright infringement after it confirmed that a large number of outputs produced by the video generation model Sora 2 "closely resembles Japanese content or images."
“CODA has determined that this is the result of using Japanese content as machine learning data. In cases, as with Sora 2, where specific copyrighted works are reproduced or similarly generated as outputs, CODA considers that the act of replication during the machine learning process may constitute copyright infringement,” reads a statement.
At the start of 2024, Square Enix president and representative director Takashi Kiryu said the company would be “aggressive” when applying AI and other technologies throughout the year to “create new forms of content for consumers.” In 2025, game developers aren’t warming up to gen AI, and some are more worried than ever that the technology will lower the quality of games.
Last month, Final Fantasy series composer Nobuo Uematsu talked to JASRAC Magazine about his opinion on gen AI in music, stating that he has never used the technology and probably never will. (The interview was translated by Automaton.)
“I think it still feels more rewarding to go through the hardships of creating something myself. When you listen to music, the fun is also in discovering the background of the person who created it, right? AI does not have that kind of background though,” Uematsu told JASRAC. “Even when it comes to live performances, music produced by people is unstable, and everyone does it in their own unique way. And what makes it sound so satisfying are precisely those fluctuations and imperfections.”
About the Author
Contributing Editor, News, GameDeveloper.com
Diego Nicolás Argüello is a freelance journalist and critic from Argentina. Video games helped him to learn English, so now he covers them for places like The New York Times, NPR, Rolling Stone, and more. He also runs Into the Spine, a site dedicated to fostering and supporting new writers, and co-hosted Turnabout Breakdown, a podcast about the Ace Attorney series. He’s most likely playing a rhythm game as you read this.