Indonesia, Malaysia and the Philippines are setting the pace on global AI governance norms, says an academic.
xAI and Grok logos are seen in this illustration taken on February 16, 2025. REUTERS/Dado Ruvic/Illustration/File photo
21 Jan 2026 05:59AM
SINGAPORE: While much attention has focused on the row between the United States and the United Kingdom over banning Grok (Elon Musk’s artificial intelligence-powered chatbot on X), Indonesia and Malaysia had already imposed bans on the platform days earlier.
These interventions appear to be establishing a regional pattern, with the Philippines becoming the third country to announce a ban on Grok. This marks an important regulatory pivot: Southeast Asian states are moving from late adopters to early movers on a highly contested frontier of AI safety, online harms and platform governance.
Indonesia’s decision on Jan 10 to temporarily block access to Grok marked the first instance of a state intervening directly against the platform. The move was triggered by concerns over the tool’s “digital undressing” capability, which facilitates the creation of non-consensual, sexualised nude or near-nude deepfake images, including of children.
Malaysia followed within a day, imposing a similar temporary restriction after documenting repeated misuse of the system to generate obscene and manipulated content, notwithstanding prior regulatory warnings and safeguard mechanisms that depended largely on post-hoc user reporting.
The Philippines, announcing an official ban on Jan 15, has characterised Grok’s “undressing” capability as a cybercrime, placing it within the category of online sexual abuse and exploitation of children. In all three cases, authorities framed the restrictions as conditional and corrective, signalling that access would be restored only once xAI and X demonstrated compliance with domestic legal obligations and implemented more robust, ex ante safety measures.
Crucially, these interventions were grounded not in moral regulation but in established policy rationales around digital safety, rights protection and platform accountability, as emphasised by Indonesia’s Communication and Digital Affairs Ministry, the Malaysian Communications and Multimedia Commission and the Philippine Department of Information and Communications Technology.
POLITICAL AND REGULATORY DYNAMICS
Given that Indonesia and Malaysia draw on largely Islamic moral frameworks, and the Philippines is predominantly Catholic, a knee-jerk interpretation might attribute the bans to religious conservatism. However, this framing risks overlooking the political and regulatory dynamics at work, particularly since other conservative or religious societies have not taken similarly aggressive action on Grok despite facing comparable online harms.
What appears to differentiate Indonesia, Malaysia and the Philippines is a convergence of political incentives, regulatory experience with platform controls, and global reputational considerations.
These governments have prior experience in blocking or restricting platform access over concerns such as pornography, gambling and online sales of illegal items, and those experiences have given them legal and operational tools to move quickly when confronted with a new category of harm.
In this sense, these measures reflect a pragmatic application of existing statutes to an emerging technology, rather than being driven solely by cultural or religious sensibilities.
It is also significant that Indonesia, Malaysia and the Philippines have acted before many Western and more technologically advanced jurisdictions. This development comes at an interesting time: in December 2025, the United States moved to roll back what it described as “cumbersome” AI regulation, signalling an even more hands-off stance on AI oversight.
The United Kingdom, meanwhile, has warned that Grok will no longer be permitted to self-regulate, and urgent investigations into a possible ban are ongoing.
Yet even as the United States and European states continue to deliberate on governance responses, Indonesia, Malaysia and the Philippines have already exercised enforcement-oriented measures to address specific harms. In doing so, the three states have shifted from being seen as late adopters to emerging early movers in AI oversight.
OPPORTUNITIES AND PITFALLS FOR SOUTHEAST ASIA
For Southeast Asia, this moment reveals both potential and pitfalls.
On the one hand, it highlights a niche leadership space for the region: crafting practical, context-specific norms around AI harms tied to gender, children and disinformation, rather than waiting for broad frameworks patterned on Western models. This is a realistic goal, especially since the region already has its own declaration on the protection of children from online exploitation and abuse.
If the Association of Southeast Asian Nations (ASEAN) can build on this momentum, there is an opportunity to develop regional principles on AI-generated sexual and gender-based harms, deepfakes and child protection. This could signal to AI companies that compliance with safety expectations is non-negotiable.
On the other hand, the Grok bans also underscore the risks of fragmentation. If governments unilaterally block or permit high-risk AI tools without shared standards, global firms will navigate a patchwork regulatory environment, and vulnerable groups may be left exposed where protections are weakest.
Moreover, unilateral bans may push harmful content into less regulated spaces or drive users to circumvent restrictions, undermining regulatory objectives.
To translate this moment into sustained leadership, Southeast Asian countries will need to deepen regulatory capacity and work towards ASEAN-level cooperation on enforcement.
Using Indonesia, Malaysia and the Philippines’ Grok decisions as case studies, policymakers can design clear expectations for AI providers, particularly for high-risk systems, including risk assessments, safety-by-design requirements for image tools, rapid takedown obligations and meaningful engagement with regulators before market entry.
In doing so, Southeast Asia could emerge not merely as a reactive regulator of AI harms, but as a contributor to global AI governance norms reflecting regional social, legal and political contexts.
A MORE PROACTIVE APPROACH
The bans by Indonesia, Malaysia and the Philippines on Grok should be understood not as isolated or culturally specific interventions, but as early signals of a more proactive Southeast Asian approach to governing AI-enabled harms.
By acting decisively on sexual deepfakes, these states have shown that meaningful AI regulation need not wait for comprehensive frameworks, but can proceed through targeted, enforcement-oriented responses to clearly identifiable risks. Southeast Asian states have precedent for such interventions, including managing AI-generated deepfakes during election periods.
Whether this becomes a one-off intervention or the foundation for longer-term leadership will depend on what follows. Without regional coordination and sustained investment in regulatory capacity, unilateral bans risk fragmentation and uneven protection.
Conversely, if ASEAN members use this moment to develop shared expectations for AI providers and minimum safeguards against high-risk applications, Southeast Asia could help shape emerging global norms on AI safety and platform accountability. In this sense, the Grok case marks not an endpoint, but a test of whether the region can convert early action into coordinated AI governance.
Karryl Kim Sagun Trajano is a Research Fellow for Future Issues and Technology at the S Rajaratnam School of International Studies, Nanyang Technological University, Singapore. This commentary first appeared on Lowy Institute’s blog, The Interpreter.