The discussion was held as part of the ORF-ECSSR Symposium titled ‘Secure Frontiers, Shared Futures.’ Interventions by speakers highlighted emerging threats from digital integration, validation of Artificial Intelligence (AI) models, and context-specific standards.
Discussing the emerging risk scenario, the two speakers noted that accelerated digital integration amplifies cybersecurity risks such as operational failures at scale, skill shortages, and loss of trust in critical infrastructure sectors such as healthcare and ports. Malfunctions in AI systems could erode public confidence, while the legacy systems that operate critical infrastructure pose obstacles to AI adoption and increase the risk of breaches by malicious actors. Moreover, non-digitised legacy data, common in the Global South, leaves a gap in vital historical inputs for AI training, heightening misinformation and accountability issues.
Understanding cultural and sectoral contexts is also necessary to assess the impact of AI systems. Echoing this, a NIST pilot evaluation report from November 2025 noted that current evaluation approaches often fail to adequately account for the real-world impacts of AI systems. This holds particularly true for health and genomic data, where AI models often prioritise health challenges faced by the Global North (such as cancer) while downgrading concerns of the Global South (such as malaria).
Another theme that emerged during the discussion was the validation challenge for AI models. The speakers noted that there are no universal ‘gold’ standards for testing and trusting AI models, especially when foreign models are deployed in the Global South. Methods such as red teaming and code certification fall short, particularly when vulnerabilities remain undetected, and trust often hinges on vendor origin (e.g., US vs. China) or certifications. Measures such as synthetic data, local data centres, and standards from the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO) were proposed to mitigate risks while avoiding exposure of real personal data.
Panellists also underlined their scepticism about a single global AI cybersecurity standard, favouring market-driven, use-case-specific approaches over rigid overarching rules. They emphasised that standards, supported by third-party audits and goodwill guidelines, should reduce transaction costs, boost investment, and provide market advantages without stifling innovation. They also cautioned that the compliance costs standards impose on businesses often affect productivity.
The discussions also highlighted India’s initiatives and priorities for securing AI models, framed within data sovereignty and targeted development. These included a push for local data centres, the development of AI models to understand agricultural and health practices in rural areas, and financial inclusion and payment security through UPI.
In the context of India-United Arab Emirates bilateral cooperation, the speakers highlighted the importance of shared best practices, incident reporting, AI resilience, data ownership clarity, and expert/civil society exchanges. Joint efforts on data centres, culturally representative training, and a shared AI language (avoiding false flags) could also strengthen bilateral ties. Moreover, talent mobility can address skill gaps and help unlock trade benefits. Through these steps, both countries can build pragmatic, collaborative resilience to address emerging challenges from deploying AI models.
This event report has been written by Sameer Patil.
The views expressed above belong to the author(s).