Opinion
Jane Lee

Why Hong Kong must embrace causal AI, the new reasoning intelligence

  • The analytical capability and transparency inherent in causal AI could give Hong Kong a solid advantage in AI deployment and governance
The discourse surrounding artificial intelligence (AI) has become increasingly complex as this rapidly advancing technology continues to transform industries, influence societal norms and raise profound ethical questions. As AI algorithms are applied across multiple domains, there are growing calls to ensure these systems are transparent, accountable and free of bias.

Questions have also been raised over the blurring of boundaries between humans and machines, as well as the ethical implications of machines with unbridled decision-making capabilities.

As Hong Kong explores AI governance, the inherent limitations and risks of current AI technologies underscore the growing importance of the emerging field of causal AI, also known as causal reasoning or modelling AI.
Unlike traditional machine learning approaches that focus on identifying statistical patterns, causal AI seeks to understand the causal relationships between variables in a system, rather than mere associations or correlations, which can be spurious or confounded.
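
To make the distinction concrete, here is a minimal, hypothetical sketch (not from the article) using simulated data: a purely pattern-matching model regressing an outcome on a treatment picks up a correlation inflated by a hidden confounder, while adjusting for that confounder recovers the true causal effect. The variable names and effect sizes are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural causal model (assumed for illustration):
# Z (confounder) -> X (treatment) and Z -> Y (outcome);
# the true causal effect of X on Y is 1.0.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 1.0 * x + 3.0 * z + rng.normal(size=n)

# Naive correlational estimate: regress Y on X alone, as a purely
# pattern-based model would. The confounder inflates the slope.
naive_slope = np.polyfit(x, y, 1)[0]

# Causal estimate: adjust for the confounder by regressing Y on X and Z.
design = np.column_stack([x, z, np.ones(n)])
causal_slope = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"naive (correlational) slope: {naive_slope:.2f}")  # roughly 2.2, biased
print(f"adjusted (causal) slope:     {causal_slope:.2f}")  # roughly 1.0, true effect
```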

The field of causal AI is poised to play an increasingly important role in developing more trustworthy and valuable systems, and has shown impressive capabilities across domains. The methodology holds significant promise for high-stakes applications, such as in healthcare, finance and public policy, where “explainability” and accountability are paramount.

Moreover, causal AI represents a paradigm shift in the human-AI dynamic and can help address the significant concern that AI systems will increasingly operate with limited or no human interaction.

Importantly, causal AI can augment the decision-making of other AI systems while preserving human agency, with the AI acting as a tool or executor and humans as curators and decision-makers. This scientific methodology can significantly support regulatory and oversight mechanisms to ensure AI systems explain their decision-making processes and outcomes coherently and transparently.

While regulators worldwide grapple with the intricacies of governing these emerging technologies, the European Union recently established a legal and regulatory framework with provisions coming into force over the next six months to three years.

Implementing the EU Artificial Intelligence Act sets the stage for increased international cooperation and harmonisation of governance standards around AI. Like climate change, AI requires coordinated efforts and collaboration among stakeholders across different jurisdictions to ensure responsible development and implementation.
Like many other jurisdictions, Hong Kong lacks a comprehensive statutory or regulatory framework to govern AI. There are guidelines on the ethical use of AI and the Monetary Authority has issued “high-level principles”, recognising the need to provide guardrails and guidance. But much more robust regulatory oversight is required.
Hong Kong’s Office of the Privacy Commissioner for Personal Data released its report, “Artificial Intelligence: Model Personal Data Protection Framework”, on June 11.
Hong Kong’s common law system, role as an international arbitration centre and depth of professional services, combined with the analytical capability and transparency inherent in causal AI, could give the city a solid competitive advantage in establishing a robust, evidence-based framework for the responsible deployment of AI and its ethical governance.
A dedicated agency and regulatory structure are needed to set AI development and deployment standards. This would allow Hong Kong to attract top talent, spur innovation and become a global leader in ethical innovation governance and accountable AI use. Such proactive measures would benefit the city and allow it to play a more significant role internationally as other jurisdictions seek models for AI policymaking.
Within this context, cybersecurity remains a key consideration. Traditional machine learning models, often reliant on statistical correlations in data, can be vulnerable to sophisticated cyberattacks that exploit these patterns. In contrast, causal AI’s focus on uncovering causal mechanisms offers a more robust defence against malicious actors seeking to manipulate or subvert AI systems.

By modelling the causal drivers of complex phenomena, causal AI can better detect anomalies, identify cyber threat sources and respond with greater agility. This capability is essential as AI increasingly integrates into critical infrastructure, financial systems and other high-stakes domains.

We need look no further than the recent global outage affecting millions of Microsoft users, caused by a botched software update, to understand the scale of disruption that can occur when IT systems fail worldwide. Some sources estimate losses of over US$5 billion for Fortune 500 companies alone.

Worryingly, the annual cost of cybercrime is estimated to reach US$10.5 trillion by 2025, and consumers will ultimately bear the burden.

The discourse surrounding advanced technologies reflects their profound transformative potential and the urgent need to thoughtfully navigate their complex ethical, social and economic challenges.

By seizing the opportunity to establish a comprehensive, evidence-based approach to AI governance and regulation, Hong Kong can become a significant global player in the ethical and responsible use of these powerful technologies.

Dr Jane Lee is president of Our Hong Kong Foundation
