NATIONAL GUIDELINES ON AI GOVERNANCE AND ETHICS: EVERYTHING YOU NEED TO KNOW


As the digital landscape evolves, artificial intelligence (AI) has emerged as a transformative force across industries, influencing everything from healthcare to finance and even entertainment. While AI holds immense potential for innovation and efficiency, it also raises significant ethical concerns. Issues such as algorithmic bias, data privacy, and accountability have prompted governments worldwide to create frameworks that guide the responsible use of AI technologies. Malaysia’s current regulatory landscape concerning AI draws inspiration from existing legal statutes such as the Personal Data Protection Act 2010 (“PDPA”) and the Consumer Protection Act 1999 (“CPA”), among others. However, these existing regulations may not adequately address the nuanced challenges posed by the complexities of AI and big data technology.[1]

The Need for AI Governance

As AI systems increasingly influence our communication methods, work dynamics and decision-making processes, ensuring the ethical use of AI is paramount.[2] The risks associated with unregulated AI include:

  • Bias and Discrimination: AI systems can inadvertently perpetuate existing biases present in training data, leading to unfair treatment of certain groups.
  • Lack of Transparency: Many AI algorithms operate as “black boxes,” making it difficult to understand how decisions are made.
  • Privacy Concerns: The collection and processing of personal data by AI systems raise questions about user consent and data security.
  • Accountability Issues: In cases where AI systems cause harm, it can be challenging to identify who is responsible.

To address these gaps and risks, the Ministry of Science, Technology and Innovation (“MOSTI”) issued Malaysia’s National Guidelines on AI Governance and Ethics (“National Guidelines on AIGE”) on 20.09.2024. This article aims to provide a comprehensive overview of the National Guidelines on AIGE, exploring their principles, implications, challenges, and the broader context of AI governance.

Objectives of the National Guidelines on AIGE

  1. To support the implementation of the Malaysia National AI Roadmap 2021 – 2025 (“AI-RMAP”).
  2. To facilitate the implementation of Responsible AI according to the 7 AI Principles.
  3. To build trustworthiness in AI, a quality central to Responsible AI.
  4. To manage risks caused by the development and deployment of AI technology.
  5. To maximize the benefits of AI in order to enhance national productivity, economic growth and competitiveness.

Overview of the National Guidelines on AIGE – The Seven (7) AI Principles

The National Guidelines on AIGE provide a comprehensive framework comprising several core principles aimed at guiding the responsible development and deployment of AI technologies in Malaysia. Below are the key components[3]:

1. Transparency

Transparency is crucial for building trust in AI systems. The guidelines emphasize the importance of making AI algorithms and their decision-making processes clear and understandable to users and stakeholders. This involves providing documentation that outlines how AI systems function, including the data used and the rationale behind decisions. By ensuring that AI systems are explainable, organizations can mitigate fears of arbitrary decision-making and empower users with the knowledge needed to challenge or seek recourse against AI-driven decisions.

2. Fairness

One of the most pressing ethical concerns in AI is the potential for bias. The guidelines advocate for fairness in AI systems, emphasizing the necessity of evaluating algorithms for discriminatory outcomes. Organizations are encouraged to adopt practices that promote diversity and inclusivity in data collection, ensuring that the datasets used to train AI models represent a broad spectrum of society. By actively seeking to eliminate bias, organizations can contribute to a more equitable society.

3. Accountability and Responsibility

Accountability is a cornerstone of ethical AI governance. The guidelines underscore the necessity for organizations to assume responsibility for the outcomes produced by their AI systems. This includes establishing clear lines of accountability, such as appointing dedicated teams or individuals to oversee AI governance. By creating a culture of responsibility, organizations can ensure that ethical considerations are integrated into their AI development processes.

4. Privacy and Security

Given that data is the lifeblood of AI systems, the guidelines highlight the importance of protecting personal information. Organizations must adhere to data protection regulations, implementing robust security measures to safeguard data against breaches. Ethical data handling practices—ranging from collection to storage and processing—are essential to building user trust and ensuring compliance with legal frameworks.

5. The Pursuit of Human Benefit and Happiness

A human-centric approach to AI emphasizes the importance of designing technologies that prioritize human well-being. The guidelines advocate for involving diverse stakeholders in the AI design process, ensuring that systems are developed with an understanding of user needs and societal impacts. By focusing on the human experience, organizations can create AI applications that genuinely address real-world challenges.

6. Inclusiveness

AI should be inclusive to ensure equitable access for all stakeholders and avoid social disparities. To achieve this, AI systems must benefit everyone and align with national principles and the Federal Constitution. This can be accomplished through three key strategies: employing inclusive development methods, creating tools that address the specific needs of vulnerable groups, and promoting diversity among AI developers and decision-makers.

7. Reliability, Safety and Control

AI systems must prioritize safety, security, and proper functionality, preventing unauthorized access. Developers should obtain consent from individuals before using or sharing personal data, ensuring privacy and data protection throughout the AI system’s lifecycle. Collected information must not be used unlawfully or discriminatorily. Additionally, developers should implement security-by-design and privacy-by-design principles and adhere to international standards for information security and privacy in their AI systems.

Implications of the Guidelines

The National Guidelines on AIGE, following the 7 AI Principles, are intended for three main stakeholder groups:

End Users

The guide for end users is key to reducing risks, protecting rights, and building trust in AI by emphasizing best practices when using AI technology. By following the guidelines, users are informed of their rights and responsibilities while maximizing the benefits of AI.

Policy Makers of Government, Agencies, Organizations and Institutions

AI is being integrated into government processes to boost efficiency and transparency, but responsible practices are essential. The National Guidelines on AIGE aim to balance innovation with accountability and fairness while minimizing risks across different sectors.

Developers, Designers, Technology Providers and Suppliers

The guidelines recommend that organizations developing and deploying AI technologies implement several measures to promote responsible AI use. The National Guidelines on AIGE help organizations eliminate bias, protect data privacy, and consider social and environmental impacts in algorithm development. They promote collaboration through open datasets to enhance transparency and accountability. Establishing a performance index[4] to evaluate ethical AI use is essential for measuring adherence to the 7 AI Principles and fostering continuous improvement in responsible practices.

Sectoral National AIGE Guidelines and Code of Ethics Development

To advance national guidelines for responsible AI, the initiative encourages industry players to develop tailored, sector-specific guidelines, including ethical codes and capacity-building efforts. These guidelines address unique challenges, ethical considerations, and regulatory needs within each sector, fostering trust and accountability. The initiative also emphasizes the importance of workforce development through upskilling and reskilling in AI-related areas. This includes enhancing skills in low-code/no-code tools, digital trust, and computational thinking, which collectively promote responsible AI adoption and innovation for societal benefit.

Integration and Synergy of Responsible AI and Environmental, Social, and Governance (“ESG”)

Advancing the 7 AI Principles requires integrating AI governance frameworks within organizations and across industries. Collaboration and coherence within internal structures and decision-making processes are essential. To effectively address the ethical, legal, and societal implications of AI, stakeholders must engage in cross-sector dialogue, sharing insights and best practices. This collective action ensures that AI governance is a shared responsibility, enabling organizations to align their policies with external regulations and industry standards. Fostering a collaborative culture allows stakeholders to monitor and adapt governance mechanisms in response to evolving challenges, ultimately supporting the effective adoption of AI while upholding the 7 AI Principles.

International Harmonization for Standard Regulation, Certification and Accreditation of AI Product and Service

The initiative for responsible AI focuses on international harmonization of regulatory frameworks to address the global nature of AI technologies. It encourages collaboration among governments, international organizations, and stakeholders to develop consistent standards for AI deployment. This alignment aims to resolve regulatory inconsistencies and enhance Malaysia’s governance of AI, ensuring it is effective and equitable. By promoting regional and international dialogue, stakeholders can share best practices and tackle emerging regulatory challenges together. Overall, the initiative seeks to enhance interoperability and innovation while upholding the 7 AI Principles and building trust in AI technologies.

Conclusion

The National Guidelines on AIGE mark a pivotal step toward establishing a responsible AI ecosystem in Malaysia while serving as a valuable reference for other nations facing similar challenges. By prioritizing the seven AI Principles, Malaysia can foster innovation while safeguarding stakeholders’ rights and dignity.

Successful implementation of these guidelines requires continuous collaboration among all stakeholders, ensuring that the framework remains dynamic and relevant. Regular reviews and updates, informed by stakeholder engagement and feedback mechanisms, will be essential to keep pace with the rapidly evolving landscape of AI technology. Ultimately, these guidelines offer hope for a future where technology enhances human well-being while upholding ethical standards.

For personalised legal support, please reach out to our Financial Services and Mergers and Acquisitions practice group led by our Managing Partner, Mr. Nazmi Zaini (nazmi@nzchambers.com) and Financial Services Senior Associate, Mr. Aryn Rozali (aryn@nzchambers.com). We are here to help you navigate this regulatory compliance process.

Authors:

  1. Nazmi Zaini
  2. Aryn Rozali


References:

[1] Zulhuda, S., & Karim, R. (2024). Cyber Law in Malaysia: Principles and Practices. Sweet & Maxwell (p. 536).

[2] Ibid.

[3] The National Guidelines on AI Governance & Ethics, para 2.7.3.

[4] The National Guidelines on AI Governance & Ethics, Appendix 9.