The Governance Framework for AI



If you think artificial intelligence (AI) belongs only in futuristic movies and science fiction novels, think again. AI already helps you complete a myriad of everyday tasks, such as filing your income tax on the Inland Revenue Authority of Singapore (IRAS) portal. When you use the IRAS website or other government websites, Jamie, a virtual assistant with the face of a real human woman, pops up to assist you with your queries. Jamie is a chatbot that uses natural language processing to first comprehend what users are asking, and then provide an appropriate response.

Search engines such as Google, and the keyboards on mobile phones, also use AI to suggest completions of search terms as you type, while Amazon's recommendation engine uses AI to suggest products to browsing users based on their buying patterns. According to Forbes, this engine, which draws on customers' previous purchases to build a personalised list of recommended products, contributes 35% of Amazon's revenue.

This brings us to a stark realisation: tech-savvy companies have amassed huge amounts of personal data, from customers' buying preferences and patterns to their age and hobbies, to gain a good understanding of their current client base, and the value of such information will correspondingly fuel the use of AI. With technology, these companies can derive deeper insights from the data collected and formulate strategies to retain their clientele. However, the use of AI is largely unregulated, and this could become a cause for concern in time to come.

The Singapore government recognised businesses' growing use of AI and personal data and, in June 2018, launched three initiatives: one, the formation of the Advisory Council on the Ethical Use of AI and Data; two, the setting up of the Research Programme on Governance of AI and Data Use at the Singapore Management University School of Law; and three, the release of a Discussion Paper on AI and Personal Data to "facilitate constructive and systemic discussions on ethical, governance and consumer protection issues relating to the commercial deployment of AI". Published by the Personal Data Protection Commission (PDPC), the Discussion Paper was further developed into a Model AI Governance Framework (Model Framework) for voluntary adoption by industry.

The Model Framework was released as a living document for broader consultation in January 2019 at the World Economic Forum (WEF) Annual Meeting in Davos. This framework, Asia's first, aims to guide organisations deploying AI at scale on how to do so in an ethical and responsible manner. In this article, we will look at the Model Framework's guiding principles and four focal areas.

The Model Framework has two guiding principles: one, that decisions made by AI should be explainable, transparent and fair, and two, that companies’ AI systems, robots and decisions should be human-centric. Further, the Model Framework recommends four areas that companies implementing AI should consider:

1) Internal governance structures and measures;
2) Determining an AI decision-making model;
3) Operations management;
4) Customer relationship management.


The Model Framework recommends that organisations put in place appropriate internal governance structures and measures to oversee their use of AI. Where possible, an organisation can use or adapt its existing internal governance structure, or implement new structures. The risks of AI can also be managed under the existing enterprise risk management structure. The framework suggests that ethical considerations be introduced to the organisation as corporate values, managed through ethics review boards or similar structures, and notes that the support of the organisation's top management and its board is crucial to the organisation's AI governance.

Two aspects are emphasised under internal governance. The first aspect deals with the need to allocate clear roles and responsibilities to the appropriate personnel and departments. These personnel and departments should be adequately trained and given the necessary resources to perform their roles competently. Their duties include using the existing risk management framework or applying risk control measures in assessing and managing the risks of deploying AI, selecting the appropriate AI decision-making model, and managing the process of AI model selection and training.

For the second aspect, a risk management system should be implemented. This system will manage datasets, assess and manage the risks of inaccuracy and bias, and review exceptions that come up during model training.


Before organisations deploy AI solutions, they should ask themselves what commercial objectives they hope to achieve with these solutions. Is it to ensure consistency in decision making, reduce costs, or improve operational efficiency? These objectives, when determined, can then be weighed against the risks of using AI.

Subsequently, organisations should conduct a risk impact assessment. This process enables organisations to identify, review and mitigate the relevant risks, develop clarity and confidence in using their AI solutions, and be better prepared to respond to challenges from individuals, other businesses and regulators.

After conducting the impact assessment, organisations can then determine the appropriate level of human involvement in AI decision making. Broadly, there are three models: (1) Human-in-the-loop; (2) Human-out-of-the-loop; and (3) Human-over-the-loop.

As its name suggests, "Human-in-the-loop" requires human oversight in decision making: the human is actively involved and retains full control, while the AI only provides recommendations or input. Decisions cannot be executed without an affirmative action by the human. An example is a doctor who uses AI to suggest possible diagnoses and treatments; the AI provides information to guide the decision, but the doctor decides on the final diagnosis and treatment.

In the second model, “Human-out-of-the-loop”, humans do not have oversight over the execution of decisions; the AI has full control. For example, an AI-enabled product recommendation solution may automatically suggest products and services to individuals based on predetermined demographic and behavioural profiles, without having humans involved in making those recommendations.

In the last model, "Human-over-the-loop", humans can modify the parameters while the algorithm is running. An example is a Global Positioning System (GPS) navigation application, which plans routes from Point A to Point B and offers several for selection. The human can adjust the parameters of the algorithm when there are unforeseen road conditions, without having to re-plan the entire route.


The third area, operations management, covers the stages of AI deployment, and the considerations and measures involved in managing data and using it to train machine learning models.

Generally, in an AI adoption process, the raw data is first formatted and cleansed before algorithms, such as statistical models, decision trees and neural networks, are applied to it. The algorithms are applied iteratively, and candidate models are refined and compared until the best-performing model emerges. This model is then used to produce probability scores that applications can factor into their decision-making and problem-solving processes. The AI adoption process is not always unidirectional; it is a continuous process of learning.
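The cycle described above can be sketched in a few lines of Python. The dataset, the two candidate "models" and the selection rule below are purely hypothetical illustrations, not anything prescribed by the Model Framework:

```python
# Toy sketch of the AI adoption cycle: cleanse the raw data, try
# candidate models, keep the best performer, then use it to score
# new cases. All names and figures here are hypothetical.

# 1. Raw data: (monthly_spend, repeat_buyer) rows; one row is malformed.
raw = [("120", 1), ("80", 0), ("bad", 1), ("200", 1), ("30", 0), ("150", 1)]

# 2. Format and cleanse: drop rows whose spend is not numeric.
data = [(float(s), y) for s, y in raw if s.replace(".", "").isdigit()]

# 3. Candidate models: each maps spend -> predicted label.
def threshold_model(spend):   # a simple decision rule
    return 1 if spend >= 100 else 0

def majority_model(spend):    # always predicts the most common class
    return 1

def accuracy(model, rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

# 4. Iterate over the candidates and keep the best-performing one.
candidates = {"threshold": threshold_model, "majority": majority_model}
best_name = max(candidates, key=lambda n: accuracy(candidates[n], data))
best = candidates[best_name]

# 5. Use the chosen model to score a new case.
print(best_name, accuracy(best, data), best(90.0))  # threshold 1.0 0
```

In practice the loop back from step 5 to step 2 is what makes the process continuous: new outcomes become new training data.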

The quality and selection of data are important. Organisations should mitigate the risk of using biased, inaccurate or non-representative data, and they need to understand where the data came from. Keeping a data provenance record, a historical record of the data, allows organisations to know the quality of their data and trace potential sources of error. Other good practices include watching for selection bias and measurement bias.
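A data provenance record can be as simple as a structure that logs the dataset's origin and every transformation applied to it. The sketch below is a hypothetical minimal implementation; the dataset and source names are invented for illustration:

```python
from datetime import datetime, timezone

# Hypothetical sketch of a data provenance record: each dataset keeps
# its origin plus an ordered history of transformation events, so data
# quality issues can be traced back to their source.
class ProvenanceRecord:
    def __init__(self, dataset_name, source):
        self.dataset_name = dataset_name
        self.source = source      # where the data originally came from
        self.history = []         # ordered list of transformation events

    def log(self, action, detail=""):
        self.history.append({
            "action": action,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })

# Usage: trace a dataset from ingestion through cleansing.
rec = ProvenanceRecord("customer_purchases", source="crm_export.csv")
rec.log("ingested", "rows=10000")
rec.log("cleansed", "dropped 42 rows with missing age")
print(len(rec.history), rec.history[0]["action"])  # 2 ingested
```

The timestamped history is what lets an organisation answer, after the fact, which transformation introduced an error.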

Organisations should also consider measures to enhance the transparency of their algorithms. One such measure is explainability: the ability to explain how an algorithm functions and how it arrives at a particular prediction. Explanations cannot always be given, especially where they touch on proprietary information, intellectual property rights protection, anti-money laundering detection, information security or fraud protection. For example, if a company explained how its proprietary algorithm works, a business competitor might be able to copy it; similarly, if fraudsters understood how an algorithm works, they could work around it to commit fraud.
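For simple models, explainability can mean decomposing a prediction into per-feature contributions. The weights, feature names and applicant values below are hypothetical, chosen only to illustrate the idea for a linear scoring model:

```python
# Hypothetical sketch of explainability for a linear scoring model:
# the score is decomposed into per-feature contributions, so the
# organisation can state which inputs pushed the outcome up or down.
weights = {"income": 0.4, "tenure_years": 0.3, "late_payments": -0.6}

def score_with_explanation(applicant):
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"income": 5.0, "tenure_years": 2.0, "late_payments": 1.0}
)
# Each entry in `parts` explains one feature's effect on the score.
print(round(total, 2), parts["late_payments"])  # 2.0 -0.6
```

More complex models (such as neural networks) need dedicated explanation techniques, which is precisely why explainability is harder to achieve there.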

Other measures include repeatability and traceability. Repeatability refers to the ability of an algorithm to perform an action or make a decision consistently when given the same scenario. Traceability refers to documenting the algorithm's decision-making processes, to enable easy understanding by the user.
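Both properties can be illustrated together: a deterministic decision rule gives repeatability, and an audit trail of every decision gives traceability. The refund rule and its thresholds below are invented for illustration:

```python
# Hypothetical sketch of repeatability and traceability: the decision
# function is deterministic (same scenario, same outcome), and every
# decision is appended to an audit trail for later review.
audit_trail = []

def approve_refund(amount, is_repeat_customer):
    decision = amount <= 100 or is_repeat_customer  # fixed, deterministic rule
    audit_trail.append({
        "inputs": {"amount": amount, "repeat": is_repeat_customer},
        "decision": decision,
        "rule": "amount<=100 or repeat customer",
    })
    return decision

# Repeatability: the same scenario always yields the same decision.
first = approve_refund(250, False)
second = approve_refund(250, False)
# Traceability: both decisions are documented with their inputs.
print(first == second, len(audit_trail))  # True 2
```

Logging the rule alongside the inputs is what allows a reviewer, or a regulator, to reconstruct why a given decision was made.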


To build trust between an organisation using AI and individuals, the organisation should communicate appropriately about its use of AI. It should develop a policy explaining that use: how AI works in a decision-making process, how a specific decision was reached, and so on, written in easy-to-understand language. Other options to improve the consumer experience include giving individuals the option to opt out, a feedback channel, and a decision review channel.

In conclusion, the Model Framework provides useful guidance to companies on the governance aspects of AI deployment. Effective governance of AI lets companies show that they are well prepared to use it responsibly, which helps build trust and confidence with their clients, an important element in customer retention.

This article was written by Insights and Publications, ISCA and originally published in the ISCA Journal.