Can Artificial Intelligence be moral?
Dr. David Hardoon, UnionBank’s transformative leader for Data and Artificial Intelligence, explains his view on the morality of AI in The Business Times.
Source: Business Times
CAN Artificial Intelligence be moral? In my opinion, no. Should this prevent us from establishing how to morally use AI? Absolutely not. In fact, the absence of AI moral capability should drive our need for explicit and clear frameworks for the moral use of AI outputs. I use the term “moral”, somewhat sensationally, to emphasise the use of AI as a tool of judgement (decision making or decision support) where outcomes need to adhere to principles of “right” and “wrong”. However, in reality, such polarity is not always practicable and the terms “ethical” and “fair” are more familiar and more commonly used.
The discourse on AI ethics, or to be more specific, fairness, is not new. The more we embed AI in our lives, and the more we understand the possibilities that AI may bring, the more we want to ensure that AI decisions, whether made in a supervised or unsupervised manner, are made fairly, ethically, morally.
Why? Because we want – no, we demand – fairness in our lives.
The relevance of this could not be more pertinent. Take the recent issues and government statements on TraceTogether (TT) as a vivid illustration. The debate should not centre on what data is collected, but on the realisation that data, once collected, may be used in ways not previously conceived. It should further emphasise the need for assurance of the fair use of data, and the avoidance of potential inherent bias in the interpretation of, or judgements made on, the data collected.
How then should companies and governments balance this perceived dichotomy between AI innovation and governance, exploring growth possibilities while preventing unknown mishaps? Counter-intuitively, by slowing down and appreciating not only the possibilities of AI but, more acutely, its limitations.
The most important limitation, in my view, lies at the core of the definition of Computational Intelligence, described in “Computational Intelligence: A Logical Approach” as being “any device that perceives its environment and takes actions that maximise its chances of successfully achieving its goals”. An AI algorithm is a statistical engine that searches for patterns within data that result in the optimal statistical outcome.
For example, whether predicting which wood samples will have the highest durability or which individuals are likely to buy a particular type of coffee, an AI algorithm, while dependent on the quality of the data provided, treats the wood and the individuals in the same manner. It does not distinguish living from non-living; it is incapable of value judgement. The algorithm will statistically identify the patterns from which inferences can be made about the wood with the highest predicted durability or the individuals most likely to purchase coffee, as the case may be.
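To make this concrete, consider a minimal sketch in Python, with entirely hypothetical features and data: the very same fitting routine is applied, unchanged, whether the rows describe wood samples or people.

```python
# A minimal illustrative sketch with hypothetical data: the same
# pattern-finding routine is applied unchanged whether the rows describe
# wood samples or coffee customers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical wood samples: [density, moisture] -> durable (1) or not (0)
wood_X = rng.normal(size=(200, 2))
wood_y = (wood_X[:, 0] - wood_X[:, 1] > 0).astype(int)

# Hypothetical customers: [age, past purchases] -> buys coffee (1) or not (0)
customer_X = rng.normal(size=(200, 2))
customer_y = (customer_X.sum(axis=1) > 0).astype(int)

def fit_and_score(X, y):
    # The algorithm only optimises a statistical objective; it has no notion
    # of whether the rows are living or non-living, so the code is identical.
    model = LogisticRegression().fit(X, y)
    return model.score(X, y)

print("wood durability accuracy:", fit_and_score(wood_X, wood_y))
print("coffee purchase accuracy:", fit_and_score(customer_X, customer_y))
```

The point is not the numbers but the symmetry: nothing in the optimisation distinguishes the two problems.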
In other words, there is no possibility of moral judgement with respect to the impact of either algorithm – it is simply not considered. Guidelines are necessary, not to change AI per se, but to provide a framework for the developers of AI and the users of AI output, ensuring that questions of morality and materiality have been taken into consideration.
The world has responded accordingly. Governing principles and guidelines have been developed by governments and commercial entities alike, including Singapore’s PDPC’s 2019 Model AI Governance Framework and MAS’s earlier 2018 Fairness, Ethics, Accountability and Transparency (FEAT) principles. FEAT was an attempt to bridge the then-existing gap in regulatory guidance on the use of AI in the financial sector – a simple, succinct set of questions that we should be asking ourselves as we set about using AI in a governed manner.
Nonetheless, these principles, while addressing the gap in overarching governance frameworks, made another challenge evident. The questions of fairness and ethics require each AI algorithm to undergo a lengthy process of subjective review. How should companies incorporate such frameworks in a systematic manner in order to operationalise AI on a large scale, where algorithms may need to adapt promptly to behavioural changes?
Principles alone are not enough. The Veritas initiative, named after the Roman goddess of truth, was created as the next stage, going beyond the FEAT principles. The “truth” here was the creation of systematic tests that would assess AI “morality” automatically. MAS recently announced the completion of the first phase of the Veritas initiative and published the FEAT Fairness Principles Assessment Methodology and Case Studies.
Recent discourse has touched on the extent to which data collection needs to be done under overt and explicit consent. Is it fair, ethical, moral to use data in ways that users were not made aware of at the point of consent? This brings up an interesting point – it is not only consent that needs to be regularly reaffirmed when terms and conditions change; there is also a need for a mechanism to validate users’ understanding.
The concept of fairness is the intersection of multiple criteria and objectives. History impacts present behaviour. Identified behaviour may differ entirely from our perception of what our behaviour has been or is. Data is likely to be eternally imperfect. Algorithms will always have errors.
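To illustrate that intersection, here is a minimal sketch with entirely hypothetical numbers, showing how two common fairness criteria can disagree about the very same set of decisions.

```python
# A minimal sketch, with hypothetical numbers, of how two common fairness
# criteria can disagree on the same set of model decisions.
import numpy as np

# Two groups of 10 hypothetical applicants each: 1 = approved, 0 = declined
group = np.array([0] * 10 + [1] * 10)                                  # group membership
truth = np.array([1,1,1,1,0,0,0,0,0,0] + [1,1,1,1,1,1,1,1,0,0])       # actually creditworthy
pred  = np.array([1,1,1,0,0,0,0,0,0,0] + [1,1,1,1,1,1,0,0,0,0])       # model decisions

def approval_rate(g):
    return pred[group == g].mean()

def true_positive_rate(g):
    mask = (group == g) & (truth == 1)
    return pred[mask].mean()

# "Demographic parity" asks for equal approval rates across groups.
print("approval rates:", approval_rate(0), approval_rate(1))        # 0.3 vs 0.6
# "Equal opportunity" asks for equal true positive rates across groups.
print("true positive rates:", true_positive_rate(0), true_positive_rate(1))  # 0.75 vs 0.75
# These decisions satisfy one criterion and violate the other, so whether
# they are "fair" depends entirely on which criterion is chosen.
```

The same decisions pass one test of fairness and fail another; fairness is not a single number to be optimised but a negotiation between criteria.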
Can AI be moral? No, although perhaps one day we will unlock the mathematics of morals and this will become a tangible possibility. In the interim, companies can achieve the moral use of AI by establishing governance frameworks, and concurrently achieve large-scale AI operationalisation, by deconstructing AI into two pillars:
BUILD
Incorporation of FEAT-like principles in the development and independent validation of new AI algorithms as part of a standard operating model.
RUN
Automatic and systematic validation of AI output against an existing metric of materiality and severity. Akin to a driverless train, when in doubt or past an acceptable tolerance threshold, “stop” and allow a human to intervene.
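As a minimal sketch of what such a “stop and intervene” control might look like in practice, with hypothetical thresholds and names rather than any specific framework:

```python
# A minimal sketch, with hypothetical thresholds and names, of the kind of
# "stop and escalate" control described under RUN: the model acts only when
# it is confident and within tolerance; otherwise a human takes over.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "auto-approve", "auto-decline" or "escalate"
    reason: str

CONFIDENCE_FLOOR = 0.80   # below this, the model is "in doubt"
DRIFT_TOLERANCE = 0.05    # max allowed shift vs. a monitored baseline rate

def guardrail(score: float, baseline_rate: float, recent_rate: float) -> Decision:
    confidence = max(score, 1 - score)
    if confidence < CONFIDENCE_FLOOR:
        return Decision("escalate", "model confidence below floor")
    if abs(recent_rate - baseline_rate) > DRIFT_TOLERANCE:
        return Decision("escalate", "approval rate drifted past tolerance")
    return Decision("auto-approve" if score >= 0.5 else "auto-decline", "within tolerance")

# Example: a confident score, but recent approvals have drifted from the
# baseline, so the system "stops" and routes the case to a human reviewer.
print(guardrail(score=0.92, baseline_rate=0.30, recent_rate=0.42))
```

The specifics will differ by organisation and use case; the design choice that matters is that the stopping condition is explicit, monitored, and owned by a human.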
What is truly fair? The ideal of trustworthiness is perhaps a far better and more pragmatic high watermark to aspire to than fairness. What we need from AI is trust and honesty – at the bare minimum, mechanisms and controls that enable us to identify when things go wrong so that we can intervene.