The potential of AI to improve our businesses and our daily lives is unquestionable, but we must make sure we understand the implications of this technological power rather than getting caught up in the buzz. In this article, Alex Guzelkececiyan looks at an ethical framework to guide AI implementation and considers some of the solutions being trialled.
Great Power and Responsibility
‘With great power comes great responsibility’ – a saying popularised by Stan Lee’s Spider-Man.1 The great power of Artificial Intelligence (AI) is that it allows us to be proactive rather than reactive. We can leverage more data, find more intricate patterns, and achieve all of this faster than before.
So, where are we on responsibility? NGOs and tech giants have begun the work of trying to educate the public on the risks of AI. Although there is a limited amount of concrete AI governance, most government discussions are centred around self-governance. This trust in the tech industry to keep itself in check is questionable at best.2
Since the onus is on us, let’s educate ourselves. In this article we will take a high-level tour of the ethical considerations surrounding AI implementation and consider some of the solutions that are currently being trialled.
Our considerations will be structured around The Alan Turing Institute’s FAST framework3: Fairness, Accountability, Safety and Transparency.
What this article is and what it is not
This article is not a hit-piece on AI. In research conducted by PwC, 84% of CEOs surveyed reported that AI was expected to change their business in the next 5 years, with 80% already having some sort of implementation.4 In separate research, PwC estimated that AI will contribute $15.7 trillion to the global economy by 2030.5 AI is unquestionably our future: it will improve the world, and it is exciting to see science fiction turn into reality before our very eyes. (No reference here – this is purely opinion.)
On the topic of science fiction, this article is not considering the ethics around the malevolent, sentient AI of the Terminator films. Current AI technology is referred to as ‘narrow’ AI because it can only execute the narrow list of tasks it has been trained to perform. AI that can emulate humans – choosing which problems to tackle and working across a wide range of domains – is still entirely science fiction.6
What is AI?
For a ‘normal’ computer programme, a human is required to design every equation and process. With AI, a human instead picks the type of algorithm, and a provided ‘training’ dataset allows the AI to adjust the algorithm’s equations to produce the most accurate result – this is the ‘intelligence’ part.6 There are many different types of algorithms and processes, of course – think buzzwords like ‘neural networks’, ‘deep learning’ and ‘supervised/unsupervised learning’. Although fascinating, a discussion of these would take up far too much space and will be saved for a subsequent article. What is important for this article is the understanding that these ‘intelligent’ processes are just mathematical algorithms used to interpret data.
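To make the distinction concrete, here is a minimal sketch in Python using the scikit-learn library; the loan-approval setting, the features and the tiny training dataset are all made up for illustration. In the first function a human writes the rule explicitly; in the second, the chosen algorithm derives its own parameters from the training data.

```python
from sklearn.linear_model import LogisticRegression

# 'Normal' programme: a human designs the rule explicitly.
def approve_loan_by_hand(income, debt):
    return income - debt > 20_000  # threshold chosen by a human

# AI: a human picks the type of algorithm (here, logistic regression)
# and supplies a training dataset; the algorithm tunes its own parameters.
X_train = [[45_000, 10_000], [30_000, 25_000], [80_000, 5_000], [25_000, 20_000]]
y_train = [1, 0, 1, 0]  # historical approve/reject decisions (toy data)

model = LogisticRegression()
model.fit(X_train, y_train)  # the 'intelligence' part: fitting to the data
print(model.predict([[50_000, 8_000]]))  # prediction for a new applicant
```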
Fairness
Bias predominantly appears due to the data used to train the AI model. For example, one model used to approve bank loans required women to earn 30% more than their male counterparts to qualify for the same sized loan7, and Amazon’s recruitment AI penalised female applicants8. Both defects occurred because the models were trained on data that included these biases. An AI model is a mathematical algorithm; it does not know that these patterns are not fair. An AI model used by Florida’s judicial system was found to be twice as likely to label an African American defendant as high-risk compared to a white defendant. This was due to training data that came from overpoliced areas where crimes are disproportionately reported against African Americans.9 Black women being misclassified by facial recognition software 35% of the time10 is partly a result of the AI being trained predominantly on images of white men.
How can we stop AI exacerbating our biases? We can pre-process the data to remove any protected information and make sure our data sampling is representative.11 We can adjust the AI’s algorithm so that it maximises a ‘fairness score’ defined by a combination of the 21 metrics catalogued by Arvind Narayanan – he admits that these aren’t comprehensive and that they are drawn purely from the computer science/statistical literature rather than philosophical theories of fairness.12 There is still work to be done, but it’s a start. Finally, we can post-process the model’s output to make sure a prediction is not altered when only the protected information is changed.13
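As an illustration of that last check, here is a minimal sketch of a counterfactual ‘flip test’; it assumes a fitted scikit-learn-style model, a NumPy feature matrix, and a made-up layout in which one column holds a binary protected attribute.

```python
import numpy as np

def counterfactual_flip_test(model, X, protected_col=0):
    """Check whether predictions change when only the protected
    attribute (encoded as 0/1 in column protected_col) is flipped."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    original = model.predict(X)
    flipped = model.predict(X_flipped)
    # Fraction of individuals whose outcome depends on the protected attribute
    return np.mean(original != flipped)

# Usage: a rate of 0.0 means the protected attribute never changes
# the outcome for any individual in X.
# rate = counterfactual_flip_test(loan_model, X_test)
```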
We can also harness AI’s ability to ‘call us out’ on our biases. For example, an AI model was able to detect zip codes predominantly populated by ethnic minorities by looking at the results of a separate AI tool used to calculate credit scores – despite the fact that the zip codes are not involved in calculating the score. This ‘checker’ AI was then able to retune the original model until it could no longer accurately predict these zip codes.14 Thus, AI was able to hold us accountable for our inherent biases.
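A heavily simplified sketch of the detection step (not the full retuning loop) might look as follows; the variable names and the binary zip-code grouping are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def bias_leakage_score(credit_scores, zip_group):
    """Train a 'checker' model to predict a sensitive proxy (zip-code
    group) from another model's outputs. Accuracy well above chance
    means the scores are encoding information about the proxy."""
    X = np.asarray(credit_scores).reshape(-1, 1)  # scores as the only feature
    checker = LogisticRegression()
    return cross_val_score(checker, X, zip_group, cv=5, scoring="accuracy").mean()

# If this is far above 0.5 for a balanced binary grouping, the scoring
# model is leaking zip-code information and should be retuned.
# leakage = bias_leakage_score(scores, zip_group)
```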
Accountability
A sentence that is repeated in this article: AI is a mathematical algorithm. It cannot make a mistake in and of itself. The issue lies in the data it is provided or in how it is programmed – both are human responsibilities. If Word doesn’t work as it should, it’s Microsoft’s fault. If an AI makes predictions that are inaccurate or biased, it is the human creator who is at fault.15 Every AI must have ‘in-the-loop’ humans who are ultimately responsible for the entire AI process: input data, process design and output predictions. Humans must work alongside AI to bring real-world context into the decision-making process. For example, in high-risk situations such as healthcare, AI should be used to inform a trained doctor’s decision rather than to make it for them.16
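One common way to operationalise ‘in-the-loop’ humans is a confidence gate: the model’s output is only acted on automatically when it is confident, and everything else is escalated. The sketch below rests on assumptions – a scikit-learn-style classifier with predict_proba and a made-up threshold – rather than prescribing an implementation.

```python
def triage_prediction(model, case_features, confidence_threshold=0.95):
    """Act on a prediction automatically only when the model is
    confident; otherwise escalate the case to a human reviewer."""
    proba = model.predict_proba([case_features])[0]
    confidence = proba.max()
    if confidence >= confidence_threshold:
        return {"decision": int(proba.argmax()), "handled_by": "model",
                "confidence": confidence}
    # Low confidence: the human is 'in the loop' and owns the decision.
    return {"decision": None, "handled_by": "human_review",
            "confidence": confidence}
```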
We cannot hide behind the jargon of ‘hidden layers in neural networks’ to shirk the blame onto the computer.17 An AI must have built-in end-to-end monitoring, and each stage must be individually open to audit. This will require a lot of extra work and will slow our AI down, but it is vital.
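What stage-by-stage auditability can look like in code is sketched below; the logging scheme and field names are assumptions rather than a standard, and the input is assumed to be a plain list of numbers.

```python
import json, time, uuid

def audited_predict(model, features, model_version, log_file="audit.jsonl"):
    """Wrap a prediction so that the input, model version and output
    are recorded in an append-only log that auditors can replay."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "input": features,
        "prediction": int(model.predict([features])[0]),
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")  # one auditable line per decision
    return record["prediction"]
```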
Consider this example from a TED talk18 which illustrates my point: an AI is given the constituent limbs of a robot and a mini obstacle course, and the aim is to assemble a robot that can cross the finish line. As humans, we expect the AI to put together a human-like robot with arms and legs that walks across the course. Instead, the AI stacks all the parts into a tall vertical pole that topples over and technically crosses the finish line. Although comical, this example serves as a warning: if left unchecked, an AI may complete a given task in a completely inappropriate fashion, and depending on the task this could have far more detrimental consequences than a wrongly formed robot.
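The same failure mode has a tamer, everyday analogue in machine learning, sketched below with made-up numbers: ask only for high accuracy on an imbalanced dataset, and a model can ‘technically cross the finish line’ by always predicting the majority class.

```python
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

# Toy fraud dataset: 95% legitimate (0), 5% fraudulent (1).
X = [[i] for i in range(100)]
y = [0] * 95 + [1] * 5

# A degenerate 'solution' that games the stated objective.
model = DummyClassifier(strategy="most_frequent").fit(X, y)
pred = model.predict(X)

print(accuracy_score(y, pred))  # 0.95 -- task 'completed'
print(recall_score(y, pred))    # 0.0  -- every fraud case missed
```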
Safety
Of course, the safety considerations of technology in general (data privacy, reliability etc.) are all relevant here, but we will focus on issues specific to AI. We will consider three key areas: accuracy, security and robustness.19
There are various accuracy metrics used to measure the correctness of outputs. Different industries will have different thresholds, but we must be aware that an AI tool is making a prediction and so can never be 100% accurate. Every tool has a built-in error scoring system which it attempts to minimise. However, we must also be conscious of the fact that not all errors are created equal. Some AI is now being created with different weightings attributed to different error classes, so that more serious errors are more severely penalised.19
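In many libraries this weighting is a built-in option. Here is a minimal sketch using scikit-learn’s class_weight parameter; the fraud setting and the 10:1 weighting are made up for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Treat missing a fraudulent transaction (class 1) as ten times more
# costly than wrongly flagging a legitimate one (class 0), so the
# training process penalises the serious error class more severely.
model = LogisticRegression(class_weight={0: 1, 1: 10})
# model.fit(X_train, y_train)
```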
Apart from inherent issues like accuracy, AI is also vulnerable to external ‘adversarial attacks’ – a term covering any malicious action that tries to sabotage the algorithm. In its most common form, the input data is adjusted – usually in a minor, imperceptible way – to produce an incorrect output: for example, causing an autonomous vehicle to misclassify a ‘stop’ sign, which could result in a crash. The current strategy used to counteract such attacks is called ‘model hardening’: the AI is attacked in a controlled setting so that it can be trained to detect attacks when it is put into real-world use.19
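The core trick behind many such attacks can be sketched in a few lines. The example below is a hand-rolled, fast-gradient-style perturbation against a simple logistic-regression model; real attacks and defences are far more sophisticated, and the step size eps is a made-up value.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps=0.05):
    """Nudge input x in the direction that most increases the loss of a
    logistic-regression model p = sigmoid(w.x + b), by a small step eps
    -- often imperceptible, yet enough to flip the prediction."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad_x = (p - y) * w  # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# 'Model hardening' (adversarial training), in outline: retrain the model
# on a mix of clean inputs and fgsm_perturb-ed inputs so that it learns
# to handle attacked examples correctly too.
```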
The real world is random and unstructured. Information is often incomplete or arrives in a form we are not used to, and niche special cases arise that we have never seen before. A human can exercise judgement to make sense of the situation in real time, while a ‘conventional’ computer programme will stop working and display an error message. AI sits somewhere in the middle, and we must be aware of its limitations. Again, these errors may not be catastrophic and may only occur in very rare cases, but we must constantly monitor for this risk to give ourselves the best chance of mitigating it.
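One practical mitigation is to refuse to predict silently on inputs unlike anything seen in training and to escalate instead. This sketch assumes per-feature minimum/maximum summaries computed from the training data; real out-of-distribution detection is a research area in its own right.

```python
import numpy as np

def guarded_predict(model, x, train_min, train_max):
    """Escalate inputs that are incomplete or fall outside the range
    seen in training, where the model's behaviour is untested, instead
    of silently returning a prediction."""
    x = np.asarray(x, dtype=float)
    if np.any(np.isnan(x)):
        return {"status": "escalate", "reason": "missing values"}
    if np.any(x < train_min) or np.any(x > train_max):
        return {"status": "escalate", "reason": "outside training range"}
    return {"status": "ok", "prediction": model.predict([x])[0]}

# train_min / train_max would be computed once from the training data:
# train_min, train_max = X_train.min(axis=0), X_train.max(axis=0)
```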
Transparency
84% of CEOs in a recent PwC survey agree that AI-based decision-making must be explainable if it is to be trusted.20 Especially in highly regulated industries, such as financial services, we cannot have blind faith in a ‘black box’ AI with no understanding of how and why predictions are made.21
McKinsey has attempted to deal with this AI problem… with AI. The process uses three different AI models. The first is the one actually doing the predicting. The second queries the first at every stage of the process to ascertain why it is making the predictions it makes. The third then translates the insights of the second into plain language.22 We must be careful with the definition of ‘plain language’ here: although we should endeavour to make the language as accessible as possible, the result may still read like a scientific paper. But that’s fine; someone trained will be able to understand it. Baby steps.
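McKinsey has not published this pipeline as code, but a heavily simplified sketch of the same pattern might pair a predictor with an explainer (here, scikit-learn’s permutation importance) and a templating step that renders the result in plain-ish language.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def explain_in_plain_language(model, X_val, y_val, feature_names):
    """Explainer: measure how strongly each feature drives the model's
    predictions, then translate the top drivers into a sentence."""
    result = permutation_importance(model, X_val, y_val, n_repeats=10)
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: -pair[1])
    top = ", ".join(f"'{name}'" for name, _ in ranked[:3])
    return f"The model's predictions are driven mainly by {top}."

# Usage, with a fitted predictor:
# model = RandomForestClassifier().fit(X_train, y_train)
# print(explain_in_plain_language(model, X_val, y_val, feature_names))
```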
What to do next?
We must not get caught up in the buzz of technological transformation and AI. As with all endeavours, the true potential is unlocked when we think deeply about what we are doing, strategise, and mitigate risks. In a subsequent article we will look more specifically at the ethical implications for financial services institutions and their customers. We will also take a deep dive into the inner workings of AI – I promise there won’t be any equations.
AI is often thought of as an avenue to rid our world of human inefficiencies, but without care it will merely serve to exacerbate existing problems. We must not let our biases corrupt our AI solutions. We must take ownership of our AI and monitor our solutions at all times. We must be aware of the limitations and weaknesses of our AI. And we must make sure we understand exactly how our AI is working, not just take its predictions as gospel.
We have the power, and we must shoulder the responsibility.
Sources
1. https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility
2. https://www.birmingham.ac.uk/research/quest/emerging-frontiers/ai-and-the-law.aspx
3. https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf
4. https://www.pwc.com/gx/en/issues/reinventing-the-future/take-on-tomorrow/business-ai-maturity-divide.html
5. https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html
6. https://www.ibm.com/cloud/learn/what-is-artificial-intelligence
7. https://www.oliverwyman.com/our-expertise/insights/2020/nov/ai-can-make-bank-loans-more-fair.html
8. https://www.bbc.co.uk/news/technology-45809919
9. https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/Tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/MGI-Tackling-bias-in-AI-June-2019.pdf
10. https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist/
11. https://www.oliverwyman.com/our-expertise/insights/2020/nov/ai-can-make-bank-loans-more-fair.html
12. https://www.youtube.com/watch?v=jIXIuYdnyyk&t=113s&ab_channel=ArvindNarayanan
13. https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/Tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/MGI-Tackling-bias-in-AI-June-2019.pdf
14. https://www.oliverwyman.com/our-expertise/insights/2020/nov/ai-can-make-bank-loans-more-fair.html
15. https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf
16. https://www.mckinsey.com/featured-insights/artificial-intelligence/ai-ethics-podcast
17. https://www.ibm.com/cloud/learn/what-is-artificial-intelligence
18. https://www.youtube.com/watch?v=OhCzX0iLnOc&ab_channel=TED
19. https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf
20. https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai/responsible-ai-practical-guide.pdf
21. https://www.mckinsey.com/featured-insights/artificial-intelligence/ai-ethics-podcast
22. https://www.mckinsey.com/featured-insights/artificial-intelligence/ai-ethics-podcast