Ethics in artificial intelligence: Discriminatory biases

September 12th, 2023

Introduction

With the rapid advances in artificial intelligence (“AI”), the call for an ethical framework for AI is growing louder. But what exactly does that entail?

“Derived from the Greek word ‘ethos,’ which means ‘way of living,’ ethics is a branch of philosophy that is concerned with human conduct, more specifically the behaviour of individuals in society. Ethics examines the rational justification for our moral judgments; it studies what is morally right or wrong, just or unjust.”1

As for AI, for the purposes of this discussion, we have adopted the definition established by the European Parliament in 2020: “the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity”.2

So, just like human conduct, artificial intelligence that displays human-like capabilities raises ethical, moral and social questions. And although these systems are capable of learning, they are trained on predefined representations of the world, partial models generated by humans.

Against this backdrop, many organizations have been looking at developing ethical principles for AI that must be considered from the outset in the design of all algorithms. Those principles include transparency and explicability, accountability, privacy, justice and fairness (non-discrimination), and safety and security (non-harm, or non-maleficence).

Without attempting to cover every ethical issue that might arise in AI, the authors of this article have chosen to focus on one human behaviour already being mirrored in AI: discriminatory biases.

 

Discriminatory AI has many underlying causes

Discriminatory biases exist outside of AI and algorithms. They first exist in humans in the form of shortcuts taken by the brain that lead to incorrect and subjective decisions or conclusions.3 The biases found in AI are therefore often the same as those found in humans, but amplified by AI’s access to astronomical amounts of biased data. AI is said to be biased when its output is not neutral, fair or equitable.4

Moreover, AI exhibits several types of algorithmic bias, which can be grouped into two broad categories. First, there are cognitive biases, which are introduced by the human designers. These include confirmation bias5, repetition bias6 and stereotype bias7. In these cases, human designers are subject to biases triggered by stereotypes and prejudices deeply rooted in their subconscious, which cause them to hold on to their perception of the world even when the data contradicts it.8 The cognitive biases of human designers are thus embedded, in the form of algorithmic biases, in the AI systems that they program.

There are also statistical biases arising from the training data. They include representativeness bias9, data bias10 and variable omission bias11. These arise when there is a mismatch, caused by the way the data was collected, between the data an algorithm uses and the reality it is trying to measure.12 This often occurs when artificial intelligence is trained on insufficient, inaccurate or unrepresentative data. AI systems trained on such biased data will necessarily produce biased results.
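To make the point concrete, here is a minimal Python sketch, using entirely synthetic data and invented numbers rather than anything drawn from the cited sources, of how a model trained on a sample that over-represents one group can end up markedly less accurate for the under-represented group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_group(n, group):
    # Synthetic two-feature data; the two groups follow different labelling rules.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0) if group == "A" else (X[:, 1] > 0)
    return X, y.astype(int)

# Training set in which group A is heavily over-represented (representativeness bias).
XA, yA = sample_group(950, "A")
XB, yB = sample_group(50, "B")
model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Balanced test set: accuracy is markedly lower for the under-represented group.
for group in ("A", "B"):
    X_test, y_test = sample_group(500, group)
    print(group, "accuracy:", (model.predict(X_test) == y_test).mean())
```

In this toy setup, the model learns the pattern of the majority group and misclassifies far more members of the minority group, even though nothing in the algorithm itself refers to group membership.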

 

Examples of biased algorithms in some justice systems

Examples of biased AI tools include the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) program, which is used by some U.S. courts to assess a defendant’s likelihood of reoffending based on publicly available data and answers to more than 100 questions about past criminal activity, family history, education, place of residence, employment status and more.13 However, a 2016 ProPublica investigation found that the software reflects and reinforces racist prejudices and stereotypes against Black people by falsely predicting that they are more likely to reoffend: COMPAS falsely assigns them a risk of reoffending roughly twice that assigned to lighter-skinned people in similar situations.14
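The disparity at issue is, at bottom, a difference in false positive rates between groups. The sketch below shows how that kind of audit can be expressed in code; the figures are made up for illustration and are not ProPublica’s or COMPAS’s actual data.

```python
def false_positive_rate(flagged_high_risk, reoffended):
    # Share of people who did NOT reoffend but were nonetheless flagged as high risk.
    false_positives = sum(1 for f, r in zip(flagged_high_risk, reoffended) if f and not r)
    non_reoffenders = sum(1 for r in reoffended if not r)
    return false_positives / non_reoffenders

# Toy records: 1 = flagged high risk / did reoffend, 0 = not flagged / did not reoffend.
group_a = {"flagged": [1, 1, 0, 1, 0, 0, 1, 0], "reoffended": [0, 1, 0, 0, 0, 0, 1, 0]}
group_b = {"flagged": [0, 1, 0, 0, 0, 0, 1, 0], "reoffended": [0, 1, 0, 0, 0, 0, 0, 0]}

for name, g in (("Group A", group_a), ("Group B", group_b)):
    print(name, "false positive rate:", round(false_positive_rate(g["flagged"], g["reoffended"]), 2))
```

With these toy numbers, people in group A who never reoffended are flagged as high risk more than twice as often as their counterparts in group B, mirroring the kind of gap ProPublica reported.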

Another example is facial recognition technology, which is increasingly being used by authorities in various countries to identify suspects. Studies have shown that many artificial intelligence tools perform better on light-skinned faces than on darker ones.15 In January 2020, an error in a facial recognition algorithm used by police in Detroit, Michigan, led to the wrongful arrest and 30-hour detention of a Black man falsely accused of theft.16 Despite their impressive overall biometric accuracy, facial recognition algorithms are therefore likelier to be wrong about certain demographic groups, particularly Black people and women, according to the National Institute of Standards and Technology.17

 

A multipronged solution

But how do we ensure that artificial intelligence systems do not discriminate against certain groups, whether based on skin colour, gender, age or nationality?

There are several possible ways to design fairer, less biased AI systems. First, we need to raise awareness among those who design them so that they recognize and mitigate their own biases, notably by working in diverse, mixed and interdisciplinary teams that prioritize AI’s impact on people over performance or the pursuit of profit.18

We then need to improve the AI systems themselves, in the way they collect and process data and in the way they are developed, deployed and used, to prevent them from perpetuating human cognitive biases. One way to do this is to incorporate psychological research and literature on unconscious bias into the data used by AI.19 It is also critical to make the data sets on which AI systems are trained as diverse and representative as possible.
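As a purely illustrative sketch of what checking representativeness might look like in practice, the code below (a hypothetical helper with made-up population shares, not taken from any cited source) compares each group’s share of a training set against reference population figures before training begins.

```python
from collections import Counter

def representation_gaps(records, group_key, population_shares, tolerance=0.05):
    # Flag any group whose share of the dataset differs from its population share
    # by more than the chosen tolerance.
    counts = Counter(r[group_key] for r in records)
    total = len(records)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Toy dataset and made-up population shares.
dataset = [{"group": "A"}] * 800 + [{"group": "B"}] * 150 + [{"group": "C"}] * 50
population = {"A": 0.60, "B": 0.25, "C": 0.15}
print(representation_gaps(dataset, "group", population))
```

Run on the toy dataset, the check flags all three groups because the sample departs from the assumed population shares by more than the chosen tolerance; a real audit would feed in actual demographic data and a defensible reference distribution.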

At the same time, we should increase and improve our use of AI as a tool for identifying instances of systemic discrimination against particular demographic groups and developing targeted public policies to address them.

 

How AI can help reduce discrimination

One example is the Algorithmic Justice League, which raises public awareness of the harms of AI systems, works to make the data used by those systems representative of all minority groups, and allows citizens to report algorithmic bias they have experienced.20

Another example is the Ravel Law analytics platform, which allows users to review every decision handed down and identify trends in the way judges think, write and rule.21 These types of programs could be used to determine whether stereotypes and prejudices exist within a legal system or among certain judges, and if so, to expose them. Using AI this way could help us better protect people in marginalized groups in legal situations, make decision-making processes within legal systems more transparent and hold judges more accountable.22
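By way of illustration only, and without suggesting this is how Ravel Law actually works, an analysis of that kind could start with something as simple as aggregating outcomes by judge and by litigant group (the records and field names below are invented):

```python
from collections import defaultdict

# Invented case records; in practice these would come from a decisions database.
cases = [
    {"judge": "Judge 1", "claimant_group": "group X", "ruled_for_claimant": False},
    {"judge": "Judge 1", "claimant_group": "group Y", "ruled_for_claimant": True},
    {"judge": "Judge 2", "claimant_group": "group X", "ruled_for_claimant": True},
    {"judge": "Judge 2", "claimant_group": "group Y", "ruled_for_claimant": True},
]

# Tally, for each judge and claimant group, how often the claimant prevailed.
tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # judge -> group -> [wins, total]
for case in cases:
    wins_total = tallies[case["judge"]][case["claimant_group"]]
    wins_total[0] += int(case["ruled_for_claimant"])
    wins_total[1] += 1

for judge, groups in tallies.items():
    for group, (wins, total) in groups.items():
        print(judge, group, f"{wins}/{total} rulings in the claimant's favour")
```

Such tallies prove nothing on their own, but marked and persistent disparities would signal where closer human review of the underlying decisions is warranted.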

When considering the regulation of AI, it is important to ensure that ethical issues are addressed at all stages of the life cycle of these systems, from research, design and development to deployment and use, including maintenance, operation, trade, financing, monitoring and evaluation, validation, end-of-use, disassembly and termination.23

 

Law firms are not immune

Beyond ChatGPT, which invents non-existent legal precedents24 and fails the Quebec bar exam with a score of 12%,25 the use of AI by law firms also raises ethical concerns. Among them, and staying with the theme of bias, is the question of whether the cases used as a basis for AI in the legal field could be biased. Is the law becoming more Americanized due to the ever-growing amount of digital data coming from our neighbours to the south?26

Law firms, therefore, may face the same problems mentioned above, i.e., the risk that the programs they use may incorporate and amplify pervasive prejudice and discrimination against the most vulnerable groups.

One explanation for these challenges is that the analogical reasoning used by lawyers is difficult to translate neutrally into algorithms.27 The use of AI in comparing similar legal cases is effectively limited because it does not always take into account non-legal factors, such as morality, economics, politics and other human factors that are beyond algorithms’ capabilities, at least for now.28

AI could compromise a lawyer’s independent professional judgment if the lawyer leans too heavily on information provided by AI and the embedded algorithms that were originally developed by other humans. There is therefore a risk that some lawyers will rely on AI outputs without verifying them and without exercising their independent judgment.

Lawyers will therefore need to confirm whether information provided by AI technology is accurate and reliable. In addition, they should use AI outputs only as a starting point for their reasoning, and then use their own judgment to provide clients with relevant, independent advice specific to their case. The bottom line is that AI can be used to point lawyers in the right direction, but it’s up to the lawyer to decide how to proceed.29

On June 23, 2023, the Court of King’s Bench of Manitoba issued a practice direction to ensure some control over the use of artificial intelligence in the courts:

“With the still novel but rapid development of artificial intelligence, it is apparent that artificial intelligence might be used in court submissions. While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence. To address these concerns, when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.”30

The prevalence of North American works, which dominate legal IT systems and databases, poses yet another obstacle to the ethical use of AI in Canadian legal practice, as it could lead to the Americanization of Canadian law.31

Note, also, that Quebec civil law and Canadian common law can be quite different from U.S. law. Once again, it’s important to make sure that the data on which the AI used by law firms is trained is as diverse as possible, that it’s verified and verifiable, and that it’s based on the law of the relevant jurisdiction.

 

Conclusion

AI raises the same ethical, moral and social questions that we see in society at large. Among them, discriminatory bias should be a key concern. These issues should therefore be considered when algorithms are being designed, and especially when training data is selected. However, if used properly, AI could also help to combat discrimination.

Lawyers, meanwhile, should always give extra thought to how they use AI to avoid automating decisions driven solely by these technologies.

__________

1 Government of Canada (2015). What is ethics? Canada.ca, online.
2 European Parliament (2020). “What is artificial intelligence and how is it used?”, News, European Parliament, online.
3 Clémence Maquet (2021). “Intelligence artificielle : quelle approche des biais algorithmiques?”, Siècle Digital, online.
4 Patrice Bertail, David Bounie, Stephan Clémençon and Patrick Waelbroeck (2019). “Algorithmes : biais, discrimination et équité”, Télécom ParisTech, online.
5 When only information and data confirming certain opinions and assumptions are used.
6 When information is considered to be true by dint of repetition, even when it is false.
7 When statements are based on prejudice against certain groups in a population.
8 Patricia Gautrin (2021). “[ANALYSE] Que reprocher à un algorithme biaisé?”, CScience, online.
9 When the data used are not representative of the population.
10 When the data used are simply inaccurate.
11 When certain variables affecting reality have been omitted.
12 Statistics Canada (2023). “Statistique 101 : biais statistique [Video]”, YouTube, online.
13 Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner (2016). “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.”, ProPublica, online.
14 Ibid.
15 DataScientest (2020). “Intelligence artificielle et discrimination : Tout savoir sur ces sujets”, online.
16 Agence France-Presse (2020). “Un homme noir arrêté à tort à cause de la technologie de reconnaissance faciale”, Radio-Canada, online.
17 Patrick Grother, Mei Ngan and Kayee Hanaoka (2019). Face Recognition Vendor Test, Part 3: Demographic Effects, National Institute of Standards and Technology, U.S. Department of Commerce, online.
18 Ibid.
19 Ibid.
20 “Mission, Team and Story”, The Algorithmic Justice League, online.
21 “Ravel Law”, SourceForge, online.
22 Anne-Isabelle Cloutier and Katarina Daniels (2019). “La discrimination systémique à l’aube de l’intelligence artificielle”, Blogue du CRL, online.
23 UNESCO Digital Library (2021). “Recommendation on the Ethics of Artificial Intelligence”, online.
24 Benjamin Weiser (May 27, 2023). “Here’s What Happens When Your Lawyer Uses ChatGPT”, The New York Times, online; Dan Milmo (June 23, 2023). “Two US lawyers fined for submitting fake court citations from ChatGPT”, The Guardian, online.
25 Katia Gagnon (May 25, 2023). “Examen du Barreau ChatGPT recalé”, La Presse, online.
26 Arnaud Billion and Mathieu Guillermin (2019). “Intelligence artificielle juridique : enjeux épistémiques et éthiques”, Droit, Sciences et Technologies, 9, 131-147, online.
27 Ibid.
28 Michael A Patterson and Rachel P. Dunaway (2019). “Understanding the Ethical Obligations of Using Artificial Intelligence”, Long Law Firm, online.
29 Ibid.
30 Court of King’s Bench of Manitoba (June 23, 2023). “Practice Direction”, online.
31 Arnaud Billion and Mathieu Guillermin (2019). “Intelligence artificielle juridique : enjeux épistémiques et éthiques.” Cahiers Droit, Sciences et Technologies, Vol. 8, p. 131-147, online.