
AI in Legal Systems: Examining Gender Bias and the Role of UK Legal Frameworks in Addressing It


Abstract: This study examines gender discrimination in Artificial Intelligence (AI) systems used in the legal system, focusing on risk assessment, facial recognition, and decision-making and decision-support tools. It examines how reliance on historical data, under- and over-representation in training datasets, and the homogeneity of development teams perpetuate existing gender biases. The study then analyses the implications of the United Kingdom General Data Protection Regulation (UK GDPR) and the proposed Data Protection and Digital Information (DPDI) Bill in addressing gender biases in AI, and finds the need for a more robust and proactive legal framework that addresses the root causes of these biases in the design and implementation of AI systems. The paper concludes by proposing a framework to effectively address gender bias in AI systems used in the legal system. The framework outlines explicit obligations across policymakers, companies, and end users to ensure the development and deployment of bias-free AI systems, providing comprehensive guidelines and oversight mechanisms that promote proactive measures to prevent gender bias. The framework aims to create a more equitable legal environment for everyone.


Keywords: Artificial Intelligence, Gender Discrimination, UK GDPR, Automated Decision-Making, Policy Recommendations.


1. Introduction

Stereotypes often lead to discrimination in the judiciary, which continues to disadvantage women. Whether as victims, witnesses, or offenders, women’s experiences differ significantly from men’s [1]. An analysis of 67 million case law documents reveals significant gender bias within the judicial system [2]. With the increasing utilisation of AI in legal systems, will it perpetuate or eliminate gender discrimination? The world has witnessed both its opportunities and risks. AI has been valuable in improving productivity and access to justice, such as through ROSS Intelligence and the DoNotPay System. Legal professionals also believe that using automation in the early stages of court processes is fairer than human judgment, given that gender discrimination is a reality in every judiciary [3]. However, if left unaddressed, AI systems will perpetuate or deepen gender biases, acting as a proxy for human decisions [4].


AI bias stems from two main sources: the use of biased or incomplete datasets for training algorithms and the inherent design biases present within the algorithms themselves [5]. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has identified that large language models (LLMs), including Llama 2 and GPT-2, exhibit bias against women and girls, a concern intensified by their free and public accessibility [6]. Joy Buolamwini and Timnit Gebru categorised potential harms caused by algorithmic decision-making into three areas: “loss of opportunity,” “economic loss,” and “social stigmatization” [7]. In order to analyse gender discrimination in these domains, this article looks at the effects of automated decision-support and decision-making systems, risk assessment tools, and facial recognition technology.


Next, using the case of the United Kingdom (UK), the paper examines current frameworks and regulations designed to address gender biases in AI systems. Specifically, the UK GDPR aims to protect individuals from potentially harmful legal decisions that are made solely by AI algorithms. Its shortcomings and suggested changes, however, highlight the need for clearer frameworks and preventive mechanisms to address gender discrimination in AI algorithms utilized by the judicial system.


On the whole, this paper utilizes existing literature and case analysis to examine the intersection of AI technology and gender discrimination within the legal system. Then, it critically assesses the UK GDPR and identifies gaps in its effectiveness. By doing so, the study aims to provide valuable insights that stakeholders and policymakers may utilize to create and maintain more equitable AI algorithms for the legal system.


2. Risk Assessment Tools

Organizations use actuarial risk assessment tools to assist judges, prosecutors, and other legal professionals to predict the probability of certain outcomes in court. The risk assessment systems work by analyzing historical datasets and identifying patterns to generate an outcome. However, Katyal underscores the issues of underrepresentation and exclusion, in which certain groups are inadequately represented in certain datasets, leading to inaccurate results [8]. Moreover, risk assessment tools have been male-centric, largely because the majority of violent extremist offenders and terrorists in prison are men [9]. The overreliance on male-centric data leads to inaccurate assessment outcomes for women and other gender identities [9]. Specifically, people often overlook non-binary genders, thereby leading to misclassification.
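The effect of under-representation can be made concrete with a small sketch. The snippet below is a minimal illustration, not any deployed tool's actual model: it trains a simple classifier on synthetic data in which one group dominates the historical records, and the features, coefficients, and group sizes are invented purely for demonstration.

```python
# Minimal illustration (not any deployed tool's actual model): a risk classifier
# trained on data where women are heavily under-represented tends to be less
# accurate for them, simply because it has seen fewer female examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Synthetic offence histories; `weights` encodes how features relate to the outcome."""
    X = rng.normal(size=(n, 3))              # e.g. prior offences, age, supervision failures
    p = 1 / (1 + np.exp(-(X @ weights)))     # group-specific relationship to reoffending
    y = rng.binomial(1, p)
    return X, y

# Men dominate the historical dataset; women follow a different feature-outcome pattern.
X_m, y_m = make_group(5000, np.array([1.2, -0.4, 0.8]))
X_w, y_w = make_group(250,  np.array([0.5, -1.0, 0.3]))

model = LogisticRegression().fit(np.vstack([X_m, X_w]), np.concatenate([y_m, y_w]))

# Held-out accuracy by group: the under-represented group typically scores worse.
X_m_test, y_m_test = make_group(2000, np.array([1.2, -0.4, 0.8]))
X_w_test, y_w_test = make_group(2000, np.array([0.5, -1.0, 0.3]))
print("accuracy (men):  ", model.score(X_m_test, y_m_test))
print("accuracy (women):", model.score(X_w_test, y_w_test))
```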


Moreover, socioeconomic factors, when combined with binary gender variables, also produce biased results. Gwen van Eijk criticises the use of socioeconomic factors in risk assessment tools because they perpetuate social inequality and sentencing disparities. For example, women of socioeconomically marginalised status may be subjected to longer custodial sentences and less favourable treatment in the justice system due to their assessed risk levels [10]. Likewise, Starr articulates the importance of risk assessment tools focusing on individual behaviour and criminal history rather than demographic features. When disproportionately focusing on demographic characteristics, risk assessment tools both fail to yield accurate outcomes for individuals and deepen inequalities and biases within the justice system [11].


Indeed, the U.S. Supreme Court has consistently rejected the use of statistical generalizations about group tendencies, emphasizing individualism as essential to equal protection [12]. For example, in Craig v. Boren, the Court ruled against laws that treated individuals differently due to their gender, despite statistical evidence supporting these laws [13].


Given the intention to reject the use of protected variables to yield disparately different predictions, a possible alternative is to explicitly omit gender as a variable to achieve gender-neutral risk assessment tools. However, when gender is neither considered as a variable nor accounted for through gender-specific interpretations, the predictions become inaccurate. For example, women with a risk score of 6 were found to re-offend at the same rate as men with a risk score of 4 [14]. If risk assessment tools operated without gender as a variable, they would fail the “calibration within groups” standard and produce unfair predictions [15]. Likewise, a study by Skeem et al. omitted gender as a factor in the Post Conviction Risk Assessment (PCRA) and found that the PCRA ended up overestimating the risk of recidivism for women. Therefore, women would be unfairly penalised by risk assessment algorithms if gender were not included as a variable. The authors accordingly argue that gender-specific interpretations are necessary in order to produce predictions that are accurate for all [16]. Likewise, Kim's finding on race-aware algorithms showed that the non-inclusion of race in risk assessment tools does not ensure fairness, and that it can even produce more subtle or exacerbated biases. Kim contends that risk assessment tools should be aware of protected characteristics but should not employ them as decisive factors for prediction outcomes [17].
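The "calibration within groups" check itself is simple to express. The sketch below is illustrative only: the records are hypothetical, and a real audit would use the tool's own validation data rather than these invented rows.

```python
# A sketch of the "calibration within groups" check referenced above: for each
# gender, compare observed re-offence rates at each risk score. The records
# below are hypothetical; a real audit would use the tool's validation data.
import pandas as pd

df = pd.DataFrame({
    "gender":     ["F"]*6 + ["M"]*6,
    "risk_score": [4, 4, 6, 6, 6, 6, 4, 4, 4, 4, 6, 6],
    "reoffended": [0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1],
})

# Observed re-offence rate per (gender, risk score) cell.
calibration = (
    df.groupby(["gender", "risk_score"])["reoffended"]
      .mean()
      .unstack("risk_score")
)
print(calibration)

# If women at score 6 re-offend at roughly the rate of men at score 4 (as the
# cited study found), equal scores do not carry equal meaning across genders,
# and the tool fails calibration within groups.
```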


However, gender-specific risk assessment tools are currently not prevalent: the Radicalisation Awareness Network (RAN) claims that the majority of the risk assessment tools now available for violent extremist offenders (VEOs) are designed primarily to evaluate male experiences and behaviours, insufficiently incorporating gender-specific interpretations and indicators. The RAN then emphasizes the importance of incorporating gender-specific factors into existing risk assessment tools in order to understand how identity factors interact and impact experiences. To achieve this goal, the RAN recommends using structured professional judgment (SPJ) for nuanced assessment and including intersectionality [18]. Research shows that a significant percentage of women who are incarcerated report having been victims of physical or sexual abuse before their incarceration [19]. Therefore, rather than letting male-centric risk assessment tools create a cycle of victimization and criminality, gender-specific experiences should be taken into account [19].


Furthermore, the majority of risk assessment tools only take binary gender identities into account, which results in insufficient legal protections for those who do not fit into binary gender classifications [20]. Additionally, this exclusion perpetuates gender stereotypes, which link particular characteristics, behaviours, or appearances to either the male or female category [20].


Thus, neither incorporating gender in a binary form as a risk factor nor omitting it completely is sufficient. On the one hand, risk assessment tools should not rely on a single demographic factor as validation to treat one group differently from another. On the other hand, if current male-centric risk assessment tools are not updated, they will be biased against men due to the extensive data on male offenders, and they will also be inaccurate for women. Overall, risk assessment tools should adopt a comprehensive, intersectional framework to ensure accurate assessments for all gender identities.


3. Facial Recognition Tools

By analyzing facial features, facial recognition tools enable biometric identification and categorization [21]. Biometric identification requires a database of known faces to match against, allowing police and security agencies to identify suspects and assist in criminal investigations [21]. According to the Government Accountability Office (GAO), seven law enforcement agencies within the Departments of Justice (DOJ) and Homeland Security (DHS) have reported using facial recognition technology to aid in criminal investigations [22]. Facial recognition technology (FRT) functions through a systematic process that includes face detection, feature extraction, and pattern recognition, while handling variations in images [23]. Specifically, an FRT system is trained on a dataset of various faces as it develops. The algorithm learns to distinguish faces from other objects and identifies individual facial features to match new images [24]. This means that the accuracy of these algorithms is heavily dependent on the quality and representativeness of the training data [24].
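To make the matching stage concrete, the sketch below shows only the final identification step, assuming an upstream model has already converted each face image into an embedding vector; the names, vectors, and threshold are placeholders, and real systems are considerably more complex.

```python
# A schematic of the matching step only: once an upstream model has turned each
# face image into an embedding vector (the feature-extraction stage described
# above), identification reduces to a nearest-neighbour search with a threshold.
# The embeddings and names here are placeholders, not output from a real system.
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Database of known faces": name -> embedding produced by some upstream model.
rng = np.random.default_rng(1)
database = {name: rng.normal(size=128) for name in ["person_a", "person_b", "person_c"]}

def identify(probe_embedding, database, threshold=0.6):
    """Return the best-matching identity, or None if no score clears the threshold."""
    best_name, best_score = None, -1.0
    for name, emb in database.items():
        score = cosine_similarity(probe_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# If the upstream embedding model was trained on unrepresentative data, embeddings
# for under-represented faces cluster more tightly, raising the false-match rate.
probe = rng.normal(size=128)
print(identify(probe, database))
```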


Joy Buolamwini and Timnit Gebru conducted a study in 2018 that highlighted significant biases in facial recognition technology, revealing lower accuracy rates for women and individuals with darker skin tones. While International Business Machines (IBM), one of the companies whose facial recognition systems were evaluated, later improved its system and retested it, the error rates still disproportionately affected darker-skinned females [25].
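The core of such an audit is a disaggregated error analysis. The records below are illustrative, not the study's data; the point is only that aggregate accuracy can mask errors concentrated in one intersectional subgroup.

```python
# A sketch of the kind of disaggregated error analysis used in the Gender Shades
# audit: overall accuracy can look acceptable while one intersectional subgroup
# bears most of the errors. The records are illustrative, not the study's data.
from collections import defaultdict

records = [
    # (subgroup, predicted_gender, true_gender)
    ("lighter_male",   "male",   "male"),
    ("lighter_female", "female", "female"),
    ("darker_male",    "male",   "male"),
    ("darker_female",  "male",   "female"),   # misclassification
    ("darker_female",  "female", "female"),
    ("darker_female",  "male",   "female"),   # misclassification
]

errors, totals = defaultdict(int), defaultdict(int)
for subgroup, predicted, true in records:
    totals[subgroup] += 1
    if predicted != true:
        errors[subgroup] += 1

for subgroup in totals:
    print(f"{subgroup}: error rate = {errors[subgroup] / totals[subgroup]:.0%}")
```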


Misidentification caused by insufficiently diverse training datasets can result in false positives, where individuals are wrongly identified as suspects. These false positives can lead to discriminatory treatment and negative experiences [21]. For instance, Porcha Woodruff, a pregnant woman falsely accused of carjacking, was the first known female victim of this phenomenon. She was held for 11 hours, experienced severe physical distress, including a panic attack, and was hospitalized for dehydration after the charges [26]. Unfortunately, this is not a unique case [27].


Furthermore, Schwemmer et al. propose that image recognition systems are an example of the "amplification process," in which they systematically perpetuate existing status inequalities and gender stereotypes, as they categorize men and women with labels that reflect differing statuses [28] [29].


Therefore, despite facial recognition technology's potential to enhance security and aid law enforcement, the biases and inaccuracies inherent in the technology disproportionately affect women by misidentifying them and reinforcing existing status inequalities.


4. Automated Decision-Support and Decision-Making

According to the United Kingdom's Information Commissioner's Office (ICO), “Automated decision-making is the process of deciding through automated means without any human involvement” [30]. According to Richardson, automated decision systems refer to any systems, software, or processes that utilize computational methods to assist or substitute for governmental decisions, judgments, and policy execution, affecting opportunities, access, liberties, and/or safety. These systems may include functions such as predicting, classifying, optimizing, identifying, and/or recommending [31].


Nadeem et al. identify three primary sources of bias in AI-based decision-making systems: design and implementation, institutional, and societal [32]. Specifically, a major source of gender bias in AI systems is biased training datasets, which either under-represent or over-represent certain groups [33]. Another mechanism of bias is the lack of gender diversity within AI development teams. This homogeneity fails to account for the experiences and needs of women, reinforcing existing biases in the design and implementation of AI algorithms [34]. Furthermore, the training of AI systems on historical data perpetuates biases due to societal stereotypes and gender roles, which associate certain professions with specific genders [35]. Mimi Onuoha introduced the concept of "algorithmic violence" to describe how automated decision-making systems and algorithms can cause harm by impeding people's access to fundamental needs [36]. Therefore, if unregulated, the use of automated decision support and decision-making will perpetuate gender biases in the legal system.
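A first, crude screen for the dataset-level source of bias is a representation check. The sketch below is a minimal illustration under the assumption that gender labels and a reference population share are available; the corpus counts and benchmark figures are invented for demonstration.

```python
# A minimal representation check for a training dataset, assuming gender labels
# are available: compare each group's share of the data against a reference
# population share. Large gaps flag the under/over-representation described above.
from collections import Counter

training_labels = ["male"] * 820 + ["female"] * 170 + ["non-binary"] * 10   # hypothetical corpus
reference_share = {"male": 0.49, "female": 0.50, "non-binary": 0.01}        # assumed benchmark

counts = Counter(training_labels)
total = sum(counts.values())

for group, expected in reference_share.items():
    observed = counts.get(group, 0) / total
    print(f"{group}: {observed:.1%} of training data vs {expected:.1%} reference "
          f"(ratio {observed / expected:.2f})")
```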


For example, Amazon developed an unintentionally discriminatory AI recruiting tool in 2014. Since the algorithm was trained on historical hiring data that favoured male candidates, it served as a proxy for humans and discriminated against female applicants for technical roles, perpetuating the existing gender imbalances in the tech industry [37]. This perpetuation shows how algorithmic systems learn and reinforce discriminatory patterns from humans; if used on a wide scale, such systems would limit women's job opportunities and financial independence. Similarly, Facebook job advertisements favoured male candidates for STEM jobs, credit loan algorithms demonstrated gender bias, and many more examples exist [38]. Brooks also highlights demographic biases in custody decision-making algorithms due to historical data, imposing standardized judgments on unique family disputes [39]. This generalization creates a feedback loop: as legal practitioners modify their strategies based on these trends, biases are further entrenched [5].


The increasing integration of AI systems into decision-making processes in the legal system increases the risk of amplifying existing biases, potentially creating a cycle of discrimination [40]. At the individual level, systematic gender bias leads to unfair treatment, which results in significant economic and social disadvantages for women. These impacts then extend to families and contribute to broader economic inequality, which hinders community development. The significant implications of this risk highlight the importance of addressing gender discrimination in automated decision-making systems.


5. Policy Analysis and Recommendations

5.1. The UK General Data Protection Regulation (UK GDPR)

Currently, the United Kingdom has several legal frameworks and pieces of legislation aimed at addressing gender discrimination in AI algorithms. The Equality Act 2010 prohibits discrimination based on protected characteristics, which applies to both human and automated decision-making systems, covering both direct and indirect discrimination [41]. The Human Rights Act 1998 also prohibits discrimination on any grounds [42]. However, both acts are foundational, meaning that they do not explicitly address AI gender discrimination nor lay out clear preventive measures.


The UK General Data Protection Regulation (UK GDPR) is the data privacy and protection law adapted from the European Union General Data Protection Regulation (EU GDPR) after Brexit to suit the UK legal framework. Specifically, Article 22 of the UK GDPR offers safeguards aimed at protecting people from potentially harmful AI decisions that have a legal effect or similarly significant effect on them. Article 22 gives people "the right not to be subject to solely automated decisions, including profiling, which have a legal or similarly significant effect on them." Moreover, Article 22 of the UK GDPR grants individuals the right to contest decisions that have legal or similarly significant effects on them [43]. The article ensures that individuals understand and can interact with decisions that have significant impacts on them. Individuals may also request a human review of AI-driven decisions, which helps to mitigate systematic biases [44]. On the whole, Article 22 emphasises communication, accountability, and transparency [43].


Nevertheless, the UK GDPR framework only acts as a reactive mechanism, as it addresses biases after they arise rather than preventing them from occurring in the first place. Moreover, the UK GDPR lacks safeguards explicitly aimed at protecting against gender discrimination in AI algorithms. This means that AI systems in the UK may continue to operate with gender biases without being thoroughly regulated, as long as these systems do not yield a legal or similarly significant impact on individuals. While these biases might not immediately impact legal or similarly significant decisions, they could still perpetuate gender inequality in a data-driven society. Using job ads as an example, Katyal shows how subtle these biases can be and how powerful these subconscious "nudges" can be, even if they do not immediately change behaviour [8]. This subtlety underscores the need for regulatory requirements that proactively address and mitigate the risk of gender bias in AI algorithms.


5.2. The Data Protection and Digital Information Bill

The Data Protection and Digital Information (DPDI) Bill is a legislative proposal that seeks to update the UK's data protection framework. The general aim of the DPDI Bill is to adjust the UK GDPR into a more business-friendly and deregulated framework, which may weaken some of the protections currently offered by the UK GDPR. Below are three main ways the DPDI Bill could influence the protection that individuals receive from the UK GDPR.


The first way in which the DPDI Bill could influence the protection that individuals receive from the UK GDPR pertains to the modification of data subject rights. The bill limits the rights of individuals concerning solely automated decision-making, particularly when sensitive data is involved [45]. In situations where the UK GDPR currently prohibits it, the bill also permits solely automated decision-making [46]. Moreover, the DPDI Bill proposes to eliminate the requirement for a balancing test that weighs the interests of the data controller against the rights of the individual. This change could lead to increased data processing without fully considering the impacts on the data subjects, ultimately weakening their protection [47].


The second way in which the DPDI Bill may affect individual protections relates to the modification of transparency requirements in data processing. The DPDI Bill modifies the requirements for transparency in data processing, particularly concerning research, archiving, and statistical (RAS) purposes [45]. The bill establishes exemptions from proactive transparency requirements, stating that "the new derogation when further processing data for research, archiving and/or statistical purposes" can be applied in situations where "compliance would either be impossible or would involve a disproportionate effort." This implies that if providing the standard transparency information would be too burdensome for an organization, it might not be required to do so [45].


Additionally, the bill replaces the standard of "manifestly unfounded" requests with that of "vexatious" requests. Organizations now have greater discretion to reject requests that they deem excessively burdensome due to this replacement [45]. Furthermore, the bill narrows the scope of Data Protection Impact Assessments (DPIAs) to require risk documentation only for high-risk processing [45].


The final way that the DPDI Bill might affect individual protection is by changing the integrity obligations that organizations must fulfill. For instance, the DPDI Bill replaces the requirement that businesses designate a statutory Data Protection Officer (DPO) with the requirement that they designate a "senior responsible individual" for high-risk processing activities [45].


Additionally, the bill might affect the ICO's independence. This could potentially lead to less effective oversight and enforcement of data protection rights [47].


In general, the DPDI Bill aims to simplify compliance requirements to create flexibility for businesses, particularly small and medium-sized enterprises (SMEs). Although the government ultimately decided not to proceed with the bill, it signals a future direction in which the UK government attempts to balance innovation with the rights of data subjects.


However, as this trend continues, the crucial step of addressing the root causes of AI gender discrimination remains absent. The future direction of UK legislation should not only create a robust framework to protect against AI bias; legislation should also develop preventive measures to stop AI biases from arising in the first place. To combat algorithmic bias, this paper recommends the establishment of a robust legal framework that clearly outlines the obligations of policymakers, companies, and users in the design and implementation of bias-free AI systems [48].


5.3. Recommendations

Due to the opacity of AI systems and trade secrets, AI can diminish one's sense of responsibility, deferring everything to the technology. For example, the Estonian initiative that uses AI judges is unclear as to who is responsible for correcting certain errors, whether it is the AI system's developers or the judicial system itself [49].



While the existing legal framework in the United Kingdom lays out a foundation to protect against AI discrimination, it lacks both the preventive and reactive mechanisms of a thorough and explicit framework. To effectively address gender bias in AI systems used in the legal system, the assignment of explicit obligations across different stakeholders is necessary. This proposal is consistent with the European Commission's aim to introduce a legal framework for AI that defines the responsibilities of users and providers [50]. This framework should ensure that AI systems are developed, deployed, utilized, and monitored through feedback in ways that prevent bias, promote transparency, and uphold accountability. Below are the proposed obligations for three different stakeholders: policymakers, companies, and users.


5.3.1. Policymakers

Firstly, policymakers need to create specific, bias-aware frameworks for corporations developing AI algorithms used in the judicial system. For example, the UNODC Global Judicial Integrity Network has created award-winning initiatives aimed at eliminating gender bias in AI systems used in legal settings [51]. In order to identify any gender biases in court decisions, policymakers must also fund the creation of an algorithm that makes use of natural language processing (NLP) techniques, as sketched below [48]. These solutions are both feasible, as researchers and organizations have previously developed them; the tools just need to be enhanced and deployed on a wide scale.
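The sketch below is a highly simplified illustration of that kind of NLP screening, not the method of the cited studies: it counts co-occurrences of gendered terms with credibility-undermining descriptors, and the term lists and sample sentence are invented for demonstration. Real systems would rely on far richer language models and validated bias definitions.

```python
# A highly simplified sketch of NLP-based bias screening of court decisions (not
# the method of the cited studies): count how often gendered terms co-occur with
# credibility-undermining descriptors within a window of words.
import re
from collections import Counter

FEMALE_TERMS = {"she", "her", "woman"}
MALE_TERMS   = {"he", "his", "him", "man"}
FLAG_TERMS   = {"emotional", "unreliable", "inconsistent", "provocative"}   # illustrative list

def cooccurrence_counts(text, window=5):
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok in FLAG_TERMS:
            context = set(tokens[max(0, i - window): i + window + 1])
            if FEMALE_TERMS & context:
                counts["female_flagged"] += 1
            if MALE_TERMS & context:
                counts["male_flagged"] += 1
    return counts

decision_text = "The court found her account emotional and her evidence unreliable."
print(cooccurrence_counts(decision_text))   # Counter({'female_flagged': 2})
```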


Second, legislators must guarantee that judges and other legal professionals are aware of the potential for AI bias and possess a necessary degree of AI literacy. This includes setting up educational programs to train legal professionals to comprehend the interpretations of AI-driven judgments, to be aware of their shortcomings due to the reliance on historical data, and to form AI-independent decisions when necessary.


Thirdly, policymakers should implement regulations that explicitly address gender bias in AI algorithms, as this domain is currently lacking in all UK frameworks. A UNESCO publication, for example, offers a range of methods for incorporating gender equality into AI principles [52].


Lastly, policymakers should set up independent bias oversight committees responsible for reviewing and addressing biases that stem from AI algorithms used in the judicial system. When end users report bias to these committees, they should review and rectify the cases, and then follow up with the companies that developed the algorithms and impose reasonable penalties as necessary.


5.3.2. Companies

Firstly, companies must comply with government frameworks and regulations to ensure that all AI algorithms follow uniform standards, and they must be incentivized to make sure those algorithms are bias-free.

Secondly, companies should be required to conduct bias impact assessments on their AI systems. These assessments identify and address potential biases in the AI systems before they are officially deployed in the judicial system, which prevents biases from surfacing in the first place. The development process should integrate tools like those developed by Pinton, Sexton, Tozzi, Sevim, and Baker Gillis, which focus on detecting gender biases in legal contexts [48].


Thirdly, companies should adopt various other bias mitigation strategies, such as the boxing methods proposed by O'Connor and Liu, which aim to identify and mitigate biases before the full deployment of AI systems [25]. Another effective bias prevention strategy is blind testing, which evaluates AI algorithms across different protected groups to locate biases, as in the sketch below. If used prior to system deployment, these methodologies would ensure that AI systems are developed with minimal bias.
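The blind-testing idea can be expressed as a pre-deployment check. The sketch below assumes the evaluator holds a labelled test set with protected attributes; the records and the choice of false-positive rate as the comparison metric are illustrative, not a prescribed standard.

```python
# A sketch of the blind-testing idea described above: score the model on every
# protected group separately and compare false-positive rates before deployment.
# The test records below are hypothetical.
from collections import defaultdict

# (protected_group, predicted_high_risk, actually_reoffended)
test_records = [
    ("female", 1, 0), ("female", 1, 0), ("female", 0, 0), ("female", 1, 1),
    ("male",   0, 0), ("male",   1, 1), ("male",   0, 0), ("male",   1, 0),
]

fp = defaultdict(int)    # predicted high risk but did not reoffend
neg = defaultdict(int)   # all who did not reoffend

for group, predicted, actual in test_records:
    if actual == 0:
        neg[group] += 1
        if predicted == 1:
            fp[group] += 1

rates = {g: fp[g] / neg[g] for g in neg}
print("false-positive rate by group:", rates)
print("max gap:", max(rates.values()) - min(rates.values()))
# A large gap is a signal to retrain or recalibrate before the system is deployed.
```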


Fourthly, companies need to set up intermediary explanation pathways that provide clarity regarding AI decisions. This openness makes it possible for litigants and judges to comprehend the reasoning behind AI-driven decisions, allowing them to interact with and respond to them meaningfully.


Fifthly, if biases are found and shown to be caused by an AI system, companies should be held accountable for damages and penalties. This obligation incentivizes businesses to give equity top priority when developing AI systems.


5.3.3. End users

End users include litigants and legal professionals, who have a responsibility to use AI algorithms in ways that create an impartial judicial environment. Firstly, users need to be made aware of the potential biases that AI algorithms can create and therefore maintain independence from solely automated decisions. This education should include training on how to recognise potential biases and the importance of reporting them. Users need to be incentivized to report and notify bias oversight committees of any biases they identify.


Furthermore, users must be able to demand transparency from AI systems. This includes the freedom to challenge results that seem biased and the right to comprehend the reasoning behind AI decisions. Under the UK GDPR, individuals already have the right to contest solely automated decisions that have a legal or similarly significant effect on them. This right should be expanded to allow users to challenge AI-driven judicial decisions whenever they suspect gender bias.


6. Conclusions


On the one hand, AI has the potential to transform the judicial system and society by improving access to justice, efficiency, and consistency. On the other hand, current AI systems have also perpetuated many societal biases. Nevertheless, it is important to acknowledge that AI systems inherently serve as a proxy for human decisions, trying to predict human intent. Gender discrimination would still persist in a world without AI; it is the implementation of AI algorithms in the legal system that has brought these biases to the forefront of human awareness. If AI algorithms deployed in the legal system can be designed and implemented with minimal bias, this awareness could lead to a revolutionary transformation in our society, where justice is free from discrimination. Current UK frameworks, such as the UK GDPR, act as firm reactive measures when individuals are at potential risk from biased automated decisions. However, designing and implementing AI systems with gender equity in mind requires proactive approaches to prevent bias from the outset. Current UK legal frameworks fall short of explicitly addressing gender discrimination in AI applications and of assigning clear obligations across different stakeholders, which is what this research proposes.


This research is not without its limitations. The focus on frameworks in the UK may not comprehensively reflect regulations and legislation across the globe. Additionally, due to the limited availability of up-to-date resources, the research lacks sufficient empirical data and case studies in the judiciary that could illustrate how AI's gender bias impacts the lives of specific individuals, communities, and societies. The proposed framework for addressing gender bias in AI may also not provide detailed implementation strategies or consider the practical challenges of enforcing such measures within specific legal systems. Future research should incorporate specific case studies assessing the impact of gender bias in AI through qualitative or quantitative research. Future studies can also focus on the effectiveness of existing legal frameworks in addressing gender bias in AI and carry out experiments to test the effectiveness and feasibility of proposed frameworks. By addressing these gaps, we can work toward a future where AI serves as a tool for justice free from bias.


References

[1] Gender equality. (2013). https://www.judiciary.uk/wp-content/uploads/JCO/Documents/judicial college/ETBB_Gender__finalised_.pdf

[2] Baker Gillis, N. (2021, August 1). Sexism in the Judiciary: The Importance of Bias Definition in NLP and In Our Courts. ACLWeb; Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.gebnlp-1.6

[3] Barysė, D., & Sarel, R. (2023). Algorithms in the court: does it matter which part of the judicial decision-making is automated? Artificial Intelligence and Law, 32, 117–146. https://doi.org/10.1007/s10506-022-09343-6

[4] Belenguer, L. (2022). AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and Ethics, 2(2). https://doi.org/10.1007/s43681-022-00138-8

[5] Zafar, A. (2024). Balancing the scale: navigating ethical and practical challenges of artificial intelligence (AI) integration in legal practices. Discover Artificial Intelligence, 4(1). https://doi.org/10.1007/s44163-024-00121-8

[6] UNESCO. (2024). Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. Unesco.org. https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes

[7] Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification *. Proceedings of Machine Learning Research, 81(81), 77–91. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

[8] Katyal, S. K. (2020). Private Accountability in an Age of Artificial Intelligence. In W. Barfield (Ed.), The Cambridge Handbook of the Law of Algorithms (pp. 47–106). chapter, Cambridge: Cambridge University Press.

[9] Radicalisation Awareness Network. (2023). The missing gender-dimension in risk assessment Key outcomes. https://home-affairs.ec.europa.eu/system/files/2024-01/ran_missing_gender-dimension_in_risk_assessment_14112023_en.pdf

[10] van Eijk, G. (2016). Socioeconomic marginality in sentencing: The built-in bias in risk assessment tools and the reproduction of social inequality. Punishment & Society, 19(4), 463–481. https://doi.org/10.1177/1462474516666282

[11] Starr, S. B. (2015). The New Profiling. Federal Sentencing Reporter, 27(4), 229–236. https://doi.org/10.1525/fsr.2015.27.4.229

[12] Primus, R. A. (2003). Equal Protection and Disparate Impact: Round Three. Harvard Law Review, 117(2), 493. https://doi.org/10.2307/3651947

[13] US Supreme Court . (1976). Craig v. Boren, 429 U.S. 190. Justia Law. https://supreme.justia.com/cases/federal/us/429/190/

[14] Drösser, C. (2017, December 22). In Order Not to Discriminate, We Might Have to Discriminate. Simons Institute for the Theory of Computing. https://www.droesser.net/en/2017/12/

[15] Eckhouse, L., Lum, K., Conti-Cook, C., & Ciccolini, J. (2018). Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment. Criminal Justice and Behavior, 46(2), 185–209. https://doi.org/10.1177/0093854818811379

[16] Skeem, J. L., Monahan, J., & Lowenkamp, C. T. (2016). Gender, Risk Assessment, and Sanctioning: The Cost of Treating Women Like Men. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2718460

[17] Kim, P. (2022, October). Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action. California Law Review. https://www.californialawreview.org/print/race-aware-algorithms-fairness-nondiscrimination-and-affirmative-action

[18] Directorate-General for Migration and Home Affairs. (2024, May 27). Improving risk assessment: Accounting for gender, May 2024. Migration and Home Affairs. https://home-affairs.ec.europa.eu/whats-new/publications/improving-risk-assessment-accounting-gender-may-2024_en

[19] Women and Girls in the Justice System | Overview. (2020, August 13). Office of Justice Programs. https://www.ojp.gov/feature/women-and-girls-justice-system/overview#overview

[20] Katyal, S., & Jung, J. (2021b). The Gender Panopticon: Artificial Intelligence, Gender, and Design Justice. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3760098

[21] Waelen, R. A. (2022). The struggle for recognition in the age of facial recognition technology. AI and Ethics, 3(1). https://doi.org/10.1007/s43681-022-00146-8

[22] U.S. Government Accountability Office. (2024, March 8). Facial Recognition Technology: Federal Law Enforcement Agency Efforts Related to Civil Rights and Training | U.S. GAO. Www.gao.gov. https://www.gao.gov/products/gao-24-107372

[23] Lin, S.-H. (2000). An Introduction to Face Recognition Technology. Informing Science: The International Journal of an Emerging Transdiscipline, 3, 001–007. https://doi.org/10.28945/569

[24] Schuetz, P. (2021). Fly in the Face of Bias: Algorithmic Bias in Law Enforcement’s Facial Recognition Technology and the Need for an Adaptive Legal Framework. Minnesota Journal of Law & Inequality, 39(1), 221–254. https://doi.org/10.24926/25730037.626

[25] O’Connor, S., & Liu, H. (2023). Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities. AI & SOCIETY, 39(4), 2045–2057. https://doi.org/10.1007/s00146-023-01675-4

[26] Hill, K. (2023, August 6). Eight Months Pregnant and Arrested After False Facial Recognition Match. The New York Times. https://www.nytimes.com/2023/08/06/business/facial-recognition-false-arrest.html

[27] Clayton, J. (2024, May 25). “I was misidentified as shoplifter by facial recognition tech.” BBC News. https://www.bbc.co.uk/news/technology-69055945

[28] Charles, M. (2012). Cecilia L. Ridgeway: Framed by Gender: How Gender Inequality Persists in the Modern World. European Sociological Review, 29(2), 408–410. https://doi.org/10.1093/esr/jcs074

[29] Schwemmer, C., Knight, C., Bello-Pardo, E. D., Oklobdzija, S., Schoonvelde, M., & Lockhart, J. W. (2020). Diagnosing Gender Bias in Image Recognition Systems. Socius: Sociological Research for a Dynamic World, 6(6), 237802312096717. https://doi.org/10.1177/2378023120967171

[31] Richardson, R. (2021). Defining and Demystifying Automated Decision Systems. Social Science Research Network, 81(3).

[32] Nadeem, A., Marjanovic, O., & Abedin, B. (2022). Gender bias in AI-based decision-making systems: a systematic literature review. Australasian Journal of Information Systems, 26(26). https://doi.org/10.3127/ajis.v26i0.3835

[33] Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2), 205395171774353. https://doi.org/10.1177/2053951717743530

[34] Johnson, K. N. (2019, November 14). Automating the Risk of Bias. Ssrn.com. https://ssrn.com/abstract=3486723

[35] Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder‐Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., & Broelemann, K. (2020). Bias in Data‐driven Artificial Intelligence systems—An Introductory Survey. WIREs Data Mining and Knowledge Discovery, 10(3). https://doi.org/10.1002/widm.1356

[36] Onuoha, M. (2021, November 9). Notes on Algorithmic Violence. GitHub. https://github.com/MimiOnuoha/On-Algorithmic-Violence

[37] Dastin, J. (2018, October 11). Insight - Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/

[38] Lambrecht, A., & Tucker, C. (2019). Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads. Management Science, 65(7), 2966–2981.

[39] Brooks, W. (2022). Artificial Bias: The Ethical Concerns of AI-Driven Dispute Resolution in Family Matters. Journal of Dispute Resolution, 2022(2). https://scholarship.law.missouri.edu/jdr/vol2022/iss2/9

[40] Altman, M., Wood, A., & Vayena, E. (2018). A Harm-Reduction Framework for Algorithmic Fairness. IEEE Security & Privacy, 16(3), 34–45. https://doi.org/10.1109/msp.2018.2701149

[41] GOV.UK. (2010). Equality Act 2010. Legislation.gov.uk; Gov.uk. https://www.legislation.gov.uk/ukpga/2010/15/contents

[42] Human Rights Act 1998. (1998). Legislation.gov.uk. https://www.legislation.gov.uk/ukpga/1998/42/contents

[43] ICO. (2023, May 19). What is the impact of Article 22 of the UK GDPR on fairness? Ico.org.uk. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/what-is-the-impact-of-article-22-of-the-uk-gdpr-on-fairness/

[44] ICO. (2023, May 19). What about fairness, bias and discrimination? Ico.org.uk. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/what-about-fairness-bias-and-discrimination/

[45] Erdos, D. (2022). A Bill for a Change? Analysing the UK Government’s Statutory Proposals on the Content of Data Protection and Electronic Privacy. SSRN Electronic Journal, 13. https://doi.org/10.2139/ssrn.4212420

[46] How the new Data Bill waters down protections. (2023, November 28). Public Law Project. https://publiclawproject.org.uk/resources/how-the-new-data-bill-waters-down-protections/

[47] McCullagh, K. (2023). Data Protection and Digital Sovereignty Post-Brexit. Bloomsburycollections.com. https://www.bloomsburycollections.com/monograph-detail?docid=b-9781509966516&tocid=b-9781509966516-chapter2

[48] Benatti, R., Severi, F., Avila, S., & Colombini, E. L. (2024). Gender Bias Detection in Court Decisions: A Brazilian Case Study. 2022 ACM Conference on Fairness, Accountability, and Transparency, 67(3), 746–763. https://doi.org/10.1145/3630106.3658937

[49] Bell, F., Bennett Moses, L., Legg, M., Silove, J., & Zalnieriute, M. (2022, June 14). AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators. Papers.ssrn.com. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4162985

[50] Di Noia, T., Tintarev, N., Fatourou, P., & Schedl, M. (2022). Recommender systems under European AI regulations. Communications of the ACM, 65(4), 69–73. https://doi.org/10.1145/3512728

[51] Award-winning project on preventing gender bias in AI systems used in judiciaries. (2021). United Nations : Office on Drugs and Crime. https://www.unodc.org/unodc/en/gender/news/award-winning-project-on-preventing-gender-bias-in-ai-systems-used-in-judiciaries.html

[52] UNESCO. (2020). Artificial intelligence and gender equality: Key findings of UNESCO’s global dialogue. https://unesdoc.unesco.org/ark:/48223/pf0000374174
