
Evaluation of New York City Local Law 144-21 on AI Hiring Policy

Original Article by Siri Jonnada

AI has spread into many areas of society, particularly wherever it can streamline decision making. Hiring is one of those areas: companies have been integrating AI into the hiring process through automated employment decision tools (AEDTs). Behind the algorithms in these AEDTs, however, can lie biases that discriminate on the basis of race, gender, and membership in other marginalized groups. To combat this, New York City enacted Local Law 144-21 in 2021, with enforcement beginning in 2023; it was the first US law to require companies using AEDTs to conduct bias audits and publicly disclose the tools’ impact on protected groups. This legislation is a pioneering step toward greater AI accountability in hiring. However, the law still falls short in several areas and needs improvement, such as expanding the scope of AI technologies covered, adding more criteria to bias auditing, and increasing transparency further.


AI can streamline many steps of the hiring process: screening resumes, ranking candidates, scheduling interviews, and even analyzing video interviews for candidates’ tone, facial expressions, and word choice. AEDTs appeal to organizations because manual review can take weeks or even months, while hiring algorithms easily rank and sift through resumes based on experience, skills, and other criteria the company considers (Mearian 1). However, AI is known to carry a plethora of biases and flaws. Often these flaws are not even known to the company itself, so it is crucial that an AI system be screened, or independently bias audited, before use. New York City Local Law 144-21 requires organizations using AEDTs in employment decisions to have a bias audit performed by an independent auditor (“NYC Local Law 144-21 and Algorithmic Bias”). A bias audit is an evaluation of an AI system that detects potential biases toward groups of people, revealing whether the system unintentionally favors or disadvantages a certain group.
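To make the idea of a bias audit concrete: the law’s implementing rules require audits to report, for each demographic category, an “impact ratio” — that category’s selection rate divided by the selection rate of the most-selected category. A minimal sketch of that calculation follows; the group names and counts are invented for illustration and are not drawn from any real audit.

```python
# Sketch of the "impact ratio" that Local Law 144-21 bias audits report:
# each category's selection rate divided by the highest selection rate.
# All group names and counts below are hypothetical.

def impact_ratios(selected: dict, applied: dict) -> dict:
    """Map each category to (its selection rate) / (best selection rate)."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Invented example: group_a selected at 30%, group_b at 20%.
applied = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 30}
print(impact_ratios(selected, applied))
```

A low impact ratio for a group (well below 1.0) is exactly the kind of disparity an audit would flag as a potential disadvantage to that group.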


This law prohibits organizations from using AEDTs unless they meet a certain set of criteria: “the tool is audited annually; the employer publishes a public summary of the audit; and applicants and employees are provided certain notices by employers who are subject to screening by the AEDT” (“SolasAI Targets NYC Local Law 144 Functionality”). The law also mandates that employers inform applicants that the company will use AEDTs during the hiring process. Law 144-21 protects not just residents of New York City but also anyone outside the city applying for a job within it. Companies that do not comply with these regulations “will face penalties of $375 for a first violation, $1,350 for a second violation and $1,500 for a third or more violations. Each day an automated employment decision tool is used in violation of the law will be considered another separate violation” (Mearian 3). This gives companies more incentive to follow the law and uphold its regulations.


This new legislation has brought many benefits to the citizens of New York City. The law has mainly helped “weed out the use of AI tools that might perpetuate biases, a concern that has plagued AI tools in the past” (Saric). This is crucial at a time when AI is still an emerging technology that lacks proper regulation. The legislation focuses on race and gender discrimination, two areas where AI has historically been discriminatory, so it can ease the minds of job applicants by removing one major worry. Transparency is also a key aspect of legislation regulating AI; according to the CTO of Talent Select AI, “Candidates should have the ability to know what data is being collected and how it is being used” (Mearian 3). Mandating that companies disclose when AEDTs are being used not only lets applicants prepare more adequately but also ensures that they are aware AI is collecting their data. These benefits highlight how this piece of legislation is pioneering US regulation aimed at AEDTs.


Although this legislation is a step in the right direction, it still has many flaws and drawbacks. One issue is that the law only covers a subset of algorithms. For example, AI tools that transcribe the text of audio and video interviews are excluded from the law, even though speech recognition tools have a history of bias issues (Wiggers); there is a clear need for legislation to regulate these uses of AI in employment as well. Additionally, the law only addresses racial and gender discrimination, not other forms of discrimination such as bias against disability and age (NYCLU). The law itself is also vague: “The original version of the law was narrowed down in revisions so that it ultimately only applies to ‘tools that almost completely replace human decision making processes’” (Saric). This vague definition of what must be audited has an obvious loophole: as long as there is some human element in the decision-making process, no independent audit is technically required.


Many also argue that “the law should have put more emphasis on requiring certain levels of ‘explainability’ in the AI systems that are used to make hiring decisions… as AI systems continue to become increasingly complex in nature, there should be some accountability that the AI technology developers or vendors are able to explain how their automated hiring decisions are made” (Mearian 3). Although the law improves AI transparency in the workforce, there is a clear need for regulations that push transparency further. People applying for a job should know the criteria an AI is judging them on, and, as mentioned, more emphasis on explainability is needed to maintain fairness in the hiring process. Given these flaws, it is clear that AI regulation in employment still has a long way to go.


It is crucial that New York City strengthen and clarify Local Law 144-21 to regulate the use of AEDTs more effectively. First, the law must expand its scope to cover all forms of AI used in the hiring process. This means the law’s vague definition of “tools that almost completely replace human decision making processes” must be changed since, as previously mentioned, AI tools such as video and audio transcription are also known to carry biases. All AEDTs that assist organizations in hiring should therefore fall under the law. Second, the law must broaden the protected categories covered by its bias audits, which currently include only race and gender. Although race and gender are major areas where AI bias appears, other biases, such as those based on age and disability, also unfairly and disproportionately affect groups of people (NYCLU). The bias-auditing criteria must therefore be expanded to protect more people.


Additionally, the law should mandate that companies disclose the criteria on which the AI bases its evaluations in order to further improve transparency (Mearian 3). This would allow auditors, regulators, and applicants to better understand how hiring decisions are made and to identify potential sources of unfairness. Implementing these changes would strengthen Local Law 144-21 by eliminating vagueness, addressing more forms of bias, and creating a more accountable and transparent hiring process.


New York City Local Law 144-21 is a groundbreaking piece of legislation that is the first step in bringing accountability and transparency to AI-driven hiring practices in the United States. By requiring independent bias audits and public disclosure, the law is crucial in mitigating discrimination in automated employment decision tools and providing job applicants with greater insight into how AI shapes hiring outcomes. However, as this evaluation has demonstrated, the law is not without its shortcomings: its narrow scope, vague definitions, and limited focus on only certain forms of bias leave significant gaps. Expanding the law to cover all AI tools used in hiring, addressing additional types of bias such as age and disability, and requiring greater explainability of the AEDTs’ decision-making would strengthen its effectiveness. Local Law 144-21 is a promising first step, but its full potential will only be realized through continuous improvement, rigorous enforcement, and a commitment to inclusive, transparent, and accountable AI in employment.



Works Cited


“SolasAI Targets NYC Local Law 144 Functionality with Update to Bias Detection & Mitigation Solution.” PR Newswire US, June 2023.


Mearian, L. “NYC Law Governing AI-Based Hiring Tools Goes Live.” Computerworld, 2023.


Wiggers, Kyle. “NYC’s Anti-bias Law for Hiring Algorithms Goes Into Effect.” TechCrunch, 5 July 2023.


NYCLU. “Biased Algorithms Are Deciding Who Gets Hired. We’re Not Doing Enough to Stop Them.” NYCLU, 16 May 2024.


“NYC Local Law 144-21 and Algorithmic Bias | Deloitte US.” Deloitte, 11 July 2025.


Saric, Ivana. “NYC Law Promises to Regulate AI in Hiring, but Leaves Crucial Gaps.” AXIOS, 6 July 2023.
