
The Equity Blind Spot: How the EU AI Act Fails to Protect the Most Vulnerable


Original Article by Albert Parappuzha

Introduction

The European Union's Artificial Intelligence Act (AI Act) is rightly described as the first of its kind in the world: the first attempt to take on the monumental task of regulating AI according to its perceived risk. It aims to establish a trustworthy ecosystem for developing AI within the EU. Nevertheless, there is one blind spot in its otherwise holistic and admirable framework: a lack of sufficient attention to socioeconomic equity. Though the Act works hard to safeguard against direct, intentional harm and violations of fundamental rights, its technocratic, compliance-focused strategy offers no remedy for the systemic algorithmic biases that disproportionately disadvantage marginalized groups. By favouring ex-ante regulation of products over ex-post auditing of outcomes, and by imposing compliance costs that favour large corporate actors, the AI Act, despite its good intentions, incentivizes the further entrenchment of the very inequities it is meant to avert.


A Landmark Framework Built on Risk

Understanding the Act's weaknesses is impossible without first recognising its strengths and mechanics. The EU AI Act rests on a four-tier risk pyramid: Unacceptable Risk (banned practices), High-Risk (strict requirements), Limited Risk (transparency obligations), and Minimal Risk (no obligations). The core of the regulation concerns high-risk systems, which encompass AI in critical infrastructure, education, employment, and law enforcement. Providers of these systems must undergo conformity assessments and maintain risk management systems, data governance practices, and detailed documentation before their products can be placed on the EU market [AI Act, Art. 43]. This ex-ante (before-market) strategy is, in theory, the Act's best feature: it tries to avert harm before it takes place.
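
To make these mechanics concrete, here is a minimal, purely illustrative Python sketch. The purpose-to-tier mapping and the obligation lists are simplified assumptions for exposition, not the Act's legal text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned practice"
    HIGH = "strict ex-ante requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no obligations"

# Simplified, illustrative mapping of intended purposes to tiers.
# The real Act enumerates these categories in its legal text; this list is not exhaustive.
PURPOSE_TO_TIER = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "recruitment / CV screening": RiskTier.HIGH,
    "exam scoring in education": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

# Illustrative ex-ante obligations attached to the high-risk tier.
HIGH_RISK_OBLIGATIONS = [
    "conformity assessment before market entry",
    "risk management system",
    "data governance",
    "technical documentation",
]

def obligations_for(purpose: str) -> list[str]:
    """Return the (simplified) obligations triggered by a declared intended purpose."""
    tier = PURPOSE_TO_TIER.get(purpose, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited: may not be placed on the EU market"]
    if tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if tier is RiskTier.LIMITED:
        return ["disclose to users that they are interacting with AI"]
    return []

print(obligations_for("recruitment / CV screening"))
```

Note that the lookup keys only on the declared intended purpose; the deployment context and the affected population never enter the classification, which is precisely the limitation examined in the next section.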


The Act explicitly prohibits practices such as social scoring by governments and real-time biometric identification in public spaces (save for a few narrow exceptions), a direct check on the tools of mass surveillance. Its horizontal application across all sectors is likewise an ambitious move to close regulatory loopholes. The Act's very existence shapes the international debate, capitalizing on the “Brussels effect” to set a de facto global standard and pushing the world toward a more rights-based AI regulatory framework.


The Illusion of Technocratic Neutrality

The inherent weakness of this risk-based model is its premise that harm can be predicted in advance and countered with technical documentation and internal corporate checks. This creates an illusion of technocratic neutrality that ignores the fact that algorithmic bias is social in nature.


First, the designation “high-risk” is fixed on the intended purpose of an AI system as enumerated in an annex (e.g., “AI used in recruitment”). It fails to account adequately for the context of deployment or the population on which the system is used. A resume-screening algorithm may cause little harm in a relatively homogeneous country, yet automate and amplify hiring discrimination when deployed in a diverse, multi-ethnic member state. The bias lies not in the product itself but in its encounter with biased data and social structures. Leaving providers to curb this through self-assessment [AI Act, Art. 9] amounts to asking a company to police its own discrimination, a clear conflict of interest in the absence of independent, societal-level oversight.


Second, the Act's transparency provisions (Article 52) are often reduced to a right to an explanation. But telling citizens that a machine denied them a loan, and why, is no substitute for justice. It does little to compensate for the material harm suffered or to challenge the biased data patterns on which the algorithm was trained. The equity concern is not just transparency but redress: the burden of proof and the cost of challenge remain overwhelmingly on the individual, who lacks the resources of the corporation or government that deployed the system.


The Structural Inequity of Compliance

Beyond its technical gaps, the AI Act creates structural economic barriers that work against equity. The substantial costs of conformity assessments, continuous monitoring, and reporting are easily absorbed by technology giants like Google or Meta; for a small startup or a non-profit developing a bias-auditing tool, they are prohibitive.


This has two negative consequences:


  1. It stifles innovation from the very groups—diverse founders, grassroots organizations, academic spinoffs—most likely to build equitable AI solutions tailored to community needs.

  2. It consolidates the AI market into a small number of large, predominantly non-EU corporations, which now enjoy a regulatory moat against smaller players.



A regulation meant to protect citizens thus ends up protecting established market power. This economic gatekeeping ensures that the future of AI in Europe will be built by, and for the interests of, a handful of powerful actors, systematically excluding diverse perspectives from the design process.


The Quantifiable Cost of Exclusion

The dangers of the AI Act's approach are not merely hypothetical; alarming empirical evidence already demonstrates the urgency of designing for equity. A 2019 report by the National Institute of Standards and Technology (NIST) found that facial recognition technologies, a key area within the Act's high-risk category, produced false positive rates for people of color and women that were up to ten to one hundred times higher than for other demographic groups [Grother, Ngan, & Hanaoka, 2019]. This is not a technical glitch but a systemic failure that leads to false arrests and heightened surveillance of minority communities.
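
To illustrate what a disparity of that magnitude means in practice, here is a minimal sketch of how an outcome audit could compute per-group false positive rates and the ratio between them. The counts are invented for illustration; they are not NIST's data:

```python
# Illustrative only: the counts below are invented, not NIST's published figures.
# A false positive here means the system wrongly matched two different people.
observations = {
    # group: (false_positives, true_negatives)
    "group_a": (4, 99_996),
    "group_b": (220, 99_780),
}

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): how often non-matching pairs are wrongly matched."""
    return false_positives / (false_positives + true_negatives)

rates = {g: false_positive_rate(fp, tn) for g, (fp, tn) in observations.items()}
reference = rates["group_a"]

for group, rate in rates.items():
    print(f"{group}: FPR = {rate:.5f}, ratio vs. group_a = {rate / reference:.1f}x")

# With these invented numbers, group_b's FPR is roughly 55x the reference group's,
# i.e. the kind of 10x-100x disparity NIST reported for some demographics.
```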


Moreover, the economic barrier to entry is not hypothetical either. A full conformity assessment for a single high-risk AI system can cost from EUR 50,000 to more than EUR 400,000 [Bertuzzi, 2022], a sum a multinational corporation can absorb but which would bankrupt a small startup founded by immigrants or minorities to build fairer alternatives. This financial moat translates directly into a homogeneity of perspective. The AI Now Institute has documented that AI researchers and engineers remain overwhelmingly male and are drawn from a narrow range of socioeconomic backgrounds [West, Whittaker, & Crawford, 2019]. A regulatory framework that implicitly bends in favour of these very large, homogeneous actors guarantees that the biases of this small group, both conscious and unconscious, will be baked into the technologies that shape European society for decades to come. The AI Act, as it currently stands, fails to break this cycle; on the contrary, it threatens to subsidize it under a veneer of regulatory legitimacy.


Toward a Truly Equitable Framework

The EU AI Act is not beyond repair, but realising its equitable potential requires moving beyond a technocratic checklist. Three critical amendments are needed:


  1. Mandatory, Independent Equity Audits: In addition to internal conformity checks, high-risk systems must be subject to mandatory, third-party audits that specifically assess disparate impact across racial, gender, and socioeconomic lines (a minimal sketch of such a check follows this list). These audits must be public to allow societal monitoring.

  2. A Strengthened Role for Civil Society: The Act must establish a formal role for civil society organizations and community representatives in the standard-setting and monitoring process, ensuring the voices of the most impacted communities are heard directly, not filtered through corporate or regulatory interpretations.

  3. An Equitable Innovation Fund: A portion of the fines collected for violations should be directed into a fund that provides grants to startups, researchers, and NGOs from underrepresented backgrounds to develop ethical AI and cover the costs of compliance, fostering a more diverse ecosystem.
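
As a minimal sketch of the disparate-impact check proposed in the first amendment, the snippet below applies the widely used “four-fifths rule” heuristic to selection rates. The figures, groups, and threshold are illustrative assumptions, not requirements drawn from the Act:

```python
# Hypothetical audit of a hiring tool's selection rates by group.
# All figures are invented for illustration.
selection_counts = {
    # group: (applicants, selected)
    "group_a": (1_000, 200),
    "group_b": (1_000, 90),
}

FOUR_FIFTHS_THRESHOLD = 0.8  # common heuristic, used here as an assumed audit criterion

def adverse_impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Selection rate of each group divided by the highest group's selection rate."""
    rates = {g: selected / applicants for g, (applicants, selected) in counts.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

for group, ratio in adverse_impact_ratios(selection_counts).items():
    flag = "FLAG: possible disparate impact" if ratio < FOUR_FIFTHS_THRESHOLD else "ok"
    print(f"{group}: adverse impact ratio = {ratio:.2f} -> {flag}")

# Here group_b's selection rate is 45% of group_a's, well below the 0.8 threshold,
# so a public audit report would flag the system for review.
```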



Conclusion

The EU AI Act is a necessary and historic first step in the right direction. It is a powerful testament to the belief that technology must be governed by democratic values. However, by prioritising product safety over societal outcomes and corporate compliance over community redress, it treats equity as a secondary concern rather than a central pillar. Without critical changes, it risks creating a two-tier system in which large corporations comply with the letter of the law while the most vulnerable citizens remain exposed to algorithmic discrimination.


Preparing fully for the societal impact of AI requires laws that are not just risk-aware but explicitly justice-seeking. The EU has built the frame; now it must paint in the colors of equity.



Works Cited


  1. Bertuzzi, L. (2022, November 24). EU’s AI Act compliance could cost large companies over €400,000, study finds. Euractiv. Retrieved September 19, 2025.

  2. European Parliament and the Council of the European Union. (2024). Regulation (EU) 2024/… on harmonised rules on artificial intelligence (AI Act).

  3. Grother, P., Ngan, M., & Hanaoka, K. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (NISTIR 8280). National Institute of Standards and Technology.

  4. West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute.
