The EU AI Act’s Transparency Gap
- Alina Huang
- Nov 18
- 6 min read
Original Article by Ashwin Kirubakaran
1.1 Introduction
The European Union’s Artificial Intelligence Act, which entered into force in August 2024, represents the world’s first comprehensive legal framework for regulating artificial intelligence [1]. As AI systems increasingly determine access to employment, healthcare, housing, and criminal justice outcomes, the Act’s risk-based approach attempts to balance innovation with rights protection. However, a critical analysis reveals significant weaknesses in the Act’s transparency requirements for high-stakes decision-making systems, creating enforcement gaps that undermine its stated objective of ensuring trustworthy AI.
This essay examines the EU AI Act’s strengths and limitations, focusing specifically on its approach to algorithmic transparency and explainability. While the Act establishes important precedents for AI governance, its reliance on self-assessment mechanisms and vague transparency standards creates opportunities for compliance without meaningful accountability.
1.2 Strengths of the EU AI Act
The Act’s most significant strength lies in its systematic categorization of AI applications based on risk levels. The four-tier system of minimal, limited, high, and unacceptable risk provides clear regulatory boundaries while avoiding overly prescriptive technical requirements that could stifle innovation [2]. This approach recognizes that AI governance must be proportionate to potential harm, concentrating regulatory resources on systems with the greatest societal impact.
The prohibited practices outlined in Article 5 demonstrate the Act’s commitment to fundamental rights protection. The ban on AI systems using subliminal techniques, exploiting vulnerabilities of specific groups, or enabling social scoring by public authorities establishes clear red lines that reflect European values [3]. These prohibitions address some of the most concerning applications of AI technology while maintaining space for beneficial uses.
The Act creates new institutional frameworks, including national competent authorities within member states, a European AI Office within the Commission, and the European Artificial Intelligence Board, establishing dedicated expertise for AI governance [4]. This institutional innovation addresses the technical knowledge gap that has plagued traditional regulators attempting to oversee algorithmic systems.
Most importantly, the Act positions Europe as a global standard-setter through the “Brussels Effect”: the tendency for EU regulations to become de facto global standards because of the EU’s market size and regulatory stringency [5]. Major technology companies are already adapting their global practices to comply with EU requirements, extending the Act’s influence far beyond European borders.
1.3 Critical Weaknesses in the EU AI Act
The Act’s most significant weakness lies in its vague and insufficient transparency requirements. Article 13 requires that high-risk AI systems be “designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately” [7]. However, this language provides no technical specification of what constitutes “sufficient transparency” or “appropriate interpretation.”
This ambiguity creates what scholars call the “compliance gap”, which is the difference between regulatory intent and implementation reality [8]. Without specific technical standards, organizations can satisfy legal requirements through minimal disclosures that provide little meaningful insight into algorithmic decision-making processes. Current industry practices suggest that companies interpret these requirements narrowly, providing generic explanations that fail to enable accountability.
The Act also relies heavily on conformity assessment procedures that allow providers to self-certify compliance with regulatory requirements. While Article 43 establishes some third-party assessment requirements, the majority of high-risk systems undergo internal evaluation processes with minimal external oversight [9].
This self-assessment approach proves particularly problematic for transparency requirements, where organizations have strong incentives to minimize disclosure while claiming compliance. The technical complexity of modern AI systems makes superficial compliance difficult for regulators to detect, creating opportunities for what researchers call “ethics washing” [10].
Furthermore, the Act’s definition of “high-risk” AI systems contains significant gaps that exclude important applications. The focus on specific use cases listed in Annex III means that novel applications of AI in consequential contexts may escape regulation until the Annex is updated through lengthy legislative processes [11]. Additionally, the Act’s emphasis on “placing on the market” as a trigger for regulation creates challenges for AI systems developed and deployed within single organizations. Internal hiring algorithms, employee monitoring systems, and resource allocation tools may fall outside the Act’s scope despite having significant impacts on individual rights and opportunities.
1.4 Proposed Amendments
The Act should be amended to require specific technical documentation for high-risk AI systems, including algorithmic architecture specifications with mathematical formulations, training data provenance and demographic composition analysis, validation methodology and performance metrics disaggregated by protected demographic groups, and feature importance rankings with statistical confidence intervals and uncertainty quantification.
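To illustrate what such documentation could contain in practice, the sketch below is a minimal, assumption-laden example: the model, the synthetic data, and the protected-attribute labels are placeholders, not anything the Act specifies. It reports permutation-based feature importance with uncertainty intervals and accuracy disaggregated by group.

```python
# Minimal sketch (not a prescribed method): illustrative documentation outputs
# for a hypothetical high-risk model, using synthetic data and placeholder groups.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                 # hypothetical applicant features
group = rng.integers(0, 2, size=2000)          # hypothetical protected attribute
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Feature importance rankings with uncertainty: repeated permutations give a
# distribution per feature, so importance is reported with an interval rather
# than a single point estimate.
imp = permutation_importance(model, X_te, y_te, n_repeats=50, random_state=0)
for i in np.argsort(-imp.importances_mean):
    lo, hi = np.percentile(imp.importances[i], [2.5, 97.5])
    print(f"feature {i}: importance {imp.importances_mean[i]:.3f} "
          f"(95% interval {lo:.3f} to {hi:.3f})")

# Performance metrics disaggregated by protected group.
for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: accuracy {model.score(X_te[mask], y_te[mask]):.3f} "
          f"over {mask.sum()} cases")
```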
Article 13 should be strengthened to require individualized explanations for all consequential decisions made by high-risk AI systems. These explanations must satisfy technical criteria including counterfactual reasoning showing how input changes would alter outcomes, feature contribution scores using validated methods like SHAP or LIME with uncertainty bounds, decision pathway visualization for interpretable components, and consistency checks ensuring local explanations align with global system behavior.
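As a concrete, though deliberately simplified, illustration of the counterfactual criterion, the sketch below searches for the smallest single-feature change that flips a hypothetical model's decision. The model, features, and step sizes are assumptions; a production system would pair this with validated contribution methods such as SHAP or LIME rather than this toy search.

```python
# Toy counterfactual sketch: find the smallest single-feature change that flips
# a hypothetical binary decision. Model, features, and step size are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))            # e.g. income, debt ratio, tenure (assumed)
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=1000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def single_feature_counterfactual(x, feature, step=0.05, max_steps=200):
    """Walk one feature up or down until the predicted class flips."""
    base = model.predict(x.reshape(1, -1))[0]
    for direction in (+1, -1):
        cand = x.copy()
        for _ in range(max_steps):
            cand[feature] += direction * step
            if model.predict(cand.reshape(1, -1))[0] != base:
                return cand[feature] - x[feature]   # change needed to flip the outcome
    return None                                     # no flip found within the search range

applicant = X[0].copy()
for f in range(X.shape[1]):
    delta = single_feature_counterfactual(applicant, f)
    if delta is not None:
        print(f"changing feature {f} by {delta:+.2f} would flip the decision")
```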
The amendment should establish technical standards for explanation quality, preventing superficial compliance through meaningless or misleading explanations. Recent advances in explainable AI make these requirements technically feasible without sacrificing system performance [12].
The Act should mandate annual algorithmic impact assessments for deployed high-risk systems, examining differential impact patterns across protected demographic groups using established fairness metrics, temporal stability analysis showing how algorithmic behavior evolves over time, adversarial testing results demonstrating system resilience to gaming and manipulation, and intersectional analysis recognizing complex patterns of discrimination.
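As an indication of what such fairness reporting could involve, the sketch below computes per-group selection rates, the demographic parity difference, and the disparate impact ratio on hypothetical decision data. The data, group labels, and the 0.8 “four-fifths” threshold are illustrative assumptions, not thresholds the Act prescribes.

```python
# Illustrative fairness audit over hypothetical decisions: per-group selection
# rates, demographic parity difference, and disparate impact ratio.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=5000)                             # hypothetical protected attribute
favourable = rng.random(5000) < np.where(group == 0, 0.45, 0.38)  # assumed outcome rates

rates = {g: favourable[group == g].mean() for g in (0, 1)}
parity_diff = abs(rates[0] - rates[1])
impact_ratio = min(rates.values()) / max(rates.values())

for g, r in rates.items():
    print(f"group {g}: selection rate {r:.3f}")
print(f"demographic parity difference: {parity_diff:.3f}")
flag = "  (below the illustrative 0.8 four-fifths threshold)" if impact_ratio < 0.8 else ""
print(f"disparate impact ratio: {impact_ratio:.3f}{flag}")
```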
The proposed amendments should strengthen enforcement through automated monitoring systems deployed by regulatory authorities to detect statistical anomalies in AI system outputs, mandatory incident reporting for discriminatory outcomes or system failures, whistleblower protections for employees reporting algorithmic misconduct, and graduated penalties including system suspension for serious compliance failures.
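One way a supervisory authority's automated monitoring could flag statistical anomalies is a simple two-proportion test comparing a system's recent favourable-decision rate against its historical baseline. The sketch below is an illustration under assumed reporting data and an assumed alert threshold, not a prescribed monitoring method.

```python
# Illustrative anomaly check: flag a significant shift in a system's
# favourable-decision rate between a baseline window and a recent window.
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z statistic for a shift in decision rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 4,200 of 10,000 baseline decisions were favourable,
# versus 380 of 1,000 in the most recent reporting window.
z = two_proportion_z(4200, 10_000, 380, 1_000)
print(f"z = {z:.2f}" + ("  -> flag for regulator review" if abs(z) > 2.0 else ""))
```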
1.5 Implementation Considerations and Trade-offs
These proposed amendments raise legitimate concerns about regulatory burden and innovation impacts. However, recent research suggests that explainable AI requirements need not compromise system performance when incorporated during design phases rather than retrofitted to existing systems [13]. The amendments include provisions for technical assistance to smaller organizations and extended compliance timelines for startups, so that compliance costs do not simply entrench the market position of the largest technology companies.
Strengthening EU transparency requirements could enhance the Act’s global influence while creating challenges for international AI companies. The proposed amendments include mutual recognition provisions for equivalent transparency standards in other jurisdictions, facilitating international cooperation while maintaining European leadership in AI governance.
1.6 Conclusion
The EU AI Act represents a landmark achievement in AI governance, establishing important precedents for risk-based regulation and fundamental rights protection. However, its transparency requirements remain insufficiently specific to ensure meaningful algorithmic accountability. The proposed amendments address these weaknesses through mandatory explainability standards that leverage recent advances in interpretable machine learning.
These reforms would transform the Act from a framework that permits compliance theatre into one that demands genuine transparency and accountability. While implementation challenges exist, the alternative, continued algorithmic opacity in high-stakes decision-making, poses an unacceptable risk to individual rights and democratic governance.
The EU’s leadership in AI regulation provides an opportunity to establish global standards for algorithmic transparency. By strengthening the Act’s transparency requirements, Europe can ensure that its regulatory framework delivers on the promise of trustworthy AI while maintaining its position as a global standard-setter in the digital age.
References
[1] European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence. Official Journal of the European Union.
[2] Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97-112.
[3] European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence. COM(2021) 206 final.
[4] Ebers, M., et al. (2021). The European Commission’s Artificial Intelligence Act proposal. Common Market Law Review, 58(4), 1025-1074.
[5] Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
[6] Wachter, S., Mittelstadt, B., & Russell, C. (2021). Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law & Security Review, 41, 105567.
[7] European Union. (2024). Article 13, Regulation (EU) 2024/1689 on artificial intelligence.
[8] Selbst, A. D. (2021). Disparate impact in big data policing. Georgia Law Review, 52(1), 109-195.
[9] Kaminski, M. E. (2021). Regulating real-world surveillance. Washington Law Review, 90(4), 1113-1194.
[10] Bietti, E. (2020). From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
[11] Hacker, P. (2023). The European AI Act: A first analysis. European Law Review, 48(2), 175-192.
[12] Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.
[13] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should I trust you? Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.