
A Critical Policy Review: Canada’s Proposed AI and Data Act

Original Article by Mabel Zheng

Introduction: Navigating the Algorithmic Frontier

We are living through a grand, unplanned experiment. Artificial intelligence, once the domain of science fiction, now curates our news, screens our job applications, and even informs policing and judicial decisions. Its power is immense, and so is its potential for peril. In 2022, the Canadian government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, marking a foundational attempt to establish rules for this new algorithmic frontier (Bill C-27, 2022). AIDA’s premise is sound: a risk-based approach that focuses oversight on “high-impact” AI systems to mitigate dangers like algorithmic bias and misuse. Its core provisions, which include mandating risk assessments, requiring transparency, and banning harmful practices, aim to safeguard citizens without inhibiting innovation. Yet while AIDA’s risk-based foundation is a promising starting point, this analysis argues that its efficacy and legitimacy are critically undermined by three flaws: its debilitating vagueness, its insufficient protections for equity-seeking groups, and an exclusionary development process. By comparing AIDA to the European Union’s AI Act and the United States’ fragmented approach, this essay highlights these gaps and proposes reforms to ensure Canada’s framework protects the most vulnerable and fosters responsible development.


Overview of AIDA’s Core Framework

AIDA establishes a principles-based framework centered on regulating “high-impact” AI, a crucial but ill-defined term. Its core provisions mandate risk assessments and mitigation measures for developers and deployers, require transparency in automated decisions, and prohibit practices that cause harm, exploit vulnerabilities, or reinforce prejudice (Bill C-27, 2022, ss. 5, 6, 8). Enforcement would be overseen by an AI and Data Commissioner, with the most serious violations carrying fines of up to the greater of $25 million and 5% of gross global revenue (Bill C-27, 2022, s. 39). At its heart, AIDA proposes a system that seeks to avoid restricting low-risk innovation while concentrating oversight where it matters most, a structure that reflects the growing international consensus that AI necessitates targeted, risk-weighted governance (Cath et al., 2018).
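
To make the scale of those penalties concrete, the short sketch below computes the statutory ceiling under the greater-of reading of the penalty clause. It is a minimal illustration only: the function name and the sample revenue figure are assumptions for this example, not anything drawn from the Act’s text.

```python
# Illustrative sketch (not legal advice): the greater-of reading of
# AIDA's penalty ceiling for the most serious offences.
def max_penalty_cad(gross_global_revenue: float) -> float:
    """Ceiling: the greater of $25 million and 5% of gross global revenue."""
    FLAT_CAP = 25_000_000       # $25 million flat ceiling
    REVENUE_SHARE = 0.05        # 5% of gross global revenue
    return max(FLAT_CAP, REVENUE_SHARE * gross_global_revenue)

# For a hypothetical firm with $1B in global revenue, the 5% prong governs:
print(max_penalty_cad(1_000_000_000))  # 50000000.0
```

For a large multinational, the revenue-linked prong quickly dominates the flat cap, which is precisely what gives the provision its deterrent weight.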


Strengths: A Foundational Step

AIDA’s primary strength is its deliberate focus on high-risk sectors like healthcare, justice, and employment, acknowledging that the stakes of algorithmic decision-making are not equal. This measured approach avoids impeding innovation in low-risk applications, and its requirements for documentation and impact assessments are essential tools for promoting corporate accountability, aligning with global best practices from the OECD (2021). Its explicit bans on harmful AI applications position Canada as a potential ethical leader on the world stage. The very introduction of AIDA signals a crucial recognition that the unregulated deployment of AI systems constitutes a fundamental threat to social cohesion and individual rights.


Weaknesses: Ambiguity and Exclusion

Despite its admirable intentions, AIDA’s flaws are fundamental. It recognizes the threat of unaccountable AI yet lacks the precise tools to prevent it, and its vagueness creates a risk of ethics washing, in which the appearance of governance masks continued inequity. First, the term “high-impact” remains dangerously vague: without clear definitions or illustrative examples, the law creates regulatory uncertainty, a gift to bad actors and a barrier for ethical companies attempting to comply. This lack of legal clarity contrasts sharply with the EU’s detailed, tiered classification system (European Parliament, 2024).
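
The contrast is easy to make concrete. A tiered regime like the EU’s can, at least in caricature, be written down as an explicit rule table mapping uses to risk tiers. The four tier names below follow the EU Act’s well-known taxonomy, but the use-case keys and the code are hypothetical illustrations; the point is that no comparable table can currently be written against AIDA’s text, because “high-impact” is deferred to future regulation.

```python
# Hypothetical sketch: an EU-style tiered classification as an explicit
# rule table. AIDA, by contrast, leaves "high-impact" undefined, so no
# deterministic mapping like this can be derived from its text.
RISK_TIERS = {
    "social_scoring": "unacceptable",  # banned outright under the EU Act
    "hiring_screen":  "high",          # Annex-style high-risk category
    "chatbot":        "limited",       # transparency duties only
    "spam_filter":    "minimal",       # no new obligations
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case, or flag it as unresolvable."""
    return RISK_TIERS.get(use_case, "unclassified: the rules give no answer")

print(classify("hiring_screen"))   # high
print(classify("credit_scoring"))  # unclassified: the rules give no answer
```

The fallback branch is the rhetorical point: under a vague statute, every unlisted use case lands in that branch, and compliance becomes guesswork.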


Second, AIDA treats bias as a technical issue rather than a deeply systemic one. While it prohibits discriminatory outcomes, it lacks mandatory equity safeguards such as fundamental rights impact assessments for AI used in policing, child services, or immigration, sectors where Indigenous, racialized, and disabled communities face disproportionate harm (Benjamin, 2019). These are precisely the contexts where historical data is most likely to encode discrimination (Suresh & Guttag, 2021). By not mandating such assessments, the law prohibits harmful outcomes but does not require the processes that would prevent them, leaving the burden of proof on already marginalized groups.
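
For illustration, one concrete safeguard such a mandatory assessment could require is a pre-deployment comparison of selection rates across groups, in the spirit of the conventional “four-fifths” disparate-impact screen. The sketch below is an assumption-laden example: the threshold, the data, and the function names are illustrative, not anything AIDA specifies.

```python
# Hypothetical equity check: compare positive-outcome rates across groups.
# The 0.8 threshold is the conventional "four-fifths" rule of thumb,
# not a requirement found in AIDA.
from collections import defaultdict

def selection_rates(decisions):
    """Per-group positive-outcome rates from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of lowest to highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group, 1 = approved, 0 = denied)
history = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
ratio = disparate_impact_ratio(history)
print(f"{ratio:.2f}")  # 0.50: group B is approved at half group A's rate
if ratio < 0.8:
    print("Flag for equity review before deployment")
```

A check this simple is not a substitute for a participatory fundamental rights assessment, but mandating even a baseline screen would shift the burden of detection from affected communities onto deployers.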


Third, the Act’s development process lacked meaningful inclusion. Crafted with minimal public consultation, it excluded the communities most affected by AI systems (Raji et al., 2020). This technocratic approach contrasts with the EU’s deliberative model and undermines the law’s democratic legitimacy: a law designed to protect the public must be built with the public. Finally, without guaranteed funding and independence, the proposed AI and Data Commissioner risks becoming ineffective, mirroring the enforcement weaknesses seen under PIPEDA (Office of the Privacy Commissioner of Canada, 2021). Together, these structural flaws risk leaving the Act toothless from the day it takes effect.


Comparative Analysis: Finding a Middle Ground

AIDA attempts to balance the EU’s comprehensive regulatory model with the US’s flexible, sector-specific approach. The EU AI Act offers rigor, legal certainty, and robust rights protections, but is criticized for its complexity (European Parliament, 2024). The United States relies on a fragmented patchwork of agency guidelines that fails to ensure consistent accountability (NIST, 2023). Canada’s compromise is ambitious but unstable. Without addressing its ambiguities, AIDA risks combining the regulatory burden of the EU with the uncertainty of the US: a worst-of-both-worlds framework that offers neither the EU’s protections nor the US’s flexibility for innovation, and satisfies no key stakeholder.


Recommendations: From Principle to Practice

To be effective, AIDA must evolve from principle to practice. First, the government must provide a clear, detailed definition of “high-impact” AI to eliminate regulatory uncertainty. Second, the legislation must embed equity through mandatory, participatory equity impact assessments for AI deployed in high-stakes public sectors; these assessments should be co-designed with marginalized communities, including Indigenous groups, civil society organizations, and disability rights advocates. Third, the legislative process must be reopened to include historically marginalized groups, strengthening democratic legitimacy. Fourth, the proposed AI and Data Commissioner must be guaranteed operational independence and adequate funding. Finally, to support responsible innovation rather than stifle it, the government should introduce grants and regulatory sandboxes for startups developing fair and accessible AI systems.


Conclusion: The Stakes of Getting It Right

The ambition of AIDA is laudable, but ambition is not enough. In its current form, the Act represents a promising yet perilous first step, whose vagueness, missing equity protections, and exclusionary development process risk legitimizing ethics washing. Canada stands at a crossroads: accept a future where powerful systems make unaccountable, opaque decisions, or build a trustworthy AI ecosystem grounded in human dignity. Closing the gap between AIDA’s intentions and its execution is essential. By embedding clarity, equity safeguards, and democratic participation at its core, Canada can set a global standard for AI governance that is not only effective but genuinely just and legitimate.


Citations

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.


Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 44th Parliament, 1st Session. (2022). Parliament of Canada. https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading


Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach. Science and Public Policy, 45(2), 153–163.


European Parliament. (2024). Artificial Intelligence Act. EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206


National Institute of Standards and Technology (NIST). (2023). AI Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://www.nist.gov/itl/ai-risk-management-framework


OECD. (2021). OECD Principles on Artificial Intelligence. OECD Legal Instruments. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449


Office of the Privacy Commissioner of Canada. (2021). PIPEDA and the Responsible Development of AI. https://www.priv.gc.ca/en/opc-actions-and-decisions/ar_index/202021/ar_2021/


Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44).


Suresh, H., & Guttag, J. V. (2021). A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. In Proceedings of the 2021 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’21).





