Ideas that spark change
Explore a growing portfolio of academic research and critical thinking — from blogs and op-eds to research papers and podcast interviews. Each piece offers critical insight into urgent digital issues and is tagged by theme for easy discovery.


A Critical Policy Review: Canada’s Proposed AI and Data Act
Canada’s proposed Artificial Intelligence and Data Act (AIDA) is its first attempt to govern AI systems entrenched in everyday life. While the Act sets out to protect the public through a risk-based framework, its vague definitions, limited equity protections, and exclusionary development process raise concerns.
5 min read


Beyond Digital Band-Aids: A Youth-Centered Framework for AI Mental Health Governance
A teenager in crisis shouldn’t have to wonder whether their AI “listener” is saving their life or saving their data. As more youth turn to chatbots for comfort, weak safeguards turn private pain into unprotected information. We urgently need rules that provide real safeguards for our youth.
6 min read


The EU AI Act’s Transparency Gap
AI is increasingly embedded in key societal sectors: work, housing, and care. The EU AI Act aims to provide a safeguard, but its transparency gaps risk leaving people vulnerable to decisions they cannot see, challenge, or understand.
6 min read


The Digital Trojan Horse: Widespread Data Collection and Its Impact on Equity
In an age where our lives are increasingly lived online, the data we leave behind has become a source of vulnerability. This piece explores how widespread data collection exposes the public to unprecedented risks, and why America’s existing privacy laws are no match for the scale of this new digital reality.
6 min read


The Equity Blind Spot: How the EU AI Act Fails to Protect the Most Vulnerable
The EU AI Act is celebrated as a landmark in global tech regulation. But it contains a quiet omission: it does not account for the unequal realities in which AI operates. This piece examines how a law built to manage risk can still overlook the people most vulnerable to algorithmic harm.
6 min read