Independent Research


A Critical Policy Review: Canada’s Proposed AI and Data Act
Canada’s proposed Artificial Intelligence and Data Act (AIDA) is its first attempt to govern AI systems entrenched in everyday life. While the Act sets out to protect the public through a risk-based framework, its vague definitions, limited equity protections, and exclusionary development process raise concerns.
Alina Huang
Nov 18 · 5 min read


Beyond Digital Band-Aids: A Youth-Centered Framework for AI Mental Health Governance
A teenager in crisis shouldn’t have to wonder whether their AI “listener” is saving their life or saving their data. As more youth turn to chatbots for comfort, weak safeguards turn private pain into unprotected information. We urgently need rules that safeguard our youth.
Alina Huang
Nov 18 · 6 min read


The EU AI Act’s Transparency Gap
AI is increasingly embedded in key societal sectors: work, housing, care. The EU AI Act aims to provide a safeguard, but its transparency gaps risk leaving people vulnerable to decisions they cannot see, challenge, or understand.
Alina Huang
Nov 18 · 6 min read


The Digital Trojan Horse: Widespread Data Collection and Its Impact on Equity
In an age where our lives are increasingly lived online, the data we leave behind has become a source of vulnerability. This piece explores how widespread data collection exposes the public to unprecedented risks, and why America’s existing privacy laws are no match for the scale of this new digital reality.
Alina Huang
Nov 18 · 6 min read


The Equity Blind Spot: How the EU AI Act Fails to Protect the Most Vulnerable
The EU AI Act is celebrated as a landmark in global tech regulation. But it contains a quiet omission: it does not account for the unequal realities in which AI operates. This piece examines how a law built to manage risk can still overlook the people most vulnerable to algorithmic harm.
Alina Huang
Nov 18 · 6 min read


Evaluation of New York City Local Law 144-21 on AI Hiring Policy
AI influences who gets interviewed, shortlisted, or rejected, often before a human ever sees a CV. But as New York City’s first-in-the-US hiring law shows, regulating these systems is far from straightforward. This piece examines what Local Law 144-21 gets right, where it falls short, and what needs to change.
Alina Huang
Nov 18 · 6 min read


Ensuring Legal Confidentiality for AI User Conversations in Moldova
As more Moldovans turn to AI for advice, support, and everyday problem-solving, few realise their conversations have no legal protection at all. This proposal asks how Moldova can close this gap by creating a new form of confidentiality fit for the age of AI.
Alina Huang
Nov 18 · 5 min read


The Algorithmic Divide: How the FDA's Historic AI Policy Unwittingly Crafts Inequity
When the FDA updated its medical AI policy, it reshaped the lives of patients who will rely on these systems. Yet algorithms guiding clinical decisions risk leaving entire communities behind. This piece examines how a policy like this may deepen inequality, and what it would take to prevent it.
Alina Huang
Nov 18 · 5 min read


California’s ADMT Regulations: Balancing AI Innovation and Consumer Protection
As AI quietly takes over decisions about jobs, housing, credit, and healthcare, California faces a dilemma: how do you protect people without crushing innovation in Silicon Valley?
Alina Huang
Nov 18 · 7 min read


A Government-Guided, Market-Executed AI Assessment Framework
In the age of AI, trust has to be legislated. But how do we create AI oversight that is strong enough to protect people, yet flexible enough to let innovation breathe? Here’s our attempt to bridge that gap.
Alina Huang
Nov 18 · 5 min read


TAP-Fusion: Designing Fair Multi-Modal AI for Equitable Alzheimer’s Risk Detection
TAP-Fusion is a human-centered AI project that brings together brain scans, speech, text, and clinical data to detect Alzheimer’s risk earlier and more fairly, showing how thoughtful design can make healthcare technology more inclusive.
Alina Huang
Oct 15 · 3 min read