All Posts


A Critical Policy Review: Canada’s Proposed AI and Data Act
Canada’s proposed Artificial Intelligence and Data Act (AIDA) is its first attempt to govern AI systems entrenched in everyday life. While the Act sets out to protect the public through a risk-based framework, its vague definitions, limited equity protections, and exclusionary development process raise concerns.
Alina Huang
Nov 18 · 5 min read


Beyond Digital Band-Aids: A Youth-Centered Framework for AI Mental Health Governance
A teenager in crisis shouldn’t have to wonder whether their AI “listener” is saving their life or saving their data. As more youth turn to chatbots for comfort, weak safeguards turn private pain into unprotected information. We urgently need rules that provide real safeguards for our youth.
Alina Huang
Nov 18 · 6 min read


The EU AI Act’s Transparency Gap
AI is increasingly embedded in key societal sectors: work, housing, care. The EU AI Act aims to provide a safeguard, but its transparency gaps risk leaving people vulnerable to decisions they cannot see, challenge, or understand.
Alina Huang
Nov 18 · 6 min read


The Digital Trojan Horse: Widespread Data Collection and Its Impact on Equity
In an age where our lives are increasingly lived online, the data we leave behind has become a source of vulnerability. This piece explores how widespread data collection exposes the public to unprecedented risks, and why America’s existing privacy laws are no match for the scale of this new digital reality.
Alina Huang
Nov 18 · 6 min read


The Equity Blind Spot: How the EU AI Act Fails to Protect the Most Vulnerable
The EU AI Act is celebrated as a landmark in global tech regulation. But it contains a quiet omission: it does not account for the unequal realities in which AI operates. This piece examines how a law built to manage risk can still overlook the people most vulnerable to algorithmic harm.
Alina Huang
Nov 18 · 6 min read


Evaluation of New York City Local Law 144-21 on AI Hiring Policy
AI influences who gets interviewed, shortlisted, or rejected, often before a human ever sees a CV. But as New York City’s first-in-the-US AI hiring law shows, regulating these systems is far from straightforward. This piece examines what Local Law 144-21 gets right, where it falls short, and what needs to change.
Alina Huang
Nov 18 · 6 min read


Ensuring Legal Confidentiality for AI User Conversations in Moldova
As more Moldovans turn to AI for advice, support, and everyday problem-solving, few realise their conversations have no legal protection at all. This proposal asks how Moldova can close this gap by creating a new form of confidentiality fit for the age of AI.
Alina Huang
Nov 18 · 5 min read


The Algorithmic Divide: How the FDA's Historic AI Policy Unwittingly Crafts Inequity
When the FDA updated its medical AI policy, it reshaped the lives of patients who will rely on these systems. However, algorithms guiding clinical decisions risk leaving entire communities behind. This piece examines how a policy like this may deepen inequality, and what it would take to prevent it.
Alina Huang
Nov 18 · 5 min read


California’s ADMT Regulations: Balancing AI Innovation and Consumer Protection
As AI quietly takes over decisions about jobs, housing, credit, and healthcare, California faces a dilemma: how do you protect people without crushing innovation in Silicon Valley?
Alina Huang
Nov 18 · 7 min read


A Government-Guided, Market-Executed AI Assessment Framework
In the age of AI governance, trust has to be legislated. But how do we create AI oversight that is strong enough to protect people yet flexible enough to let innovation breathe? Here’s our attempt to bridge that gap.
Alina Huang
Nov 18 · 5 min read


History Meets AI: A Conversation with a Digital Humanist
Kairos stands where history meets code: a digital humanist turning ancient manuscripts into living data. She explores how AI can revive culture’s past without erasing its heart. She reminds us that progress must still feel human.
Alina Huang
Oct 26 · 7 min read


TAP-Fusion: Designing Fair Multi-Modal AI for Equitable Alzheimer’s Risk Detection
TAP-Fusion is a human-centered AI project that brings together brain scans, speech, text, and clinical data to detect Alzheimer’s risk earlier and more fairly, showing how thoughtful design can make healthcare technology more inclusive.
Alina Huang
Oct 15 · 3 min read


Why Algorithms Discriminate: Risks in Values and Justice
Algorithms now decide who gets a job, a loan, or even a voice, but they carry the hidden values of their makers. This paper explores how such silent choices fuel discrimination and asks how justice can be reclaimed in the digital age.
Alina Huang
Sep 29 · 17 min read


AI Plus Action Plan Signals a New Era of Innovation and Governance in China
China’s new AI Plus Action Plan marks a turning point: treating AI not as a tool, but as the backbone of future economies, governance, and global collaboration. From smart enterprises to UN-led governance, the blueprint signals how AI will reshape industries, jobs, and society by 2030.
Alina Huang
Aug 29 · 2 min read


China’s Draft AI Ethics Measures: A Pivotal Step in Responsible Governance
On August 22, China released its draft AI Ethics Services Measures (2025). Building on earlier policies, the draft sets out a three-tiered framework that stresses responsibility, risk management, and compliance, placing China’s approach between the EU’s strict rules and the US’s more fragmented model.
Alina Huang
Aug 23 · 2 min read


Exploring Metacognition, Digital Divides, and Algorithmic Bias with Dr. Christy Hamilton
Dr. Christy Hamilton (UC Santa Barbara) looks at how technology shapes how we think about our own thinking. She reflects on digital divides, our reliance on familiar platforms, and the risks of personalization. We conclude that design choices can either deepen inequality or build more inclusive digital futures.
Alina Huang
Aug 21 · 6 min read


From Grok 4 to Musk: Reflections on the Politics and Ethics of AI
The release of Grok 4 in 2025 was celebrated as a milestone for multimodal large language models. However, closer examination reveals troubling biases, partisan tendencies, and ethical risks, raising urgent questions about neutrality and accountability in artificial intelligence.
Alina Huang
Aug 21 · 5 min read


Can AI Ever Be Unbiased? And Who Decides What’s Fair?
AI now permeates healthcare, education, and social media, but it mirrors human data and values, reproducing bias and inequality. This essay asks whether AI can ever be unbiased, and who holds the authority to decide what counts as fair.
Alina Huang
Jul 6 · 3 min read


How a Teen Designer Turns Algorithmic Bias Into Wearable Activism
High school designer Vivianne Hartan uses bold graphics and sustainable fashion to challenge the myth of algorithmic neutrality. Her collection Coded Inequality turns T-shirts into wearable protests, blending art and activism to show that algorithms aren’t neutral tools but reflections of human choices and values.
-
May 28 · 2 min read


Invisible Injustice: How AI Exploits the Marginalised and Why Regulation Must Act First
The gig economy, built on short-term, on-demand jobs from platforms like Uber and Deliveroo, promises freedom. In reality, algorithms decide who gets work, pay, and stability. Efficiency often hides old inequalities, especially for women, for whom caregiving, safety, and choice are penalized while the “ideal worker” is rewarded.
-
May 26 · 10 min read