Can AI Ever Be Unbiased? And Who Decides What’s Fair?
- Alina Huang
- Jul 6
- 3 min read
- Updated: Sep 16
Artificial intelligence (AI) is becoming an increasingly significant part of our daily lives. It plays a role in important decision-making processes in areas such as healthcare, education, and social media. Since AI is created by humans, it often reflects and perpetuates existing human biases and inequalities. In this essay, I will explore whether AI can ever be truly unbiased and examine who holds the power to ensure AI is developed and used in a fair and responsible way. Governments, large technology companies, and certain influential individuals have considerable control over how AI is designed and implemented, which directly impacts its fairness.
From one perspective, many people believe that bias in AI can be reduced with the right efforts. For example, Sam Altman, CEO of OpenAI, has emphasised the importance of making AI safe, ethical, and transparent, and OpenAI publishes research intended to promote the development of responsible AI that benefits everyone. Additionally, governments and organisations are beginning to introduce regulations to manage AI responsibly. The European Union, for instance, has introduced the AI Act, which aims to increase transparency in AI systems and to hold companies accountable if their AI causes harm or unfair treatment. These efforts suggest that, with the right rules and cooperation, AI can become a powerful tool that promotes fairness and protects human rights.
From another perspective, some experts argue that AI is inherently biased because it is trained on data produced by human society, with all of its flaws, stereotypes, and historical inequalities. AI learns from data that often encodes unfair assumptions and reflects systemic injustices. Kate Crawford, a leading AI researcher, points out in Atlas of AI that AI systems frequently mirror existing power structures in society. Furthermore, a small number of large technology companies, such as Google and Meta, dominate AI development and control vast amounts of data, and they often prioritise profit and influence over fairness and ethics. This dynamic was evident in the case of California's SB 1047, a bill that sought to impose safety obligations on developers of the largest AI models but was weakened amid pressure from the very companies it targeted, and was ultimately vetoed. In addition, many AI systems are highly complex and opaque, making it difficult for the public or governments to assess how fair they truly are. This lack of transparency increases the risk that AI could cause harm without anyone being aware.
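To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the "hiring" records, the group labels "A" and "B", and the approval rates are hypothetical, and the "model" is just a frequency table, not any real system. The point is only to show that a system which learns from biased historical decisions will reproduce them, even when the underlying candidates are identical.

```python
# A minimal, hypothetical sketch of how a model trained on biased
# historical data reproduces that bias. All data here is synthetic.
import random

random.seed(0)

# Synthetic "historical hiring" records: candidates in both groups are
# equally likely to be qualified, but past human reviewers approved
# qualified group-B candidates at half the rate of group A.
def make_record(group):
    qualified = random.random() < 0.5          # identical across groups
    if group == "A":
        hired = qualified and random.random() < 0.9
    else:                                       # biased historical decisions
        hired = qualified and random.random() < 0.45
    return {"group": group, "qualified": qualified, "hired": hired}

history = [make_record(g) for g in ("A", "B") for _ in range(5000)]

# "Training": a naive model that simply learns the historical hire rate
# for each (group, qualified) combination and scores new candidates with it.
rates = {}
for key in {(r["group"], r["qualified"]) for r in history}:
    matching = [r for r in history if (r["group"], r["qualified"]) == key]
    rates[key] = sum(r["hired"] for r in matching) / len(matching)

# The learned model recommends qualified B candidates far less often than
# qualified A candidates, even though qualification rates are identical.
print("P(hire | qualified, A):", round(rates[("A", True)], 2))
print("P(hire | qualified, B):", round(rates[("B", True)], 2))
```

Nothing in the code "decides" to discriminate; the disparity comes entirely from the data it was given, which is exactly the problem Crawford and others describe. Real systems are far more complex, but the underlying dynamic is the same.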
In conclusion, I believe that AI can never be completely free from bias, as it inevitably reflects the problems and inequalities present in human society. While efforts by companies like OpenAI and new rules such as the EU AI Act offer hope for fairer and more responsible AI, the concentration of power in the hands of a few major tech companies and the difficulty of regulating complex, opaque systems remain significant obstacles. To build AI that is truly fair and accountable, a diverse range of voices must be included in its development, and policies must evolve alongside the technology. Ultimately, the future of AI depends on whether we can guide it to reflect the best of humanity, rather than its worst.
Bibliography
Crawford, Kate. Atlas of AI. Yale University Press, 2021, yalebooks.yale.edu/book/9780300264630/atlas-of-ai/.
European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. EUR-Lex, 21 Apr. 2021, eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.
Noble, Safiya Umoja. Algorithms of Oppression. NYU Press, 2018, nyupress.org/9781479837243/algorithms-of-oppression/.
OpenAI. “About OpenAI.” OpenAI, 2025, openai.com/about/.