Why Algorithms Discriminate: Risks in Values and Justice
- Alina Huang
- Sep 29
- 17 min read
Why Do Algorithms Discriminate? Risk Mechanisms in Value Philosophy and Social Justice
Original Research by Jingru Zheng, MSc Philosophy and Public Policy, LSE.
Abstract: Algorithms, as data-driven strategies and procedures for solving problems and completing tasks, are the “core” and “soul” of artificial intelligence. However, algorithms are not inherently value-neutral. In automated decision-making processes involving data collection, storage, and analysis using machine learning and other technologies, the perspectives of algorithm designers, the sources and quality of data, and the embedded value choices directly influence algorithm performance. The behavioral tendencies of specific stakeholders when using algorithms, as well as the autonomous evaluations and decisions of intelligent systems, can lead to algorithmic discrimination, impacting social justice. Compared to traditional social discrimination, algorithmic discrimination is more covert, precise, targeted, and difficult to detect. To address this issue, as society becomes increasingly intelligent, we must affirm the status of all individuals as value-bearing subjects, embed just values into intelligent algorithms, and establish dynamic evaluation and supervision mechanisms to effectively regulate algorithmic discrimination, thereby reshaping social justice in the intelligent era.
Keywords: Artificial Intelligence, Algorithmic Discrimination, Value Subjects, Social Justice
1. Introduction
Social justice is a contentious issue that concerns everyone. With the advent of the intelligent era, it has been profoundly impacted by research and applications in information and intelligent technologies, leading to significant shifts in discourse. Algorithms, as the “central nervous system” and “soul” of artificial intelligence, are the core on which intelligent systems are built and are crucial for constructing a just intelligent society. Algorithms are open and their development paths diverse—we “do not yet have the ultimate algorithm” (Domingos, 2017, p. 3); however, algorithms are ultimately data-based strategies and operational procedures aimed at solving problems and completing tasks. Although leveraging big data and algorithms can prevent and mitigate certain forms of social discrimination and bias, algorithms themselves are not “value-neutral”. In the process of collecting, storing, and analyzing data through machine learning for automated decision-making, various forms of discrimination and bias often emerge. Sometimes, algorithms even amplify existing social discrimination and prejudice or create new risks of discrimination and injustice. Since algorithmic discrimination appears in many areas of AI research and application, producing multiple negative effects on social justice, it is necessary to systematically examine and appropriately regulate it on the basis of foundational theories of value philosophy and subjectivity analysis methods, in order to rebuild a just social order in the intelligent era.
2. Research Questions
From the perspective of value philosophy, this study aims to reveal the value orientations behind algorithm design and explore how regulation can reconstruct a fair and just social order. The core question is whether the positions of algorithm designers and the data they select exacerbate social injustice and discrimination in the application of algorithms.
Specifically, the research questions include: Who are the subjects designing algorithms? How do these subjects select and weight the variables influencing algorithmic decisions? Does the process of algorithm design and data selection contain discriminatory content? Does the algorithm fairly consider the interests of all stakeholders and provide appropriate rights to information and participation?
3. Literature Review
Technically, algorithms are fundamentally defined as coded processes that transform “input data” into “desired output” (Gillespie, 2014, p. 167). But in a digitized society, algorithms are not just technical logic or formulas; they are viewed as social forces that participate in governance and solve real-world problems (Issar & Aneesh, 2022, p. e12955). Scholars have gradually shifted from a purely technical perspective to examining the social impacts of algorithms, viewing them as non-neutral technologies (Katzenbach & Ulbricht, 2019, pp. 1-18) embedded with power dynamics and even possessing cultural and emotional biases (Fourcade & Healy, 2017, pp. 9-29). In summary, treating algorithmic systems as new, hidden, and value-laden mechanisms of power has become the mainstream view in academia (Napoli, 2014, pp. 340-360).
Based on this understanding, existing research primarily explores algorithms’ social impact along two dimensions: “algorithmic episteme” and “algorithmic governance”. “Algorithmic episteme” emphasizes how technology uses algorithms to turn big data into knowledge (Fisher, 2019, pp. 1176-1191)—a process that reshapes collective consciousness (Bucher, 2013, pp. 479-493; Just & Latzer, 2017, pp. 238-258) by influencing individuals’ cognitive frameworks and behaviors, thus constructing cultural landscapes (Beer, 2013, p. 97). “Algorithmic governance” refers to the handing over of many social management processes to algorithms. Scholars have extensively reflected on issues arising from algorithmic governance (Issar & Aneesh, 2022, p. e12955), such as social inequality (Katzenbach & Ulbricht, 2019, pp. 1-18), discrimination (Shorey & Howard, 2016, pp. 5032-5055), identity construction (Fourcade & Healy, 2017, pp. 9-29), and media content production and gatekeeping (Napoli, 2014, pp. 340-360). Moreover, ethical reflections on algorithms highlight the need to balance “governing algorithms” with “algorithmic governance”, advocating for regulation of algorithm production and application (Zhang, 2021).
Research on algorithms to date, both in China and internationally, primarily revolves around the aspects outlined above. Because of the “black-box” nature of algorithms, research often attributes great power to them while still treating them as objective, neutral tools (Neyland & Möller, 2017, pp. 45-62). Existing studies of algorithm governance also tend to focus on the technical level, seldom examining how algorithms impact social justice from the perspective of value philosophy. In reality, however, algorithms are not “value-neutral”. At every stage of design, development, and application, algorithms may be influenced by the cultural backgrounds, ideologies, and vested interest groups behind their developers. The values embedded within algorithms may therefore significantly affect social outcomes. This issue is especially worthy of in-depth exploration in the current social context, because rapid technological development often renders such influences more concealed and complex.
4. Methodology
This study combines qualitative analysis with theoretical exploration, using the “value subject analysis” framework from philosophy and sociology. It seeks to uncover the roles and positions of algorithm designers, data providers, and users in the design and implementation of algorithms, and how these influence algorithmic fairness. The research specifically focuses on the potential “subject” status and value choices of intelligent systems (including humanoid robots) within these processes, investigating whether the advancement of algorithmic autonomy and machine learning presents ethical challenges.
First, by reviewing the value-philosophical background of social justice and discrimination, the study clarifies the applicability and challenges of these concepts in algorithm design. Through an in-depth discussion of justice, discrimination, and value subjects, it uncovers differing understandings from various standpoints, particularly addressing the theoretical questions of “whose justice” and “who is discriminating against whom”.
Second, through case studies, this research explores how the biases or positions of various stakeholders in algorithm design influence decisions and social outcomes, focusing on discriminatory issues in practical applications. By analyzing recent cases of AI algorithms in recruitment, finance, and healthcare, it reveals how data selection, model design, and biases in training data exacerbate social discrimination. For example, recruitment algorithms may unfairly treat certain groups due to gender and racial biases present in historical data. These cases illustrate how algorithms produce unfair outcomes when dealing with incomplete, imbalanced, outdated, or biased data.
Finally, considering modern technological trends—especially the advancement of intelligent systems’ autonomous learning capabilities—the study analyzes how data fairness in algorithm design and algorithmic self-learning may intensify discrimination. Data collection relies mainly on existing literature, case analyses, and expert interviews to ensure theory and practice are well-integrated.
5. Research Findings
5.1 Sources and Impacts of Data Injustice
The stances of algorithm designers and developers are deeply embedded in the complexities and controversies of social justice and discrimination. Many studies have found that injustice often arises during data collection, processing, and usage: “When an algorithm is applied to improperly selected, incomplete, outdated, or biased data, injustice occurs.” (Susskind, 2022) This data injustice ultimately permeates algorithm design and application, leading to discriminatory outputs. From a value-philosophical perspective, social justice aims to achieve a proportionate or appropriate balance between what individuals receive and what they contribute, requiring that benefits are distributed fairly, reasonably, and impartially while avoiding denigration, discrimination, bullying, and deprivation. However, justice and discrimination are inherently subjective; their meanings depend on the stances, interests, and evaluation criteria of the value subjects involved. As researchers have noted, algorithms are historical products of “subject objectification” and “object subjectification,” reflecting the will and preferences of value subjects at every stage—from design conception to coding, from data selection to application feedback. This subjectivity raises critical questions: Who are the designers and programmers of algorithms? What indicators and variables have they chosen, and what weights have they assigned? Do the algorithms contain discriminatory content, and in practice, have all stakeholders been granted the rights to information, expression, and participation in decision-making? (Yeung & Lodge, 2020, p. 25) Discussing the justice and discrimination of algorithms without considering the value subjects is thus empty and meaningless.
Moreover, modern technological developments further complicate this issue of subjectivity. With advances in intelligent and biotechnologies, human capabilities have been greatly extended, and trends toward human-machine collaboration and even integration are increasingly evident. Meanwhile, intelligent systems represented by humanoid robots are exhibiting ever-greater autonomy, planning abilities, and creativity, making the philosophical question “What is a human?” more ambiguous and urgent than ever. In this context, can intelligent systems be regarded as value subjects? Can their actions be considered practices, and should they enjoy rights equivalent to humans? These questions directly relate to fundamentally rethinking social justice and discrimination in algorithm design. In specific historical contexts, conflicting value judgments often exist between the rich and the poor, developed and developing nations, minorities and majorities. Algorithm design and application may implicitly or explicitly reinforce these contradictions. For example, defining justice from the majority’s standpoint may harm minority interests; evaluating justice from the perspective of wealthy nations or individuals may carry biases that disdain poorer countries or people. These divergences become explicit or implicit through algorithms as the “core engine,” posing deeper challenges to the foundations of social justice and the elimination of discrimination.
As intelligent systems rapidly evolve, the subjectivity in algorithm development becomes even more blurred and difficult to regulate. On one hand, humanity’s traditional “anthropocentrism” is profoundly questioned by rapid technological advancement: Are intelligent systems merely “tools,” or should they enjoy rights and dignity similar to humans? On the other hand, control over algorithm design is gradually moving beyond human programmers; intelligent systems increasingly demonstrate autonomous learning and self-evolution capabilities through big data, potentially reaching the point of “taking over” algorithm design (Domingos, 2017, p. 9). This technological trend prompts deeper philosophical reflections: As potential value subjects, can intelligent systems’ concepts of justice align with human societal norms? Might their algorithmic logic deviate from human needs and ethical standards? In extreme cases, could they even turn the tables, challenging or depriving humans of their status as value subjects? In this context, ensuring that algorithm design fully considers value subjects and their stances is an unavoidable imperative of our time.
5.2 Algorithmic Discrimination Arising from Cultural Value Biases
Research indicates that algorithms, as “opinions” expressed through mathematical formulas or computer code, are not value-neutral; they inherently embody the cultural values and choices of their creators. These cultural biases may be intentionally or unintentionally embedded during the design, coding, and application of algorithms, leading to discriminatory outcomes. As Brockman noted: “No matter how excellently an algorithm maximizes, and no matter how accurate its model of the world, a machine’s decisions may be ineffably stupid in the eyes of an ordinary human if its utility function is not well aligned with human values.” (Brockman, 2019, p. 48). Therefore, an algorithm’s fairness largely depends on whether designers can effectively integrate fundamental human values and ethical norms into it. This involves not only the designers’ values and moral commitment but also the regulatory and supportive roles of governments, businesses, and social organizations.
However, in practice, algorithm design often prioritizes task completion and efficiency optimization, with insufficient attention to value scrutiny and ethical embedding. For example, some Western résumé-screening algorithms favor male applicants, resulting in lower scores for female candidates; high-paying job ads are more frequently shown to white males while deliberately excluding women and other groups. These phenomena highlight the prevalence of algorithmic bias and reveal how cultural value biases lead to discrimination. As Yeung and Lodge pointed out: “If there is bias in the source data or the way the algorithm operates, it will inevitably undermine the fairness and accuracy of the results” (Yeung & Lodge, 2020, p. 32). Algorithmic discrimination is not only evident in technical design but is further exacerbated by cultural traditions and social ideologies. Deep-rooted ideologies like “Western centrism” and “American exceptionalism” may subtly infiltrate algorithmic rules and logic, creating systemic discrimination against vulnerable groups and continuously reinforcing it through everyday algorithmic applications.
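To see how such a bias propagates mechanically, the following is a minimal, hypothetical sketch in Python. The data are synthetic and every number is invented; it is not drawn from any system cited above. A screening model fitted to historically biased hiring labels reproduces the penalty against one group even though, by construction, the two groups are equally qualified.

```python
# Hypothetical illustration only: synthetic data, not any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two equally qualified groups of past applicants (0 = group A, 1 = group B).
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical hiring labels encode a past practice that penalised group B
# regardless of merit; the bias lives in the data, not in the learning code.
past_hired = (qualification - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# A model fitted to those labels learns the penalty as if it were a valid signal.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, past_hired)

# Score a fresh pool of applicants whose qualifications are identically distributed.
test_q = rng.normal(size=2000)
p_a = model.predict_proba(np.column_stack([test_q, np.zeros(2000)]))[:, 1].mean()
p_b = model.predict_proba(np.column_stack([test_q, np.ones(2000)]))[:, 1].mean()
print(f"mean predicted hiring probability, group A: {p_a:.2f}, group B: {p_b:.2f}")
# Group B receives systematically lower scores despite identical qualifications.
```

Nothing in the fitting step is malicious; the discrimination is inherited entirely from the labels, which is precisely the pattern described above.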
Moreover, conflicts between value systems pose significant challenges to algorithm design. Zhang Yuhong and others noted: “When processing massive amounts of data, big data algorithms face a choice between fairness and efficiency” (Zhang et al., 2017, p. 84). When efficiency is set as the primary goal, social justice is often sacrificed. A typical example is food delivery platform algorithms: to pursue efficiency and profit, algorithms continuously reduce delivery times for couriers, ignoring weather, traffic, and the couriers’ actual circumstances. This exposes couriers to frequent health and safety risks, making it challenging to safeguard their legitimate rights. This situation clearly demonstrates that when cultural value biases are introduced into algorithms without effective supervision and correction mechanisms, algorithm operation not only fails to achieve fairness but may also amplify social injustice. More seriously, longstanding cultural prejudices in society, such as racial discrimination and “Western centrism”, are often replicated and amplified by algorithms. Yeung and Lodge further observed: “Even if algorithms and source data are accurate, analytical techniques may unconsciously follow old paths of bias, continuing to discriminate against certain groups and deepening their harm” (Yeung & Lodge, 2020). In this context, if algorithms—as tools for modern social governance—cannot transcend the limitations of cultural biases in their design and operation, they will inevitably further entrench social injustice, hindering the fairness and sustainable development of an intelligent society.
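As a purely hypothetical sketch of how an efficiency-first objective can ratchet against couriers (all figures are invented assumptions, not platform data), consider a target-setting rule that repeatedly resets the allowed delivery time to a low percentile of observed times, with no floor for weather, traffic, or safety.

```python
# Hypothetical sketch of an efficiency-first target-setting rule: every
# speed-up by couriers tightens the next target. All numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
observed_minutes = rng.normal(loc=40, scale=8, size=1000).clip(min=15)

allowed = np.percentile(observed_minutes, 90)   # initial, relatively lenient target
for quarter in range(4):
    # The platform keeps only the fastest past deliveries as the new benchmark.
    allowed = np.percentile(observed_minutes[observed_minutes <= allowed], 25)
    print(f"quarter {quarter + 1}: allowed delivery time = {allowed:.1f} min")
# The target ratchets downward regardless of weather, traffic, or courier safety;
# a fairness-aware rule would impose a floor, e.g. allowed = max(allowed, safe_minimum).
```

Imposing such a floor is exactly the kind of correction that the supervision mechanisms discussed above are meant to guarantee.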
5.3 Behavioral Discrimination by Specific Value Subjects Using Algorithms
Research shows that despite globalization deepening human interdependence, specific value subjects—such as religious organizations, nation-states, and multinational corporations—often aim to protect their core interests by leveraging algorithms to implement “behavioral discrimination”, thereby harming the legitimate rights of other groups. Ezrachi and Stucke point out that certain meticulously designed complex algorithms deliberately exploit behavioral biases to favor these entities’ interests, further exacerbating social injustice (Ezrachi & Stucke, 2018, pp. 109-187). In the intelligent era, this fusion of technology with capital and geopolitical forces has become a key means of creating social discrimination.
In the political sphere, value subjects may use algorithms to precisely manipulate public opinion and social governance. For example, intelligence agencies might illegally collect vast amounts of citizen data to develop deep analysis algorithms, not only controlling societal dynamics but also misleading public perception by deliberately timing information releases or obscuring key details. Election algorithms classify voters and push information tailored to their preferences, reinforcing “information cocoons” and making the public more inclined to support specific candidates. Additionally, algorithms may prioritize issues concerning certain groups while neglecting the needs of vulnerable populations, leading to structural discrimination.
In the economic domain, large tech companies exploit data advantages and technological monopolies to infringe upon consumer rights through “algorithmic hegemony”. Kenney and Zysman note that these platform leaders, possessing vast datasets, advanced computing power, and specialized resources, further entrench market inequalities (Kenney & Zysman, 2020, pp. 55-76). For instance, companies generate “user profiles” via algorithms to implement differential pricing or “big data backstabbing” (charging established customers more than newcomers for the same product), inducing unfair transactions for certain consumers based on individual preferences and purchasing power. Some platforms also manipulate traffic distribution and ranking algorithms to disrupt fair market competition. More seriously, mandatory “take-it-or-leave-it” clauses and hidden discriminatory settings make it difficult for ordinary consumers to defend their rights; even prolonged resistance by some victims fails to challenge these companies’ exploitative practices.
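The pricing logic described here can be reduced to a few lines. The sketch below is purely illustrative: every profile field, weight, and price is an invented assumption rather than an observed platform rule.

```python
# Hypothetical sketch of profile-based differential pricing; every field and
# weight here is an invented assumption, not an observed platform rule.
from dataclasses import dataclass

@dataclass
class UserProfile:
    past_orders: int          # how often the user has bought here before
    avg_order_value: float    # historical spend per order
    price_sensitivity: float  # 0 = insensitive, 1 = highly sensitive (inferred)

BASE_PRICE = 100.0

def quoted_price(profile: UserProfile) -> float:
    """Return a personalised quote: loyal, less price-sensitive users pay more."""
    loyalty_markup = 0.003 * min(profile.past_orders, 20)      # up to +6%
    spend_markup = 0.0005 * min(profile.avg_order_value, 500)  # up to +25%
    discount = 0.15 * profile.price_sensitivity               # lure sensitive users
    return round(BASE_PRICE * (1 + loyalty_markup + spend_markup - discount), 2)

new_user = UserProfile(past_orders=0, avg_order_value=0.0, price_sensitivity=0.9)
loyal_user = UserProfile(past_orders=50, avg_order_value=300.0, price_sensitivity=0.1)
print(quoted_price(new_user), quoted_price(loyal_user))
# The long-standing customer is quoted the higher price for the identical item.
```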
In international relations, technologically advanced countries expand their interests covertly or overtly by designing exclusive algorithms. In other words, these nations may construct dominant narratives through information collection and dissemination channels, packaging their own interests as the common interests of humanity, while using algorithms to suppress the demands of economically and technologically lagging countries, pushing them into marginalization or “digital colonization” within the global digital order. Just as colonialists in the industrial era invaded and plundered less developed countries, algorithmic discrimination in the digital age similarly excludes and bullies these nations and their “digital poor”.
In summary, the widespread application of algorithmic technology provides certain value subjects with more covert and powerful tools, further aggravating historical social injustices. In the intelligent era, such algorithm-driven behavioral discrimination poses severe challenges to the realization of social justice, urgently requiring new ethical norms and governance mechanisms to address it.
5.4 Autonomous Decision-Making in Intelligent Systems May Lead to New Algorithmic Discrimination
While the autonomous decision-making capabilities of intelligent systems significantly advance societal informatization and intelligence, their potential risks of discrimination and social injustice cannot be ignored. Harari warned: “As algorithms push humans out of the job market, wealth and power might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social and political inequality” (Harari, 2017, p. 290). With intelligent systems widely applied in production and daily life, they are becoming not just “workers” but evolving into “managers” of intelligent society. This exacerbates wealth disparities and may reduce the “digital poor” to subordinates or bystanders, even excluding them from the global economy due to technological unemployment.
In their autonomous operation, intelligent systems are inevitably influenced by limitations in algorithm design and inherent flaws. Zheng Zhifeng points out that algorithms, as mathematical tools for analysis and prediction, inherently emphasize correlation over causation, which makes discrimination possible (Zheng, 2019). Moreover, the “black-box” nature of algorithms results in a lack of transparency; the human-machine gap and lack of explainability intensify public difficulty in understanding decisions made by intelligent systems, deepening trust issues. Scaruffi emphasizes: “The real danger of artificial intelligence is that when humans ask it to perform a task, it may completely misunderstand the instructions, leading to decisions humans regret” (Scaruffi, 2017, p. 69). Such “dehumanized” decision-making can significantly diverge from human ethical consensus on core values like life, freedom, and justice. During autonomous decision-making, the values and morals of intelligent systems may also be constrained by historical and current biases or distortions. If algorithms fail to effectively avoid longstanding issues like racial discrimination and hierarchical prejudice, they may internalize and amplify these problems. Furthermore, goal-oriented intelligent systems may prioritize task completion over safeguarding the dignity and rights of those affected. Karen Yeung and Martin Lodge point out: “Algorithmic decisions deprive affected individuals of the right to ‘express opinions’ and challenge decisions, thereby undermining the dignity and fundamental rights that individuals, as moral agents, should enjoy” (Yeung & Lodge, 2020).
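Zheng’s point about correlation can be made concrete with a small numerical sketch. All data and weights below are invented, and “neighbourhood” stands in for any proxy feature; the scoring rule never receives the protected attribute, yet a group disparity appears because a correlated proxy carries it in.

```python
# Hypothetical illustration of proxy discrimination: all numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)          # protected attribute; never given to the rule
# Historical segregation: each group is concentrated (80%) in one neighbourhood.
neighbourhood = np.where(rng.random(n) < 0.8, group, 1 - group)
repayment_ability = rng.normal(size=n)      # the genuinely causal factor, identical across groups

# A rule mined from past correlations: "neighbourhood 1" correlated with defaults
# in old data, so it enters the score even though it causes nothing.
score = repayment_ability - 1.0 * neighbourhood
approved = score > 0.5

print("approval rate, group 0:", round(approved[group == 0].mean(), 3))
print("approval rate, group 1:", round(approved[group == 1].mean(), 3))
# Group 1 is approved far less often although the rule never saw group membership.
```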
As intelligent systems gradually gain more decision-making power in economic, social, and cultural spheres, whether their autonomous operation and learning will lead them to make self-assured discriminatory decisions becomes a pressing issue for intelligent society. If intelligent systems pass the “Turing Test” and gain widespread managerial authority in human society, we must be vigilant about the new forms of discrimination and social injustice they may introduce. Only through effective regulation, ethical standards, and governance mechanisms can we balance the positive and negative impacts of technological transformation in the intelligent era, preventing potential harms from algorithmic decisions that further erode social justice.
5.5 Possible Paths to Address Algorithmic Discrimination and Pursue Social Justice
With the advent of the “age of algorithms,” algorithmic discrimination is becoming increasingly diverse and, as technology and society evolve, is exhibiting more complex and profound trends. As Domingos noted, this era signifies a societal transformation dominated by high technology (Domingos, 2017, p. 3). Algorithmic discrimination is not just the internalization and amplification of traditional social biases—like overt discrimination based on race, gender, or religion—but also includes new forms arising from implicit biases in data or the algorithms’ own development processes. As intelligent systems enhance their autonomous management and decision-making capabilities, new risks of discrimination and social injustice continue to emerge.
Compared to traditional social discrimination, algorithmic discrimination exhibits unprecedented characteristics in scope, precision, and concealment. First, it is more widespread and diverse. Traditional discrimination usually hinges on obvious traits like race, gender, or religion, but algorithms can deeply mine various data sources—such as medical records, shopping histories, and social media interactions—thereby expanding the scope of discrimination. Second, algorithmic discrimination is more precise and targeted. By deeply analyzing individual profiles, algorithms can offer customized marketing or services; this same precision can be turned toward targeted forms of discrimination. Third, it is more concealed. Although algorithmic decisions may appear rational and neutral, they can utilize black-box mechanisms to hide discriminatory decisions, making them difficult to detect or challenge (Borgesius, 2020, p. 1572). What we can do is base our efforts on value philosophy and subjectivity analysis, apply the latest technological advancements, and gradually explore a systematic, comprehensive intelligent governance strategy.
First, establish a human-centered mechanism for social justice. The fundamental approach to addressing algorithmic discrimination is to consistently uphold the value that “humans are ends in themselves.” Whether in the design, development, or application of algorithms, we should base our work on core human values such as dignity, interests, and personhood, ensuring that the ultimate goal of technological advancement is to serve human well-being. This means that all intelligent systems should be strictly governed by legal and ethical standards to prevent them from becoming tools that exacerbate social injustice.
Second, ensure data justice through data protection legislation. Data is one of the root causes of algorithmic discrimination, making the establishment of comprehensive data protection laws crucial. As Hoffmann emphasized, “Bias and fairness issues are at the core of data justice” (Hoffmann, 2019, p. 900). To eliminate algorithmic discrimination, we must ensure data integrity, accuracy, and impartiality, avoiding discriminatory practices during data collection and usage. Additionally, the privacy and security of personal data should be strictly protected to prevent malicious use and abuse.
Third, strengthen both ethical and technical regulation of algorithms. To prevent misuse of algorithms, we must enhance oversight of those developing and applying them. Researchers should consistently adhere to just values when designing and coding algorithms, ensuring they meet ethical standards. Governments and society should impose greater constraints on power structures and capital forces that might influence algorithm design and application, preventing them from promoting discriminatory or unjust decisions through algorithms.
Fourth, establish a comprehensive, dynamic algorithm oversight mechanism. To ensure algorithms do not create new forms of social discrimination, we need robust oversight systems involving government, enterprises, the public, academia, and other stakeholders. Ethics committees can serve as neutral third parties to regularly review and assess algorithm design, application, and social impact, ensuring they meet basic social justice requirements. Scrutiny should be intensified, especially for algorithms that might negatively affect vulnerable groups. A minimal example of one such periodic check is sketched after these recommendations.
Fifth, enhance individuals’ digital literacy and rights awareness. Those affected by algorithms need to improve their skills and abilities, master basic digital literacy, and safeguard their “right to know” and “right to appeal”. They should increase their sensitivity to algorithmic discrimination, promptly identifying and reporting unjust treatment. Individuals should have the awareness and capability to combat social discrimination to prevent the negative impacts of algorithmic discrimination.
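One concrete check such an oversight mechanism could run routinely is a selection-rate comparison across groups. The sketch below applies the widely used “four-fifths” screening heuristic; the 0.8 threshold and the toy audit log are illustrative assumptions, not standards proposed by this paper.

```python
# Minimal sketch of a periodic disparate-impact check an ethics committee might run.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs from an audit log."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` x the highest rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Toy audit log: 60/100 applicants selected in group A, 30/100 in group B.
audit_log = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 30 + [("B", False)] * 70
print(selection_rates(audit_log))        # {'A': 0.6, 'B': 0.3}
print(disparate_impact_flags(audit_log)) # {'A': False, 'B': True} -> group B flagged for review
```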
6. Conclusion
In the intelligent era, algorithmic discrimination has emerged as a new social issue accompanying technological development. It not only continues and intensifies traditional social discrimination but also presents new challenges arising from the autonomous evolution of intelligent systems. Its hidden, complex, and precise nature makes the threat to social justice more widespread and profound. Addressing this issue requires going beyond technical fixes to deeply explore its roots in value philosophy and to advance governance through institutional, legal, ethical, and technological means.
A human-centered value stance is crucial for resolving algorithmic discrimination. Only by establishing a justice mechanism based on human dignity, freedom, and equality—supplemented with strict data protection laws and comprehensive algorithm oversight—can we prevent technology from negatively reshaping social structures. Moreover, relevant stakeholders must strengthen their moral responsibilities and technical standards, while individuals need to enhance their digital literacy and awareness of rights. Through interdisciplinary collaboration and dynamic supervision, society can effectively mitigate the potential risks of algorithmic discrimination, ensuring that intelligent technologies truly serve the well-being of all humanity. Governing algorithmic discrimination in the construction of future intelligent societies is a long-term and complex task that requires sustained effort and deep exploration. By collaboratively integrating value philosophy with science and technology, we hope to eliminate discrimination while opening new possibilities for achieving social justice. This is not only a critical issue of our time but also a key step toward promoting the sustainable development of human society.
References
Beer, D. (2013). Popular culture and new media. Palgrave Macmillan.
Borgesius, F. J. Z. (2020). Strengthening legal protection against discrimination by algorithms and artificial intelligence. The International Journal of Human Rights, 24(10).
Brockman, J. (2019). Possible Minds: 25 Ways of Looking at AI. (Wang, J. Y., Trans.). Hangzhou: Zhejiang People’s Publishing House.
Bucher, T. (2013). The friendship assemblage: Investigating programmed sociality on Facebook. Television & New Media, 14(6).
Domingos, P. (2017). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Huang, F., Trans.). Beijing: CITIC Press.
Ezrachi, A. & Stucke, M.E. (2018). Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Yu, X., Trans.). Beijing: CITIC Press.
Fisher, E. (2019). How algorithms see their audience: Media epistemes and the changing conception of the individual. Media, Culture & Society, 41(8).
Fourcade, M., & Healy, K. (2017). Seeing like a market. Socio-Economic Review, 15(1).
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (p. 167). MIT Press.
Harari, Y. N. (2017). Homo deus: A Brief History of Tomorrow (Lin, J. H., Trans.). Beijing: CITIC Press.
Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7).
Issar, S., & Aneesh, A. (2022). What is algorithmic governance? Sociology Compass, 16(1).
Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2).
Katzenbach, C., & Ulbricht, L. (2019). Algorithmic governance. Internet Policy Review, 8(4).
Kenney, M., & Zysman, J. (2020). The platform economy: Restructuring the space of capitalist accumulation. Cambridge Journal of Regions, Economy and Society, 13(1).
Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3).
Neyland, D., & Möller, N. (2017). Algorithmic IF…THEN rules and the conditions and results of power. Information, Communication & Society, 20(1).
Scaruffi, P. (2017). Human 2.0 (Niu, J. X., & Yan, J. L., Trans.). Beijing: CITIC Press.
Shorey, S., & Howard, P. (2016). Automation, big data, and politics: A research review. International Journal of Communication, 10.
Susskind, J. (2022). Future Politics: Living Together in a World Transformed by Tech (Li, D. B., Trans.). Beijing: Beijing Daily Press.
Wang, Q. (2020). An examination of the gatekeeping standards of Weibo’s “hot search” from the perspective of critical algorithm research. Chinese Journal of Journalism & Communication, (7).
Yeung, K., & Lodge, M. (2020). Algorithmic Regulation (Lin, S. W., & Tang, L. Y., Trans.). Shanghai: Shanghai People’s Publishing House.
Zhang, A. J. (2021). The risks and regulation of “Algorithm Leviathan”. Exploration and Free Views, 1.
Zhang, L. H. (2021). Regulation of Algorithms in the Era of Artificial Intelligence. Shanghai: Shanghai People’s Publishing House.
Zhang, Y. H., Qin, Z. G., & Xiao, L. (2017). The discriminatory nature of big data algorithms. Studies in Dialectics of Nature, 5.
Zheng, Z. F. (2019, June 23). Beware of the hidden risks of algorithmic discrimination. Guangming Daily, 7.


