
From Discussion to Decision-Making

Our proposal library showcases the policy outputs emerging from collaboration across the Lab. Browse memos, letters, and drafts categorized by theme. Proposals are downloadable, open to feedback, and regularly submitted to relevant institutions — because youth-led policy deserves a seat at the table.

Government Review of Algorithmic Bias in Public Services

In November 2020, the UK Government published its Review into Bias in Algorithmic Decision‑Making, led by the Centre for Data Ethics and Innovation (CDEI). It focused on four domains: recruitment, financial services, policing, and local government. The review concluded that:

  • Bias in algorithms is both predictable and pervasive.

  • Effective mitigation must be structured and integrated across the entire system lifecycle.

  • Organizations must adopt a data‑informed approach to detecting and addressing bias, with mechanisms for human intervention where necessary.

The review recommends building capacity within public bodies to conduct bias audits and to establish governance frameworks that ensure humans remain accountable for outcomes.

Sectoral Regulator Initiatives

In the United Kingdom, the regulation of algorithmic bias is managed through a decentralised network of sector-specific regulators, each applying fairness principles within their existing legal frameworks. Rather than establishing a central AI regulator, the UK government relies on these authorities to embed anti-discrimination considerations into their oversight of AI systems.


Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA)

The FCA and PRA have engaged with stakeholders on fairness in automated financial services, including credit scoring and insurance. A 2023 consultation found strong support for aligning bias mitigation with the UK GDPR and the Equality Act 2010, with respondents favouring enhanced guidance and governance standards over new statutory duties for senior managers.

Digital Regulation Cooperation Forum (DRCF)

The DRCF, comprising the ICO, Ofcom, CMA, and FCA, has prioritized fairness in AI as a cross-sector issue. In collaboration with the Equality and Human Rights Commission, it is working to develop shared definitions and regulatory approaches across communications, digital platforms, and consumer services.


While these efforts show growing alignment, challenges remain around regulatory fragmentation, resource limitations, and the lack of mandatory consistency in how bias is assessed and mitigated across sectors.

Non‑Regulatory Principles and Guidance for Fairness

AI Regulatory White Paper (2023)

The UK’s pro‑innovation White Paper proposes no new standalone bias laws. Instead, it defines five cross-sector principles, including fairness, to be enforced by existing regulators, which are expected to integrate bias measures into their sector‑specific remits. An initial central monitoring function, supported by ministerial oversight, will track regulators’ progress.

Guidance Documents

  • ICO’s Guidance on AI and Data Protection (March 2023) expands on fairness within data protection law, advising organizations to:

    • Perform Data Protection Impact Assessments.

    • Assess disparate impacts on protected characteristics (a minimal sketch of such a check follows this list).

    • Embed fairness throughout the AI lifecycle.

  • Implementing the UK’s AI Regulatory Principles, published by the Centre for Data Ethics and Innovation, encourages regulators to reference technical bias standards (e.g., IEEE P7003, ISO/IEC TR 24027) and to adopt a fairness‑by‑design mindset during development and audits.
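
The ICO guidance does not prescribe a single fairness metric. One common way to quantify disparate impact is the ratio of favourable-outcome rates between an unprivileged and a privileged group; under the widely used "four-fifths rule", a ratio below 0.8 is treated as a warning sign. Below is a minimal sketch of such a check; the column names and data are invented for illustration:

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str,
                           privileged: str, unprivileged: str) -> float:
    """Ratio of favourable-outcome rates: unprivileged / privileged.

    A ratio well below 1.0 (below 0.8 under the "four-fifths rule")
    suggests the unprivileged group receives favourable outcomes less often.
    """
    rate_privileged = df.loc[df[group] == privileged, outcome].mean()
    rate_unprivileged = df.loc[df[group] == unprivileged, outcome].mean()
    return rate_unprivileged / rate_privileged

# Invented example: 1 = favourable decision (e.g. shortlisted), 0 = rejected
decisions = pd.DataFrame({
    "sex":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [1,   0,   0,   1,   1,   1,   0,   1],
})

ratio = disparate_impact_ratio(decisions, outcome="selected", group="sex",
                               privileged="M", unprivileged="F")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.75 ≈ 0.67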

Technical Standards and Toolkit Adoption

The United Kingdom encourages the use of international technical standards and open-source toolkits to support the assessment and mitigation of algorithmic bias across the AI lifecycle.

International Standards

Key references include:

  • IEEE P7003, which guides bias identification and mitigation;

  • IEEE 7000, focused on embedding ethical values into AI system design;

  • ISO/IEC TR 24027:2021, which outlines methods for assessing and reducing data and algorithmic bias.


These standards are cited in UK policy guidance, including the CDEI’s implementation advice for regulators, as tools to harmonize fairness approaches across sectors.
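
By way of illustration, ISO/IEC TR 24027 discusses outcome-based group fairness measures of the kind these standards cover. One such measure is the gap in true-positive rates between groups, often called the equal opportunity difference. A minimal sketch, with invented validation data:

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group, privileged, unprivileged):
    """TPR(unprivileged) - TPR(privileged); zero indicates that qualified
    members of both groups are correctly identified at the same rate."""
    def tpr(mask):
        positives = (y_true == 1) & mask
        return np.mean(y_pred[positives] == 1)
    return tpr(group == unprivileged) - tpr(group == privileged)

# Invented data: y_true = ground truth, y_pred = model decision
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

print(equal_opportunity_difference(y_true, y_pred, group, "M", "F"))  # ≈ -0.33
```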

Toolkits and Methodologies


  • IBM’s AI Fairness 360 offers statistical metrics and algorithms to detect and address bias in machine learning models (a short usage sketch follows this list).

  • Bias Impact Assessments (BIAs) and related academic frameworks are used in public sector evaluations, particularly in recruitment, welfare, and health systems.
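
As a sketch of how AI Fairness 360 is typically used, the example below computes a dataset-level fairness metric and applies one of the toolkit's pre-processing mitigations. The data is invented, and the calls shown follow the library's documented interfaces, which may change between releases:

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Invented data: 'sex' is the protected attribute (1 = privileged group)
df = pd.DataFrame({
    "sex":      [0, 0, 0, 0, 1, 1, 1, 1],
    "approved": [1, 0, 0, 1, 1, 1, 0, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["sex"])
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Dataset-level fairness metric before any mitigation
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Disparate impact:", metric.disparate_impact())

# One of the toolkit's pre-processing mitigations: reweight training
# examples so that outcomes are independent of the protected attribute
reweighing = Reweighing(unprivileged_groups=unprivileged,
                        privileged_groups=privileged)
dataset_reweighted = reweighing.fit_transform(dataset)
```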
