A Government-Guided, Market-Executed AI Assessment Framework
- Alina Huang
- Nov 18
- 5 min read
A Government-Guided, Market-Executed Assessment Framework for Fair, Accountable, and Trustworthy AI
Original Proposal by Harry Qu, Hunter Dorwart
Background
AI assessments are becoming a key feature of regulating AI.¹ For instance, the EU AI Act requires conformity assessments and fundamental rights impact assessments for certain high-risk AI systems. China similarly requires AI service providers to conduct assessments and file them with the Cyberspace Administration of China (CAC). While jurisdictions have implemented different regulatory regimes, the overall processes for AI assessments share key similarities. They are well summarised by the four core functions of the NIST AI Risk Management Framework:
Govern: establish the strategy for AI risk; define roles and responsibilities.
Map: identify the potential risks and vulnerabilities.
Measure: evaluate and quantify those risks using appropriate metrics and tools.
Manage: take action to mitigate the risks and continuously monitor the system for new issues.
Current Gaps
Despite a proliferation of AI assessment regimes worldwide, we believe most approaches contain shortcomings. These typically include:
Lack of Substance and Accountability: Some regulatory frameworks are overly permissive, leaving AI safety and fairness goals at the whim of voluntary corporate commitments. In the U.S., for example, this has created a regulatory vacuum that critics dismiss as “mere gimmickry” (Neaher, 2025) and that allows for fragmented policies that may even hinder innovation (Frazier and Thierer, 2025).
Procedural Overload: On the flip side, in state-driven models, security assessments are often so procedurally complicated that they impose rigid compliance burdens even on mundane technologies. This is the case, for instance, in China.¹
Mismatched Priorities: AI assessments are also subject to regulatory capture, where political interests crowd out objectives of fairness and ethics. In both China and the United States, for example, a preoccupation with the content that AI systems generate has led to the neglect of other critical risks. This gap in oversight leaves consumers vulnerable to unaddressed ethical and safety issues.
Confusion and Regulatory Lag: Uncertainty about how AI assessments will be used can create regulatory lag and a lack of standardisation. This is particularly acute where legal regimes require numerous assessments under different frameworks, such as privacy and consumer protection law, and where companies are unsure how to meet the requirements.
In light of these limitations, we think there is a better policy approach to AI assessments, one that synthesises key elements of existing practices.
Key Goals
A more functional security assessment framework should reflect at least the following key considerations:
Dual-Track Collaboration: Governments should act as the overall rule- and principle-setters, while promoting standardisation through collaboration with, and support of, recognised standards-setting bodies to ensure uniformity. This creates a dual track that leaves the creation of specific assessment tools and certification procedures to the market, under government guidance.
Risk-based approach: To avoid burdening innovation, security assessments should continue to adopt a risk-based, tiered approach. For non-critical AI systems, companies should lead the assessment process, with ex-post self-certifications that their AI technologies have been developed with AI ethics principles in mind. For critical systems that could seriously endanger consumer rights, national security, or the public interest, however, ex-ante mandatory reviews should be conducted by governments or authorised third parties.
Comprehensive assessment: The scope of the assessment must be broad. We think the emerging international consensus around AI principles could be leveraged much as the Fair Information Practices (FIPs) were for data protection law. This would help streamline assessments and avoid creating overlapping priorities.²
Our Proposal
Our proposed security assessment framework is built on a dual-track system in which the government establishes the rules (with private-sector participation commensurate with the level of risk and sophistication of the AI systems) and the market then implements them. This approach is designed for efficiency and accountability: the government takes the lead by providing clear guidance and a unified framework, but it minimises red tape and duplicative assessments by deferring to industry on how assessments should be conducted for lower-risk technologies.
To accomplish this, our proposed framework would address the gaps in current approaches by applying a dynamic, lifecycle-based approach to compliance. Key to this is a compliance-by-design philosophy in which companies conduct a tiered ex-ante assessment during development, followed by robust ex-post accountability ensured through continuous transparency and post-market audits.
1. Ex-ante: Tiered Assessment
Assessments should follow a two-tiered, risk-based approach to design and deployment. In contrast with Chinese law, which mandates the same assessment for any company whose product has “public attributes” (Gong and Dorwart, 2023), our framework rejects a one-size-fits-all approach.
For most non-critical and low-risk AI systems, companies would have the flexibility to conduct their own assessments, documenting how the design and deployment of their AI technologies adhere to key principles such as fairness, including through best-practice bias and discrimination testing metrics (one such metric is sketched below). Companies could either conduct an internal self-assessment or, if their resources are limited, engage a qualified third-party firm.
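To make “bias and discrimination testing metrics” concrete, here is a minimal sketch of one widely used measure, the demographic parity difference (the gap in positive-outcome rates across groups). The function name, sample data, and any acceptable threshold are illustrative assumptions for this post, not requirements of the framework.

```python
# Minimal sketch of one common bias-testing metric a company might report in a
# self-assessment. Data and names are illustrative assumptions only.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]                   # hypothetical model outputs
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # hypothetical protected attribute
    gap = demographic_parity_difference(preds, groups)
    print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

In practice a self-assessment would report several such metrics alongside the testing methodology, rather than relying on any single number.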
For critical AI systems that could endanger state security or public interests, or severely affect individual rights, however, a mandatory review would be required before deployment, conducted by the government or an authorised third party with the necessary expertise and in conformity with recognised standards.
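As a rough illustration of how this two-tier routing could be encoded in a compliance tool, the sketch below maps a system’s risk profile to the assessment route described above; the criteria, field names, and labels are assumptions made for this example, not definitions prescribed by the framework.

```python
# Hypothetical sketch of the two-tier, ex-ante routing logic described above.
# The risk criteria and labels are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    endangers_state_security: bool
    endangers_public_interest: bool
    severely_affects_individual_rights: bool

def ex_ante_route(profile: AISystemProfile) -> str:
    """Return the assessment route a system would follow before deployment."""
    critical = (
        profile.endangers_state_security
        or profile.endangers_public_interest
        or profile.severely_affects_individual_rights
    )
    if critical:
        # Critical tier: mandatory pre-deployment review by the government or
        # an authorised third party, against recognised standards.
        return "mandatory review"
    # Non-critical tier: company-led self-assessment, or a qualified third-party firm.
    return "self-assessment"

print(ex_ante_route(AISystemProfile(False, False, True)))  # -> mandatory review
```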
2. Ex-post: Continuous Accountability
While an ex-ante assessment is a crucial first line of defence, a complete accountability loop requires a robust system of continuous, post-market oversight. This framework would establish two core mechanisms: public disclosure and government audits.
Public Disclosure: After an AI system passes its assessment, the company would be required to disclose the results publicly, for example on its website or in its Terms of Use. The disclosure should also specify whether the assessment was conducted in-house or by a third party, and the results would simultaneously be filed with the government.
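For illustration only, a disclosure filed under this mechanism might contain fields along the following lines; the schema and field names are assumptions for this sketch, not a prescribed format.

```python
# Hypothetical example of the kind of record a company might publish on its
# website or in its Terms of Use and simultaneously file with the regulator.
# Field names and values are illustrative assumptions, not a prescribed schema.
import json

disclosure = {
    "system_name": "ExampleBot",                     # hypothetical product
    "risk_tier": "non-critical",                     # "non-critical" or "critical"
    "assessment_type": "internal self-assessment",   # or "third party" / "government review"
    "assessor": "ExampleCorp compliance team",
    "assessment_date": "2025-11-01",
    "principles_covered": ["fairness", "accountability", "transparency"],
    "result": "pass",
    "filed_with_regulator": True,
}

print(json.dumps(disclosure, indent=2))
```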
Post-Market Audits: Administrative bodies should retain the authority to conduct post-market audits with full investigatory and supervisory powers.
Effectiveness Assessment
| Assessment Metric | Proposed Framework | EU | China | U.S. |
| --- | --- | --- | --- | --- |
| Fairness | Monitoring through ex-ante and ex-post checks, combining self-regulation with government oversight where required. | Principled in theory, but uncertainty over implementation and core requirements creates regulatory inertia. | Limited focus; prioritises social stability over fairness review. | Depends on voluntary compliance, with few enforceable standards for fairness. |
| Accountability | A complete loop of mandatory ex-ante assessment and strict ex-post audits. | Weakened by exemptions and complex bureaucracy. | Overly procedural and lacking substantive technical verification. | Fragmented, with no unified legal framework to enforce accountability. |
| Trustworthiness | Public disclosure and government endorsement build market trust; clear rules reduce uncertainty. | Aims to build trust, but its complexity may confuse the public and industry. | Based on state control rather than market transparency; public trust relies on government alone. | Relies on corporate self-governance; the lack of standards hinders trust. |
| Adaptability | A dynamic framework driven by the market. | Prone to lag, as standardisation processes are often slow. | Rigid administrative reviews unsuited to AI’s pace. | Depends on voluntary compliance, failing to ensure coherent adaptation. |
Economic Analysis
Our proposal attempts to offer economic benefits by creating value and reducing costs for all stakeholders.
For enterprises, a tiered assessment model can minimise unnecessary regulatory burdens. Promoting self-assessment for low-risk technologies would free up resources for innovation and research, since a principles-based approach would not force companies to document assessments through overly complicated procedures. The establishment of a unified framework and government-endorsed standards could also reduce transaction costs across the entire AI ecosystem.
For governments, the model aims to achieve efficiency. It avoids the need for a large, inflexible agency to review every AI product, allowing regulators to focus mandatory reviews on the critical systems that produce the largest externalities. The proposal also attempts to address regulatory capture by encouraging the participation of, and competition among, third-party institutions and watchdogs.
For the public, the framework’s ex-post public disclosure mechanism enhances market transparency, which helps mitigate information asymmetry, build greater consumer trust, and penalise those who are deceptive or fail to adhere to the rules.
Works Cited
Frazier, K. and Thierer, A., 2025. 1,000 AI Bills: Time for Congress to Get Serious About Preemption. Lawfare.
Glynn, F., 2025. How to Conduct an AI Security Assessment. Mindgard.
Gong, J. and Dorwart, H., 2023. AI Governance in China: Data Protection and Content Moderation. Bird & Bird.
National Institute of Standards and Technology, 2023. Artificial Intelligence Risk Management Framework (AI RMF 1.0).
Neaher, G., 2025. AI Regulation: Bigger Is Not Always Better. Stimson Center.


