The Algorithmic Divide: How the FDA's Historic AI Policy Unwittingly Crafts Inequity
- Alina Huang
- Nov 18
- 5 min read
Original Article by Jesstian Vincent
Introduction: Paradigm Shift and the Latent Paradox
In the vast and frequently sclerotic world of government regulation, the U.S. Food and Drug Administration’s (FDA) response to Artificial Intelligence and Machine Learning (AI/ML) in medicine stands out as a historic breakthrough. Confronted with a technology that is fundamentally dynamic, a product meant to improve after it is released, the FDA made a revolutionary shift away from the traditional paradigm, in which algorithms are “locked” at the moment of approval, toward a system built on the “Predetermined Change Control Plan” (PCCP). This policy lets AI algorithms learn and improve in the real world, with the prospect of progressively better medical diagnostics. On paper, it is a stroke of regulatory brilliance that positions the U.S. as the leader in responsible AI innovation. But a closer examination exposes a perilous fault line: by tying algorithmic progress to continuous validation against real-world data, the PCCP system actively designs a self-reinforcing loop that steadily widens the gulf between healthcare haves and have-nots, inducing a new kind of diagnostic desertification under the guise of value-neutral technological advancement.
Strength: The Brilliance of the PCCP – Agile Governance for Adaptive Technology
The PCCP model’s greatest strength is its recognition that regulating AI requires a shift from assessing a static product to supervising an enduring process. Under the older FDA model, a device is approved as it exists at a single moment in time. For a conventional device such as an MRI scanner, this is adequate. For an algorithm that recognizes diabetic retinopathy on retinal scans, the older model is debilitating: any enhancement would trigger a complete, expensive, time-consuming new approval process, suppressing innovation and leaving patients with stagnant technology.
The PCCP breaks that model. Developers can file an upfront plan that specifies:
• The Nature of Expected Changes: What modifications the algorithm will undergo (e.g., retraining on fresh data)
• The Methodology of Change: How the company will ensure changes remain safe and effective
• The Defined Boundaries: The thresholds that cannot be crossed without triggering a new review
This creates a managed pipeline for iterative improvement. An AI model for detecting breast cancer can learn from new cases across the country, constantly refining its accuracy across diverse body types and ethnicities. The strength is undeniable: the policy aligns regulation with the fundamental nature of the technology it governs, fostering an environment where AI becomes a collaborative partner that gets smarter with every patient it helps. A hypothetical sketch of what such a plan might encode follows.
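This sketch is purely illustrative: the `PCCPlan` fields, thresholds, and `within_boundaries` helper are invented for the example, not an FDA schema or real regulatory values.

```python
from dataclasses import dataclass, field

@dataclass
class PCCPlan:
    """Hypothetical encoding of the three elements a PCCP specifies.

    Field names and values are illustrative inventions, not an FDA schema.
    """
    # 1. Nature of expected changes: what the algorithm will undergo.
    expected_changes: list = field(default_factory=lambda: [
        "Quarterly retraining on newly collected, de-identified retinal scans",
    ])
    # 2. Methodology of change: how safety and effectiveness are re-verified.
    validation_protocol: str = (
        "Evaluate each retrained model on a held-out reference test set; "
        "deploy only if non-inferior to the currently cleared version."
    )
    # 3. Defined boundaries: thresholds whose breach triggers a new review.
    min_sensitivity: float = 0.87    # illustrative floor, not a real value
    min_specificity: float = 0.90
    max_auc_drop: float = 0.02       # max allowed drop vs. cleared baseline

def within_boundaries(plan, sensitivity, specificity, auc_drop):
    """True if a retrained model stays inside the pre-agreed envelope."""
    return (sensitivity >= plan.min_sensitivity
            and specificity >= plan.min_specificity
            and auc_drop <= plan.max_auc_drop)

plan = PCCPlan()
print(within_boundaries(plan, sensitivity=0.91, specificity=0.93, auc_drop=0.01))
# -> True: this update can ship under the PCCP without a new FDA submission
```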
Weakness: The Socio-Economic Blind Spot – The Data-Driven Inequity Feedback Loop
However, the very mechanism of this strength contains a devastating weakness. The PCCP model is predicated on one non-negotiable input: a continuous, high-quality stream of real-world data for validation. This unexamined prerequisite unleashes a chain of consequences with dire implications for equity.
The failure lies not in the policy text itself but in its unwritten assumptions about the landscape in which it will be implemented. The United States health system is not a level playing field for data generation. The advanced Electronic Health Record systems, dedicated data science teams, and sophisticated IT infrastructure needed to produce the clean, structured data the FDA’s framework demands are concentrated in large, well-funded Academic Medical Centers (AMCs) and wealthy urban hospital systems, whose patient populations are disproportionately white, insured, and of higher socioeconomic status.
In contrast, community hospitals, rural critical access hospitals, and safety-net providers operate on razor-thin margins. They lack budgets for advanced data infrastructure, and they serve the patient populations (rural, poor, minority, uninsured) that bear the greatest burden of health disparities. These institutions are not merely data-poor; they are data-generation poor.
The PCCP framework thus sets in motion a self-reinforcing loop of inequity:
1. The Original, Biased Precedent: First-generation AI algorithms are trained on historically biased datasets that overrepresent white, urban, and male patients.
2. The Rational Incentive of the Developer: For companies seeking to satisfy their PCCP cost-efficiently, the path of least resistance is partnership with large, high-profile AMCs.
3. The Generative Loop of Excellence: Algorithms deployed at these “centers of excellence” learn from the data generated there and improve specifically for those populations.
4. Diagnostic Desertification: Community hospitals become data sinks rather than data sources; for their patients the algorithm plateaus or drifts, degrading accuracy for precisely the populations most in need.
The PCCP model, therefore, does not merely permit this inequity; it promotes it. It constructs a positive feedback loop for the established and a negative one for the disadvantaged. The policy’s core weakness is that it maximizes technological progress and commercial viability while treating equitable outcomes as a desirable side effect rather than an integral requirement. A toy simulation after this paragraph illustrates how quickly the gap compounds.
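The sketch below assumes only that retraining improves accuracy at sites that feed the loop, while case-mix drift erodes it at sites that do not; all numbers are invented for illustration, not empirical estimates.

```python
# Toy simulation of the PCCP feedback loop. Starting accuracy and
# per-cycle rates are invented for illustration only.
acc = {"AMC": 0.88, "rural": 0.88}  # both sites start with equal accuracy
LEARNING_GAIN = 0.01   # per-cycle gain at a site that feeds the retraining loop
DRIFT_LOSS = 0.004     # per-cycle loss at a site whose data never reaches it

for cycle in range(10):
    # The data-rich AMC contributes validation data every retraining cycle...
    acc["AMC"] = min(0.99, acc["AMC"] + LEARNING_GAIN)
    # ...while the data-poor rural hospital only experiences distribution drift.
    acc["rural"] = max(0.50, acc["rural"] - DRIFT_LOSS)

print(f"After 10 cycles: AMC {acc['AMC']:.2f}, rural {acc['rural']:.2f}")
# -> After 10 cycles: AMC 0.98, rural 0.84 (the gap widens every cycle)
```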
The Compound Risks: Dangers Beyond Diagnostic Disparity
• Erosion of Trust: Communities already skeptical of the medical system will experience poorer outcomes from these tools, deepening their distrust.
• Misinvestment of Resources: Safety-net providers may waste scarce capital on AI tools never validated for their patients.
• Masking of Bias: Continuous retraining can conceal emerging demographic skews behind aggregate accuracy gains.
• Legal and Liability Quagmire: When an algorithm drifts inequitably, responsibility for the resulting harm becomes unclear.
These hazards make it impossible to treat equity as an externality.
The Path Forward: Reimagining the PCCP with Equity by Design
The answer is not to remove the PCCP but to reimagine it with equity-by-design requirements:
• Representative Data Collection Plans: Require every PCCP to specify how its validation data will reflect the full population the device is intended to serve.
• Ongoing Equity Performance Metrics: Require performance to be reported stratified by demographic subgroup, not only in aggregate, so that divergence triggers review (see the first sketch after this list).
• “Equity Bounties” and Fast-Track Channels: Offer regulatory incentives to developers who validate and improve their algorithms in underserved settings.
• Federated Learning Support: Encourage privacy-preserving training approaches that let data-poor institutions shape model updates without exporting patient records or building costly centralized infrastructure (see the second sketch after this list).
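As one illustration of what ongoing equity metrics could look like in practice, here is a minimal sketch of subgroup-stratified sensitivity with a divergence alert. The function names, record format, and the five-point gap threshold are hypothetical choices, not values drawn from FDA guidance.

```python
from collections import defaultdict

def stratified_sensitivity(records):
    """Sensitivity (true-positive rate) per demographic subgroup.

    `records` is an iterable of (subgroup, y_true, y_pred) tuples with
    binary labels; this record format is a hypothetical convention.
    """
    tp, fn = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:               # only positive cases define sensitivity
            (tp if y_pred == 1 else fn)[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def equity_gap_alert(per_group, max_gap=0.05):
    """Flag when best- and worst-served subgroups diverge beyond max_gap."""
    rates = list(per_group.values())
    return max(rates) - min(rates) > max_gap

# Illustrative check on made-up predictions:
records = [("urban", 1, 1), ("urban", 1, 1), ("urban", 1, 1), ("urban", 1, 0),
           ("rural", 1, 1), ("rural", 1, 0), ("rural", 1, 0), ("rural", 1, 0)]
rates = stratified_sensitivity(records)   # urban: 0.75, rural: 0.25
print(rates, equity_gap_alert(rates))     # gap of 0.50 -> True, triggers review
```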
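And as a sketch of the federated learning idea, here is one round of federated averaging (in the spirit of the FedAvg algorithm), assuming each hospital trains locally and shares only weight vectors; the sites, counts, and weights are invented for illustration.

```python
import numpy as np

def federated_average(site_weights, site_counts):
    """One round of federated averaging over locally trained models.

    Each hospital shares only its model weights, never patient records,
    so data-poor sites can still shape the shared model.
    """
    stacked = np.stack(site_weights)                   # (n_sites, n_params)
    coeffs = np.array(site_counts) / sum(site_counts)  # weight by sample size
    return coeffs @ stacked                            # size-weighted mean

# Illustrative round: one AMC and two small rural sites contribute updates.
global_weights = federated_average(
    site_weights=[np.array([0.90, 1.10]),   # AMC's locally trained weights
                  np.array([1.00, 0.80]),   # rural site A
                  np.array([1.10, 0.90])],  # rural site B
    site_counts=[5000, 300, 450],
)
print(global_weights)  # skews toward the AMC unless the weighting is adjusted
```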
Conclusion: From Technical Regulation to Moral Architecture
The FDA’s PCCP framework is a testament to regulatory ingenuity in governing sophisticated technology. Its capacity to foster agile, safe innovation is beyond dispute and ought to be retained. But as this analysis shows, it carries a structural weakness that works against its own potential and against the fundamental ethos of medicine. By treating the data it depends on as a neutral input rather than a finite resource distributed along America’s fault lines of inequity, the policy sets us up for a future in which medical AI stratifies us by the precision of our diagnoses. The ultimate measure of a regulatory system is not whether it enables brilliant technology, but whether it ensures that technology delivers on its promise for every member of society. True excellence in AI governance requires policies that are not only technically sophisticated but also morally intelligent, embedding justice and equity into their very code. The FDA has built the first draft of a brilliant technical framework; the urgent task now is to revise it into the moral architecture of a fairer future for medicine.
References
Food and Drug Administration. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. FDA, Jan. 2021. FDA.gov (https://www.fda.gov/media/145022/download).
Food and Drug Administration. Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions: Guidance for Industry and Food and Drug Administration Staff. FDA, Aug. 2025. Docket No. FDA-2022-D-2628. FDA.gov (https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial-intelligence).
Food and Drug Administration. Predetermined Change Control Plans for Machine Learning Enabled Medical Devices: Guiding Principles. FDA, Oct. 2023. FDA.gov.
Food and Drug Administration. Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles. FDA, Jun. 2024. FDA.gov.