LTP News Sharing:
President Trump’s executive order addressing what he calls “Woke AI” in the federal government did well to address political bias and progressive orthodoxy, says Free Enterprise Project Executive Director Stefan Padfield, but “it arguably leaves unnamed one of the most consequential mechanisms by which ideology enters algorithmic systems: the doctrine of ‘disparate impact.’”
In a commentary published at RealClearMarkets, Stefan explains why the legal theory of disparate impact is dangerous and needs to be specifically addressed in this context:
Rather than ensuring equal treatment, disparate impact often requires institutions to alter outcomes until all groups are statistically equal — regardless of merit, behavior, or circumstance.
In AI, this manifests in the form of algorithmic “fairness” metrics designed to equalize outcomes across racial or gender lines. For example, an AI used in hiring might be penalized if it selects more Asian or white applicants than Black or Latino ones, even if the differences are based on objective qualifications. To “correct” this, developers are incentivized to manipulate data inputs or tweak model outputs — in effect, building racial preferences into the code.
This is precisely the kind of ideological intervention Trump’s order is supposed to prevent. But without explicitly naming disparate impact as a prohibited basis for AI development, the order risks allowing its influence to persist under a more technocratic veneer.
Read Stefan’s commentary in full below.
President Trump’s recent Truth Social warning about the rise of “Woke AI” echoes his prior executive order, Preventing Woke AI in the Federal Government, which seeks to confront artificial intelligence systems trained or tuned to reflect ideological narratives rather than objective truth. And yet, even as that order takes aim at political bias and progressive orthodoxy in AI, it arguably leaves unnamed one of the most consequential mechanisms by which ideology enters algorithmic systems: the doctrine of “disparate impact.”
“Woke AI,” according to Trump’s order, refers to systems that “sacrifice truthfulness and accuracy to ideological agendas.” The order prohibits federal agencies from procuring or developing AI systems that advance these ideologies and mandates rigorous audits to ensure AI outputs are free from political bias. This is a promising start. But the failure to directly call out the pernicious role of “disparate impact” — a legal theory that undergirds many of the policies Trump’s order ostensibly seeks to combat — potentially leaves the door open for continued ideological influence.
To understand why this matters, consider the role that disparate impact plays in shaping institutional policy. Originally rooted in civil rights law, the theory of disparate impact holds that a policy may be deemed discriminatory if it disproportionately affects a protected group — even without intent. As Gail Heriot has extensively argued in her analysis of the doctrine, this framework has evolved into a tool for coercive social engineering. Rather than ensuring equal treatment, disparate impact often requires institutions to alter outcomes until all groups are statistically equal — regardless of merit, behavior, or circumstance.
In AI, this manifests in the form of algorithmic “fairness” metrics designed to equalize outcomes across racial or gender lines. For example, an AI used in hiring might be penalized if it selects more Asian or white applicants than Black or Latino ones, even if the differences are based on objective qualifications. To “correct” this, developers are incentivized to manipulate data inputs or tweak model outputs — in effect, building racial preferences into the code.
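To make the mechanism concrete: outcome-based fairness audits often reduce to a comparison of group selection rates, with models flagged when the ratio falls below a threshold (commonly 0.8, echoing the EEOC's "four-fifths" rule). The sketch below is purely illustrative and uses hypothetical numbers; it is not drawn from the commentary. It shows how such an audit flags a divergence in outcomes without asking what drives the underlying rates.

```python
# Illustrative sketch of an outcome-based "fairness" audit.
# Group counts and the 0.8 threshold are hypothetical example values.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def disparate_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one.
    Ratios below 0.8 are commonly flagged as disparate impact,
    regardless of why the underlying rates differ."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# Hypothetical audit of a hiring model's outputs:
rate_group_a = selection_rate(30, 100)  # 0.30
rate_group_b = selection_rate(18, 100)  # 0.18

ratio = disparate_impact_ratio(rate_group_a, rate_group_b)
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.60 -> flagged
```

Under this kind of metric, the only way to clear the audit is to move the selection rates themselves closer together, which is the incentive to adjust inputs or outputs that the commentary describes.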
This is precisely the kind of ideological intervention Trump’s order is supposed to prevent. But without explicitly naming disparate impact as a prohibited basis for AI development, the order risks allowing its influence to persist under a more technocratic veneer.
Christopher Rufo, in his breakdown of the order, notes that now “the federal government will purchase only software that is … committed to ‘ideological neutrality.’” But ideological bias often hides in plain sight. Disparate impact has been internalized by agencies, HR departments, school districts, and tech firms not as a controversial legal theory, but as a default operating principle. It has become the bureaucratic air we breathe. And in AI, where “bias audits” and “fairness interventions” are often called for, the results can be algorithms that have been explicitly trained to engineer equal outcomes — the very definition of a “woke” system.
The consequences are already evident. In education, as Manhattan Institute fellow Max Eden has detailed, school discipline policies based on disparate impact have led to perverse outcomes: “chaos and less learning than ever.” AI systems built to enforce similar equity metrics will carry these same policies into new domains — from hiring to lending to criminal justice — all while maintaining the illusion of objectivity.
In this context, any meaningful effort to combat “woke AI” must go beyond ideology as declared and confront ideology as practiced. That means tackling the operational rules, like disparate impact, that shape how outcomes are measured and adjusted.
President Trump’s executive order is a milestone in the fight against politicized technology. But it cannot achieve its full potential unless it addresses the subtle, structural mechanisms through which ideology is encoded. Adding an express prohibition against weaponizing disparate impact would close a potential loophole and send a clear message: AI should pursue truth and competence, not statistical parity by fiat.
Wokeness is not just a set of ideas; it is a system of incentives, embedded in laws, regulations, and code. If we want to root it out of AI, we must strike at its legal and conceptual core. Disparate impact is part of woke AI’s core. It must be named.
Author: Stefan Padfield

