The Greater Threat: Amplified Stupidity

In February 2022, as Russian forces massed on Ukraine's border, intelligence agencies across the Western world provided clear, consistent warnings of an imminent attack. Despite this wealth of information, numerous leaders dismissed the possibility of a full-scale invasion.

This wasn't a failure of intelligence gathering or information sharing. Rather, it represented a more fundamental failure of judgment: the willful rejection of evidence that contradicted preferred beliefs, the substitution of wishful thinking for critical analysis, and the prioritization of ideological frameworks over observable reality. In short, it was stupidity in action.

Poor Judgment Enhanced by Algorithmic Power

Stupidity, as we've defined it, involves not the absence of knowledge but its misapplication—the failure to use information effectively or to recognize when information is missing. It manifests through cognitive laziness, motivated reasoning, intellectual arrogance, and willful blindness. When these patterns of poor judgment intersect with artificial intelligence, three particularly troubling dynamics emerge.

Confirmation Acceleration

Confirmation Acceleration occurs when AI systems rapidly provide information that confirms existing biases, creating an illusion of comprehensive research when they've merely accelerated confirmation bias. Traditional confirmation bias—our tendency to seek information that supports our existing beliefs—has always been a limitation of human cognition. But it operated within practical constraints; finding confirming evidence required some effort, and contradictory information might be encountered along the way.

AI systems, particularly those designed to maximize user satisfaction, can remove these practical constraints. They can instantaneously generate vast amounts of content that aligns with a user's expressed viewpoint, producing the appearance of overwhelming evidence for virtually any position.

For leaders already predisposed toward certain conclusions, this dynamic creates a dangerous feedback loop. A CEO convinced of a particular strategic direction can use AI to generate analysis that supports this direction, encountering none of the friction that might traditionally prompt reconsideration. A policymaker committed to a specific approach can find endless justifications for their position without grappling with serious counterarguments.

Consider the case of Theranos founder Elizabeth Holmes, who maintained unwavering confidence in her company's blood-testing technology despite mounting evidence of its failure. While Holmes didn't have today's AI tools at her disposal, she exemplified the pattern of dismissing contradictory evidence and seeking confirmation for predetermined conclusions. With contemporary AI, such selective information processing becomes even more frictionless and comprehensive.

Decision Laundering

Decision Laundering happens when leaders use AI systems to add a veneer of objectivity and thoroughness to what are essentially intuitive or ideologically driven decisions. By running predetermined conclusions through AI analysis, decision-makers can create the appearance of data-driven, systematic thought processes without actually engaging in them.

This pattern resembles what organizational scholars call “strategic misrepresentation”—the deliberate presentation of selective information to justify decisions already made on other grounds. AI systems make this practice more effective by generating sophisticated, technical-sounding justifications that may be difficult for others to evaluate or challenge.

In corporate settings, we see this when executives use complex AI-generated financial models to justify decisions actually driven by personal incentives or organizational politics. In policy contexts, it appears when officials use algorithmic simulations to support positions determined by ideological commitments rather than evidence.

Former WeWork CEO Adam Neumann exemplified this pattern when he used increasingly elaborate financial metrics and technological visions to justify a fundamentally unsustainable business model. These custom metrics created the impression of data-driven management while actually obscuring basic economic realities. Modern AI tools would make such obfuscation even more sophisticated and convincing.

Artificial Consensus

Artificial Consensus emerges when leaders use AI to create the illusion of widespread agreement with their position. By generating varied content from seemingly diverse sources—different writing styles, apparent perspectives, or fictional personas—AI can simulate consensus where none exists.

This manufactured consensus can insulate leaders from recognizing genuine disagreement or legitimate concerns about their decisions. It can also be weaponized to create social pressure on others to conform to the leader's preferred position, presenting dissenters as outliers against apparent widespread agreement.

Social media platforms have already revealed the power of artificial consensus through coordinated inauthentic behavior—networks of fake accounts creating the appearance of organic consensus. AI dramatically scales this capability, allowing the generation of seemingly diverse content that actually promotes a singular viewpoint.

Former Theranos president Ramesh “Sunny” Balwani reportedly created an environment where questioning the company's technology was treated as disloyalty, enforcing an artificial consensus that everything was working as claimed. AI systems can enhance such environments by generating content that makes dissenting positions appear unreasonable or poorly informed.

Across these patterns, we see how AI doesn't create stupidity but amplifies it—removing friction that might otherwise limit poor judgment, adding persuasive power to flawed reasoning, and creating illusions of validation that discourage critical reflection.

When Bad Decisions Scale: Examples from Social Media to Public Policy

The impact of AI-amplified stupidity becomes clearest when we examine specific domains where algorithmic systems already influence decision-making at scale. Three areas—social media governance, financial markets, and public policy—demonstrate both the mechanisms of amplification and their potential consequences.

Social Media Governance

Social Media Governance represents a domain where algorithmic amplification already intersects with human judgment in complex ways. Platform leaders make decisions about content policies, recommendation systems, and community standards that affect billions of users.

When Facebook (now Meta) optimized its recommendation algorithms for “meaningful social interactions” in 2018, it inadvertently created incentives for divisive, emotionally charged content. This decision, made with incomplete understanding of its likely consequences, contributed to political polarization and the spread of misinformation globally.

Similarly, when Twitter (now X) implemented inconsistent moderation policies around COVID-19 information, it created confusion about what constituted harmful misinformation versus legitimate scientific debate.

As generative AI becomes integrated into social media platforms, these judgment failures risk becoming more consequential. AI content generation and moderation systems can implement flawed human judgments more efficiently and at greater scale.

Financial Markets

Financial Markets provide another domain where algorithmic systems already amplify human judgment, both good and bad. Algorithmic trading, automated credit scoring, and AI-powered investment analysis now play significant roles in capital allocation and risk management.

The 2008 financial crisis illustrated how poor judgment—specifically, overconfidence in quantitative models and underestimation of systemic risk—can produce catastrophic outcomes when implemented at scale through financial technologies.

More recently, the 2021 implosion of Archegos Capital Management demonstrated how advanced financial technologies can amplify individual poor judgment. Using sophisticated derivatives and leveraged positions, Archegos founder Bill Hwang turned personal investment misjudgments into a $10 billion loss that threatened broader market stability.

Public Policy

Public Policy represents perhaps the most consequential domain for AI-amplified stupidity, as policy decisions affect entire populations through healthcare systems, economic regulations, environmental standards, and social programs.

The 2003 decision to invade Iraq based on flawed intelligence about weapons of mass destruction reflected not just factual errors but motivated reasoning and willful blindness to contradictory evidence. The Federal Reserve's decision to keep interest rates low through the mid-2000s despite growing evidence of housing market instability demonstrated intellectual arrogance about the ability to manage complex economic systems.

As AI systems become integrated into policy analysis and implementation, these judgment failures risk becoming more consequential. AI can generate more sophisticated justifications for ideologically driven policies, create more convincing simulations that appear to support predetermined conclusions, and implement flawed regulatory frameworks more efficiently.

Across these domains, we see common patterns in how AI amplifies poor judgment:

  1. Implementing flawed human judgments more efficiently and at greater scale
  2. Creating more persuasive justifications for decisions driven by non-rational factors
  3. Providing apparent objectivity to what are essentially subjective or ideological choices
  4. Removing friction that might otherwise prompt reconsideration of poor decisions
  5. Generating artificial validation that insulates decision-makers from contrary evidence

Power as a Stupidity Amplifier: Leadership, Authority, and Cognitive Failure

The examples discussed above highlight a crucial insight: power itself functions as a stupidity amplifier, independently of technology. Leaders in positions of authority have always had their decisions—wise or foolish—amplified by the systems they control.

What makes this particularly dangerous is that power frequently insulates decision-makers from feedback that might correct their thinking. The dynamics of organizational hierarchy create several reinforcing patterns:

Deference Cascades

Deference Cascades occur when subordinates hesitate to challenge leaders' judgments, even when they recognize potential errors. Boeing's 737 MAX crisis exemplified this pattern. Engineers and test pilots identified concerns about the aircraft's MCAS system early in development, but these concerns were systematically minimized as they moved up the organizational hierarchy.

Reality Distortion Fields

Reality Distortion Fields form around powerful leaders when their status leads others to accept their assertions without the scrutiny they would apply to claims from peers or subordinates. Elizabeth Holmes created such a reality distortion field at Theranos, where her vision of revolutionary blood testing technology overrode mounting evidence of technical impossibility.

Ideological Capture

Ideological Capture occurs when leaders allow partisan, ideological, or tribal frameworks to override evidence-based reasoning. Jack Dorsey's leadership at Twitter demonstrated aspects of ideological capture, as absolute commitments to free speech principles sometimes overrode practical concerns about platform harm. Similarly, Mark Zuckerberg's commitment to connecting people globally sometimes blinded Facebook to the harmful social dynamics their platform enabled.

Institutional Validation

Institutional Validation reinforces leaders' poor judgment when organizational systems—performance metrics, reporting structures, incentive systems—are designed to validate rather than challenge their decisions. Wells Fargo's account fraud scandal emerged from exactly this dynamic.

These power-driven amplification patterns interact synergistically with technological amplification. When a powerful leader with poor judgment gains access to AI tools that accelerate confirmation bias, generate artificial consensus, and provide sophisticated justifications for predetermined conclusions, the result can be a particularly dangerous form of amplified stupidity.

The Compounding Effect of Amplified Stupidity

Beyond the immediate consequences of individual bad decisions, AI-amplified stupidity creates compounding effects that can damage social systems over time.

Epistemic Degradation

Epistemic Degradation occurs when repeated exposure to misleading or false information generated at scale gradually erodes shared standards for evaluating truth claims. This manifests in phenomena like “truth decay”—characterized by increasing disagreement about facts, blurring of the line between opinion and fact, and declining trust in formerly respected sources of information.

Competence Atrophy

Competence Atrophy emerges when overreliance on AI systems for cognitive tasks leads to declining human capability in critical thinking, analysis, and judgment. Students who rely on AI to complete assignments without engaging with the material may receive passing grades but fail to develop the critical thinking skills the assignments were designed to build.

Trust Collapse

Trust Collapse follows when AI-amplified poor judgment leads to highly visible failures that undermine public confidence in institutions, expertise, and information systems. The 2008 global financial crisis resulted partly from overreliance on sophisticated quantitative models that inadequately accounted for systemic risk.

Accountability Diffusion

Accountability Diffusion happens when the involvement of AI systems in decision processes makes it difficult to assign responsibility for harmful outcomes. “The algorithm made me do it” becomes a convenient deflection of responsibility, even when human judgment fundamentally shaped the algorithm's behavior.

Recent examples of algorithmic bias in hiring, lending, and criminal justice systems demonstrate this dynamic. When algorithmic systems produce discriminatory outcomes, responsibility often bounces between technologists who claim they merely implemented client requirements and executives who claim they relied on technical expertise.

Together, these compounding effects—epistemic degradation, competence atrophy, trust collapse, and accountability diffusion—create a particularly dangerous form of systemic risk. This compounding nature of AI-amplified stupidity makes it potentially more dangerous than AI-amplified ignorance.

Understanding these mechanisms isn't cause for technological pessimism but for renewed focus on the human dimensions of our technological future. The primary challenge isn't controlling artificial intelligence but cultivating human wisdom—the sound judgment necessary to deploy technology beneficially rather than destructively.