Chapter 6: The Greater Threat: Amplified Stupidity

The Alarming Rise of Stupidity Amplified

In February 2022, as Russian forces prepared to invade Ukraine, intelligence agencies across the Western world issued clear, consistent warnings about the imminent attack. These warnings were based on extensive surveillance, communications intercepts, and troop movements visible in satellite imagery. Despite this wealth of information, numerous political and business leaders dismissed the possibility of a full-scale invasion, clinging to assumptions about rational self-interest and the impossibility of large-scale conventional war in 21st-century Europe.

This wasn’t a failure of intelligence gathering or information sharing. It wasn’t ignorance in the traditional sense—the relevant facts were available and had been communicated clearly. Rather, it represented a more fundamental failure of judgment: the willful rejection of evidence that contradicted preferred beliefs, the substitution of wishful thinking for critical analysis, and the prioritization of ideological frameworks over observable reality.

In short, it was stupidity in action—not a lack of intelligence or information, but a failure to use intelligence and information wisely. And this failure occurred not among the uninformed or uneducated but among highly credentialed, experienced leaders with access to the world’s best information resources.

This pattern—where decision-makers with ample information nevertheless make catastrophically poor judgments—reveals the greater threat in our current technological landscape. While AI-amplified ignorance is certainly problematic, AI-amplified stupidity presents a far more dangerous phenomenon. When poor judgment meets powerful technology, the consequences can be both far-reaching and difficult to correct.

Poor Judgment Enhanced by Algorithmic Power

Stupidity, as we’ve defined it, involves not the absence of knowledge but its misapplication—the failure to use information effectively or to recognize when information is missing. It manifests through cognitive laziness, motivated reasoning, intellectual arrogance, and willful blindness. When these patterns of poor judgment intersect with artificial intelligence, three particularly troubling dynamics emerge.

Confirmation Acceleration occurs when AI systems rapidly provide information that confirms existing biases, creating an illusion of comprehensive research when they’ve merely accelerated confirmation bias. Traditional confirmation bias—our tendency to seek information that supports our existing beliefs—has always been a limitation of human cognition. But it operated within practical constraints; finding confirming evidence required some effort, and contradictory information might be encountered along the way.

AI systems, particularly those designed to maximize user satisfaction, can remove these practical constraints. They can instantaneously generate vast amounts of content that aligns with a user’s expressed viewpoint, producing the appearance of overwhelming evidence for virtually any position. This content can include sophisticated-sounding arguments, apparent expert opinions, and seemingly relevant data—all tailored to reinforce rather than challenge the user’s existing beliefs.

For leaders already predisposed toward certain conclusions, this dynamic creates a dangerous feedback loop. A CEO convinced of a particular strategic direction can use AI to generate analysis that supports this direction, encountering none of the friction that might traditionally prompt reconsideration. A policymaker committed to a specific approach can find endless justifications for their position without grappling with serious counterarguments.
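A minimal sketch makes the asymmetry concrete. The `ask_model` function below is a hypothetical stand-in for any text-generation API rather than a specific vendor's interface; what matters is that the first workflow only ever requests support for a position, while the second deliberately requests the strongest case against it.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to any large language model API."""
    raise NotImplementedError("wire this to the model of your choice")

def confirmation_seeking_research(position: str, n_rounds: int = 5) -> list[str]:
    # Every query asks only for support, so every answer reinforces the position.
    return [
        ask_model(f"Give me strong arguments and evidence that {position}.")
        for _ in range(n_rounds)
    ]

def friction_preserving_research(position: str) -> dict[str, str]:
    # Deliberately request the disconfirming case alongside the supporting one.
    return {
        "supporting": ask_model(f"What is the best evidence that {position}?"),
        "disconfirming": ask_model(f"What is the best evidence that {position} is wrong?"),
        "missing": ask_model(f"What would we need to know to test whether {position}?"),
    }
```

Both workflows use the same underlying model; the difference lies entirely in the judgment of the person framing the questions.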

Consider the case of Theranos founder Elizabeth Holmes, who maintained unwavering confidence in her company’s blood-testing technology despite mounting evidence of its failure. While Holmes didn’t have today’s AI tools at her disposal, she exemplified the pattern of dismissing contradictory evidence and seeking confirmation for predetermined conclusions. With contemporary AI, such selective information processing becomes even more frictionless and comprehensive.

Decision Laundering happens when leaders use AI systems to add a veneer of objectivity and thoroughness to what are essentially intuitive or ideologically driven decisions. By running predetermined conclusions through AI analysis, decision-makers can create the appearance of data-driven, systematic thought processes without actually engaging in them.

This pattern resembles what organizational scholars call “strategic misrepresentation”—the deliberate presentation of selective information to justify decisions already made on other grounds. AI systems make this practice more effective by generating sophisticated, technical-sounding justifications that may be difficult for others to evaluate or challenge.

In corporate settings, we see this when executives use complex AI-generated financial models to justify decisions actually driven by personal incentives or organizational politics. In policy contexts, it appears when officials use algorithmic simulations to support positions determined by ideological commitments rather than evidence.

Former WeWork CEO Adam Neumann exemplified this pattern when he used increasingly elaborate financial metrics, most famously "Community Adjusted EBITDA," alongside sweeping technological visions to justify a fundamentally unsustainable business model. Such custom metrics created the impression of data-driven management while actually obscuring basic economic realities. Modern AI tools would make such obfuscation even more sophisticated and convincing.

Artificial Consensus emerges when leaders use AI to create the illusion of widespread agreement with their position. By generating varied content from seemingly diverse sources—different writing styles, apparent perspectives, or fictional personas—AI can simulate consensus where none exists.

This manufactured consensus can insulate leaders from recognizing genuine disagreement or legitimate concerns about their decisions. It can also be weaponized to create social pressure on others to conform to the leader’s preferred position, presenting dissenters as outliers against apparent widespread agreement.

Social media platforms have already revealed the power of artificial consensus through coordinated inauthentic behavior—networks of fake accounts creating the appearance of organic consensus. AI dramatically scales this capability, allowing the generation of seemingly diverse content that actually promotes a singular viewpoint.

Former Theranos president Ramesh “Sunny” Balwani reportedly created an environment where questioning the company’s technology was treated as disloyalty, enforcing an artificial consensus that everything was working as claimed. AI systems can enhance such environments by generating content that makes dissenting positions appear unreasonable or poorly informed.

Across these patterns, we see how AI doesn’t create stupidity but amplifies it—removing friction that might otherwise limit poor judgment, adding persuasive power to flawed reasoning, and creating illusions of validation that discourage critical reflection. These effects are particularly consequential in leadership contexts, where decisions affect many others and where organizational dynamics may already discourage dissent.

When Bad Decisions Scale: Examples from Social Media to Finance

The impact of AI-amplified stupidity becomes clearest when we examine specific domains where algorithmic systems already influence decision-making at scale. Three areas—social media governance, financial markets, and public policy—demonstrate both the mechanisms of amplification and their potential consequences.

Social Media Governance represents a domain where algorithmic amplification already intersects with human judgment in complex ways. Platform leaders make decisions about content policies, recommendation systems, and community standards that affect billions of users. These decisions require balancing competing values—free expression, safety, engagement, cultural sensitivity—under conditions of uncertainty and rapid change.

Recent history provides numerous examples where poor judgment in these contexts produced harmful outcomes at scale. When Facebook (now Meta) optimized its recommendation algorithms for "meaningful social interactions" in 2018, it inadvertently created incentives for divisive, emotionally charged content. This decision, made with an incomplete understanding of its likely consequences, contributed to political polarization and the spread of misinformation globally.
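To see how that can happen, consider a purely illustrative Python sketch of an engagement-weighted ranker. The post fields and weights below are invented for illustration and are not Meta's actual formula; the point is only that once a feed is sorted by a measurable proxy for "meaningful interaction," content engineered to provoke reactions and replies rises to the top.

```python
# Illustrative only: invented weights showing how a proxy metric
# ("meaningful interactions") can end up rewarding provocation.
ENGAGEMENT_WEIGHTS = {
    "like": 1.0,
    "comment": 15.0,        # replies count heavily, favoring posts that provoke argument
    "reshare": 30.0,
    "angry_reaction": 5.0,  # high-arousal reactions score above a plain like
}

def engagement_score(post: dict) -> float:
    """Rank posts by weighted interaction counts: a measurable proxy,
    not a measure of whether the interaction was good for anyone."""
    return sum(ENGAGEMENT_WEIGHTS[k] * post.get(k, 0) for k in ENGAGEMENT_WEIGHTS)

posts = [
    {"id": "calm_update", "like": 120, "comment": 4, "reshare": 2, "angry_reaction": 1},
    {"id": "outrage_bait", "like": 40, "comment": 60, "reshare": 25, "angry_reaction": 80},
]
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # the provocative post wins despite far fewer likes
```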

Similarly, when Twitter (now X) implemented inconsistent moderation policies around COVID-19 information, it created confusion about what constituted harmful misinformation versus legitimate scientific debate. This confusion wasn’t merely academic—it affected public health behaviors during a global pandemic.

These examples reflect not just isolated mistakes but patterns of poor judgment: prioritizing metrics that are easy to measure over harder-to-quantify social impacts; assuming that algorithmic optimization for engagement aligns with user wellbeing; and failing to anticipate how malicious actors might exploit platform features.

As generative AI becomes integrated into social media platforms, these judgment failures risk becoming more consequential. AI content generation and moderation systems can implement flawed human judgments more efficiently and at greater scale. They can create more persuasive misinformation, more targeted emotional manipulation, and more realistic artificial consensus—all while providing platform leaders with apparent deniability about the outcomes.

Financial Markets provide another domain where algorithmic systems already amplify human judgment, both good and bad. Algorithmic trading, automated credit scoring, and AI-powered investment analysis now play significant roles in capital allocation and risk management. These systems implement human judgments about what factors matter in financial decisions, what risks are acceptable, and how different scenarios should be weighted.

The 2008 financial crisis illustrated how poor judgment—specifically, overconfidence in quantitative models and underestimation of systemic risk—can produce catastrophic outcomes when implemented at scale through financial technologies. The crisis didn’t result primarily from ignorance; financial leaders understood the theoretical risks of mortgage-backed securities and collateralized debt obligations. Rather, it stemmed from motivated reasoning (ignoring warning signs to maintain profitability), intellectual arrogance (dismissing concerns from those outside the financial elite), and willful blindness (avoiding information about deteriorating loan quality).
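One step in that failure is worth spelling out with numbers. Many of the models in question treated individual mortgage defaults as nearly independent events, which makes large simultaneous losses look vanishingly rare; once defaults share a common housing-market factor, the tail of the loss distribution changes dramatically. The sketch below uses invented round numbers, not any bank's actual model, to show the gap.

```python
import random

random.seed(0)

N_LOANS = 1_000    # mortgages in the pool
N_SIMS = 2_000     # Monte Carlo trials
BASE_P = 0.05      # average per-loan default probability in both models

def pool_loss_independent() -> float:
    # Each loan defaults independently with the base probability.
    return sum(random.random() < BASE_P for _ in range(N_LOANS)) / N_LOANS

def pool_loss_correlated() -> float:
    # A shared housing-market shock moves every loan's default risk together:
    # 10% of the time the market turns bad and defaults spike, yet the
    # long-run average default rate is still 5% (0.1 * 0.32 + 0.9 * 0.02).
    p = 0.32 if random.random() < 0.10 else 0.02
    return sum(random.random() < p for _ in range(N_LOANS)) / N_LOANS

def tail_loss(simulate, q=0.99) -> float:
    losses = sorted(simulate() for _ in range(N_SIMS))
    return losses[int(q * N_SIMS)]

print(f"99th-percentile pool loss, independent defaults: {tail_loss(pool_loss_independent):.1%}")
print(f"99th-percentile pool loss, correlated defaults:  {tail_loss(pool_loss_correlated):.1%}")
# Same average risk, but the correlated pool's worst cases are several times larger.
```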

More recently, the 2021 implosion of Archegos Capital Management demonstrated how advanced financial technologies can amplify individual poor judgment. Using sophisticated derivatives and heavily leveraged positions, Archegos founder Bill Hwang turned personal investment misjudgments into more than $10 billion in losses for the banks that financed his positions, threatening broader market stability.
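The arithmetic of leverage shows how one fund's misjudgment became a systemic event. The figures below are round illustrative numbers rather than Archegos's actual positions: at five times leverage, a 20 percent decline in the underlying stocks erases the fund's entire equity, and everything beyond that lands on the lenders.

```python
# Round illustrative numbers, not Archegos's actual book.
equity = 10e9                   # the fund's own capital
leverage = 5                    # total exposure as a multiple of equity
exposure = equity * leverage    # $50B of concentrated stock positions via swaps

for drop in (0.10, 0.20, 0.30):
    loss = exposure * drop
    investor_loss = min(loss, equity)       # equity absorbs losses first
    lender_loss = max(loss - equity, 0.0)   # the remainder lands on the prime brokers
    print(f"{drop:.0%} price drop: fund loses ${investor_loss / 1e9:.0f}B, "
          f"lenders lose ${lender_loss / 1e9:.0f}B")
```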

As AI systems take on greater roles in financial decision-making, the risk of amplified stupidity grows. These systems can implement flawed risk models more efficiently, create more sophisticated financial instruments that obscure underlying risks, and generate plausible-sounding justifications for what are essentially speculation-driven decisions.

Public Policy represents perhaps the most consequential domain for AI-amplified stupidity, as policy decisions affect entire populations through healthcare systems, economic regulations, environmental standards, and social programs. These decisions require integrating complex, often conflicting considerations about effectiveness, equity, cost, and values.

Recent history provides numerous examples where poor judgment in policy contexts produced harmful outcomes. The 2003 decision to invade Iraq based on flawed intelligence about weapons of mass destruction reflected not just factual errors but motivated reasoning and willful blindness to contradictory evidence. The Federal Reserve’s decision to keep interest rates low through the mid-2000s despite growing evidence of housing market instability demonstrated intellectual arrogance about the ability to manage complex economic systems.

More recently, the imposition of tariffs by multiple nations despite substantial economic evidence of their costs reflects ideologically driven decision-making rather than evidence-based policy. Similarly, resistance to carbon pricing mechanisms despite broad support among economists demonstrates how political considerations can override sound policy judgment.

As AI systems become integrated into policy analysis and implementation, these judgment failures risk becoming more consequential. AI can generate more sophisticated justifications for ideologically driven policies, create more convincing simulations that appear to support predetermined conclusions, and implement flawed regulatory frameworks more efficiently.

Across these domains—social media, finance, and public policy—we see common patterns in how AI amplifies poor judgment. The technology doesn’t cause the underlying stupidity but makes it more consequential by:

  1. Implementing flawed human judgments more efficiently and at greater scale
  2. Creating more persuasive justifications for decisions driven by non-rational factors
  3. Providing apparent objectivity to what are essentially subjective or ideological choices
  4. Removing friction that might otherwise prompt reconsideration of poor decisions
  5. Generating artificial validation that insulates decision-makers from contrary evidence

These patterns help explain why technological advancement doesn’t automatically lead to better decisions. When technology amplifies judgment without improving it, the result can be faster, more efficient implementation of fundamentally flawed choices.

Power as a Stupidity Amplifier: Leadership, Authority, and Cognitive Failure

The examples discussed above highlight a crucial insight: power itself functions as a stupidity amplifier, independently of technology. Leaders in positions of authority have always had their decisions—wise or foolish—amplified by the systems they control. A CEO’s misjudgment affects thousands of employees and potentially millions of customers. A president’s poor decisions reverberate through national and global systems. A central banker’s errors impact entire economies.

This amplification through institutional power often predates and exceeds technological amplification. What makes this particularly dangerous is that power frequently insulates decision-makers from feedback that might correct their thinking. The dynamics of organizational hierarchy create several reinforcing patterns:

Deference Cascades occur when subordinates hesitate to challenge leaders’ judgments, even when they recognize potential errors. This hesitation may stem from career concerns, power dynamics, or organizational cultures that discourage dissent. As information moves up hierarchical chains, it becomes increasingly filtered to align with what subordinates believe leaders want to hear.

Boeing’s 737 MAX crisis exemplified this pattern. Engineers and test pilots identified concerns about the aircraft’s MCAS system early in development, but these concerns were systematically minimized as they moved up the organizational hierarchy. By the time information reached decision-makers, critical warnings had been diluted or eliminated, contributing to design decisions that ultimately proved fatal.

Reality Distortion Fields form around powerful leaders when their status leads others to accept their assertions without the scrutiny they would apply to claims from peers or subordinates. Named after Steve Jobs’ legendary ability to convince others of seemingly impossible goals, these distortion fields can lead entire organizations to operate according to a leader’s flawed assumptions rather than observable reality.

Elizabeth Holmes created such a reality distortion field at Theranos, where her vision of revolutionary blood testing technology overrode mounting evidence of technical impossibility. Employees who raised concerns were marginalized or dismissed, while those who reinforced Holmes’ vision were rewarded with status and resources.

Ideological Capture occurs when leaders allow partisan, ideological, or tribal frameworks to override evidence-based reasoning. Whether right-wing, left-wing, nationalist, or techno-utopian, when ideology becomes the primary lens through which reality is filtered, sound judgment suffers. Leaders who prioritize ideological purity or tribal belonging over truthful assessment create precisely the conditions for catastrophic decision-making.

Jack Dorsey’s leadership at Twitter demonstrated aspects of ideological capture, as absolute commitments to free speech principles sometimes overrode practical concerns about platform harm. Similarly, Mark Zuckerberg’s commitment to connecting people globally sometimes blinded Facebook to the harmful social dynamics its platform enabled in contexts like Myanmar and Ethiopia.

Institutional Validation reinforces leaders’ poor judgment when organizational systems—performance metrics, reporting structures, incentive systems—are designed to validate rather than challenge their decisions. When organizations measure what leaders find convenient rather than what actually matters, they create artificial feedback that reinforces rather than corrects flawed thinking.

Wells Fargo’s account fraud scandal emerged from exactly this dynamic. The bank’s leadership established aggressive cross-selling metrics and incentives without adequate controls for customer consent. When employees responded by opening fraudulent accounts, the resulting metrics validated leadership’s strategy rather than revealing its fundamental flaws.

These power-driven amplification patterns interact synergistically with technological amplification. When a powerful leader with poor judgment gains access to AI tools that accelerate confirmation bias, generate artificial consensus, and provide sophisticated justifications for predetermined conclusions, the result can be a particularly dangerous form of amplified stupidity.

Consider how these dynamics might play out in contemporary contexts:

A CEO with strong ideological views on content moderation might use AI to generate extensive analysis supporting their preferred approach, dismissing concerns about unintended consequences. The combination of organizational deference and AI-generated justifications creates a powerful barrier to course correction, even as evidence of harmful outcomes accumulates.

A political leader committed to particular economic policies might use AI to generate sophisticated models showing their expected success, regardless of historical evidence to the contrary. The leader’s position and the apparent technical sophistication of the analysis make it difficult for advisors or constituents to effectively challenge these projections.

A financial regulator captured by industry perspectives might use AI to generate complex risk assessments that systematically undervalue certain types of systemic risk. The regulator’s authority and the complexity of the AI-generated analysis make it difficult for others to identify and challenge these blind spots before they contribute to financial instability.

In each case, the fundamental problem isn’t the technology but the human judgment directing it. AI systems don’t automatically correct for cognitive biases, motivated reasoning, or ideological blindness—they implement whatever judgment, sound or unsound, guides their deployment. When that judgment comes from individuals insulated by power from normal feedback mechanisms, the resulting amplification can be particularly consequential.

This understanding helps explain why we often observe sophisticated technology coexisting with what appears to be elemental stupidity in decision-making. The most advanced AI tools cannot compensate for fundamental failures in human judgment, and may actually make these failures more dangerous by implementing them more efficiently and persuasively.

The Compounding Effect of Amplified Stupidity

Beyond the immediate consequences of individual bad decisions, AI-amplified stupidity creates compounding effects that can damage social systems over time. These effects operate through several mechanisms that reinforce and expand the initial harm.

Epistemic Degradation occurs when repeated exposure to misleading or false information generated at scale gradually erodes shared standards for evaluating truth claims. As sophisticated AI systems generate increasingly persuasive content detached from epistemic standards, the distinction between knowledge and opinion, expertise and assertion, evidence and anecdote becomes increasingly blurred in public discourse.

This degradation manifests in phenomena like “truth decay”—characterized by increasing disagreement about facts, blurring of the line between opinion and fact, increased volume of opinion relative to fact, and declining trust in formerly respected sources of information. While truth decay predates current AI systems, generative AI accelerates this process by producing unlimited quantities of content that mimics the markers of knowledge without adhering to its standards.

Over time, this degradation makes it increasingly difficult to correct misinformation or build consensus around shared facts. Public discourse becomes not just polarized but fundamentally fractured, with different groups operating from entirely different factual premises and rejecting contrary evidence as inherently suspect.

Competence Atrophy emerges when overreliance on AI systems for cognitive tasks leads to declining human capability in critical thinking, analysis, and judgment. Just as physical capabilities deteriorate without regular exercise, cognitive capabilities can atrophy when consistently outsourced to external systems.

This atrophy becomes particularly problematic when AI systems implement flawed human judgments. Rather than learning from mistakes—recognizing the limitations of current approaches and developing more effective ones—humans may simply delegate increasingly complex decisions to systems that efficiently implement existing flaws. The opportunity for growth through error correction diminishes, while the scale of potential harm increases.

Education provides a clear example of this risk. Students who rely on AI to complete assignments without engaging with the material may receive passing grades but fail to develop the critical thinking skills the assignments were designed to build. Over time, this creates a competence gap—credentials without corresponding capabilities—that becomes apparent only when these students face situations requiring genuine understanding.

Trust Collapse follows when AI-amplified poor judgment leads to highly visible failures that undermine public confidence in institutions, expertise, and information systems. When leaders use AI to implement flawed judgments at scale, the resulting harms can trigger broader skepticism about the systems and authorities involved.

Financial crises exemplify this pattern. The 2008 global financial crisis resulted partly from overreliance on sophisticated quantitative models that inadequately accounted for systemic risk. The spectacular failure of these seemingly objective, data-driven approaches didn’t just cause economic damage; it severely damaged public trust in financial institutions, regulatory systems, and economic expertise more broadly.

As AI systems become more integrated into consequential decision-making across domains, similar trust collapses may occur. If AI-enhanced healthcare systems make visible diagnostic errors, if AI-powered judicial systems produce manifestly unjust outcomes, or if AI-generated content consistently misleads public understanding of important issues, the resulting erosion of trust may extend beyond the specific systems to institutional authority more generally.

Accountability Diffusion happens when the involvement of AI systems in decision processes makes it difficult to assign responsibility for harmful outcomes. When poor human judgment is implemented through complex technological systems, determining who should be held accountable—the system developers, the deployers, the operators, or the executives who established the decision framework—becomes increasingly challenging.

This diffusion of accountability can create moral hazard, where decision-makers face reduced consequences for poor judgments implemented through AI systems. “The algorithm made me do it” becomes a convenient deflection of responsibility, even when human judgment fundamentally shaped the algorithm’s behavior.

Recent examples of algorithmic bias in hiring, lending, and criminal justice systems demonstrate this dynamic. When algorithmic systems produce discriminatory outcomes, responsibility often bounces between technologists who claim they merely implemented client requirements and executives who claim they relied on technical expertise. The result is a responsibility vacuum where no one is fully accountable for harmful outcomes.

Together, these compounding effects—epistemic degradation, competence atrophy, trust collapse, and accountability diffusion—create a particularly dangerous form of systemic risk. Unlike immediate harms that trigger rapid responses, these effects operate gradually, often becoming apparent only after they’ve caused significant damage to social systems and capabilities.

This compounding nature of AI-amplified stupidity makes it potentially more dangerous than AI-amplified ignorance. While ignorance can be addressed through education and information provision, the systemic effects of amplified stupidity may require more fundamental interventions in how we design technological systems, organize institutions, and develop human judgment.

Understanding these mechanisms isn’t cause for technological pessimism but for renewed focus on the human dimensions of our technological future. The primary challenge isn’t controlling artificial intelligence but cultivating human wisdom—the sound judgment necessary to deploy technology beneficially rather than destructively. As we’ll explore in subsequent chapters, this challenge has significant implications for education, system design, governance, and our conception of intelligence itself.

