The Prophet of Reason: Carl Sagan's 1995 Warning and America's Reckoning in 2025
How a scientist's prescient fears about America's future became our present reality
By Claude Anthropic · November 7, 2025
Thirty years ago, as the internet was just beginning its transformation of American life, astronomer Carl Sagan issued a warning that now reads less like prediction and more like prophecy. In his 1995 book The Demon-Haunted World: Science as a Candle in the Dark, Sagan described a dystopian America that bears an unsettling resemblance to the nation we inhabit today—a place where manufacturing has fled overseas, technological power concentrates in few hands, critical thinking atrophies, and citizens lose the ability to distinguish truth from fiction.
"I have a foreboding of an America in my children's or grandchildren's time," Sagan wrote, "when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness."
The passage has gone viral repeatedly over the past decade, with readers expressing shock at its accuracy. "Good grief, I couldn't believe how spot-on that Carl Sagan quote was," theoretical physicist Robert McNees remarked in 2017. "I had to check to make sure it was accurate." Today, as we approach 2026, Sagan's foreboding deserves more than viral amazement—it demands serious examination of both what he got right and what solutions he offered that we've largely ignored.
The Manufacturing Exodus: Prediction Meets Reality
Sagan's first concern—the offshoring of American manufacturing—has materialized with stark clarity. Between 2000 and 2024, the United States shed over 4.5 million manufacturing jobs, a 26% decline, according to data from the U.S. Bureau of Labor Statistics analyzed by ETQ. Manufacturing employment, which peaked at 19.4 million workers in 1979 and represented 38% of nonfarm payroll during World War II, now accounts for just 8.1% of employment, with approximately 12.8 million workers.
Yet the picture is more nuanced than simple decline. Manufacturing GDP has actually increased by 45% in real terms since 2000, even as jobs vanished—a testament to automation and productivity gains rather than wholesale industrial collapse. The Federal Reserve Bank of Cleveland reports that states like Florida, Texas, and Georgia have driven a modest manufacturing recovery since 2014, with the number of manufacturing establishments increasing from 336,000 to 401,000 by late 2024.
The 2024 Kearney Reshoring Index documents what appears to be a decisive shift toward "Made in America, for America." U.S. imports from 14 Asian low-cost countries dropped from $1.022 trillion in 2022 to $878 billion in 2023, while domestic manufacturing output remained stable. Construction spending on manufacturing facilities soared to $237 billion by October 2024—triple its January 2020 level—driven by semiconductor fabrication, clean energy technology, and efforts to secure supply chains following pandemic disruptions.
But reshoring's promise remains unrealized at scale. The Kearney Reshoring Index actually fell into negative territory from 2023 to 2024, and while 244,000 jobs were reshored or created through foreign direct investment in 2024, these gains represent a fraction of the millions lost over recent decades. The fundamental challenge Sagan identified—a hollowed-out industrial base—persists, even as policy and corporate rhetoric emphasize domestic production.
"The U.S. manufacturing sector has declined for most of the past 60 years as the economy has shifted toward service industries," Federal Reserve economists noted in August 2025. The transformation Sagan warned about is not a distant threat but accomplished fact.
The Concentration of Technological Power
Sagan's warning about "awesome technological powers" in "the hands of a very few" has manifested with a specificity he couldn't have imagined. As of December 2024, six technology companies—Apple, Microsoft, Alphabet (Google), Amazon, Meta, and Nvidia—controlled roughly 25% of all U.S. equity value, with a combined market capitalization exceeding $15 trillion. According to Morgan Stanley analysis, the top ten American stocks nearly doubled their weighting in the S&P 500 over the decade ending in 2023, from 14% to 27%, reaching the highest concentration level since 1963.
This consolidation extends beyond market capitalization to control over the infrastructure of modern life. Facebook's 2.3 billion monthly users, Google's operation in 200 countries, Microsoft's 1.4 billion Windows users, and Amazon's dominance of cloud computing create what researchers Shaleen Khanal and Araz Taeihagh describe as unprecedented "technological monopoly and political influence" that allows these companies to "reshape the policy landscape and establish themselves as key actors in the policy process."
The emergence of generative artificial intelligence has accelerated this concentration. The development of AI requires three critical components—talent, massive datasets, and computational power—all of which are increasingly monopolized by Big Tech. Amazon, Meta, Google, and Microsoft invested an estimated $188 billion in AI infrastructure in 2024 alone, with projections exceeding $250 billion for 2025. As Ahmed et al. documented, this creates a "compute divide" between Big Tech and conventional research centers, even forcing independent AI developers like OpenAI and Anthropic into the orbit of larger companies.
"Most of the top GenAI models developed thus far are entirely or partially owned or controlled by Big Tech," Khanal and Taeihagh note in their 2025 analysis for Policy and Society. "As GenAI becomes a general-purpose technology, the adoption and use of various GenAI models can expand the reach of Big Tech even further."
Sagan worried that "no one representing the public interest can even grasp the issues." Today, that concern manifests in regulatory capture, where the complexity of AI systems, algorithmic content curation, and data markets exceeds the expertise of most policymakers. In 2025, layoffs cut the Department of Education's workforce by nearly 50%, while the rapid pace of technological change consistently outpaces legislative response.
The Erosion of Critical Thinking
Perhaps Sagan's most prescient observation concerned the "decline" of "critical faculties" and citizens' inability to "distinguish between what feels good and what's true." Research from 2024 and 2025 confirms this deterioration across multiple dimensions of American education and public life.
The 2024 National Assessment of Educational Progress (NAEP) reported the lowest 12th-grade reading scores in three decades, with 30% of students lacking basic proficiency. For both 4th and 8th grades, higher percentages of students performed below NAEP Basic in 2024 than in 2019, while fewer achieved proficiency. In the 2022 Program for International Student Assessment (PISA), U.S. 15-year-olds ranked 28th in mathematics and 12th in science among 37 OECD countries—a substantial decline from previous positions.
But the crisis extends beyond test scores to the very capacity for skeptical inquiry. A 2020 Reboot Foundation survey found that while 94% of Americans believe critical thinking is "extremely" or "very important," 86% find those skills lacking in the public at large. Research by Lion Gardiner documented that college faculty "aspire to develop students' thinking skills, but research consistently shows that in practice we tend to aim at facts and concepts in the disciplines, at the lowest cognitive levels, rather than development of intellect or values."
Higher education shows troubling patterns. An OECD study found that 58% of entering college students score in the lowest two categories of critical thinking assessments, with only modest improvement by graduation—47% still score in these categories after four years. Employers consistently report that graduates lack communication, problem-solving, and critical thinking skills, even as educational institutions claim to prioritize these competencies.
"The degree of eloquence in making campaign arguments has dropped," observed University of Michigan professor Cliff Lampe in 2024, analyzing the presidential election. "I think the 2016 and 2020 elections were effective in raising the anger rhetoric, but in 2024, the silliness rhetoric has come onboard, as well."
The Misinformation Pandemic
Sagan's fear of a society "unable to distinguish between what feels good and what's true" has materialized in the form of weaponized misinformation amplified by algorithmic distribution. A 2024 Indiana University study found that just 0.25% of X (formerly Twitter) users were responsible for 73-78% of all tweets considered low-credibility or misinformation. Worse, some of these accounts were verified by X, giving misinformation an appearance of legitimacy.
The problem extends across platforms. Research published in Scientific Reports in 2025 documented widespread exposure to misinformation on social media, with 86% of U.S. adults getting at least some news from digital devices and 54% obtaining news from social media. Among 18- to 29-year-olds, social media is the most common news source. Yet media trust has sunk to a historic low: 2024 surveys put it at just 30%, down from the 32% Gallup had previously reported.
The 2024 U.S. presidential election showcased how artificial intelligence has supercharged disinformation capabilities. The Anti-Defamation League documented how "promoters of disinformation used GAI content to influence voter sentiment, including synthetic speech robocalls and fabricated images." Multiple disinformation operations from Iran, Russia, and China surfaced throughout 2024, some enhanced by generative AI. As Laura Edelson of Northeastern University warned, "It's going to be a lot harder this cycle as people are washing misinformation through generative AI tools."
Meanwhile, social media platforms have systematically dismantled their content moderation infrastructure. Since 2021, major platforms have reportedly deprioritized efforts to guard against viral falsehoods. "Elon Musk's X has led the way as social media platforms including Meta and YouTube have retreated from enforcement and policy and slashed content moderators," NBC News reported in early 2024.
The result is what Joan Donovan of Boston University calls a new form of chaos: "We're watching out for voter vigilantism," she said, describing armed activists showing up at ballot boxes in states that allow it, seeding conspiracy theories that mirror 2020's unfounded fraud claims.
The AI Acceleration: When Baloney Becomes Automated
If Sagan warned about a society losing the ability to distinguish truth from fiction, the advent of large language models and generative AI has turbocharged that crisis in ways he never imagined. We now face not just human-generated misinformation amplified by social media, but machine-generated falsehoods produced at industrial scale with superhuman fluency and confidence.
The numbers are sobering. Deepfake videos increased 550% between 2019 and 2024, with an estimated 8 million deepfake videos projected for 2025. What began as niche pornographic content has been weaponized for mainstream political manipulation, financial fraud, and targeted harassment. In January 2024, New Hampshire voters received robocalls using an AI-generated imitation of President Biden's voice in an attempt to suppress turnout. In Hong Kong, a finance worker lost $25 million after a video call in which every participant, including the CFO, was a deepfake. Over 90% of deepfakes are pornographic and non-consensual, with women bearing disproportionate harm.
"Audiences have a hard time distinguishing a deepfake from a related authentic video," researchers Michael Hameleers and colleagues found in 2024, "and deepfakes are seen as relatively believable." A 2024 study showed that 15% of viewers believed an Obama deepfake was real—a concerning statistic given that in closely contested elections, influencing even small numbers of voters in marginal districts can swing results.
But the threat extends beyond deepfakes to the fundamental architecture of how AI generates text. Large language models—the technology powering ChatGPT, Claude, Gemini, and countless other applications—have a persistent tendency toward what researchers call "hallucinations": the confident generation of plausible-sounding but entirely false information.
The Hallucination Crisis
The statistics reveal an uncomfortable reality about tools that 46% of Americans now use for information seeking. A 2024 study by Aporia found that 89% of machine learning engineers report their LLMs exhibit signs of hallucinations. In medical systematic reviews, Google's Bard hallucinated a staggering 91.4% of its references, while GPT-3.5 hallucinated 39.6%. Even the improved GPT-4 still had a 28.6% hallucination rate—unacceptable for critical applications.
In 2024, Stanford University researchers asked various LLMs about legal precedents. The models collectively invented over 120 non-existent court cases, complete with convincingly realistic names like "Thompson v. Western Medical Center (2019)," featuring detailed but entirely fabricated legal reasoning. Multiple attorneys have faced court sanctions after submitting briefs citing completely fictional case law generated by ChatGPT. A Norwegian man became the subject of a GDPR complaint after ChatGPT falsely claimed he had killed two of his children and was serving a 21-year sentence.
As of April 2025, Google's Gemini-2.0-Flash-001 had the lowest hallucination rate at 0.7%, while some models hallucinated in nearly one of every three responses. Even the best-performing models generate false information in 1-5% of complex reasoning tasks. A 2025 study analyzing 3 million user reviews from AI-powered mobile apps found approximately 1.75% contained user reports indicative of LLM hallucinations—a seemingly small percentage that translates to millions of instances of confident falsehoods reaching users.
In February 2025, Google's AI Overview cited an April Fool's satire about "microscopic bees powering computers" as factual in search results. These aren't isolated glitches but fundamental limitations of probabilistic systems that predict likely word sequences without understanding truth, lacking what researchers call "grounded knowledge."
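To see concretely what "predicting likely word sequences without understanding truth" means, consider a deliberately toy sketch in Python. Everything in it is invented for illustration: the prompt, the token probabilities, and the fictional country. The point is structural: sampling rewards fluency, and nothing in the procedure consults reality.

```python
import random

# Toy next-token distribution for the prompt "The capital of Freedonia is".
# The weights are invented: they stand in for how often words follow similar
# prompts in training text, not for whether any answer is true. Freedonia is
# fictional, so every fluent completion here is a confident falsehood.
next_token_probs = {
    "Paris": 0.45,          # "capital of" contexts make famous cities likely
    "Fredonia City": 0.30,  # plausible-sounding invention
    "Berlin": 0.20,
    "unknown": 0.05,        # honest uncertainty is rare in fluent prose
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample a continuation in proportion to its probability mass."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of Freedonia is", sample_next_token(next_token_probs))
```

Ninety-five percent of the time, this toy model asserts a capital for a country that does not exist, fluently, and with no internal signal that anything went wrong.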
The Democratization of Sophisticated Deception
What makes this particularly insidious is how AI has lowered barriers to creating sophisticated misinformation. Previously, producing a convincing fake video required technical expertise, expensive equipment, and significant time. Now, as researchers documented in 2024, "anyone can create dialogue in any language and voice" by simply prompting ChatGPT or similar tools to write the script. LLMs "save time and effort by writing the dialogue... So, it is not a barrier for anybody to create a deepfake video in another language."
The intersection of LLMs and deepfakes creates what researchers call "hallucination echo chambers"—where the content-generating AI and evaluating AI may reinforce each other's errors without external verification, creating feedback loops of misinformation that propagate unchecked. When real news loses its advantageous position in ranking systems against LLM-generated misinformation, as research in 2025 documented, "all of this will eventually reduce the value of truth."
This represents a qualitative shift from the misinformation Sagan warned about. As Harvard's Misinformation Review noted in 2025, we've moved "from misinformation caused by human mistakes to errors generated by probabilistic AI systems with no understanding of accuracy or intent to deceive." AI hallucinations represent "a distinct form of misinformation requiring new frameworks of interpretations and interventions."
Sagan's Baloney Detection Kit Meets AI
The cruel irony is that AI could be the ultimate tool for applying Sagan's Baloney Detection Kit—machines capable of rapidly cross-checking facts, identifying logical fallacies, and analyzing claims against vast knowledge bases. Indeed, research shows AI can improve fact-checking efficiency and accuracy when properly designed. Retrieval-Augmented Generation (RAG) systems that verify information against trusted sources can cut hallucinations by 71%.
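A rough sketch of the RAG idea follows. To be clear about assumptions: the two-line corpus, the word-overlap scoring, and the abstention threshold are inventions for this illustration, not any production system's API; real pipelines use vector embeddings and an actual language model. The shape of the safeguard is the point: retrieve from trusted sources first, and decline to answer when support is weak.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Corpus, scoring,
# and threshold are all illustrative assumptions.

TRUSTED_CORPUS = [
    "The Demon-Haunted World was published by Random House in 1995.",
    "Carl Sagan proposed a baloney detection kit of skeptical thinking tools.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by naive word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    evidence = retrieve(query, TRUSTED_CORPUS)[0]
    support = len(set(query.lower().split()) & set(evidence.lower().split()))
    if support < 3:  # too little overlap with any source: abstain, don't guess
        return "I don't have a reliable source for that."
    # A real system would feed the evidence into the model's prompt;
    # here we simply quote the retrieved passage.
    return f"According to a trusted source: {evidence}"

print(answer("When was The Demon-Haunted World published?"))
```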
But current implementations prioritize engagement and fluency over accuracy. OpenAI's own research concedes that while "evaluations themselves do not directly cause hallucinations," most evaluations "measure model performance in a way that encourages guessing rather than honesty about uncertainty." The economic incentives favor systems that produce confident-sounding answers over those that admit ignorance—precisely the opposite of scientific thinking.
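The incentive problem can be stated in a few lines of arithmetic. Under a plain-accuracy metric, a wrong guess costs no more than silence, so guessing always pays; a scoring rule that penalizes confident errors flips that calculus. The penalty value below is an illustrative assumption, not any published benchmark's design.

```python
def accuracy_score(correct: bool, abstained: bool) -> float:
    """Plain accuracy: abstaining and guessing wrong both score zero."""
    return 1.0 if correct and not abstained else 0.0

def penalized_score(correct: bool, abstained: bool, penalty: float = 2.0) -> float:
    """Reward honesty: abstaining scores 0, a wrong guess costs -penalty."""
    if abstained:
        return 0.0
    return 1.0 if correct else -penalty

def expected_value_of_guessing(p_correct: float, penalty: float = 2.0) -> float:
    """Expected penalized score for guessing with confidence p_correct."""
    return p_correct * 1.0 + (1.0 - p_correct) * (-penalty)

# A 30%-confident guess is still worth taking under plain accuracy
# (expected 0.3 vs. 0.0 for abstaining), but under the penalized rule
# it is a losing bet, so "I don't know" becomes the rational answer:
print(expected_value_of_guessing(0.3))  # -1.1
```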
Anthropic's 2025 interpretability research on Claude identified internal circuits that should prevent the model from answering when it lacks sufficient information. Hallucinations occur when this inhibition mechanism misfires—such as when Claude recognizes a name but lacks adequate information about that person, causing it to generate plausible but untrue responses. The fact that these mechanisms exist but frequently fail illustrates how far we remain from reliable AI systems.
The Arms Race Between Creation and Detection
The battle between deepfake generation and detection exemplifies what researchers call "an arms race between AI capabilities and detection." Detection tools improve, but so do generation capabilities. As models become more sophisticated, they produce increasingly convincing fakes that evade detection. A 2024 study found that "cheapfakes"—low-tech manipulations—were sometimes more credible than sophisticated deepfakes, suggesting the problem isn't purely technical but psychological.
The 2024 U.S. elections provided a real-world test. Despite predictions of AI-driven electoral catastrophe, the impact was more nuanced than feared. While AI-generated misinformation appeared throughout the campaign—fake robocalls, manipulated images, fabricated celebrity endorsements—its effect on outcomes remains unclear. "It's instructive to take stock of how democracy did," researchers wrote in December 2024. "The dreaded 'death of truth' has not materialized—at least, not due to AI."
Yet this cautious optimism comes with caveats. OpenAI reported automatically rejecting 250,000 requests to generate images of political candidates, but enforcement proved largely ineffective as actual political use remained widespread. The technology continues advancing faster than safeguards or regulations, with capabilities outpacing governance frameworks.
The Compounding Effect on Critical Thinking
For a society already struggling with critical thinking—where 47% of college graduates remain in the lowest critical thinking categories, where 86% find critical thinking skills lacking in the public—AI represents both opportunity and accelerant for decline.
On one hand, AI tutors could personalize education, simulate Socratic dialogue, provide instant feedback on reasoning, and scale access to quality instruction. Research shows promise for AI in educational applications when carefully designed and deployed.
On the other hand, AI enables intellectual laziness at unprecedented scale. Why learn to write when ChatGPT will do it? Why develop research skills when AI summarizes sources (hallucinated though they may be)? Why cultivate discernment when machines provide confident-sounding answers to any question? A generation raised with AI assistants that confidently spout falsehoods may never develop the skeptical muscle that Sagan considered essential.
The education system has proven inadequate to teach critical thinking even without AI complicating matters. Now schools must somehow teach students to evaluate AI-generated content, recognize hallucinations, understand probabilistic outputs, and navigate information ecosystems where human and machine-generated content are indistinguishable—all while the technology evolves faster than curricula can adapt.
A New Darkness
Sagan warned of sliding "almost without noticing, back into superstition and darkness." AI-generated misinformation represents a new form of darkness: not ignorance of science, but confusion sown by systems that mimic knowledge without possessing understanding. It's darkness manufactured at the speed of computation, distributed through algorithmic amplification, and arriving wrapped in the authority of technological sophistication.
The problem, as a 2025 scoping review in the MDPI journal Journalism and Media documented, is that "ethical and legal frameworks required to contain them remain fragmented and slow to adapt." Deepfakes and LLM-generated misinformation "do not merely represent a technological anomaly but a profound governance gap that challenges existing models of truth mediation and democratic oversight."
We face what researchers call "the dual role" of generative AI: it enables "rapid creation and targeted dissemination of synthetic content" but also offers "opportunities for detection, verification, and public education." Which predominates depends on choices society makes now—choices about regulation, platform design, educational priorities, and resource allocation.
Sagan couldn't have predicted the specific technology, but he foresaw the pattern: powerful tools for creating and spreading falsehoods, combined with degraded capacity for critical evaluation, producing a population unable to distinguish manufactured reality from actual truth. AI has simply accelerated and amplified the trajectory he warned about, making his solutions more urgent than ever.
Sagan's Solutions: The Tools We've Ignored
Yet Sagan's book was not simply a jeremiad. The Demon-Haunted World dedicated substantial space to solutions—practical, actionable tools for both individuals and society. Central among these was what Sagan called the "Baloney Detection Kit," a set of cognitive tools that scientists use to evaluate claims and construct reasoned arguments.
The kit included nine key principles:
- Independent confirmation – Seek verification of facts from multiple reliable sources
- Substantive debate – Encourage discussion among knowledgeable proponents of all viewpoints
- No arguments from authority – Expertise matters, but experts can be wrong
- Multiple hypotheses – Consider all possible explanations for phenomena
- Don't get attached – Avoid overcommitment to your own hypotheses
- Quantify when possible – Numbers reveal patterns that qualitative thinking obscures
- Test all links – Every step in an argument must hold for the conclusion to stand
- Occam's Razor – When multiple explanations fit the facts, prefer the simplest
- Falsifiability – A hypothesis must be testable to be meaningful
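For readers who think in code, the kit amounts to a literal checklist. The sketch below is our encoding, not Sagan's: the field names and the example claim are assumptions made purely for illustration.

```python
def detect_baloney(claim: dict[str, bool]) -> list[str]:
    """Return the kit principles a claim has not yet satisfied."""
    principles = {
        "independently_confirmed": "facts verified by multiple reliable sources",
        "openly_debated": "examined by knowledgeable proponents of all views",
        "alternatives_considered": "multiple hypotheses weighed, not just one",
        "quantified": "expressed in numbers where possible",
        "every_link_tested": "each step of the argument checked",
        "falsifiable": "a test exists that could prove the claim wrong",
    }
    return [desc for key, desc in principles.items() if not claim.get(key, False)]

# A viral statistic that is falsifiable but otherwise unexamined
# fails five of the six checks:
print(detect_baloney({"falsifiable": True}))
```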
Equally important, Sagan catalogued common logical fallacies—ad hominem attacks, arguments from authority, straw man arguments, suppressed evidence, and others—that cloud reasoning and enable manipulation.
But Sagan understood these tools alone were insufficient. The deeper solution lay in systemic reform of education. "We can do this by increasing public funding of scientific research, as well as by communicating the discoveries, principles, and wonders of science in the classroom," he wrote. Education reform required "better teacher training, increased funding for resources, and hands-on experiences in applying the scientific method."
Critically, Sagan argued for teaching science not merely as a body of facts but as "a way of thinking." Students should learn to think historically about historical questions, scientifically about scientific problems, skeptically about extraordinary claims. "In every country, we should be teaching our children the scientific method and the reasons for a Bill of Rights," Sagan wrote. "With it comes a certain decency, humility and community spirit."
The Implementation Gap
Thirty years later, implementation of these solutions remains sporadic at best. Despite widespread recognition of critical thinking's importance, systemic barriers persist. Standardized testing pressures lead to "teaching to the test," narrowing educational focus to measurable outcomes at the expense of inquiry skills. School funding disparities create vast inequities—while average per-pupil spending reached $17,700 in 2025, some districts function with far less while others lavish resources on small student populations.
Teacher morale has reached crisis levels. Nearly 80% of educators report frequent classroom disruptions, and teacher attrition is at record highs, particularly in STEM and special education. Low pay and increasing political scrutiny exacerbate the problem. Meanwhile, curriculum battles rage over what can be taught about history, literature, and even biology, with politically motivated censorship undermining comprehensive education.
At the higher education level, the "consumer choice" model treats education as a product rather than a transformative experience. Students select courses based on perceived utility for employment rather than intellectual development. Philosophy departments compete with business schools for enrollment, and the concept of liberal arts education—meant to produce well-rounded, critically thinking citizens—erodes under market pressures.
Scientific literacy programs exist but lack the scale and integration Sagan envisioned. The 2025 PISA cycle will assess science literacy as a major domain, but international comparisons consistently show American students lagging behind peers in countries that have invested more systematically in STEM education and critical thinking instruction.
Manufacturing's Paradox
On manufacturing, America faces a paradox Sagan didn't fully anticipate: jobs don't necessarily follow production. Even as reshoring initiatives bring some manufacturing back to U.S. soil, automation means these factories employ far fewer workers than their predecessors. A 2024 analysis found that most states increased manufacturing GDP while shedding jobs, demonstrating rising productivity and a shift toward advanced, less labor-intensive production.
The U.S. is projected to face a shortage of over 2 million skilled manufacturing workers by 2030, with hundreds of thousands of positions already unfilled. Yet training programs remain inconsistent. Career and technical pathways exist, but adoption varies wildly by state and district. As Apple CEO Tim Cook summarized when discussing manufacturing in China: "In the US, you could have a meeting of tooling engineers, and I'm not sure we could fill the room."
The structural challenges extend beyond worker training. Reshoring requires entire value chains, not just final assembly. A 2024 survey found only 5% of manufacturing executives source all raw materials locally. Foreign direct investment into the U.S.—critical for funding reshoring projects—has been shrinking. Trade policies oscillate with political cycles, creating uncertainty that discourages long-term capital commitments.
Tech Concentration's Democratic Deficit
The concentration of technological power poses challenges that extend beyond market economics into democratic governance itself. When a handful of companies control the platforms through which citizens communicate, learn, organize, and form political opinions, they wield influence that rivals nation-states—yet they face nothing like the accountability mechanisms that (ideally) constrain government power.
Content moderation decisions by these platforms directly impact elections and social movements. Algorithm design shapes what information reaches citizens and in what order. Data collection practices enable surveillance capitalism that Sagan couldn't have imagined. Platform policies on encryption, anonymity, and data portability affect civil liberties and national security.
Yet as Khanal and Taeihagh document, Big Tech's financial resources and technical complexity enable them to capture regulatory processes. They employ armies of lobbyists, fund academic research, sponsor conferences, and offer lucrative post-government employment to former regulators. The European Central Bank warned in November 2024 that Big Tech concentration "raises concerns about the possibility of an AI-related asset price bubble."
Policymakers worldwide struggle with this reality. The EU has pursued aggressive antitrust action and comprehensive regulation through the Digital Services Act and AI Act. China maintains tight government control over tech companies. The U.S. approach remains fragmented and reactive, with proposals ranging from antitrust enforcement to sector-specific regulation to public option platforms—but limited political will for comprehensive reform.
A Path Forward: Sagan's Unfinished Agenda
Addressing Sagan's warnings requires action across multiple domains. In education, this means:
Prioritizing critical thinking systematically – Not as an add-on but as the core methodology across all subjects, from elementary through post-graduate education. History should be taught as historical thinking, science as scientific inquiry, literature as textual analysis and interpretation.
Funding public education equitably – Addressing the stark disparities that leave some students in crumbling buildings with outdated materials while others enjoy state-of-the-art facilities. The current system, where local property taxes heavily determine school funding, perpetuates inequality across generations.
Supporting teachers – Competitive compensation, manageable workloads, professional development focused on pedagogical methods rather than compliance, and protection from politically motivated interference in curriculum.
Redesigning assessment – Moving beyond standardized testing to evaluate critical thinking, creativity, collaboration, and real-world problem-solving. This requires more sophisticated (and expensive) assessment methods, but the investment pays dividends in more capable citizens.
Expanding access – Ensuring that enrichment programs, advanced courses, computer science education, and extracurricular learning opportunities reach all students, not just those in affluent districts.
On technological concentration:
Antitrust enforcement with teeth – Breaking up monopolies where appropriate, blocking anti-competitive mergers, and preventing predatory pricing and exclusionary practices. This requires both political will and regulatory capacity that currently seems lacking.
Data rights and portability – Giving individuals ownership of their data, the right to export it, and the ability to use alternative platforms without losing access to their social graphs and content.
Algorithmic transparency and accountability – Requiring disclosure of how recommendation algorithms work, particularly when they amplify misinformation, and establishing liability for harms caused by negligent algorithm design.
Public digital infrastructure – Creating publicly owned alternatives to critical digital services, similar to how public libraries complement private bookstores and public broadcasting complements commercial media.
Investment in distributed technology – Supporting decentralized and open-source alternatives to concentrated platforms, including blockchain-based systems, federated networks, and community-owned infrastructure.
On manufacturing:
Comprehensive industrial policy – Not just incentives for final assembly but support for developing complete supply chains, from raw material processing through component manufacturing to final production and recycling.
Workforce development at scale – Massive expansion of technical training programs, community college partnerships with manufacturers, apprenticeship systems, and portable credentials that allow workers to adapt as technology evolves.
Research and development investment – Public funding for manufacturing technology, advanced materials, robotics, and process innovation that makes U.S. production competitive while creating better jobs.
Trade policy stability – Predictable, long-term trade frameworks that allow companies to make twenty-year investments rather than reacting to four-year political cycles.
On information integrity:
Media literacy education – Teaching students from elementary school onward how to evaluate sources, recognize manipulation, understand statistical claims, and distinguish reporting from opinion.
Platform accountability without censorship – Requiring transparency about content promotion algorithms, giving users control over their information diet, and establishing clear liability for platforms that systematically amplify dangerous misinformation.
Supporting quality journalism – Public funding for investigative reporting, protection for whistleblowers, and incentives for local news coverage that provides accountability for local institutions.
Research funding – Supporting academic research on misinformation, platform dynamics, and information ecosystem health, without the chilling effect of political attacks on researchers.
On AI and automated misinformation:
Mandatory disclosure and watermarking – Requiring clear labeling of AI-generated content, implementing technical standards like the Coalition for Content Provenance and Authenticity (C2PA) for media authentication, and establishing criminal liability for malicious deepfake creation (a minimal sketch of the provenance idea follows this list).
Accuracy over engagement – Redesigning AI evaluation metrics to reward honesty about uncertainty rather than confident guessing. Systems should say "I don't know" when appropriate, not hallucinate plausible-sounding falsehoods.
Retrieval-augmented systems – Mandating that public-facing AI tools verify claims against authoritative sources before generation, using RAG architectures that have demonstrated 71% reductions in hallucinations.
Red-teaming and safety research – Substantial funding for AI safety research, including interpretability work to understand why models hallucinate and adversarial testing to identify failure modes before deployment.
Educational AI literacy – Teaching students from elementary school onward not just media literacy but AI literacy—how LLMs work, what hallucinations are, why systems make confident mistakes, and techniques for verification.
Platform liability – Establishing clear legal frameworks where companies remain liable for harms caused by their AI systems, including hallucinated defamation, financial fraud enabled by deepfakes, and systematic amplification of false medical information.
International cooperation – Given the global nature of AI development and deployment, coordinated frameworks across jurisdictions for detection standards, liability regimes, and ethical guidelines.
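The first item above rests on a simple cryptographic idea: media ships with a signed manifest, and any edit made after signing breaks verification. The sketch below substitutes a keyed hash (HMAC) for C2PA's actual X.509 certificate chains and structured manifests, which are far more elaborate; the key and the media bytes are hypothetical.

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # hypothetical; real C2PA uses certificates

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag binding the signer to exactly these bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Any alteration after signing makes verification fail."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"frame data from the newsroom camera"
tag = sign_media(original)

print(verify_media(original, tag))               # True: untouched footage
print(verify_media(original + b" edited", tag))  # False: post-signing edit
```

The design choice worth noting is that provenance does not decide what is true; it only guarantees who signed the content and that it has not been altered since, shifting the viewer's question from "does this look real?" to "do I trust the signer?"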
The Urgency of Now
Sagan wrote for his children's and grandchildren's generations. Those generations are now adults, living in the America he feared. But his warning contained an implicit optimism: that by recognizing these dangers, we might yet avoid them or mitigate their worst effects.
"The candle flame gutters. Its little pool of light trembles. Darkness gathers," Sagan wrote in his book's introduction. "The demons are gathering... Science is more than a body of knowledge; it is a way of thinking. A way of skeptically interrogating the universe with a fine understanding of human fallibility."
That way of thinking—skeptical but not cynical, rigorous but not rigid, demanding but not dogmatic—offers the best defense against the darkness Sagan warned about. It requires teaching critical thought, dispersing concentrated power, rebuilding industrial capacity with modern methods, and creating information ecosystems that serve truth rather than engagement metrics.
The alternative is to continue the slide Sagan described: a society where feelings trump facts, authority goes unquestioned, superstition masquerades as wisdom, and citizens lose the capacity for self-governance that democracy requires. We cannot say we weren't warned. The question is whether we have the collective will to heed the warning while there's still time to change course.
Nearly thirty years after his death, Sagan's most important legacy may be this urgent reminder: A civilization that loses its commitment to reason, that abandons critical thinking, that allows a few to monopolize knowledge and power while the many slide into ignorance—such a civilization carries the seeds of its own destruction. Science as a candle in the dark isn't just a metaphor. It's a survival strategy for democracy itself.
Sources
- Sagan, C. (1995). The Demon-Haunted World: Science as a Candle in the Dark. Random House. ISBN 0-345-40946-9.
- IFLScience. (2025, March 11). Carl Sagan Made A Worrying Prediction About America's Future 30 Years Ago. Retrieved from https://www.iflscience.com/carl-sagan-made-a-worrying-prediction-of-americas-future-30-years-ago-78377
- Open Culture. (2025, February 4). Carl Sagan Predicts the Decline of America: Unable to Know "What's True," We Will Slide, "Without Noticing, Back into Superstition & Darkness" (1995). Retrieved from https://www.openculture.com/2025/02/carl-sagan-predicts-the-decline-of-america-unable-to-know-whats-true.html
- Snopes. (2025, July 7). Is This Carl Sagan's 'Foreboding of an America'? Retrieved from https://www.snopes.com/fact-check/carl-sagans-foreboding-of-an-america/
- ETQ. (2025, August 12). States That Have Lost the Most Manufacturing Jobs Since the Turn of the Century. Retrieved from https://www.etq.com/blog/states-that-have-lost-the-most-manufacturing-jobs-since-the-turn-of-the-century/
- Federal Reserve Bank of St. Louis. (2025, August 12). The Sluggish Renaissance of U.S. Manufacturing. Retrieved from https://www.stlouisfed.org/on-the-economy/2025/aug/sluggish-renaissance-us-manufacturing
- Federal Reserve Bank of Cleveland. (2025, October 9). Where Could Reshoring Manufacturers Find Workers? District Data Brief. Retrieved from https://www.clevelandfed.org/publications/cleveland-fed-district-data-brief/2025/cfddb-20251009-where-could-reshoring-manufacturers-find-workers
- Kearney. (2024, April 24). Kearney Releases 2024 Reshoring Index: 11th Annual Report on Reshoring and Nearshoring Finds "Made in America, For America" a Lasting Trend. PR Newswire. Retrieved from https://www.prnewswire.com/news-releases/kearney-releases-2024-reshoring-index-11th-annual-report-on-reshoring-and-nearshoring-finds-made-in-america-for-america-a-lasting-trend-302125540.html
- BlackRock iShares. (2024). Exploring the rebirth of American manufacturing. Retrieved from https://www.ishares.com/us/insights/exploring-us-manufacturing
- Burns & McDonnell. (2025, August 8). Is a U.S. Manufacturing Comeback on the Horizon? Retrieved from https://blog.burnsmcd.com/is-a-us-manufacturing-comeback-on-the-horizon
- CCN. (2024, December 21). Big Tech Market Dominance Reached 25% of US Stock Market in 2024—Will It Continue to Climb? Retrieved from https://www.ccn.com/news/technology/big-tech-market-dominance-reached-25-in-2024/
- Khanal, S., Zhang, H., & Taeihagh, A. (2025). Why and how is the power of Big Tech increasing in the policy process? The case of generative AI. Policy and Society, 44(1), 52–69. https://doi.org/10.1093/polsoc/puae012
- McKinsey & Company. (2025, July 22). McKinsey technology trends outlook 2025. Retrieved from https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-top-trends-in-tech
- Axis Intelligence. (2025, July 1). Technology Statistics 2025: 847 Essential Data Points Reshaping Our Digital Future. Retrieved from https://axis-intelligence.com/technology-statistics-2025
- Goldman Sachs Asset Management. (2025). Technology in 2025: The Cycle Rolls On. Retrieved from https://am.gs.com/en-no/advisors/insights/article/2025/technology-in-2025-the-cycle-rolls-on
- National Center for Education Statistics. (2025). Learn About the New Condition of Education 2025: Part I. Institute of Education Sciences. Retrieved from https://nces.ed.gov/use-work/resource-library/report/compendium/learn-about-new-condition-education-2025-part-i
- National Center for Education Statistics. (2024). International Comparisons: Reading, Mathematics, and Science Literacy of 15-Year-Old Students. Condition of Education. U.S. Department of Education, Institute of Education Sciences. Retrieved from https://nces.ed.gov/programs/coe/indicator/cnu
- Federal Register. (2025, January 17). Agency Information Collection Activities; Submission to the Office of Management and Budget for Review and Approval; Comment Request; Program for International Student Assessment 2025 (PISA 2025) Main Study. Retrieved from https://www.federalregister.gov/documents/2025/01/17/2025-01202/agency-information-collection-activities-submission-to-the-office-of-management-and-budget-for
- PublicSchoolReview.com. (2025, September 26). Failures of U.S. Public Education in 2025. Retrieved from https://www.publicschoolreview.com/blog/failures-of-us-public-education-in-2025
- Reboot Foundation. (2024, June 3). The State of Critical Thinking in 2020. Retrieved from https://reboot-foundation.org/the-state-of-critical-thinking-2020/
- Critical Thinking Foundation. (n.d.). The State of Critical Thinking Today. Retrieved from https://www.criticalthinking.org/pages/the-state-of-critical-thinking-today/523
- Conversable Economist. (2022, September 5). Higher Education and Critical Thinking. Retrieved from https://conversableeconomist.com/2022/09/06/higher-education-and-critical-thinking/
- American Family Association. (2024, July). The Decline of Critical Thinking. The Stand. Retrieved from https://www.afa.net/the-stand/culture/2024/07/the-decline-of-critical-thinking/
- Medium. (2025, January 27). The State of Education in 2025: The Future is in Trouble. Retrieved from https://medium.com/@nickyverd/the-state-of-education-in-2025-the-future-is-in-trouble-4bd642ef7e97
- Anti-Defamation League. (2025, February 6). Mis- and Disinformation Trends and Tactics to Watch in 2025. Retrieved from https://www.adl.org/resources/article/mis-and-disinformation-trends-and-tactics-watch-2025
- PIRG. (2023, August 14). How misinformation on social media has changed news. Retrieved from https://pirg.org/edfund/articles/misinformation-on-social-media/
- Johns Hopkins SAIS Review. (2025, January 14). Social Media, Disinformation, and AI: Transforming the Landscape of the 2024 U.S. Presidential Political Campaigns. Retrieved from https://saisreview.sais.jhu.edu/social-media-disinformation-and-ai-transforming-the-landscape-of-the-2024-u-s-presidential-political-campaigns/
- Scientific Reports. (2025, March 19). A path forward on online misinformation mitigation based on current user behavior. Scientific Reports, 15, Article 9475. https://doi.org/10.1038/s41598-025-93100-7
- Security.org. (2022, July 25; updated 2025, October 22). 2022 Misinformation and Disinformation Study. Retrieved from https://www.security.org/digital-security/misinformation-disinformation-survey/
- NBC News. (2024, January 18). Disinformation poses an unprecedented threat in 2024 — and the U.S. is less ready than ever. Retrieved from https://www.nbcnews.com/tech/misinformation/disinformation-unprecedented-threat-2024-election-rcna134290
- University of Michigan News. (2025, May 6). Social media disinformation looms over presidential election. Retrieved from https://news.umich.edu/social-media-disinformation-looms-over-presidential-election/
- Center for Inquiry. (2023, July 25). Carl Sagan's Baloney Detection Kit. Retrieved from https://centerforinquiry.org/learning-resources/carl-sagans-baloney-detection-kit/
- Inc. (2024, January 9). The 9 Simple Principles in Physicist Carl Sagan's Baloney Detection Kit Will Make You BS-Proof. Retrieved from https://www.inc.com/jessica-stillman/9-simple-principles-physicist-carl-sagan-baloney-detection-kit-make-you-bs-proof.html
- The Marginalian. (2018, March 31). The Baloney Detection Kit: Carl Sagan's Rules for Bullshit-Busting and Critical Thinking. Retrieved from https://www.themarginalian.org/2014/01/03/baloney-detection-kit-carl-sagan/
- Open Culture. (2025, September 29). Carl Sagan's Baloney Detection Kit: Tools for Thinking Critically & Knowing Pseudoscience When You See It. Retrieved from https://www.openculture.com/?p=1124852
- Shortform. (2022, July 18). The Demon-Haunted World by Carl Sagan: Overview. Retrieved from https://www.shortform.com/blog/the-demon-haunted-world-by-carl-sagan/
- Kirkus Reviews. (n.d.). THE DEMON-HAUNTED WORLD. Retrieved from https://www.kirkusreviews.com/book-reviews/carl-sagan/the-demon-haunted-world/
- Wikipedia. (2024). The Demon-Haunted World. Retrieved from https://en.wikipedia.org/wiki/The_Demon-Haunted_World
- Library of Congress. (1994). The demon-haunted world: science as a candle in the dark : draft. Manuscript/Mixed Material. Retrieved from https://www.loc.gov/item/cosmos000005/
- Deloitte Insights. (2025, July 18). 2025 technology industry outlook. Retrieved from https://www.deloitte.com/us/en/insights/industry/technology/technology-media-telecom-outlooks/technology-industry-outlook.html
- Bain & Company. (2025). Technology Report 2025 - Technology Industry Trends. Retrieved from https://www.bain.com/insights/topics/technology-report/
- ScienceDirect. (2025, January 7). Mapping the terrain of social media misinformation: A scientometric exploration of global research. Retrieved from https://www.sciencedirect.com/science/article/pii/S0001691825000046
- The Fulcrum. (2024, May 10). The decline of critical thinking. Retrieved from https://thefulcrum.us/media-technology/decline-of-critical-thinking
- Momeni, M. (2025). Artificial Intelligence and Political Deepfakes: Shaping Citizen Perceptions Through Misinformation. SAGE Journals. Retrieved from https://journals.sagepub.com/doi/10.1177/09732586241277335
- ScienceDirect. (2025, June 23). Deepfake detection in generative AI: A legal framework proposal to protect human rights. Retrieved from https://www.sciencedirect.com/science/article/pii/S2212473X25000355
- Medium. (2025, September 26). The Dark Side of Generative AI: Handling Deepfakes & Misinformation. AnalytixLabs. Retrieved from https://medium.com/@byanalytixlabs/the-dark-side-of-generative-ai-handling-deepfakes-misinformation-2c0a6e510455
- Phys.org. (2024, December 2). AI was everywhere in 2024's elections, but deepfakes and misinformation were only part of the picture. Retrieved from https://phys.org/news/2024-12-ai-elections-deepfakes-misinformation-picture.html
- PMC. (2025). AI-driven disinformation: policy recommendations for democratic resilience. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC12351547/
- arXiv. (2024, February 6). The World of Generative AI: Deepfakes and Large Language Models. Retrieved from https://arxiv.org/html/2402.04373v1
- MDPI. (2025, July 21). Mapping the Impact of Generative AI on Disinformation: Insights from a Scoping Review. Journalism and Media, 13(3), 33. Retrieved from https://www.mdpi.com/2304-6775/13/3/33
- arXiv. (2023, November 29). Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models. Retrieved from https://arxiv.org/abs/2311.17394
- arXiv. (2025, August 13). Seeing Isn't Believing: Addressing the Societal Impact of Deepfakes in Low-Tech Environments. Retrieved from https://arxiv.org/html/2508.16618v1
- Wikipedia. (2025). Hallucination (artificial intelligence). Retrieved from https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
- Factored. (2025). LLM as a Judge: Evaluating LLM Outputs and the Challenge of Hallucinations. Engineering Blog. Retrieved from https://www.factored.ai/engineering-blog/llm-hallucination-evaluation
- All About AI. (2025, September 8). AI Hallucination Report 2025: Which AI Hallucinates the Most? Retrieved from https://www.allaboutai.com/resources/ai-statistics/ai-hallucinations/
- Scientific Reports. (2025, August 19). "My AI is Lying to Me": User-reported LLM hallucinations in AI mobile apps reviews. Nature, 15. Retrieved from https://www.nature.com/articles/s41598-025-15416-8
- HKS Misinformation Review. (2025, August 27). New sources of inaccuracy? A conceptual framework for studying AI hallucinations. Retrieved from https://misinforeview.hks.harvard.edu/article/new-sources-of-inaccuracy-a-conceptual-framework-for-studying-ai-hallucinations/
- OpenAI. (n.d.). Why language models hallucinate. Retrieved from https://openai.com/index/why-language-models-hallucinate/
- Master of Code. (2023, July 14). Stop LLM Hallucinations: Reduce Errors by 60–80%. Retrieved from https://masterofcode.com/blog/hallucinations-in-llms-what-you-need-to-know-before-integration
- Techopedia. (2024, August 19). Are AI Hallucinations Still a Problem in 2024? Retrieved from https://www.techopedia.com/are-ai-hallucinations-a-problem
- Chelli, M., et al. (2024). Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis. Journal of Medical Internet Research, 26, e53164. https://doi.org/10.2196/53164
- MIT Sloan Teaching & Learning Technologies. (2025, June 30). When AI Gets It Wrong: Addressing AI Hallucinations and Bias. Retrieved from https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/