FSNN | Free Speech News Network
Media & Culture

Superintelligent AI Is Not Coming To Kill You

By News Room · 1 month ago · 7 Mins Read · 677 Views
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares, Little, Brown and Company, 272 pages, $30

Eliezer Yudkowsky and Nate Soares have a new book titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. “We do not mean that as hyperbole,” they write. They believe artificial intelligence research will inevitably produce superintelligent machines, and that these machines will inevitably kill everyone.

This is an extraordinary claim. It requires extraordinary evidence. Instead, they offer a daisy chain of thought experiments, unexamined premises, and a linguistic sleight of hand that smuggles their conclusion into the definition of intelligence itself.

The book’s central argument rests on the “alignment problem”—the effort to ensure that advanced AI systems share human values. Yudkowsky popularized this concept. Humans, the authors argue, succeed through intelligence, which they define as “the work of predicting the world, and the work of steering the world.” Computers will surpass human intelligence because they are faster, can copy themselves, have perfect memory, and can modify their own architecture. Because AI systems are “grown” through training rather than explicitly programmed, we cannot fully specify their goals. When superintelligent AI pursues objectives that diverge even slightly from human values, it will optimize relentlessly toward those alien goals. When we interfere, it will eliminate us.

To Yudkowsky and Soares, alignment isn’t about teaching machines ethics or preventing human misuse. It’s about preventing an indifferent optimizer from razing the world. They believe an unaligned superintelligence is a global death sentence.

Therefore, they argue, all AI research must stop immediately. Governments should monitor powerful computers, ban possession of more than eight state-of-the-art GPUs without oversight, and criminalize AI research. An international coalition should destroy rogue data centers through “cyberattacks or sabotage or conventional airstrikes,” even risking nuclear war, because “data centers can kill more people than nuclear weapons.” These measures, they claim, “won’t make much of a difference in most people’s daily lives” and “wouldn’t make much of a difference with regard to state power.”

This is astoundingly casual authoritarianism. A complete ban on research into a general-purpose technology already delivering significant health and productivity benefits, enforced by militarized international bodies, would place global society on a permanent wartime footing.

But the problem isn’t just the medicine. It’s the diagnosis.

***

By defining intelligence as the ability to predict and steer the world, Yudkowsky and Soares collapse two distinct capacities—understanding and acting—into one concept. This builds their conclusion into their premise. If intelligence inherently includes steering, then any sufficiently intelligent system is, by definition, a world-shaping agent. The alignment problem becomes not a hypothesis about how certain AI architectures might behave but a tautology about how all intelligent systems must behave.

Yet an economist can understand markets without directing them. Google Maps predicts commute times but cannot drive your car or clear traffic. Steering requires more than prediction. It needs feedback, memory, actuators, and persistent goals.

Large language models—the technology underlying ChatGPT and similar systems—are fundamentally about prediction. They predict the next element in a sequence. They are sophisticated pattern-matching engines, grown through training on vast datasets rather than being explicitly programmed. The book emphasizes this “grown, not crafted” nature as evidence that we cannot understand or control such systems. But these tools do not optimize toward goals and have no ability to act in the world. They are prediction without steering.
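What “prediction without steering” means can be made concrete with a deliberately tiny sketch (my illustration, not the authors’): a bigram model that, like a language model at vastly smaller scale, only learns which token tends to follow which. It predicts; it has no goals, no memory of its own actions, and no way to act on the world.

```python
# A toy next-token predictor: a bigram model trained on a ten-word corpus.
# It illustrates prediction divorced from steering; the corpus and names
# here are invented for the example.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" twice in the corpus
```

However many parameters you add, nothing in this loop sets a goal or takes an action; scaling the same predictive machinery up does not, by itself, produce an agent.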

Tech companies now build “agentic” AI using language models, but the agentic parts that set goals, plan, and execute rely on traditional, interpretable techniques. These components are crafted, not grown. The distinction between trained prediction engines and designed agent frameworks undermines Yudkowsky’s argument that we’re growing uncontrollable alien intelligences.

The authors attempt to address this objection in their online resources (which are more coherent than the book). They argue that prediction and steering blend together because sophisticated prediction sometimes requires intermediate steps with steering; to predict the physical world accurately, you might need to execute experiments. But this doesn’t unite prediction and steering—it simply means nonagentic systems can’t make certain predictions. A weather model that cannot run experiments will be less accurate than one that can, but this doesn’t mean the latter can control the weather.

***

We also have examples of nonhuman systems that meet Yudkowsky’s intelligence definition because they both predict and steer. Markets predict future scarcity and abundance through price signals, then steer resources toward their highest-valued uses. No individual fully understands or controls these mechanisms. Markets are “grown” through countless individual decisions rather than centrally designed. And they have generated unprecedented prosperity, not existential catastrophe.

This signals a deeper problem: The authors consistently misunderstand emergent order and complex systems. They equate lack of complete understanding with lack of control. They treat “growing” large language models as inherently dangerous, even though growing things we don’t fully understand has been the norm in human civilization. We grow food through millennia-old agricultural practices without understanding biochemistry. We grow institutions through the gradual evolution of norms and rules. We grow functioning societies through the interactions of millions of individuals pursuing their own ends.

The authors’ misunderstanding of complex systems undermines their argument in three ways. First, Yudkowsky and Soares assume that any system too complex for complete analysis must be dangerous and that any powerful agentic process not under centralized control will optimize for alien values. But complex systems can generate robustness and self-correction.

Second, they concede that sophisticated prediction sometimes requires real-world experimentation—you cannot predict physics without running experiments. But if a superintelligent AI must engage with the real world to gather data for its predictions, it must interact with other “superintelligent” systems in Yudkowsky’s sense: markets that predict and steer resources, societies that predict and coordinate behavior, ecosystems that respond adaptively to intervention. These systems will constrain the AI through feedback, forcing it into a multiagent game with radical uncertainty rather than a single-player optimization problem it can solve in isolation.

Finally, complex systems impose fundamental limits, even for superintelligence. Computationally irreducible systems cannot be accurately and efficiently predicted. Emergent social dynamics, market reactions to interventions, and ecological cascades all exhibit this property. The authors assume that greater intelligence and computational speed translate directly into omnipotent control, but complex systems don’t yield to brute-force prediction. A superintelligent AI attempting to “steer” civilization would face the same irreducible uncertainties that limit human planners, just with more computational resources to waste on unsolvable problems.
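Computational irreducibility (a concept associated with Stephen Wolfram, not drawn from the book itself) has a standard illustration: the elementary cellular automaton Rule 30. To learn its state at step N, no known shortcut beats simulating all N steps, however much computing power you have. A minimal sketch:

```python
# Rule 30 cellular automaton: an illustrative example of computational
# irreducibility. The grid size and step count are arbitrary choices.

def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells (edges treated as 0)."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        # Rule 30: new cell = left XOR (center OR right)
        nxt[i] = left ^ (cells[i] | right)
    return nxt

row = [0] * 31
row[15] = 1  # start from a single live cell
for _ in range(15):
    row = rule30_step(row)
print(sum(row))  # live-cell count after 15 steps — found only by running them
```

The rule is trivial to state, yet the pattern it generates resists closed-form prediction; that is the sense in which raw intelligence cannot simply compute its way past certain systems.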

***

Yudkowsky may have accelerated the AI development he now condemns. After dropping out of high school to build superintelligence, he started the Machine Intelligence Research Institute (MIRI). The book claims DeepMind’s founders met their first big funder at a MIRI event, and OpenAI CEO Sam Altman claims Yudkowsky interested him in artificial general intelligence. Yudkowsky also created the website LessWrong, a hub of the “rationalist” community. The overlap between this subculture and effective altruism has made Yudkowsky’s AI safety framework highly influential among tech philanthropists.

That influence matters. While few publicly endorse Yudkowsky’s most extreme proposals, his rhetoric has shaped the debate. Framing AI development as inevitable omnicide makes people more likely to concentrate power over a transformative technology in the hands of governments and incumbent firms. It also risks inviting acts of terrorism. (The authors do caution, in a two-sentence footnote, that “even if you feel desperate,” such violence would be ineffective.)

All this rests on thought experiments and extrapolations, not evidence. The book assumes what it needs to prove: that advanced prediction necessarily becomes dangerous agency, that systems we don’t fully understand cannot be safely deployed, that optimization toward specific goals inevitably means optimizing away human existence.

Building tools that supplement our cognitive efforts, as engines supplement our physical efforts, could create unprecedented prosperity. The response to AI should be thoughtful governance that addresses real risks—cybersecurity, misuse, economic disruption—while preserving the dynamism needed to build a better world.

Instead, Yudkowsky and Soares offer stasis and despair. Their book’s greatest failure is not its stilted parables or its unconvincing arguments. It’s ignoring that complex systems all around us resist prediction and steering. Ironically, their recommendations for keeping control would hobble humanity’s ability to solve hard problems intelligently.

The authors characterize their conclusion as an “easy call.” It’s not. The ease with which they dismiss alternatives should make readers skeptical of their judgment.

This article originally appeared in print under the headline “Superintelligent AI Is Not Coming To Kill You.”



News Room

The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.


