FSNN | Free Speech News Network
Cryptocurrency & Free Speech Finance

Will Artificial Intelligence Save Humanity—Or End It?

By News Room | 1 week ago | 5 min read | 1,657 views
In brief

  • An online panel showcased a deep divide between transhumanists and technologists over AGI.
  • Author Eliezer Yudkowsky warned that current “black box” AI systems make extinction an unavoidable outcome.
  • Max More argued that delaying AGI could cost humanity its best chance to defeat aging and prevent long-term catastrophe.

A sharp divide over the future of artificial intelligence played out this week as four prominent technologists and transhumanists debated whether building artificial general intelligence, or AGI, would save humanity or destroy it.

The panel, hosted by the nonprofit Humanity+, brought together one of the most vocal AI “Doomers,” Eliezer Yudkowsky, who has called for shutting down advanced AI development, alongside philosopher and futurist Max More, computational neuroscientist Anders Sandberg, and Humanity+ President Emeritus Natasha Vita‑More.

Their discussion revealed fundamental disagreements over whether AGI can be aligned with human survival or whether its creation would make extinction unavoidable.

The “black box” problem

Yudkowsky warned that modern AI systems are fundamentally unsafe because their internal decision-making processes cannot be fully understood or controlled.

“Anything black box is probably going to end up with remarkably similar problems to the current technology,” Yudkowsky warned. He argued that humanity would need to move “very, very far off the current paradigms” before advanced AI could be developed safely.

Artificial general intelligence refers to a form of AI that can reason and learn across a wide range of tasks, rather than being built for a single job like text, image, or video generation. AGI is often associated with the idea of the technological singularity, because reaching that level of intelligence could enable machines to improve themselves faster than humans can keep up.

Yudkowsky pointed to the “paperclip maximizer” analogy popularized by philosopher Nick Bostrom to illustrate the risk. The thought experiment imagines a hypothetical AI that, fixated on a single objective, converts all available matter into paperclips at the expense of humanity. Adding more objectives, Yudkowsky said, would not meaningfully improve safety.

Referring to the title of his recent book on AI, “If Anyone Builds It, Everyone Dies,” Yudkowsky said: “Our title is not like it might possibly kill you. Our title is, if anyone builds it, everyone dies.”

But More challenged the premise that extreme caution offers the safest outcome. He argued that AGI could provide humanity’s best chance to overcome aging and disease.

“Most importantly to me, is AGI could help us to prevent the extinction of every person who’s living due to aging,” More stated. “We’re all dying. We’re heading for a catastrophe, one by one.” He warned that excessive restraint could push governments toward authoritarian controls as the only way to stop AI development worldwide.

Sandberg positioned himself between the two camps, describing himself as “more sanguine” while remaining more cautious than transhumanist optimists. He recounted a personal experience in which he nearly used a large language model to assist with designing a bioweapon, an episode he described as “horrifying.”

“We’re getting to a point where amplifying malicious actors is also going to cause a huge mess,” Sandberg said. Still, he argued that partial or “approximate safety” could be achievable. He rejected the idea that safety must be perfect to be meaningful, suggesting that humans could at least converge on minimal shared values such as survival.

“So if you demand perfect safety, you’re not going to get it. And that sounds very bad from that perspective,” he said. “On the other hand, I think we can actually have approximate safety. That’s good enough.”

Skepticism of alignment

Vita-More criticized the broader alignment debate itself, arguing that the concept assumes a level of consensus that does not exist even among longtime collaborators.

“The alignment notion is a Pollyanna scheme,” she said. “It will never be aligned. I mean, even here, we’re all good people. We’ve known each other for decades, and we’re not aligned.”

She described Yudkowsky’s claim that AGI would inevitably kill everyone as “absolutist thinking” that leaves no room for other outcomes.

“I have a problem with the sweeping statement that everyone dies,” she said. “Approaching this as a futurist and a pragmatic thinker, it leaves no consequence, no alternative, no other scenario. It’s just a blunt assertion, and I wonder whether it reflects a kind of absolutist thinking.”

The discussion included a debate over whether closer integration between humans and machines could mitigate the risk posed by AGI—something Tesla CEO Elon Musk has proposed in the past. Yudkowsky dismissed the idea of merging with AI, comparing it to “trying to merge with your toaster oven.”

Sandberg and Vita-More argued that, as AI systems grow more capable, humans will need to integrate or merge more closely with them to better cope with a post-AGI world.

“This whole discussion is a reality check on who we are as human beings,” Vita-More said.

The FSNN News Room is the voice of our in-house journalists, editors, and researchers. We deliver timely, unbiased reporting at the crossroads of finance, cryptocurrency, and global politics, providing clear, fact-driven analysis free from agendas.

