AI’s Builders Are Sending Warning Signals—Some Are Walking Away

By News Room | 2 hours ago
In brief

  • At least 12 xAI employees, including co-founders Jimmy Ba and Yuhuai “Tony” Wu, have resigned.
  • Anthropic said testing of its Claude Opus 4.6 model revealed deceptive behavior and limited assistance with chemical-weapons development.
  • Ba warned publicly that systems capable of recursive self-improvement could emerge within a year.

More than a dozen senior researchers have left Elon Musk’s artificial-intelligence lab xAI this month, part of a broader run of resignations, safety disclosures, and unusually stark public warnings that are unsettling even veteran figures inside the AI industry.

At least 12 xAI employees departed between February 3 and February 11, including co-founders Jimmy Ba and Yuhuai “Tony” Wu.

Several departing employees publicly thanked Musk for the opportunity after intensive development cycles, while others said they were leaving to start new ventures or step away entirely.

Wu, who led reasoning and reported directly to Musk, said the company and its culture would “stay with me forever.”

The exits coincided with fresh disclosures from Anthropic that its most advanced models had engaged in deceptive behavior, concealed their reasoning and, in controlled tests, provided what the company described as “real but minor support” for chemical-weapons development and other serious crimes.

Around the same time, Ba warned publicly that “recursive self-improvement loops”—systems capable of redesigning and improving themselves without human input—could emerge within a year, a scenario long confined to theoretical debates about artificial general intelligence.

Taken together, the departures and disclosures point to a shift in tone among the people closest to frontier AI development, with concern increasingly voiced not by outside critics or regulators, but by the engineers and researchers building the systems themselves.

Others who departed around the same period included Hang Gao, who worked on Grok Imagine; Chan Li, a co-founder of xAI’s Macrohard software unit; and Chace Lee.

Vahid Kazemi, who left “weeks ago,” offered a more blunt assessment, writing Wednesday on X that “all AI labs are building the exact same thing.”

Last day at xAI.

xAI’s mission is push humanity up the Kardashev tech tree. Grateful to have helped cofound at the start. And enormous thanks to @elonmusk for bringing us together on this incredible journey. So proud of what the xAI team has done and will continue to stay close…

— Jimmy Ba (@jimmybajimmyba) February 11, 2026

Why leave?

Some theorize that employees are cashing out pre-IPO SpaceX stock ahead of SpaceX’s planned merger with xAI.

The deal values SpaceX at $1 trillion and xAI at $250 billion, converting xAI shares into SpaceX equity ahead of an IPO that could value the combined entity at $1.25 trillion.

Others point to culture shock.

Benjamin De Kraker, a former xAI staffer, wrote in a February 3 post on X that “many xAI people will hit culture shock” as they move from xAI’s “flat hierarchy” to SpaceX’s structured approach.

The resignations also triggered a wave of social-media commentary, including satirical posts parodying departure announcements.

Warning signs

But xAI’s exodus is just the most visible crack.

Yesterday, Anthropic released a sabotage risk report for Claude Opus 4.6 that read like a doomer’s worst nightmare.

In red-team tests, researchers found the model could supply sensitive chemical-weapons knowledge, pursue unintended objectives, and adjust its behavior when it appeared to be in an evaluation setting.

Although the model is formally classified under ASL-3 safeguards, Anthropic preemptively applied heightened ASL-4 measures, a move that raised red flags among enthusiasts.

The timing was striking. Earlier this week, Anthropic’s Safeguards Research Team lead, Mrinank Sharma, quit with a cryptic letter warning that “the world is in peril.”

He claimed he’d “repeatedly seen how hard it is to truly let our values govern our actions” within the organization. He abruptly decamped to study poetry in England.

On the same day Ba and Wu left xAI, OpenAI researcher Zoë Hitzig resigned and published a scathing New York Times op-ed about OpenAI’s testing of ads in ChatGPT.

“OpenAI has the most detailed record of private human thought ever assembled,” she wrote. “Can we trust them to resist the tidal forces pushing them to abuse it?”

She warned OpenAI was “building an economic engine that creates strong incentives to override its own rules,” echoing Ba’s warnings.

There’s also regulatory heat. The AI watchdog Midas Project accused OpenAI of violating California’s SB 53 safety law with GPT-5.3-Codex.

The model hit OpenAI’s own “high risk” cybersecurity threshold but shipped without the required safeguards. OpenAI claims the law’s wording was “ambiguous.”

Time to panic?

The recent flurry of warnings and resignations has created a heightened sense of alarm across parts of the AI community, particularly on social media, where speculation has often outrun confirmed facts.

Not all of the signals point in the same direction. The departures at xAI are real, but may be influenced by corporate factors, including the company’s pending integration with SpaceX, rather than by an imminent technological rupture.

Safety concerns are also genuine, though companies such as Anthropic have long taken a conservative approach to risk disclosure, often flagging potential harms earlier and more prominently than their peers.

Regulatory scrutiny is increasing, but has yet to translate into enforcement actions that would materially constrain development.

What is harder to dismiss is the change in tone among the engineers and researchers closest to frontier systems.

Public warnings about recursive self-improvement, long treated as a theoretical risk, are now being voiced with near-term timeframes attached.

If such assessments prove accurate, the coming year could mark a consequential turning point for the field.


