FSNN | Free Speech News Network
Debates

AI Hype vs. Natural Intelligence Breakthroughs

By News Room · 5 months ago · 12 Mins Read · 1,196 Views

When the creative minds of science fiction got us thinking about how AI might be introduced to the world, and about the impact it would have on unprepared societies, they didn’t imagine a tech industry dominated by algorithmic supremacists who put hype before science. Looking back at cultural touchstones in novels and film, there was a general creative consensus that, when the time came to unleash an AI, Big Tech would undersell and over-deliver.

Creators pictured corporations producing a robotic companion for the lonely, a synthetic child for the bereaved, a caregiver for the infirm, or a butler for the privileged. They anticipated a world in which scientific innovations would be licensed by businesses to create products with specific in-demand uses, and in which those products would competently perform the tasks they were designed for. The proven science would drive the business model. In general, the marketing strategy would not require deception or hype, at least no more than any other product launch.

Of course, surprises later emerge (good storytelling needs its twists) as the AI becomes self-aware and horrors follow. But the bad things happen because the scope of human knowledge behind the innovation was incomplete, not because business needs demanded scientific fraud. In the Terminator series, Skynet is created as a defensive tool with specific operational limits, and when the AI goes rogue and steps outside those limits, the innovating company makes every effort to shut its creation down. Even if you don’t like the Skynet product due to a personal aversion to the defence industry, it is portrayed as transparently developed for a useful purpose, not sold or hyped as humanity’s incoming digital overlord.

The same cannot be said for the current work being done by algorithmic supremacists at OpenAI, Meta, Alphabet, xAI, and their ilk. They are choosing to quickly release half-baked products based on business hype, not the long-established pillars of scientific progress like transparency, reproducibility, replicability, and adherence to research ethics. This is not to say that the AI products currently on the market aren’t impressive. The technology behind them is the stuff of wonder for many of us. Nor is it to imply that algorithmic supremacists are insincere in their stated commitment to use algorithmic thinking to change the world. They believe in the power of the algorithm so deeply that they are prepared to sacrifice good business practices and scientific principles in support of building a techno-maximalist future unencumbered by any meaningful market, regulatory, or ethical constraints.

In fact, it is the reality of those two qualifications that makes our AI age so strange. One would have expected that, with this impressive technology and sincere ideological commitments to algorithmic supremacy in hand, the techno elites would find hucksterism at the very least unnecessary, if not overtly distasteful. In normal times, the ritual of opening our news feeds to be confronted with bold pronouncements about how amazing AI has become and how our world will never be the same, with few accompanying changes on the ground, would elicit serious and widespread scepticism. If the tool is everything algorithmic supremacists say it is, it should sell itself through performance and open scientific inquiry about trade-offs. So why all the hustling to make us less informed about the full costs of adoption?

In a crowded field of contenders, OpenAI CEO Sam Altman may be the most vocal person offering ungrounded proclamations. In June, he wrote: “Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have. In some big sense, ChatGPT is already more powerful than any human who has ever lived.” The first sentence has merit. Scientific progress is a big driver of societal progress. But the second sentence is, at best, unscientific. There is no scientific basis for the assertion that ChatGPT is superior to every human in history. And just two months after making this claim, the disappointing launch of GPT-5 forced Altman to walk back the verbiage a little bit. He now says that AGI, or human-level AI, is “not a super useful term.”

Some readers may be wondering why we should care about this techno-class of carnival barkers. Selling snake oil has a storied place in American capitalism, after all. But this is something different. AI, even in its LLM configuration, is not a product devoid of actual innovation or scientific advancement. Even so, its deployment is operationalised to hurt scientific progress in other disciplines. And mainstreaming a false equivalence between current AI (basically probability calculators with zero understanding) and the wonders of human cognition may result in societal harm. Altman and his ilk have been dominating the headlines with their unjustified pronouncements about artificial intelligence for years now.
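The “probability calculators” characterisation can be made concrete. The toy sketch below is a made-up bigram counter, vastly simpler than any real LLM, but it shows the core move being described: turning a context into a probability distribution over possible next tokens and picking from it, with no model of meaning anywhere in the loop.

```python
# Toy illustration (not any real LLM): a language model maps a context
# to a probability distribution over next tokens and samples from it.
# The corpus below is invented purely for the sketch.
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each context word.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_token_probs(context):
    """Return P(next | context) estimated from bigram counts."""
    follows = {nxt: c for (ctx, nxt), c in bigrams.items() if ctx == context}
    total = sum(follows.values())
    return {nxt: c / total for nxt, c in follows.items()}

probs = next_token_probs("the")
# "the" is followed by cat (2x), mat (1x), fish (1x), so "cat" wins
print(max(probs, key=probs.get))  # → cat
```

Scaled up by many orders of magnitude and swapped from counted bigrams to learned neural weights, this context-in, token-probabilities-out loop is still the basic operation.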

With AI hype taking up all the air space, there is little room for public consideration of the recent cavalcade of mind-blowing findings on natural intelligence. In fact, researchers must fight hard simply to remind society of long-known scientific truths. It was truly astounding to see the editors at Nature devote an editorial to “recognizing the importance of human-generated scientific writing,” because writing is part of, not separate from, the process of thinking, and outsourcing it to machines will come with a serious cognitive cost.

Algorithmic supremacists like to describe our brains as meat-based computers. They designed AI on a neural net model of computing to mimic their understanding of how our brains operate. When an AI “thinks,” it runs computations modelled on interconnected artificial neurons arranged in layers. But researchers have been demonstrating for decades that human thinking is not restricted to the activity of the neural system in the brain. Instead, they have been advancing an embodied cognition paradigm.
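For readers unfamiliar with the architecture in question, here is a minimal sketch of the layered computation a neural net performs: each artificial “neuron” takes a weighted sum of its inputs and passes it through a nonlinearity, layer by layer. The weights below are arbitrary illustrative values, not a trained model.

```python
# Minimal feedforward net: signals flow through layers of weighted
# connections. Each neuron computes sigmoid(weights · inputs + bias).
import math

def layer(inputs, weights, biases):
    """Apply one layer: weighted sum per neuron, then a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                        # input signal
h = layer(x, [[1.0, -0.5], [0.3, 0.8]], [0.0, 0.1])    # hidden layer, 2 neurons
y = layer(h, [[2.0, -1.0]], [-0.5])                    # output layer, 1 neuron
print(y)
```

This is the entire metaphor: stacked layers of weighted sums. The embodied cognition research discussed next is a challenge to the assumption that such a structure captures what brains, let alone whole bodies, actually do.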

Recent developments in this field are nothing short of astounding. In May 2023, researchers argued that human cognition needs to be conceived as a multiscale web of dynamic information-processing distributed across cellular and network systems that operate throughout the entire body. The rationale for challenging the established orthodoxy is persuasive: the human brain is one part of the human body. It is made of cells, just like every other organ in the body. Why focus only on neural networks to understand human cognition? What happens if we look not only at neural but also at metabolic, cellular, and immunological processes? The research team from Lisbon and Tufts argues that non-neural cells should also be described as “active cognisers.” They suggest that cognitive processes are better understood as “multiscale processes implemented at multilevel bodily systems and intricate cellular networks that compose the biological human organism as a whole.”

At around the same time, a different group of researchers, from Washington University School of Medicine, made a breakthrough: they found a previously unknown system in a part of the brain responsible for the movement of specific body parts. They named it the somato-cognitive action network (SCAN) and showed how it supports mind–body integration, allowing the brain to anticipate upcoming changes in physiological demands based on planned actions. SCAN enables pre-action anticipatory changes in posture, breathing, and the cardiovascular system, like shoulder tension, an increased heart rate, and “butterflies in the stomach.” Reuters did cover this story, but it didn’t get much traction.

Why aren’t we obsessively talking about the fact that action and body control are melded in a common circuit? Why aren’t we wonderstruck by the fact that there is now a neuroanatomical explanation for “why ‘the body’ and ‘the mind’ aren’t separate or separable”? As the study’s lead author Nico Dosenbach casually explains, the mind–body dualism at the heart of the algorithmic supremacists’ belief system is simply not compatible with contemporary neuroscience. And yet mainstream discussions of AI take for granted the assumption that a neural net architecture reproduces human cognition.

Perhaps most impressively, scientists have found empirical support for a new conception of how our brains interact in social settings that aligns with what was previously regarded as New Age hippie silliness. Scientific American ran a story in July 2023 summarising research that found our brain waves synchronise when we interact with other people or share an experience with them. It comes from a relatively new stream of neuroscience that looks at collective effects. Scientists report that “neurons in corresponding locations of the different brains fire at the same time, creating matching patterns, like dancers moving together.” The experience of “being on the same wavelength” as another person is real, and it is visible in the activity of the brain.

Collective neuroscience now appears to show that the neural waves of people attending a concert match those of the performers, and the greater the synchrony in brain waves, the higher the expressed enjoyment by all who were there. When students are engaged, their brain waves align with those of the teacher. Couples and close friends show higher degrees of brain synchrony than other acquaintances. This explains why we don’t always “click” with certain folks. To me, this finding is far more earth-shattering than any AI advancement. It is forcing me to rethink social interactions in physical settings in a whole new way. Frankly, it’s a way of thinking that I once dismissed, relegating it to the pile of pseudoscience. And I know I’m not alone in this dismissive take. So why aren’t more people sharing findings that challenge some of our most long-held beliefs? 
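As a rough illustration of what “synchrony” means operationally: inter-brain studies quantify how similar two recorded signals are over time. The sketch below uses synthetic sine waves and plain Pearson correlation as a stand-in for the more sophisticated measures used in the actual research; it is only meant to show why a nearly in-phase pair scores as “on the same wavelength” while an out-of-phase pair does not.

```python
# Synthetic "brain wave" signals compared by Pearson correlation.
# Real studies use EEG/fNIRS recordings and richer synchrony metrics;
# this is a toy proxy for the underlying idea.
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

t = [i / 10 for i in range(100)]
performer  = [math.sin(2 * math.pi * x) for x in t]
engaged    = [math.sin(2 * math.pi * x + 0.1) for x in t]  # nearly in phase
distracted = [math.sin(2 * math.pi * x + 1.5) for x in t]  # out of phase

print(pearson(performer, engaged) > pearson(performer, distracted))  # → True
```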

Even relatively small embodied cognition discoveries should be shaking up how we live. Take the insight that if we want to get a good read on a friend’s emotional state, we should eat spicy food: doing so enhances our emotion perception and makes us more aware of facial expressions signalling anger and disgust. Or, on romantic outings, head to the dessert spot, because experiencing the taste of sweetness has been linked to seeing people as more attractive. We are so quick to turn to AI and apps for help with these social challenges while remaining incurious about easily accessible biological boosters.

What makes the challenge of publicising natural intelligence research even tougher is that, too often, the critiques of AI that break into mainstream attention are written by those ultimately aligned with algorithmic supremacy. A recent piece of scientific research directly contradicting Big Tech’s hype, with findings suggesting that AI is a poor tool for writing, got a good amount of media attention and generated interesting online discussions in the critical community.

In the study, researchers divided students into three groups and tasked them with writing an essay. One group was to use LLMs to help them write, another was granted access to search engines, while a third was to rely on their natural intelligence only and work without any external supportive tools. The MIT research team used electroencephalography (EEG) to assess cognitive load during the essay writing task. They found significant differences in brain connectivity between the groups. Unsurprisingly, the natural intelligence cohort showed the strongest connectivity, while search engine users had moderate neural engagement. And the chatbot users? Their brains displayed the weakest connectivity by a mile. The researchers concluded decisively that cognitive activity scales down in relation to external tool use. They also found that self-reported ownership of essays was the lowest in the chatbot group, who struggled to accurately quote their own work, and highest in the natural intelligence group.

I was so excited by this research when it came out that I went to hear one of the study’s authors, Nataliya Kosmyna, a Research Scientist at MIT Media Lab, speak at an Andus Labs event in July of this year. I wanted to know if there were plans for future studies that would extend the investigations into body implications. We know about brain connectivity stimulated by learning, emotions, memories… so what about measuring responses in the heart or other muscles? My takeaway from their study is that using the tool of AI, at least in the context of essay writing, means you learn nothing. Your brain doesn’t even get a workout. Which means that this tool for crafting generic and uninteresting output should not be used by students. But apparently, that’s not quite the message the authors intended.

Kosmyna emphasised that she and her team were narrowly looking at brain connectivity, which is a fair and academically respectable qualification. And in her talk, she rationalised the findings with a common-sense narrative of how writing with natural intelligence forces the writer to draw on memories and emotions, while writing with a bot disconnects the writer so far from the written output that they don’t even feel a sense of ownership of the words. 

But then her talk turned into a hustle for NeuroChat, an AI tutor project she was developing with MIT Labs. The product uses Muse’s consumer EEG headset to adjust the AI tutoring experience based on brain connectivity responses. The point of the critical research was to set the stage for a new product that would overcome the perceived weakness in existing AI assistance. I shelved my lingering questions. Clearly, there was little curiosity about embodied cognition implications for future research. Ultimately, even the critical papers are still authored by algorithmic supremacists who want in on the AI hustle.

At the end of the day, if we’re not limiting innovation, we need to focus on the worldview of the innovators. In my own research, I call the opposite of algorithmic supremacy “artful thinking.” It’s thinking with our body… our hands, eyes, ears, hearts, guts, and brains. It’s using the environment we are in to support us. It’s engaging our bodies in physical spaces to think through real-world actions. We have so many underutilised cognitive resources at our disposal. I call this BEAM (body, environment, action, mind).

Algorithmic supremacists want to keep us isolated. When I attend industry conferences I keep hearing about an agent-to-agent future, where even the internet as we know it disappears, and all communication is between our personal bots. Meanwhile, underfunded and under-platformed scientists are discovering the massive untapped potential in our physical bodies, natural intelligence, and the magic that happens when we come together, human-to-human, not agent-to-agent. Writing is thinking. Socialising is thinking. Taking action is thinking. What else will science teach us about ourselves in the coming years? And will AI let the message reach us?



© 2026 GlobalBoost Media. All Rights Reserved.