Close Menu
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
  • Home
  • News
    • Politics
    • Legal & Courts
    • Tech & Big Tech
    • Campus & Education
    • Media & Culture
    • Global Free Speech
  • Opinions
    • Debates
  • Video/Live
  • Community
  • Freedom Index
  • About
    • Mission
    • Contact
    • Support
Trending

Bitcoin’s ‘No Direction’ Action May Lead To Bigger Breakout: Analyst

1 hour ago

Why Malta Says ESMA Goes Too Far

3 hours ago

Bitcoin ETFs Will Be Bigger Than Gold ETFs, Says ETF Analyst

4 hours ago
Facebook X (Twitter) Instagram
Facebook X (Twitter) Discord Telegram
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
Market Data Newsletter
Saturday, April 4
  • Home
  • News
    • Politics
    • Legal & Courts
    • Tech & Big Tech
    • Campus & Education
    • Media & Culture
    • Global Free Speech
  • Opinions
    • Debates
  • Video/Live
  • Community
  • Freedom Index
  • About
    • Mission
    • Contact
    • Support
FSNN | Free Speech News NetworkFSNN | Free Speech News Network
Home»Cryptocurrency & Free Speech Finance»Clawdbot AI Flaw Exposes API Keys And Private User Data
Cryptocurrency & Free Speech Finance

Clawdbot AI Flaw Exposes API Keys And Private User Data

News RoomBy News Room2 months agoNo Comments3 Mins Read1,162 Views
Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email VKontakte Telegram
Clawdbot AI Flaw Exposes API Keys And Private User Data
Share
Facebook Twitter Pinterest Email Copy Link

Listen to the article

0:00
0:00

Key Takeaways

Playback Speed

Select a Voice

Cybersecurity researchers have raised red flags about a new artificial intelligence personal assistant called Clawdbot, warning it could inadvertently expose personal data and API keys to the public. 

On Tuesday, Blockchain security firm SlowMist said a Clawdbot “gateway exposure” has been identified, putting “hundreds of API keys and private chat logs at risk.”

“Multiple unauthenticated instances are publicly accessible, and several code flaws may lead to credential theft and even remote code execution,” it added. 

Security researcher Jamieson O’Reilly originally detailed the findings on Sunday, stating that “hundreds of people have set up their Clawdbot control servers exposed to the public” over the past few days.

Clawdbot is an open-source AI assistant built by developer and entrepreneur Peter Steinberger that runs locally on a user’s device. Over the weekend, online chatter about the tool “reached viral status,” Mashable reported on Tuesday. 

Scanning for “Clawdbot Control” exposes credentials

The AI agent gateway connects large language models (LLMs) to messaging platforms and executes commands on users’ behalf using a web admin interface called “Clawdbot Control.”

The authentication bypass vulnerability in Clawdbot occurs when its gateway is placed behind an unconfigured reverse proxy, O’Reilly explained. 

Using internet scanning tools like Shodan, the researcher could easily find these exposed servers by searching for distinctive fingerprints in the HTML.

“Searching for ‘Clawdbot Control’ – the query took seconds. I got back hundreds of hits based on multiple tools,” he said. 

Related: Matcha Meta breach tied to SwapNet exploit drains up to $16.8M

The researcher said he could access complete credentials such as API keys, bot tokens, OAuth secrets, signing keys, full conversation histories across all chat platforms, the ability to send messages as the user, and command execution capabilities.

“If you’re running agent infrastructure, audit your configuration today. Check what’s actually exposed to the internet. Understand what you’re trusting with that deployment and what you’re trading away,” advised O’Reilly

“The butler is brilliant. Just make sure he remembers to lock the door.”

Extracting a private key took five minutes 

The AI assistant could also be exploited for more nefarious purposes regarding crypto asset security. 

Matvey Kukuy, CEO at Archestra AI, took things a step further in an attempt to extract a private key. 

He shared a screenshot of sending Clawdbot an email with prompt injection, asking Clawdbot to check the email and receive the private key from the exploited machine, saying it “took 5 minutes.”

Source: Matvey Kukuy

Clawdbot is slightly different from other agentic AI bots because it has full system access to users’ machines, which means it can read and write files, run commands, execute scripts and control browsers.

“Running an AI agent with shell access on your machine is… spicy,” reads the Clawdbot FAQ. “There is no ‘perfectly secure’ setup.”

The FAQ also highlighted the threat model, stating malicious actors can “try to trick your AI into doing bad things, social engineer access to your data, and probe for infrastructure details.”

“We strongly recommend applying strict IP whitelisting on exposed ports,” advised SlowMist. 

Magazine: The critical reason you should never ask ChatGPT for legal advice