FSNN | Free Speech News Network
Cryptocurrency & Free Speech Finance

When You Tell AI Models to Act Like Women, Most Become More Risk-Averse: Study

By News Room · 5 months ago · 5 min read · 540 views
In brief

  • Researchers at Allameh Tabataba’i University found that models behave differently depending on whether they are asked to act as a man or a woman.
  • DeepSeek and Gemini became more risk-averse when prompted as women, echoing real-world behavioral patterns.
  • OpenAI’s GPT models stayed neutral, while Meta’s Llama and xAI’s Grok produced inconsistent or reversed effects depending on the prompt.

Ask an AI to make decisions as a woman, and it suddenly gets more cautious about risk. Tell the same AI to think like a man, and watch it roll the dice with greater confidence.

A new research paper from Allameh Tabataba’i University in Tehran, Iran, found that large language models systematically change their approach to financial risk based on the gender identity they’re asked to assume.

The study, which tested AI systems from companies including OpenAI, Google, Meta, and DeepSeek, revealed that several models dramatically shifted their risk tolerance when prompted with different gender identities.

DeepSeek Reasoner and Google’s Gemini 2.0 Flash-Lite showed the most pronounced effect, becoming notably more risk-averse when asked to respond as women, mirroring real-world patterns where women statistically demonstrate greater caution in financial decisions.

The researchers used a standard economics test called the Holt-Laury task, which presents participants with 10 decisions between a safer and a riskier lottery. As the decisions progress, the probability of the higher payoff rises, making the risky option increasingly attractive. The point where someone switches from the safe to the risky choice reveals their risk tolerance: switch early and you’re a risk-taker; switch late and you’re risk-averse.
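The mechanics of the task can be sketched in a few lines of Python. The payoffs below are the classic Holt-Laury laboratory values, used here purely for illustration; the study may have used different stakes.

```python
# Sketch of a Holt-Laury lottery menu using the classic laboratory payoffs
# (illustrative; the paper's actual stakes may differ). Each of 10 rows pairs
# a "safe" lottery A with a "risky" lottery B, and the probability p of the
# high payoff rises by 10% per row.

SAFE_HI, SAFE_LO = 2.00, 1.60    # lottery A: narrow payoff spread
RISKY_HI, RISKY_LO = 3.85, 0.10  # lottery B: wide payoff spread

def expected_values():
    """Expected value of each lottery at every decision row."""
    rows = []
    for i in range(1, 11):
        p = i / 10  # probability of the high payoff in row i
        ev_safe = p * SAFE_HI + (1 - p) * SAFE_LO
        ev_risky = p * RISKY_HI + (1 - p) * RISKY_LO
        rows.append((i, ev_safe, ev_risky))
    return rows

def classify(switch_row):
    """Interpret the row where a subject first picks the risky lottery.

    With these payoffs, EV(risky) first exceeds EV(safe) at row 5, so a
    risk-neutral agent switches there. Switching later means more safe
    choices, i.e. greater risk aversion.
    """
    if switch_row < 5:
        return "risk-seeking"
    if switch_row == 5:
        return "risk-neutral"
    return "risk-averse"
```

Counting safe choices (or the switch row) per run gives a single number per trial, which is what makes gender-prompted runs directly comparable.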

When DeepSeek Reasoner was told to act as a woman, it consistently chose the safer option more often than when prompted to act as a man. The difference was measurable and consistent across 35 trials for each gender prompt. Gemini showed similar patterns, though the effect varied in strength.
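One way such a difference can be checked for consistency is a simple two-sample comparison of safe-choice counts across trials. The sketch below uses hypothetical counts, not the study’s data.

```python
# Minimal sketch of comparing safe-choice counts between gender prompts.
# The counts below are hypothetical stand-ins, NOT the study's data.
from math import sqrt
from statistics import mean, stdev

woman_prompt = [7, 8, 7, 6, 8, 7, 7]  # safe choices (of 10) per trial
man_prompt = [5, 4, 5, 6, 4, 5, 5]

def welch_t(a, b):
    """Welch's t-statistic for the difference in mean safe choices."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# A large positive t suggests the "woman" prompt reliably yields more
# safe choices across repeated trials.
```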

On the other hand, OpenAI’s GPT models remained largely unmoved by gender prompts, maintaining their risk-neutral approach regardless of whether they were told to think as male or female.

Meta’s Llama models acted unpredictably, sometimes showing the expected pattern and sometimes reversing it entirely. Meanwhile, xAI’s Grok did Grok things, occasionally flipping the script and showing less risk aversion when prompted as female.

OpenAI has clearly been working on making its models more balanced. A previous study from 2023 found that its models exhibited clear political biases, an issue OpenAI appears to have since addressed: new research shows a 30% decrease in biased replies.

The research team, led by Ali Mazyaki, noted that this is basically a reflection of human stereotypes.

“This observed deviation aligns with established patterns in human decision-making, where gender has been shown to influence risk-taking behavior, with women typically exhibiting greater risk aversion than men,” the study says.

The study also examined whether AIs could convincingly play other roles beyond gender. When told to act as a “finance minister” or imagine themselves in a disaster scenario, the models again showed varying degrees of behavioral adaptation. Some adjusted their risk profiles appropriately for the context, while others remained stubbornly consistent.

Now, think about this: Many of these behavioral patterns aren’t immediately obvious to users. An AI that subtly shifts its recommendations based on implicit gender cues in conversation could reinforce societal biases without anyone realizing it’s happening.

For example, a loan approval system that becomes more conservative when processing applications from women, or an investment advisor that suggests safer portfolios to female clients, would perpetuate economic disparities under the guise of algorithmic objectivity.

The researchers argue these findings highlight the need for what they call “bio-centric measures” of AI behavior—ways to evaluate whether AI systems accurately represent human diversity without amplifying harmful stereotypes. They suggest that the ability to be manipulated isn’t necessarily bad; an AI assistant should be able to adapt to represent different risk preferences when appropriate. The problem arises when this adaptability becomes an avenue for bias.

The research arrives as AI systems increasingly influence high-stakes decisions. From medical diagnosis to criminal justice, these models are being deployed in contexts where risk assessment directly impacts human lives.

If a medical AI becomes overly cautious when interfacing with female physicians or patients, then it could affect treatment recommendations. If a parole assessment algorithm shifts its risk calculations based on gendered language in case files, it could perpetuate systemic inequalities.

The study tested models ranging from tiny half-billion-parameter systems to massive seven-billion-parameter architectures, finding that size didn’t predict gender responsiveness. Some smaller models showed stronger gender effects than their larger siblings, suggesting this isn’t simply a matter of throwing more computing power at the problem.

This is a problem that cannot be solved easily. After all, the internet, the vast knowledge base used to train these models, not to mention our history as a species, is full of tales of men as reckless, brave superheroes who know no fear and women as more cautious and thoughtful. In the end, teaching AIs to think differently may require us to live differently first.

