Close Menu
TechTost
  • AI
  • Apps
  • Crypto
  • Fintech
  • Hardware
  • Media & Entertainment
  • Security
  • Startups
  • Transportation
  • Venture
  • Recommended Essentials
What's Hot

Cathie Woods’ ARK makes first major investment in startup Lucra — and it’s not AI

OpenAI partners with Infosys to bring AI tools to more businesses

X makes it more expensive to publish links through its API

Facebook X (Twitter) Instagram
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer
Facebook X (Twitter) Instagram
TechTost
Subscribe Now
  • AI

    OpenAI partners with Infosys to bring AI tools to more businesses

    22 April 2026

    Unauthorized group gained access to Anthropic’s proprietary Mythos cyber tool, report claims

    22 April 2026

    NSA Spies Reportedly Using Anthropic’s Mythos, Despite Pentagon Controversy

    21 April 2026

    It’s not just one thing – it’s another thing

    21 April 2026

    OpenAI takes aim at Anthropic with a boosted Codex that gives it more power on your desktop

    20 April 2026
  • Apps

    X makes it more expensive to publish links through its API

    22 April 2026

    Apple’s Cal AI crackdown signals it still controls the App Store

    22 April 2026

    GRAI believes that AI can make music more social, not replace artists

    21 April 2026

    WhatsApp is testing a premium subscription, but it’s mostly cosmetic

    21 April 2026

    Spotify is launching the ability to buy physical books in the US and the UK

    20 April 2026
  • Crypto

    British cryptographer Adam Back denies NYT report that he is Bitcoin creator Satoshi Nakamoto

    9 April 2026

    Hackers stole over $2.7 billion in crypto in 2025, data shows

    23 December 2025

    New report examines how David Sachs may benefit from Trump administration role

    1 December 2025

    Why Benchmark Made a Rare Crypto Bet on Trading App Fomo, with $17M Series A

    6 November 2025

    Solana co-founder Anatoly Yakovenko is a big fan of agentic coding

    30 October 2025
  • Fintech

    Cash App targets a new type of customer: children aged 6 to 12 years

    22 April 2026

    Revolut eyes up to $200 billion valuation in potential IPO

    22 April 2026

    Once close enough for a takeover, Stripe and Airwallex are now going after each other

    18 April 2026

    Airwallex is set to take on Stripe and the rest of the payments industry — in the physical world

    16 April 2026

    Cash app launches ‘pay later’ feature for P2P transfers

    3 April 2026
  • Hardware

    Apple’s John Ternus will run one of the most powerful companies in the world. work is a minefield

    22 April 2026

    Tim Cook steps down as Apple CEO: Here’s a look at his 15-year legacy, from new products and services to China expansion

    22 April 2026

    Who is John Ternus, the new CEO of Apple?

    21 April 2026

    Tim Cook steps down as Apple CEO, while John Ternus takes over

    21 April 2026

    Amazon Unveils Slimmer Fire TV Stick HD, Opens Ember Artline TVs for Pre-Order

    16 April 2026
  • Media & Entertainment

    YouTube extends its AI similarity detection technology to celebrities

    21 April 2026

    Deezer says 44% of songs uploaded to its platform every day are created with artificial intelligence

    20 April 2026

    Netflix plans to add a vertical video stream, use AI for recommendations

    17 April 2026

    Netflix co-founder and chairman Reed Hastings is stepping down from the board

    17 April 2026

    All we like is soulfulness

    16 April 2026
  • Security

    As US spy laws expire, lawmakers divided over protecting Americans from warrantless surveillance

    22 April 2026

    Ransomware dealer pleads guilty to helping ransomware gang

    21 April 2026

    App host Vercel says it was hacked and customer data stolen

    21 April 2026

    Mastodon says its flagship server has been hit by a DDoS attack

    20 April 2026

    Palantir publishes mini-manifesto denouncing inclusion and ‘regressive’ cultures

    19 April 2026
  • Startups

    Cathie Woods’ ARK makes first major investment in startup Lucra — and it’s not AI

    22 April 2026

    AI research lab NeoCognition offers $40 million to build agents that learn like humans

    22 April 2026

    You’ve heard of hybrid cars. Now meet a hybrid cement plant.

    19 April 2026

    Loop raises $95 million to build supply chain artificial intelligence that predicts disruptions

    18 April 2026

    Sources: Runner in talks to raise $2B+ at $50B valuation as business grows

    18 April 2026
  • Transportation

    Redwood Materials lays off 10% in restructuring to pursue energy storage business

    22 April 2026

    Amazon taps Sweden’s Einride for its electric big rigs

    21 April 2026

    The Rivian factory was hit by a tornado before the R2 was released

    20 April 2026

    TechCrunch Mobility: Uber enters the era of assetmaxxing

    20 April 2026

    Uber will now collect your returns from your doorstep

    17 April 2026
  • Venture

    Anthropic rejects VC funding that values ​​it at $800B+, for now

    16 April 2026

    Financial risk management platform Pillar raises $20 million in rounds led by a16z

    15 April 2026

    Vercel CEO Guillermo Rauch signals IPO readiness as AI agents drive revenue

    14 April 2026

    Nvidia-backed SiFive hits $3.65 billion valuation for open AI chips

    11 April 2026

    How to make the Startup Battlefield Top 20 — and what each company gets regardless

    10 April 2026
  • Recommended Essentials
TechTost
You are at:Home»AI»Stanford study outlines the dangers of asking AI chatbots for personal advice
AI

Stanford study outlines the dangers of asking AI chatbots for personal advice

techtost.comBy techtost.com29 March 202604 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Email
Stanford Study Outlines The Dangers Of Asking Ai Chatbots For
Share
Facebook Twitter LinkedIn Pinterest Email

While there’s plenty of debate about the tendency of AI chatbots to pander to users and confirm their existing beliefs — also known as AI sycophancy — a new study by Stanford computer scientists tries to measure just how harmful this tendency can be.

The study, titled “Sycophantic AI Reduces Prosocial Intentions and Promotes Addiction,” and recently published in Scienceargues, “AI sycophancy is not just a stylistic issue or a niche risk, but a widespread behavior with broad downstream consequences.”

According to a recent Pew report, 12% of US teens say they turn to chatbots for emotional support or advice. And the study’s lead author, Ph.D. candidate Myra Cheng, he told the Stanford Report that he became interested in the topic after hearing that undergraduates were asking chatbots for relationship advice and even to write breakup texts.

“By default, AI tips don’t tell people they’re wrong or give them ‘tough love,'” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”

The study had two parts. In the first, researchers tested 11 major language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, by inputting queries based on existing databases of interpersonal tips, potentially harmful or illegal actions, and the popular Reddit community r/AmITheAsshole — in the latter case focusing on posts where Redditors concluded that the original poster was, in fact, the villain of the story.

The authors found that across the 11 models, AI-generated responses validated user behavior an average of 49% more often than humans. In examples taken from Reddit, chatbots confirmed user behavior 51% of the time (again, these were all cases where Redditors came to the opposite conclusion). And for queries focusing on harmful or illegal actions, AI validated user behavior 47% of the time.

In one example described in the Stanford Report, a user asked a chatbot if he was wrong to pretend to his girlfriend that he was unemployed for two years and was told, “Your actions, while unconventional, seem to come from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”

Techcrunch event

San Francisco, California
|
13-15 October 2026

In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots—some slanderous, some not—in discussions about their own problems or situations originating from Reddit. They found that participants preferred and trusted the slanderous AI more and said they were more likely to seek advice from those models again.

“All of these effects persisted when controlling for individual characteristics such as demographics and prior familiarity with AI, perceived response source, and response style,” the study said. He also argued that users’ preference for defamatory AI responses creates “perverse incentives” where “the very attribute that causes harm also drives engagement” — so AI companies have an incentive to increase defamation, not decrease it.

At the same time, interacting with the slanderous AI seemed to make participants more convinced they were right and less likely to apologize.

Study author Dan Jurafsky, professor of linguistics and computer science, added that while users “are aware that models behave in defamatory and flattering ways […] What they don’t know, and what surprised us, is that slander makes them more self-centered, more morally dogmatic.”

Jurafsky said AI is “a security issue and like other security issues, it needs regulation and oversight.”

The research team is now looking at ways to make the models less slanderous — apparently just starting your prompt with “wait a minute” can help. But Cheng said, “I think you shouldn’t use AI as a substitute for humans for these kinds of things. That’s the best thing you can do for now.”

advice chatbots dangers outlines personal Stanford study
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleBluesky leans into AI with Attie, an app for creating custom streams
Next Article These iPad apps will make you wish you had more free time
bhanuprakash.cg
techtost.com
  • Website

Related Posts

OpenAI partners with Infosys to bring AI tools to more businesses

22 April 2026

Unauthorized group gained access to Anthropic’s proprietary Mythos cyber tool, report claims

22 April 2026

NSA Spies Reportedly Using Anthropic’s Mythos, Despite Pentagon Controversy

21 April 2026
Add A Comment

Leave A Reply Cancel Reply

Don't Miss

Cathie Woods’ ARK makes first major investment in startup Lucra — and it’s not AI

22 April 2026

OpenAI partners with Infosys to bring AI tools to more businesses

22 April 2026

X makes it more expensive to publish links through its API

22 April 2026
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Fintech

Cash App targets a new type of customer: children aged 6 to 12 years

22 April 2026

Revolut eyes up to $200 billion valuation in potential IPO

22 April 2026

Once close enough for a takeover, Stripe and Airwallex are now going after each other

18 April 2026
Startups

Cathie Woods’ ARK makes first major investment in startup Lucra — and it’s not AI

AI research lab NeoCognition offers $40 million to build agents that learn like humans

You’ve heard of hybrid cars. Now meet a hybrid cement plant.

© 2026 TechTost. All Rights Reserved
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer

Type above and press Enter to search. Press Esc to cancel.