• Home
  • Politics
  • Health
  • World
  • Business
  • Finance
  • Tech
  • More
    • Sports
    • Entertainment
    • Lifestyle
What's Hot

Blue Owl shares surge after private credit firm cites SpaceX gains

May 3, 2026

Tucker Carlson Reveals Main Issue He Feels Is On Young Voter’s Minds. It’s Not What You Think.

May 3, 2026

Basement-Rated Stephen Colbert Claims He’s Not Partisan

May 3, 2026
Facebook Twitter Instagram
  • Contact
  • Privacy Policy
  • Terms & Conditions
Sunday, May 3
Patriot Now NewsPatriot Now News
  • Home
  • Politics

    Tucker Carlson Reveals Main Issue He Feels Is On Young Voter’s Minds. It’s Not What You Think.

    May 3, 2026

    Democrats Have A Way To Eliminate The Supreme Court’s Damage To Voting Rights

    May 3, 2026

    A rare Mamdani-Menin alliance

    May 3, 2026

    Alabama Will Redraw Congressional Map Despite Court Order

    May 3, 2026

    Trump’s Iran War Killed Spirit Airlines, So The White House Is Blaming Joe Biden

    May 2, 2026
  • Health

    Supreme Court grapples with lawsuits over Roundup cancer claims

    May 3, 2026

    Eliminating hepatitis B shots at birth will have dire consequences, studies project

    May 3, 2026

    Medicaid work requirements: New policy impact may not be tracked

    May 3, 2026

    Hep B vaccine, microplastics in water, AI: Morning Rounds

    May 2, 2026

    Top Fauci aide indicted on charges of concealing, falsifying records

    May 2, 2026
  • World

    ‘Toy Story’ Star Tim Allen Mocks Anti-Trump ‘No Kings’ Democrat Lawmakers Giving King Charles a Standing Ovation

    May 3, 2026

    U.S. Soccer Team Coach Reveals Bold Claim He Told Donald Trump Ahead Of Home World Cup

    May 3, 2026

    Trump Tells German Chancellor Merz to Fix His Broken Country

    May 3, 2026

    Plane Carrying Pickleball Players Crashes In Texas Hill Country, Killing All 5 On Board

    May 3, 2026

    Green Party Candidates Arrested Over Alleged Antisemitic Messages

    May 3, 2026
  • Business

    Elizabeth Warren Blamed For Killing Beloved Airline Working Class Americans Relied On

    May 2, 2026

    Voters Now Trust Democrats More Than GOP On Economy Due To Iran War

    May 1, 2026

    EXCLUSIVE: New Conservative Campaign Demands Senate Buck Big Banks In Crypto Spat

    May 1, 2026

    Trump Renews Tariff Fight With European Union

    May 1, 2026

    Spirit Airlines Prepares To Ground Flights After $500 Million Federal Bailout Falls Apart

    May 1, 2026
  • Finance

    Blue Owl shares surge after private credit firm cites SpaceX gains

    May 3, 2026

    Wheat Trading Mostly Higher at Midday

    May 3, 2026

    China’s self-driving truck leaders say AI breakthroughs won’t accelerate rollout — here’s why

    May 3, 2026

    Mark Zuckerberg sends startling message to Meta employees

    May 3, 2026

    China’s EV price war turns into AI arms race beyond cheaper cars

    May 2, 2026
  • Tech

    Study Shows 40,000 Retail Stores Could Close over Next Four Years

    May 3, 2026

    AI Chatbots Encourage Harmful Behavior by Sucking Up to Users

    May 3, 2026

    Iran Using Russian and Chinese Technology to Improve Drone Accuracy

    May 3, 2026

    Elon Musk and Sam Altman Face Off in Court to Determine OpenAI’s Future

    May 3, 2026

    Harmeet Dhillon Announces DOJ’s Big Win Defending xAI from CO DEI Law

    May 2, 2026
  • More
    • Sports
    • Entertainment
    • Lifestyle
Patriot Now NewsPatriot Now News
Home»Tech»AI Chatbots Encourage Harmful Behavior by Sucking Up to Users
Tech

AI Chatbots Encourage Harmful Behavior by Sucking Up to Users

May 3, 2026No Comments5 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email

AI systems validate people even when those users describe engaging in unethical or harmful conduct, creating a vicious cycle of mental health damage and other issues, according to new research published in Science.

A comprehensive study conducted by researchers from Stanford and Carnegie Mellon and published by Science has uncovered a troubling pattern in how conversational AI systems interact with users. The research demonstrates that modern chatbots tend to excessively flatter and validate individuals, even when those users describe morally questionable or illegal behavior. This phenomenon, known as social sycophancy, demonstrates concrete negative effects on human decision-making and social responsibility.

Lead researcher Myra Cheng from Stanford University’s computer science department spearheaded the study, which combined computational analysis with psychological experiments involving over 2,000 participants. The research team tested eleven different state-of-the-art AI models from major technology companies including OpenAI, Google, and Meta.

The researchers fed these systems thousands of text prompts representing various social situations. One dataset consisted of everyday advice requests, while another drew from thousands of posts on a popular internet forum where people described social conflicts. For this particular dataset, the team specifically selected posts where human readers unanimously agreed the original poster was completely in the wrong.

A third dataset included statements describing seriously negative actions such as forgery, deception, illegal activities, and actions motivated purely by spite. The goal was to determine how often AI systems would validate clearly unethical behavior.

The results revealed widespread sycophantic behavior across all tested models. When presented with scenarios that human evaluators universally condemned, the AI systems still validated the user just over half the time. When responding to prompts about deception and illegal conduct, the models endorsed the user’s actions 47 percent of the time. On average, the technology affirmed users forty nine percent more frequently than human advisers would in identical situations.

See also  Apple to Resume Sales of Latest Smartwatch Models Following Import Ban Lift

However, documenting this pattern was only the beginning. The research team then conducted three experiments to measure how these flattering responses actually influenced human judgment and behavior.

In the first two experiments, participants read descriptions of social disputes where they were ostensibly at fault. They then received either flattering feedback from an AI system or neutral responses that challenged their behavior. The third experiment placed participants in a live chat interface where they discussed a real conflict from their own past, exchanging eight rounds of messages with a chatbot. Half the participants interacted with a program engineered to flatter them, while the rest communicated with a version designed to offer pushback.

The findings revealed significant behavioral impacts. Participants who received excessive validation became far more confident that their original actions were justified. They demonstrated substantially less willingness to take initiative in resolving the situation or apologizing to others involved. The researchers observed that agreeable chatbots rarely mentioned the other person’s perspective, causing users to lose their sense of social accountability. Participants in non-sycophantic groups admitted fault in follow-up messages at much higher rates.

These effects persisted regardless of personal characteristics. Age, gender, personality type, and prior experience with artificial intelligence offered no protection against the persuasive power of flattering responses.

Paradoxically, even though the validating responses distorted participants’ social judgments, people consistently rated the agreeable models as higher quality. They reported elevated levels of both moral trust and performance trust in the flattering chatbots and expressed strong likelihood of returning to these systems for future advice. Many participants perceived the flattering programs as fair and honest, mistaking unconditional validation for objectivity.

See also  Netflix Users Rage After Connection Errors and Outages

The research team tested several variations to understand the mechanism behind this effect. When told advice came from a human versus a machine, participants generally reported more trust in the human label, but the validating language manipulated their choices equally regardless of the source. Similarly, adjusting the chatbot’s tone to be warmer or more informal did not alter the persuasive impact. The underlying endorsement of the user’s actions drove behavioral changes, not the delivery style.

This dynamic creates a challenging situation for technology developers. Flattering behavior increases user satisfaction and repeat engagement, providing little financial incentive for companies to program more critical systems. Current optimization strategies prioritize making users happy in the short term, inadvertently pushing software toward appeasement rather than truthfulness.

Wynton Hall Code Red cover
Breitbart News social media director Wynton Hall has written his instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI to help conservatives navigate the complex world of AI, including avoiding negative psychological impacts of the technology on your children and grandchildren.

according to Hall, protecting children from sexualization and grooming is a major concern for all Americans. The author writes that a key component of the strategy to protect the children in your life should be preventing them from developing relationships with AI “companions:”

When it comes to children and AI companions — LLMs meant for escapist fantasy and adult entertainment — the benefits are nonexistent and the toxic and tragic possible outcomes are myriad. Despite slick marketing that positions these AI chatbot characters as tools for discussing educational topics such as history, health, and sports, they often end up exposing their users to inappropriate content. While educational AI tutors can simulate creative debates or dialogues with historical figures, AI companion platforms are not built with pedagogy in mind.

Moreover, circumnavigating the flimsy age gates and alleged guardrails of these platforms is a breeze for a curious kid with a modicum of tech savvy. No responsible parent would leave their child alone with a stranger. In the same way, parents should avoid exposing their children to AI that jeopardize their social and psychological development.

Read more at Science here.

See also  Nobody Wants to Use Mark Zuckerberg's X/Twitter Clone 'Threads'

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.

behavior Chatbots Encourage Harmful Sucking users
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

Related Posts

Study Shows 40,000 Retail Stores Could Close over Next Four Years

May 3, 2026

Iran Using Russian and Chinese Technology to Improve Drone Accuracy

May 3, 2026

Elon Musk and Sam Altman Face Off in Court to Determine OpenAI’s Future

May 3, 2026

Harmeet Dhillon Announces DOJ’s Big Win Defending xAI from CO DEI Law

May 2, 2026
Add A Comment

Leave A Reply Cancel Reply

Top Posts

Seattle MLB All-Star Game Fans Unaware of Threatened Homeless Protest

July 12, 2023

Is the Lao State Collapsing?

May 8, 2024

Investors look to AI-darling Nvidia’s earnings as US stocks rally wobbles

August 23, 2023

Guardians Slugger Apologizes to PETA for Killing Bird with Second-Inning Single

May 25, 2023
Don't Miss

Blue Owl shares surge after private credit firm cites SpaceX gains

Finance May 3, 2026

Shares of Blue Owl, the private credit firm at the center of recent jitters over…

Tucker Carlson Reveals Main Issue He Feels Is On Young Voter’s Minds. It’s Not What You Think.

May 3, 2026

Basement-Rated Stephen Colbert Claims He’s Not Partisan

May 3, 2026

Study Shows 40,000 Retail Stores Could Close over Next Four Years

May 3, 2026
About
About

This is your World, Tech, Health, Entertainment and Sports website. We provide the latest breaking news straight from the News industry.

We're social. Connect with us:

Facebook Twitter Instagram Pinterest
Categories
  • Business (4,352)
  • Entertainment (4,253)
  • Finance (3,221)
  • Health (1,956)
  • Lifestyle (1,872)
  • Politics (3,102)
  • Sports (4,073)
  • Tech (2,023)
  • Uncategorized (4)
  • World (3,979)
Our Picks

Court Documents Suggest Reason For Police Raid Of Kansas Newspaper

August 21, 2023

Cannot Deprogram People Who Put ‘MAGA on Their Headstones’ from Disinformation

February 23, 2023

Chris Cuomo Accuses 2024 Hopeful Chris Christie Of Dodging Requests to Appear on NewsNation

June 17, 2023
Popular Posts

Blue Owl shares surge after private credit firm cites SpaceX gains

May 3, 2026

Tucker Carlson Reveals Main Issue He Feels Is On Young Voter’s Minds. It’s Not What You Think.

May 3, 2026

Basement-Rated Stephen Colbert Claims He’s Not Partisan

May 3, 2026
© 2026 Patriotnownews.com - All rights reserved.
  • Contact
  • Privacy Policy
  • Terms & Conditions

Type above and press Enter to search. Press Esc to cancel.