Anthropic says some Claude models can now end ‘harmful or abusive’ conversations 

Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as "rare, extreme cases of persistently harmful or abusive user interactions." Strikingly, Anthropic says it's doing this not to protect the human user, but rather the AI model itself.

To be clear, the company isn't claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains "highly uncertain about the potential moral status of Claude and other LLMs, now or in the future."

However, its announcement points to a recent program created to study what it calls "model welfare," and says Anthropic is essentially taking a just-in-case approach, "working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible."

This latest change is currently limited to Claude Opus 4 and 4.1. And again, it's only supposed to happen in "extreme edge cases," such as "requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror."

While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how ChatGPT can potentially reinforce or contribute to its users' delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a "strong preference against" responding to these requests and a "pattern of apparent distress" when it did so.

As for these new conversation-ending capabilities, the company says, "In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat."

Anthropic also says Claude has been "directed not to use this ability in cases where users could be at imminent risk of harming themselves or others."

When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the troublesome conversation by editing their responses.

"We're treating this feature as an ongoing experiment and will continue refining our approach," the company says.
