A new Anthropic paper reports that reasoning models' chain of thought (CoT) is often unfaithful. They test on Claude Sonnet 3.7 and r1; I'd love to see someone try this on o3 as well.

