A new Anthropic paper reports that reasoning-model chains of thought (CoT) are often unfaithful. They test on Claude 3.7 Sonnet and DeepSeek-R1; I'd love to see someone try this on o3 as well.
3 months ago


More from Don't Worry About the Vase

Cheaters Gonna Cheat Cheat Cheat Cheat Cheat

Cheaters.

2 months ago 27 votes
AI #115: The Evil Applications Division

It can be bleak out there, but the candor is very helpful, and you occasionally get a win.

2 months ago 26 votes
OpenAI Claims Nonprofit Will Retain Nominal Control

Your voice has been heard.

2 months ago 25 votes
Zuckerberg's Dystopian AI Vision

You think it’s bad now?

2 months ago 23 votes
GPT-4o Sycophancy Post Mortem

Last week I covered that GPT-4o was briefly an (even more than usually) absurd sycophant, and how OpenAI responded to that.

2 months ago 26 votes

More in AI

ML for SWEs #60: The skills software engineers should focus on to get involved with AI

Curated SWE AI Content, Jobs, and Resources 7-22-2025

yesterday 4 votes
AI Roundup 127: ChatGPT Agent

July 18, 2025.

5 days ago 8 votes
Could AI slow science?

Confronting the production-progress paradox

a week ago 15 votes
The Slow Apocalypse: When will we run out of kids?

More than you wanted to know about the fertility crisis

a week ago 16 votes
Transformers Ain't It

Curated SWE AI Content, Jobs, and Resources 7-15-2025

a week ago 11 votes