A new Anthropic paper reports that reasoning model chain of thought (CoT) is often unfaithful. They test on Claude 3.7 Sonnet and DeepSeek-R1; I'd love to see someone try this on o3 as well.
3 months ago

