

More from Don't Worry About the Vase

AI CoT Reasoning Is Often Unfaithful

A new Anthropic paper reports that reasoning model chain of thought (CoT) is often unfaithful. They test Claude Sonnet 3.7 and r1; I’d love to see someone try this on o3 as well.

an hour ago 1 votes
AI #110: Of Course You Know...

Yeah.

yesterday 1 votes
More Fun With GPT-4o Image Generation

Greetings from Costa Rica!

yesterday 2 votes
Housing Roundup #11

The book of March 2025 was Abundance. Ezra Klein and Derek Thompson are making a noble attempt to highlight the importance of solving America’s housing crisis the only way it can be solved: Building houses in places people want to live, via repealing the rules that make this impossible. They also talk about green energy abundance, and other places besides. There may be a review coming.

3 days ago 3 votes
OpenAI #12: Battle of the Board Redux

Back when the OpenAI board attempted and failed to fire Sam Altman, we faced a highly hostile information environment.

4 days ago 4 votes

More in AI

AI Roundup 112: OpenAI might be open again

April 4, 2025.

an hour ago 2 votes
Did an LLM help write Trump’s trade plan?

Probably yes

22 hours ago 1 votes
ML for SWEs 5: AI for Education is Bigger Than You Think

Machine learning for software engineers 4-4-25

an hour ago 1 votes