Toronto recently used an AI tool to predict when its public beaches would be safe for swimming. It went horribly awry. The developer claimed the tool achieved over 90% accuracy in predicting when beaches would be safe to swim in. But the tool did much worse: on a majority of the days when the water was in fact unsafe, beaches remained open based on the tool’s assessments. It was less accurate than the previous method of simply testing the water for bacteria each day.
over a year ago
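The gap between "over 90% accuracy" and missing most unsafe days is a classic class-imbalance pitfall: when unsafe days are rare, a model can score high overall accuracy while failing on exactly the days that matter. A minimal sketch with hypothetical numbers (not the actual Toronto data) shows how the two metrics can diverge:

```python
# Hypothetical season: 100 beach days, only 10 genuinely unsafe.
actual_unsafe = [True] * 10 + [False] * 90
# A model that catches just 3 of the 10 unsafe days but labels
# every safe day correctly.
predicted_unsafe = [True] * 3 + [False] * 7 + [False] * 90

# Overall accuracy: fraction of all days classified correctly.
correct = sum(a == p for a, p in zip(actual_unsafe, predicted_unsafe))
accuracy = correct / len(actual_unsafe)

# Recall on unsafe days: fraction of truly unsafe days the model caught.
true_positives = sum(a and p for a, p in zip(actual_unsafe, predicted_unsafe))
recall = true_positives / sum(actual_unsafe)

print(f"accuracy: {accuracy:.0%}")              # 93%
print(f"recall on unsafe days: {recall:.0%}")   # 30%
```

Under these assumed numbers, the model misses 7 of 10 unsafe days yet still reports 93% accuracy, which is why daily bacterial testing can beat a "90% accurate" predictor.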


More from AI Snake Oil

Is AI progress slowing down?

Making sense of recent technology trends and claims

3 months ago 67 votes
We Looked at 78 Election Deepfakes. Political Misinformation is not an AI Problem.

Technology Isn’t the Problem—or the Solution.

3 months ago 68 votes
Does the UK’s liver transplant matching algorithm systematically exclude younger patients?

Seemingly minor technical decisions can have life-or-death effects

4 months ago 57 votes
FAQ about the book and our writing process

What's in the book and how we wrote it

6 months ago 82 votes
Can AI automate computational reproducibility?

A new benchmark to measure the impact of AI on improving science

6 months ago 101 votes

More in AI

Making Quantum Flytrap a polyglot with AI vibe translating

Making quantum physics more accessible with the power of Claude, DeepSeek, Cursor, and i18n. Virtual Lab now speaks Spanish, Portuguese, Chinese, Polish, Ukrainian, French, and German.

15 hours ago 3 votes
More Fun With GPT-4o Image Generation

Greetings from Costa Rica!

13 hours ago 2 votes
AI #110: Of Course You Know...

Yeah.

2 hours ago 1 vote