
AI Snake Oil

Is AI progress slowing down? Making sense of recent technology trends and claims
2 months ago
We Looked at 78 Election Deepfakes. Political Misinformation Is Not an AI Problem. Technology Isn’t the Problem—or the Solution.
2 months ago
Does the UK’s liver transplant matching algorithm systematically exclude younger patients? Seemingly minor technical decisions can have life-or-death effects
3 months ago
FAQ about the book and our writing process What's in the book and how we wrote it
4 months ago
Can AI automate computational reproducibility? A new benchmark to measure the impact of AI on improving science
5 months ago
Start reading the AI Snake Oil book online today The book will be published on September 24
5 months ago
AI companies are pivoting from creating gods to building products. Good. Turning models into products runs into five challenges
6 months ago
AI existential risk probabilities are too unreliable to inform policy How speculation gets laundered through pseudo-quantification
7 months ago
New paper: AI agents that matter Rethinking AI agent benchmarking and evaluation
7 months ago
AI scaling myths Scaling will run out. The question is when.
8 months ago
Scientists should use AI as a tool, not an oracle How AI hype leads to flawed research that fuels more hype
8 months ago
AI leaderboards are no longer useful. It's time to switch to Pareto curves. What spending $2,000 can tell us about evaluating AI agents
9 months ago
AI Snake Oil is now available to preorder What artificial intelligence can do, what it can't, and how to tell the difference
10 months ago
Tech policy is only frustrating 90% of the time That’s what makes it worthwhile
10 months ago
AI safety is not a model property Trying to make an AI model that can’t be misused is like trying to make a computer that can’t be used for bad things
11 months ago
A safe harbor for AI evaluation and red teaming An argument for legal and technical safe harbors for AI safety and trustworthiness research
11 months ago
On the Societal Impact of Open Foundation Models Adding precision to the debate on openness in AI
12 months ago
Will AI transform law? The hype is not supported by current evidence
a year ago
Generative AI’s end-run around copyright won’t be resolved by the courts Output similarity is a distraction
a year ago
Are open foundation models actually more risky than closed ones? A policy brief on open foundation models
a year ago
Model alignment protects against accidental harms, not intentional ones The hand wringing about failures of model alignment is misguided
a year ago
What the executive order means for openness in AI Good news on paper, but the devil is in the details
a year ago
How Transparent Are Foundation Model Developers? Introducing the Foundation Model Transparency Index
a year ago
Evaluating LLMs is a minefield Annotated slides from a recent talk
a year ago
Is the future of AI open or closed? Watch today’s Princeton-Stanford workshop
a year ago
One year update: book submitted; TIME 100; Sep 21 online workshop It's been an eventful year
a year ago
Does ChatGPT have a liberal bias? A new paper making this claim has many flaws. But the question merits research
a year ago
Introducing the REFORMS checklist for ML-based science ML-based science is in trouble. Clear reporting standards for researchers could help.
a year ago
ML is useful for many things, but not for predicting scientific replicability How the veneer of AI is used to legitimize awful ideas
a year ago
Is GPT-4 getting worse over time? A new paper going viral has been widely misinterpreted
a year ago
Generative AI companies must publish transparency reports The debate about the harms of AI is happening in a data vacuum
a year ago
Three Ideas for Regulating Generative AI Policy input to the federal government from a Stanford-Princeton team
a year ago
Is AI-generated disinformation a threat to democracy? An essay on the future of generative AI on social media
a year ago
Licensing is neither feasible nor effective for addressing AI risks Non-proliferation only benefits incumbents
a year ago
Is Avoiding Extinction from AI Really an Urgent Priority? The history of technology suggests that the greatest risks come not from the tech, but from the...
a year ago
Quantifying ChatGPT’s gender bias Benchmarks allow us to dig deeper into what causes biases and what can be done about it
a year ago
I set up a ChatGPT voice interface for my 3-year old. Here’s how it went. Chatbots are likely to revive familiar debates about kids and apps
a year ago
A misleading open letter about sci-fi AI dangers ignores the real risks Misinformation, labor impact, and safety are all risks. But not in the way the letter implies.
a year ago
OpenAI’s policies hinder reproducible research on language models LLMs have become privately-controlled research infrastructure
a year ago
GPT-4 and professional benchmarks: the wrong answer to the wrong question OpenAI may have tested on the training data. Besides, human benchmarks are meaningless for bots.
a year ago
What is algorithmic amplification and why should we care? A symposium and a primer on social media recommendation algorithms
a year ago
Artists can now opt out of generative AI. It’s not enough. Opting out is the latest example of generative AI developers externalizing costs.
a year ago
The LLaMA is out of the bag. Should we expect a tidal wave of disinformation? The bottleneck isn't the cost of producing disinfo, which is already very low.
a year ago
AI cannot predict the future. But companies keep trying (and failing). A new paper on how AI companies make false promises and how we can challenge them
a year ago
People keep anthropomorphizing AI. Here’s why Companies and journalists both contribute to the confusion
over a year ago
Four more things we worked on in 2022 We had a busy 2022. Here are a few things we worked on but didn’t cover here.
over a year ago
ChatGPT is a bullshit generator. But it can still be amazingly useful The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth.
over a year ago
The bait and switch behind AI risk prediction tools Toronto recently used an AI tool to predict when a public beach will be safe. It went horribly awry.
over a year ago
Students are acing their homework by turning in machine-generated essays. Good. Teachers adapted to the calculator. They can certainly adapt to language models.
over a year ago
Eighteen pitfalls to beware of in AI journalism A checklist for avoiding hype
over a year ago