More from Sam Altman
Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity. Systems that start to point to AGI* are coming into view, and so we think it’s important to understand the moment we are in. AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.

People are tool-builders with an inherent drive to understand and create, which leads to the world getting better for all of us. Each new generation builds upon the discoveries of the generations before to create even more capable tools—electricity, the transistor, the computer, the internet, and soon AGI.

Over time, in fits and starts, the steady march of human innovation has brought previously unimaginable levels of prosperity and improvements to almost every aspect of people’s lives. In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together. In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential. In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today.

We continue to see rapid progress with AI development. Here are three observations about the economics of AI:

1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.

2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

If these three observations continue to hold true, the impacts on society will be significant.
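For a rough sense of how those rates compare, here is a small back-of-envelope sketch in Python. The 10x-per-12-months, 2x-per-18-months, and ~150x figures come from the observations above; the roughly 18-month window from GPT-4 to GPT-4o is an approximation, not an exact figure.

```python
import math

def annualized(fold: float, months: float) -> float:
    """Equivalent cost-reduction factor per 12 months, given `fold`x every `months` months."""
    return fold ** (12.0 / months)

ai = annualized(10, 12)           # observation 2: ~10x cheaper every 12 months
moore = annualized(2, 18)         # Moore's law: 2x every 18 months (~1.59x/year)
gpt4_to_4o = annualized(150, 18)  # the quoted ~150x drop over roughly 18 months

print(f"AI cost decline:      {ai:.2f}x per year")
print(f"Moore's law:          {moore:.2f}x per year")
print(f"GPT-4 -> GPT-4o pace: {gpt4_to_4o:.1f}x per year (steeper than the average trend)")

# In doubling terms: about log2(10) ~ 3.3 cost halvings per year versus ~0.67
# for Moore's law, i.e. roughly five times the pace on a log scale.
print(f"Cost halvings per year: AI {math.log2(ai):.2f} vs Moore {math.log2(moore):.2f}")
```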
We are now starting to roll out AI agents, which will eventually feel like virtual co-workers. Let’s imagine the case of a software engineering agent, which is an agent that we expect to be particularly important. Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long. It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others. Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

In some ways, AI may turn out to be like the transistor economically—a big scientific discovery that scales well and that seeps into almost every corner of the economy. We don’t think much about transistors, or transistor companies, and the gains are very widely distributed. But we do expect our computers, TVs, cars, toys, and more to perform miracles.

The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc. But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.

Agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value; resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness, and enable individual people to have more impact than ever before, not less.

We expect the impact of AGI to be uneven. Although some industries will change very little, scientific progress will likely be much faster than it is today; this impact of AGI may surpass everything else. The price of many goods will eventually fall dramatically (right now, the cost of intelligence and the cost of energy constrain a lot of things), and the price of luxury goods and a few inherently limited resources like land may rise even more dramatically.

Technically speaking, the road in front of us looks fairly clear. But public policy and collective opinion on how we should integrate AGI into society matter a lot; one of our reasons for launching products early and often is to give society and the technology time to co-evolve. AI will seep into all areas of the economy and society; we will expect everything to be smart. Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more, and accept that there is a balance between safety and individual empowerment that will require trade-offs.

While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.

Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas. In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine.
There is a great deal of talent right now without the resources to fully express itself, and if we change that, the resulting creative output of the world will lead to tremendous benefits for us all.

Thanks especially to Josh Achiam, Boaz Barak and Aleksander Madry for reviewing drafts of this.

*By using the term AGI here, we aim to communicate clearly, and we do not intend to alter or interpret the definitions and processes that define our relationship with Microsoft. We fully expect to be partnered with Microsoft for the long term. This footnote seems silly, but on the other hand we know some journalists will try to get clicks by writing something silly so here we are pre-empting the silliness…
The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning. New years get people in a reflective mood, and I wanted to share some personal thoughts about how it has gone so far, and some of the things I’ve learned along the way.

As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.

We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history. We wanted to figure out how to build it and make it broadly beneficial; we were excited to try to make our mark on history. Our ambitions were extraordinarily high and so was our belief that the work might benefit society in an equally extraordinary way. At the time, very few people cared, and if they did, it was mostly because they thought we had no chance of success.

In 2022, OpenAI was a quiet research lab working on something temporarily called “Chat With GPT-3.5”. (We are much better at research than we are at naming things.) We had been watching people use the playground feature of our API and knew that developers were really enjoying talking to the model. We thought building a demo around that experience would show people something important about the future and help us make our models better and safer. We ended up mercifully calling it ChatGPT instead, and launched it on November 30th of 2022.

We always knew, abstractly, that at some point we would hit a tipping point and the AI revolution would get kicked off. But we didn’t know what the moment would be. To our surprise, it turned out to be this. The launch of ChatGPT kicked off a growth curve like nothing we have ever seen—in our company, our industry, and the world broadly. We are finally seeing some of the massive upside we have always hoped for from AI, and we can see how much more will come soon.

It hasn’t been easy. The road hasn’t been smooth and the right choices haven’t been obvious. In the last two years, we had to build an entire company, almost from scratch, around this new technology. There is no way to train people for this except by doing it, and when the technology category is completely new, there is no one at all who can tell you exactly how it should be done.

Building up a company at such high velocity with so little training is a messy process. It’s often two steps forward, one step back (and sometimes, one step forward and two steps back). Mistakes get corrected as you go along, but there aren’t really any handbooks or guideposts when you’re doing original work. Moving at speed in uncharted waters is an incredible experience, but it is also immensely stressful for all the players. Conflicts and misunderstanding abound.

These years have been the most rewarding, fun, best, interesting, exhausting, stressful, and—particularly for the last two—unpleasant years of my life so far. The overwhelming feeling is gratitude; I know that someday I’ll be retired at our ranch watching the plants grow, a little bored, and will think back at how cool it was that I got to do the work I dreamed of since I was a little kid. I try to remember that on any given Friday, when seven things go badly wrong by 1 pm.
A little over a year ago, on one particular Friday, the main thing that had gone wrong that day was that I got fired by surprise on a video call, and then right after we hung up the board published a blog post about it. I was in a hotel room in Las Vegas. It felt, to a degree that is almost impossible to explain, like a dream gone wrong.

Getting fired in public with no warning kicked off a really crazy few hours, and a pretty crazy few days. The “fog of war” was the strangest part. None of us were able to get satisfactory answers about what had happened, or why.

The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included. Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago.

I also learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges. Good governance requires a lot of trust and credibility. I appreciate the way so many people worked together to build a stronger system of governance for OpenAI that enables us to pursue our mission of ensuring that AGI benefits all of humanity.

My biggest takeaway is how much I have to be thankful for and how many people I owe gratitude towards: to everyone who works at OpenAI and has chosen to spend their time and effort going after this dream, to friends who helped us get through the crisis moments, to our partners and customers who supported us and entrusted us to enable their success, and to the people in my life who showed me how much they cared. [1]

We all got back to the work in a more cohesive and positive way and I’m very proud of our focus since then. We have done what is easily some of our best research ever. We grew from about 100 million weekly active users to more than 300 million. Most of all, we have continued to put technology out into the world that people genuinely seem to love and that solves real problems.

Nine years ago, we really had no idea what we were eventually going to become; even now, we only sort of know. AI development has taken many twists and turns and we expect more in the future.

Some of the twists have been joyful; some have been hard. It’s been fun watching a steady stream of research miracles occur, and a lot of naysayers have become true believers. We’ve also seen some colleagues split off and become competitors. Teams tend to turn over as they scale, and OpenAI scales really fast. I think some of this is unavoidable—startups usually see a lot of turnover at each new major level of scale, and at OpenAI numbers go up by orders of magnitude every few months. The last two years have been like a decade at a normal company. When any company grows and evolves so fast, interests naturally diverge. And when any company in an important industry is in the lead, lots of people attack it for all sorts of reasons, especially when they are trying to compete with it.

Our vision won’t change; our tactics will continue to evolve. For example, when we started we had no idea we would have to build a product company; we thought we were just going to do great research. We also had no idea we would need such a crazy amount of capital. There are new things we have to go build now that we didn’t understand a few years ago, and there will be new things in the future we can barely imagine now.
We are proud of our track-record on research and deployment so far, and are committed to continuing to advance our thinking on safety and benefits sharing. We continue to believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer. We believe in the importance of being world leaders on safety and alignment research, and in guiding that research with feedback from real world applications.

We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company. How lucky and humbling it is to be able to play a role in this work.

(Thanks to Josh Tyrangiel for sort of prompting this. I wish we had had a lot more time.)

[1] There were a lot of people who did incredible and gigantic amounts of work to help OpenAI, and me personally, during those few days, but two people stood out from all others. Ron Conway and Brian Chesky went so far above and beyond the call of duty that I’m not even sure how to describe it. I’ve of course heard stories about Ron’s ability and tenaciousness for years and I’ve spent a lot of time with Brian over the past couple of years getting a huge amount of help and advice. But there’s nothing quite like being in the foxhole with people to see what they can really do. I am reasonably confident OpenAI would have fallen apart without their help; they worked around the clock for days until things were done.

Although they worked unbelievably hard, they stayed calm and had clear strategic thought and great advice throughout. They stopped me from making several mistakes and made none themselves. They used their vast networks for everything needed and were able to navigate many complex situations. And I’m sure they did a lot of things I don’t know about. What I will remember most, though, is their care, compassion, and support. I thought I knew what it looked like to support a founder and a company, and in some small sense I did. But I have never before seen, or even heard of, anything like what these guys did, and now I get more fully why they have the legendary status they do. They are different and both fully deserve their genuinely unique reputations, but they are similar in their remarkable ability to move mountains and help, and in their unwavering commitment in times of need.
The tech industry is far better off for having both of them in it. There are others like them; it is an amazingly special thing about our industry and does much more to make it all work than people realize. I look forward to paying it forward. On a more personal note, thanks especially to Ollie for his support that weekend and always; he is incredible in every way and no one could ask for a better partner.
There are two things from our announcement today I wanted to highlight.

First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that. Our initial conception when we started OpenAI was that we’d create AI and use it to create all sorts of benefits for the world. Instead, it now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from. We are a business and will find plenty of things to charge for, and that will help us provide free, outstanding AI service to (hopefully) billions of people.

Second, the new voice (and video) mode is the best computer interface I’ve ever used. It feels like AI from the movies; and it’s still a bit surprising to me that it’s real. Getting to human-level response times and expressiveness turns out to be a big change. The original ChatGPT showed a hint of what was possible with language interfaces; this new thing feels viscerally different. It is fast, smart, fun, natural, and helpful. Talking to a computer has never felt really natural for me; now it does. As we add (optional) personalization, access to your information, the ability to take actions on your behalf, and more, I can really see an exciting future where we are able to use computers to do much more than ever before.

Finally, huge thanks to the team that poured so much work into making this happen!
Optimism, obsession, self-belief, raw horsepower and personal connections are how things get started.

Cohesive teams, the right combination of calmness and urgency, and unreasonable commitment are how things get finished. Long-term orientation is in short supply; try not to worry about what people think in the short term, which will get easier over time.

It is easier for a team to do a hard thing that really matters than to do an easy thing that doesn’t really matter; audacious ideas motivate people.

Incentives are superpowers; set them carefully.

Concentrate your resources on a small number of high-conviction bets; this is easy to say but evidently hard to do. You can delete more stuff than you think.

Communicate clearly and concisely.

Fight bullshit and bureaucracy every time you see it and get other people to fight it too. Do not let the org chart get in the way of people working productively together.

Outcomes are what count; don’t let good process excuse bad results.

Spend more time recruiting. Take risks on high-potential people with a fast rate of improvement. Look for evidence of getting stuff done in addition to intelligence.

Superstars are even more valuable than they seem, but you have to evaluate people on their net impact on the performance of the organization.

Fast iteration can make up for a lot; it’s usually ok to be wrong if you iterate quickly. Plans should be measured in decades, execution should be measured in weeks.

Don’t fight the business equivalent of the laws of physics.

Inspiration is perishable and life goes by fast. Inaction is a particularly insidious type of risk.

Scale often has surprising emergent properties.

Compounding exponentials are magic. In particular, you really want to build a business that gets a compounding advantage with scale.

Get back up and keep going.

Working with great people is one of the best parts of life.
Helion has been progressing even faster than I expected and is on pace in 2024 to 1) demonstrate Q > 1 fusion and 2) resolve all questions needed to design a mass-producible fusion generator.

The goals of the company are quite ambitious—clean, continuous energy for 1 cent per kilowatt-hour, and the ability to manufacture enough power plants to satisfy the current electrical demand of earth in a ten year period. If both things happen, it will transform the world. Abundant, clean, and radically inexpensive energy will elevate the quality of life for all of us—think about how much the cost of energy factors into what we do and use. Also, electricity at this price will allow us to do things like efficiently capture carbon (so although we’ll still rely on gasoline for a while, it’ll be ok).

Although Helion’s scientific progress of the past 8 years is phenomenal and necessary, it is not sufficient to rapidly get to this new energy economy. Helion now needs to figure out how to engineer machines that don’t break, how to build a factory and supply chain capable of manufacturing a machine every day, how to work with power grids and governments around the world, and more. The biggest input to the degree and speed of success at the company is now the talent of the people who join the team.

Here are a few of the most critical jobs, but please don’t let the lack of a perfect fit deter you from applying.

Electrical Engineer, Low Voltage: https://boards.greenhouse.io/helionenergy/jobs/4044506005
Electrical Engineer, Pulsed Power: https://boards.greenhouse.io/helionenergy/jobs/4044510005
Mechanical Engineer, Generator Systems: https://boards.greenhouse.io/helionenergy/jobs/4044522005
Manager of Mechanical Engineering: https://boards.greenhouse.io/helionenergy/jobs/4044521005

(All current jobs: https://www.helionenergy.com/careers/)
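To put the manufacturing goal in rough numbers, here is a back-of-envelope sketch. The ~3 TW figure for average global electrical demand is a rough public estimate, and the 50 MW per-generator output is a hypothetical assumption rather than a Helion specification; the required build rate scales inversely with whatever the real per-unit output turns out to be.

```python
# Back-of-envelope sketch of the "power the world in ten years" goal.
# WORLD_DEMAND_TW (~3 TW average electrical load) is a rough public estimate;
# PLANT_MW is a hypothetical per-generator output, not a Helion specification.
WORLD_DEMAND_TW = 3.0   # average global electrical demand, terawatts (approximate)
PLANT_MW = 50.0         # assumed output per generator, megawatts (hypothetical)
YEARS = 10

plants_needed = WORLD_DEMAND_TW * 1_000_000 / PLANT_MW   # convert TW to MW
per_day = plants_needed / (YEARS * 365)

print(f"Generators needed: {plants_needed:,.0f}")
print(f"Required build rate: {per_day:,.1f} per day for {YEARS} years")
# With these assumptions: ~60,000 generators at ~16 per day, which is why the
# post emphasizes factories and supply chains that can turn out machines daily.
```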
More in AI
The debate about prioritizing speed or safety is over and reality has made the decision for us.
Today's links
Hate the player AND the game: But above all, hate the crooked ump.
Hey look at this: Delights to delectate.
Object permanence: Library Tor nodes vs the DHS; Egg-board psyops; Fury Road amputation cosplay; NYPD's dirtiest cop.
Upcoming appearances: Where to find me.
Recent appearances: Where I've been.
Latest books: You keep readin' em, I'll keep writin' 'em.
Upcoming books: Like I said, I'll keep writin' 'em.
Colophon: All the rest.

Hate the player AND the game (permalink)

The epigram for my forthcoming book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It, is a quote from Ed Zitron: "I hate them for what they've done to the computer" (Ed even recorded a little cameo of this for the audiobook):

https://www.kickstarter.com/projects/doctorow/enshittification-the-drm-free-audiobook/

Ed's a smart and passionate guy, and this was definitely the quote to sum up the rage I felt as I wrote the book. Ed's got a whole theory of who "they" are and "what they did to the computer," which he calls "the Rot Economy":

https://www.wheresyoured.at/the-rot-economy/

The Rot Economy describes the ideology of bosses, starting with monsters like GE's Jack Welch, who financialized companies, optimizing them for making short term cash gains for investors, at the expense of their workers, their customers, their products and services, and, ultimately, their long-term health. For Ed, these bosses (especially tech bosses) are the sociopaths who destroyed "the computer" (a stand-in for tech more generally).

I don't disagree at all. There is a direct, undeniable line from the ideas and conduct of tech bosses to the tech hellscape we live in today. A good read on this subject is Anil Dash's scorching post from yesterday, "How Tim Cook sold out Steve Jobs":

https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/

I find the Rot Economy hypothesis entirely compelling, but also, incomplete. Ed's explaining why we should hate the players and why we should hate the game, but the enshittification thesis goes even further and explains why we need to hate the umpires – the policymakers, enforcers, economists and legal theorists who created the enshittogenic environment in which the Rot Economy took hold.

Some early reviews of Enshittification have expressed dissatisfaction with the book's "solutions" section, complaining that all the solutions are policy oriented, and there's nothing suggested for us to do in our capacity as individual consumers:

https://pluralistic.net/2025/07/31/unsatisfying-answers/#systemic-problems

Those criticisms are correct: there is nothing we can do as individual consumers. Agonizing about your consumption choices will not fight enshittification any more than conscientiously sorting your recycling will end the climate emergency. Enshittification isn't caused by "lazy consumers" who choose "convenience" or are "too cheap to pay for online services":

https://pluralistic.net/2024/04/12/give-me-convenience/#or-give-me-death

The wellspring of enshittification isn't poor consumption choices, it's poor policy choices. The reason monsters are able to destroy our online lives isn't their personal moral failings, it's the system that rewards predatory, deceptive and unfair commercial practices and elevates their foremost practitioners to positions of power within firms:

https://pluralistic.net/2023/07/28/microincentives-and-enshittification/

And here's the kicker: we know where those policy choices came from!
The people who made these policy choices did so in living memory. They were warned at the time about the foreseeable consequences of their choices. They made those choices anyway. They faced zero consequences for doing so, even after every one of the prophesied horrors came to pass. Not only were they spared consequences for their actions, but they prospered as a result – they are revered as statesmen, lawyers, scholars and titans of economics.

As Trashfuture showrunner Riley Quinn often says, the curse of being a leftist is that you have object permanence – you actually remember the stuff that happened and how it happened. You don't live in an eternal now that has no causal relationship to the past.

It's not enough to hate the player, nor the game – we've got to remember the crooked umps who rigged the match. We have to say their names, because that's how we root out their terrible ideas and ensure that our policy interventions make real change.

If Elon Musk OD'ed on ketamine tomorrow, there'd be ten Big Balls who'd tear each other's throats out in the ensuing succession fight, and the next guy would be just as stupid, racist, and authoritarian. Musk, Cook, Zuck, Pichai, Nadella, Larry Ellison – they're just filling the monster-shaped holes that policy-makers installed in our society.

Start with Robert Bork, the jurist who championed the "consumer welfare" theory of antitrust, which promotes monopolies as efficient and counsels policymakers not to punish companies that take over markets, because the only way to really dominate a market is to be so good that everyone chooses your products and services. Wouldn't it just be perverse to use public funds to shut down the public's favorite companies?

Bork was a virulent racist, a Nixonite criminal, and he was dead wrong about the law and the economics of monopoly:

https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/

Bork's legacy of pro-monopoly advocacy is, unsurprisingly, monopolies. Monopolies that make everything more expensive and worse: from athletic shoes to microchips, glass bottles to pharmaceuticals, pro wrestling to eyeglasses:

https://www.openmarketsinstitute.org/learn/monopoly-by-the-numbers

These monopolies did not arise because of the iron laws of economics. They are not the product of the great forces of history. They are the direct and undeniable consequence of Robert Bork convincing the world's governments to embrace his bullshit, pro-monopoly policies.

Satan took Bork to hell in 2012, but you know who's still with us? Bruce Lehman. Bruce Lehman was Bill Clinton's copyright czar, the man who, in his own words, "did an end-run around Congress" by getting a UN treaty passed that obliged its signatories to ban reverse engineering:

https://www.cbc.ca/listen/cbc-podcasts/1353-the-naked-emperor/episode/16145640-ctrl-ctrl-ctrl

Lehman used the treaty to get Congress to pass the Digital Millennium Copyright Act (DMCA), and section 1201 of the DMCA made it a felony to break DRM. Bruce Lehman is why farmers can't fix their own tractors, hospitals can't fix their own ventilators, and your mechanic can't fix your car. He's why, when the manufacturer of your artificial eyes bricks a computer that is permanently wired to your nervous system, no one else can revive it:

https://pluralistic.net/2022/12/12/unsafe-at-any-speed/

Bruce Lehman is why you can't use the apps of your choosing on your phone or games console. He's why we can't preserve beloved old video games.
He's why Apple and Google get to steal 30 cents out of every dollar you send to a performer, software author, or creator through an app:

https://pluralistic.net/2025/05/01/its-not-the-crime/#its-the-coverup

Yeah, Tim Cook is a venal billionaire who owes his wealth to the Chinese sweatshops of iPhone City, where they had to install suicide nets to catch the workers who'd rather end it all than work another day for Tim Apple, but Tim Cook's power over those workers is owed to Bruce Lehman and Robert Bork.

Then there's the ISP sector, whose Net Neutrality violations and underinvestment mean that people who live in the country where the internet was invented have some of the slowest, most expensive internet in the world. Big ISP bosses are some of the worst people on Earth.

Take Thomas Rutledge, who was CEO of Charter/Spectrum when covid broke out. At the time, Rutledge was America's highest-paid CEO. He dictated that his back-office staff could not work from home (imagine a telco boss who doesn't believe in telework!), and those back-offices all turned into super-spreader sites. Rutledge's field workers – the people who came to our homes and upgraded our internet so we could work from home – did not get PPE or danger pay. Instead, they got vouchers exclusively redeemable at restaurants that had shut down during the pandemic:

https://pluralistic.net/2020/04/22/filternet/#thomas-rutledge-murderer

Fuck Thomas Rutledge and may his name be a curse forever. But the reason Thomas Rutledge – and all the other terrible telco bosses – were able to reap millions by supplying us with dogshit internet while literally murdering their employees was that Trump's FCC chairman, an ex-Verizon lawyer named Ajit Pai, let them get away with it:

https://pluralistic.net/2021/02/12/ajit-pai/#pai

Ajit Pai engaged in some of the most flagrant cheating ever seen in American regulation (prior to Jan 20, 2025, at least). When he decided to kill Net Neutrality, he accepted obviously fraudulent comments into the official record, including one million identical comments from @pornhub.com email addresses, as well as millions of comments whose return addresses were taken from darknet data-dumps, including the email addresses of dead people and of sitting US senators who supported Net Neutrality:

https://pluralistic.net/2023/11/10/digital-redlining/#stop-confusing-the-issue-with-relevant-facts

Pai – and his co-conspirators – are the umps who rigged the game. Hate Thomas Rutledge to be sure, but to prevent people like Rutledge from gaining power over your digital life in future, you must remember Ajit Pai with the special form of white-hot rage that keeps people like him from ever making policy decisions again.

Then there's Canada's hall of shame, which is full of monsters. Two of my least favorite are James Moore and Tony Clement, who, as ministers under Stephen Harper, rammed through a Canadian version of the DMCA, 2012's Bill C-11, despite their own consultation, which found that Canadians overwhelmingly rejected the idea:

https://pluralistic.net/2024/11/15/radical-extremists/#sex-pest

Clement (now a disgraced sex-pest) and Moore (still accepted into polite society as a corporate lawyer) are the reason that Canada's Right to Repair and interop laws are dead on arrival.
They're also why Canada can't retaliate against Trump's tariffs by jailbreaking US products, making everything cheaper for Canadians and birthing new, global Canadian tech businesses:

https://pluralistic.net/2025/01/15/beauty-eh/#its-the-only-war-the-yankees-lost-except-for-vietnam-and-also-the-alamo-and-the-bay-of-ham

In Europe, there's Axel Voss, the man behind 2019's "filternet" proposal, which requires tech platforms to spend hundreds of millions of euros for copyright filters that use AI to process everything posted to the public internet in Europe and block anything the AI thinks is "copyrighted":

https://memex.craphound.com/2019/03/26/article-13-will-wreck-the-internet-because-swedish-meps-accidentally-pushed-the-wrong-voting-button/

For years, Voss maintained that none of this was true, that there would be no filters, and dismissed his critics as hysterical fools:

https://memex.craphound.com/2019/04/03/after-months-of-insisting-that-article13-doesnt-require-filters-top-eu-commissioner-says-article-13-requires-filters/

But then, after his law passed, he admitted he "didn't know what he was voting for":

https://memex.craphound.com/2018/09/14/father-of-the-catastrophic-copyright-directive-reveals-he-didnt-know-what-he-was-voting-for/

Fuck the media lobbyists who spent hundreds of millions of euros to push this catastrophic law through:

https://memex.craphound.com/2018/12/13/clash-of-the-corporate-titans-whos-spending-what-in-europes-copyright-directive-battle/

But especially and forever, fuck Axel Voss, the policymaker who helped turn those corporate bribes into policy.

Ed Zitron is right to hate the people who implement the Rot Economy for what they did to the computer. But those people are only doing what policymakers let them do. Corporate monsters thrive in an enshittogenic environment. But political monsters are the ones who create that enshittogenic environment. They're the ones who are terraforming our planet to sideline human life and replace it with the immortal colony organisms we call "limited liability corporations."

Hey look at this (permalink)

Dwayne Johnson Will Play the Chicken Man in ‘Lizard Music’ https://gizmodo.com/dwayne-johnson-to-next-play-the-chicken-man-in-lizard-music-2000655464

Qualifying Conditions https://www.jwz.org/blog/2025/09/qualifying-conditions/

Cindy Cohn Is Leaving the EFF, but Not the Fight for Digital Rights https://www.wired.com/story/eff-cindy-cohn-stepping-down/

Five technological achievements! (That we won’t see any time soon.) https://crookedtimber.org/2025/09/09/five-technological-achievements-that-we-wont-see-any-time-soon/

A notional design studio.
https://ethanmarcotte.com/wrote/a-notional-design-studio/ Object permanence (permalink) #20yrsago Anti-trusted-computing video https://www.lafkon.net/tc/ #10yrsago Library offers Tor nodes; DHS tells them to stop https://www.propublica.org/article/library-support-anonymous-internet-browsing-effort-stops-after-dhs-email #10yrsago Ashley Madison’s passwords were badly encrypted, 15 million+ passwords headed for the Web https://arstechnica.com/information-technology/2015/09/ashley-madison-password-crack-could-spell-trouble-across-the-internet/ #10yrsago Heathrow security insists that ice is a liquid https://gizmodo.com/what-happens-if-you-take-frozen-liquids-through-airport-1729772148 #10yrago DoJ says it will consider jailing executives who order corporate crimes https://www.nytimes.com/2015/09/10/us/politics/new-justice-dept-rules-aimed-at-prosecuting-corporate-executives.html #10yrsago Government-run egg board waged high-price, secret PSYOPS war on vegan egg-replacement https://www.theguardian.com/business/2015/sep/06/usda-american-egg-board-paid-bloggers-hampton-creek #10yrago Using sandwiches to teach the Socratic method https://web.archive.org/web/20140810204054/https://medium.com/@kmikeym/is-this-a-sandwich-50b1317eb3f5 #10yrago Fury Road cosplay: amputated arm edition https://web.archive.org/web/20150911194228/http://www.tor.com/2015/09/09/afternoon-roundup-furiosa-real-prosthetic-arm-cosplay/ #5yrsago Kids' smart-watches unsafe at any speed https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#digital-parenting #5yrsago Georgia voter suppression, quantified https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#georgia-suppression #5yrsago The rise and rise of one of NYPD's dirtiest cops https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#50a #5yrago Inaudible https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#audible-exclusive Upcoming appearances (permalink) Ithaca: Enshittification at Buffalo Street Books, Sept 11 https://buffalostreetbooks.com/event/2025-09-11/cory-doctorow-tcpl-librarian-judd-karlman Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/ Ithaca: Enshittification at Autumn Leaves Books, Sept 13 https://www.autumnleavesithaca.com/event-details/enshittification-why-everything-got-worse-and-what-to-do-about-it Ithaca: Radicalized Q&A (Cornell), Sept 16 https://events.cornell.edu/event/radicalized-qa-with-author-cory-doctorow Ithaca: The Counterfeiters (Dinner/Movie Night) (Cornell), Sept 17 https://adwhiteprofessors.cornell.edu/visits/cory-doctorow/ Ithaca: Communication Power, Policy, and Practice (Cornell), Sept 18 https://events.cornell.edu/event/policy-provocations-a-conversation-about-communication-power-policy-and-practice Ithaca: A Reverse-Centaur's Guide to Being a Better AI Critic (Cornell), Sept 18 https://events.cornell.edu/event/2025-nordlander-lecture-in-science-public-policy NYC: Enshittification and Renewal (Cornell Tech), Sept 19 https://www.eventbrite.com/e/enshittification-and-renewal-a-conversation-with-cory-doctorow-tickets-1563948454929 NYC: Brooklyn Book Fair, Sept 21 https://brooklynbookfestival.org/event/big-techs-big-heist-cory-doctorow-in-conversation-with-adam-becker/ DC: Enshittification with Rohit Chopra (Politics and Prose), Oct 8 https://politics-prose.com/cory-doctorow-10825 NYC: Enshittification with Lina Khan (Brooklyn Public Library), Oct 9 
https://www.bklynlibrary.org/calendar/cory-doctorow-discusses-central-library-dweck-20251009-0700pm New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/ Chicago: Enshittification with Anand Giridharadas (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/ San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25 Madrid: Conferencia EUROPEA 4D (Virtual), Oct 28 https://4d.cat/es/conferencia/ Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469 Recent appearances (permalink) Nerd Harder! (This Week in Tech) https://twit.tv/shows/this-week-in-tech/episodes/1047 Techtonic with Mark Hurst https://www.wfmu.org/playlists/shows/155658 Cory Doctorow DESTROYS Enshittification (QAA Podcast) https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338 Latest books (permalink) "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels). "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245). "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com Upcoming books (permalink) "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025 "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025 https://us.macmillan.com/books/9780374619329/enshittification/ "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026 "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026 "The Memex Method," Farrar, Straus, Giroux, 2026 "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026 Colophon (permalink) Today's top sources: Currently writing: "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED. A Little Brother short story about DIY insulin PLANNING This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net. 
https://creativecommons.org/licenses/by/4.0/ Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution. How to get Pluralistic: Blog (no ads, tracking, or data-collection): Pluralistic.net Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic Medium (no ads, paywalled): https://doctorow.medium.com/ Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic "When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. ISSN: 3066-764X
On shadow libraries, legal documents, and judicial skepticism.
Today's links
Trump steals $400b from American workers: You get a noncompete, and you get a noncompete, and you get a noncompete!
Hey look at this: Delights to delectate.
Object permanence: Spying baby-monitors; FBI tests spy-gear at Burning Man; Little Brother optioned by Paramount; Best-paid CEOs have worst-paid workers.
Upcoming appearances: Where to find me.
Recent appearances: Where I've been.
Latest books: You keep readin' em, I'll keep writin' 'em.
Upcoming books: Like I said, I'll keep writin' 'em.
Colophon: All the rest.

Trump steals $400b from American workers (permalink)

Trump's stolen a lot of workers' wages over the years, but this week, he has become history's greatest thief of wages, having directed his FTC to stop enforcing its ban on noncompete "agreements," a move that will cost American workers $400 billion over the next ten years:

https://prospect.org/labor/2025-09-09-trump-lets-bosses-grab-400-billion-worker-pay-noncompete-agreements/

The argument for noncompetes is this: modern industry is IP-intensive, and IP-intensive businesses need noncompetes, otherwise workers will take proprietary information with them when they walk out the door and bring it to a competitor. Who would invest in an IP-intensive firm under those circumstances?

I'll tell you who would: Hollywood and Silicon Valley. These are the two most IP-intensive industries in human history, both of which were incubated in California, a state whose constitution prohibits noncompetes and has done so through the entire history of those two industries.

Indeed, we wouldn't have a Silicon Valley if California had noncompetes. Silicon Valley was founded by William Shockley, who won the Nobel Prize for his role in inventing the silicon transistor (hence Silicon Valley). Shockley was a paranoid, virulent racist who couldn't produce a working chip because he was consumed by eugenic fervor and spent all his time on the road offering shares of his Nobel prize money to Black women who would agree to have their tubes tied.

Lucky for (literally) everyone (except William Shockley), California doesn't have noncompetes, so eight of his top engineers ("The Traitorous Eight") were able to quit Shockley Semiconductor and start the first successful chip business: Fairchild Semiconductor. And then two of Fairchild's top engineers quit to found Intel:

https://pluralistic.net/2021/10/24/the-traitorous-eight-and-the-battle-of-germanium-valley/

It's not just Silicon Valley that's rooted in wresting IP away from asshole control-freaks: that's Hollywood's story, too. Ever wonder how it was that movies were invented at Edison Labs in New Jersey, but the film industry was incubated in California, literally as far away from Edison as you could possibly get without ending up in Mexico?

In short: California got the motion picture industry because Edison was an asshole who used his patents to control what kinds of movies could be made and to suck rents out of filmmakers to license those patents.
So the most ambitious filmmakers in America fled to California, where Edison couldn't easily enforce his patents, and founded Hollywood:

https://www.nytimes.com/2005/08/21/weekinreview/lala-land-the-origins.html?unlocked_article_code=1.kk8.5T1M.VSaEsN5Vn9tM&smid=url-share

And Hollywood stayed in California, a place where noncompetes couldn't be enforced, where "IP" could hop from one studio to another, smuggled out between the ears of writers, actors, directors, SFX wizards, prop makers, scenepainters, makeup artists, costumers, and the most creative professionals in Hollywood: accountants.

Empirically speaking, the function of noncompetes is to trap good workers and good ideas in companies controlled by asshole bosses who can't get anything done. Any disinvestment that can be attributed to the absence of noncompetes is completely swamped by the dividends generated by good workers and good ideas escaping from control-freak asshole bosses and founding productive firms.

As ever, money talks and bullshit walks. Today, one in 18 US workers is trapped by a noncompete, and those aren't the knowledge workers of Silicon Valley or Hollywood. So who is captured by this form of contractual indenture? The median US worker under noncompete is a fast-food worker stuck with the tipped minimum wage, or a pet groomer making the regular minimum wage.

The function of the noncompete in America isn't to secure investment for knowledge-intensive industries – it's to stop the cashier at Wendy's from getting an extra $0.25/hour working the fry-trap at the McDonald's across the street.

Noncompetes are an integral part of the conservative project, which is the substitution of individual power for democratic choice. As Dan Savage puts it, the GOP agenda is "Husbands you can't leave [ed: ending no-fault divorce], pregnancies you can't prevent or terminate [ed: banning contraception and abortion], politicians you can't vote out of office [ed: gerrymandering and voter suppression]." Add to that: jobs you can't quit.

It's not just noncompetes that lock workers to shitty bosses. When Biden's FTC investigated the issue, they revealed a widespread practice called "training repayment agreement provisions" (TRAPs) that put workers on the hook for thousands of dollars if they quit or get fired:

https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose

A TRAPped worker – often a pet-groomer at a private equity-owned giant like Petsmart – is charged $5,500 or more for three weeks of "training" that actually amount to one or two weeks of sweeping up pet-hair. But if they leave or get fired in the next three years, they have to pay back that whole amount:

https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose

A closely related concept is "bondage fees," which have been imposed on whole classes of workers, like doormen in NYC apartment buildings:

https://pluralistic.net/2023/04/21/bondage-fees/#doorman-building

These fees trap workers in dead-end jobs by forcing anyone who hires them away to pay massive fees to their former employers. It's just another way to lock workers to businesses.

The irony here is that conservatives claim to worship "voluntarism" and "free choice," and insist that the virtue of markets is that they "aggregate price signals" so that companies can respond to these signals by efficiently matching demand to supply.
But though conservatives say they worship free choice as an engine of economic efficiency, they understand that their ideas are so unpopular that they can only succeed if people are coerced into adopting them, hence voter suppression, gerrymandering, noncompetes, and other heads-I-win/tails-you-lose propositions.

Noncompetes aren't about preventing the loss of IP – they're about preventing the loss of process knowledge, the know-how to turn ideas into products and services. Bosses love IP, because it can be alienated, hoarded and sold, while process knowledge is ineluctably vested in the bodies, minds and relations of workers. No IP law can keep employees from taking process knowledge with them on their way out the door, so bosses want to ban them from leaving:

https://pluralistic.net/2025/09/08/process-knowledge/#dance-monkey-dance

Biden's FTC banned noncompetes nationwide, for nearly every category of employment, deeming them an "unfair method of competition":

https://www.ftc.gov/news-events/news/press-releases/2023/03/ftc-extends-public-comment-period-its-proposed-rule-ban-noncompete-clauses-until-april-19

FTC economists estimated that killing noncompetes would result in $400b in wage gains for the American workforce over the next decade, as good workers migrated to good bosses. Of course this was challenged by the business lobby, which sued to get the rule overturned. Trump's FTC has not only declined to defend the rule in court, they've also decided to stop trying to enforce it.

Trump is now the king of wage-theft, and MAGA is a relentless engine of enshittification. After all, the thesis of enshittification is that companies make their products and practices worse for suppliers, users and business customers only when they calculate that they can do so without facing punishment – from regulators, competitors, or workers.

Trump's regulators are all either comatose or so captured they wear gimpsuits and leashes in public. They're not keeping companies in line. And his antitrust shops have turned into pay-for-play operations, where a $1m payment to a MAGA influencer gets your case dropped:

https://www.thebignewsletter.com/p/an-attempted-coup-at-the-antitrust

Trump neutered the National Labor Relations Board and now he's revived indentured servitude nationwide, formalizing the idea of government-backed jobs you can't quit. If you can't quit your job or vote out your politicians, why wouldn't your boss or your elected representative just relentlessly fuck you over? Not merely for sadism's sake (though sadism undoubtedly plays a part here), but simply to make things better for themselves by making things worse for you? It's exactly the same logic of platform lock-in: once you can't leave, they don't have to keep you happy.

Formalizing the legality of noncompetes will only lead to their monotonic spread. When Antonin Scalia greenlit binding arbitration waivers in consumer contracts, only a tiny number of companies used them, forcing customers to sign away their right to sue them no matter how badly, negligently or criminally they behaved.
Today, binding arbitration has expanded into every kind of contract, even to the point where groovy, open source, decentralized, federated social media platforms are forcing it on their users:

https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-and-all-NON-NEGOTIATED-agreements

Same for noncompetes: as private equity rolls up whole sectors – funeral homes, pet groomers, hospices – they will stuff noncompetes into the contracts of every employer in each industry, so no matter where a worker applies for a job, they'll have to sign a noncompete. Why wouldn't they? If workers can't leave, they'll accept worse working conditions and lower pay. The best workers will be stuck with the worst employers.

And despite owing their existence to bans on noncompetes, Silicon Valley and Hollywood will happily cram noncompetes down their workers' throats. If you doubt it, just read up on the "no poach" scandal, where the biggest tech and movie companies entered into a criminal conspiracy not to hire away each other's employees:

https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_Litigation

The conservative future, folks: jobs you can't quit, politicians you can't vote out of office, husbands you can't divorce, and pregnancies you can't prevent or terminate.

Hey look at this (permalink)

Nate Silver's big list of grievances https://www.garbageday.email/p/nate-silver-s-big-list-of-grievances

Electronic Dance Music vs. Copyright: Law as Weaponized Culture https://drive.proton.me/urls/TVH0PW4TZ8#EM5VMl1BUlny

Google admits the open web is in ‘rapid decline’ https://www.theverge.com/news/773928/google-open-web-rapid-decline

Britain Owes Palestine https://www.britainowespalestine.org/

A Dramatic Reading of The Recent New York Times Dispatch from the Hamptons.
https://bsky.app/profile/zohrankmamdani.bsky.social/post/3lyech7chqs2q

Object permanence (permalink)

#20yrsago Crooks take anti-forensic countermeasures https://www.newscientist.com/article/mg18725163-800-television-shows-scramble-forensic-evidence/

#20yrsago Recording industry demands digital radio broadcast flag https://web.archive.org/web/20051018100306/https://www.godwinslaw.org/weblog/archive/2005/09/09/riaas-big-push-to-copy-protect-digital-radio

#20yrsago Unicef/Save the Children sell out to recording industry https://web.archive.org/web/20050914034709/http://www.promusicae.org/pdf/campana_jovenes_musica_e_internet.pdf

#15yrsago TSA forces pregnant traveller into full-body scanner https://web.archive.org/web/20100910235117/https://consumerist.com/2010/09/pregnant-traveler-tsa-screeners-bullied-me-into-full-body-scan.html

#10yrsago Help crowdfund a relentless tsunami of FOIA requests into America's private prisons https://www.muckrock.com/project/the-private-prison-project-8/

#10yrsago Your baby monitor is an Internet-connected spycam vulnerable to voyeurs and crooks https://web.archive.org/web/20210505050810/https://www.rapid7.com/blog/post/2015/09/02/iotsec-disclosure-10-new-vulns-for-several-video-baby-monitors/

#10yrsago Inept copyright bot sends 2600 a legal threat over ink blotches https://www.2600.com/content/2600-accused-using-unauthorized-ink-splotches

#10yrsago FBI used Burning Man to field-test new surveillance equipment https://www.muckrock.com/news/archives/2015/sep/01/burning-man-fbi-file/

#10yrsago Fury Road, hieroglyph edition https://imgur.com/gallery/you-will-ride-eternal-papyrus-chrome-you-will-ride-eternal-papyrus-chrome-BxdOcTr#/t/chrome

#10yrsago Little Brother optioned by Paramount https://www.tracking-board.com/tb-exclusive-paramount-pictures-picks-up-ny-times-bestselling-ya-novel-little-brother/

#10yrsago Record street-marches in Moldova against corrupt oligarchs https://www.euractiv.com/section/europe-s-east/news/moldova-banking-scandal-fuels-biggest-protest-ever/

#5yrsago Germany's amazing new competition proposal https://pluralistic.net/2020/09/09/free-sample/#wunderschoen

#5yrsago DRM versus human rights https://pluralistic.net/2020/09/09/free-sample/#que-viva

#1yrago America's best-paid CEOs have the worst-paid employees https://pluralistic.net/2024/09/09/low-wage-100/#executive-excess
https://www.eventbrite.com/e/enshittification-and-renewal-a-conversation-with-cory-doctorow-tickets-1563948454929

NYC: Brooklyn Book Fair, Sept 21 https://brooklynbookfestival.org/event/big-techs-big-heist-cory-doctorow-in-conversation-with-adam-becker/

DC: Enshittification with Rohit Chopra (Politics and Prose), Oct 8 https://politics-prose.com/cory-doctorow-10825

NYC: Enshittification with Lina Khan (Brooklyn Public Library), Oct 9 https://www.bklynlibrary.org/calendar/cory-doctorow-discusses-central-library-dweck-20251009-0700pm

New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/

Chicago: Enshittification with Anand Giridharadas (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/

San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25

Madrid: Conferencia EUROPEA 4D (Virtual), Oct 28 https://4d.cat/es/conferencia/

Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469

Recent appearances (permalink)

Nerd Harder! (This Week in Tech) https://twit.tv/shows/this-week-in-tech/episodes/1047

Techtonic with Mark Hurst https://www.wfmu.org/playlists/shows/155658

Cory Doctorow DESTROYS Enshittification (QAA Podcast) https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338

Latest books (permalink)

"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).

"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).

"The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).

"The Internet Con": a nonfiction book about interoperability and Big Tech (Verso), September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).

"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.

"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com

Upcoming books (permalink)

"Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025

"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025 https://us.macmillan.com/books/9780374619329/enshittification/

"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026

"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

"The Memex Method," Farrar, Straus, Giroux, 2026

"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026

Colophon (permalink)

Today's top sources:

Currently writing: "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.

How to get Pluralistic:

Blog (no ads, tracking, or data-collection): Pluralistic.net

Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic

Medium (no ads, paywalled): https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X