There are two things from our announcement today I wanted to highlight.

First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that.

Our initial conception when we started OpenAI was that we’d create AI and use it to create all sorts of benefits for the world. Instead, it now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from.

We are a business and will find plenty of things to charge for, and that will help us provide free, outstanding AI service to (hopefully) billions of people.

Second, the new voice (and video) mode is the best computer interface I’ve ever used. It feels like AI from the movies; and it’s still a bit surprising to me that it’s real. Getting to human-level response times and expressiveness turns out to be a big...
a year ago


More from Sam Altman

Three Observations

Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity.

Systems that start to point to AGI* are coming into view, and so we think it’s important to understand the moment we are in. AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.

People are tool-builders with an inherent drive to understand and create, which leads to the world getting better for all of us. Each new generation builds upon the discoveries of the generations before to create even more capable tools—electricity, the transistor, the computer, the internet, and soon AGI. Over time, in fits and starts, the steady march of human innovation has brought previously unimaginable levels of prosperity and improvements to almost every aspect of people’s lives.

In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together. In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential. In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today.

We continue to see rapid progress with AI development. Here are three observations about the economics of AI:

1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.

2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.
You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.

3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

If these three observations continue to hold true, the impacts on society will be significant.

We are now starting to roll out AI agents, which will eventually feel like virtual co-workers. Let’s imagine the case of a software engineering agent, which is an agent that we expect to be particularly important. Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long. It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others. Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

In some ways, AI may turn out to be like the transistor economically—a big scientific discovery that scales well and that seeps into almost every corner of the economy. We don’t think much about transistors, or transistor companies, and the gains are very widely distributed. But we do expect our computers, TVs, cars, toys, and more to perform miracles.

The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc.
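As a side note on the compounding rates in observation 2 above: the "10x every 12 months" and "~150x" figures are the essay's, and the Moore's law rate is the usual 2x every 18 months; the rest is plain compounding arithmetic, sketched here for concreteness:

```python
def cost_multiple(factor_per_period: float, period_months: float,
                  elapsed_months: float) -> float:
    """Total improvement factor after `elapsed_months`, given a
    compounding rate of `factor_per_period` per `period_months`."""
    return factor_per_period ** (elapsed_months / period_months)

# Roughly 18 months separate GPT-4 (early 2023) and GPT-4o (mid-2024).
ai_trend = cost_multiple(10, 12, 18)   # 10x/year trend over 18 months
moore = cost_multiple(2, 18, 18)       # Moore's law over the same window

print(f"10x/year trend over 18 months: {ai_trend:.1f}x")  # ~31.6x
print(f"Moore's law over 18 months: {moore:.1f}x")        # 2.0x
```

Note that the observed ~150x price drop over that window is even steeper than the 10x-per-year trend line alone would predict.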
But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.

Agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value; resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness, and enable individual people to have more impact than ever before, not less.

We expect the impact of AGI to be uneven. Although some industries will change very little, scientific progress will likely be much faster than it is today; this impact of AGI may surpass everything else. The price of many goods will eventually fall dramatically (right now, the cost of intelligence and the cost of energy constrain a lot of things), and the price of luxury goods and a few inherently limited resources like land may rise even more dramatically.

Technically speaking, the road in front of us looks fairly clear. But public policy and collective opinion on how we should integrate AGI into society matter a lot; one of our reasons for launching products early and often is to give society and the technology time to co-evolve.

AI will seep into all areas of the economy and society; we will expect everything to be smart. Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more, and accept that there is a balance between safety and individual empowerment that will require trade-offs.
While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.

Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas. In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine. There is a great deal of talent right now without the resources to fully express itself, and if we change that, the resulting creative output of the world will lead to tremendous benefits for us all.

Thanks especially to Josh Achiam, Boaz Barak and Aleksander Madry for reviewing drafts of this.

*By using the term AGI here, we aim to communicate clearly, and we do not intend to alter or interpret the definitions and processes that define our relationship with Microsoft. We fully expect to be partnered with Microsoft for the long term.
This footnote seems silly, but on the other hand we know some journalists will try to get clicks by writing something silly so here we are pre-empting the silliness…

6 months ago
Reflections

The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning. New years get people in a reflective mood, and I wanted to share some personal thoughts about how it has gone so far, and some of the things I’ve learned along the way.

As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.

We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history. We wanted to figure out how to build it and make it broadly beneficial; we were excited to try to make our mark on history. Our ambitions were extraordinarily high and so was our belief that the work might benefit society in an equally extraordinary way. At the time, very few people cared, and if they did, it was mostly because they thought we had no chance of success.

In 2022, OpenAI was a quiet research lab working on something temporarily called “Chat With GPT-3.5”. (We are much better at research than we are at naming things.) We had been watching people use the playground feature of our API and knew that developers were really enjoying talking to the model. We thought building a demo around that experience would show people something important about the future and help us make our models better and safer. We ended up mercifully calling it ChatGPT instead, and launched it on November 30th of 2022.

We always knew, abstractly, that at some point we would hit a tipping point and the AI revolution would get kicked off. But we didn’t know what the moment would be. To our surprise, it turned out to be this. The launch of ChatGPT kicked off a growth curve like nothing we have ever seen—in our company, our industry, and the world broadly.
We are finally seeing some of the massive upside we have always hoped for from AI, and we can see how much more will come soon.

It hasn’t been easy. The road hasn’t been smooth and the right choices haven’t been obvious. In the last two years, we had to build an entire company, almost from scratch, around this new technology. There is no way to train people for this except by doing it, and when the technology category is completely new, there is no one at all who can tell you exactly how it should be done.

Building up a company at such high velocity with so little training is a messy process. It’s often two steps forward, one step back (and sometimes, one step forward and two steps back). Mistakes get corrected as you go along, but there aren’t really any handbooks or guideposts when you’re doing original work. Moving at speed in uncharted waters is an incredible experience, but it is also immensely stressful for all the players. Conflicts and misunderstandings abound.

These years have been the most rewarding, fun, best, interesting, exhausting, stressful, and—particularly for the last two—unpleasant years of my life so far. The overwhelming feeling is gratitude; I know that someday I’ll be retired at our ranch watching the plants grow, a little bored, and will think back at how cool it was that I got to do the work I dreamed of since I was a little kid. I try to remember that on any given Friday, when seven things go badly wrong by 1 pm.

A little over a year ago, on one particular Friday, the main thing that had gone wrong that day was that I got fired by surprise on a video call, and then right after we hung up the board published a blog post about it. I was in a hotel room in Las Vegas. It felt, to a degree that is almost impossible to explain, like a dream gone wrong.

Getting fired in public with no warning kicked off a really crazy few hours, and a pretty crazy few days. The “fog of war” was the strangest part.
None of us were able to get satisfactory answers about what had happened, or why.

The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included. Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago. I also learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges. Good governance requires a lot of trust and credibility. I appreciate the way so many people worked together to build a stronger system of governance for OpenAI that enables us to pursue our mission of ensuring that AGI benefits all of humanity.

My biggest takeaway is how much I have to be thankful for and how many people I owe gratitude towards: to everyone who works at OpenAI and has chosen to spend their time and effort going after this dream, to friends who helped us get through the crisis moments, to our partners and customers who supported us and entrusted us to enable their success, and to the people in my life who showed me how much they cared. [1]

We all got back to work in a more cohesive and positive way and I’m very proud of our focus since then. We have done what is easily some of our best research ever. We grew from about 100 million weekly active users to more than 300 million. Most of all, we have continued to put technology out into the world that people genuinely seem to love and that solves real problems.

Nine years ago, we really had no idea what we were eventually going to become; even now, we only sort of know. AI development has taken many twists and turns and we expect more in the future. Some of the twists have been joyful; some have been hard. It’s been fun watching a steady stream of research miracles occur, and a lot of naysayers have become true believers. We’ve also seen some colleagues split off and become competitors.
Teams tend to turn over as they scale, and OpenAI scales really fast. I think some of this is unavoidable—startups usually see a lot of turnover at each new major level of scale, and at OpenAI numbers go up by orders of magnitude every few months. The last two years have been like a decade at a normal company. When any company grows and evolves so fast, interests naturally diverge. And when any company in an important industry is in the lead, lots of people attack it for all sorts of reasons, especially when they are trying to compete with it.

Our vision won’t change; our tactics will continue to evolve. For example, when we started we had no idea we would have to build a product company; we thought we were just going to do great research. We also had no idea we would need such a crazy amount of capital. There are new things we have to go build now that we didn’t understand a few years ago, and there will be new things in the future we can barely imagine now.

We are proud of our track record on research and deployment so far, and are committed to continuing to advance our thinking on safety and benefits sharing. We continue to believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer. We believe in the importance of being world leaders on safety and alignment research, and in guiding that research with feedback from real world applications.

We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.
We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

This sounds like science fiction right now, and somewhat crazy to even talk about. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company.

How lucky and humbling it is to be able to play a role in this work.

(Thanks to Josh Tyrangiel for sort of prompting this. I wish we had had a lot more time.)

[1] There were a lot of people who did incredible and gigantic amounts of work to help OpenAI, and me personally, during those few days, but two people stood out from all others. Ron Conway and Brian Chesky went so far above and beyond the call of duty that I’m not even sure how to describe it. I’ve of course heard stories about Ron’s ability and tenaciousness for years and I’ve spent a lot of time with Brian over the past couple of years getting a huge amount of help and advice. But there’s nothing quite like being in the foxhole with people to see what they can really do. I am reasonably confident OpenAI would have fallen apart without their help; they worked around the clock for days until things were done. Although they worked unbelievably hard, they stayed calm and had clear strategic thought and great advice throughout. They stopped me from making several mistakes and made none themselves. They used their vast networks for everything needed and were able to navigate many complex situations. And I’m sure they did a lot of things I don’t know about.
What I will remember most, though, is their care, compassion, and support. I thought I knew what it looked like to support a founder and a company, and in some small sense I did. But I have never before seen, or even heard of, anything like what these guys did, and now I get more fully why they have the legendary status they do. They are different and both fully deserve their genuinely unique reputations, but they are similar in their remarkable ability to move mountains and help, and in their unwavering commitment in times of need. The tech industry is far better off for having both of them in it. There are others like them; it is an amazingly special thing about our industry and does much more to make it all work than people realize. I look forward to paying it forward.

On a more personal note, thanks especially to Ollie for his support that weekend and always; he is incredible in every way and no one could ask for a better partner.

7 months ago
What I Wish Someone Had Told Me

- Optimism, obsession, self-belief, raw horsepower and personal connections are how things get started.
- Cohesive teams, the right combination of calmness and urgency, and unreasonable commitment are how things get finished. Long-term orientation is in short supply; try not to worry about what people think in the short term, which will get easier over time.
- It is easier for a team to do a hard thing that really matters than to do an easy thing that doesn’t really matter; audacious ideas motivate people.
- Incentives are superpowers; set them carefully.
- Concentrate your resources on a small number of high-conviction bets; this is easy to say but evidently hard to do. You can delete more stuff than you think.
- Communicate clearly and concisely.
- Fight bullshit and bureaucracy every time you see it and get other people to fight it too. Do not let the org chart get in the way of people working productively together.
- Outcomes are what count; don’t let good process excuse bad results.
- Spend more time recruiting. Take risks on high-potential people with a fast rate of improvement. Look for evidence of getting stuff done in addition to intelligence.
- Superstars are even more valuable than they seem, but you have to evaluate people on their net impact on the performance of the organization.
- Fast iteration can make up for a lot; it’s usually ok to be wrong if you iterate quickly. Plans should be measured in decades, execution should be measured in weeks.
- Don’t fight the business equivalent of the laws of physics.
- Inspiration is perishable and life goes by fast. Inaction is a particularly insidious type of risk.
- Scale often has surprising emergent properties.
- Compounding exponentials are magic. In particular, you really want to build a business that gets a compounding advantage with scale.
- Get back up and keep going.
- Working with great people is one of the best parts of life.

a year ago
Helion Needs You

Helion has been progressing even faster than I expected and is on pace in 2024 to 1) demonstrate Q > 1 fusion and 2) resolve all questions needed to design a mass-producible fusion generator.

The goals of the company are quite ambitious—clean, continuous energy for 1 cent per kilowatt-hour, and the ability to manufacture enough power plants to satisfy the current electrical demand of Earth in a ten-year period. If both things happen, it will transform the world. Abundant, clean, and radically inexpensive energy will elevate the quality of life for all of us—think about how much the cost of energy factors into what we do and use. Also, electricity at this price will allow us to do things like efficiently capture carbon (so although we’ll still rely on gasoline for a while, it’ll be ok).

Although Helion’s scientific progress of the past 8 years is phenomenal and necessary, it is not sufficient to rapidly get to this new energy economy. Helion now needs to figure out how to engineer machines that don’t break, how to build a factory and supply chain capable of manufacturing a machine every day, how to work with power grids and governments around the world, and more. The biggest input to the degree and speed of success at the company is now the talent of the people who join the team.

Here are a few of the most critical jobs, but please don’t let the lack of a perfect fit deter you from applying.

Electrical Engineer, Low Voltage: https://boards.greenhouse.io/helionenergy/jobs/4044506005
Electrical Engineer, Pulsed Power: https://boards.greenhouse.io/helionenergy/jobs/4044510005
Mechanical Engineer, Generator Systems: https://boards.greenhouse.io/helionenergy/jobs/4044522005
Manager of Mechanical Engineering: https://boards.greenhouse.io/helionenergy/jobs/4044521005

(All current jobs: https://www.helionenergy.com/careers/)

over a year ago

More in AI

AI Roundup 132: The B-word

August 22, 2025.

18 hours ago
The Strange Reality of AI and SWE Hiring in 2025

Understanding the job market beyond 'the market is bad' is...

19 hours ago
Pluralistic: Radical juries (22 Aug 2025)

Today's links:

- Radical juries: They sure hate Big Tech.
- Hey look at this: Delights to delectate.
- Object permanence: DIY TSA universal keys; Steve Jackson Games raid +20.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.

Radical juries (permalink)

I don't know if you've heard, but water has started running uphill – I mean, speaking in a politico-scientific sense: https://pluralistic.net/2025/06/28/mamdani/#trustbusting

By which I mean, the bedrock consensus of political science appears to have been disproved. Broadly speaking, political scientists believe that lawmakers and regulators only respond to the policy preferences of powerful people. If economic elites want a policy, that's the policy we get – no matter how unpopular it is with everyone else. Likewise, even if something is very, very popular with all of us, we won't get it if the super-rich hate it.

Just take a look at the gap between public opinion and policy outcomes: most people think "capitalism does more harm than good"; most Canadians, Britons and Australians aged 18-34 think "socialism will improve the economy and well-being of citizens"; 72% of Brits support a national job guarantee; the majority of Californians support permanent rent-controls; and most people in 40 countries want CEO salaries capped at 4X that of their lowest-paid employees: https://pluralistic.net/2025/08/07/the-people-no-2/#water-flowing-uphill

The inability of the public to get its way isn't just an impressionistic view – it's an empirical finding, based on a representative sample of 1,779 policy outcomes, that politicians ignore the will of the people in favor of the will of billionaires: economic elites and organized groups representing business interests have substantial independent impacts on U.S.
government policy, while average citizens and mass-based interest groups have little or no independent influence. https://www.cambridge.org/core/journals/perspectives-on-politics/article/testing-theories-of-american-politics-elites-interest-groups-and-average-citizens/62327F513959D0A304D4893B382B992B

And yet, all over the world, we're seeing these irrepressible outbreaks of antitrust policy, aimed squarely at shattering corporate power: https://pluralistic.net/2025/06/28/mamdani/#trustbusting

It's a mystery. There's no policy that would be harder on billionaire wealth and power than vigorous antitrust enforcement (not least because preventing corporate concentration is key to preventing regulatory capture): https://pluralistic.net/2022/06/05/regulatory-capture/

Certainly, there are a lot of merely obscenely rich people who are angry that the farcically rich people are screwing them over, and this class division between the 0.01% and the 1% has opened up some political space: https://pluralistic.net/2025/08/09/elite-disunity/#awoken-giants

But that wouldn't be enough, not without the massive supermajorities of everyday people who are sick to the back teeth of being abused by corporations, and who are desperate for any outlet to strike back.

Take juries. Orrick is a big corporate law firm that represents the kinds of companies that might find their future in the hands of a jury in a state or federal courthouse. Orrick periodically surveys representative samples of people who show up for jury service to get a picture of their attitude towards the kinds of companies that can afford to hire a firm like theirs: https://www.orrick.com/en/Insights/Groundbreaking-Jury-Research-Reveals-US-Jury-Attitudes-in-a-Polarized-Society

Their latest report contrasts the results of a pre-pandemic 2019 survey with a 2025 survey of 1,011 jurors in California, Florida, Kansas, Illinois, Indiana, Louisiana, Minnesota, Missouri, Texas, New Jersey, and New York.
They found that jurors' trust in the court system has plummeted since 2019 (67% in 2019, 48% in 2025); hostility to cops has tripled (11% to 33%); anti-corporate sentiment is way up (27% then, 45% now). The percentage of jurors who believe that they should use the courts to "send messages to companies to improve their behavior" has risen from 58% to 62%; and 77% want to award punitive damages to "punish a corporation" (up from 69%). And jurors are notably hostile to pharma companies, energy companies and large banks, but they especially hate social media companies.

It's no wonder that corporations are so desperate to take away our right to sue them, and why "binding arbitration" clauses that permanently confiscate your legal rights are now part of every corner of modern life: https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-and-all-NON-NEGOTIATED-agreements

The business lobby has been trying to take away workers' and customers' and citizens' right to seek justice in court for decades, ginning up urban legends like "A lady's coffee was too hot so McDonald's had to give her $2.7 million": https://pluralistic.net/2022/06/12/hot-coffee/#mcgeico

Don't believe it. The courts are rarely on our side, but the fact that sometimes, every now and again, a jury will seize an opportunity to deliver a smidgen of justice just drives plutocrats nuts.

Billionaireism is the belief that you don't owe anything to anyone else, that morality is whatever you can get away with.
You don't have to be a billionaire to contract a wicked case of billionaireism – but you do have to be stinking rich to benefit from it: https://pluralistic.net/2025/08/20/billionaireism/#surveillance-infantalism

Hey look at this (permalink)

- How Uber Became A Cash-Generating Machine https://len-sherman.medium.com/how-uber-became-a-cash-generating-machine-ef78e7a97230
- Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet https://brave.com/blog/comet-prompt-injection/
- Burner Phone 101 https://rebeccawilliams.info/burner-phone-101/
- Commonwealth Bank backtracks on AI job cuts, apologises for 'error' as call volumes rise https://www.abc.net.au/news/2025-08-21/cba-backtracks-on-ai-job-cuts-as-chatbot-lifts-call-volumes/105679492
- It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes https://defector.com/it-took-many-years-and-billions-of-dollars-but-microsoft-finally-invented-a-calculator-that-is-wrong-sometimes?giftLink=50fb6d3bb4d7516dfa13deb4e27638de

Object permanence (permalink)

- #20yrsago Google stealthily monitoring clickthroughs from search-results https://web.archive.org/web/20051119012842/http://mboffin.com/post.aspx?id=1830
- #20yrsago Hunter S Thompson’s ashes in fireworks display — pics http://www.talkleft.com/story/2005/08/22/076/47806/media/Hunter-Thompson-s-Final-Blast-Off
- #10yrsago Make your own TSA universal luggage keys https://www.washingtonpost.com/local/trafficandcommuting/where-oh-where-did-my-luggage-go/2014/11/24/16d168c6-69da-11e4-a31c-77759fc1eacc_story.html
- #10yrsago Regal promises security-theater bag-searches in America’s largest cinema chain https://www.techdirt.com/2015/08/21/tsa-movies-theater-chain-looks-to-bring-security-theater-to-movie-theater/
- #10yrsago Judge: City of Inglewood can’t use copyright to censor videos of council meetings
https://web.archive.org/web/20150821122121/http://popehat.com/2015/08/20/californias-city-of-inglewood-cant-copyright-city-council-meetings-case-against-youtube-critic-tossed/
- #10yrsago EFF-Austin panel commemorating the 20th anniversary of the Steve Jackson Games raid https://www.youtube.com/watch?v=ChPS4H-nqiQ
- #5yrsago Facebook overrules its own fact-checkers https://pluralistic.net/2020/08/21/zuck-the-scale-thumber/#scale-thumbers
- #5yrsago Rewarding CEOs for failure https://pluralistic.net/2020/08/21/zuck-the-scale-thumber/#failing-up

Upcoming appearances (permalink)

- Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/
- DC: Enshittification at Politics and Prose, Oct 8 https://politics-prose.com/cory-doctorow-10825
- New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/
- Chicago: Enshittification with Kara Swisher (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/
- San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25
- Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469

Recent appearances (permalink)

- Divesting from Amazon’s Audible and the Fight for Digital Rights (Libro.fm) https://pocketcasts.com/podcasts/9349e8d0-a87f-013a-d8af-0acc26574db2/00e6cbcf-7f27-4589-a11e-93e4ab59c04b
- The Utopias Podcast https://www.buzzsprout.com/2272465/episodes/17650124
- Tariffs vs IP Law (Firewalls Don't Stop Dragons) https://www.youtube.com/watch?v=LFABFe-5-uQ

Latest books (permalink)

- "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245). "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com Upcoming books (permalink) "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025 "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025 https://us.macmillan.com/books/9780374619329/enshittification/ "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026 "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026 "The Memex Method," Farrar, Straus, Giroux, 2026 "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026 Colophon (permalink) Today's top sources: Naked Capitalism (https://www.nakedcapitalism.com/). Currently writing: "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. (1036 words yesterday, 39136 words total). 
A Little Brother short story about DIY insulin PLANNING This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net. https://creativecommons.org/licenses/by/4.0/ Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution. How to get Pluralistic: Blog (no ads, tracking, or data-collection): Pluralistic.net Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic Medium (no ads, paywalled): https://doctorow.medium.com/ Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic "When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. ISSN: 3066-764X

Pluralistic: Charlie Jane Anders' "Lessons in Magic and Disaster." (19 Aug 2025)

Today's links

Charlie Jane Anders' "Lessons in Magic and Disaster.": Families, they fuck you up (magically).
Hey look at this: Delights to delectate.
Object permanence: HST in space; It Plays Doom; Deserted Chinese themeparks; Banksy's Dismaland; Dollars are better than warrants; "Fuck the algorithm."
Upcoming appearances: Where to find me.
Recent appearances: Where I've been.
Latest books: You keep readin' em, I'll keep writin' 'em.
Upcoming books: Like I said, I'll keep writin' 'em.
Colophon: All the rest.

Charlie Jane Anders' "Lessons in Magic and Disaster." (permalink)

Charlie Jane Anders' Lessons in Magic and Disaster drops today: it's a novel about queer academia, the wonder of thinking very hard about very old books, and the terror and joy of ambiguous magic. It's my kind of novel! https://us.macmillan.com/books/9781250867322/lessonsinmagicanddisaster/

There's a kind of magic I love to read about – the kind where it's not entirely clear whether the person purporting to do magic is acting entirely on instinct, and neither they nor we can be entirely sure whether anything magical has actually happened. This ambiguity just tickles something in me, the part of my brain that tries to bear down on traffic lights to make them turn green, or on board-game dice to get a good roll. It's the mode of Iain Banks's The Wasp Factory and Kelly Link's Book of Love. It's a mode that Anders does superbly, and has done since her 2016 debut novel, All the Birds in the Sky: https://memex.craphound.com/2016/01/26/charlie-jane-anderss-all-the-birds-in-the-sky-smartass-soulful-novel/

That's the kind of magic at the heart of Magic and Disaster, which tells the story of Jamie, a doctoral candidate at a New England liberal arts college who is trying to hold it all together while she finishes her dissertation. For Jamie, holding it together is a tall order.
Her relationship is on the rocks, her advisor is breathing down her neck, a smartass alt-right kid in her class keeps trolling her lectures, and, to top it all off, her mother Sarina has withdrawn from society and is self-evidently preparing to lie down and die, out of grief and penance over the death of her wife, whose cancer everyone – her doctors and Sarina alike – downplayed until it was too late.

That would be an impossible lift, except for Jamie's gift for maybe-magic – magic that might or might not be real. Certain places ("liminal spaces") call to Jamie. These are abandoned, dirty, despoiled places: ruins and dumps and littered campsites. When Jamie finds one of these places, she can improvise a ritual, using the things in her pockets and school bag as talismans that might – or might not – conjure small bumps of luck and fortune into her path.

Jamie has never told anyone about the magic, but when she and Sarina have an especially bitter confrontation, it slips out. In desperation, Jamie gives her mother – a campaigning lawyer who has withdrawn from life and become a hermit – a demonstration of magic. Her mother approaches the demonstration with a lawyer's don't-bullshit-me skepticism, but something in her responds to it, and after Jamie leaves, Sarina tries to bring back her dead wife – a forbidden conjuring with disastrous consequences.

Jamie had hoped to give her mother something to live for, but catastrophic magical experimentation wasn't what she had in mind. Soon, Jamie is dragged into Sarina's life, to the detriment of her relationship with Ro, a fellow academic who is rightfully suspicious of Sarina and the effect she has on Jamie. When Ro finds out about the magic, the relationship breaks, and now Jamie has to face her problems alone. Those problems keep mounting.
Jamie is working on a dissertation about a 300-year-old "ladies' novel" that promises to reveal some profound truth about the life of its author and her challenge to the role she found herself confined to as a woman. But it's slow going, and Jamie's advisor is at pains to remind her that dramatic changes are in the offing at the university, and that Jamie had best get that thesis in soon.

Meanwhile, the Men's Rights Activist bro in Jamie's class keeps upping the ante, mixing disruptive "just asking questions" behavior with thinly veiled transphobic digs (Jamie is trans, a fact that is woven through her relationships with her mother and with magic).

Anders tosses a lot of differently shaped objects into the air and then juggles them, interspersing the main action with excerpts from imaginary 18th-century novels (which themselves contain imaginary parables) that serve as both a prestige and a framing device. There's a lot of queer joy in here, a hell of a lot of media theory, and some very chewy ruminations on the far-right mediasphere. There's romance and heartbreak, danger and sacrifice, and, most of all, there's that ambiguous magic, which gets realer and scarier as the action goes on.

This is a wonderful magic trick of a novel from a versatile author whose work includes YA space opera, hard sf adventure stories, and a wealth of brilliant short stories. It's a remarkably easy novel to read, given how much very difficult stuff Anders is doing in the writing, and it lingers long after you finish the last page.
Hey look at this (permalink)

Google admits anti-competitive conduct involving Google Search in Australia https://www.accc.gov.au/media-release/google-admits-anti-competitive-conduct-involving-google-search-in-australia
The twilight of tech unilateralism https://www.programmablemutter.com/p/the-twilight-of-tech-unilateralism
MIT report: 95% of generative AI pilots at companies are failing https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
‘Ad Blocking is Not Piracy’ Decision Overturned By Top German Court https://torrentfreak.com/ad-blocking-is-not-piracy-decision-overturned-by-top-german-court-250819/

Object permanence (permalink)

#20yrsago US CD/DVD bootlegging is not run by organized crime https://memex.craphound.com/2005/08/18/us-cd-dvd-bootlegging-is-not-run-by-organized-crime/
#20yrsago Lem’s tensor algebra poem, annotated https://web.archive.org/web/20051107014429/http://cheesedip.com/2005/08/18/lem_love__tensor_algebra.php
#20yrsago Hunter S Thompson’s ashes to be sent high on fireworks https://www.nytimes.com/2005/08/22/us/ashestofireworks-sendoff-for-an-outlaw-writer.html
#20yrsago Southern Baptist guide to non-gay Disney movies https://web.archive.org/web/20050917042544/http://www.bpnews.net/bpnews.asp?ID=21416
#20yrsago ItPlaysDoom: catalog of devices capable of running Doom https://web.archive.org/web/20070226184902/http://www.itplaysdoom.com/
#10yrsago Women of the Haunted Mansion cosplayers at SDCC https://www.youtube.com/watch?v=TJptS52CZIw
#10yrsago Gallery of deserted Chinese amusement parks https://www.theguardian.com/artanddesign/gallery/2015/aug/14/china-deserted-amusement-parks-stefano-cerio
#10yrsago New pornoscanners are also useless, cost $160 million https://www.politico.com/story/2015/08/airport-security-price-for-tsa-failed-body-scanners-160-million-121385.html
#10yrsago Gender and sf awards: who wins and for what http://www.antipope.org/charlie/blog-static/2015/08/data-books-and-bias.html
#10yrsago The End of the Internet Dream: the speech that won Black Hat (and Defcon) https://web.archive.org/web/20150818104913/https://medium.com/backchannel/the-end-of-the-internet-dream-ba060b17da61
#10yrsago Piracy vs the MPAA: yet another box-office record smashed https://www.techdirt.com/2015/08/18/hollywood-keeps-breaking-box-office-records-while-still-insisting-that-internet-is-killing-movies/
#10yrsago Stephen Hawking’s speech synthesizer now free/open software https://www.wired.com/2015/08/stephen-hawking-software-open-source/
#10yrsago Defector from Kremlin’s outsourced troll army wins 1 rouble in damages https://www.bbc.com/news/world-europe-33972122
#10yrsago Chuck Wendig’s Zeroes: a hacker technothriller in the War Games lineage https://memex.craphound.com/2015/08/18/chuck-wendigs-zeroes-a-hacker-technothriller-in-the-war-games-lineage/
#10yrsago Dismaland: Banksy’s (?) swipe at Disneyland https://www.theguardian.com/artanddesign/2015/aug/18/banksy-weston-super-mare-dismaland
#10yrsago Giant dump of data purports to be from Ashleymadison.com https://arstechnica.com/information-technology/2015/08/data-from-hack-of-ashley-madison-cheater-site-purportedly-dumped-online/
#10yrsago Iran arms deal prosecution falls apart because of warrantless laptop search https://arstechnica.com/tech-policy/2015/08/warrantless-airport-laptop-search-dooms-iran-arms-sales-prosecution/
#10yrsago The (real) hard problem of AI https://www.youtube.com/watch?v=mukaRhQTMP8
#10yrsago Airport security confiscates three year old’s fart gun https://www.independent.co.uk/news/world/europe/toddler-has-his-minions-fart-gun-confiscated-at-dublin-airport-for-posing-security-threat-10457743.html
#5yrsago South Africa's copyright and human rights https://pluralistic.net/2020/08/18/fifth-pig/#3-steps
#5yrsago Upbeat surveillance marketing https://pluralistic.net/2020/08/18/fifth-pig/#hikvision
#5yrsago Fed cops substitute dollars for warrants https://pluralistic.net/2020/08/18/fifth-pig/#ppp
#5yrsago Deindustrialization is a market failure https://pluralistic.net/2020/08/18/fifth-pig/#deindustrialization
#5yrsago Mr Cook, Tear Down That Wall https://pluralistic.net/2020/08/18/fifth-pig/#no-true-scotsman
#5yrsago "Fuck the algorithm" https://pluralistic.net/2020/08/18/fifth-pig/#a-levels
#5yrsago The Fifth Pig https://pluralistic.net/2020/08/18/fifth-pig/#5th-pig

Colophon (permalink)

Currently writing: "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. (1022 words yesterday, 11212 words total). A Little Brother short story about DIY insulin (PLANNING)
