I’ve been continuing to work on a growing series of services that archive, analyze, and represent data from a social network. This network creates text-based posts at a rate of around 400,000 posts per day, and I’ve been feeding the posts through different ML models to try to gauge the broad sentiment of the network and help find posters that spread good vibes.

Sentiment Analysis

Sentiment analysis using newer Transformer models like BERT has improved significantly in accuracy in recent years. On an individual post level, especially for brief text, BERT-based models don’t achieve a great degree of accuracy. On a large scale, however, these models can provide a broad measure of general sentiment on a social network. I’ve been making use of the RoBERTa model trained on a dataset of ~58 million Tweets to gauge the disposition of users on Bluesky over the past few months.

Backfilling Blues

This week I pivoted my data schema for my ATProto indexing tools and needed to backfill the entire...

When Imperfect Systems are Good, Actually: Bluesky's Lossy Timelines

Often when designing systems, we aim for perfection in things like consistency of data, availability, latency, and more. The hardest part of system design is that it’s difficult (if not impossible) to design systems that have perfect consistency, perfect availability, incredibly low latency, and incredibly high throughput, all at the same time. Instead, when we approach system design, it’s best to treat each of these properties as points on different axes that we balance to find the “right fit” for the application we’re supporting. I recently made some major tradeoffs in the design of Bluesky’s Following Feed/Timeline to improve the performance of writes at the cost of consistency in a way that doesn’t negatively affect users but reduced P99s by over 96%.

Timeline Fanout

When you make a post on Bluesky, your post is indexed by our systems and persisted to a database where we can fetch it to hydrate and serve in API responses. Additionally, a reference to your post is “fanned out” to your followers so they can see it in their Timelines. This process involves looking up all of your followers, then inserting a new row into each of their Timeline tables in reverse chronological order with a reference to your post. When a user loads their Timeline, we fetch a page of post references and then hydrate the posts/actors concurrently to quickly build an API response and let them see the latest content from people they follow.

The Timelines table is sharded by user. This means each user gets their own Timeline partition, randomly distributed among shards of our horizontally scalable database (ScyllaDB), replicated across multiple shards for high availability. Timelines are regularly trimmed when written to, keeping them near a target length and dropping older post references to conserve space.
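To make the shape of this process concrete, here is a minimal Go sketch of paged fanout. The getFollowersPage and insertTimelineRow helpers are hypothetical stand-ins (not Bluesky's actual internals), and the page size and concurrency bound mirror the numbers discussed below:

```go
package fanout

import (
	"context"
	"sync"
)

// Hypothetical helpers standing in for the real follower lookup and
// ScyllaDB Timeline write; names are illustrative only.
func getFollowersPage(ctx context.Context, uid, cursor uint64, limit int) ([]uint64, uint64, error) {
	return nil, 0, nil
}
func insertTimelineRow(ctx context.Context, followerUID, postID uint64) error { return nil }

// fanoutPost writes a reference to postID into each follower's
// Timeline, one 10,000-user page at a time, with up to 1,000
// concurrent writes per page.
func fanoutPost(ctx context.Context, authorUID, postID uint64) error {
	var cursor uint64
	for {
		followers, next, err := getFollowersPage(ctx, authorUID, cursor, 10_000)
		if err != nil {
			return err
		}
		sem := make(chan struct{}, 1000) // bound concurrency at 1,000 writes
		var wg sync.WaitGroup
		for _, f := range followers {
			wg.Add(1)
			sem <- struct{}{}
			go func(uid uint64) {
				defer wg.Done()
				defer func() { <-sem }()
				_ = insertTimelineRow(ctx, uid, postID)
			}(f)
		}
		// The slowest write in the page gates fetching the next page.
		wg.Wait()
		if next == 0 {
			return nil
		}
		cursor = next
	}
}
```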
Hot Shards in Your Area

Bluesky currently has around 32 million users and our Timelines database is broken into hundreds of shards. To support millions of partitions on such a small number of shards, each user’s Timeline partition is colocated with tens of thousands of other users’ Timelines. Under normal circumstances with all users behaving well, this doesn’t present a problem, as the work of an individual Timeline is small enough that a shard can handle the work of tens of thousands of them without being heavily taxed. Unfortunately, with a large number of users, some of them will do abnormal things like… well… following hundreds of thousands of other users. Generally, this can be dealt with via policy and moderation to prevent abusive users from causing outsized load on systems, but these processes take time and can be imperfect.

When a user follows hundreds of thousands of others, their Timeline becomes hyperactive, with writes and trimming occurring at massively elevated rates. This load slows down the individual operations on the user’s Timeline, which is fine for the badly behaving user, but causes problems for the tens of thousands of other users sharing a shard with them. We typically call this situation a “Hot Shard”: some resident of a shard has “hot” data that is being written to or read from at much higher rates than the rest. Since the data on the shard is only replicated a few times, we can’t effectively leverage the horizontal scale of our database to absorb all this additional work. Instead, the Hot Shard ends up spending so much time doing work for a single partition that operations on the colocated partitions slow down as well.

Stacking Latencies

Returning to our Fanout process, let’s consider the case of Fanout for a user followed by 2,000,000 other users. Under normal circumstances, writing to a single Timeline takes an average of ~600 microseconds. If we sequentially write to the Timelines of our user’s followers, we’ll be sitting around for 20 minutes at best to Fanout this post. If instead we concurrently Fanout to 1,000 Timelines at once, we can complete this Fanout job in ~1.2 seconds.

That sounds great, except it oversimplifies an important property of systems: tail latencies. The average latency of a write is ~600 microseconds, but some writes take much less time and some take much more. In fact, the P99 latency of writes to the Timelines cluster can be as high as 15 milliseconds! What does this mean for our Fanout? Well, if we concurrently write to 1,000 Timelines at once, statistically we’ll see 10 writes as slow as or slower than 15 milliseconds. In the case of Timelines, each “page” of followers is 10,000 users large, and each page must be fanned out before we fetch the next page. This means that our slowest writes hold up the fetching and Fanout of the next page.

How does this affect our expected Fanout time? Each page will have ~100 writes as slow as or slower than the P99 latency. If we get unlucky, they could all stack up on a single routine and end up slowing down a single page of Fanout to 1.5 seconds. In the worst case, for our 2,000,000-follower celebrity, their post Fanout could end up taking as long as 5 minutes! That’s not even considering P99.9 and P99.99 latencies, which could end up being >1 second and could leave us waiting tens of minutes for our Fanout job. Now imagine how bad this would be for a user with 20,000,000+ followers! So, how do we fix the problem? By embracing imperfection, of course!

Lossy Timelines

Imagine a user who follows hundreds of thousands of others. Their Timeline is being written to hundreds of times a second, moving so fast it would be humanly impossible to keep up with the entirety of their Timeline even if it were their full-time job. For a given user, there’s a threshold beyond which it is unreasonable for them to be able to keep up with their Timeline. Beyond this point, they likely consume content through various other feeds and do not primarily use their Following Feed. Additionally, beyond this point, it is reasonable for us not to have a perfect chronology of everything posted by the many thousands of users they follow, but to provide enough content that the Timeline always has something new. Note that in this case I’m using the term “reasonable” loosely, to convey that as a social media service there must be a limit to the amount of work we are expected to do for a single user.

What if we introduce a mechanism to reduce the correctness of a Timeline such that there is a limit to the amount of work a single Timeline can place on a DB shard? We can assert a reasonable limit for the number of follows a user should have to have a healthy and active Timeline, then increase the “lossiness” of their Timeline the further past that limit they go. A loss_factor can be defined as min(reasonable_limit/num_follows, 1) and can be used to probabilistically drop writes to a Timeline to prevent hot shards. Just before writing a page in Fanout, we can generate a random float between 0 and 1, then compare it to the loss_factor of each user in the page. If the user’s loss_factor is smaller than the generated float, we filter the user out of the page and don’t write to their Timeline. Now, users are all capped at the same number of “follows worth” of Fanout. For example, with a reasonable_limit of 2,000, a user who follows 4,000 others will have a loss_factor of 0.5, meaning half the writes to their Timeline will get dropped. For a user following 8,000 others, their loss_factor of 0.25 will drop 75% of writes to their Timeline. Thus, each user has an effective ceiling on the amount of Fanout work done for their Timeline.
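A minimal sketch of that filter in Go (the followerFollows map is a hypothetical stand-in for the follow-count lookup, which in production comes from the cache described in the aside below):

```go
package lossy

import "math/rand"

// lossFactor implements min(reasonable_limit/num_follows, 1).
func lossFactor(numFollows, reasonableLimit int) float64 {
	if numFollows <= reasonableLimit {
		return 1
	}
	return float64(reasonableLimit) / float64(numFollows)
}

// filterPage drops users whose loss_factor falls below a single
// random roll generated just before the page is written, as
// described above.
func filterPage(page []uint32, followerFollows map[uint32]int, reasonableLimit int) []uint32 {
	roll := rand.Float64()
	kept := page[:0]
	for _, uid := range page {
		if lossFactor(followerFollows[uid], reasonableLimit) >= roll {
			kept = append(kept, uid)
		}
	}
	return kept
}
```

In expectation, a user with a loss_factor of 0.5 keeps half their writes across many posts, matching the worked example above.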
By specifying the limits of reasonable user behavior and embracing imperfection for users who go beyond it, we can continue to provide service that meets the expectations of users without sacrificing the scalability of the system.

Aside on Caching

We write to Timelines at a rate of more than one million times a second during the busy parts of the day. Looking up the number of follows of a given user before fanning out to them would require more than one million additional reads per second to our primary database cluster. This additional load would not be well received by our database, and the additional cost wouldn’t be worth the payoff for faster Timeline Fanout. Instead, we implemented an approach that caches high-follow accounts in a Redis sorted set; each instance of our Fanout service then loads an updated version of the set into memory every 30 seconds. This allows us to perform lookups of follow counts for high-follow accounts millions of times per second per Fanout service instance. By caching values which don’t need to be perfect for the system to function correctly, we can once again embrace imperfection to improve performance and scalability without compromising the function of the service.

Results

We implemented Lossy Timelines a few weeks ago on our production systems and saw a dramatic reduction in hot shards on the Timelines database clusters. In fact, there now appear to be no hot shards in the cluster at all, and the P99 of a page of Fanout work has been reduced by over 90%. Additionally, with the reduction in write P99s, the P99 duration for a full post Fanout has been reduced by over 96%. Jobs that used to take 5-10 minutes for large accounts now take <10 seconds.

Knowing where it’s okay to be imperfect lets you trade consistency for other desirable aspects of your systems and scale ever higher. There are plenty of other places for improvement in our Timelines architecture, but this step was a big one towards improving the throughput and scalability of Bluesky’s Timelines. If you’re interested in these sorts of problems and would like to help us build the core data services that power Bluesky, check out this job listing. If you’re interested in other open positions at Bluesky, you can find them here.

Jetstream: Shrinking the AT Proto Firehose by >99%

Bluesky recently saw a massive spike in activity in response to Brazil’s ban of Twitter. As a result, the AT Proto event firehose provided by Bluesky’s Relay at bsky.network has increased in volume by a huge amount. The average event rate during this surge increased by ~1,300%. Before this new surge in activity, the firehose would produce around 24 GB/day of traffic. After the surge, this volume jumped to over 232 GB/day! Keeping up with the full, verified firehose quickly became less practical on cheap cloud infrastructure with metered bandwidth. To help reduce the burden of operating bots, feed generators, labelers, and other non-verifying AT Proto services, I built Jetstream as an alternative, lightweight, filterable JSON firehose for AT Proto.

How the Firehose Works

The AT Proto firehose is a mechanism used to keep verified, fully synced copies of the repos of all users. Since repos are represented as Merkle Search Trees, each firehose event contains an update to the user’s MST which includes all the changed blocks (nodes in the path from the root to the modified leaf). The root of this path is signed by the repo owner, and a consumer can keep their copy of the repo’s MST up-to-date by applying the diff in the event. For a more in-depth explanation of how Merkle Trees are constructed, check out this explainer.

Practically, this means that for every small JSON record added to a repo, we also send along some number of MST blocks (which are content-addressed hashes and thus very information-dense) that are mostly useful for consumers attempting to keep a fully synced, verified copy of the repo. You can think of this as the difference between cloning a git repo vs. just grabbing the latest version of the files without the .git folder. In this case, the firehose effectively streams the diffs for the repository with commits, signatures, and metadata, which is inherently heavier than a point-in-time checkout of the repo.

Because firehose events with repo updates are signed by the repo owner, they allow a consumer to process events from any operator without having to trust the messenger. This is the “Authenticated” part of the Authenticated Transfer (AT) Protocol and is crucial to the correct functioning of the network. That being said, of the hundreds of consumers of Bluesky’s production Relay, >90% of them are building feeds, bots, and other tools that don’t keep full copies of the entire network and don’t verify MST operations at all. For these consumers, all they actually process is the JSON records created, updated, and deleted in each event. If consumers already trust the provider to do validation on their end, they can get by with a much more lightweight data stream.

How Jetstream Works

Jetstream is a streaming service that consumes an AT Proto com.atproto.sync.subscribeRepos stream and converts it into lightweight, friendly JSON. If you want to try it out yourself, you can connect to my public Jetstream instance and view all posts on Bluesky in realtime:

```
$ websocat "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post"
```

Note: the above instance is operated by Bluesky PBC and is free to use; more instances are listed in the official repo README.

Jetstream takes the CBOR-encoded MST blocks produced by the AT Proto firehose and translates them into JSON objects that are easier to interface with using standard tooling available in programming languages.
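For example, a minimal Go consumer using github.com/gorilla/websocket might look like the sketch below (the Event struct is trimmed to a few fields of the event shape shown next; this is an illustration, not an official client):

```go
package main

import (
	"encoding/json"
	"log"

	"github.com/gorilla/websocket"
)

// Event is a trimmed-down version of the Jetstream event shape;
// real events carry more fields than shown here.
type Event struct {
	Did    string `json:"did"`
	TimeUS int64  `json:"time_us"`
	Commit *struct {
		Collection string          `json:"collection"`
		Rkey       string          `json:"rkey"`
		Record     json.RawMessage `json:"record"`
	} `json:"commit"`
}

func main() {
	url := "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post"
	conn, _, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for {
		// Each websocket message is one JSON-serialized event.
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Fatal(err)
		}
		var evt Event
		if err := json.Unmarshal(msg, &evt); err != nil {
			continue
		}
		log.Printf("%s %d", evt.Did, evt.TimeUS)
	}
}
```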
Since Repo MSTs only contain records in their leaf nodes, this means Jetstream can drop all of the blocks in an event except for those of the leaf nodes, typically leaving only one block per event. In practice, this means that Jetstream’s JSON firehose is nearly 1/10 the size of the full protocol firehose for the same events, but lacks the verifiability and signatures included in the protocol-level firehose. Jetstream events end up looking something like:

```
{
  "did": "did:plc:eygmaihciaxprqvxpfvl6flk",
  "time_us": 1725911162329308,
  "type": "com",
  "commit": {
    "rev": "3l3qo2vutsw2b",
    "type": "c",
    "collection": "app.bsky.feed.like",
    "rkey": "3l3qo2vuowo2b",
    "record": {
      "$type": "app.bsky.feed.like",
      "createdAt": "2024-09-09T19:46:02.102Z",
      "subject": {
        "cid": "bafyreidc6sydkkbchcyg62v77wbhzvb2mvytlmsychqgwf2xojjtirmzj4",
        "uri": "at://did:plc:wa7b35aakoll7hugkrjtf3xf/app.bsky.feed.post/3l3pte3p2e325"
      }
    },
    "cid": "bafyreidwaivazkwu67xztlmuobx35hs2lnfh3kolmgfmucldvhd3sgzcqi"
  }
}
```

Each event lets you know the DID of the repo it applies to, when it was seen by Jetstream (a time-based cursor), and up to one updated repo record as serialized JSON. Check out this 10 second CPU profile of Jetstream serving 200k evt/sec to a local consumer. By dropping the MST and verification overhead and consuming from a relay we trust, we’ve reduced the size of a firehose of all events on the network from 232 GB/day to ~41 GB/day, but we can do better.

Jetstream and zstd

I recently read a great engineering blog from Discord about their use of zstd to compress websocket traffic to/from their Gateway service and client applications. Since Jetstream emits marshalled JSON through the websocket for developer-friendliness, I figured it might be a neat idea to see if we could get further bandwidth reduction by employing zstd to compress events we send to consumers. zstd has two basic operating modes: “simple” mode and “streaming” mode.

Streaming Compression

At first glance, streaming mode seems like it’d be a great fit. We’ve got a websocket connection with a consumer, and streaming mode allows the compression to get more efficient over the lifetime of the connection. I went and implemented a streaming compression version of Jetstream where a consumer can request compression when connecting and will get zstd-compressed JSON sent as binary messages over the socket instead of plaintext.

Unfortunately, this had a massive impact on Jetstream’s server-side CPU utilization. We were effectively compressing every message once per consumer as part of their streaming session. This was not a scalable approach to offering compression on Jetstream.

Additionally, Jetstream stores a buffer of the past 24 hours (configurable) of events on disk in PebbleDB to allow consumers to replay events before getting transitioned into live-tailing mode. Jetstream stores serialized JSON in the DB, so playback is just shuffling the bytes into the websocket without having to round-trip the data into a Go struct. When we layer in streaming compression, playback becomes significantly more expensive because we have to compress outgoing events on-the-fly for a consumer that’s catching up. In real numbers, this increased CPU usage of Jetstream by 23% while lowering the throughput of playback from ~200k evt/sec to ~28k evt/sec for a single local consumer.

When in streaming mode, we can’t compress the bytes once for one consumer and reuse them for another, because zstd’s streaming context window may not be in sync between the two consumers.
The consumers haven’t received exactly the same data in the session, so the clients on the other end don’t have their state machines in the same state. Since streaming mode’s primary advantage is eventually better efficiency as the encoder learns about the data, what if we just taught the encoder about the data at the start and compressed each message statelessly?

Dictionary Mode

zstd offers a mechanism for initializing an encoder/decoder with pre-optimized settings by providing a dictionary trained on a sample of the data you’ll be encoding/decoding. Using this dictionary, zstd essentially uses its smallest encoded representations for the most frequently seen patterns in the sample data. In our case, where we’re compressing serialized JSON with a common event shape and lots of common property names, training a dictionary on a large number of real events should allow us to represent the common elements among messages in the smallest number of bytes.

For take two of Jetstream with zstd, let’s use a single encoder for the whole service that utilizes a custom dictionary trained on 100,000 real events. We can use this encoder to compress every event as we see it, before persisting and emitting it to consumers. Now we end up with two copies of every event: one that’s just serialized JSON, and one that’s statelessly compressed with zstd using our dictionary. Any consumer that wants compression can have a copy of the dictionary on their end to initialize a decoder; then, when we broadcast the shared compressed event, all consumers can read it without any state or context issues. This requires the consumers and server to have a pre-shared dictionary, which is a major drawback of this implementation but good enough for our purposes.
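Here’s a minimal sketch of that scheme using the github.com/klauspost/compress/zstd package (the dictionary file and event bytes are placeholders; this isn’t Jetstream’s actual code):

```go
package main

import (
	"fmt"
	"os"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// A dictionary trained offline on a sample of real events
	// (e.g. with the zstd CLI's --train flag).
	dict, err := os.ReadFile("events.dict")
	if err != nil {
		panic(err)
	}

	// One shared encoder for the whole service: each event is
	// compressed exactly once, statelessly, using the dictionary.
	enc, _ := zstd.NewWriter(nil, zstd.WithEncoderDict(dict))
	// Consumers initialize a decoder with the same pre-shared dictionary.
	dec, _ := zstd.NewReader(nil, zstd.WithDecoderDicts(dict))

	event := []byte(`{"did":"did:plc:example","time_us":1725911162329308}`)

	// EncodeAll/DecodeAll are the stateless "simple mode" calls.
	compressed := enc.EncodeAll(event, nil)
	roundTripped, _ := dec.DecodeAll(compressed, nil)

	fmt.Printf("%d -> %d bytes\n", len(event), len(compressed))
	fmt.Println(string(roundTripped))
}
```

Because no per-connection state is involved, the same compressed bytes can be broadcast to every compression-enabled consumer.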
That leaves the problem of event playback for compression-enabled clients. An easy solution here is to just store the compressed events as well! Since we’re only sticking the JSON records into our PebbleDB, the actual size of the 24 hour playback window is <8 GB with sstable compression. If we store a copy of the JSON-serialized event and a copy of the zstd-compressed event, this will, at most, double our storage requirements. Then during playback, if the consumer requests compression, we can just shuffle bytes out of the compressed version of the DB into their socket instead of having to move them through a zstd encoder.

Savings

Running with a custom dictionary, I was able to get the average Jetstream event down from 482 bytes to just 211 bytes (~0.44 compression ratio). Jetstream allows us to live-tail all posts on Bluesky as they’re posted for as little as ~850 MB/day, and we could keep up with all events moving through the firehose during the Brazil Twitter Exodus weekend for 18 GB/day (down from 232 GB/day). With this scheme, Jetstream is required to compress each event only once before persisting it to disk and emitting it to connected consumers. The CPU impact of these changes is significant in proportion to Jetstream’s incredibly light load, but it’s a flat cost we pay once no matter how many consumers we have. (CPU profile from a 30 second pprof sample with 12 consumers live-tailing Jetstream.)

Additionally, with Jetstream’s shared buffer broadcast architecture, we keep memory allocations incredibly low, and the cost per consumer on CPU and RAM is trivial. In the allocation profile, more than 80% of the allocations are used to consume the full protocol firehose. The total resident memory of Jetstream sits below 16 MB, 25% of which is actually consumed by the new zstd dictionary.

To bring it all home, here’s a snapshot from the dashboard of my public Jetstream instance serving 12 consumers, all with various filters and compression settings, running on a $5/mo OVH VPS. At our new baseline firehose activity, a consumer of the protocol-level firehose would require downloading ~3.16 TB/mo to keep up. A Jetstream consumer getting all created, updated, and deleted records without compression enabled would require downloading ~400 GB/mo to keep up. A Jetstream consumer that only cares about posts and has zstd compression enabled can get by on as little as ~25.5 GB/mo, less than 1% of the full-weight firehose.

Feel free to join the conversation about Jetstream and zstd on Bluesky.

How HLS Works

Over the past few weeks, I’ve been building out server-side short video support for Bluesky. The major aim of this feature is to support short (90 second max) video streaming at a quality that doesn’t cost an arm and a leg for us to provide for free. In order to stay within these constraints, we’re considering making use of a video CDN that can bear the brunt of the bandwidth required to support Video-on-Demand streaming. While the CDN is a pretty fully-featured product, we want to avoid too much vendor lock-in and provide some enhancements to our streaming platform, which requires extending their offering and getting creative with video streaming protocols.

Some of the things we’d like to be able to do that don’t work out-of-the-box are:

- Track view counts, viewer sessions, and duration viewed to provide better feedback for video performance.
- Provide dynamic closed-caption support with the flexibility to automate them in the future.
- Store a transcoded version of source files somewhere durable to provide a “source of truth” for videos when needed.
- Append a “trailer” to the end of video streams for some branding in a TikTok-esque 3-second snippet.

In this post I’ll be focusing on the HLS-related features above, namely view/duration accounting, closed captions, and trailers.

HLS is Just a Bunch of Text Files

HTTP Live Streaming (HLS) is a standard established by Apple in 2009 that allows for adaptive-bitrate live and Video-on-Demand (VOD) streaming. For the purposes of this blog post, I’ll restrict my explanations to how HLS VOD streaming works. A player that implements the HLS protocol is capable of dynamically adjusting the quality of a streamed video based on network conditions. Additionally, a server that implements the HLS protocol should provide one or more variants of a media stream which accommodate varying network qualities to allow for graceful degradation of stream quality without stopping playback.

HLS implements this by producing a series of plaintext (.m3u8) “playlist” files that tell the player what bitrates and resolutions the server provides so that the player can decide which variant it should stream. HLS differentiates between two kinds of “playlist” files: Master Playlists and Media Playlists.

Master Playlists

A Master Playlist is the first file fetched by your video player. It contains a series of variants which point to child Media Playlists. It also describes the approximate bitrate of the variant sources and the codecs and resolutions used by those sources.

```
$ curl https://my.video.host.com/video_15/playlist.m3u8
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360
360p/video.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720
720p/video.m3u8
```

In the above file, the key things to notice are the RESOLUTION parameters and the {res}/video.m3u8 links. Your media player will generally start with the lowest resolution version before jumping up to higher resolutions once the network speed between you and the server is dialed in. The links in this file are pointers to Media Playlists, generally as relative paths from the Master Playlist such that, if we wanted to grab the 720p Media Playlist, we’d navigate to https://my.video.host.com/video_15/720p/video.m3u8. A Master Playlist can also contain multi-track audio directives and directives for closed captions, but for now let’s move on to the Media Playlist.
Media Playlists

A Media Playlist is yet another plaintext file that provides your video player with two key bits of data: a list of media Segments (encoded as .ts video files) and headers for each Segment that tell the player the runtime of the media.

```
$ curl https://my.video.host.com/video_15/720p/video.m3u8
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:4
#EXTINF:4.000,
video0.ts
#EXTINF:4.000,
video1.ts
#EXTINF:4.000,
video2.ts
#EXTINF:4.000,
video3.ts
#EXTINF:4.000,
video4.ts
#EXTINF:2.800,
video5.ts
```

This Media Playlist describes a video that’s 22.8 seconds long (5 x 4-second Segments + 1 x 2.8-second Segment). The playlist describes a VOD piece of media, meaning we know this playlist contains the entirety of the media the player needs. The TARGETDURATION tells us the maximum length of each Segment so the player knows how many Segments to buffer ahead of time. During live streaming, it also lets the player know how frequently to refresh the playlist file to discover new Segments. Finally, the EXTINF headers indicate the duration of the following .ts Segment file, and the relative paths of the video#.ts files tell the player where to load the actual media files from.

Where’s the Actual Media?

At this point, the video player has loaded two .m3u8 playlist files and got lots of metadata about how to play the video, but it hasn’t actually loaded any media files. The .ts files referenced in the Media Playlist are where the real media is, so if we wanted to control the playlists but let the CDN handle serving the actual media, we can just redirect those video#.ts requests to our CDN. .ts files are MPEG-2 Transport Stream encoded short media files that can contain video, or audio and video.

Tracking Views

To track views of our HLS streams, we can leverage the fact that every video player must first load the Master Playlist. When a user requests the Master Playlist, we can modify the results dynamically to provide a SessionID in each response, allowing us to track the user session without cookies or headers:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360
360p/video.m3u8?session_id=12345
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720
720p/video.m3u8?session_id=12345
```

Now when their video player fetches the Media Playlists, it’ll include a query string that we can use to identify the streaming session, ensuring we don’t double-count views on the video and can track which Segments of video were loaded in the session.

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:4
#EXTINF:4.000,
video0.ts?session_id=12345&duration=4
#EXTINF:4.000,
video1.ts?session_id=12345&duration=4
#EXTINF:4.000,
video2.ts?session_id=12345&duration=4
#EXTINF:4.000,
video3.ts?session_id=12345&duration=4
#EXTINF:4.000,
video4.ts?session_id=12345&duration=4
#EXTINF:2.800,
video5.ts?session_id=12345&duration=2.8
```

Finally, when the video player fetches the media Segment files, we can measure the Segment view before we redirect to our CDN with a 302, allowing us to know the amount of video-seconds loaded in the session and which Segments were loaded. This method has limitations, namely that a media player loading a segment doesn’t necessarily mean it showed that segment to the viewer, but it’s the best we can do without an instrumented media player.
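As a rough sketch of what this looks like server-side (hypothetical handler and helper names, and a placeholder CDN domain; not Bluesky's actual implementation), in Go:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

// Hypothetical stand-ins for real session-ID generation, playlist
// storage, and view accounting.
func newSessionID() string { return "12345" }
func masterPlaylist(r *http.Request) string {
	return "#EXTM3U\n360p/video.m3u8\n720p/video.m3u8\n"
}
func recordSegmentView(session, path, dur string) { log.Println("viewed", session, path, dur) }

// serveMasterPlaylist stamps a per-viewer session_id onto every
// variant URI (the non-comment lines) in the Master Playlist.
func serveMasterPlaylist(w http.ResponseWriter, r *http.Request) {
	sessionID := newSessionID()
	var out strings.Builder
	for _, line := range strings.Split(masterPlaylist(r), "\n") {
		if line != "" && !strings.HasPrefix(line, "#") {
			line = fmt.Sprintf("%s?session_id=%s", line, sessionID)
		}
		out.WriteString(line + "\n")
	}
	w.Header().Set("Content-Type", "application/vnd.apple.mpegurl")
	fmt.Fprint(w, out.String())
}

// serveSegment records the view, then 302-redirects the player to
// the CDN copy of the .ts file (placeholder domain).
func serveSegment(w http.ResponseWriter, r *http.Request) {
	q := r.URL.Query()
	recordSegmentView(q.Get("session_id"), r.URL.Path, q.Get("duration"))
	http.Redirect(w, r, "https://cdn.example.com"+r.URL.Path, http.StatusFound)
}

func main() {
	http.HandleFunc("/video_15/playlist.m3u8", serveMasterPlaylist)
	http.HandleFunc("/", serveSegment) // .ts segment requests
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```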
Adding Subtitles

Subtitles are included in the Master Playlist as a variant and are then referenced in each of the video variants to let the player know where to load subs from.

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="en_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="en",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/en.m3u8"
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360,SUBTITLES="subs"
360p/video.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720,SUBTITLES="subs"
720p/video.m3u8
```

Just like with the video Media Playlists, we need a Media Playlist file for the subtitle track as well, so that the player knows where to load the source files from and what duration of the stream they cover.

```
$ curl https://my.video.host.com/video_15/subtitles/en.m3u8
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:22.8
#EXTINF:22.800,
en.vtt
```

In this case, since we’re only serving a short video, we can just provide a single Segment that points at a WebVTT subtitle file encompassing the entire duration of the video. If you crack open the en.vtt file you’ll see something like:

```
$ curl https://my.video.host.com/video_15/subtitles/en.vtt
WEBVTT

00:00.000 --> 00:02.000
According to all known laws of aviation,

00:02.000 --> 00:04.000
there is no way a bee should be able to fly.

00:04.000 --> 00:06.000
Its wings are too small to get its fat little body off the ground.
...
```

The media player is capable of reading WebVTT and presenting the subtitles at the right time to the viewer. For longer videos you may want to break up your VTT files into more Segments and update the subtitle Media Playlist accordingly.

To provide multiple languages and versions of subtitles, just add more EXT-X-MEDIA:TYPE=SUBTITLES lines to the Master Playlist and tweak the NAME, LANGUAGE (if different), and URI of the additional subtitle variant definitions.

```
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="en_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="en",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/en.m3u8"
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="fr_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="fr",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/fr.m3u8"
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="ja_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="ja",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/ja.m3u8"
```

Appending a Trailer

For branding purposes (and in other applications, for advertising purposes), it can be helpful to insert Segments of video into a playlist to change the content of the video without requiring the content to be appended to and re-encoded with the source file.
Thankfully, HLS allows us to easily insert Segments into the Media Playlist using this one neat trick:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:4
#EXTINF:4.000,
video0.ts
#EXTINF:4.000,
video1.ts
#EXTINF:4.000,
video2.ts
#EXTINF:4.000,
video3.ts
#EXTINF:4.000,
video4.ts
#EXTINF:2.800,
video5.ts
#EXT-X-DISCONTINUITY
#EXTINF:3.337,
trailer0.ts
#EXTINF:1.201,
trailer1.ts
#EXTINF:1.301,
trailer2.ts
#EXT-X-ENDLIST
```

In this Media Playlist we use HLS’s EXT-X-DISCONTINUITY header to let the video player know that the following Segments may be in a different bitrate, resolution, and aspect ratio than the preceding content. Once we’ve provided the discontinuity header, we can add more Segments just like normal that point at a different media source broken up into .ts files. Remember, HLS allows us to use relative or absolute paths here, so we could provide a full URL for these trailer#.ts files, or virtually route them so they can retain the path context of the currently viewed video.

Note that we don’t strictly need to provide the discontinuity header here, and we could also name the trailer files something like video{6-8}.ts if we wanted to, but for clarity and proper player behavior, it’s best to use the discontinuity header if your trailer content doesn’t match the bitrate and resolution of the other video Segments.

When the video player goes to play this media, it will continue from video5.ts to trailer0.ts without missing a beat, making it appear as if the trailer is part of the original video. This approach allows us to dynamically change the contents of the trailer for all videos, heavily cache the trailer .ts Segment files for performance, and avoid having to encode the trailer onto the end of every video source file.

Conclusion

At the end of the day, we’ve now got a video streaming service capable of tracking views and watch session durations, dynamic closed-caption support, and branded trailers to help grow the platform. HLS is not a terribly complex protocol. The vast majority of it is human-readable plaintext files, and it’s easy to inspect in the wild to see how it’s used in production. When I started this project, I knew next to nothing about the protocol, but I was able to download some .m3u8 files and get digging to discover how the protocol worked, then build my own implementation of an HLS server to accommodate the video streaming needs of Bluesky.

To learn more about HLS, you can check out the official RFC here, which describes all the features discussed above and more. I hope this post encourages you to go explore other protocols you use every day by poking at them in the wild, downloading the files your browser interprets for you, and figuring out how simple some of these apparently “complex” systems are.

If you’re interested in solving problems like these, take a look at our open Job Recs. If you have any questions about HLS, Bluesky, or other distributed, @scale social media infrastructure, you can find me on Bluesky here and you can discuss this post here.

An entire Social Network in 1.6GB (GraphD Part 2)

In Part 1 of this series, we tried to answer the question “who do you follow who also follows user B” in Bluesky, a social network with millions of users and hundreds of millions of follow relationships. At the conclusion of the post, we’d developed an in-memory graph store for the network that uses HashMaps and HashSets to keep track of the followers of every user and the set of users they follow, allowing bidirectional lookups, intersections, unions, and other set operations for combining social graph data.

I received some helpful feedback after that post where several people pointed me towards Roaring Bitmaps as a potential improvement on my implementation. They were right: Roaring Bitmaps would be an excellent fit for my graph service, GraphD, and could also provide me with a much-needed way to quickly persist and load the graph data to and from disk on startup, hopefully reducing the startup time of the service.

What are Bitmaps?

If you just want to dive into the Roaring Bitmap spec, you can read the paper here, but it might be easier to first talk about bitmaps in general. You can think of a bitmap as a vector of one-bit values (like booleans) that let you encode a set of integer values. For instance, say we have 10,000 users on our website and want to keep track of which users have validated their email addresses. We could do this by creating a list of the uint32 user IDs of each user, in which case if all 10,000 users have validated their emails we’re storing 10k * 32 bits = 40KB. Or, we could create a vector of single-bit values that’s 10,000 bits long (10k / 8 = 1.25KB), then if a user has confirmed their email we can set the value at the index of their UID to 1.

If we want to create a list of all the UIDs of validated accounts, we can walk the vector and record the index of each non-zero bit. If we want to check if user n has validated their email, we can do an O(1) lookup in the bitmap by loading the bit at index n and checking if it’s set.
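To make that concrete, here is a plain (non-Roaring) bitmap in Go implementing the set, check, and walk operations just described (an illustrative sketch, not GraphD's code):

```go
package bitmap

// Bitmap is a plain, uncompressed bitset over uint32 IDs.
type Bitmap struct {
	words []uint64
}

// New allocates a bitmap that can hold IDs in [0, n).
func New(n uint32) *Bitmap {
	return &Bitmap{words: make([]uint64, (n+63)/64)}
}

// Set marks uid as present (e.g. "user uid validated their email").
func (b *Bitmap) Set(uid uint32) {
	b.words[uid/64] |= 1 << (uid % 64)
}

// Contains reports whether uid is present: an O(1) load and mask.
func (b *Bitmap) Contains(uid uint32) bool {
	return b.words[uid/64]&(1<<(uid%64)) != 0
}

// UIDs walks the bitmap and returns the index of every set bit.
func (b *Bitmap) UIDs() []uint32 {
	var out []uint32
	for i, w := range b.words {
		for bit := 0; bit < 64; bit++ {
			if w&(1<<bit) != 0 {
				out = append(out, uint32(i*64+bit))
			}
		}
	}
	return out
}
```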
When Bitmaps get Big and Sparse

Now, when talking about our social network problem, we’re dealing with a few more than 10,000 UIDs. We need to keep track of 5.5M users and whether or not each user follows or is followed by any of the other 5.5M users in the network. To keep a bitmap of “People who follow User A”, we’re going to need 5.5M bits, which would require (5.5M / 8) ~687KB of space. If we wanted to keep bitmaps of “People who follow User A” and “People who User A follows”, we’d need ~1.37MB of space per user using a simple bitmap, meaning we’d need 5,500,000 * 1.37MB = ~7.5 Terabytes of space! Clearly this isn’t an improvement on our strategy from Part 1, so how can we make this more efficient?

One strategy for compressing the bitmap is to take consecutive runs of 0’s or 1’s (i.e. 00001110000001) in the bitmap and turn them into a number. For instance, if we had an account that followed only the last 100 accounts in our social network, the first 5,499,900 indices in our bitmap would be 0’s, and so we could represent the bitmap by saying: 5,499,900 0's, then 100 1's, which you’ll notice I’ve written here in a lot fewer than 687KB, and which a computer could encode using two uint32 values plus two bits (one indicator bit for the state of each run) for a total of 66 bits. This strategy is called Run Length Encoding (RLE) and works pretty well, but it has a few drawbacks: mainly, if your data is randomly and heavily populated, you may not have many consecutive runs (imagine a bitset where every odd bit is set and every even bit is unset). Also, lookups and evaluation of the bitset require walking the whole bitset to figure out where the index you care about lives in the compressed format.

Thankfully there’s a more clever way to compress bitmaps, using a strategy called Roaring Bitmaps. A brief description of the storage strategy for Roaring Bitmaps from the official paper is as follows:

We partition the range of 32-bit indexes ([0, n)) into chunks of 2^16 integers sharing the same 16 most significant digits. We use specialized containers to store their 16 least significant bits. When a chunk contains no more than 4096 integers, we use a sorted array of packed 16-bit integers. When there are more than 4096 integers, we use a 2^16-bit bitmap. Thus, we have two types of containers: an array container for sparse chunks and a bitmap container for dense chunks. The 4096 threshold insures that at the level of the containers, each integer uses no more than 16 bits.

These bitmaps are designed to support both densely and sparsely distributed data and can provide high-performance binary set operations (and/or/etc.) by operating on the containers within two or more bitsets in parallel. For more info on how Roaring Bitmaps work and some neat diagrams, check out this excellent primer on Roaring Bitmaps by Vikram Oberoi. So, how does this help us build a better graph?

GraphD, Revisited with Roaring Bitmaps

Let’s get back to our GraphD service, this time in Go instead of Rust. For each user we can keep track of a struct with two bitmaps:

```go
type FollowMap struct {
	followingBM *roaring.Bitmap
	followingLk sync.RWMutex

	followersBM *roaring.Bitmap
	followersLk sync.RWMutex
}
```

Our FollowMap gives us a Roaring Bitmap for both the set of users we follow and the set of users who follow us. Adding a Follow to the graph just requires that we set the right bits in both users’ respective maps:

```go
// Note I've removed locking code and error checks for brevity
func (g *Graph) addFollow(actorUID, targetUID uint32) {
	actorMap, _ := g.g.Load(actorUID)
	actorMap.followingBM.Add(targetUID)

	targetMap, _ := g.g.Load(targetUID)
	targetMap.followersBM.Add(actorUID)
}
```

Even better, if we want to compute the intersection of two sets (i.e. the people User A follows who also follow User B), we can do so in parallel:

```go
// Note I've removed locking code and error checks for brevity
func (g *Graph) IntersectFollowingAndFollowers(actorUID, targetUID uint32) ([]uint32, error) {
	actorMap, _ := g.g.Load(actorUID)
	targetMap, _ := g.g.Load(targetUID)

	intersectMap := roaring.ParAnd(4, actorMap.followingBM, targetMap.followersBM)

	return intersectMap.ToArray(), nil
}
```

Storing the entire graph as Roaring Bitmaps in-memory costs us around 6.5GB of RAM and allows us to perform set intersections between moderately large sets (with hundreds of thousands of set bits) in under 500 microseconds while serving over 70k req/sec! And the best part of all? We can use Roaring’s serialization format to write these bitmaps to disk or transfer them over the network.

Storing 164M Follows in 1.6GB

In the original version of GraphD, on startup the service would read a CSV file with an adjacency list of the (ActorDID, TargetDID) pairs of all follows on the network. This required creating a CSV dump of the follows table, pausing writes to the follows table, then bringing up the service and waiting 5 minutes for it to read the CSV file, intern the DIDs as uint32 UIDs, and construct the in-memory graph. This process is slow, pauses writes for 5 minutes, and every time our service restarts we have to do it all over again!
With Roaring Bitmaps, we’re now given an easy way to effectively serialize a version of the in-memory graph that is many times smaller than the adjacency-list CSV and many times faster to load. We can serialize the entire graph into a SQLite DB on the local machine where each row in a table contains: (uid, DID, followers_bitmap, following_bitmap)

Loading the entire graph from this SQLite DB can be done in around ~20 seconds:

```go
// Note I've removed locking code and error checks for brevity
rows, err := g.db.Query(`SELECT uid, did, following, followers FROM actors;`)
for rows.Next() {
	var uid uint32
	var did string
	var followingBytes []byte
	var followersBytes []byte

	rows.Scan(&uid, &did, &followingBytes, &followersBytes)

	followingBM := roaring.NewBitmap()
	followingBM.FromBuffer(followingBytes)

	followersBM := roaring.NewBitmap()
	followersBM.FromBuffer(followersBytes)

	followMap := &FollowMap{
		followingBM: followingBM,
		followersBM: followersBM,
		followingLk: sync.RWMutex{},
		followersLk: sync.RWMutex{},
	}

	g.g.Store(uid, followMap)
	g.setUID(did, uid)
	g.setDID(uid, did)
}
```

While the service is running, we can also keep track of the UIDs of actors who have added or removed a follow since the last time we saved the DB, allowing us to periodically flush changes to the on-disk SQLite only for bitmaps that have updated. Syncing our data every 5 seconds while tailing the production firehose takes 2ms and writes an average of only ~5MB to disk per flush.

The crazy part of this is, the on-disk representation of our entire follow network is only ~1.6GB! Because we’re making use of Roaring’s compressed serialized format, we can turn the ~6.5GB of in-memory maps into 1.6GB of on-disk data. Our largest bitmap, the followers of the bsky.app account with over 876k members, becomes ~500KB as a blob stored in SQLite.

So, to wrap up our exploration of Roaring Bitmaps for first-degree graph databases, we saw:

- A ~20% reduction in resident memory size compared to HashSets and HashMaps
- A ~84% reduction in the on-disk size of the graph compared to an adjacency list
- A ~93% reduction in startup time compared to loading from an adjacency list
- A ~66% increase in throughput of worst-case requests under load
- A ~59% reduction in p99 latency of worst-case requests under load

My next iteration on this problem will likely be to make use of DGraph’s in-memory Serialized Roaring Bitmap library, which allows you to operate on fully-compressed bitmaps so there’s no need to serialize and deserialize them when reading from or writing to disk. It probably results in significant memory savings as well!

If you’re interested in solving problems like these, take a look at our open Backend Developer Job Rec. You can find me on Bluesky here, and you can chat about this post here.


Pluralistic: Reverse centaurs are the answer to the AI paradox (11 Sep 2025)

Today's links

- Reverse centaurs are the answer to the AI paradox: Not what the machine does, but who it does it to.
- Hey look at this: Delights to delectate.
- Object permanence: Themepunks; Data is a liability; Alexa for landlords; Qanon is the Protocols of the Elders of Zion.
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.

Reverse centaurs are the answer to the AI paradox (permalink)

My latest Locus column is "Reverse Centaurs," and it sets out to unravel a paradox: how is it that some AI users describe their experience as a hellish ordeal, while others delight in the ways that AI is changing their lives for the better?

https://locusmag.com/2025/09/commentary-cory-doctorow-reverse-centaurs/

The answer is contained in the concept of "centaurs" and "reverse centaurs," found in automation theory:

https://pluralistic.net/2025/05/27/rancid-vibe-coding/#class-war

A "centaur" is a human being who is assisted by a machine (a human head on a strong and tireless body). A reverse centaur is a machine that uses a human being as its assistant (a frail and vulnerable person being puppeteered by an uncaring, relentless machine).

Let me give you an example: remember at the start of the summer, when Hearst published a summer reading guide that was full of nonexistent books that had been "hallucinated" by a chatbot?

https://www.npr.org/2025/05/20/nx-s1-5405022/fake-summer-reading-list-ai

404 Media's Jason Koebler got in touch with the guy whose byline appeared on the list, and he was hugely embarrassed and contrite:

https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-dont-exist/

But in a followup story, Koebler noticed something that the first round of dunks and memes about this poor guy had missed: this same writer had his name on many of these "best of the summer" lists in this supplement. He was practically the sole author of an entire 64-page insert:

https://www.404media.co/viral-ai-generated-summer-guide-printed-by-chicago-sun-times-was-made-by-magazine-giant-hearst/

And that's where it gets interesting. Koebler got his start in journalism as an intern at the Washington Monthly, where he worked on lists like these:

https://www.404media.co/podcast-ai-slop-summer/

When Koebler was doing this work, he'd be part of a team of three interns, overseen by an experienced journalist, backstopped by an extensive fact-checking department. Those little lists take a surprising amount of work, if you really care about their quality. The freelance writer who authored this giant summer reading guide with all its lists had been tasked with doing the work of literally dozens of writers, editors and fact-checkers. We don't know whether his boss told him he had to use AI, but there's no way one writer could do all that work without AI.

In other words, that writer's job wasn't to write the article. His job was to be the "human in the loop" for an AI that wrote the articles, but on a schedule and with a workload that precluded his being able to do a good job. It's more true to say that his job was to be the AI's "accountability sink" (in the memorable phrasing of Dan Davies): he was being paid to take the blame for the AI's mistakes. He was, in other words, a reverse centaur.
Now, I am a freelance writer as well, and not so long ago, I wanted to quote something smart I'd heard on a podcast in an article, but I couldn't remember where I heard it. So I downloaded Whisper, an open source AI transcription model from Openai, to my laptop. I threw the last 30 hours' worth of audio that I'd listened to at it, and worked away on other stuff for an hour or two. When I checked again, I had a folder full of pretty reliable transcripts. I searched the text, found the quote, and opened the audio to the supplied timecode to double-check it. I was a centaur. I got to decide how to use the AI, and I only had to use it in ways that made my work better and more satisfying.

This, I think, is the explanation for the paradox of AI: the AI users who are being immiserated and precaritized by bosses who have been convinced to fire their colleagues and pile their work on the terrorized survivors of the layoffs hate the AI, because it makes their life worse in every way. Whereas the people who choose when and how to use AI – the centaurs – are only using AI to the extent that it is useful, and throwing it away when it's not. They may make poor choices about the AI, but those choices are theirs; they are not imposed from on high.

A bicyclist who chooses to commute on two wheels can have a glorious ride, or they can ride like a maniac and end up eating dirt, but they are having a fundamentally different experience from, say, a gig delivery platform rider who has been given an impossible quota and is having their pay eroded by algorithmic wage discrimination:

https://pluralistic.net/2024/02/29/geometry-hates-uber/#toronto-the-gullible

I was very happy to put this analysis in the pages of Locus, the trade magazine for the science fiction field. The job of a science fiction writer is only incidentally to describe what a technology does – at its best, science fiction interrogates who the technology does it to and who the technology does it for. This is a political act of resistance. Margaret Thatcher's motto, after all, was "There is no alternative," by which she meant, "Stop trying to think of alternatives." The bully's trick is to present your defeat as a fait accompli: "Resistance is futile."

Tech bosses practice a form of vulgar Thatcherism all the time: Mark Zuckerberg wants you to think there's no way to talk with your friends without letting him listen in; Sundar Pichai wants you to think there's no way to search the web without being spied on; Tim Cook wants you to think there's no way to have a safe and reliable computing experience without giving him a veto over which software you install; Satya Nadella wants you to think there's no way for you to edit a Word file without letting your boss compare your keystrokes-per-minute to your co-workers':

https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware

And AI bosses want you to think that the only way to use these tools is to displace and immiserate labor, because that's the promise they raise investment capital on:

https://pluralistic.net/2025/08/05/ex-princes-of-labor/#hyper-criti-hype

AI is a bubble. If it wasn't a bubble – if it was just a bunch of computer scientists and product teams tinkering with possible uses for advancements in back-propagation, generative adversarial networks and machine learning – there wouldn't be any controversy here.
A programmer who uses a chatbot to autogen a bunch of cross-browser CSS stylesheets that mostly work, after some tinkering, would maybe mention that fact over beers – but they wouldn't get sucked into a cult obsessed with outlandish scenarios in which the chatbot wakes up and turns us all into paperclips:

https://firstmonday.org/ojs/index.php/fm/article/view/13636

AI is a bubble. Bubbles burst. We're in for a near-total collapse of the AI investment mania. Most of these companies will fail. Many planned data-centers will never be opened. Many existing data-centers will be shuttered. When that happens, what will be left?

AI is a bubble, and when bubbles burst, they sometimes leave behind a productive residue. At home, I enjoy 2GB symmetrical fiber optic internet, because AT&T was able to light up some of the dark fiber that Worldcom fraudulently raised billions for. Worldcom's CEO died in prison after scamming the finances of ordinary people, and the world would be a better place if that had never happened, but there was some productive residue left behind, and many of us are reaping the benefit today:

https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

Contrast that with the cryptocurrency bubble. When that bursts, we'll still have a smattering of programmers who've had a subsidized education in cryptography and secure programming in Rust, but mostly what crypto will leave behind is bad Austrian economics and worse monkey JPEGs. Like Enron, crypto will leave nothing much behind of any value. All bubbles are bad, but some are more productive than others.

When the AI bubble bursts, there will be stellar bargains on GPUs (it would be ironic if scientists snapped them up at pennies on the dollar and used them for climate modeling). We'll have a lot of technical people who are much better at applied statistics than they were a decade ago. And there will be the open source models, like Whisper, the tool I used to transcribe all those podcasts. These open source models run on commodity hardware, and while the climate costs of creating those models are terrible, they're here now, and operating them isn't especially energy-intensive. When I used Whisper to transcribe 30 hours' worth of podcasts, my laptop's fan didn't even switch on.

What's more, open source hackers are doing amazing things with these tools – far more than the giant corporations that released them ever anticipated. These "toy" models were released as a way to entice programmers into specializing in cloud systems operated by the big tech companies, but it turns out that these standalone models can do amazing things, and aren't just a demo for a big, doomed foundation model:

https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means

It doesn't matter what happens to Openai; Whisper is here to stay. It's already being rolled into other standard tools – the latest version of ffmpeg integrates Whisper and can autogen captions:

https://www.theregister.com/2025/08/28/ffmpeg_8_huffman/

The things these open source standalone models can do will only expand, and they will become a given for our computing applications. Your computer or phone will be able to transcribe audio and do cool image-editing stuff like erasing strangers from the background of a photo as a standard feature. That's the good news. The bad news is all the damage the bubble is doing now and all the further damage that will come from its collapse.
Today, we're getting the climate impact, obviously, and the immiseration of all those workers who are being reverse-centaured by an AI that can't do their job, but whose manufacturer's salesforce convinced their boss to fire them and replace them with an AI anyway. After the bubble bursts, there will be the mass incineration of everyday people's retirement savings and the knock-on effects as the whole market craters. And long after that, there will be the terrible impact on our society's ability to do things, as defunct foundation models grind to a halt, after the people they replaced are long gone and can't step in to pick up the work they fumble. We are busily filling the walls of society with digital asbestos and we'll be digging it out for generations to come.

Every day the bubble persists, the harms of today and tomorrow increase. We need to burst that bubble as soon as possible. That's how I came to spend the summer writing a book for Farrar, Straus and Giroux with the working title The Reverse-Centaur's Guide to AI, whose goal is to improve the quality of AI criticism so that it inflicts maximum damage on AI swindlers and their terrible investment bubble. It'll be out in 2026, but for now, you can have a look at my Locus column:

https://locusmag.com/2025/09/commentary-cory-doctorow-reverse-centaurs/

(Image: School Photos PCC, CC BY 2.0, modified)

Hey look at this (permalink)

- Flush door handles are the car industry's latest safety problem https://arstechnica.com/cars/2025/09/flush-door-handles-are-the-car-industrys-latest-safety-problem/
- The Great Space Race(ism): How Science Fiction Predicted the Future–and How Afrofuturism Could Negate It https://escholarship.org/uc/item/3gv8r1r5

Object permanence (permalink)

- #20yrsago Themepunks (AKA Makers) serialized for next ten weeks on Salon https://web.archive.org/web/20050914060107/http://www.salon.com/tech/feature/2005/09/12/themepunks_1/index_np.html
- #10yrsago Data is a liability, not an asset https://web.archive.org/web/20150911201818/https://richie.fi/blog/data-is-a-liability.html
- #10yrsago Missing from the computer science curriculum https://prog21.dadgum.com/210.html
- #5yrsago Alexa for landlords https://pluralistic.net/2020/09/11/protocols-of-qanon/#landlord-alexa
- #5yrsago Security Engineering, 3d edition https://pluralistic.net/2020/09/11/protocols-of-qanon/#security-engineering-v3
- #5yrsago America's pandemic spiral https://pluralistic.net/2020/09/11/protocols-of-qanon/#doom-loops
- #5yrsago EFF vs filternet https://pluralistic.net/2020/09/11/protocols-of-qanon/#no-filternet
- #5yrsago Qanon is basically the Protocols of the Elders of Zion https://pluralistic.net/2020/09/11/protocols-of-qanon/#godwins-qanon
- #5yrsago Life as a precriminal https://pluralistic.net/2020/09/11/protocols-of-qanon/#chris-nocco

Upcoming appearances (permalink)

- Ithaca: Enshittification at Buffalo Street Books, Sept 11 https://buffalostreetbooks.com/event/2025-09-11/cory-doctorow-tcpl-librarian-judd-karlman
- Ithaca: AD White keynote (Cornell), Sept 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/
- Ithaca: Enshittification at Autumn Leaves Books, Sept 13 https://www.autumnleavesithaca.com/event-details/enshittification-why-everything-got-worse-and-what-to-do-about-it
- Ithaca: Radicalized Q&A (Cornell), Sept 16 https://events.cornell.edu/event/radicalized-qa-with-author-cory-doctorow
- Ithaca: The Counterfeiters (Dinner/Movie Night) (Cornell), Sept 17 https://adwhiteprofessors.cornell.edu/visits/cory-doctorow/
Ithaca: Communication Power, Policy, and Practice (Cornell), Sept 18 https://events.cornell.edu/event/policy-provocations-a-conversation-about-communication-power-policy-and-practice
Ithaca: A Reverse-Centaur's Guide to Being a Better AI Critic (Cornell), Sept 18 https://events.cornell.edu/event/2025-nordlander-lecture-in-science-public-policy
NYC: Enshittification and Renewal (Cornell Tech), Sept 19 https://www.eventbrite.com/e/enshittification-and-renewal-a-conversation-with-cory-doctorow-tickets-1563948454929
NYC: Brooklyn Book Fair, Sept 21 https://brooklynbookfestival.org/event/big-techs-big-heist-cory-doctorow-in-conversation-with-adam-becker/
DC: Enshittification with Rohit Chopra (Politics and Prose), Oct 8 https://politics-prose.com/cory-doctorow-10825
NYC: Enshittification with Lina Khan (Brooklyn Public Library), Oct 9 https://www.bklynlibrary.org/calendar/cory-doctorow-discusses-central-library-dweck-20251009-0700pm
New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/
Chicago: Enshittification with Anand Giridharadas (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/
San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25
Seattle: Enshittification and the Rot Economy, with Ed Zitron (Clarion West), Oct 22 https://www.clarionwest.org/event/2025-deep-dives-cory-doctorow/
Madrid: Conferencia EUROPEA 4D (Virtual), Oct 28 https://4d.cat/es/conferencia/
Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469

Recent appearances (permalink)

Nerd Harder! (This Week in Tech) https://twit.tv/shows/this-week-in-tech/episodes/1047
Techtonic with Mark Hurst https://www.wfmu.org/playlists/shows/155658
Cory Doctorow DESTROYS Enshittification (QAA Podcast) https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338

Latest books (permalink)

"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
"The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": a nonfiction book about interoperability and Big Tech (Verso), September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com Upcoming books (permalink) "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025 "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025 https://us.macmillan.com/books/9780374619329/enshittification/ "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026 "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026 "The Memex Method," Farrar, Straus, Giroux, 2026 "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026 Colophon (permalink) Today's top sources: Currently writing: "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED. A Little Brother short story about DIY insulin PLANNING This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net. https://creativecommons.org/licenses/by/4.0/ Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution. How to get Pluralistic: Blog (no ads, tracking, or data-collection): Pluralistic.net Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic Medium (no ads, paywalled): https://doctorow.medium.com/ Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic "When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. ISSN: 3066-764X

On Working with Wizards

Verifying magic on the jagged frontier

Anthropic Might Legally Owe Me Thousands of Dollars

On shadow libraries, legal documents, and judicial skepticism.

Pluralistic: Hate the player AND the game (10 Sep 2025)

Today's links

Hate the player AND the game: But above all, hate the crooked ump.
Hey look at this: Delights to delectate.
Object permanence: Library Tor nodes vs the DHS; Egg-board psyops; Fury Road amputation cosplay; NYPD's dirtiest cop.
Upcoming appearances: Where to find me.
Recent appearances: Where I've been.
Latest books: You keep readin' em, I'll keep writin' 'em.
Upcoming books: Like I said, I'll keep writin' 'em.
Colophon: All the rest.

Hate the player AND the game (permalink)

The epigram for my forthcoming book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It, is a quote from Ed Zitron: "I hate them for what they've done to the computer" (Ed even recorded a little cameo of this for the audiobook):

https://www.kickstarter.com/projects/doctorow/enshittification-the-drm-free-audiobook/

Ed's a smart and passionate guy, and this was definitely the quote to sum up the rage I felt as I wrote the book. Ed's got a whole theory of who "they" are and "what they did to the computer," which he calls "the Rot Economy":

https://www.wheresyoured.at/the-rot-economy/

The Rot Economy describes the ideology of bosses, starting with monsters like GE's Jack Welch, who financialized companies, optimizing them for making short-term cash gains for investors, at the expense of their workers, their customers, their products and services, and, ultimately, their long-term health. For Ed, these bosses (especially tech bosses) are the sociopaths who destroyed "the computer" (a stand-in for tech more generally).

I don't disagree at all. There is a direct, undeniable line from the ideas and conduct of tech bosses to the tech hellscape we live in today. A good read on this subject is Anil Dash's scorching post from yesterday, "How Tim Cook sold out Steve Jobs":

https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/

I find the Rot Economy hypothesis entirely compelling, but also incomplete. Ed's explaining why we should hate the players and why we should hate the game, but the enshittification thesis goes even further and explains why we need to hate the umpires – the policymakers, enforcers, economists and legal theorists who created the enshittogenic environment in which the Rot Economy took hold.

Some early reviews of Enshittification have expressed dissatisfaction with the book's "solutions" section, complaining that all the solutions are policy-oriented, and there's nothing suggested for us to do in our capacity as individual consumers:

https://pluralistic.net/2025/07/31/unsatisfying-answers/#systemic-problems

Those criticisms are correct: there is nothing we can do as individual consumers. Agonizing about your consumption choices will not fight enshittification any more than conscientiously sorting your recycling will end the climate emergency. Enshittification isn't caused by "lazy consumers" who choose "convenience" or are "too cheap to pay for online services":

https://pluralistic.net/2024/04/12/give-me-convenience/#or-give-me-death

The wellspring of enshittification isn't poor consumption choices, it's poor policy choices. The reason monsters are able to destroy our online lives isn't their personal moral failings, it's the system that rewards predatory, deceptive and unfair commercial practices and elevates their foremost practitioners to positions of power within firms:

https://pluralistic.net/2023/07/28/microincentives-and-enshittification/

And here's the kicker: we know where those policy choices came from!
The people who made these policy choices did so in living memory. They were warned at the time about the foreseeable consequences of their choices. They made those choices anyway. They faced zero consequences for doing so, even after every one of the prophesied horrors came to pass. Not only were they spared consequences for their actions, but they prospered as a result – they are revered as statesmen, lawyers, scholars and titans of economics.

As Trashfuture showrunner Riley Quinn often says, the curse of being a leftist is that you have object permanence – you actually remember the stuff that happened and how it happened. You don't live in an eternal now that has no causal relationship to the past.

It's not enough to hate the player, nor the game – we've got to remember the crooked umps who rigged the match. We have to say their names, because that's how we root out their terrible ideas and ensure that our policy interventions make real change. If Elon Musk OD'ed on ketamine tomorrow, there'd be ten Big Balls who'd tear each other's throats out in the ensuing succession fight, and the next guy would be just as stupid, racist, and authoritarian. Musk, Cook, Zuck, Pichai, Nadella, Larry Ellison – they're just filling the monster-shaped holes that policy-makers installed in our society.

Start with Robert Bork, the jurist who championed the "consumer welfare" theory of antitrust, which promotes monopolies as efficient and counsels policymakers not to punish companies that take over markets, because the only way to really dominate a market is to be so good that everyone chooses your products and services. Wouldn't it just be perverse to use public funds to shut down the public's favorite companies?

Bork was a virulent racist, a Nixonite criminal, and he was dead wrong about the law and the economics of monopoly:

https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/

Bork's legacy of pro-monopoly advocacy is, unsurprisingly, monopolies. Monopolies that make everything more expensive and worse: from athletic shoes to microchips, glass bottles to pharmaceuticals, pro wrestling to eyeglasses:

https://www.openmarketsinstitute.org/learn/monopoly-by-the-numbers

These monopolies did not arise because of the iron laws of economics. They are not the product of the great forces of history. They are the direct and undeniable consequence of Robert Bork convincing the world's governments to embrace his bullshit, pro-monopoly policies.

Satan took Bork to hell in 2012, but you know who's still with us? Bruce Lehman. Bruce Lehman was Bill Clinton's copyright czar, the man who, in his own words, "did an end-run around Congress" by getting a UN treaty passed that obliged its signatories to ban reverse engineering:

https://www.cbc.ca/listen/cbc-podcasts/1353-the-naked-emperor/episode/16145640-ctrl-ctrl-ctrl

Lehman used the treaty to get Congress to pass the Digital Millennium Copyright Act (DMCA), whose section 1201 made it a felony to break DRM. Bruce Lehman is why farmers can't fix their own tractors, hospitals can't fix their own ventilators, and your mechanic can't fix your car. He's why, when the manufacturer of your artificial eyes bricks a computer that is permanently wired to your nervous system, no one else can revive it:

https://pluralistic.net/2022/12/12/unsafe-at-any-speed/

Bruce Lehman is why you can't use the apps of your choosing on your phone or games console. He's why we can't preserve beloved old video games.
He's why Apple and Google get to steal 30 cents out of every dollar you send to a performer, software author, or creator through an app:

https://pluralistic.net/2025/05/01/its-not-the-crime/#its-the-coverup

Yeah, Tim Cook is a venal billionaire who owes his wealth to the Chinese sweatshops of iPhone City, where they had to install suicide nets to catch the workers who'd rather end it all than work another day for Tim Apple, but Tim Cook's power over those workers is owed to Bruce Lehman and Robert Bork.

Then there's the ISP sector, whose Net Neutrality violations and underinvestment mean that people who live in the country where the internet was invented have some of the slowest, most expensive internet in the world. Big ISP bosses are some of the worst people on Earth. Take Thomas Rutledge, who was CEO of Charter/Spectrum when covid broke out. At the time, Rutledge was America's highest-paid CEO. He dictated that his back-office staff could not work from home (imagine a telco boss who doesn't believe in telework!), and those back-offices all turned into super-spreader sites. Rutledge's field workers – the people who came to our homes and upgraded our internet so we could work from home – did not get PPE or danger pay. Instead, they got vouchers exclusively redeemable at restaurants that had shut down during the pandemic:

https://pluralistic.net/2020/04/22/filternet/#thomas-rutledge-murderer

Fuck Thomas Rutledge and may his name be a curse forever. But the reason Thomas Rutledge – and all the other terrible telco bosses – were able to reap millions by supplying us with dogshit internet while literally murdering their employees was that Trump's FCC chairman, an ex-Verizon lawyer named Ajit Pai, let them get away with it:

https://pluralistic.net/2021/02/12/ajit-pai/#pai

Ajit Pai engaged in some of the most flagrant cheating ever seen in American regulation (prior to Jan 20, 2025, at least). When he decided to kill Net Neutrality, he accepted obviously fraudulent comments into the official record, including one million identical comments from @pornhub.com email addresses, as well as millions of comments whose return addresses were taken from darknet data-dumps, including the email addresses of dead people and of sitting US senators who supported Net Neutrality:

https://pluralistic.net/2023/11/10/digital-redlining/#stop-confusing-the-issue-with-relevant-facts

Pai – and his co-conspirators – are the umps who rigged the game. Hate Thomas Rutledge, to be sure, but to prevent people like Rutledge from gaining power over your digital life in future, you must remember Ajit Pai with the special form of white-hot rage that keeps people like him from ever making policy decisions again.

Then there's Canada's hall of shame, which is full of monsters. Two of my least favorite are James Moore and Tony Clement, who, as ministers under Stephen Harper, rammed through a Canadian version of the DMCA, 2012's Bill C-11, despite their own consultation, which found that Canadians overwhelmingly rejected the idea:

https://pluralistic.net/2024/11/15/radical-extremists/#sex-pest

Clement (now a disgraced sex-pest) and Moore (still accepted into polite society as a corporate lawyer) are the reason that Canada's Right to Repair and interop laws are dead on arrival.
They're also why Canada can't retaliate against Trump's tariffs by jailbreaking US products, making everything cheaper for Canadians and birthing new, global Canadian tech businesses:

https://pluralistic.net/2025/01/15/beauty-eh/#its-the-only-war-the-yankees-lost-except-for-vietnam-and-also-the-alamo-and-the-bay-of-ham

In Europe, there's Axel Voss, the man behind 2019's "filternet" proposal, which requires tech platforms to spend hundreds of millions of euros on copyright filters that use AI to process everything posted to the public internet in Europe and block anything the AI thinks is "copyrighted":

https://memex.craphound.com/2019/03/26/article-13-will-wreck-the-internet-because-swedish-meps-accidentally-pushed-the-wrong-voting-button/

For years, Voss maintained that none of this was true, that there would be no filters, and dismissed his critics as hysterical fools:

https://memex.craphound.com/2019/04/03/after-months-of-insisting-that-article13-doesnt-require-filters-top-eu-commissioner-says-article-13-requires-filters/

But then, after his law passed, he admitted he "didn't know what he was voting for":

https://memex.craphound.com/2018/09/14/father-of-the-catastrophic-copyright-directive-reveals-he-didnt-know-what-he-was-voting-for/

Fuck the media lobbyists who spent hundreds of millions of euros to push this catastrophic law through:

https://memex.craphound.com/2018/12/13/clash-of-the-corporate-titans-whos-spending-what-in-europes-copyright-directive-battle/

But especially and forever, fuck Axel Voss, the policymaker who helped turn those corporate bribes into policy.

Ed Zitron is right to hate the people who implement the Rot Economy for what they did to the computer. But those people are only doing what policymakers let them do. Corporate monsters thrive in an enshittogenic environment. But political monsters are the ones who create that enshittogenic environment. They're the ones who are terraforming our planet to sideline human life and replace it with the immortal colony organisms we call "limited liability corporations."

Hey look at this (permalink)

Dwayne Johnson Will Play the Chicken Man in ‘Lizard Music’ https://gizmodo.com/dwayne-johnson-to-next-play-the-chicken-man-in-lizard-music-2000655464
Qualifying Conditions https://www.jwz.org/blog/2025/09/qualifying-conditions/
Cindy Cohn Is Leaving the EFF, but Not the Fight for Digital Rights https://www.wired.com/story/eff-cindy-cohn-stepping-down/
Five technological achievements! (That we won’t see any time soon.) https://crookedtimber.org/2025/09/09/five-technological-achievements-that-we-wont-see-any-time-soon/
A notional design studio. https://ethanmarcotte.com/wrote/a-notional-design-studio/

Object permanence (permalink)

#20yrsago Anti-trusted-computing video https://www.lafkon.net/tc/
#10yrsago Library offers Tor nodes; DHS tells them to stop https://www.propublica.org/article/library-support-anonymous-internet-browsing-effort-stops-after-dhs-email
#10yrsago Ashley Madison’s passwords were badly encrypted, 15 million+ passwords headed for the Web https://arstechnica.com/information-technology/2015/09/ashley-madison-password-crack-could-spell-trouble-across-the-internet/
#10yrsago Heathrow security insists that ice is a liquid https://gizmodo.com/what-happens-if-you-take-frozen-liquids-through-airport-1729772148
#10yrsago DoJ says it will consider jailing executives who order corporate crimes https://www.nytimes.com/2015/09/10/us/politics/new-justice-dept-rules-aimed-at-prosecuting-corporate-executives.html
#10yrsago Government-run egg board waged high-price, secret PSYOPS war on vegan egg-replacement https://www.theguardian.com/business/2015/sep/06/usda-american-egg-board-paid-bloggers-hampton-creek
#10yrsago Using sandwiches to teach the Socratic method https://web.archive.org/web/20140810204054/https://medium.com/@kmikeym/is-this-a-sandwich-50b1317eb3f5
#10yrsago Fury Road cosplay: amputated arm edition https://web.archive.org/web/20150911194228/http://www.tor.com/2015/09/09/afternoon-roundup-furiosa-real-prosthetic-arm-cosplay/
#5yrsago Kids' smart-watches unsafe at any speed https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#digital-parenting
#5yrsago Georgia voter suppression, quantified https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#georgia-suppression
#5yrsago The rise and rise of one of NYPD's dirtiest cops https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#50a
#5yrsago Inaudible https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#audible-exclusive

ML for SWEs 66: Safety is a fundamental AI engineering requirement

The debate about prioritizing speed or safety is over and reality has made the decision for us.
