Often when designing systems, we aim for perfection in things like consistency of data, availability, latency, and more. The hardest part of system design is that it's difficult (if not impossible) to design systems that have perfect consistency, perfect availability, incredibly low latency, and incredibly high throughput, all at the same time. Instead, when we approach system design, it's best to treat each of these properties as points on different axes that we balance to find the "right fit" for the application we're supporting. I recently made some major tradeoffs in the design of Bluesky's Following Feed/Timeline to improve the performance of writes at the cost of consistency, in a way that doesn't negatively affect users and reduced P99s by over 96%.

Timeline Fanout

When you make a post on Bluesky, your post is indexed by our systems and persisted to a database where we can fetch it to hydrate and serve in API responses. Additionally, a reference to your post is "fanned out" to...
6 months ago



More from exist

Emoji Griddle
10 months ago 30 votes
Jetstream: Shrinking the AT Proto Firehose by >99%

Bluesky recently saw a massive spike in activity in response to Brazil's ban of Twitter. As a result, the AT Proto event firehose provided by Bluesky's Relay at bsky.network has increased in volume by a huge amount. The average event rate during this surge increased by ~1,300%. Before this new surge in activity, the firehose would produce around 24 GB/day of traffic. After the surge, this volume jumped to over 232 GB/day! Keeping up with the full, verified firehose quickly became less practical on cheap cloud infrastructure with metered bandwidth. To help reduce the burden of operating bots, feed generators, labelers, and other non-verifying AT Proto services, I built Jetstream as an alternative, lightweight, filterable JSON firehose for AT Proto.

How the Firehose Works

The AT Proto firehose is a mechanism used to keep verified, fully synced copies of the repos of all users. Since repos are represented as Merkle Search Trees, each firehose event contains an update to the user's MST which includes all the changed blocks (nodes in the path from the root to the modified leaf). The root of this path is signed by the repo owner, and a consumer can keep their copy of the repo's MST up-to-date by applying the diff in the event. For a more in-depth explanation of how Merkle Trees are constructed, check out this explainer.

Practically, this means that for every small JSON record added to a repo, we also send along some number of MST blocks (which are content-addressed hashes and thus very information-dense) that are mostly useful for consumers attempting to keep a fully synced, verified copy of the repo. You can think of this as the difference between cloning a git repo vs. just grabbing the latest version of the files without the .git folder. In this case, the firehose effectively streams the diffs for the repository with commits, signatures, and metadata, which is inherently heavier than a point-in-time checkout of the repo.

Because firehose events with repo updates are signed by the repo owner, they allow a consumer to process events from any operator without having to trust the messenger. This is the "Authenticated" part of the Authenticated Transfer (AT) Protocol and is crucial to the correct functioning of the network. That being said, of the hundreds of consumers of Bluesky's production Relay, >90% of them are building feeds, bots, and other tools that don't keep full copies of the entire network and don't verify MST operations at all. For these consumers, all they actually process is the JSON records created, updated, and deleted in each event. If consumers already trust the provider to do validation on their end, they could get by with a much more lightweight data stream.

How Jetstream Works

Jetstream is a streaming service that consumes an AT Proto com.atproto.sync.subscribeRepos stream and converts it into lightweight, friendly JSON. If you want to try it out yourself, you can connect to my public Jetstream instance and view all posts on Bluesky in real time:

$ websocat "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post"

Note: the above instance is operated by Bluesky PBC and is free to use; more instances are listed in the official repo Readme.

Jetstream converts the CBOR-encoded MST blocks produced by the AT Proto firehose and translates them into JSON objects that are easier to interface with using standard tooling available in programming languages.
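For example, here is a minimal Go consumer of that JSON stream (a sketch only; it assumes the github.com/gorilla/websocket client package, but any WebSocket library would work):

package main

import (
    "log"

    "github.com/gorilla/websocket"
)

func main() {
    url := "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post"
    conn, _, err := websocket.DefaultDialer.Dial(url, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    for {
        // Each websocket message is a single JSON-serialized Jetstream event.
        _, msg, err := conn.ReadMessage()
        if err != nil {
            log.Fatal(err)
        }
        log.Println(string(msg))
    }
}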
Since Repo MSTs only contain records in their leaf nodes, this means Jetstream can drop all of the blocks in an event except for those of the leaf nodes, typically leaving only one block per event. In reality, this means that Jetstream's JSON firehose is nearly 1/10 the size of the full protocol firehose for the same events, but lacks the verifiability and signatures included in the protocol-level firehose. Jetstream events end up looking something like:

{
  "did": "did:plc:eygmaihciaxprqvxpfvl6flk",
  "time_us": 1725911162329308,
  "type": "com",
  "commit": {
    "rev": "3l3qo2vutsw2b",
    "type": "c",
    "collection": "app.bsky.feed.like",
    "rkey": "3l3qo2vuowo2b",
    "record": {
      "$type": "app.bsky.feed.like",
      "createdAt": "2024-09-09T19:46:02.102Z",
      "subject": {
        "cid": "bafyreidc6sydkkbchcyg62v77wbhzvb2mvytlmsychqgwf2xojjtirmzj4",
        "uri": "at://did:plc:wa7b35aakoll7hugkrjtf3xf/app.bsky.feed.post/3l3pte3p2e325"
      }
    },
    "cid": "bafyreidwaivazkwu67xztlmuobx35hs2lnfh3kolmgfmucldvhd3sgzcqi"
  }
}

Each event lets you know the DID of the repo it applies to, when it was seen by Jetstream (a time-based cursor), and up to one updated repo record as serialized JSON. Check out this 10 second CPU profile of Jetstream serving 200k evt/sec to a local consumer:

By dropping the MST and verification overhead and consuming from a relay we trust, we've reduced the size of a firehose of all events on the network from 232 GB/day to ~41 GB/day, but we can do better.

Jetstream and zstd

I recently read a great engineering blog from Discord about their use of zstd to compress websocket traffic to/from their Gateway service and client applications. Since Jetstream emits marshalled JSON through the websocket for developer-friendliness, I figured it might be a neat idea to see if we could get further bandwidth reduction by employing zstd to compress events we send to consumers. zstd has two basic operating modes, "simple" mode and "streaming" mode.

Streaming Compression

At first glance, streaming mode seems like it'd be a great fit. We've got a websocket connection with a consumer, and streaming mode allows the compression to get more efficient over the lifetime of the connection. I went and implemented a streaming compression version of Jetstream where a consumer can request compression when connecting and will get zstd compressed JSON sent as binary messages over the socket instead of plaintext.

Unfortunately, this had a massive impact on Jetstream's server-side CPU utilization. We were effectively compressing every message once per consumer as part of their streaming session. This was not a scalable approach to offering compression on Jetstream.

Additionally, Jetstream stores a buffer of the past 24 hours (configurable) of events on disk in PebbleDB to allow consumers to replay events before getting transitioned into live-tailing mode. Jetstream stores serialized JSON in the DB, so playback is just shuffling the bytes into the websocket without having to round-trip the data into a Go struct. When we layer in streaming compression, playback becomes significantly more expensive because we have to compress outgoing events on-the-fly for a consumer that's catching up. In real numbers, this increased CPU usage of Jetstream by 23% while lowering the throughput of playback from ~200k evt/sec to ~28k evt/sec for a single local consumer.

When in streaming mode, we can't leverage the bytes we compress for one consumer and reuse them for another consumer because zstd's streaming context window may not be in sync between the two consumers.
They haven't received exactly the same data in the session, so the clients on the other end don't have their state machines in the same state. Since streaming mode's primary advantage is eventually better efficiency as the encoder learns about the data, what if we just taught the encoder about the data at the start and compressed each message statelessly?

Dictionary Mode

zstd offers a mechanism for initializing an encoder/decoder with pre-optimized settings by providing a dictionary trained on a sample of the data you'll be encoding/decoding. Using this dictionary, zstd essentially uses its smallest encoded representations for the most frequently seen patterns in the sample data. In our case, where we're compressing serialized JSON with a common event shape and lots of common property names, training a dictionary on a large number of real events should allow us to represent the common elements among messages in the smallest number of bytes.

For take two of Jetstream with zstd, let's use a single encoder for the whole service that utilizes a custom dictionary trained on 100,000 real events. We can use this encoder to compress every event as we see it, before persisting and emitting it to consumers. Now we end up with two copies of every event: one that's just serialized JSON, and one that's statelessly compressed with zstd using our dictionary. Any consumers that want compression can have a copy of the dictionary on their end to initialize a decoder, then when we broadcast the shared compressed event, all consumers can read it without any state or context issues. This requires the consumers and server to have a pre-shared dictionary, which is a major drawback of this implementation but good enough for our purposes.

That leaves the problem of event playback for compression-enabled clients. An easy solution here is to just store the compressed events as well! Since we're only sticking the JSON records into our PebbleDB, the actual size of the 24 hour playback window is <8GB with sstable compression. If we store a copy of the JSON serialized event and a copy of the zstd compressed event, this will, at most, double our storage requirements. Then during playback, if the consumer requests compression, we can just shuffle bytes out of the compressed version of the DB into their socket instead of having to move it through a zstd encoder.

Savings

Running with a custom dictionary, I was able to get the average Jetstream event down from 482 bytes to just 211 bytes (~0.44 compression ratio). Jetstream allows us to live tail all posts on Bluesky as they're posted for as little as ~850 MB/day, and we could keep up with all events moving through the firehose during the Brazil Twitter Exodus weekend for 18 GB/day (down from 232 GB/day).

With this scheme, Jetstream is required to compress each event only once before persisting it to disk and emitting it to connected consumers. The CPU impact of these changes is significant in proportion to Jetstream's incredibly light load, but it's a flat cost we pay once no matter how many consumers we have.

(CPU profile from a 30 second pprof sample with 12 consumers live-tailing Jetstream)

Additionally, with Jetstream's shared buffer broadcast architecture, we keep memory allocations incredibly low, and the cost per consumer on CPU and RAM is trivial. In the allocation profile below, more than 80% of the allocations are used to consume the full protocol firehose.
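To make the dictionary approach concrete, here is a minimal sketch of stateless, per-message compression with a pre-shared dictionary. It assumes the github.com/klauspost/compress/zstd package and a hypothetical dictionary file name; the actual Jetstream implementation may differ:

package main

import (
    "log"
    "os"

    "github.com/klauspost/compress/zstd"
)

func main() {
    // Hypothetical path to a dictionary trained on real Jetstream events.
    dict, err := os.ReadFile("jetstream.dict")
    if err != nil {
        log.Fatal(err)
    }

    // One shared encoder for the whole service: each event is compressed once.
    enc, err := zstd.NewWriter(nil, zstd.WithEncoderDict(dict))
    if err != nil {
        log.Fatal(err)
    }
    compressed := enc.EncodeAll([]byte(`{"did":"did:plc:example","commit":{}}`), nil)

    // Consumers initialize a decoder with the same pre-shared dictionary.
    dec, err := zstd.NewReader(nil, zstd.WithDecoderDicts(dict))
    if err != nil {
        log.Fatal(err)
    }
    event, err := dec.DecodeAll(compressed, nil)
    if err != nil {
        log.Fatal(err)
    }
    log.Println(string(event))
}

Because EncodeAll and DecodeAll are stateless per call, the same compressed bytes can be broadcast to every consumer that holds the dictionary.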
The total resident memory of Jetstream sits below 16MB, 25% of which is actually consumed by the new zstd dictionary.

To bring it all home, here's a screenshot from the dashboard of my public Jetstream instance serving 12 consumers all with various filters and compression settings, running on a $5/mo OVH VPS.

At our new baseline firehose activity, a consumer of the protocol-level firehose would require downloading ~3.16TB/mo to keep up. A Jetstream consumer getting all created, updated, and deleted records without compression enabled would require downloading ~400GB/mo to keep up. A Jetstream consumer that only cares about posts and has zstd compression enabled can get by on as little as ~25.5GB/mo, a >99% reduction compared to the full-weight firehose.

Feel free to join the conversation about Jetstream and zstd on Bluesky.

11 months ago 34 votes
How HLS Works

Over the past few weeks, I've been building out server-side short video support for Bluesky. The major aim of this feature is to support short (90 second max) video streaming at a quality that doesn't cost an arm and a leg for us to provide for free. In order to stay within these constraints, we're considering making use of a video CDN that can bear the brunt of the bandwidth required to support Video-on-Demand streaming.

While the CDN is a pretty fully-featured product, we want to avoid too much vendor lock-in and provide some enhancements to our streaming platform that require extending their offering and getting creative with video streaming protocols. Some of the things we'd like to be able to do that don't work out-of-the-box are:

- Track view counts, viewer sessions, and duration viewed to provide better feedback for video performance.
- Provide dynamic closed-caption support with the flexibility to automate them in the future.
- Store a transcoded version of source files somewhere durable to provide a "source of truth" for videos when needed.
- Append a "trailer" to the end of video streams for some branding in a TikTok-esque 3-second snippet.

In this post I'll be focusing on the HLS-related features above, namely view/duration accounting, closed captions, and trailers.

HLS is Just a Bunch of Text files

HTTP Live Streaming (HLS) is a standard established by Apple in 2009 that allows for adaptive-bitrate live and Video-on-Demand (VOD) streaming. For the purposes of this blog post, I'll restrict my explanations to how HLS VOD streaming works. A player that implements the HLS protocol is capable of dynamically adjusting the quality of a streamed video based on network conditions. Additionally, a server that implements the HLS protocol should provide one or more variants of a media stream which accommodate varying network qualities to allow for graceful degradation of stream quality without stopping playback.

HLS implements this by producing a series of plaintext (.m3u8) "playlist" files that tell the player what bitrates and resolutions the server provides so that the player can decide which variant it should stream. HLS differentiates between two kinds of "playlist" files: Master Playlists, and Media Playlists.

Master Playlists

A Master Playlist is the first file fetched by your video player. It contains a series of variants which point to child Media Playlists. It also describes the approximate bitrate of the variant sources and the codecs and resolutions used by those sources.

$ curl https://my.video.host.com/video_15/playlist.m3u8
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360
360p/video.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720
720p/video.m3u8

In the above file, the key things to notice are the RESOLUTION parameters and the {res}/video.m3u8 links. Your media player will generally start with the lowest resolution version before jumping up to higher resolutions once the network speed between you and the server is dialed in. The links in this file are pointers to Media Playlists, generally as relative paths from the Master Playlist such that, if we wanted to grab the 720p Media Playlist, we'd navigate to: https://my.video.host.com/video_15/720p/video.m3u8.

A Master Playlist can also contain multi-track audio directives and directives for closed-captions, but for now let's move on to the Media Playlist.
Media Playlists A Media Playlist is yet another plaintext file that provides your video player with two key bits of data: a list of media Segments (encoded as .ts video files) and headers for each Segment that tell the player the runtime of the media. $ curl https://my.video.host.com/video_15/720p/video.m3u8 #EXTM3U #EXT-X-VERSION:3 #EXT-X-PLAYLIST-TYPE:VOD #EXT-X-MEDIA-SEQUENCE:0 #EXT-X-TARGETDURATION:4 #EXTINF:4.000, video0.ts #EXTINF:4.000, video1.ts #EXTINF:4.000, video2.ts #EXTINF:4.000, video3.ts #EXTINF:4.000, video4.ts #EXTINF:2.800, video5.ts This Media Playlist describes a video that’s 22.8 seconds long (5 x 4-second Segments + 1 x 2.8-second Segment). The playlist describes a VOD piece of media, meaning we know this playlist contains the entirety of the media the player needs. The TARGETDURATION tells us the maximum length of each Segment so the player knows how many Segments to buffer ahead of time. During live streaming, that also lets the player know how frequently to refresh the playlist file to discover new Segments. Finally the EXTINF headers for each Segment indicate the duration of the following .ts Segment file and the relative paths of the video#.ts tell the player where to load the actual media files from. Where’s the Actual Media? At this point, the video player has loaded two .m3u8 playlist files and got lots of metadata about how to play the video but it hasn’t actually loaded any media files. The .ts files referenced in the Media Playlist are where the real media is, so if we wanted to control the playlists but let the CDN handle serving actual media, we can just redirect those video#.ts requests to our CDN. .ts files are Transport Stream MPEG-2 encoded short media files that can contain video or audio and video. Tracking Views To track views of our HLS streams, we can leverage the fact that every video player must first load the Master Playlist. When a user requests the Master Playlist, we can modify the results dynamically to provide a SessionID to each response and allow us to track the user session without cookies or headers: #EXTM3U #EXT-X-VERSION:3 #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360 360p/video.m3u8?session_id=12345 #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720 720p/video.m3u8?session_id=12345 Now when their video player fetches the Media Playlists, it’ll include a query-string that we can use to identify the streaming session, ensuring we don’t double-count views on the video and can track which Segments of video were loaded in the session. #EXTM3U #EXT-X-VERSION:3 #EXT-X-PLAYLIST-TYPE:VOD #EXT-X-MEDIA-SEQUENCE:0 #EXT-X-TARGETDURATION:4 #EXTINF:4.000, video0.ts?session_id=12345&duration=4 #EXTINF:4.000, video1.ts?session_id=12345&duration=4 #EXTINF:4.000, video2.ts?session_id=12345&duration=4 #EXTINF:4.000, video3.ts?session_id=12345&duration=4 #EXTINF:4.000, video4.ts?session_id=12345&duration=4 #EXTINF:2.800, video5.ts?session_id=12345&duration=2.8 Finally when the video player fetches the media Segment files, we can measure the Segment view before we redirect to our CDN with a 302, allowing us to know the amount of video-seconds loaded in the session and which Segments were loaded. This method has limitations, namely that a media player loading a segment doesn’t necessarily mean it showed that segment to the viewer, but it’s the best we can do without an instrumented media player. 
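As a rough illustration of the playlist rewriting described above, here is a minimal Go sketch that appends a session_id to every URI line in an .m3u8 playlist (the function name and approach are illustrative, not Bluesky's actual implementation):

package main

import (
    "fmt"
    "strings"
)

// addSessionID appends a session_id query parameter to every URI line in a
// playlist (lines that don't start with '#'), leaving directives untouched.
func addSessionID(playlist, sessionID string) string {
    lines := strings.Split(playlist, "\n")
    for i, line := range lines {
        trimmed := strings.TrimSpace(line)
        if trimmed == "" || strings.HasPrefix(trimmed, "#") {
            continue
        }
        sep := "?"
        if strings.Contains(trimmed, "?") {
            sep = "&"
        }
        lines[i] = trimmed + sep + "session_id=" + sessionID
    }
    return strings.Join(lines, "\n")
}

func main() {
    master := "#EXTM3U\n360p/video.m3u8\n720p/video.m3u8"
    fmt.Println(addSessionID(master, "12345"))
}

The same rewrite applies to Media Playlists, where the query string carries through to the .ts Segment requests.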
Adding Subtitles Subtitles are included in the Master Playlist as a variant and then are referenced in each of the video variants to let the player know where to load subs from. #EXTM3U #EXT-X-VERSION:3 #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="en_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="en",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/en.m3u8" #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360,SUBTITLES="subs" 360p/video.m3u8 #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720,SUBTITLES="subs" 720p/video.m3u8 Just like with the video Media Playlists, we need a Media Playlist file for the subtitle track as well so that the player knows where to load the source files from and what duration of the stream they cover. $ curl https://my.video.host.com/video_15/subtitles/en.m3u8 #EXTM3U #EXT-X-VERSION:3 #EXT-X-MEDIA-SEQUENCE:0 #EXT-X-TARGETDURATION:22.8 #EXTINF:22.800, en.vtt In this case, since we’re only serving a short video, we can just provide a single Segment that points at a WebVTT subtitle file encompassing the entire duration of the video. If you crack open the en.vtt file you’ll see something like: $ curl https://my.video.host.com/video_15/subtitles/en.vtt WEBVTT 00:00.000 --> 00:02.000 According to all known laws of aviation, 00:02.000 --> 00:04.000 there is no way a bee should be able to fly. 00:04.000 --> 00:06.000 Its wings are too small to get its fat little body off the ground. ... The media player is capable of reading WebVTT and presenting the subtitles at the right time to the viewer. For longer videos you may want to break up your VTT files into more Segments and update the subtitle Media Playlist accordingly. To provide multiple languages and versions of subtitles, just add more EXT-X-MEDIA:TYPE=SUBTITLES lines to the Master Playlist and tweak the NAME, LANGUAGE (if different), and URI of the additional subtitle variant definitions. #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="en_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="en",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/en.m3u8" #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="fr_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="fr",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/fr.m3u8" #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="ja_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="ja",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/ja.m3u8" Appending a Trailer For branding purposes (and in other applications, for advertising purposes), it can be helpful to insert Segments of video into a playlist to change the content of the video without requiring the content be appended to and re-encoded with the source file. 
Thankfully, HLS allows us to easily insert Segments into the Media Playlist using this one neat trick:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:4
#EXTINF:4.000,
video0.ts
#EXTINF:4.000,
video1.ts
#EXTINF:4.000,
video2.ts
#EXTINF:4.000,
video3.ts
#EXTINF:4.000,
video4.ts
#EXTINF:2.800,
video5.ts
#EXT-X-DISCONTINUITY
#EXTINF:3.337,
trailer0.ts
#EXTINF:1.201,
trailer1.ts
#EXTINF:1.301,
trailer2.ts
#EXT-X-ENDLIST

In this Media Playlist we use HLS's EXT-X-DISCONTINUITY header to let the video player know that the following Segments may be in a different bitrate, resolution, and aspect-ratio than the preceding content. Once we've provided the discontinuity header, we can add more Segments just like normal that point at a different media source broken up into .ts files.

Remember, HLS allows us to use relative or absolute paths here, so we could provide a full URL for these trailer#.ts files, or virtually route them so they can retain the path context of the currently viewed video. Note that we don't strictly need to provide the discontinuity header here, and we could also name the trailer files something like video{6-8}.ts if we wanted to, but for clarity and proper player behavior, it's best to use the discontinuity header if your trailer content doesn't match the bitrate and resolution of the other video Segments.

When the video player goes to play this media, it will continue from video5.ts to trailer0.ts without missing a beat, making it appear as if the trailer is part of the original video. This approach allows us to dynamically change the contents of the trailer for all videos, heavily cache the trailer .ts Segment files for performance, and avoid having to encode the trailer onto the end of every video source file.

Conclusion

At the end of the day, we've now got a video streaming service capable of tracking views and watch session durations, dynamic closed caption support, and branded trailers to help grow the platform. HLS is not a terribly complex protocol. The vast majority of it is human-readable plaintext files, and it's easy to inspect in the wild to see how it's used in production. When I started this project, I knew next to nothing about the protocol but was able to download some .m3u8 files and get digging to discover how the protocol worked, then build my own implementation of an HLS server to accommodate the video streaming needs of Bluesky.

To learn more about HLS, you can check out the official RFC here, which describes all the features discussed above and more. I hope this post encourages you to go explore other protocols you use every day by poking at them in the wild, downloading the files your browser interprets for you, and figuring out how simple some of these apparently "complex" systems are.

If you're interested in solving problems like these, take a look at our open Job Recs. If you have any questions about HLS, Bluesky, or other distributed, @scale social media infrastructure, you can find me on Bluesky here and you can discuss this post here.

a year ago 31 votes
An entire Social Network in 1.6GB (GraphD Part 2)

In Part 1 of this series, we tried to answer the question “who do you follow who also follows user B” in Bluesky, a social network with millions of users and hundreds of millions of follow relationships. At the conclusion of the post, we’d developed an in-memory graph store for the network that uses HashMaps and HashSets to keep track of the followers of every user and the set of users they follow, allowing bidirectional lookups, intersections, unions, and other set operations for combining social graph data. I received some helpful feedback after that post where several people pointed me towards Roaring Bitmaps as a potential improvement on my implementation. They were right, Roaring Bitmaps would be an excellent fit for my Graph service, GraphD, and could also provide me with a much needed way to quickly persist and load the Graph data to and from disk on startup, hopefully reducing the startup time of the service. What are Bitmaps? If you just want to dive into the Roaring Bitmap spec, you can read the paper here, but it might be easier to first talk about bitmaps in general. You can think of a bitmap as a vector of one-bit values (like booleans) that let you encode a set of integer values. For instance, say we have 10,000 users on our website and want to keep track of which users have validated their email addresses. We could do this by creating a list of the uint32 user IDs of each user, in which case if all 10,000 users have validated their emails we’re storing 10k * 32 bits = 40KB. Or, we could create a vector of single-bit values that’s 10,000 bits long (10k / 8 = 1.25KB), then if a user has confirmed their email we can set the value at the index of their UID to 1. If we want to create a list of all the UIDs of validated accounts, we can walk the vector and record the index of each non-zero bit. If we want to check if user n has validated their email, we can do a O(1) lookup in the bitmap by loading the bit at index n and checking if it’s set. When Bitmaps get Big and Sparse Now when talking about our social network problem, we’re dealing with a few more than 10,000 UIDs. We need to keep track of 5.5M users and whether or not the user follows or is followed by any of the other 5.5M users in the network. To keep a bitmap of “People who follow User A”, we’re going to need 5.5M bits which would require (5.5M / 8) ~687KB of space. If we wanted to keep bitmaps of “People who follow User A” and “People who User A follows”, we’d need ~1.37MB of space per user using a simple bitmap, meaning we’d need 5,500,000 * 1.37MB = ~7.5 Terabytes of space! Clearly this isn’t an improvement of our strategy from Part 1, so how can we make this more efficient? One strategy for compressing the bitmap is to take consecutive runs of 0’s or 1’s (i.e. 00001110000001) in the bitmap and turn them into a number. For instance if we had an account that followed only the last 100 accounts in our social network, the first 5,499,900 indices in our bitmap would be 0’s and so we could represent the bitmap by saying: 5,499,900 0's, then 100 1's which you notice I’ve written here in a lot fewer than 687KB and a computer could encode using two uint32 values plus two bits (one indicator bit for the state of each run) for a total of 66 bits. This strategy is called Run Length Encoding (RLE) and works pretty well but has a few drawbacks: mainly if your data is randomly and heavily populated, you may not have many consecutive runs (imagine a bitset where every odd bit is set and every even bit is unset). 
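As a quick illustration of the plain (uncompressed) bitmap described above, here is a minimal Go sketch that packs one bit per user ID into 64-bit words (illustrative only, not GraphD's actual code):

// Bitmap packs one bit per uint32 ID into 64-bit words.
type Bitmap struct {
    words []uint64
}

func NewBitmap(size uint32) *Bitmap {
    return &Bitmap{words: make([]uint64, (size+63)/64)}
}

// Set marks the given ID as present (e.g. "this user validated their email").
func (b *Bitmap) Set(uid uint32) {
    b.words[uid/64] |= 1 << (uid % 64)
}

// Contains checks a single ID in O(1) by testing one bit.
func (b *Bitmap) Contains(uid uint32) bool {
    return b.words[uid/64]&(1<<(uid%64)) != 0
}

A Roaring Bitmap exposes the same kind of Set/Contains interface but stores sparse and dense regions differently, as described next.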
Lookups are another problem: evaluating an RLE-compressed bitset requires walking the whole bitset to figure out where the index you care about lives in the compressed format. Thankfully there's a more clever way to compress bitmaps using a strategy called Roaring Bitmaps.

A brief description of the storage strategy for Roaring Bitmaps from the official paper is as follows:

We partition the range of 32-bit indexes ([0, n)) into chunks of 2^16 integers sharing the same 16 most significant digits. We use specialized containers to store their 16 least significant bits. When a chunk contains no more than 4096 integers, we use a sorted array of packed 16-bit integers. When there are more than 4096 integers, we use a 2^16-bit bitmap. Thus, we have two types of containers: an array container for sparse chunks and a bitmap container for dense chunks. The 4096 threshold insures that at the level of the containers, each integer uses no more than 16 bits.

These bitmaps are designed to support both densely and sparsely distributed data and can provide high performance binary set operations (and/or/etc.) operating on the containers within two or more bitsets in parallel. For more info on how Roaring Bitmaps work and some neat diagrams, check out this excellent primer on Roaring Bitmaps by Vikram Oberoi.

So, how does this help us build a better graph?

GraphD, Revisited with Roaring Bitmaps

Let's get back to our GraphD service, this time in Go instead of Rust. For each user we can keep track of a struct with two bitmaps:

type FollowMap struct {
    followingBM *roaring.Bitmap
    followingLk sync.RWMutex

    followersBM *roaring.Bitmap
    followersLk sync.RWMutex
}

Our FollowMap gives us a Roaring Bitmap for both the set of users we follow, and the set of users who follow us. Adding a Follow to the graph just requires that we set the right bits in both users' respective maps:

// Note I've removed locking code and error checks for brevity
func (g *Graph) addFollow(actorUID, targetUID uint32) {
    actorMap, _ := g.g.Load(actorUID)
    actorMap.followingBM.Add(targetUID)

    targetMap, _ := g.g.Load(targetUID)
    targetMap.followersBM.Add(actorUID)
}

Even better, if we want to compute the intersection of two sets (i.e. the people User A follows who also follow User B) we can do so in parallel:

// Note I've removed locking code and error checks for brevity
func (g *Graph) IntersectFollowingAndFollowers(actorUID, targetUID uint32) ([]uint32, error) {
    actorMap, ok := g.g.Load(actorUID)
    targetMap, ok := g.g.Load(targetUID)

    intersectMap := roaring.ParAnd(4, actorMap.followingBM, targetMap.followersBM)

    return intersectMap.ToArray(), nil
}

Storing the entire graph as Roaring Bitmaps in-memory costs us around 6.5GB of RAM and allows us to perform set intersections between moderately large sets (with hundreds of thousands of set bits) in under 500 microseconds while serving over 70k req/sec! And the best part of all? We can use Roaring's serialization format to write these bitmaps to disk or transfer them over the network.

Storing 164M Follows in 1.6GB

In the original version of GraphD, on startup the service would read a CSV file with an adjacency list of the (ActorDID, TargetDID) pairs of all follows on the network. This required creating a CSV dump of the follows table, pausing writes to the follows table, then bringing up the service and waiting 5 minutes for it to read the CSV file, intern the DIDs as uint32 UIDs, and construct the in-memory graph. This process is slow, pauses writes for 5 minutes, and every time our service restarts we have to do it all over again!
With Roaring Bitmaps, we're now given an easy way to effectively serialize a version of the in-memory graph that is many times smaller than the adjacency list CSV and many times faster to load. We can serialize the entire graph into a SQLite DB on the local machine where each row in a table contains:

(uid, DID, followers_bitmap, following_bitmap)

Loading the entire graph from this SQLite DB can be done in around ~20 seconds:

// Note I've removed locking code and error checks for brevity
rows, err := g.db.Query(`SELECT uid, did, following, followers FROM actors;`)
for rows.Next() {
    var uid uint32
    var did string
    var followingBytes []byte
    var followersBytes []byte

    rows.Scan(&uid, &did, &followingBytes, &followersBytes)

    followingBM := roaring.NewBitmap()
    followingBM.FromBuffer(followingBytes)

    followersBM := roaring.NewBitmap()
    followersBM.FromBuffer(followersBytes)

    followMap := &FollowMap{
        followingBM: followingBM,
        followersBM: followersBM,
        followingLk: sync.RWMutex{},
        followersLk: sync.RWMutex{},
    }

    g.g.Store(uid, followMap)
    g.setUID(did, uid)
    g.setDID(uid, did)
}

While the service is running, we can also keep track of the UIDs of actors who have added or removed a follow since the last time we saved the DB, allowing us to periodically flush changes to the on-disk SQLite only for bitmaps that have updated. Syncing our data every 5 seconds while tailing the production firehose takes 2ms and writes an average of only ~5MB to disk per flush.

The crazy part of this is, the on-disk representation of our entire follow network is only ~1.6GB! Because we're making use of Roaring's compressed serialized format, we can turn the ~6.5GB of in-memory maps into 1.6GB of on-disk data. Our largest bitmap, the followers of the bsky.app account with over 876k members, becomes ~500KB as a blob stored in SQLite.

So, to wrap up our exploration of Roaring Bitmaps for first-degree graph databases, we saw:

- A ~20% reduction in resident memory size compared to HashSets and HashMaps
- A ~84% reduction in the on-disk size of the graph compared to an adjacency list
- A ~93% reduction in startup time compared to loading from an adjacency list
- A ~66% increase in throughput of worst-case requests under load
- A ~59% reduction in p99 latency of worst-case requests under load

My next iteration on this problem will likely be to make use of DGraph's in-memory Serialized Roaring Bitmap library that allows you to operate on fully-compressed bitmaps, so there's no need to serialize and deserialize them when reading from or writing to disk. It probably results in significant memory savings as well!

If you're interested in solving problems like these, take a look at our open Backend Developer Job Rec. You can find me on Bluesky here, and you can chat about this post here.

a year ago 34 votes

More in AI

ML for SWEs 66: Safety is a fundamental AI engineering requirement

The debate about prioritizing speed or safety is over and reality has made the decision for us.

10 hours ago 4 votes
Pluralistic: Hate the player AND the game (10 Sep 2025)

Today's links
Hate the player AND the game: But above all, hate the crooked ump.
Hey look at this: Delights to delectate.
Object permanence: Library Tor nodes vs the DHS; Egg-board psyops; Fury Road amputation cosplay; NYPD's dirtiest cop.
Upcoming appearances: Where to find me.
Recent appearances: Where I've been.
Latest books: You keep readin' em, I'll keep writin' 'em.
Upcoming books: Like I said, I'll keep writin' 'em.
Colophon: All the rest.

Hate the player AND the game (permalink)

The epigram for my forthcoming book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It is a quote from Ed Zitron: "I hate them for what they've done to the computer" (Ed even recorded a little cameo of this for the audiobook): https://www.kickstarter.com/projects/doctorow/enshittification-the-drm-free-audiobook/

Ed's a smart and passionate guy, and this was definitely the quote to sum up the rage I felt as I wrote the book. Ed's got a whole theory of who "they" are and "what they did to the computer," which he calls "the Rot Economy": https://www.wheresyoured.at/the-rot-economy/

The Rot Economy describes the ideology of bosses, starting with monsters like GE's Jack Welch, who financialized companies, optimizing them for making short term cash gains for investors, at the expense of their workers, their customers, their products and services, and, ultimately, their long-term health. For Ed, these bosses (especially tech bosses) are the sociopaths who destroyed "the computer" (a stand-in for tech more generally).

I don't disagree at all. There is a direct, undeniable line from the ideas and conduct of tech bosses to the tech hellscape we live in today. A good read on this subject is Anil Dash's scorching post from yesterday, "How Tim Cook sold out Steve Jobs": https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/

I find the Rot Economy hypothesis entirely compelling, but also, incomplete. Ed's explaining why we should hate the players and why we should hate the game, but the enshittification thesis goes even further and explains why we need to hate the umpires – the policymakers, enforcers, economists and legal theorists who created the enshittogenic environment in which the Rot Economy took hold.

Some early reviews of Enshittification have expressed dissatisfaction with the book's "solutions" section, complaining that all the solutions are policy oriented, and there's nothing suggested for us to do in our capacity as individual consumers: https://pluralistic.net/2025/07/31/unsatisfying-answers/#systemic-problems

Those criticisms are correct: there is nothing we can do as individual consumers. Agonizing about your consumption choices will not fight enshittification any more than conscientiously sorting your recycling will end the climate emergency. Enshittification isn't caused by "lazy consumers" who choose "convenience" or are "too cheap to pay for online services": https://pluralistic.net/2024/04/12/give-me-convenience/#or-give-me-death

The wellspring of enshittification isn't poor consumption choices, it's poor policy choices. The reason monsters are able to destroy our online lives isn't their personal moral failings, it's the system that rewards predatory, deceptive and unfair commercial practices and elevates their foremost practitioners to positions of power within firms: https://pluralistic.net/2023/07/28/microincentives-and-enshittification/

And here's the kicker: we know where those policy choices came from!
The people who made these policy choices did so in living memory. They were warned at the time about the foreseeable consequences of their choices. They made those choices anyway. They faced zero consequences for doing so, even after every one of the prophesied horrors came to pass. Not only were they spared consequences for their actions, but they prospered as a result – they are revered as statesmen, lawyers, scholars and titans of economics.

As Trashfuture showrunner Riley Quinn often says, the curse of being a leftist is that you have object permanence – you actually remember the stuff that happened and how it happened. You don't live in an eternal now that has no causal relationship to the past.

It's not enough to hate the player, nor the game – we've got to remember the crooked umps who rigged the match. We have to say their names, because that's how we root out their terrible ideas and ensure that our policy interventions make real change. If Elon Musk OD'ed on ketamine tomorrow, there'd be ten Big Balls who'd tear each other's throats out in the ensuing succession fight, and the next guy would be just as stupid, racist, and authoritarian. Musk, Cook, Zuck, Pichai, Nadella, Larry Ellison – they're just filling the monster-shaped holes that policy-makers installed in our society.

Start with Robert Bork, the jurist who championed the "consumer welfare" theory of antitrust, which promotes monopolies as efficient and counsels policymakers not to punish companies that take over markets, because the only way to really dominate a market is to be so good that everyone chooses your products and services. Wouldn't it just be perverse to use public funds to shut down the public's favorite companies?

Bork was a virulent racist, a Nixonite criminal, and he was dead wrong about the law and the economics of monopoly: https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/

Bork's legacy of pro-monopoly advocacy is, unsurprisingly, monopolies. Monopolies that make everything more expensive and worse: from athletic shoes to microchips, glass bottles to pharmaceuticals, pro wrestling to eyeglasses: https://www.openmarketsinstitute.org/learn/monopoly-by-the-numbers

These monopolies did not arise because of the iron laws of economics. They are not the product of the great forces of history. They are the direct and undeniable consequence of Robert Bork convincing the world's governments to embrace his bullshit, pro-monopoly policies.

Satan took Bork to hell in 2012, but you know who's still with us? Bruce Lehman. Bruce Lehman was Bill Clinton's copyright czar, the man who, in his own words, "did an end-run around Congress" by getting a UN treaty passed that obliged its signatories to ban reverse engineering: https://www.cbc.ca/listen/cbc-podcasts/1353-the-naked-emperor/episode/16145640-ctrl-ctrl-ctrl

Lehman used the treaty to get Congress to pass the Digital Millennium Copyright Act (DMCA), and section 1201 of the DMCA made it a felony to break DRM. Bruce Lehman is why farmers can't fix their own tractors, hospitals can't fix their own ventilators, and your mechanic can't fix your car. He's why, when the manufacturer of your artificial eyes bricks a computer that is permanently wired to your nervous system, no one else can revive it: https://pluralistic.net/2022/12/12/unsafe-at-any-speed/

Bruce Lehman is why you can't use the apps of your choosing on your phone or games console. He's why we can't preserve beloved old video games.
He's why Apple and Google get to steal 30 cents out of every dollar you send to a performer, software author, or creator through an app: https://pluralistic.net/2025/05/01/its-not-the-crime/#its-the-coverup

Yeah, Tim Cook is a venal billionaire who owes his wealth to the Chinese sweatshops of iPhone City, where they had to install suicide nets to catch the workers who'd rather end it all than work another day for Tim Apple, but Tim Cook's power over those workers is owed to Bruce Lehman and Robert Bork.

Then there's the ISP sector, whose Net Neutrality violations and underinvestment mean that people who live in the country where the internet was invented have some of the slowest, most expensive internet in the world. Big ISP bosses are some of the worst people on Earth. Take Thomas Rutledge, who was CEO of Charter/Spectrum when covid broke out. At the time, Rutledge was America's highest-paid CEO. He dictated that his back-office staff could not work from home (imagine a telco boss who doesn't believe in telework!), and those back-offices all turned into super-spreader sites. Rutledge's field workers – the people who came to our homes and upgraded our internet so we could work from home – did not get PPE or danger pay. Instead, they got vouchers exclusively redeemable at restaurants that had shut down during the pandemic: https://pluralistic.net/2020/04/22/filternet/#thomas-rutledge-murderer

Fuck Thomas Rutledge and may his name be a curse forever. But the reason Thomas Rutledge – and all the other terrible telco bosses – were able to reap millions by supplying us with dogshit internet while literally murdering their employees was that Trump's FCC chairman, an ex-Verizon lawyer named Ajit Pai, let them get away with it: https://pluralistic.net/2021/02/12/ajit-pai/#pai

Ajit Pai engaged in some of the most flagrant cheating ever seen in American regulation (prior to Jan 20, 2025, at least). When he decided to kill Net Neutrality, he accepted obviously fraudulent comments into the official record, including one million identical comments from @pornhub.com email addresses, as well as millions of comments whose return addresses were taken from darknet data-dumps, including the email addresses of dead people and of sitting US senators who supported Net Neutrality: https://pluralistic.net/2023/11/10/digital-redlining/#stop-confusing-the-issue-with-relevant-facts

Pai – and his co-conspirators – are the umps who rigged the game. Hate Thomas Rutledge to be sure, but to prevent people like Rutledge from gaining power over your digital life in future, you must remember Ajit Pai with the special form of white-hot rage that keeps people like him from ever making policy decisions again.

Then there's Canada's hall of shame, which is full of monsters. Two of my least favorite are James Moore and Tony Clement, who, as ministers under Stephen Harper, rammed through a Canadian version of the DMCA, 2012's Bill C-11, despite their own consultation, which found that Canadians overwhelmingly rejected the idea: https://pluralistic.net/2024/11/15/radical-extremists/#sex-pest

Clement (now a disgraced sex-pest) and Moore (still accepted into polite society as a corporate lawyer) are the reason that Canada's Right to Repair and interop laws are dead on arrival.
They're also why Canada can't retaliate against Trump's tariffs by jailbreaking US products, making everything cheaper for Canadians and birthing new, global Canadian tech businesses: https://pluralistic.net/2025/01/15/beauty-eh/#its-the-only-war-the-yankees-lost-except-for-vietnam-and-also-the-alamo-and-the-bay-of-ham

In Europe, there's Axel Voss, the man behind 2019's "filternet" proposal, which requires tech platforms to spend hundreds of millions of euros for copyright filters that use AI to process everything posted to the public internet in Europe and block anything the AI thinks is "copyrighted": https://memex.craphound.com/2019/03/26/article-13-will-wreck-the-internet-because-swedish-meps-accidentally-pushed-the-wrong-voting-button/

For years, Voss maintained that none of this was true, that there would be no filters, and dismissed his critics as hysterical fools: https://memex.craphound.com/2019/04/03/after-months-of-insisting-that-article13-doesnt-require-filters-top-eu-commissioner-says-article-13-requires-filters/

But then, after his law passed, he admitted he "didn't know what he was voting for": https://memex.craphound.com/2018/09/14/father-of-the-catastrophic-copyright-directive-reveals-he-didnt-know-what-he-was-voting-for/

Fuck the media lobbyists who spent hundreds of millions of euros to push this catastrophic law through: https://memex.craphound.com/2018/12/13/clash-of-the-corporate-titans-whos-spending-what-in-europes-copyright-directive-battle/

But especially and forever, fuck Axel Voss, the policymaker who helped turn those corporate bribes into policy.

Ed Zitron is right to hate the people who implement the Rot Economy for what they did to the computer. But those people are only doing what policymakers let them do. Corporate monsters thrive in an enshittogenic environment. But political monsters are the ones who create that enshittogenic environment. They're the ones who are terraforming our planet to sideline human life and replace it with the immortal colony organisms we call "limited liability corporations."

Hey look at this (permalink)

Dwayne Johnson Will Play the Chicken Man in 'Lizard Music' https://gizmodo.com/dwayne-johnson-to-next-play-the-chicken-man-in-lizard-music-2000655464
Qualifying Conditions https://www.jwz.org/blog/2025/09/qualifying-conditions/
Cindy Cohn Is Leaving the EFF, but Not the Fight for Digital Rights https://www.wired.com/story/eff-cindy-cohn-stepping-down/
Five technological achievements! (That we won't see any time soon.) https://crookedtimber.org/2025/09/09/five-technological-achievements-that-we-wont-see-any-time-soon/
A notional design studio.
https://ethanmarcotte.com/wrote/a-notional-design-studio/ Object permanence (permalink) #20yrsago Anti-trusted-computing video https://www.lafkon.net/tc/ #10yrsago Library offers Tor nodes; DHS tells them to stop https://www.propublica.org/article/library-support-anonymous-internet-browsing-effort-stops-after-dhs-email #10yrsago Ashley Madison’s passwords were badly encrypted, 15 million+ passwords headed for the Web https://arstechnica.com/information-technology/2015/09/ashley-madison-password-crack-could-spell-trouble-across-the-internet/ #10yrsago Heathrow security insists that ice is a liquid https://gizmodo.com/what-happens-if-you-take-frozen-liquids-through-airport-1729772148 #10yrago DoJ says it will consider jailing executives who order corporate crimes https://www.nytimes.com/2015/09/10/us/politics/new-justice-dept-rules-aimed-at-prosecuting-corporate-executives.html #10yrsago Government-run egg board waged high-price, secret PSYOPS war on vegan egg-replacement https://www.theguardian.com/business/2015/sep/06/usda-american-egg-board-paid-bloggers-hampton-creek #10yrago Using sandwiches to teach the Socratic method https://web.archive.org/web/20140810204054/https://medium.com/@kmikeym/is-this-a-sandwich-50b1317eb3f5 #10yrago Fury Road cosplay: amputated arm edition https://web.archive.org/web/20150911194228/http://www.tor.com/2015/09/09/afternoon-roundup-furiosa-real-prosthetic-arm-cosplay/ #5yrsago Kids' smart-watches unsafe at any speed https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#digital-parenting #5yrsago Georgia voter suppression, quantified https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#georgia-suppression #5yrsago The rise and rise of one of NYPD's dirtiest cops https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#50a #5yrago Inaudible https://pluralistic.net/2020/09/10/booksellers-vs-big-tech/#audible-exclusive Upcoming appearances (permalink) Ithaca: Enshittification at Buffalo Street Books, Sept 11 https://buffalostreetbooks.com/event/2025-09-11/cory-doctorow-tcpl-librarian-judd-karlman Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/ Ithaca: Enshittification at Autumn Leaves Books, Sept 13 https://www.autumnleavesithaca.com/event-details/enshittification-why-everything-got-worse-and-what-to-do-about-it Ithaca: Radicalized Q&A (Cornell), Sept 16 https://events.cornell.edu/event/radicalized-qa-with-author-cory-doctorow Ithaca: The Counterfeiters (Dinner/Movie Night) (Cornell), Sept 17 https://adwhiteprofessors.cornell.edu/visits/cory-doctorow/ Ithaca: Communication Power, Policy, and Practice (Cornell), Sept 18 https://events.cornell.edu/event/policy-provocations-a-conversation-about-communication-power-policy-and-practice Ithaca: A Reverse-Centaur's Guide to Being a Better AI Critic (Cornell), Sept 18 https://events.cornell.edu/event/2025-nordlander-lecture-in-science-public-policy NYC: Enshittification and Renewal (Cornell Tech), Sept 19 https://www.eventbrite.com/e/enshittification-and-renewal-a-conversation-with-cory-doctorow-tickets-1563948454929 NYC: Brooklyn Book Fair, Sept 21 https://brooklynbookfestival.org/event/big-techs-big-heist-cory-doctorow-in-conversation-with-adam-becker/ DC: Enshittification with Rohit Chopra (Politics and Prose), Oct 8 https://politics-prose.com/cory-doctorow-10825 NYC: Enshittification with Lina Khan (Brooklyn Public Library), Oct 9 
https://www.bklynlibrary.org/calendar/cory-doctorow-discusses-central-library-dweck-20251009-0700pm New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/ Chicago: Enshittification with Anand Giridharadas (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/ San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25 Madrid: Conferencia EUROPEA 4D (Virtual), Oct 28 https://4d.cat/es/conferencia/ Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469 Recent appearances (permalink) Nerd Harder! (This Week in Tech) https://twit.tv/shows/this-week-in-tech/episodes/1047 Techtonic with Mark Hurst https://www.wfmu.org/playlists/shows/155658 Cory Doctorow DESTROYS Enshittification (QAA Podcast) https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338 Latest books (permalink) "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels). "The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245). "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com Upcoming books (permalink) "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025 "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025 https://us.macmillan.com/books/9780374619329/enshittification/ "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026 "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026 "The Memex Method," Farrar, Straus, Giroux, 2026 "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026 Colophon (permalink) Today's top sources: Currently writing: "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED. A Little Brother short story about DIY insulin PLANNING This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net. 
https://creativecommons.org/licenses/by/4.0/ Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution. How to get Pluralistic: Blog (no ads, tracking, or data-collection): Pluralistic.net Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic Medium (no ads, paywalled): https://doctorow.medium.com/ Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic "When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. ISSN: 3066-764X

9 hours ago 2 votes
Anthropic Might Legally Owe Me Thousands of Dollars

On shadow libraries, legal documents, and judicial skepticism.

6 hours ago 2 votes
Pluralistic: Trump steals $400b from American workers (09 Sep 2025)

Today's links
Trump steals $400b from American workers: You get a noncompete, and you get a noncompete, and you get a noncompete!
Hey look at this: Delights to delectate.
Object permanence: Spying baby-monitors; FBI tests spy-gear at Burning Man; Little Brother optioned by Paramount; Best-paid CEOs have worst-paid workers.
Upcoming appearances: Where to find me.
Recent appearances: Where I've been.
Latest books: You keep readin' em, I'll keep writin' 'em.
Upcoming books: Like I said, I'll keep writin' 'em.
Colophon: All the rest.

Trump steals $400b from American workers (permalink)

Trump's stolen a lot of workers' wages over the years, but this week he became history's greatest thief of wages, having directed his FTC to stop enforcing its ban on noncompete "agreements," a move that will cost American workers $400 billion over the next ten years:

https://prospect.org/labor/2025-09-09-trump-lets-bosses-grab-400-billion-worker-pay-noncompete-agreements/

The argument for noncompetes goes like this: modern industry is IP-intensive, and IP-intensive businesses need noncompetes; otherwise, workers will take proprietary information with them when they walk out the door and bring it to a competitor. Who would invest in an IP-intensive firm under those circumstances?

I'll tell you who would: Hollywood and Silicon Valley. These are the two most IP-intensive industries in human history, both of which were incubated in California, a state whose constitution prohibits noncompetes and has done so through the entire history of those two industries.

Indeed, we wouldn't have a Silicon Valley if California had noncompetes. Silicon Valley was founded by William Shockley, who won the Nobel Prize for his role in inventing the transistor and set up shop in California to build silicon transistors (hence Silicon Valley). Shockley was a paranoid, virulent racist who couldn't produce a working chip because he was consumed by eugenic fervor and spent all his time on the road offering shares of his Nobel prize money to Black women who would agree to have their tubes tied.

Lucky for (literally) everyone (except Shockley), California doesn't have noncompetes, so eight of his top engineers ("The Traitorous Eight") were able to quit Shockley Semiconductor and start the first successful chip business: Fairchild Semiconductor. And then two of Fairchild's top engineers quit to found Intel:

https://pluralistic.net/2021/10/24/the-traitorous-eight-and-the-battle-of-germanium-valley/

It's not just Silicon Valley that's rooted in wresting IP away from asshole control-freaks: that's Hollywood's story, too. Ever wonder how it was that movies were invented at Edison Labs in New Jersey, but the film industry was incubated in California, literally as far away from Edison as you could possibly get without ending up in Mexico? In short: California got the motion picture industry because Edison was an asshole who used his patents to control what kinds of movies could be made and to extract rents from the filmmakers who had to license those patents.
So the most ambitious filmmakers in America fled to California, where Edison couldn't easily enforce his patents, and founded Hollywood:

https://www.nytimes.com/2005/08/21/weekinreview/lala-land-the-origins.html?unlocked_article_code=1.kk8.5T1M.VSaEsN5Vn9tM&smid=url-share

And Hollywood stayed in California, a place where noncompetes couldn't be enforced, where "IP" could hop from one studio to another, smuggled out between the ears of writers, actors, directors, SFX wizards, prop makers, scene painters, makeup artists, costumers, and the most creative professionals in Hollywood: accountants.

Empirically speaking, the function of noncompetes is to trap good workers and good ideas in companies controlled by asshole bosses who can't get anything done. Any disinvestment that can be attributed to the absence of noncompetes is completely swamped by the dividends generated by good workers and good ideas escaping from control-freak asshole bosses and founding productive firms. As ever, money talks and bullshit walks.

Today, one in 18 US workers is trapped by a noncompete, and those aren't the knowledge workers of Silicon Valley or Hollywood. So who is captured by this form of contractual indenture? The median US worker under a noncompete is a fast-food worker stuck with the tipped minimum wage, or a pet groomer making the regular minimum wage. The function of the noncompete in America isn't to secure investment for knowledge-intensive industries – it's to stop the cashier at Wendy's from getting an extra $0.25/hour working the fry-trap at the McDonald's across the street.

Noncompetes are an integral part of the conservative project, which is the substitution of individual power for democratic choice. As Dan Savage puts it, the GOP agenda is "Husbands you can't leave [ed: ending no-fault divorce], pregnancies you can't prevent or terminate [ed: banning contraception and abortion], politicians you can't vote out of office [ed: gerrymandering and voter suppression]." Add to that: jobs you can't quit.

It's not just noncompetes that lock workers to shitty bosses. When Biden's FTC investigated the issue, it revealed a widespread practice of "training repayment agreement provisions" (TRAPs) that put workers on the hook for thousands of dollars if they quit or get fired:

https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose

A TRAPped worker – often a pet groomer at a private equity-owned giant like Petsmart – is charged $5,500 or more for three weeks of "training" that actually amounts to one or two weeks of sweeping up pet hair. But if they leave or get fired in the next three years, they have to pay back that whole amount:

https://pluralistic.net/2022/08/04/its-a-trap/#a-little-on-the-nose

A closely related concept is "bondage fees," which have been imposed on whole classes of workers, like doormen in NYC apartment buildings:

https://pluralistic.net/2023/04/21/bondage-fees/#doorman-building

These fees trap workers in dead-end jobs by forcing anyone who hires them away to pay massive fees to their former employers. It's just another way to lock workers to businesses.

The irony here is that conservatives claim to worship "voluntarism" and "free choice," and insist that the virtue of markets is that they "aggregate price signals" so that companies can respond to those signals by efficiently matching demand to supply.
But though conservatives say they worship free choice as an engine of economic efficiency, they understand that their ideas are so unpopular that they can only succeed if people are coerced into adopting them – hence voter suppression, gerrymandering, noncompetes, and other heads-I-win/tails-you-lose propositions.

Noncompetes aren't about preventing the loss of IP – they're about preventing the loss of process knowledge, the know-how to turn ideas into products and services. Bosses love IP because it can be alienated, hoarded and sold, while process knowledge is ineluctably vested in the bodies, minds and relations of workers. No IP law can keep employees from taking process knowledge with them on their way out the door, so bosses want to ban them from leaving:

https://pluralistic.net/2025/09/08/process-knowledge/#dance-monkey-dance

Biden's FTC banned noncompetes nationwide, for nearly every category of employment, deeming them an "unfair method of competition":

https://www.ftc.gov/news-events/news/press-releases/2023/03/ftc-extends-public-comment-period-its-proposed-rule-ban-noncompete-clauses-until-april-19

FTC economists estimated that killing noncompetes would result in $400b in wage gains for the American workforce over the next decade, as good workers migrated to good bosses. Of course, this was challenged by the business lobby, which sued to get the rule overturned. Trump's FTC has not only declined to defend the rule in court, it has also decided to stop trying to enforce it.

Trump is now the king of wage theft, and MAGA is a relentless engine of enshittification. After all, the thesis of enshittification is that companies make their products and practices worse for suppliers, users and business customers only when they calculate that they can do so without facing punishment – from regulators, competitors, or workers.

Trump's regulators are all either comatose or so captured that they wear gimpsuits and leashes in public. They're not keeping companies in line. And his antitrust shops have turned into pay-for-play operations, where a $1m payment to a MAGA influencer gets your case dropped:

https://www.thebignewsletter.com/p/an-attempted-coup-at-the-antitrust

Trump neutered the National Labor Relations Board, and now he's revived indentured servitude nationwide, formalizing the idea of government-backed jobs you can't quit. If you can't quit your job or vote out your politicians, why wouldn't your boss or your elected representative just relentlessly fuck you over? Not merely for sadism's sake (though sadism undoubtedly plays a part here), but simply to make things better for themselves by making things worse for you? It's exactly the same logic as platform lock-in: once you can't leave, they don't have to keep you happy.

Formalizing the legality of noncompetes will only lead to their monotonic spread. When Antonin Scalia greenlit binding arbitration waivers in consumer contracts, only a tiny number of companies used them, forcing customers to sign away their right to sue no matter how badly, negligently or criminally those companies behaved.
Today, binding arbitration has expanded into every kind of contract, even to the point where groovy, open source, decentralized, federated social media platforms are forcing it on their users:

https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-and-all-NON-NEGOTIATED-agreements

Same for noncompetes: as private equity rolls up whole sectors – funeral homes, pet groomers, hospices – it will stuff noncompetes into the contracts of every employer in each industry, so that no matter where a worker applies for a job, they'll have to sign a noncompete. Why wouldn't they? If workers can't leave, they'll accept worse working conditions and lower pay. The best workers will be stuck with the worst employers.

And despite owing their existence to bans on noncompetes, Silicon Valley and Hollywood will happily cram noncompetes down their workers' throats. If you doubt it, just read up on the "no poach" scandal, in which the biggest tech and movie companies entered into a criminal conspiracy not to hire away each other's employees:

https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_Litigation

The conservative future, folks: jobs you can't quit, politicians you can't vote out of office, husbands you can't divorce, and pregnancies you can't prevent or terminate.

Hey look at this (permalink)

Nate Silver's big list of grievances https://www.garbageday.email/p/nate-silver-s-big-list-of-grievances
Electronic Dance Music vs. Copyright: Law as Weaponized Culture https://drive.proton.me/urls/TVH0PW4TZ8#EM5VMl1BUlny
Google admits the open web is in 'rapid decline' https://www.theverge.com/news/773928/google-open-web-rapid-decline
Britain Owes Palestine https://www.britainowespalestine.org/
A Dramatic Reading of The Recent New York Times Dispatch from the Hamptons.
https://bsky.app/profile/zohrankmamdani.bsky.social/post/3lyech7chqs2q

Object permanence (permalink)

#20yrsago Crooks take anti-forensic countermeasures https://www.newscientist.com/article/mg18725163-800-television-shows-scramble-forensic-evidence/
#20yrsago Recording industry demands digital radio broadcast flag https://web.archive.org/web/20051018100306/https://www.godwinslaw.org/weblog/archive/2005/09/09/riaas-big-push-to-copy-protect-digital-radio
#20yrsago Unicef/Save the Children sell out to recording industry https://web.archive.org/web/20050914034709/http://www.promusicae.org/pdf/campana_jovenes_musica_e_internet.pdf
#15yrsago TSA forces pregnant traveller into full-body scanner https://web.archive.org/web/20100910235117/https://consumerist.com/2010/09/pregnant-traveler-tsa-screeners-bullied-me-into-full-body-scan.html
#10yrsago Help crowdfund a relentless tsunami of FOIA requests into America's private prisons https://www.muckrock.com/project/the-private-prison-project-8/
#10yrsago Your baby monitor is an Internet-connected spycam vulnerable to voyeurs and crooks https://web.archive.org/web/20210505050810/https://www.rapid7.com/blog/post/2015/09/02/iotsec-disclosure-10-new-vulns-for-several-video-baby-monitors/
#10yrsago Inept copyright bot sends 2600 a legal threat over ink blotches https://www.2600.com/content/2600-accused-using-unauthorized-ink-splotches
#10yrsago FBI used Burning Man to field-test new surveillance equipment https://www.muckrock.com/news/archives/2015/sep/01/burning-man-fbi-file/
#10yrsago Fury Road, hieroglyph edition https://imgur.com/gallery/you-will-ride-eternal-papyrus-chrome-you-will-ride-eternal-papyrus-chrome-BxdOcTr#/t/chrome
#10yrsago Little Brother optioned by Paramount https://www.tracking-board.com/tb-exclusive-paramount-pictures-picks-up-ny-times-bestselling-ya-novel-little-brother/
#10yrsago Record street-marches in Moldova against corrupt oligarchs https://www.euractiv.com/section/europe-s-east/news/moldova-banking-scandal-fuels-biggest-protest-ever/
#5yrsago Germany's amazing new competition proposal https://pluralistic.net/2020/09/09/free-sample/#wunderschoen
#5yrsago DRM versus human rights https://pluralistic.net/2020/09/09/free-sample/#que-viva
#1yrago America's best-paid CEOs have the worst-paid employees https://pluralistic.net/2024/09/09/low-wage-100/#executive-excess

Upcoming appearances (permalink)

Ithaca: Enshittification at Buffalo Street Books, Sept 11 https://buffalostreetbooks.com/event/2025-09-11/cory-doctorow-tcpl-librarian-judd-karlman
Ithaca: AD White keynote (Cornell), Sept 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/
Ithaca: Enshittification at Autumn Leaves Books, Sept 13 https://www.autumnleavesithaca.com/event-details/enshittification-why-everything-got-worse-and-what-to-do-about-it
Ithaca: Radicalized Q&A (Cornell), Sept 16 https://events.cornell.edu/event/radicalized-qa-with-author-cory-doctorow
Ithaca: The Counterfeiters (Dinner/Movie Night) (Cornell), Sept 17 https://adwhiteprofessors.cornell.edu/visits/cory-doctorow/
Ithaca: Communication Power, Policy, and Practice (Cornell), Sept 18 https://events.cornell.edu/event/policy-provocations-a-conversation-about-communication-power-policy-and-practice
Ithaca: A Reverse-Centaur's Guide to Being a Better AI Critic (Cornell), Sept 18 https://events.cornell.edu/event/2025-nordlander-lecture-in-science-public-policy
NYC: Enshittification and Renewal (Cornell Tech), Sept 19
https://www.eventbrite.com/e/enshittification-and-renewal-a-conversation-with-cory-doctorow-tickets-1563948454929
NYC: Brooklyn Book Fair, Sept 21 https://brooklynbookfestival.org/event/big-techs-big-heist-cory-doctorow-in-conversation-with-adam-becker/
DC: Enshittification with Rohit Chopra (Politics and Prose), Oct 8 https://politics-prose.com/cory-doctorow-10825
NYC: Enshittification with Lina Khan (Brooklyn Public Library), Oct 9 https://www.bklynlibrary.org/calendar/cory-doctorow-discusses-central-library-dweck-20251009-0700pm
New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/
Chicago: Enshittification with Anand Giridharadas (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/
San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25
Madrid: Conferencia EUROPEA 4D (Virtual), Oct 28 https://4d.cat/es/conferencia/
Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469

Recent appearances (permalink)

Nerd Harder! (This Week in Tech) https://twit.tv/shows/this-week-in-tech/episodes/1047
Techtonic with Mark Hurst https://www.wfmu.org/playlists/shows/155658
Cory Doctorow DESTROYS Enshittification (QAA Podcast) https://soundcloud.com/qanonanonymous/cory-doctorow-destroys-enshitification-e338

Latest books (permalink)

"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
"The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books (http://redteamblues.com).
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 (https://chokepointcapitalism.com).

Upcoming books (permalink)

"Canny Valley": a limited edition collection of the collages I create for Pluralistic, self-published, September 2025
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus and Giroux, October 7 2025 (https://us.macmillan.com/books/9780374619329/enshittification/)
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2026
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026
"The Memex Method," Farrar, Straus and Giroux, 2026
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026

Colophon (permalink)

Today's top sources:

Currently writing:
"The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. FIRST DRAFT COMPLETE AND SUBMITTED.
A Little Brother short story about DIY insulin. PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.

How to get Pluralistic:

Blog (no ads, tracking, or data-collection): Pluralistic.net
Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic
Medium (no ads, paywalled): https://doctorow.medium.com/
Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

2 days ago 2 votes
A guide to understanding AI as normal technology

And a big change for this newsletter

2 days ago 8 votes