
When Imperfect Systems are Good, Actually: Bluesky's Lossy Timelines

Often when designing systems, we aim for perfection in things like consistency of data, availability, latency, and more. The hardest part of system design is that it’s difficult (if not impossible) to design systems that have perfect consistency, perfect availability, incredibly low latency, and incredibly high throughput, all at the same time. Instead, when we approach system design, it’s best to treat each of these properties as points on different axes that we balance to find the “right fit” for the application we’re supporting. I recently made some major tradeoffs in the design of Bluesky’s Following Feed/Timeline to improve the performance of writes at the cost of consistency in a way that doesn’t negatively affect users but reduced P99s by over 96%. Timeline Fanout When you make a post on Bluesky, your post is indexed by our systems and persisted to a database where we can fetch it to hydrate and serve in API responses. Additionally, a reference to your post is “fanned out” to your followers so they can see it in their Timelines. This process involves looking up all of your followers, then inserting a new row into each of their Timeline tables in reverse chronological order with a reference to your post. When a user loads their Timeline, we fetch a page of post references and then hydrate the posts/actors concurrently to quickly build an API response and let them see the latest content from people they follow. The Timelines table is sharded by user. This means each user gets their own Timeline partition, randomly distributed among shards of our horizontally scalable database (ScyllaDB), replicated across multiple shards for high availability. Timelines are regularly trimmed when written to, keeping them near a target length and dropping older post references to conserve space. Hot Shards in Your Area Bluesky currently has around 32 Million Users and our Timelines database is broken into hundreds of shards. To support millions of partitions on such a small number of shards, each user’s Timeline partition is colocated with tens of thousands of other users’ Timelines. Under normal circumstances with all users behaving well, this doesn’t present a problem as the work of an individual Timeline is small enough that a shard can handle the work of tens of thousands of them without being heavily taxed. Unfortunately, with a large number of users, some of them will do abnormal things like… well… following hundreds of thousands of other users. Generally, this can be dealt with via policy and moderation to prevent abusive users from causing outsized load on systems, but these processes take time and can be imperfect. When a user follows hundreds of thousands of others, their Timeline becomes hyperactive with writes and trimming occurring at massively elevated rates. This load slows down the individual operations to the user’s Timeline, which is fine for the bad behaving user, but causes problems to the tens of thousands of other users sharing a shard with them. We typically call this situation a “Hot Shard”: where some resident of a shard has “hot” data that is being written to or read from at much higher rates than others. Since the data on the shard is only replicated a few times, we can’t effectively leverage the horizontal scale of our database to process all this additional work. Instead, the “Hot Shard” ends up spending so much time doing work for a single partition that operations to the colocated partitions slow down as well. 
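Before digging into latencies, here is a minimal sketch of what a paged, concurrent fanout loop might look like. The store interfaces, page size, and concurrency limit are illustrative assumptions, not Bluesky's actual implementation.

package fanout

import (
	"context"
	"sync"
)

// FollowerStore and TimelineStore are hypothetical interfaces standing in for
// the real follower index and the sharded Timelines database.
type FollowerStore interface {
	// GetFollowersPage returns one page of follower UIDs plus a cursor for the next page.
	GetFollowersPage(ctx context.Context, author uint64, cursor string, limit int) (uids []uint64, next string, err error)
}

type TimelineStore interface {
	// AppendPost inserts a reference to postURI at the head of one follower's Timeline partition.
	AppendPost(ctx context.Context, follower uint64, postURI string) error
}

// FanoutPost walks the author's followers page by page and writes a post reference
// into each follower's Timeline with bounded concurrency.
func FanoutPost(ctx context.Context, followers FollowerStore, timelines TimelineStore, author uint64, postURI string) error {
	const pageSize = 10_000   // followers fetched per page (illustrative)
	const concurrency = 1_000 // concurrent Timeline writes within a page (illustrative)

	cursor := ""
	for {
		page, next, err := followers.GetFollowersPage(ctx, author, cursor, pageSize)
		if err != nil {
			return err
		}

		sem := make(chan struct{}, concurrency)
		var wg sync.WaitGroup
		for _, follower := range page {
			follower := follower
			sem <- struct{}{} // acquire a concurrency slot
			wg.Add(1)
			go func() {
				defer wg.Done()
				defer func() { <-sem }() // release the slot
				// A real system would log/retry failures; dropped here for brevity.
				_ = timelines.AppendPost(ctx, follower, postURI)
			}()
		}
		wg.Wait() // the slowest write in the page gates fetching the next page

		if next == "" {
			return nil
		}
		cursor = next
	}
}

That wg.Wait() at the end of each page is exactly where tail latencies bite, which is what the next section digs into.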
Stacking Latencies

Returning to our Fanout process, let’s consider the case of Fanout for a user followed by 2,000,000 other users. Under normal circumstances, writing to a single Timeline takes an average of ~600 microseconds. If we sequentially write to the Timelines of our user’s followers, we’ll be sitting around for 20 minutes at best to Fanout this post.

If instead we concurrently Fanout to 1,000 Timelines at once, we can complete this Fanout job in ~1.2 seconds. That sounds great, except it oversimplifies an important property of systems: tail latencies.

The average latency of a write is ~600 microseconds, but some writes take much less time and some take much more. In fact, the P99 latency of writes to the Timelines cluster can be as high as 15 milliseconds! What does this mean for our Fanout? Well, if we concurrently write to 1,000 Timelines at once, statistically we’ll see 10 writes as slow as or slower than 15 milliseconds. In the case of Timelines, each “page” of followers is 10,000 users large and each “page” must be fanned out before we fetch the next page. This means that our slowest writes will hold up the fetching and Fanout of the next page.

How does this affect our expected Fanout time? Each “page” will have ~100 writes as slow as or slower than the P99 latency. If we get unlucky, they could all stack up on a single routine and end up slowing down a single page of Fanout to 1.5 seconds. In the worst case, for our 2,000,000-Follower celebrity, their post Fanout could end up taking as long as 5 minutes! That’s not even considering P99.9 and P99.99 latencies, which could end up being >1 second and could leave us waiting tens of minutes for our Fanout job. Now imagine how bad this would be for a user with 20,000,000+ Followers!

So, how do we fix the problem? By embracing imperfection, of course!

Lossy Timelines

Imagine a user who follows hundreds of thousands of others. Their Timeline is being written to hundreds of times a second, moving so fast it would be humanly impossible to keep up with the entirety of their Timeline even if it were their full-time job.

For a given user, there’s a threshold beyond which it is unreasonable for them to be able to keep up with their Timeline. Beyond this point, they likely consume content through various other feeds and do not primarily use their Following Feed. Additionally, beyond this point, it is reasonable for us to not necessarily have a perfect chronology of everything posted by the many thousands of users they follow, but to provide enough content that the Timeline always has something new. Note that in this case I’m using the term “reasonable” to loosely convey that, as a social media service, there must be a limit to the amount of work we are expected to do for a single user.

What if we introduce a mechanism to reduce the correctness of a Timeline such that there is a limit to the amount of work a single Timeline can place on a DB shard? We can assert a reasonable limit for the number of follows a user should have to have a healthy and active Timeline, then increase the “lossiness” of their Timeline the further past that limit they go.

A loss_factor can be defined as min(reasonable_limit/num_follows, 1) and can be used to probabilistically drop writes to a Timeline to prevent hot shards. Just before writing a page in Fanout, we can generate a random float between 0 and 1, then compare it to the loss_factor of each user in the page.
If the user’s loss_factor is smaller than the generated float, we filter the user out of the page and don’t write to their Timeline. Now, users all have the same number of “follows worth” of Fanout. For example, with a reasonable_limit of 2,000, a user who follows 4,000 others will have a loss_factor of 0.5, meaning half the writes to their Timeline will get dropped. For a user following 8,000 others, their loss_factor of 0.25 will drop 75% of writes to their Timeline. Thus, each user has an effective ceiling on the amount of Fanout work done for their Timeline.

By specifying the limits of reasonable user behavior and embracing imperfection for users who go beyond it, we can continue to provide service that meets the expectations of users without sacrificing scalability of the system.

Aside on Caching

We write to Timelines at a rate of more than one million times a second during the busy parts of the day. Looking up the number of follows of a given user before fanning out to them would require more than one million additional reads per second to our primary database cluster. This additional load would not be well received by our database, and the additional cost wouldn’t be worth the payoff for faster Timeline Fanout.

Instead, we implemented an approach that caches high-follow accounts in a Redis sorted set; each instance of our Fanout service then loads an updated version of the set into memory every 30 seconds. This allows us to perform lookups of follow counts for high-follow accounts millions of times per second per Fanout service instance. By caching values which don’t need to be perfect to function correctly in this case, we can once again embrace imperfection in the system to improve performance and scalability without compromising the function of the service.

Results

We implemented Lossy Timelines a few weeks ago on our production systems and saw a dramatic reduction in hot shards on the Timelines database clusters. In fact, there now appear to be no hot shards in the cluster at all, and the P99 of a page of Fanout work has been reduced by over 90%. Additionally, with the reduction in write P99s, the P99 duration for a full post Fanout has been reduced by over 96%. Jobs that used to take 5-10 minutes for large accounts now take <10 seconds.

Knowing where it’s okay to be imperfect lets you trade consistency for other desirable aspects of your systems and scale ever higher. There are plenty of other places for improvement in our Timelines architecture, but this step was a big one towards improving throughput and scalability of Bluesky’s Timelines.

If you’re interested in these sorts of problems and would like to help us build the core data services that power Bluesky, check out this job listing. If you’re interested in other open positions at Bluesky, you can find them here.
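As a closing illustration of the mechanism described above, here is a minimal sketch of the Lossy Timelines filter. The loss_factor formula and the pre-loaded follow-count cache come from the post; the function names, types, and the reasonable_limit value in the usage note are assumptions.

package lossy

import "math/rand"

// lossFactor implements min(reasonable_limit / num_follows, 1).
func lossFactor(numFollows, reasonableLimit int) float64 {
	if numFollows <= reasonableLimit {
		return 1.0
	}
	return float64(reasonableLimit) / float64(numFollows)
}

// FilterLossyPage probabilistically drops users from a Fanout page based on their
// follow counts. followCounts stands in for the in-memory copy of the Redis sorted
// set of high-follow accounts; users absent from it are under the limit and always kept.
func FilterLossyPage(page []uint64, followCounts map[uint64]int, reasonableLimit int) []uint64 {
	kept := make([]uint64, 0, len(page))
	for _, uid := range page {
		follows, isHighFollow := followCounts[uid]
		if !isHighFollow {
			kept = append(kept, uid)
			continue
		}
		// Keep the write with probability loss_factor, drop it otherwise.
		if rand.Float64() <= lossFactor(follows, reasonableLimit) {
			kept = append(kept, uid)
		}
	}
	return kept
}

With a reasonable_limit of 2,000, a user following 8,000 others keeps roughly one write in four, matching the 75% drop rate described above.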

4 months ago 36 votes
Emoji Griddle
8 months ago 17 votes
Jetstream: Shrinking the AT Proto Firehose by >99%

Bluesky recently saw a massive spike in activity in response to Brazil’s ban of Twitter. As a result, the AT Proto event firehose provided by Bluesky’s Relay at bsky.network has increased in volume by a huge amount. The average event rate during this surge increased by ~1,300%. Before this new surge in activity, the firehose would produce around 24 GB/day of traffic. After the surge, this volume jumped to over 232 GB/day! Keeping up with the full, verified firehose quickly became less practical on cheap cloud infrastructure with metered bandwidth. To help reduce the burden of operating bots, feed generators, labelers, and other non-verifying AT Proto services, I built Jetstream as an alternative, lightweight, filterable JSON firehose for AT Proto. How the Firehose Works The AT Proto firehose is a mechanism used to keep verified, fully synced copies of the repos of all users. Since repos are represented as Merkle Search Trees, each firehose event contains an update to the user’s MST which includes all the changed blocks (nodes in the path from the root to the modified leaf). The root of this path is signed by the repo owner, and a consumer can keep their copy of the repo’s MST up-to-date by applying the diff in the event. For a more in-depth explanation of how Merkle Trees are constructed, check out this explainer. Practically, this means that for every small JSON record added to a repo, we also send along some number of MST blocks (which are content-addressed hashes and thus very information-dense) that are mostly useful for consumers attempting to keep a fully synced, verified copy of the repo. You can think of this as the difference between cloning a git repo v.s. just grabbing the latest version of the files without the .git folder. In this case, the firehose effectively streams the diffs for the repository with commits, signatures, and metadata, which is inherently heavier than a point-in-time checkout of the repo. Because firehose events with repo updates are signed by the repo owner, they allow a consumer to process events from any operator without having to trust the messenger. This is the “Authenticated” part of the Authenticated Transfer (AT) Protocol and is crucial to the correct functioning of the network. That being said, of the hundreds of consumers of Bluesky’s production Relay, >90% of them are building feeds, bots, and other tools that don’t keep full copies of the entire network and don’t verify MST operations at all. For these consumers, all they actually process is the JSON records created, updated, and deleted in each event. If consumers already trust the provider to do validation on their end, they could get by with a much more lightweight data stream. How Jetstream Works Jetstream is a streaming service that consumes an AT Proto com.atproto.sync.subscribeRepos stream and converts it into lightweight, friendly JSON. If you want to try it out yourself, you can connect to my public Jetstream instance and view all posts on Bluesky in realtime: $ websocat "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post" Note: the above instance is operated by Bluesky PBC and is free to use, more instances are listed in the official repo Readme Jetstream converts the CBOR-encoded MST blocks produced by the AT Proto firehose and translates them into JSON objects that are easier to interface with using standard tooling available in programming languages. 
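For a sense of how little tooling a consumer needs, here is a minimal Go sketch of a Jetstream consumer using the gorilla/websocket package; the struct fields mirror the event shape shown below, and error handling is trimmed for brevity.

package main

import (
	"encoding/json"
	"log"

	"github.com/gorilla/websocket"
)

// jetstreamEvent captures just the fields this example cares about.
type jetstreamEvent struct {
	Did    string `json:"did"`
	TimeUS int64  `json:"time_us"`
	Commit struct {
		Collection string          `json:"collection"`
		RKey       string          `json:"rkey"`
		Record     json.RawMessage `json:"record"`
	} `json:"commit"`
}

func main() {
	url := "wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post"
	conn, _, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Fatalf("read: %v", err)
		}
		var evt jetstreamEvent
		if err := json.Unmarshal(msg, &evt); err != nil {
			continue // skip malformed or unexpected messages
		}
		log.Printf("%s created a %s record", evt.Did, evt.Commit.Collection)
	}
}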
Since Repo MSTs only contain records in their leaf nodes, this means Jetstream can drop all of the blocks in an event except for those of the leaf nodes, typically leaving only one block per event. In reality, this means that Jetstream’s JSON firehose is nearly 1/10 the size of the full protocol firehose for the same events, but lacks the verifiability and signatures included in the protocol-level firehose. Jetstream events end up looking something like: { "did": "did:plc:eygmaihciaxprqvxpfvl6flk", "time_us": 1725911162329308, "type": "com", "commit": { "rev": "3l3qo2vutsw2b", "type": "c", "collection": "app.bsky.feed.like", "rkey": "3l3qo2vuowo2b", "record": { "$type": "app.bsky.feed.like", "createdAt": "2024-09-09T19:46:02.102Z", "subject": { "cid": "bafyreidc6sydkkbchcyg62v77wbhzvb2mvytlmsychqgwf2xojjtirmzj4", "uri": "at://did:plc:wa7b35aakoll7hugkrjtf3xf/app.bsky.feed.post/3l3pte3p2e325" } }, "cid": "bafyreidwaivazkwu67xztlmuobx35hs2lnfh3kolmgfmucldvhd3sgzcqi" } } Each event lets you know the DID of the repo it applies to, when it was seen by Jetstream (a time-based cursor), and up to one updated repo record as serialized JSON. Check out this 10 second CPU profile of Jetstream serving 200k evt/sec to a local consumer: By dropping the MST and verification overhead by consuming from relay we trust, we’ve reduced the size of a firehose of all events on the network from 232 GB/day to ~41GB/day, but we can do better. Jetstream and zstd I recently read a great engineering blog from Discord about their use of zstd to compress websocket traffic to/from their Gateway service and client applications. Since Jetstream emits marshalled JSON through the websocket for developer-friendliness, I figured it might be a neat idea to see if we could get further bandwidth reduction by employing zstd to compress events we send to consumers. zstd has two basic operating modes, “simple” mode and “streaming” mode. Streaming Compression At first glance, streaming mode seems like it’d be a great fit. We’ve got a websocket connection with a consumer and streaming mode allows the compression to get more efficient over the lifetime of the connection. I went and implemented a streaming compression version of Jetstream where a consumer can request compression when connecting and will get zstd compressed JSON sent as binary messages over the socket instead of plaintext. Unfortunately, this had a massive impact on Jetstream’s server-side CPU utilization. We were effectively compressing every message once per consumer as part of their streaming session. This was not a scalable approach to offering compression on Jetstream. Additionally, Jetstream stores a buffer of the past 24 hours (configurable) of events on disk in PebbleDB to allow consumers to replay events before getting transitioned into live-tailing mode. Jetstream stores serialized JSON in the DB, so playback is just shuffling the bytes into the websocket without having to round-trip the data into a Go struct. When we layer in streaming compression, playback becomes significantly more expensive because we have to compress outgoing events on-the-fly for a consumer that’s catching up. In real numbers, this increased CPU usage of Jetstream by 23% while lowering the throughput of playback from ~200k evt/sec to ~28k evt/sec for a single local consumer. When in streaming mode, we can’t leverage the bytes we compress for one consumer and reuse them for another consumer because zstd’s streaming context window may not be in sync between the two consumers. 
They haven’t received exactly the same data in the session, so the clients on the other end don’t have their state machines in the same state. Since streaming mode’s primary advantage is giving us eventually better efficiency as the encoder learns about the data, what if we just taught the encoder about the data at the start and compressed each message statelessly?

Dictionary Mode

zstd offers a mechanism for initializing an encoder/decoder with pre-optimized settings by providing a dictionary trained on a sample of the data you’ll be encoding/decoding. Using this dictionary, zstd essentially uses its smallest encoded representations for the most frequently seen patterns in the sample data. In our case, where we’re compressing serialized JSON with a common event shape and lots of common property names, training a dictionary on a large number of real events should allow us to represent the common elements among messages in the smallest number of bytes.

For take two of Jetstream with zstd, let’s use a single encoder for the whole service that utilizes a custom dictionary trained on 100,000 real events. We can use this encoder to compress every event as we see it, before persisting and emitting it to consumers. Now we end up with two copies of every event: one that’s just serialized JSON, and one that’s statelessly compressed with zstd using our dictionary. Any consumers that want compression can have a copy of the dictionary on their end to initialize a decoder; then, when we broadcast the shared compressed event, all consumers can read it without any state or context issues. This requires the consumers and server to have a pre-shared dictionary, which is a major drawback of this implementation but good enough for our purposes.

That leaves the problem of event playback for compression-enabled clients. An easy solution here is to just store the compressed events as well! Since we’re only sticking the JSON records into our PebbleDB, the actual size of the 24-hour playback window is <8GB with sstable compression. If we store a copy of the JSON serialized event and a copy of the zstd compressed event, this will, at most, double our storage requirements. Then during playback, if the consumer requests compression, we can just shuffle bytes out of the compressed version of the DB into their socket instead of having to move it through a zstd encoder.

Savings

Running with a custom dictionary, I was able to get the average Jetstream event down from 482 bytes to just 211 bytes (~0.44 compression ratio). Jetstream allows us to live tail all posts on Bluesky as they’re posted for as little as ~850 MB/day, and we could keep up with all events moving through the firehose during the Brazil Twitter Exodus weekend for 18GB/day (down from 232GB/day).

With this scheme, Jetstream is required to compress each event only once before persisting it to disk and emitting it to connected consumers. The CPU impact of these changes is significant in proportion to Jetstream’s incredibly light load, but it’s a flat cost we pay once no matter how many consumers we have. (CPU profile from a 30 second pprof sample with 12 consumers live-tailing Jetstream)

Additionally, with Jetstream’s shared buffer broadcast architecture, we keep memory allocations incredibly low and the cost per consumer on CPU and RAM is trivial. In the allocation profile below, more than 80% of the allocations are used to consume the full protocol firehose.
The total resident memory of Jetstream sits below 16MB, 25% of which is actually consumed by the new zstd dictionary. To bring it all home, here’s a screenshot from the dashboard of my public Jetstream instance serving 12 consumers all with various filters and compression settings, running on a $5/mo OVH VPS.

At our new baseline firehose activity, a consumer of the protocol-level firehose would require downloading ~3.16TB/mo to keep up. A Jetstream consumer getting all created, updated, and deleted records without compression enabled would require downloading ~400GB/mo to keep up. A Jetstream consumer that only cares about posts and has zstd compression enabled can get by on as little as ~25.5GB/mo, a >99% reduction compared to the full-weight firehose.

Feel free to join the conversation about Jetstream and zstd on Bluesky.
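To ground the dictionary approach described above, here is a rough sketch of stateless, dictionary-based compression using the github.com/klauspost/compress/zstd package. The dictionary file path and variable names are assumptions for illustration, not Jetstream's actual code.

package main

import (
	"log"
	"os"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// Pre-shared dictionary trained offline on a sample of real events
	// (path and training step are assumptions for this sketch).
	dict, err := os.ReadFile("jetstream.dict")
	if err != nil {
		log.Fatal(err)
	}

	// One encoder for the whole service: each event is compressed exactly once,
	// statelessly, so the same compressed bytes can be broadcast to every consumer.
	enc, err := zstd.NewWriter(nil, zstd.WithEncoderDict(dict))
	if err != nil {
		log.Fatal(err)
	}
	defer enc.Close()

	// Consumers initialize a decoder with the same pre-shared dictionary.
	dec, err := zstd.NewReader(nil, zstd.WithDecoderDicts(dict))
	if err != nil {
		log.Fatal(err)
	}
	defer dec.Close()

	event := []byte(`{"did":"did:plc:example","commit":{"collection":"app.bsky.feed.post"}}`)

	compressed := enc.EncodeAll(event, nil) // store and/or broadcast this
	restored, err := dec.DecodeAll(compressed, nil)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%d bytes -> %d bytes -> %q", len(event), len(compressed), restored)
}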

9 months ago 22 votes
How HLS Works

Over the past few weeks, I’ve been building out server-side short video support for Bluesky. The major aim of this feature is to support short (90 second max) video streaming at a quality that doesn’t cost an arm and a leg for us to provide for free. In order to stay within these constraints, we’re considering making use of a video CDN that can bear the brunt of the bandwidth required to support Video-on-Demand streaming. While the CDN is a pretty fully-featured product, we want to avoid too much vendor lock-in and provide some enhancements to our streaming platform that requires extending their offering and getting creative with video streaming protocols. Some of the things we’d like to be able to do that don’t work out-of-the-box are: Track view counts, viewer sessions, and duration viewed to provide better feedback for video performance. Provide dynamic closed-caption support with the flexibility to automate them in the future. Store a transcoded version of source files somewhere durable to provide a “source of truth” for videos when needed. Append a “trailer” to the end of video streams for some branding in a TikTok-esque 3-second snippet. In this post I’ll be focusing on the HLS-related features above, namely view/duration accounting, closed captions, and trailers. HLS is Just a Bunch of Text files HTTP Live Streaming (HLS) is a standard established by Apple in 2009 that allows for adaptive-bitrate live and Video-on-Demand (VOD) streaming. For the purposes of this blog post, I’ll restrict my explanations to how HLS VOD streaming works. A player that implements the HLS protocol is capable of dynamically adjusting the quality of a streamed video based on network conditions. Additionally, a server that implements the HLS protocol should provide one or more variants of a media stream which accommodate varying network qualities to allow for graceful degradation of stream quality without stopping playback. HLS implements this by producing a series of plaintext (.m3u8) “playlist” files that tell the player what bitrates and resolutions the server provides so that the player can decide which variant it should stream. HLS differentiates between two kinds of “playlist” files: Master Playlists, and Media Playlists. Master Playlists A Master Playlist is the first file fetched by your video player. It contains a series of variants which point to child Media Playlists. It also describes the approximate bitrate of the variant sources and the codecs and resolutions used by those sources. $ curl https://my.video.host.com/video_15/playlist.m3u8 #EXTM3U #EXT-X-VERSION:3 #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360 360p/video.m3u8 #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720 720p/video.m3u8 In the above file, the key things to notice are the RESOLUTION parameters and the {res}/video.m3u8 links. Your media player will generally start with the lowest resolution version before jumping up to higher resolutions once the network speed between you and the server is dialed in. The links in this file are pointers to Media Playlists, generally as relative paths from the Master Playlist such that, if we wanted to grab the 720p Media Playlist, we’d navigate to: https://my.video.host.com/video_15/720p/video.m3u8. A Master Playlist can also contain multi-track audio directives and directives for closed-captions but for now let’s move onto the Media Playlist. 
Media Playlists A Media Playlist is yet another plaintext file that provides your video player with two key bits of data: a list of media Segments (encoded as .ts video files) and headers for each Segment that tell the player the runtime of the media. $ curl https://my.video.host.com/video_15/720p/video.m3u8 #EXTM3U #EXT-X-VERSION:3 #EXT-X-PLAYLIST-TYPE:VOD #EXT-X-MEDIA-SEQUENCE:0 #EXT-X-TARGETDURATION:4 #EXTINF:4.000, video0.ts #EXTINF:4.000, video1.ts #EXTINF:4.000, video2.ts #EXTINF:4.000, video3.ts #EXTINF:4.000, video4.ts #EXTINF:2.800, video5.ts This Media Playlist describes a video that’s 22.8 seconds long (5 x 4-second Segments + 1 x 2.8-second Segment). The playlist describes a VOD piece of media, meaning we know this playlist contains the entirety of the media the player needs. The TARGETDURATION tells us the maximum length of each Segment so the player knows how many Segments to buffer ahead of time. During live streaming, that also lets the player know how frequently to refresh the playlist file to discover new Segments. Finally the EXTINF headers for each Segment indicate the duration of the following .ts Segment file and the relative paths of the video#.ts tell the player where to load the actual media files from. Where’s the Actual Media? At this point, the video player has loaded two .m3u8 playlist files and got lots of metadata about how to play the video but it hasn’t actually loaded any media files. The .ts files referenced in the Media Playlist are where the real media is, so if we wanted to control the playlists but let the CDN handle serving actual media, we can just redirect those video#.ts requests to our CDN. .ts files are Transport Stream MPEG-2 encoded short media files that can contain video or audio and video. Tracking Views To track views of our HLS streams, we can leverage the fact that every video player must first load the Master Playlist. When a user requests the Master Playlist, we can modify the results dynamically to provide a SessionID to each response and allow us to track the user session without cookies or headers: #EXTM3U #EXT-X-VERSION:3 #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360 360p/video.m3u8?session_id=12345 #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720 720p/video.m3u8?session_id=12345 Now when their video player fetches the Media Playlists, it’ll include a query-string that we can use to identify the streaming session, ensuring we don’t double-count views on the video and can track which Segments of video were loaded in the session. #EXTM3U #EXT-X-VERSION:3 #EXT-X-PLAYLIST-TYPE:VOD #EXT-X-MEDIA-SEQUENCE:0 #EXT-X-TARGETDURATION:4 #EXTINF:4.000, video0.ts?session_id=12345&duration=4 #EXTINF:4.000, video1.ts?session_id=12345&duration=4 #EXTINF:4.000, video2.ts?session_id=12345&duration=4 #EXTINF:4.000, video3.ts?session_id=12345&duration=4 #EXTINF:4.000, video4.ts?session_id=12345&duration=4 #EXTINF:2.800, video5.ts?session_id=12345&duration=2.8 Finally when the video player fetches the media Segment files, we can measure the Segment view before we redirect to our CDN with a 302, allowing us to know the amount of video-seconds loaded in the session and which Segments were loaded. This method has limitations, namely that a media player loading a segment doesn’t necessarily mean it showed that segment to the viewer, but it’s the best we can do without an instrumented media player. 
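A minimal sketch of the segment-tracking redirect described above might look like the following; the CDN origin, logging destination, and route shape are assumptions for illustration.

package main

import (
	"log"
	"net/http"
)

const cdnBase = "https://cdn.example.com/videos" // placeholder CDN origin

// handleSegment records which Segment a session loaded and how many
// video-seconds it represents, then 302-redirects the player to the CDN.
func handleSegment(w http.ResponseWriter, r *http.Request) {
	sessionID := r.URL.Query().Get("session_id")
	duration := r.URL.Query().Get("duration")

	// In a real system this would be an async write to a metrics pipeline.
	log.Printf("session=%s segment=%s seconds=%s", sessionID, r.URL.Path, duration)

	http.Redirect(w, r, cdnBase+r.URL.Path, http.StatusFound)
}

func main() {
	// e.g. GET /video_15/720p/video3.ts?session_id=12345&duration=4
	http.HandleFunc("/", handleSegment)
	log.Fatal(http.ListenAndServe(":8080", nil))
}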
Adding Subtitles Subtitles are included in the Master Playlist as a variant and then are referenced in each of the video variants to let the player know where to load subs from. #EXTM3U #EXT-X-VERSION:3 #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="en_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="en",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/en.m3u8" #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=688540,CODECS="avc1.64001e,mp4a.40.2",RESOLUTION=640x360,SUBTITLES="subs" 360p/video.m3u8 #EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=1921217,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720,SUBTITLES="subs" 720p/video.m3u8 Just like with the video Media Playlists, we need a Media Playlist file for the subtitle track as well so that the player knows where to load the source files from and what duration of the stream they cover. $ curl https://my.video.host.com/video_15/subtitles/en.m3u8 #EXTM3U #EXT-X-VERSION:3 #EXT-X-MEDIA-SEQUENCE:0 #EXT-X-TARGETDURATION:22.8 #EXTINF:22.800, en.vtt In this case, since we’re only serving a short video, we can just provide a single Segment that points at a WebVTT subtitle file encompassing the entire duration of the video. If you crack open the en.vtt file you’ll see something like: $ curl https://my.video.host.com/video_15/subtitles/en.vtt WEBVTT 00:00.000 --> 00:02.000 According to all known laws of aviation, 00:02.000 --> 00:04.000 there is no way a bee should be able to fly. 00:04.000 --> 00:06.000 Its wings are too small to get its fat little body off the ground. ... The media player is capable of reading WebVTT and presenting the subtitles at the right time to the viewer. For longer videos you may want to break up your VTT files into more Segments and update the subtitle Media Playlist accordingly. To provide multiple languages and versions of subtitles, just add more EXT-X-MEDIA:TYPE=SUBTITLES lines to the Master Playlist and tweak the NAME, LANGUAGE (if different), and URI of the additional subtitle variant definitions. #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="en_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="en",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/en.m3u8" #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="fr_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="fr",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/fr.m3u8" #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="ja_subtitle",DEFAULT=NO,AUTOSELECT=yes,LANGUAGE="ja",FORCED="no",CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog",URI="subtitles/ja.m3u8" Appending a Trailer For branding purposes (and in other applications, for advertising purposes), it can be helpful to insert Segments of video into a playlist to change the content of the video without requiring the content be appended to and re-encoded with the source file. 
Thankfully, HLS allows us to easily insert Segments into the Media Playlist using this one neat trick:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:4
#EXTINF:4.000,
video0.ts
#EXTINF:4.000,
video1.ts
#EXTINF:4.000,
video2.ts
#EXTINF:4.000,
video3.ts
#EXTINF:4.000,
video4.ts
#EXTINF:2.800,
video5.ts
#EXT-X-DISCONTINUITY
#EXTINF:3.337,
trailer0.ts
#EXTINF:1.201,
trailer1.ts
#EXTINF:1.301,
trailer2.ts
#EXT-X-ENDLIST

In this Media Playlist we use HLS’s EXT-X-DISCONTINUITY header to let the video player know that the following Segments may be in a different bitrate, resolution, and aspect-ratio than the preceding content. Once we’ve provided the discontinuity header, we can add more Segments just like normal that point at a different media source broken up into .ts files.

Remember, HLS allows us to use relative or absolute paths here, so we could provide a full URL for these trailer#.ts files, or virtually route them so they can retain the path context of the currently viewed video. Note that we don’t strictly need the discontinuity header here, and we could also name the trailer files something like video{6-8}.ts if we wanted to, but for clarity and proper player behavior, it’s best to use the discontinuity header if your trailer content doesn’t match the bitrate and resolution of the other video Segments.

When the video player goes to play this media, it will continue from video5.ts to trailer0.ts without missing a beat, making it appear as if the trailer is part of the original video. This approach allows us to dynamically change the contents of the trailer for all videos, heavily cache the trailer .ts Segment files for performance, and avoid having to encode the trailer onto the end of every video source file.

Conclusion

At the end of the day, we’ve now got a video streaming service capable of tracking views and watch session durations, dynamic closed caption support, and branded trailers to help grow the platform.

HLS is not a terribly complex protocol. The vast majority of it is human-readable plaintext files, and it’s easy to inspect in the wild to see how it’s used in production. When I started this project, I knew next to nothing about the protocol but was able to download some .m3u8 files and get digging to discover how the protocol worked, then build my own implementation of an HLS server to accommodate the video streaming needs of Bluesky.

To learn more about HLS, you can check out the official RFC here which describes all the features discussed above and more. I hope this post encourages you to go explore other protocols you use every day by poking at them in the wild, downloading the files your browser interprets for you, and figuring out how simple some of these apparently “complex” systems are.

If you’re interested in solving problems like these, take a look at our open Job Recs. If you have any questions about HLS, Bluesky, or other distributed, @scale social media infrastructure, you can find me on Bluesky here and you can discuss this post here.

a year ago 20 votes
An entire Social Network in 1.6GB (GraphD Part 2)

In Part 1 of this series, we tried to answer the question “who do you follow who also follows user B” in Bluesky, a social network with millions of users and hundreds of millions of follow relationships. At the conclusion of the post, we’d developed an in-memory graph store for the network that uses HashMaps and HashSets to keep track of the followers of every user and the set of users they follow, allowing bidirectional lookups, intersections, unions, and other set operations for combining social graph data. I received some helpful feedback after that post where several people pointed me towards Roaring Bitmaps as a potential improvement on my implementation. They were right, Roaring Bitmaps would be an excellent fit for my Graph service, GraphD, and could also provide me with a much needed way to quickly persist and load the Graph data to and from disk on startup, hopefully reducing the startup time of the service. What are Bitmaps? If you just want to dive into the Roaring Bitmap spec, you can read the paper here, but it might be easier to first talk about bitmaps in general. You can think of a bitmap as a vector of one-bit values (like booleans) that let you encode a set of integer values. For instance, say we have 10,000 users on our website and want to keep track of which users have validated their email addresses. We could do this by creating a list of the uint32 user IDs of each user, in which case if all 10,000 users have validated their emails we’re storing 10k * 32 bits = 40KB. Or, we could create a vector of single-bit values that’s 10,000 bits long (10k / 8 = 1.25KB), then if a user has confirmed their email we can set the value at the index of their UID to 1. If we want to create a list of all the UIDs of validated accounts, we can walk the vector and record the index of each non-zero bit. If we want to check if user n has validated their email, we can do a O(1) lookup in the bitmap by loading the bit at index n and checking if it’s set. When Bitmaps get Big and Sparse Now when talking about our social network problem, we’re dealing with a few more than 10,000 UIDs. We need to keep track of 5.5M users and whether or not the user follows or is followed by any of the other 5.5M users in the network. To keep a bitmap of “People who follow User A”, we’re going to need 5.5M bits which would require (5.5M / 8) ~687KB of space. If we wanted to keep bitmaps of “People who follow User A” and “People who User A follows”, we’d need ~1.37MB of space per user using a simple bitmap, meaning we’d need 5,500,000 * 1.37MB = ~7.5 Terabytes of space! Clearly this isn’t an improvement of our strategy from Part 1, so how can we make this more efficient? One strategy for compressing the bitmap is to take consecutive runs of 0’s or 1’s (i.e. 00001110000001) in the bitmap and turn them into a number. For instance if we had an account that followed only the last 100 accounts in our social network, the first 5,499,900 indices in our bitmap would be 0’s and so we could represent the bitmap by saying: 5,499,900 0's, then 100 1's which you notice I’ve written here in a lot fewer than 687KB and a computer could encode using two uint32 values plus two bits (one indicator bit for the state of each run) for a total of 66 bits. This strategy is called Run Length Encoding (RLE) and works pretty well but has a few drawbacks: mainly if your data is randomly and heavily populated, you may not have many consecutive runs (imagine a bitset where every odd bit is set and every even bit is unset). 
Also lookups and evaluation of the bitset requires walking the whole bitset to figure out where the index you care about lives in the compressed format. Thankfully there’s a more clever way to compress bitmaps using a strategy called Roaring Bitmaps. A brief description of the storage strategy for Roaring Bitmaps from the official paper is as follows: We partition the range of 32-bit indexes ([0, n)) into chunks of 2^16 integers sharing the same 16 most significant digits. We use specialized containers to store their 16 least significant bits. When a chunk contains no more than 4096 integers, we use a sorted array of packed 16-bit integers. When there are more than 4096 integers, we use a 2^16-bit bitmap. Thus, we have two types of containers: an array container for sparse chunks and a bitmap container for dense chunks. The 4096 threshold insures that at the level of the containers, each integer uses no more than 16 bits. These bitmaps are designed to support both densely and sparsely distributed data and can provide high performance binary set operations (and/or/etc.) operating on the containers within two or more bitsets in parallel. For more info on how Roaring Bitmaps work and some neat diagrams, check out this excellent primer on Roaring Bitmaps by Vikram Oberoi. So, how does this help us build a better graph? GraphD, Revisited with Roaring Bitmaps Let’s get back to our GraphD Service, this time in Go instead of Rust. For each user we can keep track of a struct with two bitmaps: type FollowMap struct { followingBM *roaring.Bitmap followingLk sync.RWMutex followersBM *roaring.Bitmap followersLk sync.RWMutex } Our FollowMap gives us a Roaring Bitmap for both the set of users we follow, and the set of users who follow us. Adding a Follow to the graph just requires we set the right bits in both user’s respective maps: // Note I've removed locking code and error checks for brevity func (g *Graph) addFollow(actorUID, targetUID uint32) { actorMap, _ := g.g.Load(actorUID) actorMap.followingBM.Add(targetUID) targetMap, _ := g.g.Load(targetUID) targetMap.followersBM.Add(actorUID) } Even better if we want to compute the intersections of two sets (i.e. the people User A follows who also follow User B) we can do so in parallel: // Note I've removed locking code and error checks for brevity func (g *Graph) IntersectFollowingAndFollowers(actorUID, targetUID uint32) ([]uint32, error) { actorMap, ok := g.g.Load(actorUID) targetMap, ok := g.g.Load(targetUID) intersectMap := roaring.ParAnd(4, actorMap.followingBM, targetMap.followersBM) return intersectMap.ToArray(), nil } Storing the entire graph as Roaring Bitmaps in-memory costs us around 6.5GB of RAM and allows us to perform set intersections between moderately large sets (with hundreds of thousands of set bits) in under 500 microseconds while serving over 70k req/sec! And the best part of all? We can use Roaring’s serialization format to write these bitmaps to disk or transfer them over the network. Storing 164M Follows in 1.6GB In the original version of GraphD, on startup the service would read a CSV file with an adjacency list of the (ActorDID, TargetDID) pairs of all follows on the network. This required creating a CSV dump of the follows table, pausing writes to the follows table, then bringing up the service and waiting 5 minutes for it to read the CSV file, intern the DIDs as uint32 UIDs, and construct the in-memory graph. This process is slow, pauses writes for 5 minutes, and every time our service restarts we have to do it all over again! 
With Roaring Bitmaps, we’re now given an easy way to effectively serialize a version of the in-memory graph that is many times smaller than the adjacency list CSV and many times faster to load. We can serialize the entire graph into a SQLite DB on the local machine where each row in a table contains:

(uid, DID, followers_bitmap, following_bitmap)

Loading the entire graph from this SQLite DB can be done in around ~20 seconds:

// Note I've removed locking code and error checks for brevity
rows, err := g.db.Query(`SELECT uid, did, following, followers FROM actors;`)
for rows.Next() {
	var uid uint32
	var did string
	var followingBytes []byte
	var followersBytes []byte

	rows.Scan(&uid, &did, &followingBytes, &followersBytes)

	followingBM := roaring.NewBitmap()
	followingBM.FromBuffer(followingBytes)

	followersBM := roaring.NewBitmap()
	followersBM.FromBuffer(followersBytes)

	followMap := &FollowMap{
		followingBM: followingBM,
		followersBM: followersBM,
		followingLk: sync.RWMutex{},
		followersLk: sync.RWMutex{},
	}

	g.g.Store(uid, followMap)
	g.setUID(did, uid)
	g.setDID(uid, did)
}

While the service is running, we can also keep track of the UIDs of actors who have added or removed a follow since the last time we saved the DB, allowing us to periodically flush changes to the on-disk SQLite only for bitmaps that have updated. Syncing our data every 5 seconds while tailing the production firehose takes 2ms and writes an average of only ~5MB to disk per flush.

The crazy part of this is, the on-disk representation of our entire follow network is only ~1.6GB! Because we’re making use of Roaring’s compressed serialized format, we can turn the ~6.5GB of in-memory maps into 1.6GB of on-disk data. Our largest bitmap, the followers of the bsky.app account with over 876k members, becomes ~500KB as a blob stored in SQLite.

So, to wrap up our exploration of Roaring Bitmaps for first-degree graph databases, we saw:

A ~20% reduction in resident memory size compared to HashSets and HashMaps
A ~84% reduction in the on-disk size of the graph compared to an adjacency list
A ~93% reduction in startup time compared to loading from an adjacency list
A ~66% increase in throughput of worst-case requests under load
A ~59% reduction in p99 latency of worst-case requests under load

My next iteration on this problem will likely be to make use of DGraph’s in-memory Serialized Roaring Bitmap library that allows you to operate on fully-compressed bitmaps so there’s no need to serialize and deserialize them when reading from or writing to disk. It probably results in significant memory savings as well!

If you’re interested in solving problems like these, take a look at our open Backend Developer Job Rec. You can find me on Bluesky here, you can chat about this post here.
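To complement the loading code above, the periodic flush in the other direction might look roughly like this. It builds on the Graph/FollowMap types from the post; the dirty-UID bookkeeping, the getDID helper, and the upsert statement are assumptions for illustration, with locking and batching omitted in the post's style.

// Note: locking and error batching omitted for brevity; helper names are illustrative.
// flushDirty persists only the bitmaps that changed since the last sync.
func (g *Graph) flushDirty(dirty []uint32) error {
	tx, err := g.db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	stmt, err := tx.Prepare(`INSERT INTO actors (uid, did, following, followers)
		VALUES (?, ?, ?, ?)
		ON CONFLICT(uid) DO UPDATE SET following = excluded.following, followers = excluded.followers;`)
	if err != nil {
		return err
	}
	defer stmt.Close()

	for _, uid := range dirty {
		fm, ok := g.g.Load(uid)
		if !ok {
			continue
		}
		// Roaring's compressed serialized format, readable later via FromBuffer.
		followingBytes, err := fm.followingBM.ToBytes()
		if err != nil {
			return err
		}
		followersBytes, err := fm.followersBM.ToBytes()
		if err != nil {
			return err
		}
		if _, err := stmt.Exec(uid, g.getDID(uid), followingBytes, followersBytes); err != nil {
			return err
		}
	}
	return tx.Commit()
}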

a year ago 22 votes

More in programming

My first year since coming back to Linux

It has been a year since I set up my System76 Merkaat with Linux Mint. In July of 2024 I migrated from ChromeOS, and the Merkaat has been my daily driver on the desktop. A year later I have nothing major to report, which is the point. Despite the occasional unplanned reinstallation, I have been enjoying the stability of Linux and just using the PC. This stability finally enabled me to burn bridges with mainstream operating systems and fully embrace Linux and open systems. I'm ready to handle the worst and get back to work. Just a few years ago the frustration of troubleshooting a broken system would have made me seriously consider the switch to a proprietary solution. But a year of regular use, with an ordinary mix of quiet moments and glitches, gave me the confidence to stop worrying and learn to love Linux.

18 hours ago 3 votes
Overanalyzing a minor quirk of Espressif’s reset circuit

The mystery In the previous article, I briefly mentioned a slight difference between the ESP-Prog and the reproduced circuit, when it comes to EN: Focusing on EN, it looks like the voltage level goes back to 3.3V much faster on the ESP-Prog than on the breadboard circuit. The grid is horizontally spaced at 2ms, so … Continue reading Overanalyzing a minor quirk of Espressif’s reset circuit → The post Overanalyzing a minor quirk of Espressif’s reset circuit appeared first on Quentin Santos.

19 hours ago 2 votes
What can agents actually do?

There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they’re a bit too abstract. Sort of like saying that your company could be much better if you merely adopted software. That’s certainly true, but it’s not a particularly helpful claim. This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and make the case that the potential of AI agents is equivalent to the potential of this generation of AI. By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company. How do agents work? At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic’s /v1/message with a prompt: How should I adopt LLMs in my company? That prompt is used to fill the LLM’s context window, which conditions the model to generate certain kinds of responses. This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result. Prompt engineering, or context engineering as it’s being called now, is deciding what to put into the context window to best generate the responses you’re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely. However, composing the perfect context window is very time intensive, benefiting from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, Who is going to become the next mayor of New York City?, then you are unsuited to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you’re asking the question to begin with! This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don’t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window. This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response. However, it’s important to clarify how “tool usage” actually works. An LLM does not actually call a tool. (You can skim OpenAI’s function calling documentation if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive: The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using. 
Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend The response from the API call with defined functions is either: Generated text as any other call to an LLM might provide A recommendation to call a specific tool with a specific set of parameters, e.g. an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response: [{ "type": "function_call", "name": "get_weather", "arguments": "{\"location\":\"Paris, France\"}" }] The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it’s been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it’s a premium only tool), it might check if the parameters match the user’s role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France). If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call’s context window. The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools’ output to the LLM to generate more context. What’s magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context. This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let’s think about three different scenarios: Flow control via rules has concrete rules about how tools can be used. Some examples: it might only allow a given tool to be used once in a given workflow (or a usage limit of a tool for each user, etc) it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds more than $100 require human approval) it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails) apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data) a tool to escalate to a human representative can only be called after five back and forths with the LLM agent Flow control via statistics can use statistics to identify and act on abnormal behavior: if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human if a user has used a tool more than 99% of other users, then you might want to reject usage for the rest of the day it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue. 
As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters. At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include: Building general context to add to context window, sometimes thought of as maintaining memory Initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc Periodically initiating workflows at a certain time, such as hourly review of incoming tickets Alright, we’ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are: Use an LLM to evaluate a context window and get a result Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response Manage flow control for tool usage via rules or statistical analysis Agents are software programs, and can do anything other software programs do Armed with these four capabilities, we’ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities. Use Case 1: Customer Support Agent One of the first scenarios that people often talk about deploying AI agents is customer support, so let’s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let’s set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact. Our approach might be: Allow tickets (or support chats) to flow into an AI agent Provide a variety of tools to the agent to support: Retrieving information about the user: recent customer support tickets, account history, account state, and so on Escalating to next tier of customer support Refund a purchase (almost certainly implemented as “refund purchase” referencing a specific purchase by the user, rather than “refund amount” to prevent scenarios where the agent can be fooled into refunding too much) Closing the user account on request Include customer support guidelines in the context window, describe customer problems, map those problems to specific tools that should be used to solve the problems Flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to human) and some degree of random sampling Review hourly, then daily, and then weekly metrics of agent performance Based on your learnings from the metric reviews, you should set baselines for alerts which require more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodical review. 
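To make the flow-control rules above concrete, here is a tiny sketch of the kind of guard the agent program (not the LLM) might run before honoring a suggested refund tool call; the thresholds and type names are invented for illustration and don't reflect any particular framework's API.

package agent

// ToolCall is the structured suggestion returned by the LLM API
// (mirroring the function_call shape shown earlier).
type ToolCall struct {
	Name string
	Args map[string]any
}

type decision int

const (
	allow decision = iota
	deny
	escalateToHuman
)

// guardRefund applies rules-based flow control before the agent executes a refund.
// The agent, not the LLM, is the component that enforces these rules.
func guardRefund(call ToolCall, userRefundsToday int, amount float64) decision {
	if call.Name != "refund_purchase" {
		return deny // this guard only covers the refund tool
	}
	if userRefundsToday >= 3 {
		return escalateToHuman // unusual frequency: hand off to a person
	}
	if amount > 100 {
		return escalateToHuman // refunds over $100 require human approval
	}
	return allow
}

Statistical checks (refund size versus the 99th percentile for similar orders, tool-usage frequency versus other users) would layer on top of a rules-based guard like this.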
Note that even when you’ve moved “Customer Support to AI agents”, you still have: a tier of human agents dealing with the most complex calls humans reviewing the periodic performance statistics humans performing quality control on AI agent-customer interactions You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn’t how they work, it’s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones. Applied with care, the above series of actions will work successfully. However, it’s important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent. Use Case 2: Triaging incoming bug reports When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it’s potentially quite severe, then you want on-call engineers immediately investigating; if it’s certainly not severe, then you want to triage it in a less urgent process of some sort. It’s interesting to think about how an AI agent might support this triaging workflow. The process might work as follows: Pipe all created incidents and all created tickets to this agent for review. Expose these tools to the agent: Open an incident Retrieve current incidents Retrieve recently created tickets Retrieve production metrics Retrieve deployment logs Retrieve feature flag change logs Toggle known-safe feature flags Propose merging an incident with another for human approval Propose merging a ticket with another ticket for human approval Redundant LLM providers for critical workflows. If the LLM provider’s API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first, if unavailable try OpenAI), and then finally create an incident that the triaging mechanism is unavailable. For critical workflows, we can’t simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues. Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow. Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it’s high priority, open an incident. If it’s low priority, create a ticket. Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g. 
This is another AI agent that will absolutely work, as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation required to make this flow work includes:

- The role of humans in incident response and review will remain significant; they are merely aided by this agent. This is especially true of the review process, which an agent cannot replace, because the review is about actively learning what to change based on the incident. You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low-risk changes (e.g. a copy change) being automatically added to a ticket for human approval. Doing this for more complex or riskier changes is possible, but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea that you “just add agents and things get easy.” Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL. (A toy illustration of this idea follows this list.)
- Expanding the list of known-safe feature flags to make more incidents remediable. Doing this widely will require enforcing very specific requirements for how software is developed. Even doing it narrowly will require changes to ensure the known-safe feature flags remain safe as the software evolves.
- Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, MTTR should decrease. If the agent isn’t driving a reduction in MTTR, then something is rotten in the details of the implementation.
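To make the DSL point concrete, here is a toy illustration, not the startup’s actual language: the LLM may only emit statements in a tiny grammar over an allow list of feature flags, and anything that fails validation is rejected before it can touch production.

```python
import re

# Toy DSL, purely illustrative: each line must be "<enable|disable> <flag_name>",
# and the flag must already be on an allow list of known-safe flags.
KNOWN_FLAGS = {"deep_search_pagination", "image_previews"}   # hypothetical flags
STATEMENT = re.compile(r"^(enable|disable)\s+([a-z_]+)$")

def validate_program(program: str) -> list[tuple[str, str]]:
    """Parse an LLM-generated program, rejecting anything outside the tiny grammar."""
    statements = []
    for raw in program.strip().splitlines():
        line = raw.strip()
        match = STATEMENT.match(line)
        if not match:
            raise ValueError(f"rejected: unparseable statement {line!r}")
        action, flag = match.groups()
        if flag not in KNOWN_FLAGS:
            raise ValueError(f"rejected: unknown feature flag {flag!r}")
        statements.append((action, flag))
    return statements

# validate_program("disable deep_search_pagination")  -> [("disable", "deep_search_pagination")]
# validate_program("DROP TABLE users")                -> raises ValueError
```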
Even a very effective agent doesn’t relieve you of the responsibility for careful system design. Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they’ll only amplify your problems even more widely.

Do AI Agents Represent the Entirety of this Generation of AI?

If you accept my definition that AI agents are any combination of LLMs and software, then I think it’s true that there’s not much this generation of AI can express that doesn’t fit this definition. I’d readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better one. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.

Closing thoughts

LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work. Software isn’t magic, it’s very logical, but what it can accomplish is magical. The same goes for agents and LLMs. The more we can accelerate that learning curve, the better for our industry.

Can tinygrad win?

This is not going to be a cakewalk like self driving cars. Most of comma’s competition is now out of business, taking billions and billions of dollars with it. Re: Tesla and FSD, we always expected Tesla to have the lead, but it’s not a winner-take-all market; it will look more like iOS vs Android. comma has been around for 10 years, is profitable, and is now growing rapidly. In self driving, most of the competition wasn’t even playing the right game.

This isn’t how it is for ML frameworks. tinygrad’s competition is playing the right game, is open source, and is run by some quite smart people. But this is my second startup, so hopefully taking a bit more risk is appropriate. For comma to win, all it would take is people in 2016 being wrong about LIDAR, mapping, end-to-end, and hand coding, which hopefully we all now agree they were. For tinygrad to win, it requires something much deeper to be wrong about software development in general.

As it stands now, tinygrad is 14,556 lines. Line count is not a perfect proxy for complexity, but when you have differences of multiple orders of magnitude, it might mean something. I asked ChatGPT to estimate the lines of code in PyTorch, JAX, and MLIR:

- JAX: ~400k
- MLIR: ~950k
- PyTorch: ~3,300k

They range from one to two orders of magnitude larger than tinygrad. And that isn’t even counting all the libraries and drivers the other frameworks rely on: CUDA, cuBLAS, Triton, nccl, LLVM, etc. tinygrad includes every single piece of code needed to drive an AMD RDNA3 GPU except for LLVM, and we plan to remove LLVM in a year or two as well.

But so what? Why does line count matter? One hypothesis is that tinygrad is only smaller because it’s not speed or feature competitive, and that if and when it becomes competitive, it will also be that many lines. But I just don’t think that’s true. tinygrad is already feature competitive, and for speed, I think the bitter lesson also applies to software.

When you look at the machine learning ecosystem, you realize it’s just the same problems over and over again. Multi-machine, multi-GPU, multi-SM, multi-ALU, cross-machine memory scheduling, DRAM scheduling, SRAM scheduling, register scheduling: it’s all the same underlying problem at different scales. And yet, in all the current ecosystems, there are completely different codebases and libraries at each scale. I don’t think this stands. I suspect there is a simple formulation of the problem underlying all of the scheduling. Of course, this problem will be in NP and hard to optimize, but I’m betting the bitter lesson wins here.

The goal of the tinygrad project is to abstract away everything except the absolute core problem in the cleanest way possible. This is why we need to replace everything. A model for the hardware is simple compared to a model for CUDA. If we succeed, tinygrad will not only be the fastest NN framework, but it will be under 25k lines all in, GPT-5-scale training job to MMIO on the PCIe bus!

Here are the steps to get there:

- Expose the underlying search problem spanning several orders of magnitude. Because the execution of neural networks is not data dependent, this problem is very amenable to search.
- Make sure your formulation is simple and complete. Fully capture all dimensions of the search space. The optimization goal is simple: run faster.
- Apply the state of the art in search. Burn compute. Use LLMs to guide. Use SAT solvers. Use reinforcement learning. It doesn’t matter; there’s no way to cheat this goal. Just see if it runs faster.
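To make the “just see if it runs faster” framing concrete, here is a toy sketch of the idea, not tinygrad’s actual code: enumerate candidate schedules built from a handful of optimization actions, score each by measured runtime, and keep the fastest. The action names and the cost model are made-up stand-ins.

```python
from typing import Callable, Sequence

# Stand-in optimization actions: in a real framework these would be choices like
# tiling sizes, unroll factors, or which level of the memory hierarchy a buffer
# lives in, and the timing function would compile the kernel and measure
# wall-clock time on the actual GPU.
ACTIONS = ["unroll_2", "unroll_4", "tile_16", "tile_32", "vectorize"]

def search_schedules(time_candidate: Callable[[Sequence[str]], float],
                     beam_width: int = 4, depth: int = 3) -> Sequence[str]:
    """Greedy beam search over action sequences; keep whatever runs fastest."""
    beam = [()]                      # start from the unoptimized schedule
    best = beam[0]
    for _ in range(depth):
        expanded = [cand + (action,) for cand in beam for action in ACTIONS]
        expanded.sort(key=time_candidate)        # the only goal: run faster
        beam = expanded[:beam_width]
        if time_candidate(beam[0]) < time_candidate(best):
            best = beam[0]
    return best

# Stand-in cost model so the sketch runs without a GPU; replace with a real
# "compile, run, measure" function.
def fake_time(candidate: Sequence[str]) -> float:
    return 1.0 / (1 + len(set(candidate))) + 0.01 * len(candidate)

if __name__ == "__main__":
    print(search_schedules(fake_time))   # prints the fastest sequence under the toy model
```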
If this works, not only do we win with tinygrad, but hopefully people begin to rethink software in general. Of course, it’s a big if; this isn’t like comma, where it was hard to lose. But if it wins…

The main thing to watch is development speed. Our bet has to be that tinygrad’s development speed outpaces the others’. We have the AMD contract to train LLaMA 405B as fast as NVIDIA, due in a year; let’s see if we succeed.

Do You Even Personalize, Bro?

There’s a video on YouTube from “Technology Connections” (who I’ve never heard of or watched until now) called Algorithms are breaking how we think. I learned of this video from Gedeon Maheux of The Iconfactory fame. Speaking in the context of why they made Tapestry, he said the ideas in this video would be their manifesto. So I gave it a watch.

Generally speaking, the video asks: does anyone care to have a self-directed experience online, or with a computer more generally?

“I'm not sure how infrequently we’re actually deciding for ourselves these days [how we decide what we want to see, watch, and do on the internet].”

“Ironically we spend more time than ever on computing devices, but less time than ever curating our own experiences with them.”

Which, again ironically, is the inverse of many things in our lives. Generally speaking, the more time we spend with something, the more we invest in making it our own, customizing it to our own idiosyncrasies. But how much time do you spend curating, customizing, and personalizing your digital experience? (If you’re reading this in an RSS reader, high five!)

I’m not talking about “I liked that post, or saved that video, so the algorithm is personalizing things for me.” Do you know what to get yourself more of? Do you know where to find it? Do you even ask yourself these questions?

“That sounds like too much work,” you might say. And you’re right, it is work. As the guy in the video says:

“I'm one of those weirdos who think the most rewarding things in life take effort.”

Me too.
