Today, on the tenth anniversary of Overcast 1.0, I’m happy to launch a complete rewrite and redesign of most of the iOS app, built to carry Overcast into the next decade — and hopefully beyond.

Like podcasts better than blog posts? Listen to ATP #596 for more!

What’s new

- Much faster, more responsive, more reliable, and more accessible.
- Modern design, optimized for easily reached controls on today’s phone sizes.
- Improvements throughout, such as undoing large seeks, new playlist-priority options, easier navigation, and more.

What’s not

- Most features. Overcast is still Overcast!
- The audio engine. It’s the best part of Overcast, and it still leads the industry in sound quality, silence skipping, and volume normalization. (More soon!)
- The business. I’m still a one-person operation, with no funding or external ownership, serving only my customers.
- My principles. I always want to make the best podcast app, and I’ll never disrespect your time, attention, or privacy.

What’s gone

Streaming. Most big podcasts now use dynamic ad insertion, which causes bugs and problems for streaming playback.[1] Downloading episodes completely before they begin playback is much more reliable.

Tapping a non-downloaded episode will now open the playback screen, download it, then start playback. It works much like streaming did before, except that playback begins after the download completes, not after a portion of it is buffered. On today’s fast networks, this usually takes only a few extra seconds.

And in the near future, I’ll be adding smarter options and more control over selective downloading of episodes, to further improve the experience for people who don’t automatically download every episode.

What’s next

- The last few missing features from the old app, such as Shortcuts support, storage management, and OPML. These are absent now, but will return soon.
- More options for downloading and deleting episodes.
- Upgrading the Apple Watch app to the new, faster sync engine. (The Watch app is currently unchanged from the previous version.)
- And, of course, more features, including some of your most-requested features from the last decade.

Getting this rewrite out the door was a monumental task. Thank you for your patience as I work through this list!

Why?

Most of Overcast’s core code was 10 years old, which made it cumbersome or impossible to move with the times, adopt new iOS functionality, or add new features, especially as one person. That’s why there haven’t been many new features or changes in years. You saw it, and I saw it. I wasn’t able to serve my customers as well as I wanted.

For Overcast to have a future, it needed a modern foundation for its second decade. I’ve spent the past 18 months rebuilding most of the app with Swift, SwiftUI, Blackbird, and modern Swift concurrency. Now, development is rapidly accelerating. I’m more responsive, iterating more quickly, and ultimately making the app much better.

Thank you all so much for the first decade of Overcast. Here’s to the next one.

[1] Dynamic ad insertion (DAI) splices ads into each download, and no two downloads are guaranteed to have the same number or duration of ads. So, for example, if the first half of an episode downloads, the download fails, and the second half is fetched with another request, the combined audio may jump forward or back at the halfway mark, losing or repeating content.
Overcast’s latest update (2022.2) brings the largest redesign in its nearly-eight-year history, plus many of the most frequently requested features and lots of under-the-hood improvements. I’m pretty proud of this one.

For this first and largest phase of the redesign, I focused on the home screen, playlist screen, typography, and spacing. (I plan to revamp the now-playing and individual-podcast screens in a later update.)

The home screen is radically different:

Home screen, before (left) and after (right).

- Playlists now have strong visual identities for nicer and easier navigation. Each playlist has a customizable color, and a custom icon can be selected from over 3,000 SF Symbols to match modern iOS design and the other icons within Overcast. Playlists can also be manually reordered with drag-and-drop.
- Recently played and newly published episodes can now be displayed on the home screen for quick access, much like the widget and CarPlay experiences.
- Podcasts can now be pinned to the top of the home-screen list. Pinned podcasts can also be manually reordered with drag-and-drop.

I’ve also rethought the old stacked “Podcasts” and “Played Podcasts” sections to better match people’s needs and expectations. Now, the toggle atop the podcast list switches between three modes: podcasts with current episodes, all followed podcasts, and inactive podcasts — those that you don’t follow (and therefore won’t get any more episodes from), or that haven’t posted a new episode in a long time.

The playlist screen’s structure remains mostly the same, with the design refined for the modern era:

Playlist screen, before (left) and after (right).

Here, it’s more apparent that I’ve replaced the system San Francisco font with an alternate variant, San Francisco Rounded, to increase legibility and better match the personality of the app.

I’ve also added highly demanded features:

- By far, Overcast’s most-requested feature is Mark as Played. That’s now available as a checkmark button on episode rows, as well as a left-side swipe action.
- The second-most-requested feature is a way to view all starred episodes. Special playlists for Starred, Downloaded, and In Progress can now be created.
- The light and dark themes now each have a customizable tint color from the modern iOS UI-color palette, including some favorites from beta testers.

And throughout the app, I’ve made tons of tweaks and bug fixes, including:

- Notifications and background downloads are now much more reliable.
- Episode downloads can now be individually deleted or re-downloaded.
- Links can now be opened in Safari (under Nitpicky Details).
- Performance is now significantly better with very large playlists and collections.
- Fixed bugs with episode-duration detection, CarPlay lists, Mac-app sharing, and much more.

So much is better in this update that I can’t even remember it all. Thank you so much to everyone who helped me beta-test this massive update.

As always, Overcast is free in the App Store. Go get it!
Losing Steve affected me more than it probably should have, given that I never met him or had any correspondence with him. But losing him was devastating — not just to my world, but to the world.

He was a sort of virtual father figure: I was always hoping that maybe Steve would notice something I did. We all wanted his attention and approval, and that drove us to do better work — even those of us who never worked at Apple. Nobody replaced him in this role. Nobody can.

But as an outsider with no personal relationship to mourn, what I find most depressing is to consider how much of his work the world missed out on. He wasn’t taken from us after a long, complete life — he was taken in his prime. He had so much more to offer the world.
After the dust settles from the developer class-action settlement, the South Korean law, the JFTC announcement, and the Apple v. Epic decision, I think the most likely long-term outcome isn’t very different from the status quo — and that’s a good thing.

Allowing external purchases

Here’s what I think we’ll end up with:

- Apple will still require apps to use their IAP system for any qualifying purchases that occur in the apps themselves.
- All app types will be allowed to link out to a browser for other purchase methods.
- Most apps will be required to also offer IAP side-by-side with any external methods.[1] Only “reader apps” will be exempt from this requirement.[2]
- Apple will have many rules regarding the display, descriptions, and behavior of external purchases, many of which will be unpublished and ever-changing. App Review will be extremely harsh, inconsistent, capricious, petty, and punitive in its enforcement.[3]
- Apple won’t require price-matching between IAP and external purchases.

These few but important corrections reduce Apple’s worst behavior and should relieve most regulatory pressure. The result won’t look much different from the status quo:

- Most big media apps (qualifying as “reader” apps) won’t offer IAP, but will finally be allowed to link to their websites from their apps and offer purchases there.
- Many games will offer both IAP and external purchases, with the external choice offering a discount, bonus gems, extra loot boxes, or other manipulative tricks to optimize the profitability of casino games for children (commissions from which have been the largest portion of Apple’s “services revenue” to date).
- Most importantly, many products, services, and business models will become possible that previously weren’t, leading to more apps, more competition, and more money going to more places.

External purchase methods will evolve to be almost as convenient as IAP (especially if Apple Pay is permitted in this context), and payment processors will reduce the burden of manual credit-card entry with shared credentials available across multiple apps.

The payment-fraud doomsday scenarios argued by Apple and many fans mostly won’t happen, in part because App Review will prevent the most obvious cases, but also because parents don’t typically hand their credit cards to untrustworthy children; and for buyers of all ages, most credit cards themselves provide stronger fraud prevention and easier recourse from unwanted charges than the App Store ever has.

No side-loading

I don’t expect side-loading or alternative app stores to become possible, and I’m relieved, because that is not a future I want for iOS.

When evaluating such ideas, I merely ask myself: “What would Facebook do?”

Facebook owns four of the top ten apps in the world. If side-loading became possible, Facebook could remove Instagram, WhatsApp, the Facebook app, and Messenger from Apple’s App Store, requiring customers to install these extremely popular apps directly from Facebook via side-loading. And everyone would.

Most people use a Facebook-owned app not because it’s a good app, but because it’s a means to an important end in their life. Social pressure, family pressure, and network lock-in prevent most users from seeking meaningful alternatives. People would jump through a few hoops if they had to. Facebook would soon have apps that bypassed App Review installed on the majority of iPhones in the world.

Technical limitations of the OS would prevent the most egregious abuses, but there’s a lot they could still do.
We don’t need to do much imagining — they’ve already attempted multiple hacks, workarounds, privacy invasions, and other unscrupulous, technically invasive behavior in their apps over time, to surveil user behavior outside of their apps and to stay running longer in the background than users intend or expect. The OS could evolve over time to reduce some of these vulnerabilities, but technical measures alone cannot address all of them.

Without the threat of App Review to keep them in check, Facebook’s apps would become even more monstrous than they already are. As a user and a fan of iOS, I don’t want any part of that.

No alternative app stores

Alternative app stores would be even worse.

Rather than offering individual apps via side-loading, Facebook could offer just one: the Facebook App Store. Instagram, WhatsApp, the Facebook app, and Messenger could all be available exclusively there. The majority of iOS users in the world would soon install it, and Facebook would start using leverage in other areas — apps’ social accounts, stats packages, app-install ads, ad-attribution requirements — to heavily incentivize (and likely strong-arm) a huge number of developers to offer their apps in the Facebook App Store, likely in addition to Apple’s.

Maybe I’d be required to add the Facebook SDK to my app in order to be in their store, which they would then use to surveil my users. Maybe I’d need to buy app-install ads just to show up in search there at all. Maybe I’d need to pay Facebook to “promote” each app update to reach more than a tiny percentage of my existing customers.

And Facebook wouldn’t even be the only app store likely to become a large player on iOS. Amazon would almost certainly bring their garbage “Appstore” to iOS, but at least that one probably wouldn’t go anywhere. Maybe Google would bring the Play Store to iOS and offer a unified SDK to develop a single codebase for iOS and Android, effectively making every app feel like an Android app and further marginalizing native apps when they’re already hurting. Media conglomerates that own many big-name properties, like Disney, might each have their own app stores for their high-profile apps. Running your own store means you can promote all of your own apps as much as you want. What giant corporation would resist?

Don’t forget games! Epic and Steam would come to iOS with their own game stores. Maybe Microsoft and Nintendo, too.

Maybe you’d need to install seven different app stores on your iPhone just to get the apps and games you already use — and all without App Review to keep them in check. Most developers would probably need to start submitting their apps to multiple app stores, each with its own rules, metadata, technical requirements, capabilities, approval delays, payment processing, stats, crash reports, ads, promotion methods, and user reviews.

As a user, a multiple-app-store world sounds like an annoying mess; as a developer, it terrifies me. Apple’s App Store is the devil we know. The most viable alternatives that would crop up would be far worse.

Course correction

The way Apple runs its business isn’t perfect, but it’s also not a democracy. I loved this part of Judge Yvonne Gonzalez Rogers’ decision in Apple v. Epic, as quoted in Ben Thompson’s excellent article (which you should read):

“Apple has not offered any justification for the actions other than to argue entitlement. Where its actions harm competition and result in supracompetitive pricing and profits, Apple is wrong.”
I interpret “entitlement” here without a negative connotation: Apple is entitled to run its platform mostly as it wishes, with governmental interference warranted only to fix market-scale issues that harm large segments of commerce or society. As a developer, I’d love to see more changes to Apple’s control over iOS. But it’s hard to make larger changes without potentially harming much of what makes iOS great for both users and developers.

Judge Gonzalez Rogers got it right: we needed a minor course correction to address the most egregiously anticompetitive behavior, but most of the way Apple runs iOS is best left to Apple.

[1] If the South Korean law holds, IAP may not be required — but only in South Korea. With this exception, I expect the rest of these rules to be enforced the same way globally.

[2] Apple defines “reader” apps as “[allowing] a user to access previously purchased content or content subscriptions (specifically: magazines, newspapers, books, audio, music, and video).” This includes many apps that Apple’s services compete with, such as Netflix, Spotify, and Kindle, which raise anticompetitive concerns among regulators and legislators when forced to give Apple 30%.

[3] App Review has higher-level queues for managerial review of controversial rules or edge cases, typically identifiable from the outside by an app stuck with “In Review” status for days or weeks, and often ending in a phone call from “Bill”. I’d expect any app offering external purchases to have a very high chance of being escalated to a slower, more pain-in-the-ass review process, possibly making it not worthwhile for most small developers to deal with. I have no plans to add external purchases to Overcast for multiple reasons, including this — but mostly because, for my purposes, I’m satisfied with Apple’s IAP system.
I've been running the Framework Desktop for a few months here in Copenhagen now. It's an incredible machine. It's completely quiet, even under heavy, stress-all-cores load. It's tiny too, at just 4.5L of volume, especially compared to my old beautiful-but-bulky North tower running the 7950X — yet it's faster! And finally, it's simply funky, quirky, and fun!

In some ways, the Framework Desktop is a curious machine. Desktop PCs are already very user-repairable! So why is Framework even bringing their talents to this domain? In the laptop realm, they're basically alone with that concept, but the desktop space is rather crowded already. Yet it somehow still makes sense. Partly because Framework has gone with the AMD Ryzen AI Max+ 395, which is technically a laptop CPU — you can find it in the ASUS ROG Flow Z13 and the HP ZBook Ultra. Which means it'll fit in a tiny footprint, and Framework apparently just wanted to see what they could do in that form factor.

They clearly had fun with it. Look at mine: there are 21 little tiles on the front that you can get in a bunch of different colors or with logos from Framework. Or you can 3D print your own! It's a welcome change in aesthetic from the brushed-aluminum or gamer-focused RGB approach that most of the competition is taking.

But let's cut to the benchmarks. That's really why you'd buy a machine like the Framework Desktop. There are significantly cheaper mini PCs available from Beelink and others, but so far, Framework has the only AMD 395+ unit on sale that's completely silent (the GMKtec very much is not, nor is the Flow Z13). And for me, that's just a dealbreaker. I can't listen to roaring fans anymore.

Here's the key benchmark for me: running our full HEY test suite. That's the only type of multi-core workload I really sit around waiting on these days, and the Framework Desktop absolutely crushes it. It's almost twice as fast as the Beelink SER8 and still a solid third faster than the Beelink SER9. Of course, it's also a lot more expensive, but you're clearly getting some multi-core bang for your buck here!

The difference is even more dramatic against the Macs: it's a solid 40% faster than the M4 Max and 50% faster than the M4 Pro!

Now some will say "that's just because Docker is faster on Linux," and they're not entirely wrong. Docker runs natively on Linux, so for this test, where the MySQL/Redis/ElasticSearch data stores run in Docker while Ruby and the app code run natively, that's part of the answer. Last I checked, it was about 25% of the difference. But so what? Docker is an integral part of the workflow for tons of developers. We use it to run different versions of MySQL, Redis, and ElasticSearch for different applications on the same machine at the same time. You can't really do that without Docker. This is what real-world benchmarks reveal.

It's not just about having a Docker advantage, though. The AMD 395+ is also incredibly potent in raw CPU performance. Those 16 Zen 5 cores run at 5.1GHz, and in Geekbench 6 multi-core they basically match the M4 Max, and come in a good chunk faster than the M4 Pro (as well as other AMDs and Intel's 14900K!). No wonder it's crazy quick with a full-core stress test like running the 30,000 assertions in our HEY test suite.

To be fair, the M4s are faster in single-core performance. Apple holds the crown there, by about 20%. And you'll see that in benchmarks like Speedometer, which mostly measures JavaScript single-core performance.
The Framework Desktop puts out 670 on Speedometer 2.1 vs 744 for the M4 Pro. On Speedometer 3.1, it's an even bigger relative difference: 35 vs 50. But I've found that all these computers feel fast enough in single-core performance these days. I can't actually feel the difference browsing on a machine that does 670 vs 744 on SP2.1. Hell, I can barely feel the difference between the SER8, which does 506, and the M4 Pro! The only time I actually feel like I'm waiting on anything is in multi-core workloads like the HEY test suite, and there the AMD 395+ is very near the fastest you can get in a consumer desktop machine today, at any price.

It gets even better when you bring price into the equation, though. The Framework Desktop with 64GB RAM + 2TB NVMe is $1,876. To get a Mac Studio with similar specs — M4 Max, 64GB RAM, 2TB NVMe — you'll spend nearly twice as much: $3,299! If you go for 128GB RAM, you'll spend $2,276 on the Framework but $4,099 on the Mac. And the Mac will still be way slower for development work using Docker! The Framework Desktop is simply a great deal.

Speaking of 64GB vs 128GB: I've been running the 64GB version, and I almost never get anywhere close to the limits. I think the highest I've seen in regular use is about 20GB of RAM in action. Linux is really efficient, especially when you're using a window manager like Hyprland, as we do in Omarchy.

The only reason you'd really want to go for the full 128GB of RAM is to run local LLM models. The AMD 395+ uses unified memory, like Apple's chips, so nearly all of it is addressable by the GPU. That means you can run monster models, like the new 120b gpt-oss from OpenAI. Framework has a video showing them pushing out 40 tokens/second doing just that. That seems about in range of the numbers I've seen from the M4 Max, which also appear to be in the 40-50 tokens/second range, but I'll defer to folks who benchmark local LLMs for the exact details.

I tried running the new gpt-oss-20b on my 64GB machine, though, and I wasn't exactly blown away by the accuracy. In fact, I'd say it was pretty bad. I mean, exceptionally cool that it's doable, but very far off the frontier models we have access to as SaaS. So personally, this isn't yet something I actually use all that much in day-to-day development. I want the best models running at full speed, and right now that means SaaS.

So if you just want the best small computer that runs Linux superbly well out of the box, you should buy the Framework Desktop. It's completely quiet, fantastically fast, and super fun to look at. But I think it's also fair to mention that you can get something like a Beelink SER9 for half the price! Yes, it's also only about 2/3 the performance in multi-core, but it's just as fast in single-core. Most developers could totally get away with the SER9 and barely notice what they were missing. But there are just as many people for whom the extra $1,000 is worth the price to run the test suite 40 seconds quicker! You know who you are.

Oh, before I close, I also need to mention that this thing is a gaming powerhouse. It basically punches about as hard as an RTX 4060! With an iGPU! That's kinda crazy. Totally new territory on the PC side for integrated graphics. ETA Prime has a video showing the same chip in the GMKtec running premier games at 1440p, high settings, at great frame rates. You can run most games under Linux these days too (thanks, Valve and Steam Deck!), but if you need to dual-boot with Windows, the dual NVMe slots in the Framework Desktop come in very handy.
Framework did good with this one. AMD really blew it out of the water with the 395+. We're spoiled to have such incredible hardware available for Linux at such appealing discounts over similar stuff from Cupertino. What a great time to love open source software and tinker-friendly hardware!
I was listening to a podcast interview with Jackson Browne (American singer/songwriter, political activist, and inductee into the Rock and Roll Hall of Fame), and the interviewer asks him how he approaches writing songs of social commentary and critique — something along the lines of: “How do you get from the New York Times headline on a social subject to the emotional heart of a song that matters to each individual?”

Browne discusses how, if you’re too subtle, people won’t know what you’re talking about; and if you’re too direct, you run the risk of making people feel like they’re being scolded. Here’s what he says about his songwriting:

“I want this to sound like you and I were drinking in a bar and we’re just talking about what’s going on in the world. Not as if you’re at some elevated place and lecturing people about something they should know about but don’t but [you think] they should care. You have to get to people where [they are, where] they do care and where they do know.”

I think that’s a great insight for anyone looking to have a connecting, effective voice. I know for me, it’s really easy to slide into a lecturing voice — you “should” do this and you “shouldn’t” do that. But I like Browne’s framing of trying to have an informal, conversational tone that meets people where they are. Like you’re discussing an issue in the bar, rather than listening to a sermon.

Chris Coyier is the canonical example of this that comes to mind. I still think of this post from CSS-Tricks where Chris talks about how to have submit buttons that go to different URLs:

“When you submit that form, it’s going to go to the URL /submit. Say you need another submit button that submits to a different URL. It doesn’t matter why. There is always a reason for things. The web is a big place and all that.”

He doesn’t conjure up some universally applicable, justified rationale for why he’s sharing this method. Nor is there any pontificating on why this is “good” or “bad”. Instead, like most of Chris’ stuff, I read it as a humble acknowledgement of the practicalities at hand: “Hey, the world is a big place. People have to do crafty things to make their stuff work. And if you’re in that situation, here’s something that might help what ails ya.”

I want to work on developing that kind of a voice, because I love reading voices like that.
Previously, I wrote some sketchy ideas for what I call a p-fast trie, which is basically a wide-fan-out variant of an x-fast trie. It allows you to find the longest matching prefix, or the nearest predecessor or successor, of a query string in a set of names in O(log k) time, where k is the key length. My initial sketch was more complicated and greedy for space than necessary, so here’s a simplified revision. (“p” now stands for prefix.)

layout

A p-fast trie stores a lexicographically ordered set of names. A name is a sequence of characters from some small-ish character set. For example, DNS names can be represented as a set of about 50 letters, digits, punctuation, and escape characters, usually one per byte of name. Names that are arbitrary bit strings can be split into chunks of 6 bits to make a set of 64 characters.

Every unique prefix of every name is added to a hash table. An entry in the hash table contains:

- A shared reference to the closest name lexicographically greater than or equal to the prefix. Multiple hash table entries will refer to the same name. (A reference to a name might instead be a reference to a leaf object containing the name.)
- The length of the prefix. To save space, each prefix is not stored separately, but implied by the combination of the closest name and the prefix length.
- A bitmap with one bit per possible character, corresponding to the next character after this prefix. For every other prefix that matches this prefix and is one character longer, a bit is set in the bitmap corresponding to the last character of the longer prefix.

search

The basic algorithm is a longest-prefix match. Look up the query string in the hash table. If there’s a match, great, done. Otherwise proceed by binary chop on the length of the query string (there’s a Python sketch of this below, after the predecessor outline):

- If the prefix isn’t in the hash table, reduce the prefix length and search again. (If the empty prefix isn’t in the hash table, then there are no names to find.)
- If the prefix is in the hash table, check the next character of the query string in the bitmap. If its bit is set, increase the prefix length and search again. Otherwise, this prefix is the answer.

predecessor

Instead of putting leaf objects in a linked list, we can use a more complicated search algorithm to find the names lexicographically closest to the query string. It’s tricky because a longest-prefix match can land in the wrong branch of the implicit trie. Here’s an outline of a predecessor search; successor requires more thought.

During the binary chop, when we find a prefix in the hash table, compare the complete query string against the complete name that the hash table entry refers to (the closest name greater than or equal to the common prefix). If the name is greater than the query string, we’re in the wrong branch of the trie, so reduce the length of the prefix and search again. Otherwise, search the set bits in the bitmap for the one corresponding to the greatest character less than the query string’s next character; if there is one, remember it and the prefix length. This will be the top of the sub-trie containing the predecessor, unless we find a longer match. If the next character’s bit is set in the bitmap, continue searching with a longer prefix, else stop.

When the binary chop has finished, we need to walk down the predecessor sub-trie to find its greatest leaf. This must be done one character at a time – there’s no shortcut.
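As a concrete illustration, here’s a minimal Python (3.10+) sketch of the layout and the longest-prefix binary chop, plus the one-character-at-a-time walk to a sub-trie’s greatest leaf used at the end of a predecessor search. The names (Entry, build, longest_prefix_match, greatest_leaf) are invented for the sketch, and it deliberately wastes the space the real layout saves: the hash table is keyed by the prefix string itself, and a set of characters stands in for the bitmap.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    closest: str   # closest name >= this prefix (a shared reference in the real layout)
    length: int    # prefix length; the real layout implies the prefix as closest[:length]
    nexts: set[str] = field(default_factory=set)  # stand-in for the next-character bitmap

def build(names: list[str]) -> dict[str, Entry]:
    """Add every unique prefix of every name to the hash table."""
    table: dict[str, Entry] = {}
    for name in sorted(names):
        for i in range(len(name) + 1):
            entry = table.get(name[:i])
            if entry is None:
                # visiting names in sorted order, the first name seen with this
                # prefix is the closest name >= the prefix
                entry = table[name[:i]] = Entry(name, i)
            if i < len(name):
                entry.nexts.add(name[i])  # set the bit for the next character
    return table

def longest_prefix_match(table: dict[str, Entry], query: str) -> str | None:
    """Binary chop on the length of the query string."""
    if query in table:
        return query
    if "" not in table:
        return None                # empty prefix absent: no names to find
    lo, hi = 0, len(query) - 1     # invariant: query[:lo] is in the table
    while lo < hi:
        mid = (lo + hi + 1) // 2
        entry = table.get(query[:mid])
        if entry is None:
            hi = mid - 1           # prefix not in table: reduce the length
        elif query[mid] in entry.nexts:
            lo = mid + 1           # next character's bit set: a longer prefix exists
        else:
            return query[:mid]     # bit clear: this prefix is the answer
    return query[:lo]

def greatest_leaf(table: dict[str, Entry], prefix: str) -> str:
    """Walk down a sub-trie one character at a time – there's no shortcut."""
    while table[prefix].nexts:
        prefix += max(table[prefix].nexts)
    return prefix                  # no extensions left, so this is a complete name
```

For example, with table = build(["foo", "foobar", "fox"]), longest_prefix_match(table, "foxtrot") returns "fox" and longest_prefix_match(table, "food") returns "foo"; greatest_leaf(table, "fo") walks down to "fox". In the real structure, each probe would also have the entry’s closest-name reference available for the predecessor comparisons described above.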
thoughts

In my previous note, I wondered how the number of search steps in a p-fast trie compares to a qp-trie. I have some old numbers measuring the average depth of binary, 4-bit, 5-bit, 6-bit, and DNS qp-trie variants. A DNS-trie varies between 7 and 15 deep on average, depending on the data set. The number of steps for a search matches the depth for exact-match lookups, and is up to twice the depth for predecessor searches.

A p-fast trie needs at most 9 hash table probes for DNS names, and is unlikely to need more than 7. I didn’t record the average length of names in my benchmark data sets, but I guess they would be 8–32 characters, meaning 3–5 probes. That is far fewer than a qp-trie, though I suspect a hash table probe takes more time than chasing a qp-trie pointer. (But this kind of guesstimate is notoriously likely to be wrong!)

However, a predecessor search might need 30 probes to walk down the p-fast trie, which I think suggests a linked list of leaf objects is a better option.
New Logic for Programmers Release!

v0.11 is now available! This is over 20% longer than v0.10, with a new chapter on code proofs, three chapter overhauls, and more! Full release notes here.

Software books I wish I could read

I'm writing Logic for Programmers because it's a book I wanted to have ten years ago. I had to learn everything in it the hard way, which is why I'm ensuring that everybody else can learn it the easy way.

Books occupy a sort of weird niche in software. We're great at sharing information via blogs and git repos and entire websites. These have many benefits over books: they're free, they're easily accessible, they can be updated quickly, they can even be interactive. But no blog post has influenced me as profoundly as Data and Reality or Making Software. There is no blog or talk about debugging as good as the Debugging book. It might not be anything deeper than "people spend more time per word on writing books than blog posts". I dunno.

So here are some other books I wish I could read. I don't think any of them exist yet, but it's a big world out there. Also, while they're probably best as books, a website or a series of blog posts would be OK too.

Everything about Configurations

The whole topic of how we configure software, whether by CLI flags, environment variables, or JSON/YAML/XML/Dhall files. What causes the configuration complexity clock? How do we distinguish between basic, advanced, and developer-only configuration options? When should we disallow configuration? How do we test all possible configurations for correctness? Why do so many widespread outages trace back to misconfiguration, and how do we prevent them?

I also want the same for plugin systems: manifests, permissions, common APIs and architectures, etc. Configuration management is more universal, though, since everybody either uses software with configuration or has made software with configuration.

The Big Book of Complicated Data Schemas

I guess this would be kind of like Schema.org, except with a lot more on the "why" and not just the what. Why is it important for the Volcano model to have a "smokingAllowed" field?[1] I'd see this less as "here's your guide to putting Volcanos in your database" and more "here's recurring motifs in modeling interesting domains", to help a person see sources of complexity in their own domain. Does something crop up if the references can form a cycle? If a relationship needs to be strictly temporary, or a reference can change type?

Bonus: path dependence in data models, where an additional requirement leads to a vastly different ideal data model than the one a company is stuck with because of the old model. (This has got to exist, right? Business modeling is a big enough domain that this must exist. Maybe The Essence of Software touches on this? Man, I feel bad I haven't read that yet.)

Computer Science for Software Engineers

Yes, I checked: this book does not exist (though maybe this is the same thing). I don't have any formal software education; everything I know was either self-taught or learned on the job. But it's way easier to learn software engineering that way than computer science. And I bet there are a lot of other engineers in the same boat. This book wouldn't have to be comprehensive or instructive: just enough about each topic to understand why it's an area of study and appreciate how research in it eventually finds its way into practice.

MISU Patterns

MISU, or "Make Illegal States Unrepresentable", is the idea of designing system invariants into the structure of your data. For example, if a Contact needs at least one of email or phone to be non-null, make it a sum type over EmailContact, PhoneContact, and EmailPhoneContact (from this post; a sketch follows below).
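Here's a minimal Python (3.10+) sketch of that Contact example, using a union of dataclasses as the sum type. Everything beyond the three class names from the post (the fields, the Contact alias, and primary_reach) is made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class EmailContact:
    name: str
    email: str

@dataclass
class PhoneContact:
    name: str
    phone: str

@dataclass
class EmailPhoneContact:
    name: str
    email: str
    phone: str

# The sum type: a Contact with neither an email nor a phone simply cannot
# be constructed, so the invariant needs no runtime validation.
Contact = EmailContact | PhoneContact | EmailPhoneContact

def primary_reach(contact: Contact) -> str:
    # Exhaustive match over the three cases; a type checker can flag
    # any variant this forgets to handle.
    match contact:
        case EmailContact(email=e) | EmailPhoneContact(email=e):
            return e
        case PhoneContact(phone=p):
            return p
```

The illegal state (no email and no phone) has no representation at all, which is the whole point: code that consumes a Contact never has to check for it.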
MISU is great. Most MISU in the wild looks very different from that, though, because the concept of MISU is so broad that there are lots of different ways to achieve it. And that means there are "patterns": smart constructors, product types, properly using sets, newtypes to some degree, etc. Some of them are specific to typed FP, while others can be used in even untyped languages. Someone oughta make a pattern book. My one request would be to not give them cutesy names. Do something like the Aarne–Thompson–Uther Index, where items are given names like "Recognition by manner of throwing cakes of different weights into faces of old uncles". Names can come later.

The Tools of '25

Not something I'd read, but something to recommend to junior engineers. Starting out, it's easy to think the only bit that matters is the language or framework, and not realize the enormous amount of surrounding tooling you'll have to learn. This book would cover the basics of tools that enough developers will probably use at some point: git, VSCode, very basic Unix and bash, curl. Maybe the general concepts of tools that appear in every ecosystem, like package managers, build tools, task runners. That might be easier if we specialized this to one particular domain, like webdev or data science. Ideally the book would only have to be updated every five years or so. No LLM stuff, because I don't expect the tooling will be stable through 2026, to say nothing of 2030.

A History of Obsolete Optimizations

Probably better as a really long blog series. Each chapter would be broken up into two parts:

1. A deep dive into a brilliant, elegant, insightful historical optimization designed to work within the constraints of that era's computing technology
2. What we started doing instead, once we had more compute/network/storage available

cf. A Spellchecker Used to Be a Major Feat of Software Engineering. Bonus topics would be brilliance obsoleted by standardization (like what people did before git and JSON were universal), optimizations we do today that may not stand the test of time, and optimizations from the past that did.

Sphinx Internals

I need this. I've spent so much goddamn time digging around in Sphinx and docutils source code I'm gonna throw up.

Systems Distributed Talk Today!

The online premiere is at noon central / 5 PM UTC, here! I'll be hanging out to answer questions and be awkward. You ever watch a recording of your own talk? It's real uncomfortable!

[1] In this case, because it's a field on one of Volcano's supertypes. I guess schemas gotta follow LSP too.