Remember the Tidbyt? It’s a super low-resolution, internet-connected, wood-paneled display that I reviewed back in 2022. It’s been on my shelf for years now, showing the time and weather, and warning me when the UV index is going to be high. In 2023 I used it as an excuse to learn some Rust and render custom graphics. It’s a toy, a distraction, a worry stone for me to work on when I need something open-ended and low-stakes.

Anyway, the company that made the Tidbyt is no more. They were acquihired by Modal, a company that makes serverless AI compute hosting. So they aren’t making devices right now, and the blog post promises that their cloud services will keep working.

I don’t hold anything against the Tidbyt team: in fact, our Val Town office was coincidentally right next to theirs in a WeWork, and we met in real life! They’re very nice folks, and were doing so much with a small team. Lots of respect to them. Modal made a smart choice acquiring Tidbyt. But realistically, it’s time to make sure my device doesn’t become e-waste.

The Tidbyt is ready for this

One of the biggest critiques of the Tidbyt was that it was just an LED matrix and an ESP chip. You could buy an LED matrix on Sparkfun, plus the ESP, a power supply, and some wood for the enclosure, and you’d have your own DIY Tidbyt. Maybe you could do it for half the price! But that’s also a strength. The Tidbyt is not some custom SoC with an exotic software stack and boutique hardware. It is what it looks like: a neat combination of commonplace parts. That makes it kind of future-proof and flexible.

The first step is to replace the firmware. Tidbyt’s stock firmware routes all of its requests through the Tidbyt company’s servers. I want to eliminate that hop.

Replacing the firmware

Thankfully, Tidbyt published their ‘HDK’, an open source version of their stock firmware. It’s remarkably simple:

It connects to Wifi
It downloads a WebP image from a URL
It displays that WebP image

The HDK contains the code to do this. There’s very little code required, but it does drag in a WebP decoder, a Wifi library, and a library for driving the LED matrix. Setting up the HDK, though, I ran into issues both small and large: it had problems with HTTPS URLs and with Wifi passwords that contain spaces. Plus, nobody has been added as a contributor to the HDK repository, so Pull Requests aren’t being accepted and it hasn’t had a change in seven months.
But the community came to the rescue with tronbyt’s firmware-http, a fork of the HDK that fixes every issue I ran into. Open source works!

Back in 2022 I included a chart of the Tidbyt network in my review. With an updated HDK, that workflow is a lot simpler: instead of my server sending images to the Tidbyt servers, which then deliver them to my device, the device requests images directly from the server that generates them. (There’s a sketch of what that server can look like at the end of this post.)

Replacing pixlet

The Tidbyt team wrote pixlet, a little framework for generating the pixel graphics that the Tidbyt displays. It lets you define a React-like tree of components - some text in a stack, a rectangle, images, and so on - and does all of the layout and rendering. The tronbyt community also forked pixlet and is actively developing it, which is fantastic.

But this is the part of the stack I never liked, and it’s why I spent so much time reimplementing it in Rust and JavaScript. Partly it’s the language: pixlet apps are written in Starlark, an outgrowth of the Bazel build system from Google. Starlark looks like Python, but isn’t actually compatible with anything in the Python ecosystem. It’s niche, limited, and overall just weird.

I think I understand why Tidbyt chose Starlark: it’s fast and has hermetic execution, making it safe to run untrusted Starlark programs - they can’t access the filesystem, network, or even the system clock without being given explicit, controlled APIs. If you’re building a cloud service that runs a lot of untrusted user code, dictating that the code is all Starlark is a really good cheat code - I know firsthand how hard it is to run untrusted JavaScript. But I’m not building a cloud service full of untrusted code. People who are self-hosting their Tidbyt devices (dozens of us!) don’t benefit from the tradeoffs of the Starlark language. They’d be better off with something normal.

I rewrote pixlet again

It’s called indiepixel, and it’s a Python reimplementation of pixlet. It supports almost the entire pixlet API, and comes with the added benefit of being Python. You can use Python modules! You can read from the filesystem, parse CSVs, do all of your usual Python stuff. You can embed it in a Python application to render some graphics.

What does indiepixel do currently?

Renders text in the glorious retro BDF pixel font format
Renders pixelated pie charts, rectangles, and boxes
Supports animation for its WebP outputs
Provides a nice UI for browsing your selection of screens

It’ll probably never be finished, but it works well enough to power my Tidbyt. I’m running indiepixel on a free Render server instance, but it should run pretty much the same on any Python-compatible hosting: the only tricky dependency is Pillow, which it uses for image parsing and rendering.

My free time for computer-oriented side projects has been limited lately, due to other commitments and an intention to get offline on the weekends. I’ve been sewing, biking, and running more. So I really want a side project I can enjoy, and indiepixel has fit the bill. It’s really satisfying to implement a new widget and see it rendered in blocky 64x32 pixels. The Pillow image rendering library for Python is mostly wonderful and very powerful.

Why Python?

Why is indiepixel written in Python? Well - I learned from tidbyt-rs that Rust would be an awkward fit as a scripting language for rendering graphics.
The well-known Rust complexities around memory management made simple things difficult for me, which would make them totally unacceptable for others. Beyond the attraction of compiling a small binary that might run on the Tidbyt itself, Rust didn’t have many other advantages.

The Pillow module really is such an advantage for Python. JavaScript doesn’t have a real alternative: there’s sharp, a great module for image conversion, but nothing with as good a canvas interface. node-canvas is fine, but it doesn’t support WebP or animation, which are critical features for this project.

I also wanted to test out the amazing new Python tooling that Astral is cooking up, like uv. I now have a better grasp of the Python ecosystem than I did a few months ago, and my impression is optimistic but mixed. uv is amazing, but Python has a lot of legacy cruft around packaging. People are critical of NPM, but I think it benefited from being established after PyPI and learning from its lessons. Thank you Steven Loria for a PR that fixed everything, made it all work, and saved me months of tweaking settings.

The graphic

I watercolored that Tidbyt a while ago and have been seriously dragging my feet on finishing this blog post. Sometimes the watercolor-illustration tail wags the technical-blog-post dog? Anyway, it’s a callback to that little world, with some small tweaks: this time I thought it’d be nice for it to be both watercolored and interactive. That ‘cybernetic’ feel. The secret recipe: a nice palette from lospec, a black & white mask of areas created in Affinity Photo and vectorized with potrace, and then just some JavaScript that recolors paths on hover.

If you’re using the Tidbyt or some similar pixel-displaying device, try out indiepixel! It’s niche, and it has required a silly amount of effort to produce a glorified weather clock for my apartment, but it was a fun time chasing another interest.
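As promised above, here’s a minimal sketch of what the server side of the self-hosted setup can look like: a few lines of Python that render a 64x32 frame with Pillow and serve it as a WebP for the updated firmware to fetch. This is my own illustration, not indiepixel’s actual code; the port, the path handling, and the drawing are all arbitrary choices.

from http.server import BaseHTTPRequestHandler, HTTPServer
from io import BytesIO

from PIL import Image, ImageDraw

WIDTH, HEIGHT = 64, 32  # the Tidbyt's display resolution


def render_frame() -> bytes:
    # Draw a frame and encode it as WebP, the format the firmware expects.
    img = Image.new("RGB", (WIDTH, HEIGHT), "black")
    draw = ImageDraw.Draw(img)
    draw.text((2, 2), "hello", fill="#fbb954")
    buf = BytesIO()
    img.save(buf, format="WEBP")
    return buf.getvalue()


class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The firmware just GETs a URL, so any path works for this sketch.
        body = render_frame()
        self.send_response(200)
        self.send_header("Content-Type", "image/webp")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Point the firmware's image URL at http://<this-host>:8080/
    HTTPServer(("0.0.0.0", 8080), FrameHandler).serve_forever()

indiepixel wraps this kind of loop in a real component and layout system; the point is just that a plain HTTP endpoint returning image/webp is all the device needs.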
Reading

Whether it’s cryptocurrency scammers mining with FOSS compute resources or Google engineers too lazy to design their software properly or Silicon Valley ripping off all the data they can get their hands on at everyone else’s expense… I am sick and tired of having all of these costs externalized directly into my fucking face.

Drew DeVault on the annoyance and cost of AI scrapers. I share some of that pain: Val Town is routinely hammered by some AI company’s poorly-coded scraping bot. I think it’s like this for everyone, and it’s hard to tell if AI companies even care that everyone hates them.

And perhaps most recently, when a person who publishes their work under a free license discovers that work has been used by tech mega-giants to train extractive, exploitative large language models? Wait, no, not like that.

Molly White wrote a more positive article about the LLM scraping problem, but I have my doubts about its positivity. For example, she suggests that Wikimedia’s approach with “Wikimedia Enterprise” gives LLM companies a way to scrape the site without creating too much cost. But that doesn’t seem like it’s working. The problem is that these companies really, truly do not care.

Harberger taxes represent an elegant theoretical solution that fails in practice for immobile property. Just as mobile home residents face exploitation through sudden ground rent increases, property owners under a Harberger system would face similar hold-up problems. This creates an impossible dilemma: pay increasingly burdensome taxes or surrender investments at below-market values.

Progress and Poverty, a blog about Georgism, has this post about Harberger taxes, which are a super neat idea. The gist is that you would be in charge of declaring how much your house is worth, with the added wrinkle that by naming a price you are bound to sell your house at that price. So if you go too low, someone will buy it; too high, and you’re paying too much in taxes. It’s clever but doesn’t work, and the analysis points to the vital difference between housing and other goods: buying, selling, and moving between houses is anything but simple.

I’ve always been a little skeptical of the line that the AI crowd feels contempt for artists, or that such a sense is particularly widespread—because certainly they all do not!—but it’s hard to take away any other impression from a trend so widely cheered in its halls as AI Ghiblification.

Brian Merchant on the OpenAI Studio Ghibli ‘trend’ is a good read. I can’t stop thinking that AI is in danger of becoming right-wing coded, and the examples of this, like the horrifying White House tweet mentioned in that article, are multiplying. I feel bad when I recoil at innocent usage of the tool by good people who just want something cute. It is kind of fine, on the micro level. But with context, it’s so bad in so many ways. Already the joy and attachment I’ve felt to the graphic style is fading, as more shitty Studio Ghibli knockoffs have been created in the last month than in all of the studio’s work.

Two days later, at a state dinner in the White House, Mark gets another chance to speak with Xi. In Mandarin, he asks Xi if he’ll do him the honor of naming his unborn child. Xi refuses.

Careless People was a good read. It’s devastating for Zuckerberg, Joel Kaplan, and Sheryl Sandberg, as well as a bunch of global leaders who are eager to provide tax loopholes for Facebook. Perhaps the only person who ends the book as a hero is President Obama, who sees through it all.
In a March 26 Slack message, Lavingia also suggested that the agency should do away with paper forms entirely, aiming for “full digitization.” “There are over 400 vet-facing forms that the VA supports, and only about 10 percent of those are digitized,” says a VA worker, noting that digitizing forms “can take years because of the sensitivity of the data” they contain. Additionally, many veterans are elderly and prefer using paper forms because they lack the technical skills to navigate digital platforms. “Many vets don’t have computers or can’t see at all,” they say. “My skin is crawling thinking about the nonchalantness of this guy.”

Perhaps because of proximity, the story that Sahil Lavingia has been working for DOGE seems important. It was a relief when a few other people noticed it and started retelling the story to the tech sphere, like Dan Brown’s “Gumroad is not open source” and Ernie Smith’s “Gunkroad”, but I have to nitpick the framing here: using a non-compliant open source license is not the headline; collaborating with fascists and carelessly endangering disabled veterans is.

Listening

Septet by John Carroll Kirby

I saw John Carroll Kirby play at Public Records and have been listening to them constantly ever since. The music is such a paradox: the components sound like elevator music or incredibly cheesy jazz if you listen for a few seconds, but if you keep listening it’s a unique, deep sound.

Sierra Tracks by Vega Trails

More new jazz! Mammoth Hands and Portico Quartet overlap with Vega Trails, a beautiful minimalist band.

Watching

This short video with John Wilson was great. He says a bit about having a real physical video camera, not just a phone, which reminded me of an old post of mine, Carrying a Camera.
I used to make little applications just for myself. Sixteen years ago (oof) I wrote a habit-tracking application and a keylogger that let me keep track of when I was using a computer and generate some pretty charts.

I’ve taken a long break from those kinds of things. I love my hobbies, but they’ve drifted toward the non-technical, and the idea of keeping a server online for a fun project is unappealing (which is something that I hope Val Town, where I work, fixes). Some folks maintain whole ‘homelab’ setups and run Kubernetes in their basement. Not me, at least for now. But I have been tiptoeing back into some little custom tools that only I use, with a focus on just my own computing experience. Here’s a quick tour.

Hammerspoon

Hammerspoon is an extremely powerful scripting tool for macOS that lets you write custom keyboard shortcuts, UIs, and more in the very friendly little language Lua. Right now my Hammerspoon configuration is very simple, but I think I’ll use it for a lot more as time progresses. Here it is:

hs.hotkey.bind({"cmd", "shift"}, "return", function()
  local frontmost = hs.application.frontmostApplication()
  if frontmost:name() == "Ghostty" then
    frontmost:hide()
  else
    hs.application.launchOrFocus("Ghostty")
  end
end)

Not much! But I recently switched to Ghostty as my terminal, and I heavily relied on iTerm2’s global show/hide shortcut. Ghostty doesn’t have an equivalent, and Mikael Henriksson suggested a script like this in GitHub discussions, so I ran with it. Hammerspoon can do practically anything, so it’ll probably be useful for other stuff too.

SwiftBar

I review a lot of PRs these days. I wanted an easy way to see how many were in my review queue and go to them quickly. So, this script runs with SwiftBar, which is a flexible way to put any script’s output into your menu bar. It uses the GitHub CLI to list the PRs and jq to massage that output into a friendly list, which I can click on to go directly to a PR on GitHub.

#!/bin/bash
# <xbar.title>GitHub PR Reviews</xbar.title>
# <xbar.version>v0.0</xbar.version>
# <xbar.author>Tom MacWright</xbar.author>
# <xbar.author.github>tmcw</xbar.author.github>
# <xbar.desc>Displays PRs that you need to review</xbar.desc>
# <xbar.image></xbar.image>
# <xbar.dependencies>Bash GNU AWK</xbar.dependencies>
# <xbar.abouturl></xbar.abouturl>

DATA=$(gh search prs --state=open -R val-town/val.town --review-requested=@me --json url,title,number,author)

echo "$(echo "$DATA" | jq 'length') PR"
echo '---'
echo "$DATA" | jq -c '.[]' | while IFS= read -r pr; do
  TITLE=$(echo "$pr" | jq -r '.title')
  AUTHOR=$(echo "$pr" | jq -r '.author.login')
  URL=$(echo "$pr" | jq -r '.url')
  echo "$TITLE ($AUTHOR) | href=$URL"
done

Tampermonkey

Tampermonkey is essentially a twist on Greasemonkey: both let you run your own JavaScript on anybody’s webpage. Sidenote: Greasemonkey was created by Aaron Boodman, who went on to write Replicache, which I used in Placemark, and is now working on Zero, the successor to Replicache.

Anyway, I have a few fancy credit cards which have ‘offers’ that only work if you ‘activate’ them. This is an annoying dark pattern! And there’s a solution to it - CardPointers - but I neither spend enough nor care enough about points hacking to justify the cost. Plus, I’d like to know what code is running on my bank’s website. So, Tampermonkey to the rescue! I wrote userscripts for Chase, American Express, and Citi.
You can check them out on this Gist, but I strongly recommend reading through all the code first, because of the aforementioned risks of running untrusted code on your bank account’s website!

Obsidian Freeform

This is a plugin for Obsidian, the notetaking tool that I use every day. Freeform is pretty cool, if I can say so myself (I wrote it), but it could be much better. The development experience is lackluster because you can’t preview output at the same time as you write code: you have to toggle between the two states. I’ll fix that eventually, or perhaps Obsidian will add a new API that makes it all work.

I use Freeform for a lot of private health & financial data, almost always with an Observable Plot visualization as the eventual output. For example, when I was switching banks and one of the considerations was mortgage discounts in case I ever buy a house (ha 😢), it was fun to chart the % discounts versus the required AUM. It’s been really nice to have this kind of visualization as ‘just another document’ in my notetaking app. It doesn’t need another server, and Obsidian is pretty secure and private.
This website has a new section: blogroll.opml! A blogroll is a list of blogs - a lightweight way of recommending other people’s writing on the indieweb.

What it includes

The blogs I included are sampled from the many RSS subscriptions I keep in my Feedbin reader. I’m subscribed to about 200 RSS feeds, the majority of which are dead or publish only once a year. I like that about blogs: there’s no expectation of getting a post out every single day, like there is in more algorithmically-driven media. If someone I interacted with on the internet years ago decides to restart their writing, that’s great! There’s no reason to prune all the quiet feeds.

The picks are oriented toward what I’m into: niches, blogs that have a loose topic but don’t try to be general-interest, people with distinctive writing. If you import all of the feeds into your RSS reader, you’ll probably end up unsubscribing from some of them, because the experimental electric guitar design or bonsai news is not what you’re into. Seems fine - or you’ll discover a new interest!

How it works

Ruben Schade figured out a brilliant way to show blogrolls and I copied him. Check out his post on styling OPML and RSS with XSLT to XHTML for how it works. My only additions to that scheme were making the blogroll page blend into the rest of the website, by using an include tag with Jekyll to add the basic site skeleton, and adding a link with the download attribute to provide a simple way to download the OPML file. Oddly, if you try to save the OPML page using Save as… in Firefox, Firefox will save the transformed output of the XSLT rather than the raw source code. XSLT is such an odd and rare part of the web ecosystem; I had to use it.
More in programming
I like the job title “Design Engineer”. When required to label myself, I feel partial to that term (I should, I’ve written about it enough). Lately I’ve felt like the term is becoming more mainstream which, don’t get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle between two binaries. But — and I admit this is a me issue — once a title starts becoming mainstream, I want to use it less and less.

I was never totally sure why I felt this way. Shouldn’t I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way? These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan where he talks about his own penchant for not wanting to be easily defined:

I’ve tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying “this is what you do” then I would be like “Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.” [For example] I used to crouch around on stage all the time and people would go “Oh, he’s the guy who crouches around back and forth.” And I’m like, “I’ll show them, I will stand erect! Now what are you going to say?” And then they would go “You’re the guy who always feels stupid.” So I started [doing other things].

He continues, wondering aloud whether this aversion to being easily defined has actually hurt his career in terms of commercial growth:

I never wanted to be something you could easily define. I think, in some ways, that it’s held me back. I have a nice following, but I’m not huge. There are people who are huge, who are great, and deserve to be huge. I’ve never had that and sometimes I wonder, ”Well maybe it’s because I purposely don’t want to be a particular thing you can advertise or push.”

That struck a chord with me. It puts into words my current feelings towards the job title “Design Engineer” — or any job title for that matter. Seven or so years ago, I would’ve enthusiastically said, “I’m a Design Engineer!” To which many folks would’ve said, “What’s that?” But today I hesitate. Nowadays that title elicits fewer follow-up questions and more (presumed) certainty.

I think I enjoy a title that elicits a “What’s that?” response, which allows me to explain myself in more than two or three words, without being put in a box. But once a title becomes mainstream, once people begin to assume they know what it means, I don’t like it anymore (speaking for myself, personally). As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn’t provide a presumed sense of certainty about who I am and what I do. And I get it, that runs counter to the very purpose of a job title, which is why I don’t think it’s good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

Them: “Oh you’re a Designer? So you make mock-ups in Photoshop and somebody else implements them.”
Me: “I’ll show them, I’ll implement them myself! Now what are you gonna do?”
Them: “Oh, so you’re a Design Engineer? You design and build user interfaces on the front-end.”
Me: “I’ll show them, I’ll write a Node server and set up a database that powers my designs and interactions on the front-end.
Now what are they gonna do?”
Them: “Oh, well, I’m not sure we have a term for that yet. Maybe Full-stack Design Engineer?”
Me: “Oh yeah? I’ll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?”

[As you can see, I have some personal issues I need to work through…]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that’s not so easily definable, something you can’t sum up in two or three words. I’ve felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams where boundaries are much more permeable and crossing them is encouraged rather than discouraged.

All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you’re trying to take something as complicated and nuanced as an individual human being and reduce them to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels. “I’ve never wanted to be something you could easily define” stands at odds with the corporate attitude of, “Here’s the job req. for the role (i.e. cog) we’re looking for.”
We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks, we'll be able to invite a group of finalists to the next phase.

This highlights the folly of thinking that landing a job like this comes down to meeting some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration, like a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied!

It sounds self-evident, maybe, but I see questions time and again about it, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance.

I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply absurd. However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.
Recently I got a question on formal methods[1]: how does it help to mathematically model systems when the system requirements are constantly changing? It doesn't make sense to spend a lot of time proving a design works, and then deliver the product and find out it's not at all what the client needs. As the saying goes, the hard part is "building the right thing", not "building the thing right".

One possible response: "why write tests?" You shouldn't write tests, especially lots of unit tests ahead of time, if you might just throw them all away when the requirements change. This is a bad response because we all know the difference between writing tests and formal methods: testing is easy and FM is hard. Testing has a low cost for moderate correctness; FM has a high(ish) cost for high correctness. And when requirements are constantly changing, "high(ish) cost" isn't affordable and "high correctness" isn't worthwhile, because a kinda-okay solution that solves a customer's problem is infinitely better than a solid solution that doesn't.

But eventually you get something that solves the problem, and what then? Most of us don't work for Google; we can't axe features and products on a whim. If the client is happy with your solution, you are expected to support it. It should work when your customers run into new edge cases, or migrate all their computers to the next OS version, or expand into a market with shoddy internet. It should work when 10x as many customers are using 10x as many features. It should work when you add new features that come into conflict. And just as importantly, it should never stop solving their problem. Canonical example: your feature involves processing requested tasks synchronously. At scale, this doesn't work, so to improve latency you make it asynchronous. Now it's eventually consistent, but your customers were depending on it being always consistent. Now it no longer does what they need, and it has stopped solving their problems.

Every successful requirement met spawns a new requirement: "keep this working". That requirement is permanent, or close enough to decide our long-term strategy. It takes active investment to keep a feature behaving the same as the world around it changes. (Is this all a pretentious way of saying "software maintenance is hard"? Maybe!)

Phase changes

In physics there's a concept of a phase transition. To raise the temperature of a gram of liquid water by 1°C, you have to add 4.184 joules of energy.[2] This continues until you raise it to 100°C, and then it stops. After you've added about two thousand more joules to that gram, it suddenly turns into steam. The energy of the system changes continuously but the form, or phase, changes discretely.

Software isn't physics, but the idea works as a metaphor. A certain architecture handles a certain level of load, and past that you need a new architecture. Or a bunch of similar features are independently hardcoded until the system becomes too messy to understand, and you remodel the internals into something unified and extendable. Etc, etc, etc. It doesn't have to be a totally discrete phase transition, but there's definitely a "before" and "after" in the system's form. Phase changes tend to lead to more intricacy/complexity in the system, meaning it's likely that a phase change will introduce new bugs into existing behaviors. Take the synchronous vs asynchronous case.
A very simple toy model of synchronous updates would be Set(key, val), which updates data[key] to val.[3] A model of asynchronous updates would be AsyncSet(key, val, priority), which adds a (key, val, priority, server_time()) tuple to a tasks set; another process then asynchronously pulls a tuple (ordered by highest priority, then earliest time) and calls Set(key, val). Here are some properties the client may need preserved as a requirement:

If AsyncSet(key, val, _, _) is called, then eventually db[key] = val (possibly violated if higher-priority tasks keep coming in)
If someone calls AsyncSet(key1, val1, low) and then AsyncSet(key2, val2, low), they should see the first update and then the second (linearizability, possibly violated if the requests go to different servers with different clock times)
If someone calls AsyncSet(key, val, _) and immediately reads db[key], they should get val (obviously violated, though the client may accept a slightly weaker property)

(There's a small executable sketch of these two models after the footnotes below.)

If the new system doesn't satisfy an existing customer requirement, it's prudent to fix the bug before releasing the new system. The customer doesn't notice or care that your system underwent a phase change. They'll just see that one day your product solves their problems, and the next day it suddenly doesn't.

This is one of the most common applications of formal methods. Both of those systems, and every one of those properties, are formally specifiable in a specification language. We can then automatically check that the new system satisfies the existing properties, and from there do things like automatically generate test suites. This does take a lot of work, so if your requirements are constantly changing, FM may not be worth the investment. But eventually requirements stop changing, and then you're stuck with them forever. That's where models shine.

[1] As always, I'm using formal methods to mean the subdiscipline of formal specification of designs, leaving out the formal verification of code. Mostly because "formal specification" is really awkward to say.
[2] Also called a "calorie". The US "dietary Calorie" is actually a kilocalorie.
[3] This is all directly translatable to a TLA+ specification; I'm just describing it in English to avoid paying the syntax tax.
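As referenced above, here's a minimal Python sketch of the two toy models, so the properties have something concrete to point at. This is my own illustration under the post's assumptions (the post notes the models are directly translatable to TLA+); names like async_set follow the prose, and everything else, including numeric priorities, is made up.

import heapq
import itertools
import time

db: dict = {}    # the shared store
tasks: list = []  # priority queue of pending writes
counter = itertools.count()  # tie-breaker so heapq never compares values


def set_(key, val):
    # The synchronous model: the update is immediately visible.
    db[key] = val


def async_set(key, val, priority):
    # The asynchronous model: record (key, val, priority, server_time())
    # for a worker to apply later. heapq is a min-heap, so priority is
    # negated to pull highest-priority (then earliest-time) tuples first.
    heapq.heappush(tasks, (-priority, time.time(), next(counter), key, val))


def worker_step():
    # One step of the asynchronous worker process.
    if tasks:
        _, _, _, key, val = heapq.heappop(tasks)
        set_(key, val)


# The third property fails here: a read right after async_set is stale.
async_set("greeting", "hello", priority=1)
assert db.get("greeting") is None  # not yet applied
worker_step()
assert db["greeting"] == "hello"   # eventually applied

Even this toy version makes the third property's failure visible, which is exactly the kind of regression a formal spec would surface before the asynchronous rollout.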
While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe’s most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn’t just a technical design quirk; it’s a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe’s business success.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

Design for long API lifetime. APIs are not inherently durable. Instead we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn’t use this API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end: how would those changes impact the API, and how would they impact the application you’ve developed? At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early-adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussion to ensure API Review members have sufficient context before making a decision.

We never deprecate APIs without an unavoidable requirement to do so. Even if it’s technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support subscription workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe’s Strong Customer Authentication requirements.
Even in that case, the charge API continued to work as it did previously, albeit only for non-European Union payments.

We manage this policy’s implied technical debt via an API translation layer. We release changed APIs as versions, tracked in our API version changelog. However, we only maintain one implementation internally: the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn’t eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions. (A toy sketch of this pattern follows at the end of this chapter.)

In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future, it may be possible for us to make more backwards-incompatible changes because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

If you are a small startup composed mostly of engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers—or a larger enterprise involving numerous stakeholders—handling external API changes can be particularly challenging. Even if this is only marginally true, we’ve modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn-reduction work.

While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. Without an API change forcing them to change their integration, we believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.

We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can’t assume that companies that make API changes are ill-informed. Rather, it appears that they experience a meaningful technical-debt tradeoff between the API provider and API consumers, and aren’t willing to consistently absorb that technical debt internally.

Future compliance or security requirements—along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI—may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.
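To make the version-transformation idea concrete, here is a toy sketch in Python. It is my own illustration of the pattern described above, not Stripe’s actual implementation: one modern handler, plus per-version transforms that adapt old-style requests and responses. All field names and version labels are invented.

from typing import Callable


def handle_charge_v3(request: dict) -> dict:
    # The single, modern implementation (latest version only).
    return {"id": "ch_123", "amount": request["amount_cents"], "status": "succeeded"}


def v2_to_v3_request(request: dict) -> dict:
    # Hypothetical: v2 sent amounts in dollars; v3 expects cents.
    return {"amount_cents": request["amount_dollars"] * 100}


def v3_to_v2_response(response: dict) -> dict:
    # Hypothetical: v2 clients expect a "success" boolean, not a status string.
    return {"id": response["id"], "success": response["status"] == "succeeded"}


# Each supported version maps to a (request upgrade, response downgrade) pair,
# so old versions keep working without a second implementation.
TRANSFORMS: dict[str, tuple[Callable, Callable]] = {
    "v2": (v2_to_v3_request, v3_to_v2_response),
    "v3": (lambda r: r, lambda r: r),
}


def handle_charge(version: str, request: dict) -> dict:
    upgrade, downgrade = TRANSFORMS[version]
    return downgrade(handle_charge_v3(upgrade(request)))


# A v2 integration keeps working against the single v3 implementation:
assert handle_charge("v2", {"amount_dollars": 10}) == {"id": "ch_123", "success": True}

Real request and response shapes are far more complex, and transforms would compose across many versions, but the economics are the same: each old version costs one thin transform rather than a whole parallel implementation.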
I've been biking in Brooklyn for a few years now! It's hard for me to believe it, but I'm now one of the people other bicyclists ask questions of. I decided to make a zine that answers the most common of those questions: Bike Brooklyn! is a zine that touches on everything I wish I knew when I started biking in Brooklyn. A lot of this information can be found in other resources, but I wanted to collect it in one place. I hope to update this zine as we get significantly more safe bike infrastructure in Brooklyn and as laws change to make streets safer for bicyclists (and everyone), but it's still important to note that each release will reflect a specific snapshot in time of bicycling in Brooklyn.

All text and illustrations in the zine are my own. Thank you to Matt Denys, Geoffrey Thomas, Alex Morano, Saskia Haegens, Vishnu Reddy, Ben Turndorf, Thomas Nayem-Huzij, and Ryan Christman for suggestions for content and help with proofreading.

This zine is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, so you can copy and distribute this zine for noncommercial purposes in unadapted form as long as you give credit to me.

Check out the Bike Brooklyn! zine on the web, or download pdfs to read digitally or print here!