More from Paolo Amoroso's Journal
I'm working on DandeGUI, a Common Lisp GUI library for simple text and graphics output on Medley Interlisp. The name, pronounced "dandy guy", is a nod to the Dandelion workstation, one of the Xerox D-machines Interlisp-D ran on in the 1980s.

DandeGUI allows the creation and management of windows for stream-based text and graphics output. It captures typical GUI patterns of the Medley environment, such as printing text to a window instead of the standard output.

[Screenshot: a text output window created with DandeGUI on Medley Interlisp, and the Lisp code that generated it; the main window was created by the code shown above it.]

The library is written in Common Lisp and exposes its functionality as an API callable from Common Lisp and Interlisp code.

Motivations

In most of my prior Lisp projects I wrote programs that print text to windows. In general these windows are actually not bare Medley windows but running instances of the TEdit rich-text editor. Driving a full editor instead of directly creating windows may be overkill, but I get content scrolling for free, as well as window resizing and repainting, which TEdit handles automatically.

Moreover, TEdit windows have an associated TEXTSTREAM, an Interlisp data structure for text stream I/O. A TEXTSTREAM can be passed to any Common Lisp or Interlisp output function that takes a stream as an argument, such as PRINC, FORMAT, and PRIN1. For example, if S is the TEXTSTREAM associated with a TEdit window, (FORMAT S "~&Hello, Medley!~%") inserts the text "Hello, Medley!" in the window at the position of the cursor. Simple and versatile.

As I wrote more GUI code, recurring patterns and boilerplate emerged. These programs usually create a new TEdit window; set up the title and other options; fetch the associated text stream; and return it for further use. The rest of the program prints application-specific text to the stream, and hence to the window. These patterns were ripe for abstracting and packaging in a library that other programs can call. This work is also good experience with API design.

Usage

An example best illustrates what DandeGUI can do and how to use it. Suppose you want to display some text in a window, such as a table of square roots. This code creates the table in the screenshot above:

    (gui:with-output-to-window (stream :title "Table of square roots")
      (format stream "~&Number~40TSquare Root~2%")
      (loop for n from 1 to 30
            do (format stream "~&~4D~40T~8,4F~%" n (sqrt n))))

DandeGUI exports all the public symbols from the DANDEGUI package with nickname GUI. The macro GUI:WITH-OUTPUT-TO-WINDOW creates a new TEdit window with the title specified by :TITLE, and establishes a context in which the variable STREAM is bound to the stream associated with the window. The rest of the code prints the table by repeatedly calling the Common Lisp function FORMAT with the stream.

GUI:WITH-OUTPUT-TO-WINDOW is best suited for one-off output, as the stream is no longer accessible outside of its scope. To retain the stream and send output in a series of steps, or from different parts of the program, you need a combination of GUI:OPEN-WINDOW-STREAM and GUI:WITH-WINDOW-STREAM. The former opens and returns a new window stream which may later be used by FORMAT and other stream output functions. These functions must be wrapped in calls to the macro GUI:WITH-WINDOW-STREAM to establish a context in which a variable is bound to the appropriate stream.
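For instance, a sketch of that multi-step pattern might look like this; the exact argument lists of the two macros are my assumptions based on the description above, not taken from the API reference:

    ;; Open a window once and keep its stream around
    ;; (the :TITLE keyword is assumed).
    (defvar *log-stream* (gui:open-window-stream :title "Activity log"))

    ;; Later, possibly from different parts of the program, write to it.
    ;; Assumes WITH-WINDOW-STREAM binds STREAM to the given window stream.
    (gui:with-window-stream (stream *log-stream*)
      (format stream "~&Step 1 done.~%"))

    (gui:with-window-stream (stream *log-stream*)
      (format stream "~&Step 2 done.~%"))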
The DandeGUI documentation on the project repository provides more details, sample code, and the API reference.

Design

DandeGUI is a thin wrapper around the Interlisp system facilities that provide the underlying functionality. The main reason for a thin wrapper is to have a simple API that covers the most common user interface patterns. Despite the simplicity, the library takes care of much of the complexity of managing Medley GUIs, such as content scrolling and window repainting and resizing.

A thin wrapper doesn't hide much of the data structures ubiquitous in Medley GUIs, such as menus and font descriptors. This is a plus, as the programmer can leverage prior knowledge of these facilities. So far I have no clear idea how DandeGUI may evolve, which is one more reason not to deepen the wrapper too much without a clear direction.

The user need not know whether DandeGUI packs TEdit or ordinary windows under the hood. Therefore, another design goal is to hide this implementation detail. For example, DandeGUI disables the main command menu of TEdit and sets the editor buffer to read-only so that typing in the window doesn't accidentally change the text.

Using Medley Common Lisp

DandeGUI relies on basic Common Lisp features. Although the Medley Common Lisp implementation is not ANSI compliant, it provides all I need, with one exception. The function DANDEGUI:WINDOW-TITLE returns the title of a window and allows setting it with a SETF function. However, the SEdit structure editor and the File Manager of Medley don't support or track function names that are lists, such as (SETF WINDOW-TITLE). A good workaround is to define SETF functions with DEFSETF, which Medley does support along with the CLtL macro DEFINE-SETF-METHOD. (A sketch of this workaround appears at the end of this post.)

Next steps

At present DandeGUI doesn't do much more than what's described here. To enhance this foundation I'll likely allow clearing existing text and give control over where to insert text in windows, such as at the beginning or end. DandeGUI will also gain rich text facilities, like printing in bold or changing fonts.

The windows of some of my programs have an attached menu of commands and a status area for displaying errors and other messages. I will eventually implement such menu-ed windows.

To support programs that do graphics output I plan to leverage the functionality of Sketch, the line drawing editor of Medley, in a way similar to how I build upon TEdit for text. The Interlisp graphics primitives require as an argument a DISPLAYSTREAM, a data structure that represents an output sink for graphics. It is possible to use the Sketch drawing area as an output destination by associating a DISPLAYSTREAM with the editor's window. Like TEdit, Sketch takes care of repainting content as well as window scrolling and resizing. In other words, DISPLAYSTREAM is to Sketch what TEXTSTREAM is to TEdit. DandeGUI will create and manage Sketch windows with associated streams suitable for use as the DISPLAYSTREAM the graphics primitives require.

#DandeGUI #CommonLisp #Interlisp #Lisp

Discuss... https://remark.as/p/journal.paoloamoroso.com/dandegui-a-gui-library-for-medley-interlisp
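The DEFSETF workaround, sketched on a toy structure; the WIN struct and the SET-WINDOW-TITLE helper are hypothetical stand-ins for DandeGUI's internals:

    ;; Hypothetical window representation, for illustration only.
    (defstruct win (title ""))

    (defun window-title (w)
      (win-title w))

    ;; A setter with a flat name that SEdit and the File Manager can track.
    (defun set-window-title (w new-title)
      (setf (win-title w) new-title))

    ;; DEFSETF makes (SETF (WINDOW-TITLE W) "...") expand into
    ;; (SET-WINDOW-TITLE W "..."), with no (SETF WINDOW-TITLE) function needed.
    (defsetf window-title set-window-title)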
I spoke too soon when I said I was enjoying the stability of Linux. I have been using Linux Mint Cinnamon on a System76 Merkaat PC with no major issues since July of 2024. But a few days ago a routine system update of Mint 22 dumped me to the text console. A fresh install of Mint 22.1, the latest release, brought the system back online. I had backups, and the mishap luckily turned out to be just an annoyance that consumed several hours of unplanned maintenance.

It all started when the Mint Update Manager listed several packages for update, including the System76 driver and tools. Oddly, the Update Manager also marked several packages for removal, including core ones such as Xorg, Celluloid, and more. The smooth running of Mint made my paranoid side fall asleep and I applied the recommended changes. At the next reboot the graphics session didn't start and landed me at the text console with no clue what happened.

I don't use Timeshift for system snapshots as I prefer a fresh install and a restore of data backups if the system breaks. Therefore, to fix such an issue, apparently related to Mint 22, the obvious route was to install Mint 22.1. Besides, this was the right occasion to try the new release.

On my Raspberry Pi 400 I ran dd to flash a bootable USB stick with Mint 22.1. I had no alternatives, as GNOME Disks didn't work. The Merkaat failed to boot off the stick, possibly because I messed with the arguments of dd (a known-good invocation is sketched at the end of this post). I still had a USB stick with Mint 22 lying around and I used it to freshly install that release on the Merkaat. Then I immediately ran the upgrade to Mint 22.1, which completed successfully, unlike a prior upgrade attempt.

Next, I tried to install the System76 driver with sudo apt install system76-driver but got a package not found error. At that point I had already added the System76 package repository to the APT sources, and refreshing the Mint Update Manager yielded this error:

    Could not refresh the list of updates
    Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages

Aside from the errors the system was up and running on the Merkaat, so with Nemo I reflashed the Mint 22.1 stick. This time the PC did boot off the stick and let me successfully install Mint 22.1. Restoring the data completed the system recovery. I left out the System76 driver as it's the primary suspect, possibly due to package conflicts. Mint detects and supports all the hardware of the Merkaat anyway, so it's only prudent to skip the package for the time being.

Besides improvements under the hood, Mint 22.1 features a redesigned default Cinnamon theme. No major changes, I feel at home.

The main takeaway of this adventure is that it's better to have a bootable USB stick ready with the latest Mint release, even if I don't plan to upgrade immediately. Another takeaway is that the Pi 400 makes for a viable backup computer that can support my major tasks, should it take longer to recover the Merkaat. However, using the device for making bootable media is problematic, as little flashing software is available and some of it is unreliable. Finally, over decades of Linux experience I have honed my emergency installation skills so much that I can now confidently address most broken system situations.

#linux #pi400

Discuss... https://remark.as/p/journal.paoloamoroso.com/an-unplanned-upgrade-to-linux-mint-22-1-cinnamon
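For reference, a typical dd invocation for writing an ISO image to a USB stick looks like the following. The ISO file name and the /dev/sdX device are placeholders; double-check the device with lsblk before running, since dd overwrites whatever it is pointed at:

    # Identify the USB stick first (example target: /dev/sdX).
    lsblk

    # Write the image; bs=4M speeds up the copy, conv=fsync flushes
    # writes to the device before dd exits.
    sudo dd if=linuxmint-22.1-cinnamon-64bit.iso of=/dev/sdX bs=4M status=progress conv=fsync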
My journey to Lisp began in the early 1990s. Over three decades later, a few days ago I rediscovered the first Lisp environment I ever used back then, one that contributed to my love for the language. Here it is, PC Scheme running under DOSBox-X on my Linux PC:

[Screenshot: the PC Scheme Lisp development environment for MS-DOS by Texas Instruments, running under DOSBox-X on Linux Mint Cinnamon.]

Using PC Scheme again brought back lots of great memories and made me reflect on what the environment taught me about Lisp and Lisp tooling.

As a Computer Science student at the University of Milan, Italy, around 1990 I took an introductory computers and programming class taught by Prof. Stefano Cerri. The textbook was the first edition of Structure and Interpretation of Computer Programs (SICP), and Texas Instruments PC Scheme for MS-DOS was the recommended PC implementation. I installed PC Scheme under DR-DOS on a 20 MHz 386 Olidata laptop with 2 MB RAM and a 40 MB hard disk drive.

Prior to the class I had read about Lisp here and there but never played with the language. SICP and its use of Scheme as an elegant executable formalism instantly fascinated me. It was Lisp love at first sight. The class covered the first three chapters of the book, but I later read the rest on my own. I did lots of exercises, using PC Scheme to write and run them. Soon I became one with PC Scheme.

The environment enabled a tight development loop thanks to its Emacs-like EDWIN editor, which was well integrated with the system. The Lisp awareness of EDWIN blew my mind, as it was the first such tool I encountered. The editor auto-indented and reformatted code, matched parentheses, and supported evaluating expressions and code blocks. Typing a closing parenthesis made EDWIN blink the corresponding opening one and briefly show a snippet of the beginning of the matched expression.

Paying attention to the matching and the snippets made me familiar with the shape and structure of Lisp code, giving a visual feel for whether code looks syntactically right or off. Within hours of starting to use EDWIN the parentheses ceased to be a concern and disappeared from my conscious attention. Handling parentheses came naturally. I actually ended up loving parentheses and the aesthetics of classic Lisp.

Parenthesis matching suggested to me a technique for writing syntactically correct Lisp code with pen and paper. When writing a closing parenthesis with my right hand, I rested my left hand on the paper with the index finger pointed at the corresponding opening parenthesis, moving the hands in sync to match the current code. This way it was fast and easy to write moderately complex code.

PC Scheme spoiled me and set the baseline of what to expect in a Lisp environment. After the class I moved to PCS/Geneva, a more advanced PC Scheme fork developed at the University of Geneva. Over the following decades I encountered and learned Common Lisp, Emacs Lisp, and Interlisp. These experiences cemented my passion for Lisp.

In the mid-1990s Texas Instruments released the executable and sources of PC Scheme. I didn't know it at the time, or if I noticed I had long forgotten. Until a few days ago, when nostalgia came knocking and I rediscovered the PC Scheme release.

I installed PC Scheme under the DOSBox-X MS-DOS emulator on my Linux Mint Cinnamon PC. It runs well and I enjoy going through the system to rediscover what it can do. Playing with PC Scheme after decades of Lisp experience, and with hindsight on computing evolution, shines new light on the environment.
I didn't fully realize it at the time, but the product packed amazing value for the price. It cost $99 in the US and I paid about 150,000 lira for it in Italy. Costing as much as two or three textbooks, the software was affordable even to students and hobbyists.

PC Scheme is a rich, fast, and surprisingly capable environment with features such as a Lisp-aware editor, a good compiler, a structure editor and other tools, many Scheme extensions such as engines and OOP, text windows, graphics, and a lot more. The product came with an extensive manual, a thick book in a massive 3-ring binder I read cover to cover more than once. A paper on the implementation of PC Scheme sheds light on how good the system is given the platform constraints.

Using PC Scheme now lets me put into focus what it taught me about Lisp and Lisp systems: the convenience and productivity of Lisp-aware editors; interactive development and exploratory programming; and a rich Lisp environment with a vast toolbox of libraries and facilities — this is your grandfather's batteries-included language.

Three decades after PC Scheme, a similar combination of features, facilities, and classic aesthetics drew me to Medley Interlisp, my current daily driver for Lisp development.

#Lisp #MSDOS #retrocomputing

Discuss... https://remark.as/p/journal.paoloamoroso.com/rediscovering-the-origins-of-my-lisp-journey
I upgraded my Raspberry Pi 400 to 64-bit Raspberry Pi OS 2024-11-19, based on Debian Bookworm 12.9:

[Screenshot: the desktop of 64-bit Raspberry Pi OS 2024-11-19 on a Raspberry Pi 400.]

Since I had no files to preserve, the process was surprisingly easy, as I went with a full installation. And this time I finally used the Raspberry Pi Imager. When I first set up the Pi 400, my only other desktop computer was a Chromebox that couldn't run the Imager on Crostini Linux. This imposed a less convenient network installation which, combined with a subtle bug, made me waste a couple of hours over three installation attempts. Now I have a real Linux PC that runs the Imager just fine. Downloading Raspberry Pi OS, configuring it, and flashing the microSD card went smoothly. When I booted the Pi 400 from the card I was greeted by a ready-to-run system.

On the newly upgraded system, building Medley Interlisp from source for X11 took an hour or so. The environment still runs well with the labwc Wayland compositor that now ships with Raspberry Pi OS. But, like the previous Raspberry Pi OS release, Medley doesn't run under TigerVNC because of a connection issue.

#pi400 #linux

Discuss... https://remark.as/p/journal.paoloamoroso.com/upgrading-to-raspberry-pi-os-2024-11-19
More in programming
I like the job title "Design Engineer". When required to label myself, I feel partial to that term (I should, I've written about it enough). Lately I've felt like the term is becoming more mainstream which, don't get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle between two binaries. But — and I admit this is a me issue — once a title starts becoming mainstream, I want to use it less and less.

I was never totally sure why I felt this way. Shouldn't I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way? These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan where he talks about his own penchant for not wanting to be easily defined:

"I've tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying 'this is what you do' then I would be like 'Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.' [For example] I used to crouch around on stage all the time and people would go 'Oh, he's the guy who crouches around back and forth.' And I'm like, 'I'll show them, I will stand erect! Now what are you going to say?' And then they would go 'You're the guy who always feels stupid.' So I started [doing other things]."

He continues, wondering aloud whether this aversion to being easily defined has actually hurt his career in terms of commercial growth:

"I never wanted to be something you could easily define. I think, in some ways, that it's held me back. I have a nice following, but I'm not huge. There are people who are huge, who are great, and deserve to be huge. I've never had that and sometimes I wonder, 'Well maybe it's because I purposely don't want to be a particular thing you can advertise or push.'"

That struck a chord with me. It puts into words my current feelings towards the job title "Design Engineer" — or any job title for that matter. Seven or so years ago, I would've enthusiastically said, "I'm a Design Engineer!" To which many folks would've said, "What's that?" But today I hesitate. If I say "I'm a Design Engineer" there are fewer follow-up questions; nowadays the title elicits more (presumed) certainty.

I think I enjoy a title that elicits a "What's that?" response, which allows me to explain myself in more than two or three words, without being put in a box. But once a title becomes mainstream, once people begin to assume they know what it means, I don't like it anymore (speaking for myself, personally). As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn't provide a presumed sense of certainty about who I am and what I do. And I get it, that runs counter to the very purpose of a job title, which is why I don't think it's good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

Them: "Oh you're a Designer? So you make mock-ups in Photoshop and somebody else implements them."
Me: "I'll show them, I'll implement them myself! Now what are you gonna do?"
Them: "Oh, so you're a Design Engineer? You design and build user interfaces on the front-end."
Me: "I'll show them, I'll write a Node server and set up a database that powers my designs and interactions on the front-end. Now what are they gonna do?"
Them: "Oh, well, I'm not sure we have a term for that yet, maybe Full-stack Design Engineer?"
Me: "Oh yeah? I'll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?"

[As you can see, I have some personal issues I need to work through…]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that's not so easily definable, something you can't sum up in two or three words. I've felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams where boundaries are much more permeable and crossing them is encouraged rather than discouraged.

All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you're trying to take something as complicated and nuanced as an individual human being and reduce it to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels. "I've never wanted to be something you could easily define" stands at odds with the corporate attitude of, "Here's the job req. for the role (i.e. cog) we're looking for."
We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks we'll be able to invite a group of finalists to the next phase.

This highlights the folly of thinking that what it'll take to land a job like this is some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration, like a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied!

It sounds self-evident, maybe, but I see questions time and again about it, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance.

I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply wrong. However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.
Recently I got a question on formal methods[1]: how does it help to mathematically model systems when the system requirements are constantly changing? It doesn't make sense to spend a lot of time proving a design works, and then deliver the product and find out it's not at all what the client needs. As the saying goes, the hard part is "building the right thing", not "building the thing right".

One possible response: "why write tests?" You shouldn't write tests, especially lots of unit tests ahead of time, if you might just throw them all away when the requirements change. This is a bad response because we all know the difference between writing tests and formal methods: testing is easy and FM is hard. Testing requires low cost for moderate correctness, FM requires high(ish) cost for high correctness. And when requirements are constantly changing, "high(ish) cost" isn't affordable and "high correctness" isn't worthwhile, because a kinda-okay solution that solves a customer's problem is infinitely better than a solid solution that doesn't.

But eventually you get something that solves the problem, and what then? Most of us don't work for Google; we can't axe features and products on a whim. If the client is happy with your solution, you are expected to support it. It should work when your customers run into new edge cases, or migrate all their computers to the next OS version, or expand into a market with shoddy internet. It should work when 10x as many customers are using 10x as many features. It should work when you add new features that come into conflict. And just as importantly, it should never stop solving their problem.

Canonical example: your feature involves processing requested tasks synchronously. At scale, this doesn't work, so to improve latency you make it asynchronous. Now it's eventually consistent, but your customers were depending on it being always consistent. Now it no longer does what they need, and has stopped solving their problems.

Every successful requirement met spawns a new requirement: "keep this working". That requirement is permanent, or close enough to decide our long-term strategy. It takes active investment to keep a feature behaving the same as the world around it changes. (Is this all a pretentious way of saying "software maintenance is hard"? Maybe!)

Phase changes

In physics there's a concept of a phase transition. To raise the temperature of a gram of liquid water by 1 °C, you have to add 4.184 joules of energy.[2] This continues until you raise it to 100 °C, then it stops. After you've added about two thousand more joules to that gram, it suddenly turns into steam. The energy of the system changes continuously but the form, or phase, changes discretely.

Software isn't physics but the idea works as a metaphor. A certain architecture handles a certain level of load, and past that you need a new architecture. Or a bunch of similar features are independently hardcoded until the system becomes too messy to understand, and you remodel the internals into something unified and extendable. Etc., etc., etc. It doesn't have to be a totally discrete phase transition, but there's definitely a "before" and "after" in the system form.

Phase changes tend to lead to more intricacy/complexity in the system, meaning it's likely that a phase change will introduce new bugs into existing behaviors. Take the synchronous vs asynchronous case.
A very simple toy model of synchronous updates would be Set(key, val), which updates data[key] to val.[3] A model of asynchronous updates would be AsyncSet(key, val, priority), which adds a (key, val, priority, server_time()) tuple to a tasks set; another process then asynchronously pulls a tuple (ordered by highest priority, then earliest time) and calls Set(key, val). Here are some properties the client may need preserved as a requirement:

- If AsyncSet(key, val, _, _) is called, then eventually db[key] = val (possibly violated if higher-priority tasks keep coming in).
- If someone calls AsyncSet(key1, val1, low) and then AsyncSet(key2, val2, low), they should see the first update and then the second (linearizability, possibly violated if the requests go to different servers with different clock times).
- If someone calls AsyncSet(key, val, _) and immediately reads db[key], they should get val (obviously violated, though the client may accept a slightly weaker property).

If the new system doesn't satisfy an existing customer requirement, it's prudent to fix the bug before releasing the new system. The customer doesn't notice or care that your system underwent a phase change. They'll just see that one day your product solves their problems, and the next day it suddenly doesn't.

This is one of the most common applications of formal methods. Both of those systems, and every one of those properties, is formally specifiable in a specification language. We can then automatically check that the new system satisfies the existing properties, and from there do things like automatically generate test suites. This does take a lot of work, so if your requirements are constantly changing, FM may not be worth the investment. But eventually requirements stop changing, and then you're stuck with them forever. That's where models shine.

[1] As always, I'm using formal methods to mean the subdiscipline of formal specification of designs, leaving out the formal verification of code. Mostly because "formal specification" is really awkward to say.
[2] Also called a "calorie". The US "dietary Calorie" is actually a kilocalorie.
[3] This is all directly translatable to a TLA+ specification, I'm just describing it in English to avoid paying the syntax tax. (A rough sketch appears below.)
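For the curious, here is one hedged way the asynchronous model and the first property might look in TLA+. The operator names, and the convention that priorities are naturals where larger means higher, are my assumptions, not the author's:

    ---- MODULE AsyncSet ----
    EXTENDS Naturals
    CONSTANTS Keys, Vals, Priorities, NoVal
    VARIABLES db, tasks, now

    vars == <<db, tasks, now>>

    Init ==
      /\ db = [k \in Keys |-> NoVal]
      /\ tasks = {}
      /\ now = 0

    \* A client enqueues an asynchronous write, stamped with the current time.
    AsyncSet(k, v, p) ==
      /\ tasks' = tasks \cup {<<k, v, p, now>>}
      /\ now' = now + 1
      /\ UNCHANGED db

    \* The worker applies the highest-priority (then earliest) pending task.
    RunTask ==
      \E t \in tasks :
        /\ \A u \in tasks \ {t} :
             \/ t[3] > u[3]
             \/ (t[3] = u[3] /\ t[4] <= u[4])
        /\ db' = [db EXCEPT ![t[1]] = t[2]]
        /\ tasks' = tasks \ {t}
        /\ UNCHANGED now

    Next ==
      \/ \E k \in Keys, v \in Vals, p \in Priorities : AsyncSet(k, v, p)
      \/ RunTask

    Spec == Init /\ [][Next]_vars /\ WF_vars(RunTask)

    \* First property from the list above: every enqueued write is eventually
    \* applied. It can fail exactly as the text says: a checker finds traces
    \* where higher-priority tasks keep arriving and starve the write.
    EventuallyApplied ==
      \A k \in Keys, v \in Vals :
        (\E t \in tasks : t[1] = k /\ t[2] = v) ~> (db[k] = v)
    ====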
While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe's most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn't just a technical design quirk, it's a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe's business success.

This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

- Design for long API lifetime. APIs are not inherently durable. Instead we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn't use this API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you've developed? At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

- All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussions to ensure API Review members have sufficient context before making a decision.

- We never deprecate APIs without an unavoidable requirement to do so. Even if it's technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by the API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

- When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support those workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe's Strong Customer Authentication requirements. Even in that case the charge API continued to work as it did previously, albeit only for non-European Union payments.

- We manage this policy's implied technical debt via an API translation layer. We release changed APIs as versions, tracked in our API version changelog. However, we only maintain one implementation internally, which is the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn't eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions. (A toy sketch of this approach appears at the end of this post.)

- In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future, it may be possible for us to make more backwards-incompatible changes because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

- If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers—or a larger enterprise involving numerous stakeholders—handling external API changes can be particularly challenging. Even if this is only marginally true, we've modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn reduction work.

- While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. Without an API change forcing them to change their integration, we believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.

- We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can't assume that companies that make API changes are ill-informed. Rather, it appears that they experience a meaningful technical debt tradeoff between the API provider and API consumers, and aren't willing to consistently absorb that technical debt internally.

- Future compliance or security requirements—along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI—may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.
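As a rough illustration of the version-transformation idea (entirely a sketch, not Stripe's actual code; the version date and field names are invented):

    # Toy version translation layer: the server implements only the latest
    # response shape, and registered transforms downgrade it, step by step,
    # to the version the client is pinned to.

    from typing import Callable

    # Ordered newest -> oldest; each transform produces the shape of the
    # version it is tagged with from the shape of its immediate successor.
    TRANSFORMS: list[tuple[str, Callable[[dict], dict]]] = []

    def transform(version: str):
        def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
            TRANSFORMS.append((version, fn))
            return fn
        return register

    @transform("2024-01-01")  # hypothetical version and field change
    def flatten_outcome(resp: dict) -> dict:
        # Pretend the latest API nests status under "outcome", while clients
        # pinned to 2024-01-01 or earlier expect a flat "status" field.
        resp = dict(resp)
        resp["status"] = resp.pop("outcome", {}).get("status")
        return resp

    def render(resp: dict, pinned_version: str) -> dict:
        """Downgrade a latest-version response to the client's pinned version."""
        out = resp
        for version, fn in TRANSFORMS:    # newest -> oldest
            if version < pinned_version:  # older than the client needs; stop
                break
            out = fn(out)                 # `out` is now in shape `version`
        return out

    # Example: render({"outcome": {"status": "succeeded"}}, "2024-01-01")
    # returns {"status": "succeeded"}.

The design point this illustrates is the one in the policy above: every API change adds a transform, so old versions keep working while only the newest implementation is maintained directly.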
I've been biking in Brooklyn for a few years now! It's hard for me to believe it, but I'm now one of the people other bicyclists ask questions to. I decided to make a zine that answers the most common of those questions: Bike Brooklyn! is a zine that touches on everything I wish I knew when I started biking in Brooklyn. A lot of this information can be found in other resources, but I wanted to collect it in one place. I hope to update this zine when we get significantly more safe bike infrastructure in Brooklyn and laws change to make streets safer for bicyclists (and everyone) over time, but it's still important to note that each release will reflect a specific snapshot in time of bicycling in Brooklyn.

All text and illustrations in the zine are my own. Thank you to Matt Denys, Geoffrey Thomas, Alex Morano, Saskia Haegens, Vishnu Reddy, Ben Turndorf, Thomas Nayem-Huzij, and Ryan Christman for suggestions for content and help with proofreading.

This zine is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, so you can copy and distribute this zine for noncommercial purposes in unadapted form as long as you give credit to me. Check out the Bike Brooklyn! zine on the web or download PDFs to read digitally or print here!