More from Alice GG
Two weeks ago we released Dice’n Goblins, our first game on Steam. This project allowed me to discover and learn a lot of new things about game development and the industry. I will use this blog post to write down what I consider to be the most important lessons from the months spent working on it.

The development started around two years ago, when Daphnée started prototyping a dungeon crawler featuring a goblin protagonist. After a few iterations, the game’s combat started featuring dice, and then those dice could be used to make combos. In May 2024, the game was baptized Dice’n Goblins, and a Steam page was created featuring some early gameplay screenshots and footage. I joined the project full-time around this period. Almost one year later, after amassing more than 8,000 wishlists, the game finally released on Steam on April 4th, 2025. It was received positively by the gaming press, with great reviews from PCGamer and LadiesGamers, and it now sits at 92% positive reviews from players on Steam.

Building RPGs isn’t easy

As you can see from the timeline above, building this game took almost two years and two programmers. This is actually not that long if you consider that other indie RPGs have taken more than six years to come out. The main issue with the genre is that you need to create a believable world. In practice, this requires programming many different systems that interact together to give the impression of a cohesive universe. Every time you add a new system, you need to think about how it will fit with all the existing game features.

For example, players typically expect an RPG to have a shop system. Of course, this means designing a shop non-player character (or building) and creating a UI that is displayed when you interact with it. But it also means thinking through a lot of other systems: combat needs to be changed to reward the player with gold, every item needs a price tag, chests should sometimes reward the player with gold, etc.

Adding too many systems can quickly get into scope-creep territory and make the development exponentially longer. But you can only get away with removing so much before your game stops being an RPG. Making a game without a shop might be acceptable, but the experience still needs to have more features than “walking around and fighting monsters” to feel complete.

RPGs are also, by definition, narrative experiences. While some games have managed to get away with procedurally generating 90% of the content, in general you’ll need to get your hands dirty, write a story, and design a bunch of maps. Creating enough content to fill 12+ hours of play without sending the player through repetitive grind will by itself take a lot of time.

Having said all that, I definitely wouldn’t make any other kind of game than RPGs, because this is what I enjoy playing. I don’t think I would be able to nail what makes other genres fun if I don’t play them enough to understand what separates the good from the mediocre.

Marketing isn’t that complicated

Everyone in the game dev community knows that there are way too many games releasing on Steam. To stand out amongst the 50+ games coming out every day, it’s important not only to have a finished product but also to plan a marketing campaign well in advance. For most people coming from a software engineering background, like me, this can feel extremely daunting. Our education and jobs do not prepare us well for this kind of task.
In practice, it’s not that complicated. If your brain is able to provision a Kubernetes cluster, then you are most definitely capable of running a marketing campaign. Like anything else, it’s a skill you can learn over time by practicing it and iteratively improving your methods.

During the 8 months following the Steam page release, we tried basically everything you can think of to promote the game. Every time something had a positive impact, we did more of it, and we quickly dropped the things with low impact.

The most important thing to keep in mind is your target audience. If you know who wants the kind of game you’re making, it is very easy to find where they hang out and talk to them. This is, however, not an easy question to answer for every game. For a long while, we were not sure who would like Dice’n Goblins. Is it people who like Etrian Odyssey? Fans of Dicey Dungeons? Nostalgic players of Paper Mario? For us, the answer was mostly #1, with a bit of #3. Once we figured out what our target audience was, how to communicate with them, and, most importantly, had a game that was visually appealing enough, marketing became very straightforward. This is why we really struggled to get our first 1,000 wishlists, but getting the last 5,000 was actually not that complicated.

Publishers aren’t magic

At some point, balancing the workload of actually building the game and figuring out how to market it felt like too much for a two-person team. We therefore did what many indie studios do and decided to work with a publisher. We worked with Rogue Duck Interactive, who previously published Dice & Fold, a fairly successful dice roguelike. Without getting too much into details, it didn’t work out as planned, and we decided, by mutual agreement, to go back to self-publishing Dice’n Goblins.

The issue simply came from the audience question mentioned earlier. Even though Dice & Fold and Dice’n Goblins share some similarities, they target different audiences, which requires a completely different approach to marketing. The lesson learned is that when picking a publisher, the most important thing you can do is check that their current game catalog really matches the idea you have of your own game. If you’re building a fast-paced FPS, a publisher that only has experience with cozy simulation games will not be able to help you efficiently. In our situation, a publisher with experience in roguelikes and casual strategy games wasn’t a good fit for an RPG.

Beyond that, I don’t think using a publisher to offload marketing toil and focus on making the game is that good an idea in the long term. While it definitely helps to remove the pressure of handling social media accounts and ad campaigns, new effort is required to communicate and negotiate with the publishing team. In the end, the difference between the work saved and the work added might not be worth selling a chunk of your game.

Conclusion

After all is said and done, one big question I haven’t answered is: would I do it again? The answer is definitely yes. Not only was building this game an extremely satisfying endeavor, but so much was learned and built while doing it that it would be a shame not to go ahead and do a second one.
Neovim is by far my favorite text editor. The clutter-free interface and keyboard-only navigation are what keep me productive in my daily programming. In an earlier post, I explained how I configure it into a minimalist development environment. Today, I will show you how to use it with Godot and GDScript.

Configure Godot

First, we need to tell Godot to use nvim as a text editor instead of the built-in one. Open Godot, and head to Editor Settings > General > Text Editor > External. There, tick the box Use external editor, indicate your Neovim installation path, and use the following as execution flags:

    --server /tmp/godothost --remote-send "<C-\><C-N>:n {file}<CR>{line}G{col}|"

While in the settings, head to Network > Language Server and note down the remote port Godot is using. By default, it should be 6005. We will need that value later.

Connecting to Godot with vim-godot

Neovim can access Godot features through a plugin called vim-godot. We will need to edit the nvim configuration file to install plugins and configure Neovim. On Mac and Linux, it is located at ~/.config/nvim/init.vim. I use vim-plug to manage my plugins, so I can just add it to my configuration like this:

    call plug#begin('~/.vim/plugged')
    " ...
    Plug 'habamax/vim-godot'
    " ...
    call plug#end()

Once the configuration file is modified and saved, use the :PlugInstall command to install it. You’ll also need to indicate Godot’s executable path. Add this line to your init.vim:

    let g:godot_executable = '/Applications/Godot.app/Contents/MacOS/Godot'

For vim-godot to communicate with the Godot editor, it needs to listen to the /tmp/godothost file we configured in the editor previously. To do that, simply launch nvim with the flag --listen /tmp/godothost. To save yourself some precious keypresses, I suggest creating a new alias in your bashrc/zshrc like this:

    alias gvim="nvim --listen /tmp/godothost"

Getting autocompletion with coc.nvim

Godot ships with a language server, which means the Godot editor can provide autocompletion, syntax highlighting, and advanced navigation to external editors like nvim. While Neovim now has built-in support for the Language Server Protocol, I’ve used the plugin coc.nvim to obtain these features for years and see no reason to change. You can also install it with vim-plug by adding the following line to your plugin list:

    Plug 'neoclide/coc.nvim', {'branch': 'release'}

Run :PlugInstall again to install it. You’ll need to indicate the Godot language server address and port using the command :CocConfig. It should open Coc’s configuration file, which is a JSON file normally located at ~/.config/nvim/coc-settings.json. In this file, enter the following data, and make sure the port number matches the one you noted in the editor settings:

    {
      "languageserver": {
        "godot": {
          "host": "127.0.0.1",
          "filetypes": ["gdscript"],
          "port": 6005
        }
      }
    }

I recommend adding Coc’s example configuration to your init.vim file. You can find it on GitHub. It will provide you with a lot of useful shortcuts, such as using gd to go to a function definition and gr to list its references.
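If you only want those navigation mappings to start with, a minimal excerpt in the spirit of Coc’s example configuration (using coc.nvim’s <Plug> mappings) looks like this:

    " Code navigation mappings provided by coc.nvim
    nmap <silent> gd <Plug>(coc-definition)
    nmap <silent> gy <Plug>(coc-type-definition)
    nmap <silent> gi <Plug>(coc-implementation)
    nmap <silent> gr <Plug>(coc-references)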
Debugging using nvim-dap

If you want to use the debugger from inside Neovim, you’ll need to install another plugin called nvim-dap. Add the following to your plugin list:

    Plug 'mfussenegger/nvim-dap'

The plugin authors suggest configuring it using Lua, so let’s do that by adding the following to your init.vim:

    lua <<EOF
    local dap = require("dap")
    dap.adapters.godot = {
      type = "server",
      host = "127.0.0.1",
      port = 6006,
    }
    dap.configurations.gdscript = {
      {
        type = "godot",
        request = "launch",
        name = "Launch scene",
        project = "${workspaceFolder}",
        launch_scene = true,
      },
    }
    vim.api.nvim_create_user_command("Breakpoint", "lua require'dap'.toggle_breakpoint()", {})
    vim.api.nvim_create_user_command("Continue", "lua require'dap'.continue()", {})
    vim.api.nvim_create_user_command("StepOver", "lua require'dap'.step_over()", {})
    vim.api.nvim_create_user_command("StepInto", "lua require'dap'.step_into()", {})
    vim.api.nvim_create_user_command("REPL", "lua require'dap'.repl.open()", {})
    EOF

This will connect to Godot’s debug adapter (here on port 6006, the default) and allow you to pilot the debugger using the following commands (a key-mapping sketch for these commands follows at the end of this post):

- :Breakpoint to create (or remove) a breakpoint
- :Continue to launch the game or run until the next breakpoint
- :StepOver to step over a line
- :StepInto to step inside a function definition
- :REPL to launch a REPL (useful if you want to examine values)

Conclusion

I hope you’ll have a great time developing Godot games with Neovim. If it helps you, you can check out my entire init.vim file on GitHub Gist.
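Appendix: the key-mapping sketch mentioned above. The nvim-dap functions are the same ones used in the configuration; the specific keys are my own suggestion, so adjust them to taste:

    lua <<EOF
    -- Optional key mappings for nvim-dap (the key choices are arbitrary)
    local dap = require("dap")
    vim.keymap.set("n", "<F5>", dap.continue, { desc = "DAP: continue / launch scene" })
    vim.keymap.set("n", "<F10>", dap.step_over, { desc = "DAP: step over" })
    vim.keymap.set("n", "<F11>", dap.step_into, { desc = "DAP: step into" })
    vim.keymap.set("n", "<Leader>b", dap.toggle_breakpoint, { desc = "DAP: toggle breakpoint" })
    vim.keymap.set("n", "<Leader>r", dap.repl.open, { desc = "DAP: open REPL" })
    EOF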
It’s been around two years since I quit my long-term addiction to stable jobs. Quite a few people who read this blog have been wondering what the hell exactly I’ve been doing since then, so I’m going to update all of you on the various projects I’ve been working on.

Meme credit: Fabian Stadler

Mikochi

Last year, I created Mikochi, a minimalist remote file browser written in Go and Preact. It has slowly been getting more and more users, and it now sits at more than 200 GitHub stars and more than 6,000 Docker pulls. I personally use it almost every day, and it fits my use case perfectly. It is basically feature-complete, so I don’t do much development on it. I’ve actually been hoping users would help me solve the few remaining GitHub issues. So far it has happened twice; a good start, I guess.

Itako

You may have seen a couple of posts on this blog regarding finance. It’s a subject I’ve been trying to learn more about for a while now. This led me to read some excellent books, including Nassim Taleb’s Fooled by Randomness, Robert Shiller’s Irrational Exuberance, and Robert Carver’s Smart Portfolios. Those books pushed me toward a more systematic approach to investing, and I’ve built Itako to help me with that. I haven’t talked about it on this blog so far, but it’s a SaaS product that gives clear data visualizations of a stock portfolio’s performance, volatility, and diversification.

It’s currently in beta and usable for free. I’m quite happy that there are actually people using it and that it seems to work without any major issues. However, I think making it easier to use and adding a couple more features will be necessary to turn it into a commercially viable product. I try to work on it when I find the time, but for the next couple of months, I have to prioritize the next project.

Dice’n Goblins

I play RPGs too much, and now I’m even working on making them. This project was actually not started by me but by Daphnée Portheault. In the past, we worked on a couple of game jams and produced Cosmic Delusion and Duat. Now we’re trying to make a real commercial game called Dice’n Goblins.

The game is about a goblin who tries to escape from a dungeon that seems to grow endlessly. It’s inspired by classic dungeon crawlers like Etrian Odyssey and Lands of Lore. The twist is that you have to use dice to fight monsters. Equipping items you find in the dungeon gives you new dice, and using skills allows you to change the dice values during combat (and make combos).

We managed to obtain a decent amount of traction on this project, and it’s now being published by Rogue Duck Interactive. The full game should come out in Q1 2025 for PC, Mac, and Linux. You can already play the demo (and wishlist the game) on Steam. If you’re really enthusiastic about it, don’t hesitate to join the Discord community.

Technically, it’s quite a big change for me to work on game dev, since I can’t use that many of the reflexes I built while working on infrastructure subjects. But I’m getting more and more comfortable with Godot and with all the new game-development lingo. It’s also been an occasion to do a bit of work on non-code topics, like press relations.

Japanese

Something totally unrelated to tech: since I’ve managed to reach a ‘goed genoeg’ level of Dutch, I’ve also started to learn more Japanese. I’ve almost reached the N4 level. (By almost I mean I’ve failed, but it was close.)
A screenshot from the Kanji Study Android app

I’ve managed to learn all the hiragana and katakana, plus basic vocabulary and grammar. So now all I have left to do is a huge amount of immersion and more kanji grinding. This is tougher than I thought it would be, but I guess it’s fun that I can pretend to be studying while playing Dragon Quest XI in Japanese.
Since 2019, Apple has required all macOS software to be signed and notarized. This is meant to prevent naive users from installing malware while running software from unknown sources. Since this process is convoluted, it stops many indie game developers from releasing their Godot games on Mac. To help with that, this article will attempt to document each and every step of the signing and notarization process.

Photo by Natasya Chen

Step 0: Get a Mac

While tools exist to codesign/notarize Mac executables from other platforms, I think having access to a macOS machine will remove quite a few headaches. A Mac VM, or even a cloud machine, might do the job. I have not personally tested those alternatives, so if you do, please tell me if they work well.

Step 1: Get an Apple ID and the Developer app

You can create an Apple ID through Apple’s website. While the process should be straightforward, it seems like Apple has trust issues when it comes to email addresses from protonmail.com or custom domains. Do not hesitate to contact support in case you encounter issues creating or logging into your Apple ID; they are quite responsive. Once you have a working Apple ID, use it to log into the App Store on your Mac and install the Apple Developer application.

Step 2: Enroll in the Apple Developer Program

Next, open the Apple Developer app, log in, and ask to “Enroll” in the developer program. This will require you to scan your ID, fill in data about you and your company, and most likely confirm those data with a support agent by phone. The process costs ~$99 per year and should take between 24 and 48 hours.

Step 3: Set up Xcode

Xcode will be used to codesign and notarize your app through Godot. Install it through the App Store, like you did for the Apple Developer application. Once the app is installed, we need to accept its license. First, launch Xcode and close it. Then open a terminal and run the following commands:

    sudo xcode-select -s /Applications/Xcode.app/Contents/Developer
    sudo xcodebuild -license accept

Step 4: Generate a certificate signing request

To obtain a code signing certificate, we need to generate a certificate request. To do this, open Keychain Access and, from the top menu, select Keychain Access > Certificate Assistant > Request a Certificate From a Certificate Authority. Fill in your email address and choose “Request is: Saved to disk”. Click on Continue and save the .certSigningRequest file somewhere.

Step 5: Obtain a code signing certificate

Now, head to the Apple Developer website. Log in and go to the certificate list. Click on the + button to create a new certificate. Check Developer ID Application when asked for the type of certificate and click on Continue. On the next screen, upload the certificate signing request we generated in step 4. You’ll be prompted to download your certificate. Do it, and add it to Keychain by double-clicking on the file. You can check that your certificate was properly added by running the following command:

    security find-identity -v -p codesigning

It should return (at least) one identity.

Step 6: Get an App Store Connect API key

Back on the Apple Developer website, go to Users and Access and open the Integrations tab. From this page, request access to the App Store Connect API. This access should normally be granted immediately. Then create a new key by clicking on the + icon. Give your key a name you will remember and give it Developer access. Click on Generate and the key will be created.
You will then be prompted to download your key. Do it and store the file safely, as you will only be able to download it once.

Step 7: Configure Godot

Open your Godot project and head to the project settings using the top menu (Project > Project Settings). From there, search for VRAM Compression and check Import ETC2 ASTC. Then make sure you have installed up-to-date export templates by going through the Editor > Manage Export Templates menu and clicking on Download and Install.

To export your project, head to Project > Export. Click on Add and select macOS to create a new preset. In the preset form on the left, you’ll have to fill in a unique Bundle Identifier in the Application section; this can be something like com.yourcompany.yourgame. In the Codesign section, select Xcode codesign and fill in your Apple Team ID and Identity. Both can be found in the output of the security find-identity -v -p codesigning command: the first part (~40 characters) is your identity, and the last part (~10 characters, between parentheses) is your Team ID. In the Notarization section, select Xcode notarytool and fill in your API UUID (found on the App Store Connect page), API Key (the file you saved in step 6), and API Key ID (also found on the App Store Connect page). Click on Export Project… to start the export.

Step 8: Checking the notarization status

Godot will automatically send your exported file for notarization. You can check the notarization progress by running:

    xcrun notarytool history --key YOUR_AUTH_KEY_FILE.p8 --key-id YOUR_KEY_ID --issuer YOUR_ISSUER_ID

According to Apple, the process should rarely take more than 15 minutes. Empirically, this is sometimes very false, and the process can give you enough time to grab a coffee, bake a cake, and water your plants. Once the notarization appears completed (with the status Valid or Invalid), you can run this command to check the result, using the job ID found in the previous command’s output:

    xcrun notarytool log --key YOUR_AUTH_KEY_FILE.p8 --key-id YOUR_KEY_ID --issuer YOUR_ISSUER_ID YOUR_JOB_ID

Step 9: Stapling your executable

To make sure that your executable can work offline, you are supposed to ‘staple’ the notarization to it. This is done by running the following command:

    xcrun stapler staple MY_SOFTWARE.dmg

Extra: Exporting the game as .app

Godot can export your game as .dmg, .zip, or .app. For most users, it is more convenient to receive the game as a .app, since those can be executed directly. However, the notarization process doesn’t support uploading .app files to Apple’s servers. I think the proper way to obtain a notarized .app file is to (see the shell sketch at the end of this post):

- Export the project as a .dmg from Godot, with code signing and notarization
- Mount the .dmg and extract the .app located inside it
- Staple the .app bundle

Extra: Code signing GodotSteam

GodotSteam is a Godot add-on that wraps the Steam API in GDScript. If you use it, you might encounter issues during notarization, because it adds a bunch of .dylib and .framework files. What I did to work around that was to codesign the framework folders:

    codesign --deep --force --verify --verbose --sign "Developer ID Application: My Company" libgodotsteam.macos.template_release.framework
    codesign --deep --force --verify --verbose --sign "Developer ID Application: My Company" libgodotsteam.macos.template_debug.framework

I also checked the options Allow DyLd Environment Variable and Disable Library Validation in the export settings (section Codesign > Entitlements).

FAQ: Is this really necessary if I’m just going to publish my game on Steam?
Actually, I’m not 100% sure, but I think it is only “recommended” and Steam can bypass the notarization. Steamworks does contain a checkbox asking if App Bundles Are Notarized, so I assume it might do something.
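Appendix: the shell sketch referenced in the .app section above. File and volume names are placeholders for your own project (the mounted volume name depends on how your .dmg was built):

    # Assumes MyGame.dmg was exported from Godot with code signing and notarization enabled
    hdiutil attach MyGame.dmg                         # mount the disk image
    cp -R "/Volumes/MyGame/MyGame.app" ./MyGame.app   # copy the .app bundle out of it
    hdiutil detach "/Volumes/MyGame"                  # unmount the disk image
    xcrun stapler staple MyGame.app                   # staple the notarization ticket to the .app
    xcrun stapler validate MyGame.app                 # optional: verify the staple worked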
Talking to the press is an inevitable part of marketing a game or software. To make the journalist’s job easier, it’s a good idea to put together a press kit. The press kit should contain all the information someone could want when writing an article about your product, as well as downloadable, high-resolution assets.

Dice'n Goblins

Introducing Milou

Milou is a Node.js tool that generates press kits in the form of static websites. It aims to create beautiful, fast, and responsive press kits, using only YAML configuration files. I built it on top of presskit.html, which solved the same problem but isn’t actively maintained at the moment. Milou improves on that foundation by using more modern CSS, YAML instead of XML, and up-to-date JavaScript code.

Installation

First, you will need to have Node.js installed:

    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    nvm install 22

Once Node is ready, you can use npm to install Milou:

    npm install -g milou

Running milou -V should display its version (currently 1.1.1).

Let’s build a press kit

Let’s create a new project:

    mkdir mypresskit
    cd mypresskit
    milou new

The root directory of your project will be used for your company. In this directory, the file data.yml should contain data about your company, such as your company name, location, website, etc. You can find an example of a fully completed company data.yml file on GitHub (and a small hypothetical sketch at the end of this post). To check that your file is valid YAML, you can use an online validator.

Your company directory should contain a sub-folder called images; put the illustrations you want to appear in your press kit inside it. Any file named header.*** will be used as the page header, favicon.ico will be used as the page favicon, and files prefixed with the word logo will appear in a dedicated logo section of the page (e.g. logo01.png or logo.jpg). Other images you put in this folder will be included in your page, in the Images section.

After completing your company page, we can create a product page. This is done in a subfolder:

    mkdir myproduct
    cd myproduct
    milou new -t product

Just like for the company, you should fill in the data.yml file with info about your product, like its title, features, and prices. You can find an example of a product file on GitHub. The product folder should also contain an images subfolder; it works the same way as for the company.

When your product is ready, go back to the company folder and build the press kit:

    cd ../
    milou build .

This will generate the HTML and CSS files for your online press kit in the build directory. You can then use any web server to serve them. For example, this will make them accessible from http://localhost:3000/:

    cd build
    npx serve

To put your press kit online, you can upload this folder to any static site host, like Cloudflare Pages, Netlify, or your own server running Nginx.

Conclusion

Milou is still quite new, and if you encounter issues while using it, don’t hesitate to open an issue. And if it works perfectly for you, leave a star on GitHub.
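Appendix: the hypothetical data.yml sketch mentioned above. The field names here are illustrative assumptions, not Milou’s actual schema; refer to the example files linked on GitHub for the real keys:

    # Hypothetical illustration only -- check the example data.yml on GitHub for the actual field names
    name: My Game Studio
    website: https://example.com
    location: Rotterdam, The Netherlands
    press-contact: press@example.com
    description: >
      A two-person indie studio making dice-based dungeon crawlers.
    socials:
      - https://bsky.app/profile/example.com
      - https://discord.gg/example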
More in programming
We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks, we'll be able to invite a group of finalists to the next phase.

This highlights the folly of thinking that what it'll take to land a job like this is some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration, like a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied!

It sounds self-evident, maybe, but I see questions time and again about it, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance.

I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply retarded. However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.
Recently I got a question on formal methods[1]: how does it help to mathematically model systems when the system requirements are constantly changing? It doesn't make sense to spend a lot of time proving a design works, and then deliver the product and find out it's not at all what the client needs. As the saying goes, the hard part is "building the right thing", not "building the thing right".

One possible response: "why write tests"? You shouldn't write tests, especially lots of unit tests ahead of time, if you might just throw them all away when the requirements change. This is a bad response because we all know the difference between writing tests and formal methods: testing is easy and FM is hard. Testing requires low cost for moderate correctness, FM requires high(ish) cost for high correctness. And when requirements are constantly changing, "high(ish) cost" isn't affordable and "high correctness" isn't worthwhile, because a kinda-okay solution that solves a customer's problem is infinitely better than a solid solution that doesn't.

But eventually you get something that solves the problem, and what then? Most of us don't work for Google; we can't axe features and products on a whim. If the client is happy with your solution, you are expected to support it. It should work when your customers run into new edge cases, or migrate all their computers to the next OS version, or expand into a market with shoddy internet. It should work when 10x as many customers are using 10x as many features. It should work when you add new features that come into conflict. And just as importantly, it should never stop solving their problem.

Canonical example: your feature involves processing requested tasks synchronously. At scale, this doesn't work, so to improve latency you make it asynchronous. Now it's eventually consistent, but your customers were depending on it being always consistent. Now it no longer does what they need, and has stopped solving their problems.

Every successful requirement met spawns a new requirement: "keep this working". That requirement is permanent, or close enough to decide our long-term strategy. It takes active investment to keep a feature behaving the same as the world around it changes. (Is this all a pretentious way of saying "software maintenance is hard"? Maybe!)

Phase changes

In physics there's a concept of a phase transition. To raise the temperature of a gram of liquid water by 1°C, you have to add 4.184 joules of energy.[2] This continues until you raise it to 100°C, then it stops. After you've added two thousand joules to that gram, it suddenly turns into steam. The energy of the system changes continuously but the form, or phase, changes discretely.

Software isn't physics, but the idea works as a metaphor. A certain architecture handles a certain level of load, and past that you need a new architecture. Or a bunch of similar features are independently hardcoded until the system becomes too messy to understand, so you remodel the internals into something unified and extendable. Etc., etc. It doesn't have to be a totally discrete phase transition, but there's definitely a "before" and "after" in the system's form.

Phase changes tend to lead to more intricacy/complexity in the system, meaning it's likely that a phase change will introduce new bugs into existing behaviors. Take the synchronous vs asynchronous case.
A very simple toy model of synchronous updates would be Set(key, val), which updates data[key] to val.[3] A model of asynchronous updates would be AsyncSet(key, val, priority), which adds a (key, val, priority, server_time()) tuple to a tasks set; another process then asynchronously pulls a tuple (ordered by highest priority, then earliest time) and calls Set(key, val). (A small Python sketch of this model appears at the end of this post.)

Here are some properties the client may need preserved as a requirement:

- If AsyncSet(key, val, _) is called, then eventually data[key] = val (possibly violated if higher-priority tasks keep coming in).
- If someone calls AsyncSet(key1, val1, low) and then AsyncSet(key2, val2, low), they should see the first update and then the second (linearizability, possibly violated if the requests go to different servers with different clock times).
- If someone calls AsyncSet(key, val, _) and immediately reads data[key], they should get val (obviously violated, though the client may accept a slightly weaker property).

If the new system doesn't satisfy an existing customer requirement, it's prudent to fix the bug before releasing the new system. The customer doesn't notice or care that your system underwent a phase change. They'll just see that one day your product solves their problems, and the next day it suddenly doesn't.

This is one of the most common applications of formal methods. Both of those systems, and every one of those properties, is formally specifiable in a specification language. We can then automatically check that the new system satisfies the existing properties, and from there do things like automatically generate test suites. This does take a lot of work, so if your requirements are constantly changing, FM may not be worth the investment. But eventually requirements stop changing, and then you're stuck with them forever. That's where models shine.

[1] As always, I'm using formal methods to mean the subdiscipline of formal specification of designs, leaving out the formal verification of code. Mostly because "formal specification" is really awkward to say.
[2] Also called a "calorie". The US "dietary Calorie" is actually a kilocalorie.
[3] This is all directly translatable to a TLA+ specification; I'm just describing it in English to avoid paying the syntax tax.
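Appendix: the Python sketch of the toy model mentioned above. It is a plain simulation under simplifying assumptions (one store, one queue, one worker), not the TLA+ specification the footnote refers to:

    import heapq
    import itertools

    data = {}                   # the key-value store
    tasks = []                  # pending asynchronous writes, kept as a heap
    clock = itertools.count()   # stands in for server_time(): strictly increasing

    def set_(key, val):
        """Synchronous update: takes effect immediately."""
        data[key] = val

    def async_set(key, val, priority):
        """Asynchronous update: enqueue the write to be applied later."""
        # heapq pops the smallest tuple first, so negate the priority to get
        # "highest priority first", breaking ties by earliest enqueue time.
        heapq.heappush(tasks, (-priority, next(clock), key, val))

    def process_one():
        """The background process: apply one pending write, if any."""
        if tasks:
            _, _, key, val = heapq.heappop(tasks)
            set_(key, val)

    # The third property above ("read your own write") is already violated by this model:
    async_set("x", 1, priority=1)
    assert data.get("x") != 1   # the write has not been applied yet
    process_one()
    assert data["x"] == 1       # it only holds once the background process has run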
While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe’s most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn’t just a technical design quirk, it’s a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe’s business success.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

- Design for long API lifetime. APIs are not inherently durable. Instead we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn’t use this API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you’ve developed? At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

- All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussions to ensure API Review members have sufficient context before making a decision.

- We never deprecate APIs without an unavoidable requirement to do so. Even if it’s technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

- When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support those workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe’s Strong Customer Authentication requirements. Even in that case, the charge API continued to work as it did previously, albeit only for non-European Union payments.
- We manage this policy’s implied technical debt via an API translation layer. We release changed APIs as versions, tracked in our API version changelog. However, we only maintain one implementation internally, which is the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn’t eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions. (A toy sketch of this idea appears at the end of this chapter.)

- In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future, it may be possible for us to make more backwards-incompatible changes, because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

- If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers, or a larger enterprise involving numerous stakeholders, handling external API changes can be particularly challenging. Even if this is only marginally true, we’ve modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn reduction work.

- While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. Without an API change forcing them to change their integration, we believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.

- We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can’t assume that companies that make API changes are ill-informed. Rather, it appears that they experience a meaningful technical debt tradeoff between the API provider and API consumers, and aren’t willing to consistently absorb that technical debt internally.

- Future compliance or security requirements, along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI, may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.
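Appendix: the toy sketch of the version-transformation idea referenced in the policy section above. This is purely illustrative and not Stripe’s actual implementation; the version names, fields, and handler below are invented for the example:

    # Illustrative sketch of a version-translation layer: one latest implementation,
    # plus per-version transforms that rewrite old requests up and new responses down.

    LATEST = "2024-01-01"

    def handle_latest(request):
        """The single implementation maintained internally (latest API version)."""
        return {"id": "ch_123", "amount": request["amount"], "captured": True}

    def upgrade_2019_request(req):
        """Rewrite an old-version request into the next version's shape."""
        req = dict(req)
        req["amount"] = req.pop("amount_cents")
        return req

    def downgrade_2019_response(res):
        """Rewrite a new-version response back into the old version's shape."""
        res = dict(res)
        res["paid"] = res.pop("captured")
        return res

    # Each older version knows how to step to the next one.
    TRANSFORMS = {
        "2019-05-16": ("2024-01-01", upgrade_2019_request, downgrade_2019_response),
    }

    def handle(request, version):
        """Serve any pinned version by translating through the chain of transforms."""
        chain = []
        while version != LATEST:
            next_version, upgrade, downgrade = TRANSFORMS[version]
            request = upgrade(request)
            chain.append(downgrade)
            version = next_version
        response = handle_latest(request)
        for downgrade in reversed(chain):   # walk back down to the caller's pinned version
            response = downgrade(response)
        return response

    print(handle({"amount_cents": 500}, "2019-05-16"))
    # -> {'id': 'ch_123', 'amount': 500, 'paid': True}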
I like the job title “Design Engineer”. When required to label myself, I feel partial to that term (I should, I’ve written about it enough). Lately I’ve felt like the term is becoming more mainstream which, don’t get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle between two binaries. But — and I admit this is a me issue — once a title starts becoming mainstream, I want to use it less and less.

I was never totally sure why I felt this way. Shouldn’t I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way? These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan where he talks about his own penchant for not wanting to be easily defined:

“I’ve tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying ‘this is what you do’ then I would be like ‘Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.’ [For example] I used to crouch around on stage all the time and people would go ‘Oh, he’s the guy who crouches around back and forth.’ And I’m like, ‘I’ll show them, I will stand erect! Now what are you going to say?’ And then they would go ‘You’re the guy who always feels stupid.’ So I started [doing other things].”

He continues, wondering aloud whether this aversion to being easily defined has actually hurt his career in terms of commercial growth:

“I never wanted to be something you could easily define. I think, in some ways, that it’s held me back. I have a nice following, but I’m not huge. There are people who are huge, who are great, and deserve to be huge. I’ve never had that and sometimes I wonder, ‘Well maybe it’s because I purposely don’t want to be a particular thing you can advertise or push.’”

That struck a chord with me. It puts into words my current feelings towards the job title “Design Engineer” — or any job title for that matter. Seven or so years ago, I would’ve enthusiastically said, “I’m a Design Engineer!” To which many folks would’ve said, “What’s that?” But today I hesitate. If I say “I’m a Design Engineer” there are fewer follow-up questions. Nowadays that title elicits fewer questions and more (presumed) certainty.

I think I enjoy a title that elicits a “What’s that?” response, which allows me to explain myself in more than two or three words, without being put in a box. But once a title becomes mainstream, once people begin to assume they know what it means, I don’t like it anymore (speaking for myself, personally). As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn’t provide a presumed sense of certainty about who I am and what I do. And I get it, that runs counter to the very purpose of a job title, which is why I don’t think it’s good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

Them: “Oh you’re a Designer? So you make mock-ups in Photoshop and somebody else implements them.”
Me: “I’ll show them, I’ll implement them myself! Now what are you gonna do?”
Them: “Oh, so you’re a Design Engineer? You design and build user interfaces on the front-end.”
Me: “I’ll show them, I’ll write a Node server and set up a database that powers my designs and interactions on the front-end. Now what are they gonna do?”
Them: “Oh, well, I’m not sure we have a term for that yet. Maybe Full-stack Design Engineer?”
Me: “Oh yeah? I’ll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?”

[As you can see, I have some personal issues I need to work through…]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that’s not so easily definable, something you can’t sum up in two or three words. I’ve felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams where boundaries are much more permeable and crossing them is encouraged rather than discouraged.

All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you’re trying to take something as complicated and nuanced as individual human beings and reduce them to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels. “I’ve never wanted to be something you could easily define” stands at odds with the corporate attitude of “Here’s the job req. for the role (i.e. cog) we’re looking for.”
I've been biking in Brooklyn for a few years now! It's hard for me to believe it, but I'm now one of the people other bicyclists ask questions to. I decided to make a zine that answers the most common of those questions: Bike Brooklyn! is a zine that touches on everything I wish I knew when I started biking in Brooklyn.

A lot of this information can be found in other resources, but I wanted to collect it in one place. I hope to update this zine when we get significantly more safe bike infrastructure in Brooklyn and laws change to make streets safer for bicyclists (and everyone) over time, but it's still important to note that each release will reflect a specific snapshot in time of bicycling in Brooklyn.

All text and illustrations in the zine are my own. Thank you to Matt Denys, Geoffrey Thomas, Alex Morano, Saskia Haegens, Vishnu Reddy, Ben Turndorf, Thomas Nayem-Huzij, and Ryan Christman for suggestions for content and help with proofreading.

This zine is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, so you can copy and distribute this zine for noncommercial purposes in unadapted form as long as you give credit to me. Check out the Bike Brooklyn! zine on the web or download PDFs to read digitally or print here!