More from Jibran’s Perspective
I've completed a freelance project I was working on for a few months, and have started saying no to new opportunities. It's time to work on one of my own ideas again. This is part of my plan to start failing more. I've decided to build a business sending gift cards to Pakistan - and eventually other countries in that corner of the world.

Why?

A few years ago I sent a gift card to a colleague in the UK. I found a number of very good options. They all had websites that inspired confidence, and used robust payment methods (Stripe in my case) that I could trust with my credit card. I recently had to send a gift card to a colleague in Pakistan. I was confident I would find a bunch of great options; instead, I found only one that I felt I could trust with my money. I ended up using their services and the card was delivered, but I ran into a number of problems:

- No trust building around card payments. There was no clear mention of which payment provider they used, so I did a bank transfer instead of using a credit card. This meant my payment was manually verified and the card was only sent after a few hours.
- There was no confirmation email about my order. I was worried enough to call their helpline to confirm that my order had gone through. Once they had sent the card (which I also had to confirm via phone), I only got a confirmation email the next day.
- To get an invoice to expense this, I had to send them an email. I'm still waiting on that invoice.
- Multiple colleagues chipped in on this gift card. I had to collect the money from them and then pay for the card myself. When I previously sent a gift card to the UK, I was able to include my colleagues in the process: they added their contributions directly to the gift card I selected, and a card for the total amount was sent to the recipient.
- Finally, there was no option for the receiver to choose which gift card they wanted; I had to choose for them. There is a "Universal Gift Card" they claim works at all merchants - that's the one I got - but redeeming it would be slightly more complicated. Interestingly, my colleague didn't open the email with the gift card because they thought it was a spam/scam/malicious email. Only after I asked if they had received the card did they open it.

I know a better user experience exists. I want to bring the same to Pakistan and solve my own problem at the same time.

Is there a market for this? I believe so, because:

- It's a problem I've just faced.
- I've seen my wife deal with low-trust companies when sending gifts to Pakistan. Gift cards are different, but eventually I could also add the option to send physical gifts to the recipient.
- I've seen my employer deal with this. Recently a baby gift basket arrived 2 months after the baby was born. 🤯
- This is a recurring problem. People & companies need to send gift cards on birthdays, weddings, etc. With more companies starting to hire remotely in Pakistan, this could be a valuable service for businesses to subscribe to.

Validation?

I haven't found an easy way to validate this idea. There is no community of "people sending gift cards to Pakistan" that I can tap into; that isn't a cohort I can find in one place. I could make a list of B2B customers: companies that hire remotely in Pakistan. However, I want to start with individual customers, because I'm starting from a place of solving my own problem. It should be possible to pivot to B2B if I don't find any interest from individual customers.
Validation, then, involves me starting with a blog suggesting gift cards to send to Pakistan. I'll use SEO to bring in traffic. If I see enough visitors, I can start building the business. This also means that if/when the actual product launches, I'll have a distribution channel already working.

What if I'm wrong?

There's a very strong possibility that I'm wrong about this idea - that I'll spend a bunch of time for it to get nowhere, or that I've picked a problem that isn't very valuable to solve. This is my unique brand of fear of failure. I used to think I didn't fear failing, because I had already failed many times. Instead, my fear of failure manifests as a fear of picking the wrong thing and wasting time on it. The way I am dealing with this is to realize that if I don't pick anything - which I have frequently done in the past - I have exactly a 0% chance of succeeding. Just trying something makes that probability greater than 0%. You miss 100% of the shots you don't take.

Another thing that's helping me is to time-box this idea. I will spend 6 weeks building the blog and populating it with as much useful content as possible. After that, I can spend an hour or two every week adding a few more pieces of content. I can start researching and working on a different idea after the 6-week period, and wait for the SEO to have an impact before deciding to continue or abandon this.
As part of a contracting project, I've been building an analytics dashboard for a feedback collection SaaS. The app is built in Ruby on Rails, and given all the nice things I'd heard about Kamal, I decided to use it for deploying the app. The experience has been phenomenal, outside of some frustration with the initial deployment.

The app is deployed on a pretty standard AWS setup: a couple of EC2 servers hosting the web app running inside Docker containers, and a load balancer in front. One of the problems I faced during the initial deployment was forwarding headers from the AWS application load balancer to the Rails server running in the Docker container.

The challenge with Kamal is that it relies heavily on Traefik, and while Traefik is a great tool, it takes some getting used to. Its configuration is not very intuitive, and there's no easy way to see how things are configured other than reading the text logs. The Traefik documentation is pretty thorough, though, so a bit of searching led me to this CLI argument, which needs to be passed to the Traefik container:

entrypoints.http.forwardedheaders.insecure: true

However, no matter what I tried, when I added this, the app container would stop responding to web requests. Without the config, the container would work but throw an exception about the Origin header not matching the configured hosts. After a lot of experimentation, I stumbled upon the other config I needed to add, by pure luck:

entrypoints.http.address: ":80"

As far as I can tell, when I added the forwardedheaders config, the entrypoint no longer got the correct address configuration. I'm not sure if this is related to Kamal or Traefik.

Kamal deploy.yml

If you're looking to replicate a similar setup, here's the Kamal deploy.yml file I'm using with this project to deploy to AWS, with a load balancer terminating the SSL connection and forwarding traffic to web servers that are configured via Kamal. As a bonus, this config also deploys Sidekiq for background tasks.

```yaml
service: <SERVICE NAME>
image: <IMAGE NAME>

ssh:
  user: ubuntu
  proxy: "ubuntu@A.B.C.D"

servers:
  web:
    hosts:
      - "A.B.C.D"
      - "A.B.C.D"
    labels:
      traefik.http.routers.<SERVICE NAME>-web.rule: Host(`<YOUR HOST NAME>`)
  sidekiq:
    hosts:
      - "A.B.C.D"
      - "A.B.C.D"
    traefik: false
    cmd: bundle exec sidekiq

registry:
  server: <AWS ACCOUNT ID>.dkr.ecr.<AWS REGION>.amazonaws.com
  username: AWS
  password: <%= %x(aws ecr get-login-password --region <AWS REGION>) %>

builder:
  local:
    arch: amd64 # Because I develop on an Apple Silicon machine, I need a build target

env:
  clear:
    DATABASE_URL: <DATABASE URL>
  secret:
    - RAILS_MASTER_KEY
    - DB_PASSWORD

traefik:
  args:
    entrypoints.http.address: ":80"
    entrypoints.http.forwardedheaders.insecure: true
    log.level: DEBUG
    accesslog: true
    accesslog.format: json
```
I have failed, and that is exactly what I had hoped for a few months ago in this blog post. This is a good failure. It has taught me things - lessons I can use in the future to avoid failing this way again. But first, a bit of context.

What did I fail at?

In February of 2024 I decided to try my hand at my first "Indie Hacker" hustle: something that would make me money on the internet without having to trade my time for it. A product, instead of the consultancy services I usually provide. I had seen a number of people on Twitter (X) rave about how well their boilerplate templates were doing, and I had just gotten out of a consultancy project where I needed to connect a Next.js frontend to a Django backend. I thought it was the perfect project to start my indie hacking journey. I put up a launch post and started working, updating a build log as I went along. I gave myself until 28th March 2024 to finish it. That, of course, did not happen. Let's talk about why I failed and what I learned.

Episode 1: The one where I don't understand the meaning of MVP

My initial plan was to build a Django + Next.js boilerplate template that provided all of these:

- the base template with a Django backend & Next.js frontend
- working authentication between the backend & frontend
- a Dockerfile to create the backend & frontend containers for deployment
- Terraform scripts to set up infrastructure on AWS
- Celery + Redis for background task processing
- TailwindCSS for the frontend (comes mostly for free with Next.js)
- social auth

This looks achievable in a week or two of work - but only if you're working on it full time. I failed to consider that I have a day job and a life. I was barely able to tick off the first two of these deliverables by the time my 6-week deadline came up.

As a good friend told me later, I should have focused on the minimum amount of value I could deliver. Having just the first two things on my list done would have been enough. I couldn't have charged the $20 I had planned for, but I could have charged $1-$5 for just that. And if no one was interested in spending the cost of a coffee on the MVP of the template, that would have been a good signal that this wasn't going anywhere in its current shape. Instead, by focusing on building something much bigger, I robbed myself of the ability to validate the idea quickly. I spent all my available time coding the template instead of trying to talk to potential customers about it.

Lesson 1: Scope down aggressively.

Episode 2: Where I jumped on the hype-wagon

I settled on building a boilerplate template because that's what I had seen a lot of people on Twitter/X doing lately; I'm chalking this down to recency bias. I had no personal interest in a boilerplate template. It's also not a product I would personally use: I have so far made exactly one project with this tech stack. Most of my other projects are Django or Ruby on Rails. The most successful boilerplate templates I've come across are from people who made a bunch of projects in one tech stack and realized they were doing the same things over and over again - which they then packaged into a template they could reuse. Selling to others was a bonus at first, I'd guess.

I was very enthusiastic about the project at the start, but as time went on I had to force myself to work on it. My lack of interest in this type of project was a big factor. Another factor was there being no way to see the fruits of my labor.
I am currently working on an analytics dashboard for another client (a Ruby on Rails project), and every time I build a feature, I love to play around with it in my free time. I test how it works, make sure the UX is good, and just admire the app I've made. Without using my template to build new projects, I lacked that feedback loop. Without the loop, I quickly lost interest.

Lesson 2: Build something I can use myself. This isn't a job I'm getting paid for, so until it starts generating money, the only motivation I have is building something interesting for myself.

Episode 3: Where I had nothing for potential customers to play around with

This is related to the 1st lesson. Because I didn't have a path to quickly get something out there, there was no way for me to get my "product" into the hands of people who could test it and provide feedback. I think the problem with a boilerplate template style of product is that you can't give people a half-baked thing and ask them to test it. Unlike other SaaS apps, there's no mid-way version of a template: customers have to buy in to use your template with any project they're starting. With SaaS, users can sign up and test, and then leave if they don't like it. There's no easy way of testing a template.

Lesson 3: Build something that can be tested by potential customers easily. For now, I'm going to stick with SaaS-style web apps.

Conclusion

Moving forward:

- I'll be working on web app products that users can sign up for and test very quickly.
- My next few experiments/products will be things that I can use myself as well.

I'll post what I'm going to work on next when I decide, and when I have some time away from my job & the freelance projects currently in progress.
If you're just looking for implementation instructions, skip my ramblings and go straight to the code here.

I'm currently working on my first project after deciding that I needed to fail more and practice finishing projects instead of abandoning them midway once they got "boring". Anyways... this one is still in its interesting phase, so here's a blog post with some things I learned yesterday while working on it. The project is a boilerplate template that should make it easy for devs to start a new project with a Django backend and a Next.js frontend - something I had to struggle with recently.

The problem

The first thing I'm looking to solve is authentication. That was my biggest challenge when working on the contracting project that inspired this template. While there are a number of good posts on how to set up authentication between Django & Next.js, nothing "definitive" came up, and I had to cobble together a weird mess of Django + DRF (Django Rest Framework) and Next.js + NextAuth, sharing a token from Django that was masquerading as a JWT for Next.js. It wasn't pretty, and I knew I could do better.

I considered 2 options for authenticating the Next.js frontend with the Django backend:

- Token-based auth. On logging in, a user receives a token that is stored in local storage by the frontend and sent with every request to the backend.
- Session/cookie-based auth. This is how authentication works in Django by default and is very easy to get started with - it basically comes for free out of the box when you start a new Django project.

While token-based auth is what almost everyone suggests when using a Next.js frontend with any backend technology, I wanted to give session-based auth a try. I was curious what it would take to make it work - if it was even possible.

tl;dr: It was possible to use cookie/session auth between Django & Next.js - though with a few constraints that make it less appealing than the token-based solution.

What follows are my notes on how to set it up, the problems I faced, and why I'm going to go with token-based auth for the template instead.

Learning how CORS & Set-Cookie work

It took me a few hours to get my head around how cross-origin requests and cookies work together, but the actual implementation was surprisingly straightforward. This "mini-quest" gave me a chance to learn a lot about how CORS and cookies work, and I'm happy with the time I spent on it. These are the resources that helped me the most (all from MDN):

- Cross-Origin Resource Sharing
- Same-origin policy
- Using HTTP cookies
- Set-Cookie

And finally, there was a surprise waiting for me! Browsers are almost universally making changes to restrict 3rd-party or cross-domain cookies because of their privacy implications. Here's a nice article from MDN about it: Saying goodbye to third-party cookies in 2024. This is the reason why, while this approach works, I won't be using it in the template. More on that later.

Implementation

Implementing session-based auth between Django & Next.js is pretty simple.

Django configuration

1. Install the django-cors-headers Python package.
2. Add "corsheaders", to your INSTALLED_APPS.
3. Add the "corsheaders.middleware.CorsMiddleware", middleware, right above the existing CommonMiddleware.
4. Set CORS_ALLOWED_ORIGINS = ["http://localhost:3000"], replacing the URL with your frontend URL.
5. Set CORS_ALLOW_CREDENTIALS = True.
6. Configure settings.py to allow cross-domain access for the session cookie:
   - Set SESSION_COOKIE_SAMESITE = "None"
   - Set SESSION_COOKIE_SECURE = True
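Pulled together, those steps look roughly like this in settings.py. This is a minimal sketch (assuming the frontend runs at http://localhost:3000), not the template's actual file:

```python
# settings.py - a minimal sketch of the configuration steps above.

INSTALLED_APPS = [
    # ... the default Django apps ...
    "corsheaders",
]

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "corsheaders.middleware.CorsMiddleware",  # must sit above CommonMiddleware
    "django.middleware.common.CommonMiddleware",
    # ... the rest of the default middleware ...
]

CORS_ALLOWED_ORIGINS = ["http://localhost:3000"]  # your frontend URL
CORS_ALLOW_CREDENTIALS = True  # allow cookies on cross-origin requests

# Let the session cookie travel cross-site; SameSite=None requires Secure.
SESSION_COOKIE_SAMESITE = "None"
SESSION_COOKIE_SECURE = True
```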
Next.js configuration

No configuration is needed on the frontend. However, you do need to use the credentials: "include" option when using the fetch() API to access your backend. Here's a minimal example:

```jsx
"use client";

import { BACKEND_URL } from "@/constants";

async function signIn() {
  const loginData = new FormData();
  loginData.append("username", "admin");
  loginData.append("password", "admin");

  return await fetch(`${BACKEND_URL}/accounts/login/`, {
    method: "POST",
    body: loginData,
    credentials: "include",
  });
}

async function whoAmI() {
  console.log(
    await fetch(`${BACKEND_URL}/accounts/me/`, {
      method: "GET",
      credentials: "include",
    }),
  );
}

export default function Home() {
  return (
    <main className="flex min-h-dvh w-full flex-col justify-around">
      <h1 className="text-center">Home</h1>
      <button className="" onClick={signIn}>
        Sign In
      </button>
      <button onClick={whoAmI}>Who Am I</button>
    </main>
  );
}
```

That's it. That simple piece of code & configuration took me hours to find. Hopefully you can use this example to skip all that time spent trying to figure things out.

Side quest log: Initially, I was not using the credentials: "include" option in the signIn() function above, thinking that I didn't need to send any cookies with the login call - only with the second API call, to the /accounts/me endpoint. That mistake cost me about 2 hours of debugging time. If I had RTFM correctly the first time, I would have seen this:

"include: Tells browsers to include credentials in both same- and cross-origin requests, and always use any credentials sent back in responses."

credentials: "include" controls not only whether cookies are sent, but also whether they are saved when returned by the server.

Why I won't use this solution in the template

Browsers are phasing out 3rd-party cookies (Saying goodbye to third-party cookies in 2024) and adding features to work around that restriction where needed. The simplest way that doesn't require much change is to use Cookies Having Independent Partitioned State (CHIPS). To enable CHIPS, you simply put a Partitioned flag on your Set-Cookie header, like so:

Set-Cookie: session_id=1234; SameSite=None; Secure; Path=/; Partitioned;

Unfortunately, there's no straightforward way to do this in Django for now. There's an open issue to resolve this but, looking at the comments, it isn't likely to be solved anytime soon. Considering this, I opted to use the token-based auth method for my template. I'll write a blog post on that once I get it working over the next few days.
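As an aside on the CHIPS point above: the workaround I've seen floated (and it is very much a hack, which rather proves the point that Django has no clean path here) is to monkey-patch Python's http.cookies.Morsel so it accepts a Partitioned flag, then tag cross-site cookies in a middleware. A rough, untested sketch; the middleware name is mine, not a Django API:

```python
# A rough sketch (untested) of the Morsel monkey-patch workaround.
# This is a hack, not an endorsed Django or stdlib API.
from http.cookies import Morsel

# Teach Python's cookie Morsel about the Partitioned attribute.
Morsel._reserved["partitioned"] = "Partitioned"
Morsel._flags.add("partitioned")


class PartitionedCookiesMiddleware:
    """Adds the Partitioned flag to cross-site (SameSite=None) cookies."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        for morsel in response.cookies.values():
            if str(morsel.get("samesite", "")).lower() == "none":
                morsel["partitioned"] = True
        return response
```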
Links: Gumroad page · Build Log

My accidental New Year's resolution was to work on the one problem that has plagued me for my entire adult life: failure to commit and focus. I decided to work in 6-week "sprints" (inspired by Shape Up) and complete the projects I start - for some known definition of complete. This is the first project I've decided to work on. I'll work on it from today (15th Feb 2024) to 28th Mar 2024, and follow up then with another post about how it went.

The project

The goal is to make & sell a Django + Next.js boilerplate template. What's a boilerplate template? It's the source code for a project that already has many of the things a new project needs set up, for example:

- Stripe subscriptions functionality
- background jobs
- a CSS framework
- user/team management

A great example is SaaS Pegasus, which seems like an amazing boilerplate loved by many people. My boilerplate is going to be much simpler - and also much cheaper. SaaS Pegasus comes with so many features that it's worth the $249 starting price. I'm aiming for $5-$10.

Goals

My goal is to sell this boilerplate to at least 10 people - and have them be happy using it. This means:

- talking to prospective customers and seeing if this can be useful to them. People will have the option of scheduling a 15-minute pre-purchase call with me for $5 to see if this would be useful to them. The payment is purely to make sure I only spend time talking to people who are somewhat serious about purchasing.
- providing excellent after-sales support. I'll include a 60-minute setup call with me for any purchase. While a 60-minute call for a $10 sale isn't scalable, it's a great way for me to talk to customers at the start.
- having a no-questions-asked refund policy. My experience running an e-commerce store in the past tells me this is an amazing way to build trust.
- providing ongoing support, updates, and fixes over email.
- building a mailing list of people interested in my work whom I can email when I launch future projects.

The deliverable

The boilerplate will allow developers to quickly start a project that uses Django for the backend and Next.js for the frontend. My recent experience with another project in this tech stack required me to spend significant time on:

- figuring out how to set up authentication between Django & Next.js (this took the most time & effort)
- setting up Django Rest Framework so I could write APIs to be used by the frontend
- writing Docker files to build the two containers - backend & frontend
- writing Terraform scripts to deploy those containers to AWS ECS
- writing config & scripts to run the project on Gitpod so my team members could easily work on it

My plan is to build a boilerplate that has most of those features built in, plus a few extras:

- Celery with Redis for background task processing
- Tailwind CSS for the frontend (in my project I used ChakraUI, but Tailwind would be a better option for a boilerplate)
- if there's demand for it, a stretch goal is to include social auth (sign in with Google/Apple/etc.)

Once complete, I'll put this on Gumroad and create a landing page there. From then on, it's all about marketing it - the part I have no experience with and hope to learn the most from.

The marketing plan

This is the area where I lack any experience, so I'm not sure how I'm going to market this. Some ideas I have:

- build it in public on Twitter. I have a tiny Twitter following (312 followers), so I'm not sure how useful this will be. But I have to try something.
- share it with people asking how to set up Django & Next.js on forums like Reddit, Stack Overflow, and others.
- maybe write a blog post on how to set up Django & Next.js, and link to the boilerplate from there. The blog post would provide all the steps necessary for the basic setup, and the boilerplate would go beyond that with something that's ready to use.

The build log

I'd also like to keep a build log for this project: a daily note of what I did. I'll keep it in my notes app, Reflect, and periodically publish it in this blog post. These daily notes might also serve as content for my build-in-public marketing strategy.
More in programming
I like the job title "Design Engineer". When required to label myself, I feel partial to that term (I should - I've written about it enough). Lately I've felt like the term is becoming more mainstream which, don't get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle between two binaries. But - and I admit this is a me issue - once a title starts becoming mainstream, I want to use it less and less.

I was never totally sure why I felt this way. Shouldn't I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way? These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan, where he talks about his own penchant for not wanting to be easily defined:

"I've tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying 'this is what you do' then I would be like 'Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.' [For example] I used to crouch around on stage all the time and people would go 'Oh, he's the guy who crouches around back and forth.' And I'm like, 'I'll show them, I will stand erect! Now what are you going to say?' And then they would go 'You're the guy who always feels stupid.' So I started [doing other things]."

He continues, wondering aloud whether this aversion to being easily defined has actually hurt his career in terms of commercial growth:

"I never wanted to be something you could easily define. I think, in some ways, that it's held me back. I have a nice following, but I'm not huge. There are people who are huge, who are great, and deserve to be huge. I've never had that and sometimes I wonder, 'Well maybe it's because I purposely don't want to be a particular thing you can advertise or push.'"

That struck a chord with me. It puts into words my current feelings towards the job title "Design Engineer" - or any job title, for that matter. Seven or so years ago, I would've enthusiastically said, "I'm a Design Engineer!" To which many folks would've said, "What's that?" But today I hesitate. If I say "I'm a Design Engineer" there are fewer follow-up questions. Nowadays that title elicits fewer questions and more (presumed) certainty.

I think I enjoy a title that elicits a "What's that?" response, which allows me to explain myself in more than two or three words, without being put in a box. But once a title becomes mainstream, once people begin to assume they know what it means, I don't like it anymore (speaking for myself, personally). As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn't provide a presumed sense of certainty about who I am and what I do. And I get it - that runs counter to the very purpose of a job title, which is why I don't think it's good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

Them: "Oh, you're a Designer? So you make mock-ups in Photoshop and somebody else implements them."

Me: "I'll show them, I'll implement them myself! Now what are you gonna do?"

Them: "Oh, so you're a Design Engineer? You design and build user interfaces on the front-end."

Me: "I'll show them, I'll write a Node server and set up a database that powers my designs and interactions on the front-end.
Now what are you gonna do?"

Them: "Oh, well, I'm not sure we have a term for that yet. Maybe Full-stack Design Engineer?"

Me: "Oh yeah? I'll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?"

[As you can see, I have some personal issues I need to work through...]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that's not so easily definable, something you can't sum up in two or three words. I've felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams where boundaries are much more permeable, and crossing them is encouraged rather than discouraged.

All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you're trying to take something as complicated and nuanced as an individual human being and reduce them to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels. "I've never wanted to be something you could easily define" stands at odds with the corporate attitude of, "Here's the job req for the role (i.e. cog) we're looking for."
We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks we'll be able to invite a group of finalists to the next phase.

This highlights the folly of thinking that what it'll take to land a job like this is some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration - a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied!

It sounds self-evident, maybe, but I see questions about it time and again, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance.

I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply nonsense. However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.
Recently I got a question on formal methods1: how does it help to mathematically model systems when the system requirements are constantly changing? It doesn't make sense to spend a lot of time proving a design works, and then deliver the product and find out it's not at all what the client needs. As the saying goes, the hard part is "building the right thing", not "building the thing right".

One possible response: "why write tests?" You shouldn't write tests, especially lots of unit tests ahead of time, if you might just throw them all away when the requirements change. This is a bad response because we all know the difference between writing tests and formal methods: testing is easy and FM is hard. Testing requires low cost for moderate correctness, FM requires high(ish) cost for high correctness. And when requirements are constantly changing, "high(ish) cost" isn't affordable and "high correctness" isn't worthwhile, because a kinda-okay solution that solves a customer's problem is infinitely better than a solid solution that doesn't.

But eventually you get something that solves the problem, and what then? Most of us don't work for Google; we can't axe features and products on a whim. If the client is happy with your solution, you are expected to support it. It should work when your customers run into new edge cases, or migrate all their computers to the next OS version, or expand into a market with shoddy internet. It should work when 10x as many customers are using 10x as many features. It should work when you add new features that come into conflict. And just as importantly, it should never stop solving their problem. Canonical example: your feature involves processing requested tasks synchronously. At scale, this doesn't work, so to improve latency you make it asynchronous. Now it's eventually consistent, but your customers were depending on it being always consistent. Now it no longer does what they need, and has stopped solving their problems.

Every successful requirement met spawns a new requirement: "keep this working". That requirement is permanent, or close enough to decide our long-term strategy. It takes active investment to keep a feature behaving the same as the world around it changes. (Is this all a pretentious way of saying "software maintenance is hard"? Maybe!)

Phase changes

In physics there's a concept of a phase transition. To raise the temperature of a gram of liquid water by 1°C, you have to add 4.184 joules of energy.2 This continues until you raise it to 100°C, then it stops. After you've added about two thousand more joules to that gram, it suddenly turns into steam. The energy of the system changes continuously, but the form, or phase, changes discretely.

Software isn't physics, but the idea works as a metaphor. A certain architecture handles a certain level of load, and past that you need a new architecture. Or a bunch of similar features are independently hardcoded until the system becomes too messy to understand, and you remodel the internals into something unified and extendable. Etc., etc., etc. It doesn't have to be a totally discrete phase transition, but there's definitely a "before" and an "after" in the system's form. Phase changes tend to lead to more intricacy/complexity in the system, meaning it's likely that a phase change will introduce new bugs into existing behaviors. Take the synchronous vs. asynchronous case.
A very simple toy model of synchronous updates would be Set(key, val), which updates data[key] to val.3 A model of asynchronous updates would be AsyncSet(key, val, priority), which adds a (key, val, priority, server_time()) tuple to a tasks set; another process asynchronously pulls a tuple (ordered by highest priority, then earliest time) and calls Set(key, val). Here are some properties the client may need preserved as a requirement:

1. If AsyncSet(key, val, _, _) is called, then eventually db[key] = val (possibly violated if higher-priority tasks keep coming in).
2. If someone calls AsyncSet(key1, val1, low) and then AsyncSet(key2, val2, low), they should see the first update and then the second (linearizability; possibly violated if the requests go to different servers with different clock times).
3. If someone calls AsyncSet(key, val, _) and immediately reads db[key], they should get val (obviously violated, though the client may accept a slightly weaker property).

If the new system doesn't satisfy an existing customer requirement, it's prudent to fix the bug before releasing the new system. The customer doesn't notice or care that your system underwent a phase change. They'll just see that one day your product solves their problems, and the next day it suddenly doesn't.

This is one of the most common applications of formal methods. Both of those systems, and every one of those properties, is formally specifiable in a specification language. We can then automatically check that the new system satisfies the existing properties, and from there do things like automatically generate test suites. This does take a lot of work, so if your requirements are constantly changing, FM may not be worth the investment. But eventually requirements stop changing, and then you're stuck with them forever. That's where models shine.

1. As always, I'm using formal methods to mean the subdiscipline of formal specification of designs, leaving out the formal verification of code. Mostly because "formal specification" is really awkward to say. ↩
2. Also called a "calorie". The US "dietary Calorie" is actually a kilocalorie. ↩
3. This is all directly translatable to a TLA+ specification; I'm just describing it in English to avoid paying the syntax tax. ↩
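Since footnote 3 notes the model translates directly into a spec, here's a rough Python rendering of the same Set/AsyncSet toy model - an editor's sketch to make the moving parts concrete, not the TLA+ spec the footnote refers to:

```python
# A rough Python rendering of the Set / AsyncSet toy model above.
import itertools

clock = itertools.count()  # stands in for server_time()
db = {}                    # the data store
tasks = set()              # pending asynchronous updates

def set_sync(key, val):
    """Synchronous update: data[key] := val."""
    db[key] = val

def async_set(key, val, priority):
    """Queue an update instead of applying it immediately."""
    tasks.add((key, val, priority, next(clock)))

def worker_step():
    """Apply one queued update: highest priority first, then earliest time."""
    if not tasks:
        return
    task = min(tasks, key=lambda t: (-t[2], t[3]))
    tasks.remove(task)
    key, val, _, _ = task
    set_sync(key, val)

# Property 3 visibly fails: a read right after AsyncSet sees stale data.
async_set("x", 42, priority=1)
assert "x" not in db    # not yet visible
worker_step()
assert db["x"] == 42    # eventually applied
```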
While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe's most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn't just a technical design quirk, it's a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe's business success.

This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

Design for long API lifetime. APIs are not inherently durable. Instead, we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn't use this API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you've developed? At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussions to ensure API Review members have sufficient context before making a decision.

We never deprecate APIs without an unavoidable requirement to do so. Even if it's technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by the API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support those workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe's Strong Customer Authentication requirements.
Even in that case, the charge API continued to work as it did previously, albeit only for non-European Union payments.

We manage this policy's implied technical debt via an API translation layer. We release changed APIs as versions, tracked in our API version changelog. However, we only maintain one implementation internally, which is the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn't eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions. (A toy sketch of this transformation idea follows at the end of this section.)

In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future, it may be possible for us to make more backwards-incompatible changes, because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers - or a larger enterprise involving numerous stakeholders - handling external API changes can be particularly challenging. Even if this is only marginally true, we've modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn-reduction work.

While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. Without an API change forcing them to change their integration, we believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.

We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can't assume that companies that make API changes are ill-informed. Rather, it appears that they experience a meaningful technical-debt tradeoff between the API provider and API consumers, and aren't willing to consistently absorb that technical debt internally.

Future compliance or security requirements - along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI - may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.
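To make the translation-layer policy concrete, here's a toy sketch of the idea: one modern implementation, with per-version transforms that downgrade responses for callers pinned to older versions. All names and versions here are invented for illustration; this is not Stripe's actual design:

```python
# A toy sketch of an API version-translation layer.

# The single, latest implementation (say, version "2024-04-10").
def create_charge_latest(params: dict) -> dict:
    return {"id": "ch_1", "amount": params["amount"], "currency": params["currency"]}

def downgrade_to_2023_08_16(resp: dict) -> dict:
    """Before (hypothetical) version 2023-08-16, the field was amount_in_cents."""
    resp = dict(resp)
    resp["amount_in_cents"] = resp.pop("amount")
    return resp

# One transform per version boundary, newest first.
DOWNGRADES = [
    ("2023-08-16", downgrade_to_2023_08_16),
]

def create_charge(params: dict, api_version: str) -> dict:
    """Serve any pinned version off the single modern implementation."""
    resp = create_charge_latest(params)
    for boundary, downgrade in DOWNGRADES:
        if api_version <= boundary:  # caller pinned at or before this boundary
            resp = downgrade(resp)
    return resp

# A customer pinned to an old version keeps their old field names:
assert create_charge({"amount": 100, "currency": "usd"}, "2023-01-01")["amount_in_cents"] == 100
assert create_charge({"amount": 100, "currency": "usd"}, "2024-04-10")["amount"] == 100
```

The appeal of this shape is that each new version only has to add its own downgrade step; old versions keep working without anyone maintaining old implementations.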
I've been biking in Brooklyn for a few years now! It's hard for me to believe it, but I'm now one of the people other bicyclists ask questions of. I decided to make a zine that answers the most common of those questions: Bike Brooklyn! is a zine that touches on everything I wish I knew when I started biking in Brooklyn. A lot of this information can be found in other resources, but I wanted to collect it in one place. I hope to update this zine as we get significantly more safe bike infrastructure in Brooklyn and as laws change to make streets safer for bicyclists (and everyone), but it's still important to note that each release will reflect a specific snapshot in time of bicycling in Brooklyn.

All text and illustrations in the zine are my own. Thank you to Matt Denys, Geoffrey Thomas, Alex Morano, Saskia Haegens, Vishnu Reddy, Ben Turndorf, Thomas Nayem-Huzij, and Ryan Christman for suggestions for content and help with proofreading.

This zine is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, so you can copy and distribute this zine for noncommercial purposes in unadapted form as long as you give credit to me. Check out the Bike Brooklyn! zine on the web, or download PDFs to read digitally or print here!