Links: Gumroad page | Build Log

My accidental new year's resolution was to work on the one problem that has plagued me for my entire adult life: failure to commit and focus. I decided to work in 6 week "sprints" (inspired by Shape Up) and complete the projects I start - for some known definition of complete. This is the first project I have decided to work on. I'll work on it from today (15th Feb 2024) to 28th Mar 2024, and I'll follow up then with another post talking about how it went.

The project

The goal is to make & sell a Django + NextJS boilerplate template. What's a boilerplate template? It's the source code for a project that's already set up with many of the things a new project needs, for example:

- Stripe subscriptions functionality
- Background jobs
- CSS framework
- User/team management

A great example is SaaS Pegasus, which seems like an amazing boilerplate loved by many people. My boilerplate is going to be much simpler - and also much cheaper. SaaS Pegasus comes with so many...
11 months ago

More from Jibran’s Perspective

Project 2: Gift cards to Pakistan

I've completed a freelance project I was working on for a few months, and have started saying no to new opportunities. It's time to work on one of my own ideas again. This is part of my plan to start failing more. I've decided to build a business sending gift cards to Pakistan - and eventually other countries in that corner of the world.

Why?

A few years ago I had sent a gift card to a colleague in the UK. I found a number of very good options. They all had websites that inspired confidence, and used robust payment methods (Stripe in my case) that I could trust with my credit card. I recently had to send a gift card to a colleague in Pakistan. I was confident that I would find a bunch of great options; instead I found only one that I could think of trusting with my money. I ended up using their services and the card was delivered, but there were a number of problems I saw:

- No trust building around card payments. There was no clear mention of which payment provider they used, so I did a bank transfer instead of using a CC. This meant my payment was manually verified and the card was only sent after a few hours.
- There was no confirmation email about my order. I was worried enough to call their helpline to confirm that my order had gone through. Once they had sent the card (which I also had to confirm via phone), I only got a confirmation email the next day.
- To get an invoice to expense this, I had to send them an email. I'm still waiting on that invoice.
- Multiple colleagues chipped in on this gift card, but I had to collect the money from them and then pay for the card myself. In my previous experience of sending a gift card to the UK, I was able to include my colleagues in the process: they added their contributions directly to the gift card I selected, and a card for the total amount was sent to the recipient.
- Finally, there was no option for the receiver to choose which gift card they wanted; I had to choose for them. There is a "Universal Gift Card" they claim works at all merchants - it's the one I got - but redeeming it would be slightly more complicated.

Interestingly, my colleague didn't open the email they received with the gift card because they thought it was a spam/scam/malicious email. Only after I asked if they had received the card did they end up opening it.

I know a better user experience exists. I want to bring the same to Pakistan and solve my own problem at the same time.

Is there a market for this?

I believe so, because:

- It's a problem I've just faced.
- I've seen my wife having to deal with low-trust companies when sending gifts to Pakistan. Gift cards are different, but eventually I could also add the option to send physical gifts to the recipient.
- I've seen my employer deal with this. Recently a baby gift basket arrived 2 months after the baby was born. 🤯
- This is a recurring problem. People & companies need to send gift cards on birthdays, weddings, etc. With more companies starting to hire remotely in Pakistan, this could be a valuable service for businesses to subscribe to.

Validation?

I haven't found an easy way to validate this idea. There is no community of "people sending gift cards to Pakistan" that I can tap into; that isn't a cohort I can find in one place. I could make a list of B2B customers - companies that hire remotely in Pakistan. However, I want to start with individual customers, because I'm starting from a place of solving my own problem. It should be possible to pivot to B2B if I don't find any interest from individual customers.
Validation, then, involves me starting with a blog suggesting gift cards to send to Pakistan. I'll use SEO to bring in traffic. If I see enough visitors, I can start building the business. This also means that if/when the actual product launches, I'll already have a working distribution channel.

What if I'm wrong?

There's a very strong possibility that I'm wrong about this idea - that I'll spend a bunch of time for it to get nowhere, or that I've picked a problem that isn't very valuable to solve. This is my unique brand of fear of failure. I used to think I didn't fear failing, because I had already failed many times. Instead, my fear of failure manifests as a fear of picking the wrong thing and wasting time on it. The way I'm dealing with this is to realize that if I don't pick anything - which I have frequently done in the past - I have exactly a 0% chance of succeeding. Just trying something makes that probability > 0%. You miss 100% of the shots you don't take.

Another thing that's helping me is to time-box this idea. I will spend 6 weeks building the blog and populating it with as much useful content as possible. After that, I can spend an hour or two every week adding a few more pieces of content. I can start researching and working on a different idea after the 6 week period, and wait for the SEO to have an impact before deciding to continue or abandon this.

2 months ago 43 votes
Deploying Ruby on Rails to AWS with Kamal

As part of a contracting project, I've been building an analytics dashboard for a feedback collection SaaS. The app is built in Ruby on Rails, and given all the nice things I've heard about Kamal, I decided to use it for deploying the app. The experience has been phenomenal, outside of some frustration with the initial deployment.

The app is deployed on a pretty standard AWS setup: a couple of EC2 servers hosting the web app running inside Docker containers, and a load balancer in front. One of the problems I faced during the initial deployment was forwarding headers from the AWS application load balancer to the RoR server running in the Docker container.

The challenge with Kamal is that it relies heavily on Traefik, and while Traefik is a great tool, it takes some getting used to. Its configuration is not very intuitive, and there's no easy way to see how things are configured outside of looking at the text logs. The Traefik documentation is pretty thorough, so a bit of searching led me to this CLI argument which needs to be passed to the Traefik container:

entrypoints.http.forwardedheaders.insecure: true

However, no matter what I tried, when I added this the app container would stop responding to web requests. Without the config the container would work, but throw an exception related to the Origin header not matching the configured hosts. After a lot of experimentation, I stumbled upon the other config I needed to add by pure luck:

entrypoints.http.address: ":80"

As far as I can tell, when I added the forwardedheaders config, the entrypoint no longer got the correct address configuration. I'm not sure if this is related to Kamal or Traefik.

Kamal deploy.yml

If you're looking to replicate a similar setup, here's the Kamal deploy.yml file that I am using with this project to deploy to AWS, with a load balancer terminating the SSL connection and forwarding traffic to web servers that are configured via Kamal. As a bonus, this config also deploys Sidekiq for background tasks.

service: <SERVICE NAME>
image: <IMAGE NAME>

ssh:
  user: ubuntu
  proxy: "ubuntu@A.B.C.D"

servers:
  web:
    hosts:
      - "A.B.C.D"
      - "A.B.C.D"
    labels:
      traefik.http.routers.<SERVICE NAME>-web.rule: Host(`<YOUR HOST NAME>`)
  sidekiq:
    hosts:
      - "A.B.C.D"
      - "A.B.C.D"
    traefik: false
    cmd: bundle exec sidekiq

registry:
  server: <AWS ACCOUNT ID>.dkr.ecr.<AWS REGION>.amazonaws.com
  username: AWS
  password: <%= %x(aws ecr get-login-password --region <AWS REGION>) %>

builder:
  local:
    arch: amd64 # Because I develop on an Apple Silicon machine, I need to use a build target

env:
  clear:
    DATABASE_URL: <DATABASE URL>
  secret:
    - RAILS_MASTER_KEY
    - DB_PASSWORD

traefik:
  args:
    entrypoints.http.address: ":80"
    entrypoints.http.forwardedheaders.insecure: true
    log.level: DEBUG
    accesslog: true
    accesslog.format: json

7 months ago 19 votes
Failure 1: Django + NextJS Boilerplate

I have failed, and that is exactly what I had hoped for a few months ago in this blog post. This is a good failure. It has taught me things - lessons I can use in the future to avoid failing this way again. But first, a bit of context.

What did I fail at?

In February of 2024 I decided to try my hand at my first "Indie Hacker" hustle: something that would make me money on the internet without having to trade my time for it. A product, instead of the consultancy services I usually provide. I had seen a number of people on Twitter (X) rave about how well their boilerplate templates were doing, and I had just gotten out of a consultancy project where I needed to connect a Next.js frontend to a Django backend. I thought it was the perfect project to start my indie hacking journey. I put up a launch post and started working, updating a build log as I went along. I gave myself until 28th March 2024 to finish it. That of course did not happen. Let's talk about why I failed and what I learned.

Episode 1: The one where I don't understand the meaning of MVP

My initial plan was to build a Django+Next.js boilerplate template that provided all of these:

- the base template with a Django backend & Next.js frontend
- working authentication b/w the backend & frontend
- a Dockerfile that would create the backend & frontend containers for deployment
- Terraform scripts to set up an infrastructure on AWS
- Celery + Redis for background task processing
- TailwindCSS for the frontend (comes mostly for free with Next.js)
- social auth

This looks like something achievable in a week or two of work - but only if you're working full time on it. I failed to consider that I have a day job and a life. I was barely able to tick off the first two of these deliverables by the time my 6 week deadline came up.

As a good friend told me later, I should have focused on the minimum amount of value I could deliver. Just having the first two things on my list done would have been enough. I couldn't have charged the $20 I had planned for, but I could have charged $1-$5 for just that. And if no one was interested in spending the cost of a coffee on the MVP of the template, that would have been a good signal that this wasn't going anywhere in its current shape. Instead, by focusing on building something much bigger, I robbed myself of the ability to validate the idea quickly. I spent all my available time coding the template instead of trying to talk to potential customers about it.

Lesson 1: Scope down aggressively.

Episode 2: Where I jumped on the hype-wagon

I settled on building a boilerplate template because that's what I had seen a lot of people on Twitter/X doing lately; I'm chalking this up to recency bias. I had no personal interest in a boilerplate template. It's also not a product that I would personally use. I have so far made one project that uses this tech stack; most of my other projects are Django and Ruby on Rails. The most successful boilerplate templates I've come across are from people who made a bunch of projects in one tech stack, realized they needed to do the same things over and over again, and packaged them into a template they could reuse. Selling to others was a bonus at first, I guess.

I was very enthusiastic about the project at the start, but as time went on I had to force myself to work on it. My lack of interest in this type of project was a big factor. Another factor was there being no way to see the fruits of my labor.
I am currently working on an analytics dashboard for another client (a RoR project), and every time I build a feature, I love to play around with it in my free time. I test how it works, make sure the UX is a good one, and just play around and admire the app I've made. Without using my template to build new projects, I lacked that feedback loop. Without the loop, I quickly lost interest.

Lesson 2: Build something I can use myself. This isn't a job I'm getting paid for, so until it starts generating money, the only motivation I have is to build something interesting for myself.

Episode 3: Where I had nothing for potential customers to play around with

This is related to the 1st lesson. Because I didn't have a path to quickly get something out there, there was no way for me to get my "product" into the hands of people who could test it and provide feedback. I think the problem with a boilerplate template style of product is that you can't give people a half-baked thing and ask them to test it. Unlike other SaaS apps, there's no mid-way version of a template; customers have to buy in to using your template for any project they are starting. With SaaS, users can sign up and test, and then leave if they don't like it. There's no easy way of testing a template.

Lesson 3: Build something that can be tested by potential customers easily. For now, I'm going to stick with SaaS style web apps.

Conclusion

Moving forward: I'll be working on web app products that users can sign up for and test very quickly. My next few experiments/products will be things that I can use myself as well. I'll post what I'm going to work on next when I decide, and when I have some time away from my job & the freelance projects that are currently in progress.

7 months ago 17 votes
Cookie Based Auth for Django and NextJS

If you're just looking for implementation instructions, skip my ramblings and go straight to the code here.

I'm currently working on my first project after deciding that I needed to fail more and practice finishing projects instead of abandoning them midway once they got "boring". Anyways... this one is still in its interesting phase, so here's a blog post with some things I learned yesterday while working on it. The project is a boilerplate template that should make it easy for devs to start a new project with a Django backend and a Next.js frontend, something I had to struggle with recently.

The problem

The first thing I'm looking to solve is authentication. That was my biggest challenge when working on the contracting project that inspired this template. While there are a number of good posts on how to set up authentication b/w Django & Next.js, nothing "definitive" came up, and I had to cobble together a weird mess of Django+DRF (Django Rest Framework) and Next.js+NextAuth, sharing a token from Django that was masquerading as a JWT token for Next.js. It wasn't pretty and I knew I could do better.

The options

I considered 2 options for authenticating the Next.js frontend with the Django backend:

- Token based auth. On logging in, a user receives a token that is stored in local storage by the frontend and sent with every request to the backend.
- Session/cookie based auth. This is how authentication works in Django by default and is very easy to get started with - it basically comes for free out of the box when you start a new Django project.

While token based auth is what almost everyone suggests when using a Next.js frontend with any backend technology, I wanted to give session based auth a try. I was curious what it would take to make it work - if it was even possible.

tl;dr: It was possible to use cookie/session auth b/w Django & Next.js, though with a few constraints which make it less appealing than the token based solution. What follows are my notes on how to set it up, the problems I faced, and why I'm going to go with token based auth for the template instead.

Learning how CORS & Set-Cookie work

It took me a few hours to get my head around how cross-origin requests and cookies work together, but the actual implementation was surprisingly straightforward. This "mini-quest" gave me a chance to learn a lot about how CORS and cookies work, and I'm happy with the time I spent on it. These are the resources which helped me the most (all from MDN):

- Cross-Origin Resource Sharing
- Same-origin policy
- Using HTTP cookies
- Set-Cookie

And finally, there was a surprise waiting for me! Browsers are almost universally making changes to restrict 3rd party or cross-domain cookies because of their privacy implications. Here's a nice article from MDN about it: Saying goodbye to third-party cookies in 2024. This is the reason why, even though this approach works, I won't be using it in the template. More on that later.

Implementation

Implementing session based auth b/w Django & Next.js is pretty simple.

Django configuration

- Install the django-cors-headers Python package.
- Add "corsheaders", to your INSTALLED_APPS.
- Add the "corsheaders.middleware.CorsMiddleware", middleware, right above the existing CommonMiddleware.
- Set CORS_ALLOWED_ORIGINS = ["http://localhost:3000"], replacing the URL with your frontend URL.
- Set CORS_ALLOW_CREDENTIALS = True.
- Configure settings.py to allow cross-domain access for the session cookie:
- Set SESSION_COOKIE_SAMESITE = "None".
- Set SESSION_COOKIE_SECURE = True.

Next.js configuration

No configuration is needed on the frontend. However, you do need to use the credentials: "include" option when using the fetch() API to access your backend. Here's a minimal example.

"use client";

import { BACKEND_URL } from "@/constants";

async function signIn() {
  const loginData = new FormData();
  loginData.append("username", "admin");
  loginData.append("password", "admin");

  return await fetch(`${BACKEND_URL}/accounts/login/`, {
    method: "POST",
    body: loginData,
    credentials: "include",
  });
}

async function whoAmI() {
  console.log(
    await fetch(`${BACKEND_URL}/accounts/me/`, {
      method: "GET",
      credentials: "include",
    }),
  );
}

export default function Home() {
  return (
    <main className="flex min-h-dvh w-full flex-col justify-around">
      <h1 className="text-center">Home</h1>
      <button className="" onClick={signIn}>
        Sign In
      </button>
      <button onClick={whoAmI}>Who Am I</button>
    </main>
  );
}

That's it. That simple piece of code & configuration took me hours to find. Hopefully you can use this example to skip all that time spent trying to figure things out.

Side quest log: Initially, I was not using the credentials: "include" option in the signIn() function above, thinking that I didn't need to send any cookies with the login call, only with the second API call to the /accounts/me endpoint. That mistake cost me about 2 hours of debugging time. If I had RTFM correctly the first time, I would have seen this:

include: Tells browsers to include credentials in both same- and cross-origin requests, and always use any credentials sent back in responses.

The credentials: "include" option not only controls whether cookies are sent, but also whether they are saved when returned by the server.

Why I won't use this solution in the template

Browsers are phasing out 3rd party cookies (Saying goodbye to third-party cookies in 2024) and adding features to work around that restriction where needed. The simplest workaround, and the one that doesn't require much change, is Cookies Having Independent Partitioned State (CHIPS). To enable CHIPS, you simply put a Partitioned flag on your Set-Cookie header, like so:

Set-Cookie: session_id=1234; SameSite=None; Secure; Path=/; Partitioned;

Unfortunately, there's no straightforward way to do this in Django for now. There's an open issue to resolve this, but looking at the comments, it doesn't look likely to be solved anytime soon. Considering this, I opted to use the token based auth method for my template. I'll write a blog post on that once I get it working over the next few days.
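For anyone who still wants to try the cookie approach, here is the Django side of the configuration above gathered into a single settings.py sketch. Treat it only as a sketch of the settings listed in this post: it assumes django-cors-headers is installed and that the Next.js dev server runs on http://localhost:3000, and you should merge the entries into your existing INSTALLED_APPS and MIDDLEWARE rather than copying them wholesale.

# settings.py (sketch) - the CORS and session-cookie settings described above.

INSTALLED_APPS = [
    # ... your existing apps ...
    "corsheaders",
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # right above CommonMiddleware
    "django.middleware.common.CommonMiddleware",
    # ... the rest of your middleware ...
]

# Allow the Next.js frontend origin and let it send credentials (cookies).
CORS_ALLOWED_ORIGINS = ["http://localhost:3000"]
CORS_ALLOW_CREDENTIALS = True

# Let the session cookie travel on cross-site requests; SameSite=None requires Secure.
SESSION_COOKIE_SAMESITE = "None"
SESSION_COOKIE_SECURE = True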

11 months ago 13 votes

More in programming

Non-alcoholic apéritifs

I've been doing Dry January this year. One thing I missed was something for apéro hour, a beverage to mark the start of the evening. Something complex and maybe bitter, not like a drink you'd have with lunch. I found some good options.

Ghia sodas are my favorite. Ghia is an NA apéritif based on grape juice but with enough bitterness (gentian) and sourness (yuzu) to be interesting. You can buy a bottle and mix it with soda yourself but I like the little cans with extra flavoring. The Ginger and the Sumac & Chili are both great.

Another thing I like is low-sugar fancy soda pops. Not diet drinks, they still have a little sugar, but typically 50 calories a can. De La Calle Tepache is my favorite. Fermented pineapple is delicious and they have some fun flavors. Culture Pop is also good.

A friend gave me the Zero book, a drinks cookbook from the fancy restaurant Alinea. This book is a little aspirational but the recipes are doable, it's just a lot of labor. Very fancy high end drink mixing, really beautiful flavor ideas. The only thing I made was their gin substitute (mostly junipers extracted in glycerin) and it was too sweet for me. Need to find the right use for it; a martini definitely ain't it.

An easier homemade drink is this Nonalcoholic Dirty Lemon Tonic. It's basically a lemonade heavily flavored with salted preserved lemons, then mixed with tonic. I love the complexity and freshness of this drink and enjoy it on its own merits.

Finally, non-alcoholic beer has gotten a lot better in the last few years thanks to manufacturing innovations. I've been enjoying NA Black Butte Porter, Stella Artois 0.0, and Heineken 0.0. They basically all taste just like their alcoholic uncles, no compromise.

One thing to note about non-alcoholic substitutes is they are not cheap. They've become a big high end business. Expect to pay the same for an NA drink as one with alcohol even though they aren't taxed nearly as much.

yesterday 5 votes
It burns

The first time we had to evacuate Malibu this season was during the Franklin fire in early December. We went to bed with our bags packed, thinking they'd probably get it under control. But by 2am, the roaring blades of fire choppers shaking the house got us up. As we sped down the canyon towards Pacific Coast Highway (PCH), the fire had reached the ridge across from ours, and flames were blazing large out the car windows. It felt like we had left the evacuation a little too late, but they eventually did get Franklin under control before it reached us.

Humans have a strange relationship with risk and disasters. We're so prone to wishful thinking and bad pattern matching. I remember people being shocked when the flames jumped the PCH during the Woolsey fire in 2017. IT HAD NEVER DONE THAT! So several friends of ours had to suddenly escape a nightmare scenario, driving through burning streets, in heavy smoke, with literally their lives on the line. Because the past had failed to predict the future.

I fell into that same trap for a moment with the dramatic proclamations of wind and fire weather in the days leading up to January 7. Warning after warning of "extremely dangerous, life-threatening wind" coming from the City of Malibu, and that overly-bureaucratic-but-still-ominous "Particularly Dangerous Situation" designation. Because, really, how much worse could it be? Turns out, a lot.

It was a little before noon on the 7th when we first saw the big plumes of smoke rise from the Palisades fire. And immediately the pattern matching ran astray. Oh, it's probably just like Franklin. It's not big yet, they'll get it out. They usually do. Well, they didn't. By the late afternoon, we had once more packed our bags, and by then it was also clear that things actually were different this time. Different worse. Different enough that even Santa Monica didn't feel like it was assured to be safe. So we headed far North, to be sure that we wouldn't have to evacuate again. Turned out to be a good move.

Because by now, into the evening, few people in the connected world hadn't started to see the catastrophic images emerging from the Palisades and Eaton fires. Well over 10,000 houses would ultimately burn. Entire neighborhoods leveled. Pictures that could be mistaken for World War II. Utter and complete destruction.

By the night of the 7th, the fire reached our canyon, and it tore through the chaparral and brush that'd been building since the last big fire that area saw in 1993. Out of some 150 houses in our immediate vicinity, nearly a hundred burned to the ground. Including the first house we moved to in Malibu back in 2009. But thankfully not ours. That's of course a huge relief. This was and is our Malibu Dream House. The site of that gorgeous home office I'm so fond of sharing views from. Our home.

But a house left standing in a disaster zone is still a disaster. The flames reached all the way up to the base of our construction, incinerated much of our landscaping, and devoured the power poles around it to dysfunction. We have burnt-out buildings every which way the eye looks. The national guard is still stationed at road blocks on the access roads. Utility workers are tearing down the entire power grid to rebuild it from scratch. It's going to be a long time before this is comfortably habitable again. So we left. That in itself feels like defeat. There's an urge to stay put, and to help, in whatever helpless ways you can.
But with three school-age children who've already missed over a month's worth of learning from power outages, fire threats, actual fires, and now mudslide dangers, it was time to go. None of this came as a surprise, mind you. After Woolsey in 2017, Malibu life always felt like living on borrowed time to us. We knew it, even accepted it. Beautiful enough to be worth the risk, we said.

But even if it wasn't a surprise, it's still a shock. The sheer devastation, especially in the Palisades, went far beyond our normal range of comprehension. Bounded, as it always is, by past experiences.

Thus, we find ourselves back in Copenhagen. A safe haven for calamities of all sorts. We lived here for three years during the pandemic, so it just made sense to use it for refuge once more. The kids' old international school accepted them right back in, and past friendships were quickly rebooted.

I don't know how long it's going to be this time. And that's an odd feeling to have, just as America has been turning a corner, and just as the optimism is back in so many areas. Of the twenty years I've spent in America, this feels like the most exciting time to be part of the exceptionalism that the US of A offers. And of course we still are. I'll still be in the US all the time on business, racing, and family trips. But it won't be exclusively so for a while, and it won't be from our Malibu Dream House. And that burns.

yesterday 6 votes
Slow, flaky, and failing

Thou shalt not suffer a flaky test to live, because it’s annoying, counterproductive, and dangerous: one day it might fail for real, and you won’t notice. Here’s what to do.

2 days ago 6 votes
Name that Ware, January 2025

The ware for January 2025 is shown below. Thanks to brimdavis for contributing this ware! …back in the day when you would get wares that had “blue wires” in them… One thing I wonder about this ware is…where are the ROMs? Perhaps I’ll find out soon! Happy year of the snake!

2 days ago 4 votes
Is engineering strategy useful?

While I frequently hear engineers bemoan a missing strategy, they rarely complete the thought by articulating why the missing strategy matters. Instead, it serves as more of a truism: the economy used to be better, children used to respect their parents, and engineering organizations used to have an engineering strategy.

This chapter starts by exploring something I believe quite strongly: there's always an engineering strategy, even if there's nothing written down. From there, we'll discuss why strategy, especially written strategy, is such a valuable opportunity for organizations that take it seriously. We'll dig into:

- Why there's always a strategy, even when people say there isn't
- How strategies have been impactful across my career
- How inappropriate strategies create significant organizational pain without much compensating impact
- How written strategy drives organizational learning
- The costs of not writing strategy down
- How strategy supports personal learning and development, even in cases where you're not empowered to "do strategy" yourself

By this chapter's end, hopefully you will agree with me that strategy is an undertaking worth investing your–and your organization's–time in. This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

There's always a strategy

I've never worked somewhere where people didn't claim there was no strategy. In many of those companies, they'd say there was no engineering strategy. Once I became an executive and was able to document and distribute an engineering strategy, accusations of missing strategy didn't go away; they just shifted to focus on a missing product or company strategy. This even happened at companies that definitively had engineering strategies, like Stripe in 2016, which had numerous pillars to a clear engineering strategy, such as:

- Maintain backwards API compatibility, at almost any cost (e.g. force an upgrade from TLS 1.2 to TLS 1.3 to retain PCI compliance, but don't force upgrades from the /v1/charges endpoint to the /v1/payment_intents endpoint)
- Work in Ruby in a monorepo, unless it's the PCI environment, data processing, or data science work
- Engineers are fully responsible for the usability of their work, even when there are product or engineering managers involved

Working there, it was generally clear what the company's engineering strategy was on any given topic. That said, it sometimes required asking around, and over time certain decisions became sufficiently contentious that it became hard to definitively answer what the strategy was. For example, the adoption of Ruby versus Java became contentious enough that I distributed a strategy attempting to mediate the disagreement, Magnitudes of exploration, although it wasn't a particularly successful effort (for reasons that are obvious in hindsight, particularly the lack of any enforcement mechanism).

In the same sense that William Gibson said "The future is already here – it's just not very evenly distributed," there is always a strategy embedded into an organization's decisions, although in many organizations that strategy is only visible to a small group, and may be quickly forgotten. If you ever find yourself thinking that a strategy doesn't exist, I'd encourage you to instead ask yourself where the strategy lives if you can't find it.
Once you do find it, you may also find that the strategy is quite ineffective, but I've simply never found that it doesn't exist.

Strategy is impactful

In "We are a product engineering company!", we discuss Calm's engineering strategy to address pervasive friction within the engineering team. The core of that strategy is clarifying how Calm makes major technology decisions, along with documenting the motivating goal steering those decisions: maximizing time and energy spent on creating their product.

That strategy reduced friction by eliminating the cause of ongoing debate. It was successful in resetting the team's focus. It also caused several engineers to leave the company, because it was incompatible with their priorities. It's easy to view that as a downside, but I don't think it was. A clear, documented strategy made it clear to everyone involved what sort of game we were playing, the rules for that game, and for the first time let them accurately decide if they wanted to be part of that game with the wider team.

Creating alignment is one of the ways that strategy makes an impact, but it's certainly not the only way. Some of the ways that strategies support the organization that creates them are:

- Concentrating company investment into a smaller space. For example, deciding not to decompose a monolith allows you to invest the majority of your tooling efforts on one language, one test suite, and one deployment mechanism.
- Enabling properties that are only available through universal adoption. For example, moving to an "N-1 policy" on backfilled roles is a significant opportunity for managing costs, but only works if consistently adopted. As another example, many strategies for disaster recovery or multi-region are only viable if all infrastructure has a common configuration mechanism.
- Focusing execution on what truly matters. For example, Uber's service migration strategy allowed a four engineer team to migrate a thousand services operated by two thousand engineers to a new provisioning and orchestration platform in less than a year. This was an extraordinarily difficult project, and was only possible because of clear thinking.
- Creating a knowledge repository of how your organization thinks. Onboarding new hires, particularly senior new hires, is much more effective with documented strategy. For example, most industry professionals today have a strongly held opinion on how to adopt large language models. New hires will have a strong opinion as well, but they're unlikely to share your organization's opinion unless there's a clear document they can read to understand it.

There are some things that a strategy, even a cleverly written one, cannot do. However, it's always been my experience that developing a strategy creates progress, even if the progress is understanding the inherent disagreement preventing agreement.

Inappropriate strategy is especially impactful

While good strategy can accomplish many things, it sometimes feels like inappropriate strategy is far more impactful. Of course, impactful in all the wrong ways. Digg V4 remains the worst-considered strategy I've personally participated in. It was a complete rewrite of the Digg V3.5 codebase from a PHP monolith to a PHP frontend and a backend of a dozen Python services. It also moved the database from sharded MySQL to an early version of Cassandra. Perhaps worst of all, it replaced the nuanced algorithms developed over a decade with a hack implemented a few days before launch.
Although it's likely Digg would have struggled to become profitable due to its reliance on search engine optimization for traffic, and Google's frequently changing search algorithm of that era, the engineering strategy ensured we died fast rather than having an opportunity to dig our way out.

Importantly, it's not just Digg. Almost every engineering organization you drill into will have its share of unused platform projects that captured decades of engineering years to the detriment of an important opportunity. A shocking number of senior leaders join new companies and initiate a grand migration that attempts to entirely rewrite the architecture, switch programming languages, or otherwise shift their new organization to resemble a prior organization where they understood things better.

Inappropriate versus bad

When I first wrote this section, I just labeled this sort of strategy as "bad." The challenge with that term is that the same strategy might well be very effective in a different set of circumstances. For example, if Digg had been a three person company with no revenue, rewriting from scratch could have been the right decision! As a result, I've tried to prefer the term "inappropriate" rather than "bad" to avoid getting caught up on whether a given approach might work in other circumstances. Every approach undoubtedly works in some organization.

Written strategy drives organizational learning

When I joined Carta, I noticed we had an inconsistent approach to a number of important problems. Teams had distinct standard kits for how they approached new projects. Adoption of existing internal platforms was inconsistent, as was decision making around funding new internal platforms. There was widespread agreement that we were decomposing our monolith, but no agreement on how we were doing it. Coming into such a permissive strategy environment, with strong, differing perspectives on the ideal path forward, one of my first projects was writing down an explicit engineering strategy along with our newly formed Navigators team, itself a part of our new engineering strategy.

Navigators at Carta

As discussed in Navigators, we developed a program at Carta to have explicitly named individual contributor technical leaders represent key parts of the engineering organization. This representative leadership group made it possible to iterate on strategy with a small team of about ten engineers that represented the entire organization, rather than take on the impossible task of negotiating with 400 engineers directly.

This written strategy made it possible to explicitly describe the problems we saw, and how we wanted to navigate those problems. Further, it was an artifact that we were able to iterate on in a small group, but then share widely for feedback from teams we might have missed. After initial publishing, we shared it widely and talked about it frequently in engineering all-hands meetings. Then we came back to it each year, or when things stopped making much sense, and revised it. As an example, our initial strategy didn't talk about artificial intelligence at all. A few months later, we extended it to mention a very conservative approach to using Large Language Models. Most recently, we've revised the artificial intelligence portion again, as we dive deeply into agentic workflows. A lot of people have disagreed with parts of the strategy, which is great: that's one of the key benefits of a written strategy: it's possible to precisely disagree.
From that disagreement, we've been able to evolve our strategy. Sometimes because there's new information, like the current rapid evolution of artificial intelligence practices, and other times because our initial approach could be improved, like how we gated membership of the initial Navigators team. New hires are able to disagree too, and do it from an informed place rather than coming across as attached to their prior company's practices. In particular, they're able to understand the historical thinking that motivated our decisions, even when that context is no longer obvious. At the time we paused decomposition of our monolith, there was significant friction in service provisioning, but that's far less true today, which makes the decision seem a bit arbitrary.

Only the written document can consistently communicate that context across a growing, shifting, and changing organization. With oral history, what you believe is highly dependent on who you talk with, which shapes your view of history and the present. With written history, it's far more possible to agree at scale, which is the prerequisite to growing at scale rather than isolating growth to small pockets of senior leadership.

The cost of implicit strategy

We just finished talking about written strategy, and this book spends a lot of time on this topic, including a chapter on how to structure strategies to maximize readability. It's not just because of the positives created by written strategy, but also because of the damage unwritten strategy creates.

Vulnerable to misinterpretation. Information flow in verbal organizations depends on an individual being in a given room for a decision, and then accurately repeating that information to the others who need it. However, it's common to see those individuals fail to repeat that information elsewhere. Sometimes their interpretation is also faulty to some degree. Both of these create significant problems in operating strategy.

Two-headed organizations

Some years ago, I started moving towards a model where most engineering organizations I worked with have two leaders: one who's a manager, and another who is a senior engineer. This was partially to ensure engineering context was included in senior decision making, but it was also to reduce communication errors. Errors in point-to-point communication are so prevalent when done one-to-one that the only solution I could find for folks who weren't reading-oriented communicators was ensuring I had communicated strategy (and other updates) to at least two people.

Inconsistency across teams. At one company I worked in, promotions to Staff-plus roles happened at a much higher rate in the infrastructure engineering organization than in the product engineering team. This created a constant drain out of product engineering to work on infrastructure-shaped problems, even if those problems weren't particularly valuable to the business. New leaders had no idea this informal policy existed, and they would routinely run into trouble in calibration discussions. They also weren't aware they needed to go argue for a better policy. Worse, no one was sure if this was a real policy or not, so it was ultimately random whether this perspective was represented for any given promotion: sometimes good promotions would be blocked, sometimes borderline cases would be approved.

Inconsistency over time. Implementing a new policy tends to be a mix of persistent and one-time actions.
For example, let's say you wanted to standardize all HTTP operations to use the same library across your codebase. You might add a linter check to reject known alternatives, and you'll probably do a one-time pass across your codebase standardizing on that library. However, two years later there are another three random HTTP libraries in your codebase, creeping into the cracks surrounding your linting. If the policy is written down, and a few people read it, then there are a number of ways this could nonetheless be prevented. If it's not written down, it's much less likely someone will remember, and much more likely they won't remember the rationale well enough to argue about it.

Hazard to new leadership. When a new Staff-plus engineer or executive joins a company, it's common to blame them for failing to understand the existing context behind decisions. That's fair: a big part of senior leadership is uncovering and understanding context. It's also unfair: explicit documentation of prior thinking would have made this much easier for them. Every particularly bad new-leader onboarding that I've seen has involved a new leader coming into an unfilled role that the new leader's manager didn't know how to do. In those cases, success is entirely dependent on that new leader's ability and interest in learning.

In most ways, the practice of documenting strategy has a lot in common with succession planning, where the full benefits accrue to the organization rather than to the individual doing it. It's possible to maintain things when the original authors are present, but appreciating the value requires stepping outside yourself for a moment to value things that will matter most to the organization when you're no longer a member.

Information herd immunity

A frequent objection to written strategy is that no one reads anything. There's some truth to this: it's extremely hard to get everyone in an organization to know something. However, I've never found that goal to be particularly important. My view of information dispersal in an organization is the same as herd immunity: you don't need everyone to know something, just to have enough people who know something that confusion doesn't propagate too far. So, it may be impossible for all engineers to know strategy details, but you certainly can have every Staff-plus engineer and engineering manager know those details.

Strategy supports personal learning

While I believe that the largest benefits of strategy accrue to the organization, rather than the individual creating it, I also believe that strategy is an underrated avenue for self-development. The ways that I've seen strategy support personal development are:

Creating strategy builds self-awareness. Starting with a concrete example, I've worked with several engineers who viewed themselves as extremely senior, but frequently demanded that projects be implemented using new programming languages or technologies because they personally wanted to learn about the technology. Their internal strategy was clear–they wanted to work on something fun–but following the steps to build an engineering strategy would have created a strategy that even they agreed didn't make sense.

Strategy supports situational awareness in new environments. Wardley mapping talks a lot about situational awareness as a prerequisite to good strategy. This means ensuring you understand the realities of your circumstances; the lack of that awareness is the most destructive failure of new senior engineering leaders.
Explicitly stating the diagnosis where the strategy applied makes it easier for you to debug why reusing a prior strategy in a new team or company might not work.

Strategy as your personal archive. Just as documented strategy is institutional memory, it also serves as personal memory to understand the impact of your prior approaches. Each of us is an archivist of our prior work, pulling out the most valuable pieces to address the problem at hand. Over a long career, memory fades–and motivated reasoning creeps in–but explicit documentation doesn't. Indeed, part of the reason I started working on this book now rather than later is that I realized I was starting to forget the details of the strategy work I did earlier in my career. If I wanted to preserve the wisdom of that era, and ensure I didn't have to relearn the same lessons in the future, I had to write it now.

Summary

We've covered why strategy can be a valuable learning mechanism for both your engineering organization and for you. We've shown how strategies have helped organizations deal with service migrations, monolith decomposition, and right-sizing backfilling. We've also discussed how inappropriate strategy contributed to Digg's demise. However, if I had to pick two things to emphasize as this chapter ends, it wouldn't be any of those things. Rather, it would be two themes that I find are the most frequently ignored:

- There's always a strategy, even if it isn't written down.
- The single biggest act you can take to further strategy in your organization is to write down strategy so it can be debated, agreed upon, and explicitly evolved.

Discussions around topics like strategy often get caught up in high-prestige activities like making controversial decisions, but the most effective strategists I've seen make more progress by actually performing the basics: writing things down, exploring widely to see how other companies solve the same problem, and accepting feedback into their draft from folks who disagree with them. Strategy is useful, and doing strategy can be simple, too.

2 days ago 6 votes