We’ve read a lot of strategy at this point in the book. We can judge a strategy’s format, and its construction: both are useful things. However, format is a predictor of quality, not quality itself. The remaining question is: how should we assess whether a strategy is any good? Uber’s service migration strategy unlocked the entire organization to make rapid progress. It also led to a sprawling architecture problem down the line. Was it a great strategy or a terrible one? Folks can reasonably disagree, but it’s worth developing our own point of view on why we should prefer one interpretation or the other. This chapter will focus on:

- The various ways frequently suggested for evaluating strategies, such as input-only evaluation, output-only evaluation, and so on
- A rubric for evaluating strategies, and why a useful rubric has to recognize that strategies have to be evaluated in phases rather than as a unified construct
- Why ending a strategy is often a sign of a good strategist, and sometimes the natural reaction to a new phase in a strategy, rather than a judgment on prior phases
- How missing context is an unpierceable veil for evaluating other companies’ strategies with high conviction, and why you’ll end up attempting to evaluate them anyway
- Why you can learn just as much from bad strategies as from good ones, even in circumstances where you are missing much of the underlying context

Time to refine our judgment about strategy quality a bit. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

How are strategies graded?

Before suggesting my own rubric, I want to explore how the industry appears to grade strategies in practice. That’s not because I particularly agree with those approaches–I generally find each is missing an important nuance–but because understanding their flaws is a foundation to build on.

Grading strategy on its outputs is by far the most prevalent approach I’ve found in industry. This is an appealing approach, because it does make sense that a strategy’s results are more important than anything else. However, this line of thinking can go awry. We saw massive companies like Google move to service architectures, and we copied them because if it worked for Google, it would likely work for us. As discussed in the monolith decomposition strategy, it did not work particularly well for most adopters.

The challenge with grading outputs is that it doesn’t distinguish between “alpha”, how much better your results are because of your strategy, and “beta”, the expected outcome if you hadn’t used the strategy. For example, the acquisition of Index allowed Stripe to build a point-of-sale business line, but they were also on track to internally build that business. Looking only at outputs can’t distinguish whether it would have been better to build the business via acquisition or internally. But one of those paths must have been the better strategy.
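To make the alpha/beta distinction concrete, here’s a minimal sketch in Python with invented numbers; the function and figures are illustrative assumptions, not data from the Index acquisition or any other strategy discussed here.

```python
# Hypothetical illustration of separating "alpha" from "beta" when
# grading a strategy on its outputs. All numbers are invented.

def grade_outputs(observed_outcome: float, baseline_outcome: float) -> float:
    """Return the strategy's alpha: the outcome attributable to the
    strategy, above the "beta" you'd have expected without it."""
    return observed_outcome - baseline_outcome

# Two paths to the same business line, e.g. acquire vs. build:
acquired = grade_outputs(observed_outcome=120.0, baseline_outcome=100.0)  # alpha = 20
built = grade_outputs(observed_outcome=115.0, baseline_outcome=100.0)     # alpha = 15

# Judging raw outputs (120 vs. 115) and judging alpha (20 vs. 15) agree
# here only because both paths share a baseline. When baselines differ,
# raw outputs can favor the strategy with the larger beta, not the
# better alpha.
print(acquired, built)
```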
Similarly, there are also strategies that succeed, but do so at unreasonably high costs. Stripe’s API deprecation strategy is a good example of a strategy that was extremely well worth the cost for the company’s first decade, but eventually became too expensive to maintain as the evolving regulatory environment created more overhead. Fortunately, Stripe modified their strategy to allow some deprecations, but you can imagine an alternate scenario where they attempted to maintain their original strategy, which would have likely failed due to its accumulating costs.

Confronting these problems with grading on outputs, it’s compelling to switch to the opposite lens and evaluate strategy purely on its inputs. In that approach, as long as the sum of the strategy’s parts makes sense, it’s a good strategy, even if it didn’t accomplish its goals. This approach is very appealing, because it appears to focus purely on the strategy’s alpha. Unfortunately, I find this view similarly deficient. For example, the strategy for adopting LLMs takes a deliberately cautious approach to adoption. If that company is outcompeted by competitors in the incorporation of LLMs, to the loss of significant revenue, I would argue that strategy isn’t a great one, even if it’s rooted in a proper diagnosis and effective policies. Doing good strategy requires reconciling the theoretical with the practical, so we can’t argue that inputs alone are enough to evaluate strategy work. If a strategy is conceptually sound but struggling to make an impact, then its authors should continue to refine it. If its authors take a single pass and ignore subsequent information that it’s not working, then it’s a failed strategy, regardless of how thoughtful the first pass was.

While I find these mechanisms to be incomplete, they’re still instructive. By incorporating bits of each of these observations, we’re surprisingly close to a rubric that avoids each of these particular downfalls.

Rubric for strategy

Balancing the strengths and flaws of the previous section’s ideas, the rubric I’ve found effective for evaluating strategy is:

- How quickly is the strategy refined? If a strategy starts out bad, but improves quickly, that’s a better strategy than a mostly right strategy that never evolves. Strategy thrives when its practitioners understand it is a living endeavor.
- How expensive is the strategy’s refinement for implementing and impacted teams? Just as culture eats strategy for breakfast, good policy loses to poor operational mechanisms every time. Especially early on, good strategy is validated cheaply. Expensive strategies are discarded before they can be validated, let alone improved.
- How well does the current iteration solve its diagnosis? Ultimately, strategy does have to address the diagnosis it starts from. Even if you’re learning quickly and at a low cost, at some point you do have to actually get to impact. Strategy must eventually be graded on its impact.

With this rubric in hand, we can finally assess Uber’s service migration strategy. It refined rapidly as we improved our tooling, minimized costs because we had to rely on voluntary adoption, and solved its diagnosis extremely well. So this was a great strategy, but how do we think about the fact that its diagnosis missed the consequences of a widespread service architecture on developer productivity?

This brings me to the final component of the strategy quality rubric: the recognition that strategy exists across multiple phases. Each phase is defined by new information–whether or not this information is known by the strategy’s authors–that renders the prior diagnosis incomplete. The Uber strategy can be thought of as existing across two phases:

- Phase 1 used service provisioning to address developer productivity challenges in the monolith.
- Phase 2 was engaging with the consequences of a sprawling service architecture.
All the good grades I gave the strategy are appropriate to the first phase. However, the second phase was ushered in by the negative impacts to developer productivity exposed by the initial rollout. The second phase’s grades on the rate of iteration, the cost, and the outcomes were reasonable, but a bit lower than the first phase’s. In the subsequent years, the second phase was succeeded by a third phase that aimed to address the second’s challenges.

Does stopping mean a strategy’s bad?

Now that we have a rubric, we can use it to evaluate one of the important questions of strategy: does giving up on a strategy mean that the strategy is a bad one? The vocabulary of strategy phases helps us here, and I think it’s uncontroversial to say that a new phase’s evolution of your prior diagnosis might make it appropriate to abandon a strategy. For example, Digg owned its own servers in 2010, but would certainly not buy its own servers if it started ten years later. Circumstances change.

Sometimes I also think that aborting a strategy in its first phase is a good sign. That’s generally true when the rate of learning is outpaced by the cost of learning. I recently sponsored a developer productivity strategy that had some impact, but less than we’d intended. We immortalized a few of the smaller pieces, and returned further exploration to a lower altitude strategy owned by the teams rather than the high altitude strategy that I owned as an executive. Essentially all strategies are competing with strategies at other altitudes, so I think giving up on strategies, especially high altitude strategies, is almost always a good idea.

The unpierceable veil

Working within our industry, we are often called upon to evaluate strategies from afar. As other companies rolled out LLMs in their products or microservices for their architectures, our companies pushed us on why we weren’t making these changes as well. The exploration step of strategy helps determine where a strategy might be useful for you, but even that doesn’t really help you evaluate the strategy or the strategists behind it. There are simply too many dimensions of the rubric that you cannot evaluate when you’re far away. For example: how many phases occurred before the idea that became the external representation of the strategy came into existence? How much did those early stages cost to implement? Is the real mastery in the operational mechanisms that are never reported on? Did the external representation of the strategy ever happen at all, or is it the logical next phase that solves the reality of the internal implementation?

With all that in mind, I find that it’s generally impossible to accurately evaluate strategies happening in other companies with much conviction. Even if you want to, the missing context is an impenetrable veil. That’s not to say that you shouldn’t try to evaluate their strategies: that’s something you’ll be forced to do in your own strategy work. Instead, it’s a reminder to keep a low confidence score in those appraisals: you’re guaranteed to be missing something.

Learning despite quality issues

Although I believe it’s quite valuable for us to judge the quality of strategies, I want to caution against going a step further and concluding that you can’t learn from poor strategies. As long as you are aware of a strategy’s quality, I believe you can learn just as much from failed strategy as from great strategy. Part of this is because often even failed strategies have early phases that work extremely well.
Another part is because strategies tend to fail for interesting reasons. I learned just as much from Stripe’s failed rollout of agile, which struggled due to missing operational mechanisms, as I did from Calm’s successful transition to focus primarily on product engineering. Without a clear point of view on which of these worked, you’d be at risk of learning the wrong lessons, but with forewarning you don’t have to run that risk.

Once you’ve determined a strategy was unsuccessful, I find it particularly valuable to determine the strategy’s phases and understand in which phase, and where in the strategy steps, things went wrong. Was it a lack of operational mechanisms? Was the policy itself a poor match for the diagnosis? Was the diagnosis willfully ignoring a truculent executive? Answering these questions will teach you more about strategy than only studying successful strategies, because you’ll develop an intuition for which parts truly matter.

Summary

Finishing this chapter, you now have a structured rubric for evaluating a strategy, moving beyond “good strategy” and “bad strategy” to a nuanced assessment. This assessment is not just useful for grading strategy; it makes it possible to specifically improve your strategy work. Maybe your approach is sound, but your operational mechanisms are too costly for the rate of learning they facilitate. Maybe you’ve treated strategy as a single-iteration exercise, rather than recognizing that even excellent strategy goes stale over time. Keep those ideas in mind as we head into the final chapter on how you personally can get better at strategy work.
Often you’ll see a disorganized collection of ideas labeled as a “strategy.” Even when they’re dense with ideas, these can be hard to parse, and they’re a major reason why most engineers will claim their company doesn’t have a clear strategy, even though my experience is that all companies follow some strategy, even if it’s undocumented. This chapter lays out a repeatable, structured approach to drafting strategy. It introduces each step of that approach, which are then detailed further in their respective chapters. Here we’ll cover:

- How these five steps fit together to facilitate creating strategy, especially by preventing practitioners from skipping steps that feel awkward or challenging.
- Step 1: Exploring the wider industry’s ideas and practices around the strategy you’re working on. Exploration is understanding what recent research might change your approach, and how the state of the art has changed since you last tackled a similar problem.
- Step 2: Diagnosing the details of your problem. It’s hard to slow down to understand your problem clearly before attempting to solve it, but it’s even more difficult to solve anything well without a clear diagnosis.
- Step 3: Refinement is taking a raw, unproven set of ideas and testing them against reality. Three techniques are introduced to support this validation process: strategy testing, systems modeling, and Wardley mapping.
- Step 4: Policy makes the tradeoffs and decisions to solve your diagnosis. These can range from specifying how software is architected, to how pull requests are reviewed, to how headcount is allocated within an organization.
- Step 5: Operations are the concrete mechanisms that translate policy into an active force within your organization. These can be nudges that remind you about code changes without associated tests, or weekly meetings where you study progress on a migration.
- Whether these steps are sacred or are open to adaptation and experimentation, including when you personally should persevere in attempting steps that don’t feel effective.

From this chapter’s launching point, you’ll have the high-level summaries of each step in strategy creation, and can decide where you want to read further. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

How the steps become strategy

Creating effective strategy is not rote incantation of a formula. You can’t merely follow these steps to guarantee that you’ll create a great strategy. However, what I’ve found over and over is that strategies fail more often due to avoidable errors than to fundamentally unsound thinking. Busy people skip steps, especially steps they dislike or have failed at before. These steps are the scaffolding to avoid those errors. By practicing routinely, you’ll build powerful habits and intuition around which approach is most appropriate for the current strategy you’re working on. They also help turn strategy into a community practice that you, your colleagues, and the wider engineering ecosystem can participate in together.

Each step is an input that flows into the next step. Your exploration is the foundation of a solid diagnosis. Your diagnosis helps you search the infinite space of policy for what you need now. Operational mechanisms help you turn policy into an active force supporting your strategy rather than an abstract treatise.
If you’re skeptical of the steps, you should certainly maintain your skepticism, but do give them a few tries before discarding them entirely. You may also appreciate the discussion in the chapter on bridging between theory and practice when doing strategy.

Explore

Exploration is the deliberate practice of searching through a strategy’s problem and solution spaces before allowing yourself to commit to a given approach. It’s understanding how other companies and teams have approached similar questions, and whether their approaches might also work well for you. It’s also learning why what brought you so much success at your former employer isn’t necessarily the best solution for your current organization. The Uber service migration strategy used exploration to understand the service ecosystem by reading industry literature:

As a starting point, we find it valuable to read Large-scale cluster management at Google with Borg, which informed some elements of the approach to Kubernetes, and Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center, which describes the Mesos/Aurora approach.

It also used a Wardley map to explore the cloud compute ecosystem. For more detail, read the Exploration chapter.

Diagnose

Diagnosis is your attempt to correctly recognize the context that the strategy needs to solve before deciding on the policies to address that context. Starting from your exploration’s learnings, and your understanding of your current circumstances, building a diagnosis forces you to delay thinking about solutions until you fully understand your problem’s nuances. A diagnosis can be largely data driven, such as the strategy for navigating a Private Equity ownership transition:

Our Engineering headcount costs have grown by 15% YoY this year, and 18% YoY the prior year. Headcount grew 7% and 9% respectively, with the difference between headcount and headcount costs explained by salary band adjustments (4%), a focus on hiring senior roles (3%), and increased hiring in higher cost geographic regions (1%).
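As a quick arithmetic check on that diagnosis, the gap between cost growth and headcount growth should be fully explained by the three named factors. Here’s a sketch using only the numbers quoted above, treating the growth rates additively as the diagnosis itself does:

```python
# Decomposing the quoted headcount-cost diagnosis: 15% YoY cost growth
# against 7% YoY headcount growth, with the 8-point gap attributed to
# salary band adjustments (4%), senior-role mix (3%), and geography (1%).
headcount_growth = 0.07
band_adjustments = 0.04
senior_role_mix = 0.03
geo_mix = 0.01

explained_cost_growth = headcount_growth + band_adjustments + senior_role_mix + geo_mix
print(f"{explained_cost_growth:.0%}")  # 15%, matching the reported cost growth
```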
It can also be less data driven, instead aiming to summarize a problem, such as the Index acquisition strategy’s summary of the known and unknown elements of the technical integration prior to the acquisition closing:

We will need to rapidly integrate the acquired startup to meet this timeline. We only know a small number of details about what this will entail. We do know that point-of-sale devices directly operate on payment details (e.g. the point-of-sale device knows the credit card details of the card it reads). Our compliance obligations restrict such activity to our “tokenization environment”, a highly secured and isolated environment with direct access to payment details. This environment converts payment details into a unique token that other environments can utilize to operate against payment details without the compliance overhead of having direct access to the underlying payment details.

The approach, and challenges, of developing a diagnosis are detailed in the Diagnosis chapter.

Refine (Test, Map & Model)

Strategy refinement is a toolkit of methods to identify which parts of your diagnosis are most important, and to verify that your approach to solving the diagnosis actually works. This chapter delves into the details of using three methods in particular: strategy testing, systems modeling, and Wardley mapping.

[Figure: an example of a systems modeling diagram.]

These techniques are also demonstrated in the strategy case studies, such as the Wardley map of the LLM ecosystem, or the systems model of backfilling roles without downleveling them. For more detail, read the Refinement chapter.

Why isn’t refinement earlier (or later)?

A frequent point of disagreement is that refinement should occur before the diagnosis. Another is that mapping and modeling are two distinct steps, with mapping occurring before diagnosis and modeling occurring after policy. A third is that refinement ought to be the final step of strategy, turning the steps into a looping cycle. These are all reasonable observations, so let me unpack my rationale for this structure. By far the biggest risk for most strategies is not that you model too early or map too late, but that you simply skip both steps entirely. My foremost concern is minimizing the required investment in mapping and modeling so that more folks do these steps at all. Refining after exploring and diagnosing allows you to concentrate your efforts on a smaller number of load-bearing areas. That said, it’s common to refine in many places during your strategy creation. You’re just as likely to have three small refinement steps as one bigger one.

Policy

Policy is interpreting your diagnosis into a concrete plan. This plan also needs to work, which requires careful study of what’s worked within your company, and what new ideas you’ve discovered while exploring the current problem. Policies can range from providing directional guidance, such as the user data controls strategy’s guidance:

Good security discussions don’t frame decisions as a compromise between security and usability. We will pursue multi-dimensional tradeoffs to simultaneously improve security and efficiency. Whenever we frame a discussion on trading off between security and utility, it’s a sign that we are having the wrong discussion, and that we should rethink our approach. We will prioritize mechanisms that can both automatically authorize and automatically document the rationale for accesses to customer data. The most obvious example of this is automatically granting access to a customer support agent for users who have an open support ticket assigned to that agent. (And removing that access when that ticket is reassigned or resolved.)

To committing not to make a decision until later, as practiced in the Index acquisition strategy:

Defer making a decision regarding the introduction of Java to a later date: the introduction of Java is incompatible with our existing engineering strategy, but at this point we’ve also been unable to align stakeholders on how to address this decision. Further, we see attempting to address this issue as a distraction from our timely goal of launching a joint product within six months. We will take up this discussion after launching the initial release.

This chapter further goes into evaluating policies, overcoming ambiguous circumstances that make it difficult to decide on an approach, and developing novel policies. For full detail, read the Policy chapter.

Operations

Even the best policies have to be interpreted. There will be new circumstances their authors never imagined, and the policies may be in effect long after their authors have left the organization. Operational mechanisms are the concrete implementation of your policy. The simplest mechanisms are an explicit escalation path, as shown in Calm’s product engineering strategy:

Exceptions are granted by the CTO, and must be in writing.
The above policies are deliberately restrictive. Sometimes they may be wrong, and we will make exceptions to them. However, each exception should be deliberate and grounded in concrete problems we are aligned both on solving and how we solve them. If we all scatter towards our preferred solution, then we’ll create negative leverage for Calm rather than serving as the engine that advances our product.

From that starting point, the mechanisms can get far more complex. This chapter works through evaluating mechanisms, composing an operational plan, and the most common sorts of operational mechanisms that I’ve seen across strategies. For more detail, read the Operations chapter.

Is the structure sacrosanct?

When someone’s struggling to write a strategy document, one of the first tools people will recommend is a strategy template. Templates are great: they reduce the ambiguity of an already broad project into something more tractable. If you’re wondering whether you should use a template to craft strategy: sure, go ahead! However, I find that well-meaning, thoughtful templates often turn into lumbering, callous documents that serve no one well. The secret to good templates is that someone has to own them, and that person has to care about the strategy writer first and foremost, rather than the various constituencies that want to insert requirements into the strategy creation process. The security, compliance, and cost of your plans matter a lot, but many organizations layer more and more requirements into these sorts of documents until the idea of writing them becomes prohibitively painful.

The best advice I can give someone attempting to write strategy is that you should discard every element of strategy that gets in your way, as long as you can explain what that element was intended to accomplish. For example, if you’re drafting a strategy and don’t find any operational mechanisms that fit, that’s fine: discard that section. Ultimately, the structure is not sacrosanct; it’s the thinking behind the sections that really matters. This topic is explored in more detail in the chapter on Making engineering strategies more readable.

Summary

Now you know the foundational steps to conducting strategy. From here, you can dive into the details with the strategy case studies like How should you adopt LLMs?, or you can maintain a high altitude, starting with how exploration creates the foundation for an effective strategy. Whichever you start with, I encourage you to eventually work through both to get the full perspective.
Yesterday, the tj-actions repository, a popular tool used with GitHub Actions, was compromised (for more background, read one of these two articles). Watching the infrastructure and security engineering teams at Carta respond, it highlighted to me just how much LLMs can’t meaningfully replace many essential roles of software professionals. However, I’m also reading Jennifer Pahlka’s Recoding America, which makes an important point: decision-makers can remain irrational longer than you can remain solvent. (Or, in this context, remain employed.)

I’ve been thinking about this a lot lately, as I’ve ended up having more “2025 is not much fun”-themed career discussions with prior colleagues navigating the current job market. I’ve tried to pull together my points from those conversations here:

Many people who first entered senior roles in 2010-2020 are finding current roles a lot less fun. There are a number of reasons for this. First, managers were generally evaluated in that period based on their ability to hire, retain, and motivate teams. The current market doesn’t value those skills particularly highly, instead prioritizing a different set of skills: working in the details, pushing pace, and navigating the technology transition to foundational models / LLMs. This means many members of the current crop of senior leaders are either worse at the skills they currently need to succeed, or are less motivated by those activities. Either way, they’re having less fun. Similarly, the would-be senior leaders from the 2010-2020 era who excelled at working in the details, pushing pace, and so on, are viewed as stagnant in their careers, so they are still finding it difficult to move into senior roles. This means that many folks feel like the current market has left them behind. This is, of course, not universal: it’s a general experience that many people are having, while many others are not.

The technology transition to foundational models / LLMs as a core product and development tool is invalidating many senior leaders’ hard-earned playbooks. Many companies that were stable, durable market leaders are now in tenuous positions because foundational models threaten to erode their advantage. Whether or not their advantage is truly eroded is uncertain, but it is clear that usefully adopting foundational models into a product requires more than simply shoving an OpenAI/Anthropic API call in somewhere. Instead, you have to figure out how to design with progressive validation, with critical data validated via human-in-the-loop techniques before it is used in a critical workflow. It also requires designing for a rapidly improving toolkit: many workflows that were laughably bad in 2023 work surprisingly well with the latest reasoning models. Effective product design requires architecting for both massive improvement, and no improvement at all, of models in 2026-2027.
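To illustrate what that human-in-the-loop gating might look like, here’s a minimal sketch; the types, function, and confidence threshold are hypothetical assumptions, not a description of any particular product’s implementation.

```python
# Hypothetical sketch of gating LLM output with human-in-the-loop review
# before it reaches a critical workflow. Names and threshold are invented.
from dataclasses import dataclass

@dataclass
class ModelResult:
    value: str
    confidence: float  # model- or heuristic-derived score in [0, 1]

def apply_to_workflow(result: ModelResult, critical: bool,
                      review_queue: list[ModelResult],
                      confidence_threshold: float = 0.9) -> str | None:
    """Use the model's output directly only when the workflow is
    non-critical or confidence clears the threshold; otherwise route
    the result to a human review queue and defer."""
    if not critical or result.confidence >= confidence_threshold:
        return result.value
    review_queue.append(result)  # a human validates before use
    return None

queue: list[ModelResult] = []
print(apply_to_workflow(ModelResult("refund $40", 0.55), critical=True, review_queue=queue))
# -> None; the proposed action waits for human validation instead of executing.
```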
This is equally true of writing software itself. There’s so much noise about how to write software, and much of it’s clearly propaganda–this post’s opening anecdote regarding the tj-actions repository proves that expertise remains essential–but parts of it aren’t. I spent a few weeks in the evenings working on a new side project via Cursor in January, and I was surprised at how much my workflow changed even though Cursor itself was far from perfect. Even since then, Claude has advanced from 3.5 to 3.7 with extended thinking. Again, initial application development might easily be radically different in 2027, or it might be largely unchanged after the scaffolding step in complex codebases. (I’m also curious to see if context window limitations drive another flight from monolithic architectures.) Sitting out this transition, when we are relearning how to develop software, feels like a high-risk proposition. Your well-honed skills in team development are already devalued today relative to three years ago, and now your other skills are at risk of being devalued as well.

Valuations and funding are relatively less accessible to non-AI companies than they were three years ago. Certainly elite companies are doing alright, whether or not they have a clear AI angle, but the cutoff for remaining elite has risen. Simultaneously, the public markets are challenged, which means less willingness for both individuals and companies to purchase products, which slows revenue growth, further challenging valuations and funding. The consequence of this, if you’re at a private, non-AI company, is that you’re likely to hire less, promote less, see less movement in pay bands, and experience a less predictable path to liquidity. It also means fewer open roles at other companies, so there’s more competition when attempting to trade up into a larger, higher-compensated role at another company.

The major exception to this is joining an AI company, but generally those companies are in extremely competitive markets and are priced more appropriately for investors managing a basket of investments than for employees trying to deliver a predictable return. If you join one of these companies today, you’re probably joining a bit late to experience a big pop, your equity might go to zero, and you’ll be working extremely hard for the next five to seven years. This is the classic startup contract, but not necessarily the contract that folks have expected over the past decade, as maximum compensation has generally come from joining a later-stage company or a member of the Magnificent Seven.

As companies respond to the reduced valuations and funding, they are pushing their teams harder to find growth with their existing team. In the right environment, this can be motivating, but people may have opted into a more relaxed experience that has become markedly less relaxed without their consent.

If you pull all those things together, you’re essentially in a market where profit and pace are fixed, and you have to figure out how you personally want to optimize between people, prestige, and learning. A few years ago, I think these variables were much more decoupled; that is not what I hear from folks today, even ones whose jobs were quite cozy a few years ago.

Going a bit further: I know folks who are good at their jobs and have been struggling to find something meaningful for six-plus months. I know folks who are exceptionally strong candidates, who can find reasonably good jobs, but even they are finding that the sorts of jobs they want simply don’t exist right now. I know folks who are strong candidates but with some oddities in their profile, maybe too many short stints, who are now being filtered out because hiring managers need some way to filter through the higher volume of candidates. I can’t give advice on what you should do, but if you’re finding this job market difficult, it’s certainly not personal. My sense is that’s basically the experience that everyone is having when searching for new roles right now.
If you are in a role today that’s frustrating you, my advice is to try harder than usual to find a way to make it a rewarding experience, even if it’s not perfect. I also wouldn’t personally try to sit this cycle out unless you’re comfortable with a small risk that reentry is quite difficult: I think it’s more likely that the ecosystem is meaningfully different in five years than that it’s largely unchanged. Altogether, this hasn’t really been the advice that anyone wanted when they chatted with me, but it seems to generally have resonated with them as a realistic appraisal of the current markets. Hopefully there’s something useful for you in here as well.
This book’s introduction started by defining strategy as “making decisions.” Then we dug into exploration, diagnosis, and refinement: three chapters where you could argue that we didn’t decide anything at all. Clarifying the problem to be solved is the prerequisite of effective decision making, but eventually decisions do have to be made. Here in this chapter on policy, and the following chapter on operations, we finally start to actually make some decisions. In this chapter, we’ll dig into:

- How we define policy, and how setting policy differs from operating policy as discussed in the next chapter
- The structured steps for setting policy
- How many policies you should set: is it preferable to have one policy, many policies, or does it not matter much either way?
- Recurring kinds of policies that appear frequently in strategies
- Why it’s valuable to be intentional about your strategy’s altitude, and how engineers and executives generally maintain different altitudes in their strategies
- Criteria to use for evaluating whether your policies are likely to be impactful
- How to develop novel policies, and why doing so is rare
- Why having multiple bundles of alternative policies is generally a phase in strategy development that indicates a gap in your diagnosis
- How policies that ignore constraints sound inspirational, but accomplish little
- Dealing with ambiguity and uncertainty created by missing strategies from cross-functional stakeholders

By the end, you’ll be ready to evaluate why an existing strategy’s policies are struggling to make an impact, and to start iterating on policies for a strategy of your own. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

What is policy?

Policy is interpreting your diagnosis into a concrete plan. That plan will be a collection of decisions, tradeoffs, and approaches. They’ll range from coding practices, to hiring mandates, to architectural decisions, to guidance about how choices are made within your organization. An effective policy solves the entirety of the strategy’s diagnosis, although the diagnosis itself is encouraged to specify which aspects can be ignored. For example, the strategy for working with private equity ownership acknowledges in its diagnosis that they don’t have clear guidance on what kind of reduction to expect:

Based on general practice, it seems likely that our new Private Equity ownership will expect us to reduce R&D headcount costs through a reduction. However, we don’t have any concrete details to make a structured decision on this, and our approach would vary significantly depending on the size of the reduction.

Faced with that uncertainty, the policy simply acknowledges the ambiguity and commits to reconsider when more information becomes available:

We believe our new ownership will provide a specific target for Research and Development (R&D) operating expenses during the upcoming financial year planning. We will revise these policies again once we have explicit targets, and will delay planning around reductions until we have those numbers to avoid running two overlapping processes.

There are two frequent points of confusion when creating policies that are worth addressing directly. First, policy is a subset of strategy, rather than the entirety of strategy, because policy is only meaningful in the context of the strategy’s diagnosis.
For example, the “N-1 backfill policy” makes sense in the context of new, private equity ownership. The policy wouldn’t work well in a rapidly expanding organization. Any strategy without a policy is useless, but you’ll also find that policies without context aren’t worth much either. This is particularly unfortunate, because strategies are so often communicated without those critical sections.

Second, policy describes how tradeoffs should be made, but it doesn’t verify how the tradeoffs are actually being made in practice. The next chapter on operations covers how to inspect an organization’s behavior to ensure policies are followed. When reworking a strategy to be more readable, it often makes sense to merge policy and operations sections together. However, when drafting strategy it’s valuable to keep them separate. Yes, you might use a weekly meeting to review whether the policy is being followed, but whether it’s an effective policy is independent of having such a meeting, and what operational mechanisms you use will vary depending on the number of policies you intend to implement.

With this definition in mind, now we can move on to the more interesting discussion of how to set policy.

How to set policy

Every part of writing a strategy feels hard when you’re doing it, but I personally find that writing policy either feels uncomfortably easy or painfully challenging. It’s never a happy medium. Fortunately, the exploration and diagnosis usually come together to make writing your policy simple, although sometimes that simple conclusion may be a difficult one to swallow. The steps I follow to write a strategy’s policy are:

1. Review the diagnosis to ensure it captures the most important themes. It doesn’t need to be perfect, but it shouldn’t have omissions so obvious that you can immediately identify them.
2. Select policies that address the diagnosis. Explicitly match each policy to one or more diagnoses that it addresses. Continue adding policies until every diagnosis is covered. This is a broad instruction, but it’s simpler than it sounds because you’ll typically select from policies identified during your exploration phase. However, there certainly is space to tweak those policies, and to reapply familiar policies to new circumstances. If you do find yourself developing a novel policy, there’s a later section in this chapter, Developing novel policies, that addresses that topic in more detail.
3. Consolidate policies in cases where they overlap or adjoin. For example, two policies about specific teams might be generalized into a policy about all teams in the engineering organization.
4. Backtest the policy against recent decisions you’ve made. This is particularly effective if you maintain a decision log in your organization. (A toy sketch of this step follows the list.)
5. Mine for conflict once again, much as you did in developing your diagnosis. Emphasize feedback from teams and individuals with a different perspective than your own, but don’t wholly eliminate those that you agree with. Just as it’s easy to crowd out opposing views in diagnosis if you don’t solicit their input, it’s possible to accidentally crowd out your own perspective if you anchor too much on others’ perspectives.
6. Consider refinement if you finish writing and you just aren’t sure your approach works. That’s fine! Return to the refinement phase by deploying one of the refinement techniques to increase your conviction. Remember that we talk about strategy like it’s done in one pass, but almost all real strategy takes many refinement passes.
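As a toy illustration of the backtesting step: the decision-log shape and the policy predicate below are hypothetical assumptions, since no particular log format is prescribed here.

```python
# Toy backtest of a proposed policy against a decision log. The log
# format and the policy predicate are invented for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    summary: str
    approved: bool  # what the organization actually decided

def n1_backfill_policy(decision: Decision) -> bool:
    """Hypothetical predicate: would the proposed policy have
    approved this decision?"""
    return "backfill at same level" not in decision.summary

log = [
    Decision("backfill at same level for senior engineer departure", approved=True),
    Decision("backfill one level down for staff engineer departure", approved=True),
]

# Flag past decisions the new policy would have reversed; each mismatch
# is worth discussing before adopting the policy.
mismatches = [d for d in log if n1_backfill_policy(d) != d.approved]
print(len(mismatches))  # 1
```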
The steps of writing policy are relatively pedestrian, largely because you’ve done so much of the work already in the exploration, diagnosis, and refinement steps. If you skip those phases, you’d likely still follow the above steps for writing policy, but the expected quality of the policy itself would be far lower.

How many policies?

Addressing the entirety of the diagnosis is often complex, which is why most strategies feature a set of policies rather than just one. The strategy for decomposing a monolithic application is not one policy deciding not to decompose, but a series of four policies:

- Business units should always operate in their own code repository and monolith.
- New integrations across business unit monoliths should be done using gRPC.
- Except for new business unit monoliths, we don’t allow new services.
- Merge existing services into business-unit monoliths where you can.

Four isn’t universally the right number either. It’s simply the number that was required to solve that strategy’s diagnosis. With an excellent diagnosis, your policies will often feel inevitable, and perhaps even boring. That’s great: what makes a policy good is that it’s effective, not that it’s novel or inspiring.

Kinds of policies

While there are many policies you can write, I’ve found they generally fall into one of four major categories: approvals, allocations, direction, and guidance. This section introduces those categories.

Approvals define the process for making a recurring decision. This might require invoking an architecture advice process, or it might require involving an authority figure like an executive. In the Index post-acquisition integration strategy, there were a number of complex decisions to be made, and the approval mechanism was:

Escalations come to paired leads: given our limited shared context across teams, all escalations must come to both Stripe’s Head of Traffic Engineering and Index’s Head of Engineering.

This allowed the acquired and acquiring teams to start building trust with each other by ensuring both were consulted before any decision was finalized. On the other hand, the user data access strategy’s approval policy was more focused on managing corporate risk:

Exceptions must be granted in writing by CISO. While our overarching Engineering Strategy states that we follow an advisory architecture process as described in Facilitating Software Architecture, the customer data access policy is an exception and must be explicitly approved, with documentation, by the CISO. Start that process in the #ciso channel.

These two approval processes had different goals, so they made tradeoffs differently. There are many ways to tweak approval, allowing for many different tradeoffs between safety, productivity, and trust.

Allocations describe how resources are split across multiple potential investments. Allocations are the most concrete statement of organizational priority, and they also articulate the organization’s belief about how productivity happens in teams. Some companies believe you go fast by swarming more people onto critical problems. Other companies believe you go fast by forcing teams to solve problems without additional headcount. Both can work, and each teaches you something important about the company’s beliefs. The strategy on Uber’s service migration has two concrete examples of allocation policies.
The first describes the Infrastructure engineering team’s allocation between manual provisioning tasks and investing in a self-service provisioning platform:

Constrain manual provisioning allocation to maximize investment in self-service provisioning. The service provisioning team will maintain a fixed allocation of one full time engineer on manual service provisioning tasks. We will move the remaining engineers to work on automation to speed up future service provisioning. This will degrade manual provisioning in the short term, but the alternative is permanently degrading provisioning by the influx of new service requests from newly hired product engineers.

The second allocation policy is implicitly noted in this strategy’s diagnosis, where it describes the allocation policy in the Engineering organization’s higher altitude strategy:

Within infrastructure engineering, there is a team of four engineers responsible for service provisioning today. While our organization is growing at a similar rate as product engineering, none of that additional headcount is being allocated directly to the team working on service provisioning. We do not anticipate this changing.

Allocation policies often create a surprising amount of clarity for the team, and I include them in almost every policy I write, either explicitly or implicitly in a higher altitude strategy.

Direction provides explicit instruction on how a decision must be made. This is the right tool when you know where you want to go, and exactly the way that you want to get there. Direction is appropriate for problems you understand clearly, where you value consistency more than empowering individual judgment. Direction works well when you need an unambiguous policy that doesn’t leave room for interpretation. For example, Calm’s policy for working in the monolith:

We write all code in the monolith. It has been ambiguous if new code (especially new application code) should be written in our JavaScript monolith, or if all new code must be written in a new service outside of the monolith. This is no longer ambiguous: all new code must be written in the monolith. In the rare case that there is a functional requirement that makes writing in the monolith implausible, then you should seek an exception as described below.

In that case, the team couldn’t agree on what should go into the monolith. Individuals would often make incompatible decisions, so creating consistency required removing personal judgment from the equation.

Sometimes judgment is the issue, and sometimes consistency is difficult due to misaligned incentives. A good example of this comes from the strategy on working with new Private Equity ownership:

We will move to an “N-1” backfill policy, where departures are backfilled with a less senior level. We will also institute a strict maximum of one Principal Engineer per business unit.

It’s likely that hiring managers would simply ignore this backfill policy if it were stated more softly, although sometimes less forceful policies are useful.

Guidance provides a recommendation about how a decision should be made. Guidance is useful when there’s enough nuance, ambiguity, or complexity that you can explain the desired destination, but you can’t mandate the path to reaching it.
One example of guidance comes from the Index acquisition integration strategy:

Minimize changes to tokenization environment: because point-of-sale devices directly work with customer payment details, the API that directly supports the point-of-sale device must live within our secured environment where payment details are stored. However, any other functionality must not be added to our tokenization environment.

This might read like direction, but it’s clarifying the desired outcome of avoiding unnecessary complexity in the tokenization environment. However, it’s not able to articulate what complexity is necessary, so ultimately it’s guidance, because it requires significant judgment to interpret.

A second example of guidance comes from the strategy on decomposing a monolithic codebase:

Merge existing services into business-unit monoliths where you can. We believe that each choice to move existing services back into a monolith should be made “in the details” rather than from a top-down strategy perspective. Consequently, we generally encourage teams to wind down their existing services outside of their business unit’s monolith, but defer to teams to make the right decision for their local context.

This is another case of knowing the desired outcome, but encountering too much uncertainty to direct the team on how to get there. If you ask five engineers whether it’s possible to merge a given service back into a monolithic codebase, they’ll probably disagree. That’s fine, and it highlights the value of guidance: it makes it possible to make incremental progress in areas where more concrete direction would cause confusion.

When you’re working on a strategy’s policy section, it’s important to consider all of these categories. Which feel most natural to use will vary depending on your team and role, but they’re all usable:

- If you’re a developer productivity team, you might have to lean heavily on guidance in your policies, paired with increased support for that guidance within the details of your platform.
- If you’re an executive, you might lean heavily on direction. Indeed, you might lean too heavily on direction, where guidance often works better for areas where you understand the direction but not the path.
- If you’re a product engineering organization, you might have to narrow the scope of your direction to the engineers within that organization to deal with the realities of complex cross-organization dynamics.

Finally, if you have a clear approach you want to take that doesn’t fit cleanly into any of these categories, then don’t let this framework dissuade you. Give it a try, and adapt if it doesn’t initially work out.

Maintaining strategy altitude

The chapter on when to write engineering strategy introduced the concept of strategy altitude, which is being deliberate about where certain kinds of policies are created within your organization. Without repeating that section in its entirety, it’s particularly relevant when you set policy to consider how your new policies eliminate flexibility within your organization. Consider these two somewhat opposing strategies:

- Stripe’s Sorbet strategy only worked in an organization that enforced the use of a single programming language across (essentially) all teams.
- Uber’s service migration strategy worked well in an organization that was unwilling to enforce consistent programming language adoption across teams.

Stripe’s organization-altitude policy took away the freedom of individual teams to select their preferred technology stack.
In return, it unlocked the ability to centralize investment in a powerful way. Uber went the opposite way, unlocking the ability of teams to pick their preferred technology stack, while significantly reducing their centralized teams’ leverage. Both altitudes make sense. Both have consequences.

Criteria for effective policies

In The Engineering Executive’s Primer’s chapter on engineering strategy, I introduced three criteria for evaluating policies. They ought to be applicable, enforced, and create leverage. Defining those a bit:

- Applicable: it can be used to navigate complex, real scenarios, particularly when making tradeoffs.
- Enforced: teams will be held accountable for following the guiding policy.
- Create leverage: it creates compounding or multiplicative impact.

The last of these three, create leverage, made sense in the context of a book about engineering executives, but probably doesn’t make as much sense here. Some policies certainly should create leverage (e.g. empowering the developer experience team by restricting new services), but others might not (e.g. moving to an N-1 backfill policy). Outside the executive context, what’s important isn’t necessarily creating leverage, but that a policy solves for part of the diagnosis. That leaves the other two–being applicable and enforced–both of which are necessary for a policy to actually address the diagnosis. Any policy which you can’t determine how to apply, or aren’t willing to enforce, simply won’t be useful.

Let’s apply these criteria to a handful of potential policies. First, let’s think about policies we might write to improve the talent density of our engineering team:

- “We only hire world-class engineers.” This isn’t applicable, because it’s unclear what a world-class engineer means. Because there’s no mutually agreeable definition in this policy, it’s also not consistently enforceable.
- “We only hire engineers that get at least one ‘strong yes’ in scorecards.” This is applicable, because there’s a clear definition. It’s enforceable, depending on the willingness of the organization to reject seemingly good candidates who don’t happen to get a strong yes.

Next, let’s think about a policy regarding code reuse within a codebase:

- “We follow a strict Don’t Repeat Yourself policy in our codebase.” There’s room for debate within a team about whether two pieces of code are truly duplicative, but this is generally applicable. Because there’s room for debate, though, deciding how to enforce it is a very context-specific determination.
- “Code authors are responsible for determining if their contributions violate Don’t Repeat Yourself, and rewriting them if they do.” This is much more applicable, because now only a single person’s judgment is needed to assess the potential repetition. In some ways, this policy is also more enforceable, because there’s no longer any ambiguity around who decides whether a piece of code is a repetition. The challenge is that enforceability now depends on one individual, and making this policy effective will require holding individuals accountable for the quality of their judgment. An organization that’s unwilling to distinguish between good and bad judgment won’t get any value out of the policy.

This is a good example of how a good policy in one organization might become a poor policy in another.
If you ever find yourself wanting to include a policy that for some reason either can’t be applied or can’t be enforced, stop to ask yourself what you’re trying to accomplish, and consider whether there’s a different policy better suited to that goal.

Developing novel policies

My experience is that there are vanishingly few truly novel policies to write. There’s almost always someone else who has already done something similar to your intended approach. Calm’s engineering strategy is such a case: the details are particular to the company, but the general approach is common across the industry. The most likely place to find truly novel policies is during the adoption phase of a new widespread technology, such as the rise of ubiquitous mobile phones, cloud computing, or large language models. Even then, as explored in the strategy for adopting large language models, the new technology can be engaged with as a generic technology:

Develop an LLM-backed process for reactivating departed and suspended drivers in mature markets. Through modeling our driver lifecycle, we determined that improving onboarding time will have little impact on the total number of active drivers. Instead, we are focusing on mechanisms to reactivate departed and suspended drivers, which is the only opportunity to meaningfully impact active drivers.

You could simply replace “LLM” with “data-driven” and it would be equally readable. In this way, policy can generally sidestep areas of uncertainty by being a bit abstract. This avoids being overly specific about topics you simply don’t know much about.

However, even if your policy isn’t novel to the industry, it might still be novel to you or your organization. The steps that I’ve found useful to debug novel policies are the same steps as running a condensed version of the strategy process, with a focus on exploration and refinement:

- Collect a number of similar policies, with a focus on how those policies differ from the policy you are creating.
- Create a systems model to articulate how this policy will work, and also how it will differ from the similar policies you’re considering.
- Run a strategy testing cycle for your proto-policy to discover any unknown-unknowns about how it works in practice.

Whether you run into this scenario is largely a function of the extent of your, and your organization’s, experience. Early in my career, I found myself doing novel (for me) strategy work very frequently; these days I rarely find myself doing novel work, instead focusing on adapting well-known policies to new circumstances.

Are competing policy proposals an anti-pattern?

When creating policy, you’ll often have to engage with the question of whether you should develop one preferred policy or a series of potential policies to pick from. Developing these is a useful stage of setting policy, but rather than helping you refine your policy, I’d encourage you to think of this as exposing gaps in your diagnosis. For example, when Stripe developed the Sorbet ruby-typing tooling, there was debate between two policies:

- Should we build a ruby-typing tool to allow a centralized team to gradually migrate the company to a typed codebase?
- Should we migrate the codebase to a preexisting strongly typed language like Golang or Java?

These were, initially, equally valid hypotheses. It was only by clarifying our diagnosis around resourcing that it became clear that incurring the bulk of costs in a centralized team was clearly preferable to spreading the costs across many teams.
Specifically, we recognized that we wanted to prioritize short-term product engineering velocity, even if it led to a longer migration overall. If you do develop multiple policy options, I encourage you to move the alternatives into an appendix rather than including them in the core of your strategy document. This will make it easier for readers of your final version to understand how to follow your policies, and they are the most important long-term users of your written strategy.

Recognizing constraints

A similar problem to competing solutions is developing a policy that you cannot possibly fund. It’s easy to get enamored with policies that you can’t meaningfully enforce, but that’s bad policy, even if it would work in an alternate universe where it was possible to enforce or resource it. To consider a few examples:

- The strategy for controlling access to user data might have proposed requiring manual approval by a second party of every access to customer data. However, that would have gone nowhere.
- Our approach to Uber’s service migration might have required more staffing for the infrastructure engineering team, but we knew that wasn’t going to happen, so it was a meaningless policy proposal to make.
- The strategy for navigating private equity ownership might have argued that new ownership should not hold engineering accountable to a new standard on spending. But they would have just invalidated that strategy in the next financial planning period.

If you find a policy that contemplates an impractical approach, it doesn’t only indicate that the policy is a poor one; it also suggests your diagnosis is missing an important pillar. Rather than debating the policy options, the fastest path to resolution is to align on the diagnosis that would invalidate potential paths forward. In cases where aligning on the diagnosis isn’t possible, for example because you simply don’t understand the possibilities of a new technology, as encountered in the strategy for adopting LLMs, then you’ve typically found a valuable opportunity to use strategy refinement to build alignment.

Dealing with missing strategies

At a recent company offsite, we were debating which policies we might adopt to deal with annual plans that kept getting derailed after less than a month. Someone remarked that this would be much easier if we could get the executive team to commit to a clearer, written strategy about which business units we were prioritizing. They were, of course, right. It would be much easier. Unfortunately, it goes back to the problem we discussed in the diagnosis chapter about reframing blockers into diagnosis. If a strategy from the company or a peer function is missing, the empowering thing to do is to include the absence in your diagnosis and move forward. Sometimes, even when you do this, it’s easy to fall back into the belief that you cannot set a policy because a peer function might set a conflicting policy in the future. Whether you’re an executive or an engineer, you’ll never have all the details you want to make the ideal policy. Meaningful leadership requires taking meaningful risks, which never gets comfortable.

Summary

After working through this chapter, you know how to develop policy, how to assemble policies to solve your diagnosis, and how to avoid a number of the frequent challenges that policy writers encounter. At this point, there’s only one phase of strategy left to dig into: operating the policies you’ve created.
I took an amazing trip to SE Asia last month, including Angkor Wat. I had a hard time finding good reading or other resources to learn from before I went, in part because Amazon is awash in AI garbage. Here are some books and podcasts I found useful about the Khmer empire in general and Angkor in particular:

- Ancient Angkor by Michael Freeman and Claude Jacques. The closest thing to a coffee-table book to preview what you will see. The practical information is outdated but the pictures and descriptions are good.
- Empire Podcast #185: The God Kings of Angkor Wat by William Dalrymple and Anita Anand. An entertaining and fully detailed account of the Khmer empire. It’s basically an excerpt from Dalrymple’s new book The Golden Road: How Ancient India Transformed the World.
- Fall of Civilizations Podcast #5: The Khmer Empire by Paul Cooper. Another history, not quite as magically well told as Dalrymple but full of good information.
- Angkor and the Khmer Civilization by Michael D. Coe. A highly recommended history of the Khmer region. Honestly I found this very dry and too detailed, but I did learn from it.
- Lonely Planet Pocket Guide: Siem Reap & the Temples of Angkor. We didn’t use this much but it seemed like a useful practical guide. OTOH it dates to 2018 so things have changed.

My other advice for visiting Siem Reap and Angkor is: go. It is amazing. Plan for at least two full days of touristing there. Hire a private guide and driver if you can; it is absolutely worth it. (Email me for a recommendation.)
Often you’ll see a disorganized collection of ideas labeled as a “strategy.” Even when they’re dense with ideas, these can be hard to parse, and they are a major reason why most engineers will claim their company doesn’t have a clear strategy, even though my experience is that all companies follow some strategy, even if it’s undocumented. This chapter lays out a repeatable, structured approach to drafting strategy. It introduces each step of that approach, each of which is then detailed further in its respective chapter. Here we’ll cover:

- How these five steps fit together to facilitate creating strategy, especially by preventing practitioners from skipping steps that feel awkward or challenging.
- Step 1: Exploring the wider industry’s ideas and practices around the strategy you’re working on. Exploration is understanding what recent research might change your approach, and how the state of the art has changed since you last tackled a similar problem.
- Step 2: Diagnosing the details of your problem. It’s hard to slow down to understand your problem clearly before attempting to solve it, but it’s even more difficult to solve anything well without a clear diagnosis.
- Step 3: Refinement is taking a raw, unproven set of ideas and testing them against reality. Three techniques are introduced to support this validation process: strategy testing, systems modeling, and Wardley mapping.
- Step 4: Policy makes the tradeoffs and decisions to solve your diagnosis. These can range from specifying how software is architected, to how pull requests are reviewed, to how headcount is allocated within an organization.
- Step 5: Operations are the concrete mechanisms that translate policy into an active force within your organization. These can be nudges that remind you about code changes without associated tests, or weekly meetings where you study progress on a migration.
- Whether these steps are sacred or are open to adaptation and experimentation, including when you personally should persevere in attempting steps that don’t feel effective.

From this chapter’s launching point, you’ll have the high-level summaries of each step in strategy creation, and can decide where you want to read further. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

How the steps become strategy

Creating effective strategy is not rote incantation of a formula. You can’t merely follow these steps to guarantee that you’ll create a great strategy. However, what I’ve found over and over is that strategies fail more often due to avoidable errors than from fundamentally unsound thinking. Busy people skip steps, especially steps they dislike or have failed at before. These steps are the scaffolding to avoid those errors. By practicing routinely, you’ll build powerful habits and intuition around which approach is most appropriate for the strategy you’re currently working on. They also help turn strategy into a community practice that you, your colleagues, and the wider engineering ecosystem can participate in together.

Each step is an input that flows into the next step. Your exploration is the foundation of a solid diagnosis. Your diagnosis helps you search the infinite space of policy for what you need now. Operational mechanisms help you turn policy into an active force supporting your strategy rather than an abstract treatise.
If you’re skeptical of the steps, you should certainly maintain your skepticism, but do give them a few tries before discarding them entirely. You may also appreciate the discussion in the chapter on bridging between theory and practice when doing strategy.

Explore

Exploration is the deliberate practice of searching through a strategy’s problem and solution spaces before allowing yourself to commit to a given approach. It’s understanding how other companies and teams have approached similar questions, and whether their approaches might also work well for you. It’s also learning why what brought you so much success at your former employer isn’t necessarily the best solution for your current organization. The Uber service migration strategy used exploration to understand the service ecosystem by reading industry literature:

As a starting point, we find it valuable to read Large-scale cluster management at Google with Borg, which informed some elements of the approach to Kubernetes, and Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center, which describes the Mesos/Aurora approach.

It also used a Wardley map to explore the cloud compute ecosystem. For more detail, read the Exploration chapter.

Diagnose

Diagnosis is your attempt to correctly recognize the context that the strategy needs to solve before deciding on the policies to address that context. Starting from your exploration’s learnings, and your understanding of your current circumstances, building a diagnosis forces you to delay thinking about solutions until you fully understand your problem’s nuances. A diagnosis can be largely data-driven, such as the strategy for navigating a private equity ownership transition:

Our Engineering headcount costs have grown by 15% YoY this year, and 18% YoY the prior year. Headcount grew 7% and 9% respectively, with the difference between headcount and headcount costs explained by salary band adjustments (4%), a focus on hiring senior roles (3%), and increased hiring in higher-cost geographic regions (1%).

It can also be less data-driven, instead aiming to summarize a problem, such as the Index acquisition strategy’s summary of the known and unknown elements of the technical integration prior to the acquisition closing:

We will need to rapidly integrate the acquired startup to meet this timeline. We only know a small number of details about what this will entail. We do know that point-of-sale devices directly operate on payment details (e.g. the point-of-sale device knows the credit card details of the card it reads). Our compliance obligations restrict such activity to our “tokenization environment”, a highly secured and isolated environment with direct access to payment details. This environment converts payment details into a unique token that other environments can utilize to operate against payment details without the compliance overhead of having direct access to the underlying payment details.

The approach, and challenges, of developing a diagnosis are detailed in the Diagnosis chapter.

Refine (Test, Map & Model)

Strategy refinement is a toolkit of methods to identify which parts of your diagnosis are most important, and to verify that your approach to solving the diagnosis actually works. This chapter delves into the details of using three methods in particular: strategy testing, systems modeling, and Wardley mapping.

[Figure: an example of a systems modeling diagram.]
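Since the diagram itself doesn’t reproduce well here, the following is a minimal, hand-rolled sketch of the kind of stock-and-flow reasoning such a diagram captures. The stocks, flows, and every number are hypothetical, chosen only to illustrate the technique rather than any specific strategy:

```python
# A minimal stock-and-flow systems model, simulated week by week.
# All stocks and rates are hypothetical, chosen purely for illustration.

WEEKS = 12
opened_per_week = 40          # flow: new pull requests opened each week
review_capacity = 30          # flow limit: reviews the team completes weekly

open_prs = 0                  # stock: pull requests awaiting review
merged_prs = 0                # stock: pull requests merged

for week in range(1, WEEKS + 1):
    open_prs += opened_per_week
    reviewed = min(open_prs, review_capacity)
    open_prs -= reviewed
    merged_prs += reviewed
    print(f"week {week:2}: open={open_prs:4}  merged={merged_prs:4}")

# Because inflow (40/week) exceeds review capacity (30/week), the open-PR
# stock grows without bound: the model exposes the bottleneck before you
# commit to a policy that silently assumes reviews keep up.
```

The value of even a toy model like this is that it forces you to name the stocks and flows you believe drive your diagnosis, and then shows you which of them is the actual bottleneck.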
These techniques are also demonstrated in the strategy case studies, such as the Wardley map of the LLM ecosystem, or the systems model of backfilling roles without downleveling them. For more detail, read the Refinement chapter.

Why isn’t refinement earlier (or later)?

A frequent point of disagreement is that refinement should occur before the diagnosis. Another is that mapping and modeling are two distinct steps, with mapping occurring before diagnosis and modeling occurring after policy. A third is that refinement ought to be the final step of strategy, turning the steps into a looping cycle. These are all reasonable observations, so let me unpack my rationale for this structure. By far the biggest risk for most strategies is not that you model too early or map too late, but that you simply skip both steps entirely. My foremost concern is minimizing the required investment in mapping and modeling so that more folks do these steps at all. Refining after exploring and diagnosing allows you to concentrate your efforts on a smaller number of load-bearing areas. That said, it’s common to refine in many places during your strategy creation. You’re just as likely to have three small refinement steps as one bigger one.

Policy

Policy is interpreting your diagnosis into a concrete plan. This plan also needs to work, which requires careful study of what’s worked within your company, and what new ideas you’ve discovered while exploring the current problem. Policies can range from providing directional guidance, such as the user data controls strategy’s guidance:

Good security discussions don’t frame decisions as a compromise between security and usability. We will pursue multi-dimensional tradeoffs to simultaneously improve security and efficiency. Whenever we frame a discussion on trading off between security and utility, it’s a sign that we are having the wrong discussion, and that we should rethink our approach. We will prioritize mechanisms that can both automatically authorize and automatically document the rationale for accesses to customer data. The most obvious example of this is automatically granting access to a customer support agent for users who have an open support ticket assigned to that agent. (And removing that access when that ticket is reassigned or resolved.)

...to committing not to make a decision until later, as practiced in the Index acquisition strategy:

Defer making a decision regarding the introduction of Java to a later date: the introduction of Java is incompatible with our existing engineering strategy, but at this point we’ve also been unable to align stakeholders on how to address this decision. Further, we see attempting to address this issue as a distraction from our timely goal of launching a joint product within six months. We will take up this discussion after launching the initial release.

This chapter further goes into evaluating policies, overcoming ambiguous circumstances that make it difficult to decide on an approach, and developing novel policies. For full detail, read the Policy chapter.

Operations

Even the best policies have to be interpreted. There will be new circumstances their authors never imagined, and the policies may be in effect long after their authors have left the organization. Operational mechanisms are the concrete implementation of your policy. The simplest mechanism is an explicit escalation path, as shown in Calm’s product engineering strategy:

Exceptions are granted by the CTO, and must be in writing. The above policies are deliberately restrictive. Sometimes they may be wrong, and we will make exceptions to them. However, each exception should be deliberate and grounded in concrete problems we are aligned both on solving and how we solve them. If we all scatter towards our preferred solution, then we’ll create negative leverage for Calm rather than serving as the engine that advances our product.

From that starting point, the mechanisms can get far more complex.
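As one example of a slightly more complex mechanism, here is a minimal sketch of the kind of nudge mentioned earlier: a CI check that reminds authors when a change lacks accompanying tests. The file layout, base branch, and file-extension convention are all assumptions made for illustration:

```python
# A minimal CI nudge: warn (without blocking) when a change touches source
# files but no test files. The paths and base branch here are assumptions.
import subprocess
import sys

BASE_BRANCH = "origin/main"   # assumed integration branch

def changed_files() -> list[str]:
    """List files changed between the base branch and HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", BASE_BRANCH, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    files = changed_files()
    touched_source = any(
        f.endswith(".py") and not f.startswith("tests/") for f in files
    )
    touched_tests = any(f.startswith("tests/") for f in files)
    if touched_source and not touched_tests:
        # A nudge, not a gate: print a reminder but still exit successfully.
        print("Nudge: this change modifies source files without touching tests/.")
        print("If that's intentional, carry on; otherwise consider adding coverage.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The important design choice is that a nudge informs rather than blocks; mechanisms that block belong further along the escalation path.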
This chapter works through evaluating mechanisms, composing an operational plan, and the most common sorts of operational mechanisms that I’ve seen across strategies. For more detail, read the Operations chapter.

Is the structure sacrosanct?

When someone’s struggling to write a strategy document, one of the first tools often recommended is a strategy template. Templates are great: they reduce the ambiguity of an already broad project into something more tractable. If you’re wondering whether you should use a template to craft strategy: sure, go ahead! However, I find that well-meaning, thoughtful templates often turn into lumbering, callous documents that serve no one well. The secret to a good template is that someone has to own it, and that person has to care about the people writing with the template first and foremost, rather than the various constituencies that want to insert requirements into the strategy creation process. The security, compliance, and cost of your plans matter a lot, but many organizations layer more and more requirements into these sorts of documents until the idea of writing them becomes prohibitively painful.

The best advice I can give someone attempting to write strategy is that you should discard every element of strategy that gets in your way, as long as you can explain what that element was intended to accomplish. For example, if you’re drafting a strategy and don’t find any operational mechanisms that fit, that’s fine: discard that section. Ultimately, the structure is not sacrosanct; it’s the thinking behind the sections that really matters. This topic is explored in more detail in the chapter on Making engineering strategies more readable.

Summary

Now you know the foundational steps to conducting strategy. From here, you can dive into the details with the strategy case studies like How should you adopt LLMs?, or you can maintain a high altitude, starting with how exploration creates the foundation for an effective strategy. Whichever you start with, I encourage you to eventually work through both to get the full perspective.
Logic for Programmers v0.8 now out!

The new release has minor changes: new formatting for notes and a better introduction to predicates. I would have rolled it all into v0.9 next month, but I like the monthly cadence. Get it here!

Betteridge's Law of Software Engineering Specialness

In There is No Automatic Reset in Engineering, Tim Ottinger asks:

Do the other people have to live with January 2013 for the rest of their lives? Or is it only engineering that has to deal with every dirty hack since the beginning of the organization?

Betteridge's Law of Headlines says that if a journalism headline ends with a question mark, the answer is probably "no". I propose a similar law relating to software engineering specialness:[1]

If someone asks if some aspect of software development is truly unique to just software development, the answer is probably "no".

Take the idea that "in software, hacks are forever." My favorite example of this comes from a different profession. The Dewey Decimal System hierarchically categorizes books by discipline. For example, Covered Bridges of Pennsylvania has Dewey number 624.37: 6-- is the technology discipline, 62- is engineering, 624 is civil engineering, and 624.3 is "special types of bridges". I have no idea what the last 0.07 means, but you get the picture.

Now if you look at the 6-- "technology" breakdown, you'll see that there's no "software" subdiscipline. This is because Dewey preallocated the whole technology block back in 1876. New topics were instead to be added to the 00- "general-knowledge" catch-all. Eventually 005 was assigned to "software development", meaning The C Programming Language lives at 005.133. Incidentally, another late addition to the general-knowledge block is 001.9: "controversial knowledge". And that's why my hometown library shelved the C++ books right next to The Mothman Prophecies. How's that for technical debt?

If anything, fixing hacks in software is significantly easier than in other fields. This came up when I was interviewing classic engineers. Kludges happened all the time, but "refactoring" them out is expensive. Need to house a machine that's just two inches taller than the room? Guess what, you're cutting a hole in the ceiling. (Even if we restrict the question to other departments in a software company, we can find kludges that are horrible to undo. I once worked for a company that landed an early contract by adding a bespoke support agreement for that one customer. It plagued them for years afterward.)

That's not to say that there aren't things that are different about software versus other fields![2] But I think that most of the time, when we say "software development is the only profession that deals with XYZ", it's only because we're ignorant of how those other professions work.

Short newsletter because I'm way behind on writing my April Cools. If you're interested in April Cools, you should try it out! I make it way harder on myself than it actually needs to be; everybody else who participates finds it pretty chill.

[1] Ottinger caveats it with "engineering, software or otherwise", so I think he knows that other branches of engineering, at least, have kludges.

[2] The "software is different" idea that I'm most sympathetic to is that in software, the tools we use and the products we create are made from the same material. That's unusual at least in classic engineering. Then again, plenty of machinists have made their own lathes and mills!
We're spending just shy of $1.5 million/year on AWS S3 at the moment to host files for Basecamp, HEY, and everything else. The only way we were able to get the pricing that low was by signing a four-year contract. That contract expires this summer, on June 30, so that's our departure date for the final leg of our cloud exit.

We've already racked the replacement from Pure Storage in our two primary data centers: a combined 18 petabytes, securely replicated a thousand miles apart. It's a gorgeous rack full of blazing-fast NVMe storage modules. Each card in the chassis is now capable of storing 150TB. Pure Storage comes with an S3-compatible API, so there's no need for Ceph, MinIO, or any of the other object storage software solutions you might need if you were trying to do this exercise on commodity hardware. That makes the swap pretty easy from the app side.

But there's still work to do. We have to transfer almost six petabytes out of S3. In an earlier age, that egress alone would have cost hundreds of thousands of dollars in fees. But now AWS offers a free 60-day egress window for anyone who wants to leave, so that drops the cost to $0. Nice!

It takes a while to transfer that much data, though. Even on the fat 40-Gbit pipe we have set aside for the purpose, it'll probably take at least three weeks, once you factor in overhead and some babysitting of the process. That's when it's good to remind ourselves why June 30th matters. And the reminder math pencils out in nice, round numbers for easy recollection: if we don't get this done in time, we'll be paying a cool five thousand dollars a day to continue to use S3 (if all the files are still there). Yikes! That's $35,000/week! That's $150,000/month! Pretty serious money for a company of our size.

But so are the savings. Over five years, it'll now be almost five million! Maybe even more, depending on the growth in files we need to store for customers. Against that, we're paying about $1.5 million for the Pure Storage hardware, and a bit less than a million over five years for warranty and support. But those big numbers always seem a bit abstract to me. The idea of paying $5,000/day, if we miss our departure date, is awfully concrete in comparison.
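For the curious, the three-week transfer estimate is easy to sanity-check with a back-of-envelope calculation. The 50% effective-utilization figure below is my own assumption, standing in for the "overhead and babysitting" mentioned above:

```python
# Back-of-envelope: how long does ~6 PB take over a 40 Gbit/s link?
# The 50% effective-utilization figure is an assumption standing in for
# protocol overhead and babysitting of the transfer process.

DATA_PETABYTES = 6
LINK_GBITS_PER_S = 40
EFFECTIVE_UTILIZATION = 0.5   # assumed fraction of line rate actually sustained

data_bits = DATA_PETABYTES * 1e15 * 8
seconds_at_line_rate = data_bits / (LINK_GBITS_PER_S * 1e9)
seconds_realistic = seconds_at_line_rate / EFFECTIVE_UTILIZATION

print(f"At full line rate:  {seconds_at_line_rate / 86_400:.1f} days")
print(f"At 50% utilization: {seconds_realistic / 86_400:.1f} days")
# -> roughly 14 days at line rate, ~28 days realistically: three to four
#    weeks, consistent with the "at least three weeks" estimate above.
```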