In my career, the majority of the strategy work I’ve done has been in non-executive roles, things like Uber’s service migration. Joining Calm was my first executive role, where I was able to not just propose, but also mandate, strategy. Like almost all startups, the engineering team was scattered when I joined. Was our most important work creating more scalable infrastructure? Was our greatest risk the failure to adopt leading programming languages? How did we rescue the stuck service decomposition initiative? This strategy is where the engineering team and I aligned after numerous rounds of iteration, debate, and inevitably some disagreement. As a strategy, it’s both basic and unambiguous about what we valued, and I believe it’s a reasonably good starting point for any low scalability-complexity consumer product. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

More from Irrational Exuberance

Bridging theory and practice in engineering strategy.

Some people I’ve worked with have lost hope that engineering strategy actually exists within any engineering organizations. I imagine that they, reading through the steps to build engineering strategy, or the strategy for navigating private equity ownership, are not impressed. Instead, these ideas probably come across as theoretical at best. In less polite company, they might describe these ideas as fake constructs. Let’s talk about it! Because they’re right. In fact, they’re right in two different ways. First, this book is focused on explaining how to create clean, refined and definitive strategy documents, whereas initially most real strategy artifacts look rather messy. Second, applying these techniques in practice can require a fair amount of creativity. It might sound easy, but it’s quite difficult in practice. This chapter will cover:

- Why strategy documents need to be clear and definitive, especially when strategy development has been messy
- How to iterate on strategy when there are demands for unrealistic timelines
- Using strategy as non-executives, where others might override your strategy
- Handling dynamic, quickly changing environments where diagnosis can change frequently
- Working with indecisive stakeholders who don’t provide clarity on approach
- Surviving other people’s bad strategy work

Alright, let’s dive into the many ways that praxis doesn’t quite line up with theory. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Clear and definitive documents

As explored in Making engineering strategies more readable, documents that feel intuitive to write are often fairly difficult to read. That’s because thinking tends to be a linear-ish journey from a problem to a solution. Most readers, on the other hand, usually just want to know the solution and then to move on.
That’s because good strategies are read for direction (e.g. when a team wants to understand how they’re supposed to solve a specific issue at hand) far more frequently than they’re read to build agreement (e.g. building stakeholder alignment during the initial development of the strategy). However, many organizations only produce writer-oriented strategy documents, and may not have any reader-oriented documents at all. If you’ve predominantly worked in those sorts of organizations, then the first reader-oriented documents you encounter will seem artificial. There are also organizations that have many reader-oriented documents, but omit the rationale behind those documents. Those documents feel prescriptive and heavy-handed, because the infrequent reader who does want to understand the thinking can’t find it. Further, when they want to propose an alternative, they have to do so without the rationale behind the current policies: the absence of that context often transforms what was a collaborative problem-solving opportunity into a political match. With that in mind, I’d encourage you to see the frequent absence of these documents as a major opportunity to drive strategy within your organization, rather than evidence that these documents don’t work. My experience is that they do.

Doing strategy despite unrealistic timelines

The most frequent failure mode I see for strategy is when it’s rushed, and its authors accept that thinking must stop when the artificial deadline is reached. Taking annual planning at Stripe as an example, Claire Hughes Johnson argued that planning expands to fit any timeline, and consequently set a short planning timeline of several weeks. Some teams accepted that as a fixed timeline and stopped planning when the timeline ended, whereas effective teams never stopped planning before or after the planning window. When strategy work is given an artificial or unrealistic timeline, then you should deliver the best draft you can.
Afterwards, rather than being finished, you should view yourself as starting the refinement process. An open strategy secret is that many strategies never leave the refinement phase, and continue to be tweaked throughout their lifespan. Why should a strategy with an early deadline be any different? Well, there is one important problem to acknowledge: I’ve often found that the executive who initially provided the unrealistic timeline intended it as a forcing function to inspire action and quick thinking. If you have a discussion with them directly, they’re usually quite open to adjusting the approach. However, the intermediate layers of leadership between that executive and you often calcify on a particular approach which they claim the executive insists on precisely following. Sometimes having the conversation with the responsible executive is quite difficult. In that case, you do have to work with individuals taking the strategy as literal and unalterable until either you can have the conversation or something goes wrong enough that the executive starts paying attention again. Usually, though, you can find someone who has a communication path, as long as you can articulate the issue clearly.

Using strategy as non-executives

Some engineers will argue that the only valid strategy altitude is the highest one defined by executives, because any other strategy can be invalidated by a new, higher altitude strategy. They would claim that teams simply cannot do strategy, because executives might invalidate it. Some engineering executives would argue the same thing, instead claiming that they can’t work on an engineering strategy because the missing product strategy or business strategy might introduce new constraints. I don’t agree with this line of thinking at all. To do strategy at any altitude, you have to come to terms with the certainty that new information will show up, and you’ll need to revise your strategy to deal with it.
Uber’s service provisioning strategy is a good counterexample against the idea that you have to wait for someone else to set the strategy table. We were able to find a durable diagnosis despite being a relatively small team within a much larger organization that was relatively indifferent to helping us succeed. When it comes to using strategy, effective diagnosis trumps authority. In my experience, at least as many executives’ strategies are ravaged by reality’s pervasive details as are overridden by higher altitude strategies. The only way to be certain your strategy will fail is waiting until you’re certain that no new information might show up and require changing it.

Doing strategy in chaotic environments

How should you adopt LLMs? discusses how a company should plot a path through the rapidly evolving LLM ecosystem. Periods of rapid technology evolution are one reason why your strategy might encounter a pocket of chaos, but there are many others. Pockets of rapid hiring, as well as layoffs, create chaos. The departure of load-bearing senior leaders can change a company quickly. Slowing revenue in a company’s core business can also initiate chaotic actions in pursuit of a new business. Strategies don’t require stable environments. Instead, strategies require awareness of the environment that they’re operating in. In a stable period, a strategy might expect to run for several years with relatively little deviation from the initial approach. In a dynamic period, the strategy might acknowledge that you can only protect capacity in two-week chunks before a new critical initiative pops up. It’s possible to do good strategy in either scenario, but it’s impossible to do good strategy if you don’t diagnose the context effectively.

Unreliable information

Oftentimes, the way forward would be obvious if a few key decisions were made: you know who is supposed to make those decisions, but you simply cannot get them to decide.
My most visceral experience of this was conducting a layoff where the CEO wouldn’t define a target cost reduction or a thesis of how much various functions (e.g. engineering, marketing, sales) should contribute to those reductions. With those two decisions made, engineering’s approach would have been obvious, and without that clarity things felt impossible. Although I was frustrated at the time, I’ve since come to appreciate that missing decisions are the norm rather than the exception. The strategy on Navigating Private Equity ownership deals with this problem by acknowledging a missing decision, and expressly blocking one part of its execution on that decision being made. Other parts of its plan, like changing how roles are backfilled, went ahead to address the broader cost problem. Rather than blocking on missing information, your strategy should acknowledge what’s missing, and move forward where you can. Sometimes that’s moving forward by taking risk, sometimes that’s delaying for clarity, but it’s never accepting yourself as stuck without options other than pointing a finger.

Surviving other people’s bad strategy work

Sometimes you will be told to follow something which is described as a strategy, but is really just a policy without any strategic thinking behind it. This is an unavoidable element of working in organizations and happens for all sorts of reasons. Sometimes, your organization’s leader doesn’t believe it’s valuable to explain their thinking to others, because they see themselves as the one important decision maker. Other times, your leader doesn’t agree with a policy they’ve been instructed to roll out. Adoption of “high hype” technologies like blockchain during the crypto boom was often top-down direction from company leadership that engineering disagreed with, but was obligated to align with. In that case, your leader is stuck trying to explain a strategy that they themselves don’t understand either. This is a frustrating situation.
What I’ve found most effective is writing a strategy of my own, one that acknowledges the broader strategy I disagree with in its diagnosis as a static, unavoidable truth. From there, I’ve been able to make practical decisions that recognize the context, even if it’s not a context I’d have selected for myself.

Summary

I started this chapter by acknowledging that the steps to building engineering strategy are a theory of strategy, and one that can get quite messy in practice. Now you know why strategy documents often come across as overly pristine: because they’re trying to communicate clearly about a complex topic. You also know how to navigate the many ways reality pulls you away from perfect strategy, such as unrealistic timelines, higher altitude strategies invalidating your own strategy work, working in a chaotic environment, and dealing with stakeholders who refuse to align with your strategy. Finally, we acknowledged that sometimes strategy work done by others is not what we’d consider strategy; it’s often unsupported policy with neither a diagnosis nor an approach to operating the policy. That’s all stuff you’re going to run into, and it’s all stuff you’re going to overcome on the path to doing good strategy work.

Uber's service migration strategy circa 2014.

In early 2014, I joined as an engineering manager for Uber’s Infrastructure team. We were responsible for a wide range of things, including provisioning new services. While the overall team I led grew significantly over time, the subset working on service provisioning never grew beyond four engineers. Those four engineers successfully migrated 1,000+ services onto a new, future-proofed service platform. More importantly, they did it while absorbing the majority, although certainly not the entirety, of the migration workload onto that small team rather than spreading it across the 2,000+ engineers working at Uber at the time. Their strategy serves as an interesting case study of how a team can drive strategy, even without any executive sponsor, by focusing on solving a pressing user problem, and providing effective ergonomics while doing so. Note that after this introductory section, the remainder of this strategy will be written from the perspective of 2014, when it was originally developed. More than a decade after this strategy was implemented, we have an interesting perspective to evaluate its impact. It’s fair to say that it had some meaningful, negative consequences by allowing the widespread proliferation of new services within Uber. Those services contributed to a messy architecture that had to go through cycles of internal cleanup over the following years. As the principal author of this strategy, I’ve learned a lot from meditating on the fact that this strategy was wildly successful, that I think Uber is better off for having followed it, and that it also meaningfully degraded Uber’s developer experience over time. There’s both good and bad here; with a wide enough lens, all evaluations get complicated. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.
Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore, then Diagnose and so on. Relative to the default structure, this document has one tweak, folding the Operation section in with Policy. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

We’ve adopted these guiding principles for extending Uber’s service platform:

- Constrain manual provisioning allocation to maximize investment in self-service provisioning. The service provisioning team will maintain a fixed allocation of one full time engineer on manual service provisioning tasks. We will move the remaining engineers to work on automation to speed up future service provisioning. This will degrade manual provisioning in the short term, but the alternative is permanently degrading provisioning under the influx of new service requests from newly hired product engineers.
- Self-service must be safely usable by a new hire without Uber context. It is possible today to make a Puppet or Clusto change while provisioning a new service that negatively impacts the production environment. This must not be true in any self-service solution.
- Move to structured requests, and out of tickets. Missing or incorrect information in provisioning requests creates significant delays in provisioning. Further, collecting this information is the first step of moving to a self-service process. As such, we can get paid twice by reducing errors in manual provisioning while also creating the interface for self-service workflows.
- Prefer initializing new services with good defaults rather than requiring user input. Most new services are provisioned for new projects with strong timeline pressure but little certainty on their long-term requirements. These users cannot accurately predict their future needs, and expecting them to do so creates significant friction.
Instead, the provisioning framework should suggest good defaults, and make it easy to change the settings later when users have more clarity. The gate from development environment to production environment is a particularly effective one for ensuring settings are refreshed.

We are materializing those principles into this sequenced set of tasks:

- Create an internal tool that coordinates service provisioning, replacing the process where teams request new services via Phabricator tickets. This new tool will maintain a schema of required fields that must be supplied, with the aim of eliminating the majority of back and forth between teams during service provisioning. In addition to capturing necessary data, this will also serve as our interface for automating various steps in provisioning without requiring future changes in the workflow to request service provisioning.
- Extend the internal tool to generate Puppet scaffolding for new services, reducing the potential for errors in two ways. First, the data supplied in the service provisioning request can be directly included in the rendered template. Second, this will eliminate most human tweaking of templates, where typos can create issues.
- Port allocation is a particularly high-risk element of provisioning, as reusing a port can break routing to an existing production service. As such, this will be the first area we fully automate, with the provisioning service supplying the allocated port rather than requiring requesting teams to provide an already allocated port. Doing this will require moving the port registry out of a Phabricator wiki page and into a database, which will allow us to guard access with a variety of checks.
- Manual assignment of new services to servers often leads to new services being allocated to already heavily utilized servers.
We will replace the manual assignment with an automated system, and do so with the intention of migrating to the Mesos/Aurora cluster once it is available for production workloads.

Each week, we’ll review the size of the service provisioning queue, along with the service provisioning time, to assess whether the strategy is working or needs to be revised.

Prolonged strategy testing

Although I didn’t have a name for this practice in 2014 when we created and implemented this strategy, the preceding paragraph captures an important truth of team-led, bottom-up strategy: the entire strategy was implemented in a prolonged strategy testing phase. This is an important truth of all low-altitude, bottom-up strategy: you don’t have the authority to mandate compliance. An executive’s high-altitude strategy can be enforced despite not working due to their organizational authority, but a team’s strategy will only endure while it remains effective.

Refine

In order to refine our diagnosis, we’ve created a systems model for service onboarding. This will allow us to simulate a variety of different approaches to our problem, and determine which approach, or combination of approaches, will be most effective. As we exercised the model, it became clear that: we are increasingly falling behind, hiring onto the service provisioning team is not a viable solution, and moving to a self-service approach is our only option. While the model writeup justifies each of those statements in more detail, we’ll include two charts here. The first chart shows the status quo, where new service provisioning requests, labeled as Initial RequestedServices, quickly accumulate into a backlog. Second, we have a chart comparing the outcomes between the current status quo and a self-service approach. In that chart, you can see that the service provisioning backlog in the self-service model remains steady, as represented by the SelfService RequestedServices line.
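Stepping back to the port-allocation task in the Policy & Operation section: moving the registry from a wiki page into a database means the database itself can guard against port reuse. Here is a minimal, hypothetical sketch of that idea; the schema, port range, and service names are all inventions for illustration, not the actual Uber tooling.

```python
import sqlite3

# Hypothetical port registry: the PRIMARY KEY and UNIQUE constraints make
# double-allocation impossible at the database layer, rather than relying
# on a human reading a wiki page correctly.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE port_registry (
        port INTEGER PRIMARY KEY,
        service TEXT NOT NULL UNIQUE
    )
""")

def allocate_port(conn, service, low=9000, high=9999):
    """Allocate the next unused port in [low, high] for a service."""
    with conn:  # single transaction, so concurrent requests cannot collide
        (port,) = conn.execute(
            "SELECT COALESCE(MAX(port), ?) + 1 FROM port_registry",
            (low - 1,),
        ).fetchone()
        if port > high:
            raise RuntimeError("port range exhausted")
        conn.execute(
            "INSERT INTO port_registry (port, service) VALUES (?, ?)",
            (port, service),
        )
        return port

print(allocate_port(conn, "example-service-a"))  # 9000
print(allocate_port(conn, "example-service-b"))  # 9001
```

The point of the sketch is the design choice, not the code: once allocation runs through a transactional store, "a variety of checks" (range limits, uniqueness, auditing) become enforceable rather than advisory.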
Of the various attempts to find a solution, none of the others showed promise, including eliminating all errors in provisioning and increasing the team’s capacity by 500%.

Diagnose

We’ve diagnosed the current state of service provisioning at Uber as:

- Many product engineering teams are aiming to leave the centralized monolith, which is generating two to three service provisioning requests each week. We expect this rate to increase roughly linearly with the size of the product engineering organization. Even if we disagree with this shift to additional services, there’s no team responsible for maintaining the extensibility of the monolith, and working in the monolith is the number one source of developer frustration, so we don’t have a practical counter proposal to offer engineers other than provisioning a new service.
- The engineering organization is doubling every six months. Consequently, a year from now, we expect eight to twelve service provisioning requests every week.
- Within infrastructure engineering, there is a team of four engineers responsible for service provisioning today. While our organization is growing at a similar rate as product engineering, none of that additional headcount is being allocated directly to the team working on service provisioning. We do not anticipate this changing. Some additional headcount is being allocated to Service Reliability Engineers (SREs) who can take on the most nuanced, complicated service provisioning work. However, their bandwidth is already heavily constrained across many tasks, so relying on SREs is an insufficient solution.
- The queue for service provisioning is already increasing in size as things are today. Barring some change, many services will not be provisioned in a timely fashion. Today, provisioning a new service takes about a week, with numerous round trips between the requesting team and the provisioning team.
- Missing and incorrect information between teams is the largest source of delay in provisioning services. If the provisioning team has all the necessary information, and it’s accurate, then a new service can be provisioned in about three to four hours of work across configuration in Puppet, metadata in Clusto, allocating ports, assigning the service to servers, and so on.
- There are few safeguards on port allocation, server assignment and so on. It is easy to inadvertently cause a production outage during service provisioning unless done with attention to detail. Given our rate of hiring, training the engineering organization to use this unsafe toolchain is an impractical solution: even if we train the entire organization perfectly today, there will be just as many untrained individuals in six months. Further, product engineering leadership has no interest in their teams being diverted to service provisioning training.
- It’s widely agreed across the infrastructure engineering team that essentially every component of service provisioning should be replaced as soon as possible, but there is no concrete plan to replace any of the core components. Further, there is no team accountable for replacing these components, which means the service provisioning team will either need to work around the current tooling or replace that tooling ourselves.
- It’s urgent to unblock development of new services, but moving those new services to production is rarely urgent, and occurs after a long internal development period. Evidence of this is that requests to provision a new service generally come with significant urgency and internal escalations to management. After the service is provisioned for development, there are relatively few urgent escalations other than one-off requests for increased production capacity during incidents.
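The request-rate projection in the diagnosis is simple compounding: if provisioning requests scale linearly with a headcount that doubles every six months, today’s two to three weekly requests become eight to twelve within a year. A quick sanity check (a sketch added for illustration, not part of the original strategy):

```python
# Sanity check of the diagnosis: request volume is assumed to scale
# linearly with product engineering headcount, which doubles every
# six months.

def projected_weekly_requests(current_rate, months, doubling_months=6):
    """Project the weekly request rate `months` from now."""
    return current_rate * 2 ** (months / doubling_months)

print(projected_weekly_requests(2, 12))  # 8.0
print(projected_weekly_requests(3, 12))  # 12.0
```

The same arithmetic also shows why the problem compounds: eighteen months out, the low end alone would be sixteen requests per week against a fixed manual capacity.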
- Another team within infrastructure is actively exploring adoption of Mesos and Aurora, but there’s no concrete timeline for when this might be available for our usage. Until they commit to supporting our workloads, we’ll need to find an alternative solution.

Explore

Uber’s server and service infrastructure today is composed of a handful of pieces. First, we run servers on-prem within a handful of colocations. Second, we describe each server in Puppet manifests to support repeatable provisioning of servers. Finally, we manage fleet and server metadata in a tool named Clusto, originally created by Digg, which allows us to populate Puppet manifests with server and cluster appropriate metadata during provisioning. In general, we agree that our current infrastructure is nearing its end of lifespan, but it’s less obvious what the appropriate replacements are for each piece. There’s significant internal opposition to running in the cloud, up to and including our CEO, so we don’t believe that will change in the foreseeable future. We do however believe there’s opportunity to change our service definitions from Puppet to something along the lines of Docker, and to change our metadata mechanism towards a more purpose-built solution like Mesos/Aurora or Kubernetes. As a starting point, we find it valuable to read Large-scale cluster management at Google with Borg, which informed some elements of the approach to Kubernetes, and Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center, which describes the Mesos/Aurora approach. If you’re wondering why there’s no mention of the Borg, Omega, and Kubernetes paper, it’s because it wasn’t published until 2016, after this strategy was developed. Within Uber, we have a number of ex-Twitter engineers who can speak with confidence to their experience operating with Mesos/Aurora at Twitter.
We have been unable to find anyone to speak with that has production Kubernetes experience operating a comparably large fleet of 10,000+ servers, although presumably someone is operating, or close to operating, Kubernetes at that scale. Our general belief of the evolution of the ecosystem at the time is described in this Wardley mapping exercise on service orchestration (2014). One of the unknowns today is how the evolution of Mesos/Aurora and Kubernetes will look in the future. Kubernetes seems promising with Google’s backing, but there are few if any meaningful production deployments today. Mesos/Aurora has more community support and more production deployments, but the absolute number of deployments remains quite small, and there is no large-scale industry backer outside of Twitter. Even further out, there’s considerable excitement around “serverless” frameworks, which seem like a likely future evolution, but canvassing the industry and our networks we’ve simply been unable to find enough real-world usage to make an active push towards this destination today. Wardley mapping is introduced as one of the techniques for strategy refinement, but it can also be a useful technique for exploring a dynamic ecosystem like service orchestration in 2014. Assembling each strategy requires exercising judgment on how to compile the pieces together most usefully, and in this case I found that the map fits most naturally in with the rest of exploration rather than in the more operationally-focused refinement section.

Service onboarding model for Uber (2014).

At the core of Uber’s service migration strategy (2014) is understanding the service onboarding process, and identifying the levers to speed up that process. Here we’ll develop a system model representing that onboarding process, and exercise the model to test a number of hypotheses about how to best speed up provisioning. In this chapter, we’ll cover:

- Where the model of service onboarding suggested we focus our efforts
- Developing a system model using the lethain/systems package on GitHub. That model is available in the lethain/eng-strategy-models repository
- Exercising that model to learn from it

Let’s figure out what this model can teach us. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Learnings

Even if we model this problem with a 100% success rate (e.g. no errors at all), the backlog of requested new services continues to increase over time. This clarifies that the problem to be solved is not the quality of service the service provisioning team is providing, but rather that the fundamental approach is not working. Although hiring is tempting as a solution, our model suggests it is not a particularly valuable approach in this scenario. Even increasing the Service Provisioning team’s staff allocated to manually provisioning services by 500% doesn’t solve the backlog of incoming requests. If reducing errors doesn’t solve the problem, and increased hiring for the team doesn’t solve the problem, then we have to find a way to eliminate manual service provisioning entirely. The most promising candidate is moving to a self-service provisioning model, which our model shows solves the backlog problem effectively.
Refining our earlier statement, additional hiring may benefit the team if we are able to focus those hires on building self-service provisioning, and are able to ramp their productivity faster than the increase of incoming service provisioning requests.

Sketch

Our initial sketch of service provisioning is a simple pipeline starting with requested services and moving step by step through to server capacity allocated. Some of these steps are likely much slower than others, but it gives a sense of the stages and where things might go wrong. It also gives us a sense of what we can measure to evaluate if our approach to provisioning is working well. One element worth mentioning is the dotted lines from hiring rate to product engineers and from product engineers to requested services. These are called links, which represent one stock influencing another stock without flowing directly into it. A purist would correctly note that links should connect to flows rather than stocks. That is true! However, as we’ll encounter when we convert this sketch into a model, there are actually several counterintuitive elements here that are necessary to model this system but make the sketch less readable. As a modeler, you’ll frequently encounter these sorts of tradeoffs, and you’ll have to decide what choices serve your needs best in the moment. The biggest element missing from the initial sketch is error flows, where things can sometimes go wrong in addition to sometimes going right. There are many ways things can go wrong, but we’re going to focus on modeling three error flows in particular:

- Missing/incorrect information occurs twice in this model, and throws a provisioning request back into the initial provisioning phase where information is collected. When this occurs during port assignment, this is a relatively small trip backwards. However, when it occurs in Puppet configuration, this is a significantly larger step backwards.
- Puppet error occurs in the second to final stock, Puppet configuration tested & merged. This sends requests back one step in the provisioning flow.

Updating our sketch to reflect these flows, we get a fairly complete, and somewhat nuanced, view of the service provisioning flow. Note that the combination of these two flows introduces the possibility of a service being almost fully provisioned, but then traveling from Puppet testing back to Puppet configuration due to Puppet error, and then backwards again to the initial step due to Missing/incorrect information. This means it’s possible to lose almost all provisioning progress if everything goes wrong. There are more nuances we could introduce here, but there’s already enough complexity here for us to learn quite a bit from this model.

Reason

Studying our sketches, a few things stand out:

- The hiring of product engineers is going to drive up service provisioning requests over time, but there’s no counterbalancing hiring of infrastructure engineers to work on service provisioning. This means there’s an implicit, but very real, deadline to scale this process independently of the size of the infrastructure engineering team. Even without building the full model, it’s clear that we have to either stop hiring product engineers, turn this into a self-service solution, or find a new mechanism to discourage service provisioning.
- The size of the error rates is going to influence results a great deal, particularly those for Missing/incorrect information. This is probably the most valuable place to start looking for efficiency improvements.
- Missing information errors are more expensive than the model implies, because they require coordination across teams to resolve. Conversely, Puppet testing errors are probably cheaper than the model implies, because they should be solvable within the same team and consequently benefit from a quick iteration loop.

Now we need to build a model that helps guide our inquiry into those questions.
Model

You can find the full implementation of this model on GitHub if you want to see the entirety rather than these emphasized snippets. First, let’s get the success states working:

HiringRate(10)
ProductEngineers(1000)
[PotentialHires] > ProductEngineers @ HiringRate
[PotentialServices] > RequestedServices(10) @ ProductEngineers / 10
RequestedServices > InflightServices(0, 10) @ Leak(1.0)
InflightServices > PortNameAssigned @ Leak(1.0)
PortNameAssigned > PuppetGenerated @ Leak(1.0)
PuppetGenerated > PuppetConfigMerged @ Leak(1.0)
PuppetConfigMerged > ServerCapacityAllocated @ Leak(1.0)

As we run this model, we can see that the number of requested services grows significantly over time. This makes sense, as we’re only able to provision a maximum of ten services per round. However, it’s also the best case, because we’re not capturing the three error states:

Unique port and name assignment can fail because of missing or incorrect information.
Puppet configuration can also fail due to missing or incorrect information.
Puppet configurations can have errors in them, requiring rework.

Let’s update the model to include these failure modes, starting with unique port and name assignment. The error-free version looks like this:

InflightServices > PortNameAssigned @ Leak(1.0)

Now let’s add in an error rate, where 20% of requests are missing information and return to the inflight services stock.
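Before layering in the errors, it can help to see the same pipeline as executable code. The sketch below is a hand-rolled, plain-Python re-implementation of the spec above rather than the `systems` DSL itself, so the per-round update order and mechanics are simplifying assumptions:

```python
# Plain-Python sketch of the error-free provisioning pipeline.
# Each Leak(1.0) stage fully drains one step per round; the inflight
# stock accepts at most 10 requests per round (the team's capacity).

def simulate(rounds=100):
    engineers = 1000.0               # ProductEngineers(1000)
    requested = 10.0                 # RequestedServices(10)
    inflight = port = generated = merged = allocated = 0.0
    for _ in range(rounds):
        engineers += 10              # HiringRate(10)
        requested += engineers / 10  # ~1 request per 10 engineers per round
        intake = min(requested, 10)  # InflightServices capped at 10
        requested -= intake
        # Shift every stage one step downstream, last stage first, so a
        # request moves at most one step per round.
        allocated += merged
        merged = generated
        generated = port
        port = inflight
        inflight = intake
    return requested, allocated

backlog, provisioned = simulate()
print(f"backlog: {backlog:.0f}, provisioned: {provisioned:.0f}")
```

Running this reproduces the dynamic described above: provisioning tops out at ten services per round while the backlog of requested services grows without bound.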
PortNameAssigned > PuppetGenerated @ Leak(0.8)
PortNameAssigned > RequestedServices @ Leak(0.2)

Then let’s do the same thing for Puppet configuration errors:

# original version
PuppetGenerated > PuppetConfigMerged @ Leak(1.0)

# updated version with errors
PuppetGenerated > PuppetConfigMerged @ Leak(0.8)
PuppetGenerated > InflightServices @ Leak(0.2)

Finally, we’ll make a similar change to represent errors made in the Puppet templates themselves:

# original version
PuppetConfigMerged > ServerCapacityAllocated @ Leak(1.0)

# updated version with errors
PuppetConfigMerged > ServerCapacityAllocated @ Leak(0.8)
PuppetConfigMerged > PuppetGenerated @ Leak(0.2)

Even with relatively low error rates, we can see that the throughput of the system overall is meaningfully impacted by introducing these errors. Now that we have the foundation of the model built, it’s time to start exercising the model to understand the problem space a bit better.

Exercise

We already know the errors are impacting throughput, but let’s start by narrowing down which errors matter most by increasing the error rate for each of them independently and comparing the impact. To model this, we’ll create three new specifications, each of which increases one error rate from 20% to 50%, and see how the overall throughput of the system is impacted:

# test 1: port assignment errors increased
PortNameAssigned > PuppetGenerated @ Leak(0.5)
PortNameAssigned > RequestedServices @ Leak(0.5)

# test 2: puppet generated errors increased
PuppetGenerated > PuppetConfigMerged @ Leak(0.5)
PuppetGenerated > InflightServices @ Leak(0.5)

# test 3: puppet merged errors increased
PuppetConfigMerged > ServerCapacityAllocated @ Leak(0.5)
PuppetConfigMerged > PuppetGenerated @ Leak(0.5)

Comparing the impact of increasing the error rates from 20% to 50% in each of the three error loops, we can get a sense of the model’s sensitivity to each error.
This chart captures why exercising is so impactful: we’d assumed during sketching that errors in Puppet generation would matter the most because they caused a long trip backwards, but it turns out a very high error rate early in the process matters even more, because there are still multiple other potential errors later on that compound on its increase. Next we can get a sense of the impact of hiring more people onto the service provisioning team to manually provision more services, which we can model by increasing the maximum size of the inflight services stock from 10 to 50.

# initial model
RequestedServices > InflightServices(0, 10) @ Leak(1.0)

# with 5x capacity!
RequestedServices > InflightServices(0, 50) @ Leak(1.0)

Unfortunately, we can see that even increasing the team’s capacity by 500% doesn’t solve the backlog of requested services. There’s some impact, but not that much, and the backlog of requested services remains extremely high. We can conclude that more infrastructure hiring isn’t the solution we need, but let’s see if moving to self-service is a plausible solution. We can simulate the impact of moving to self-service by removing the maximum size from inflight services entirely:

# initial model
RequestedServices > InflightServices(0, 10) @ Leak(1.0)

# simulating self-service
RequestedServices > InflightServices(0) @ Leak(1.0)

We can see this finally solves the backlog. At this point, we’ve exercised the model a fair amount and have a good sense of what it wants to tell us. We know which errors matter the most to invest in early, and we also know that we need to make the move to a self-service platform sometime soon.
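The capacity experiments above can be reproduced in the same plain-Python style. This sketch parameterizes both the error rate and the inflight cap so we can compare manual capacity (10), a 5x team (50), and self-service (no cap). Again, it is a hand-rolled approximation of the spec rather than the `systems` DSL, and the synchronous flow mechanics are simplifying assumptions:

```python
# Full pipeline with the three 20% error leaks and a configurable
# inflight cap. All flows are computed from the state at the start of
# each round, then applied together.

def simulate(rounds=100, err=0.2, inflight_max=10):
    engineers = 1000.0
    requested, inflight = 10.0, 0.0
    port = generated = merged = allocated = 0.0
    for _ in range(rounds):
        engineers += 10
        requested += engineers / 10
        intake = requested if inflight_max is None else min(requested, inflight_max)
        port_ok, port_err = port * (1 - err), port * err          # errors return to requested
        gen_ok, gen_err = generated * (1 - err), generated * err  # errors return to inflight
        mrg_ok, mrg_err = merged * (1 - err), merged * err        # errors return to generated
        requested += port_err - intake
        inflight, port = intake + gen_err, inflight
        generated = port_ok + mrg_err
        merged = gen_ok
        allocated += mrg_ok
    return {"backlog": requested, "allocated": allocated}

for cap in (10, 50, None):
    r = simulate(inflight_max=cap)
    print(f"cap={cap}: backlog={r['backlog']:.0f}, allocated={r['allocated']:.0f}")
```

Even the 5x cap leaves a large backlog, while removing the cap entirely (self-service) drains it, matching the conclusion above.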

Refining strategy with Wardley Mapping.

The first time I heard about Wardley Mapping was from Charity Majors discussing it on Twitter. Of the three core strategy refinement techniques, this is the one that I’ve personally used the least. Despite that, I decided to include it in this book because it highlights how many different techniques can be used for refining strategy, and also because it’s particularly effective at looking at the broadest ecosystems your organization exists in. Where the other techniques like systems thinking and strategy testing often zoom in, Wardley mapping is remarkably effective at zooming out. In this chapter, we’ll cover:

A ten-minute primer on Wardley mapping
Recommendations for tools to create Wardley maps
When Wardley maps are an ideal strategy refinement tool, and when they’re not
The process I use to map, as well as integrate a Wardley map into strategy creation
Breadcrumbs to specific Wardley maps that provide examples
Documenting a Wardley map in the context of a strategy writeup
Why I limited focus on two elements of Wardley’s work: doctrines and gameplay

After working through this chapter, and digging into some of this book’s examples of Wardley maps, you’ll have a good background to start your own mapping practice. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Ten-minute primer

Wardley maps are a technique created by Simon Wardley to ensure your strategy is grounded in reality. Or, as mapping practitioners would say, it’s a tool for creating situational awareness. If you have a few days, you might want to start your dive into Wardley mapping by reading Simon Wardley’s book on the topic, Wardley Maps. If you only have ten minutes, then this section should be enough to get you up to speed on reading Wardley maps.
Picking an example to work through, we’re going to create a Wardley map that aims to understand a knowledge base management product, along the lines of a wiki like Confluence or Notion. You need to know three foundational concepts to read a Wardley map:

Maps are populated with three kinds of components: users, needs, and capabilities. Users exist at the top, and represent a cohort of users who will use your product. Each kind of user has a specific set of needs, generally tasks that they need to accomplish. Each need, in turn, requires certain capabilities to fulfill it. Any box connecting directly to a user is a need. Any box connecting to a need is a capability. A capability can be connected to any number of needs, but can never connect directly to a user; capabilities connect to users only indirectly, via a need.

The x-axis is divided into four segments, representing how commoditized a capability is. On the far left is genesis, which represents a brand-new capability that hasn’t existed before. On the far right is commoditized, something so standard and expected that it’s unremarkable, like turning on a switch causing electricity to flow. In between are custom and product, the two categories where most items fall on the map. Custom represents something that requires specialized expertise and operation to function, such as a web application that requires software engineers to build and maintain. Product represents something that can generally be bought. In this map, document reading is commoditized: it’s unremarkable if your application allows its users to read content. On the other hand, document editing is somewhere on the border of product and custom. You might integrate an existing vendor for document editing needs, or you might build it yourself, but in either case document editing is less commoditized than document reading.

The y-axis represents visibility to the user. In this map, reading documents is something that is extremely visible to the user.
On the other hand, users depend on something indexing new documents for search, but your users will generally have no visibility into the indexing process, or even that you have a search index to begin with. Although maps can get quite complex, those three concepts are generally sufficient to allow you to decode an arbitrarily complex map. In addition to mapping the current state, Wardley maps are also excellent at exploring how circumstances might change over time. To illustrate that, let’s look at a second iteration of our map, paying particular attention to the red arrows indicating capabilities that we expect to change in the future. In particular, the map now indicates that the current document creation experience will be superseded by an AI-enhanced editing process. Critically, the map also predicts that the AI-enhanced process will be more commoditized than the current authoring experience, perhaps because the AI enhancement will be driven by commoditized foundation models from providers like Anthropic and OpenAI. Building on that, the only place left in the map for meaningful differentiation is in search indexing. Either the knowledge base company needs to accept the implication that they will increasingly be a search company, or they need to expand the user needs they service to find a new avenue for differentiation. Some maps will show the evolution of a given capability using a “pipeline”, a box that describes a series of expected improvements in a capability over time. Now, instead of simply indicating that the authoring experience may be replaced by an AI-enhanced capability over time, we’re able to express a sequence of steps: from the starting place of a typical editing experience, the next expected step is AI-assisted creation, and then finally we expect AI-led creation, where the author only provides high-level direction to a machine learning-powered agent.
For completeness, it’s also worth mentioning that some Wardley maps will have an overlay, a box that groups capabilities or requirements together by some common denominator. This happens most frequently to indicate the team responsible for various capabilities, but it’s a technique that can be used to emphasize any interesting element of a map’s topology. At this point, you have the foundation to read a Wardley map, or to get started creating your own. Maps you encounter in the wild might appear significantly more complex than these initial examples, but they’ll be composed of the same fundamental elements.

More Wardley Mapping resources

The Value Flywheel Effect by David Anderson
Wardley Maps by Simon Wardley on Medium, also available as a PDF
Learn Wardley Mapping by Ben Mosior
wardleymaps.com’s resources and @WardleyMaps on YouTube

Tools for Wardley Mapping

Systems modeling has a serious tooling problem, which often prevents would-be adopters from developing their systems modeling practice. Fortunately, Wardley Mapping doesn’t suffer from that problem: you can simply print out a Wardley map and draw on it by hand. You can also use OmniGraffle, Miro, Figma, or whatever diagramming tool you’re already familiar with. There are more focused tools as well, with Ben Mosior pulling together an excellent writeup on Wardley Mapping Tools as of 2024. Of those, I’d strongly encourage starting with Mapkeep as a simple, free, and intuitive tool for your initial mapping needs. After you’ve gotten some practice, you may well want to move back into your most familiar diagramming tool to make it easier to collaborate with colleagues, but initially prioritize the simplest tool you can to avoid losing learning momentum to configuration, setup, and so on.

When are Wardley maps useful?

All successful strategy begins with understanding the constraints and circumstances that the strategy needs to work within.
Wardley mapping labels that understanding as situational awareness, and creating situational awareness is the foremost goal of mapping. Situational awareness is always useful, but it’s particularly essential in highly dynamic environments where the industry around you, the competitors you’re selling against, or the capabilities powering your product are shifting rapidly. In the past several decades, there have been a number of these dynamic contexts, including the rise of web applications, the proliferation of mobile devices, and the expansion of machine learning techniques. When you’re in those environments, it’s obvious that the world is changing rapidly. What’s sometimes easy to miss is that any strategy that needs to last longer than a year or two is built on an evolving foundation, even if things seem very stable at the time. For example, in the early 2010s, startups like Facebook, Uber, and Digg were all operating in physical datacenters with their own hardware. Over a five-year period, having a presence in a physical datacenter went from the default approach for startups to a relatively unconventional solution, as cloud-based infrastructure rapidly expanded. Any strategy written in 2010 that imagined the world of hosting was static was destined to be invalidated. No tool is universally effective, and that’s true here as well. While Wardley maps are extremely helpful for understanding broad change, my experience is that they’re less helpful in the details. If you’re looking to optimize your onboarding funnel, then techniques like systems modeling or strategy testing are likely going to serve you better.

How to Wardley Map

Learning Wardley mapping is a mix of reading others’ maps and writing your own. A variety of maps for reading are collected in the following breadcrumbs section, and I’d recommend skimming all of them.
In this section are the concrete steps I’d encourage you to follow for creating the first map of your own:

Commit to starting small and iterating. Simple maps are the foundation of complex maps. Even the smallest Wardley map will have enough detail to reveal something interesting about the environment you’re operating in. Conversely, by starting complex, it’s easy to get caught up in all of your early map’s imperfections. At worst, this will cause you to lose momentum in creating the map. At best, it will accidentally steer your attention rather than facilitating discovery of which details are important to focus on.

List users, needs, and capabilities. Identify the first one or two users for your product. Going back to the knowledge management example from the primer, your two initial users might be an author and a reader. From there, identify those users’ needs, such as authoring content, finding content, and providing feedback on which content is helpful. Finally, write down the underlying technical capabilities necessary to support those needs, which might range from indexing content in a search index to a customer support process for dealing with frustrated users. Remember to start small! On your first pass, it’s fine to focus on a single user. As you iterate on your map, bring in more users, needs, and capabilities until the map conveys something useful. Tooling for this can be a piece of paper or wherever you keep notes.

Establish value chains. Take your list and then connect each of the components into chains. For example, the reader in the above knowledge base example would be connected to needing to discover content. Discovering content would be linked to indexing in the search index. That sequence from reader to discovering content to search index represents one value chain. Convergence across chains is a good thing. As your chains get more comprehensive, it’s expected that a given capability will be referenced by multiple different needs.
Similarly, it’s expected that multiple users might have a shared need.

Plot value chains on a Wardley map. You can do this using any of the tools discussed in the Tools for Wardley Mapping section, including a piece of paper. Because you already have the value chains created, what you’re focused on in this step is placing each component relative to its visibility to users (higher up is more visible to the user, lower down is less visible) and how mature the solutions are (leftward represents more custom solutions, rightward represents more commoditized solutions).

Study the current state of the map. With the value chains plotted on your map, it will begin to reveal where your organization’s attention should be focused, and what complexity you can delegate to vendors. Jot down any realizations you have from this topology.

Predict the evolution of the map, and create a second version of your map that includes these changes. (Keep the previous version so you can better see the evolution of your thinking!) It can be helpful to create multiple maps that contemplate different scenarios. Thinking about the running knowledge base example, you might contemplate a future where AI-powered tools become the dominant mechanism for authors creating content. Then you could explore another future where such tools are regulated out of most tools, and imagine how that would shape your approach differently. Picking the timeframe for these changes will vary with the environment you’re mapping. Always prefer a timeframe that makes it easy to believe changes will happen; maybe that’s five years, or maybe it’s 12 months. If you’re caught up wondering whether change might take longer than a certain timeframe, simply extend your timeframe to sidestep that issue.

Study the future state of the map, now that you’ve predicted the future. Once again, write down any unexpected implications of this evolution, and how you may need to adjust your approach as a result.

Share with others for feedback.
It’s impossible for anyone to know everything, which is why the best maps tend to be a communal creation. That’s not to suggest that you should perform every step in a broad community, or that your map should be the consensus of a working group. Instead, you should test your map against others, see what they find insightful and what they find artificial in the map, and fold that into your map’s topology.

Document what you’ve learned, as discussed below in the section on documentation. You should also connect that Wardley map writeup with your overall strategy document, typically in the Refine or Explore sections.

One downside of presenting steps to do something is that the sequence can become a fixed recipe. These are the steps that I’ve found most useful, and I’d encourage you to try them if mapping is a new tool in your toolkit, but this is far from the canonical way. Start here, then experiment with other approaches until you find the best approach for you and the strategies that you’re working on.

Breadcrumbs for Wardley map examples

I’ll update these examples as I continue writing more strategies for this book. Until then, I admit that some of these examples are “what I have lying around” more so than the “ideal forms of Wardley maps.” With the foundation in place, the best way to build on Wardley mapping is writing your own maps.
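To make the listing and value-chain steps above concrete, here is a small sketch that captures the primer’s knowledge-base example as plain data before any drawing happens. The components, coordinates, and links are my illustrative assumptions rather than the chapter’s actual maps:

```python
# Each component is (kind, (evolution, visibility)): evolution runs
# 0.0 (genesis) -> 1.0 (commodity); visibility runs 0.0 (invisible to
# users) -> 1.0 (directly user-facing).
components = {
    "reader":           ("user", (0.9, 1.0)),
    "author":           ("user", (0.7, 1.0)),
    "discover content": ("need", (0.8, 0.9)),
    "author content":   ("need", (0.6, 0.9)),
    "document reading": ("capability", (0.95, 0.8)),
    "document editing": ("capability", (0.55, 0.7)),
    "search indexing":  ("capability", (0.4, 0.2)),
}

# Value chains as user -> need and need -> capability edges.
links = [
    ("reader", "discover content"),
    ("author", "author content"),
    ("discover content", "document reading"),
    ("discover content", "search indexing"),
    ("author content", "document editing"),
]

def validate(components, links):
    """Enforce the layering rule from the primer: users connect only
    to needs, and needs connect only to capabilities."""
    allowed = {("user", "need"), ("need", "capability")}
    for src, dst in links:
        pair = (components[src][0], components[dst][0])
        if pair not in allowed:
            raise ValueError(f"{src} -> {dst} breaks the layering rule")

validate(components, links)  # the sketch above is well-formed
```

Keeping the map as data like this makes it easy to iterate, adding a user or repositioning a capability and re-checking the structural rules before redrawing.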
The second best way is to read existing maps that others have made, a number of which exist within this book:

LLM evolution studies the evolution of the Large Language Model ecosystem, and how that will impact product engineering organizations attempting to validate and deploy new paradigms like agentic workflows and retrieval-augmented generation.

Gitlab strategy shows a broad Wardley map, looking at the developer tooling industry’s evolution over time, and how Gitlab’s approach implies they believe commoditization will drive organizations to prefer bundled solutions over integrating best-in-breed offerings.

Evolution of developer experience tooling space explores how Wardley mapping has helped me refine my understanding of how the developer experience ecosystem will evolve over time.

In addition to the maps within this book, I also label maps that I create on my blog using the wardley category.

How to document a Wardley map

As explored in how to create readable strategy documents, the default temptation is to structure documents around the creation process. However, it’s essentially always better to write in two steps: develop a writing-optimized version that’s focused on facilitating thinking, and then rework it into a reading-optimized version that supports both readers who are, and are not, interested in the details. The writing-optimized version is what we discussed in “How to Wardley Map” above. For a reading-optimized version, I recommend:

How things work today shares a map of the current environment, explains any interesting rationales or controversies behind placements on the map, and highlights the most interesting parts of the map.

Transition to future state starts with a second map, this one showing the transition from the current state to a projected future state. It’s very reasonable to have multiple distinct maps, each of which considers one potential evolution, or one step of a longer evolution.
Users and value chains are the first place you start creating a Wardley map, but generally the least interesting part of explaining a map’s implications. This isn’t because the value chains are unimportant; rather, it’s because the map itself tends to implicitly explain the value chain well enough that you can move directly to focusing on the map’s most interesting implications. In a sufficiently complex map, it’s very reasonable to split this into two sections, but generally I find it eliminates redundancy to cover users and value chains in one joint section rather than separately. This is a good example of the difference between reading and writing: splitting these two topics helps clarify thinking, but muddles reading.

This ordering may seem too brief or a bit counterintuitive to you, as the person who has the full set of details, but my experience is that it will be simpler to read for most readers. That’s because most readers read until they agree with the conclusion, then stop reading, and are only interested in the details if they disagree with the conclusion. This format is also fairly different from the format I recommend for documenting systems models. That is because systems model diagrams exclude much of the relevant detail, showing the relationship between stocks but not the magnitude of the flows. You can only fully understand a systems model by seeing both the diagram and a chart showing the model’s output. Wardley maps, on the other hand, tend to be more self-explanatory, and can often stand on their own with relatively little written description.

What about doctrines and gameplay?

This book’s components of strategy are most heavily influenced by Richard Rumelt’s approach. Simon Wardley’s approach to strategy, built around Wardley Mapping, could be viewed as a competing lens. For each problem that Rumelt’s system solves, there is a Wardley solution as well, and it’s worth mentioning some of the components I’ve not included, and why I didn’t.
The two most important components I’ve not discussed thus far are Wardley’s ideas of doctrine and gameplay. Wardley’s doctrines are universally applicable practices like knowing your users, biasing towards data, and designing for constant evolution. Gameplay is similar to doctrine, but is context-dependent rather than universal. Some examples of gameplay are the talent raid (hiring from a knowledgeable competitor), bundling (selling products together rather than separately), and exploiting network effects. I decided not to spend much time on doctrine and gameplay because I find them somewhat specialized to the needs of business strategy, and consequently a bit messy to apply to the sorts of problems that this book is most interested in solving: the problems of engineering strategy. To be explicit, I don’t personally view Rumelt’s and Wardley’s approaches as competing efforts. What’s most valuable is to have a broad toolkit, and to pull in the pieces of that toolkit that feel most applicable to the problems at hand. I find Wardley maps exceptionally valuable for enhancing exploration, diagnosis, and refinement in some problems. In other problems, typically shorter in duration or more internally oriented, I find the Rumelt playbook more applicable. In all problems, I find the combination more valuable than anchoring in one camp’s perspective.

Summary

No refinement technique will let you reliably predict the future, but Wardley mapping is very effective at helping you plot out the various potential futures your strategy might need to operate in. With those futures in mind, you can tune your strategy to excel in the most likely of them, and to weather the less desirable. It took me years to dive into Wardley mapping. Once I finally did, it was simpler than I’d feared, and now I find myself creating Wardley maps somewhat frequently.
When you’re working on your next strategy that’s impacted by the ecosystem’s evolution around it, try your hand at mapping, and soon you’ll start to build your own collection of maps.
