The significance of Bluesky and decentralized social media

I'm delighted to share that we have introduced support for Bluesky in Buffer. This is an important moment for us as a company, and there are a number of reasons that adding Bluesky is personally meaningful for me. With Bluesky, we now support the three major social networks pushing forward a new era of decentralized social media: Mastodon, Threads and Bluesky. We have been intentional about moving fast to add these channels to our tool.

Supporting independence and ownership in social media

Buffer has now existed for almost 14 years, and throughout that time I've seen a lot change in social media, and in our space of tools to support people and businesses with social. We're an outlier as a product and company that has existed for that kind of timeframe with leadership and values left intact. We've had to work hard at times to maintain control over our destiny. In 2018, we made the decision to spend $3.3M to buy out the...
7 months ago 62 votes


More from Joel Gascoigne's blog

Fourteen years

Fourteen years It's a little hard to believe. Fourteen years ago today, I launched Buffer from my apartment in Birmingham, in the UK. The launch came seven weeks after I started working on the project on the side as a contract web developer. For a few weeks, I called it bfffr until I realized that no one knew how to pronounce it. Sometimes it's better to be clear than clever. So it became bufferapp.com. Even then, people thought we were called Buffer App for a while! Eventually we were able to acquire buffer.com and clear up the confusion altogether. When I started Buffer, I had no idea how far it could come. This was a case where the dream formed over time, rather than being fully formed on day one. There's a dogma that you need to have complete clarity of the vision and outcome before you even start (and go all-in and full-time, which I also disagree with). I think there's a beauty in starting with a small dream. It just so happens that every big thing started small. Early on, my dream was just to create a tool that made it easy to Tweet consistently, build it for myself and others, and make enough money to cover my living expenses and go full-time on it. The number for me to be able to work on it full-time was £1,200 per month, and that felt almost out of reach in the beginning. Today, Buffer generates $1.65 million per month, serves 59,000 customers, and enables fulfilling work for 72 people. I've had many dreams with Buffer, each one progressively becoming more ambitious. To me it's always felt like I can just about see the horizon, and once I get there, I see a new horizon to strive for. I've tried to embrace that Buffer can continue to evolve as I, the team, and customers do. A lot happens as a founder and as a business in fourteen years. I started the company when I was 23. I was young, ambitious, and had so much to learn. My naivety served me well in so many ways. At the same time, I like to think that the years have given me a more intentional, decisive approach to business. Broadly, it feels like we've had three eras to the company so far. In our first era, we found traction, we built swiftly and with fervor, we grew a special community of users and customers, and we did it all in our own way. We were a remote company before almost anyone else, and were part of the earliest days of building in public. There's so much we did right in that first era, though we also had wind in our sails which masked our errors and immaturity. The second era of Buffer was marked by growing pains, a struggle to understand who we really are, missteps and through that, transformation, clarity, and new beginnings. These years were very much the messy middle of Buffer. They were also where I experienced my lowest lows in the journey so far. As hard as this experience was, I am grateful as it was the path I needed to walk in order to grow as a leader, cement our independence and long-term ambitions, rediscover Buffer's purpose, and start to operate with greater conviction. We're a couple of chapters into our current era. With a renewed focus on entrepreneurs, creators, and small businesses, we started making bolder moves to serve them and create a more unique offering in what had become a very crowded and commoditized space. Through a clearer strategy, strengthening our culture, and improving how we work as a team, we emerged from a multi-year decline. Last year, we turned the ship around and had a flat year. This year, we're on track for over 10% growth and a profitable year. 
It doesn't feel like a coincidence to me that this final era has also been the phase where I've experienced one of the most joyful and demanding experiences as a human: becoming a parent. I have a wife and I have two young boys, and they mean the world to me. I also started prioritizing my community of family and friends, as well as cultivating hobbies again. I spend time on my health and fitness, try to keep up my skiing, and recently picked up playing the piano again. Time has become a lot more precious, and with that, clarity and conviction are more vital than ever. As much as it sometimes feels hard to fit everything in, to me, it's the whole package that makes life fulfilling. When I really stop to take a step back, I feel very lucky that I've been able to do this for fourteen years. It's a long time in any sense. In tech and social media it feels like almost a lifetime already. And yet, just like those early days when I could barely imagine reaching £1,200 per month, I'm still looking toward that next horizon. I see a clear opportunity to help entrepreneurs, creators and small businesses get off the ground, grow, and thrive long-term. Photo by Simon Berger on Unsplash.

3 months ago 53 votes
Build Week at Buffer: What it is and how we’re approaching it

Build Week at Buffer: What it is and how we’re approaching it Note: this was originally posted on the Buffer blog. We’ve dedicated the week of August 22nd to a brand new internal initiative called Build Week. We’ll all be putting aside our regular work for a single week to come together in small groups and work on ideas that can benefit customers or us as a company, ideally with something of value shipped or in place by the end. The inspiration for Build Week Before building Buffer, I had several formative experiences attending “build a startup in a weekend”-type events. Two I attended were run by Launch48, and another was Startup Weekend. Anyone could sign up to attend no matter what skill set or experience level they would bring. As long as you were willing to roll up your sleeves, build something, and contribute in any way, you’d be very welcome. The focus was on building something rapidly from end to end, within the space of a weekend. Teams would be capped to a small number, around three to five people per team, so the groups could move quickly with decision making. Once the teams were formed, you’d get to work and start doing research, building, and marketing (often all in parallel) to move as fast as possible in building a minimum viable product and achieving a level of validation. At the end of the weekend, teams would present what they achieved, what they validated, and what they learned. Through these events, I met people, formed strong bonds, and stayed in contact for years with them afterward. Some teams even became startups. It felt like highly accelerated learning, and it was intense but fun, very energizing and inspiring. I’ve been thinking about how this could translate to Buffer and why it would be so powerful for us in our current season, which is where Build Week comes in. What is Build Week? Build Week is a week at Buffer where we’ll form teams, work with people we don’t typically work with, and work together on an idea we feel called towards. The highest level goals of Build Week are to inject into the company and team a spirit of shipping, creativity, and innovation, making progress and decisions rapidly, comfort with uncertainty, and ultimately going from idea to usable value out in the world in the space of a week. When it comes to the type of projects we’ll work on and the skill sets required to accomplish them, the goal is for those to be far-reaching. While it may seem like Build Week would be more suited to engineers specifically, our goal is to achieve the outcome that everyone realizes they are and can be a Builder. Ultimately, being a Builder in Buffer Build Week will mean that you are part of a team that successfully makes a change that brings value, and it happens in the short period of a week. Everyone on the team has something to bring to this goal, and I'm excited by the various projects that will be worked on. How we’re approaching Build Week With our high-level vision and ideas for Build Week, several months ago we got to work to bring this concept to life and make it happen. The first thing we did was form a team to plan and design Build Week itself. Staying true to our vision for Build Week itself, where we want to have small teams of people who don’t normally work together, this is also how we approached forming the Build Week Planning team. With this team in place, we started meeting weekly. Overall, it has been a small time commitment of 45 minutes per week to plan and design Build Week. 
As we got closer to the actual week, we started meeting for longer and having real working sessions. Our final design for Build Week consisted of three key stages: Idea Gathering, Team Formation and Build Week. For the Idea Gathering stage, we created a Trello board where anyone in the team could contribute an idea. We used voting and commenting on the cards, which helped narrow the ideas to those that would be worked on during Build Week. We gave people a few days to submit ideas and received 78 total contributions. This was a big win and a clear indication of a big appetite for Build Week within the company. The Team Formation stage was a trickier problem to solve and determine the process for. Initially, we had hoped that this could be entirely organic, with people gravitating towards an idea and joining up with people who are also excited to work on that idea. Ultimately, we realized that if we approached it this way, we would likely struggle with our goal of having people work with folks they don’t normally work with, and we wouldn’t have enough control over other aspects, such as the time zones within each team. All of this could jeopardize the success of Build Week itself. So we arrived at a hybrid, where we created a Google Form for people to submit their top 3 choices of ideas they’d like to work on. With that information, we determined the teams and made every effort to put people in a team they had put down as a choice. And the final stage is, of course, Build Week itself! The teams have now been formed, and we created a Slack channel for each team to start organizing themselves. We are providing some very lightweight guidance, and we will have a few required deliverables, but other than that, we are leaving it to each team to determine the best way to work together to create value during the week. If you're a Buffer customer, one small note that as we embrace this company-wide event and time together, we will be shifting our focus slightly away from the support inbox. We will still be responding to your questions and problems with Buffer; however, we may be slightly slower than usual. We also won't be publishing any new content on the blog. We’re confident that this time for the team to bond and build various projects of value will ultimately benefit all Buffer customers. Why right now is the time for Build Week at Buffer 2022 has been a different year for Buffer. We’re in a position of flatter to declining revenue, and we’ve been working hard to find our path back to healthy, sustainable growth. One key element of this effort has been actively embracing being a smaller company. We’re still a small company, and we serve small businesses. Unless we lean into this, we will lose many of our advantages. We want to drive more connection across the team in a time where we’ve felt it lacking for the past couple of years. While we’ve been remote for most of our 11+ years of existence, we’ve always found a ton of value from company retreats where we all meet in person, and we’ve suffered during the pandemic where we’ve not been able to have these events. Build Week is an opportunity for us to do that with a whole new concept and event rather than trying to do it with something like a virtual retreat which would likely never be able to live up to our previous retreat experiences. There’s a big opportunity for exchanging context and ideas of current Buffer challenges within teams where the teams are cross-functional and with people who don’t normally work together. 
This could help us for months afterward. Build Week can also be a time where strong bonds, both in work and personally, are formed. My dream would be that after Build Week, people within their teams hit each other up in Slack and jump on a spontaneous catch-up call once in a while because they’ve become close during the week. We’ve had engineering hack weeks for a long time now. Those have been awesome in their way, but they have been very contained to engineering. And while those events created a lot of value, they often lacked perspectives that would have enhanced the work, such as customer advocacy, design, culture, or operational perspectives. As a company, we want to challenge some of the processes we have built up over the past few years. Build Week is like a blank canvas – we clear out a whole week and then diligently decide what we need in terms of structure and process to make this concept thrive and no more. This can act as inspiration for us going forward, where we can use the week as an example of rethinking process and questioning the ways we do things. The opportunity that comes with Build Week If we are successful with Build Week, I am confident that we will surprise ourselves with just how much value is created by the whole company in that one week alone. In embracing being a small company, we’re currently striving to challenge ourselves by moving at a faster pace without over-working. I think this is possible, and the completely different nature of how we work together in Build Week could give us ideas for what we can adjust to work more effectively and productively together in our regular flow of work. The opportunity for value creation within Build Week goes far beyond product features or improvements. Build Week will be a time for us to build anything that serves either customers or the team in pursuit of our vision and mission, or strengthens and upholds our values. We can stretch ourselves in the possibilities – there could be a marketing campaign, a data report, improving an existing process in the company, rethinking our tools, creating a new element of transparency, bringing our customers together, etc. Wish us luck! I believe Build Week can be one of the most fun, high-energy weeks we’ve had in years. I expect we can come out of the week on a high that can fuel us with motivation and enjoyment of our work for months. That is a worthy goal and something I think we can achieve with a little creativity and the right group of people designing and planning the event. Of course, part of the beauty of Build Week itself is that just like all the ideas and the freedom to choose how you work in a team, we don’t know everything we’ll learn as a company by doing this. It could be chaotic, there could be challenges, and there will undoubtedly be many insights, but we will be better off for having gone through the process. Please wish us all luck as we head into next week. There’s a lot of excitement in the company to create value. We hope to have new features to share with you in the coming weeks, and we’ll be back soon with a post sharing how it went. Have you tried something like Build Week before? If so, how did it go? I’d love to hear from you on Twitter. Photo by C Dustin on Unsplash.

over a year ago 17 votes
Our vision for location-independent salaries at Buffer

Note: this was originally posted on the Buffer blog. I’m happy to share that we’ve established a long-term goal that salaries at Buffer will not be based on location. We made our first step towards this last year, when we moved from four cost-of-living based location bands for salaries to two bands. We did this by eliminating the lower two location bands. The change we made resulted in salary increases for 55 of 85 team members, with the increase being on average $10,265. When the time is right, we will be eliminating the concept of cost-of-living based location bands entirely, which will lead to a simpler approach to providing generous, fair and transparent salaries at Buffer. In this post I’m sharing my thinking behind this change and our approach to pay overall.

Location and Salaries

It’s been interesting to see the conversation about location and salaries unfold both within Buffer and beyond. We’ve heard from many teammates over the years about the pros and cons of the location factor, and of course we’ve watched with interest as this became a regular topic of conversation within the larger remote work community. I've had many healthy debates with other remote leaders, and there are arguments for eliminating a location component which I haven’t agreed with. I don’t believe pay differences across locations are unethical, and the location factor has made a lot of sense for us in the past. However, the last few years have seen a lot of change for remote teams. A change like this isn't to be made lightly, and at our scale it comes with considerations.

Our Compensation Philosophy

Compensation is always slowly evolving as companies and markets mature and change. We’ve been through several major iterations of our salary formula, and myriad small tweaks throughout the last 8 or so years since we launched the initial version. Part of the fun of having a salary formula is knowing that it’s never going to be “done.” Knowing that the iterations would continue, Caryn, our VP of Finance, and I worked together to establish our compensation philosophy and document our principles on compensation to help us determine what should always be true even as the salary formula changes over time. We arrived at four principles that guide our decisions around compensation. We strive for Buffer’s approach to salary, equity, and benefits to be: Transparent, Simple, Fair, and Generous. These are the tenets that have guided us through compensation decisions over the years. After we articulated them as our compensation principles, we were able to look at the location factor of our formula with new clarity. There are a few key considerations that were part of our discussions and my decision to put Buffer on a path towards removing our location factor from salaries, and I'll go into more detail about them next.

Transparency, Simplicity, and Trust

Our salary formula is one of the fundamental reasons that we can share our salaries transparently. Having a spreadsheet of team salaries is a huge step toward transparency, but true transparency is reached when the formula is simple, straightforward, easy to understand, and importantly, easy to use. In one of our earlier versions of the salary formula, we calculated the cost-of-living multiplier for every new location when we made an offer. That was cumbersome, and it meant that a candidate couldn’t truly know their salary range until we calculated that. This was improved greatly when we moved to the concept of “cost-of-living bands.”
After that, different cities and towns could more easily be classified into each band. This massively increased the transparency of the formula, and I think it helped create a lot more trust in this system. Anyone could relatively easily understand which band their location fit into, and with that knowledge understand the exact salary they'd receive at Buffer. This type of immediate understanding of the salary formula, and the ability to run calculations yourself, is where transparency really gains an extra level of impact and drives trust within and beyond the team. However, with our four cost-of-living bands, there were still decisions to be made around where locations fall, and this has been the topic of much healthy and productive debate over the years. The conversations around locations falling between the Average and High bands are what led us to introduce the Intermediate band. And with four choices of location, it has meant there is some disparity in salaries across the team. With the benefits that come from the powerful combination of transparency and simplicity, alongside the increased trust that is fostered with more parity across the team, I’m choosing to drive Buffer’s salary formula in the direction of eventually having no cost-of-living factor.

Freedom and Flexibility

We’ve long taken approaches to work which have been grounded in the ideal of an increased level of freedom and flexibility as a team member. When I started Buffer, I wanted greater freedom and a better quality of life than I felt would be possible by working at a company. That came in various forms, including location freedom, flexibility of working hours, and financial freedom. And as we’ve built the company, I’ve been proud that we’ve built a culture where every single team member can experience an unusual and refreshing level of freedom and flexibility. Since the earliest days, one of our most fondly held values has been to Improve Consistently, and in particular this line: “We choose to be where we are the happiest and most productive”. This is a value that has supported and encouraged teammates to travel and try living in different cities, in search of that “happiest and most productive” place. It has enabled people to find work they love and great co-workers from a hometown near family, where it would be hard to find a local company that can offer that same experience and challenge. It has also enabled people to travel in order to support their partner in an important career change involving a move, something which allows an often stressful change to happen much more smoothly, since you can keep working at Buffer from anywhere in the world. Having a culture that has supported moving freely across the globe has been a powerful level of freedom and flexibility. That freedom has been matched with a salary system which adjusts compensation to accommodate those changes in a fair and appropriate way. However, knowing that your salary will fluctuate and can decrease due to a choice to be somewhere else does limit that freedom and the ability to make a decision to move. Moving towards a salary formula with parity across all locations will enable an even greater level of freedom and flexibility. It feels clear to me that choosing to move is a personal or a family decision, and it is ideal if Buffer salaries are structured in a way that honors and supports that reality.
I’m excited that working towards removing our cost-of-living differences will help significantly reduce the friction involved in making a potentially positively life-changing decision to live in a different city or country.

Results, Independence, and Reward

At Buffer, we are not on the typical hyper-growth VC path. This comes with some constraints: we don’t have tens of millions in funding and unlimited capital to deploy in an attempt to find a rapid path to $100m and going public (thankfully, that’s not our goal). This path also means that our experiences as teammates in a variety of ways are directly tied to whether we are successfully serving existing and new customers. For example, the level of benefits, ability to travel (in normal times), and competitiveness of compensation are very much driven by our revenue growth and profitability. But this is independence too. The thing we often need to remind ourselves of is that while we may feel more constrained at times, we have full freedom over what we do with the success we achieve. Making a choice like this is one example of that. It is my intention as founder / CEO that as we succeed together as a company, we all benefit from that success and see adjustments that improve our quality of life and create wealth. We are in a position of profitability which allows us to take a significant step towards removing the cost-of-living factor from our salary framework, which I believe serves those goals. And removing it entirely will be determined by us successfully executing on our strategy and serving customers well.

Reducing Cost-of-Living Bands

The way our salary formula works is that we benchmark a teammate’s role based on market data at the 50th percentile for the software industry in San Francisco and then multiply that by the cost-of-living band. So, a Product Marketer benchmark at the 50th percentile of the San Francisco market data is $108,838. Depending on the teammate’s location this would be multiplied by a cost-of-living band (Low, Average, Intermediate or High). For example, if they lived in Boulder, Colorado, a city with Average cost-of-living, the benchmark would be multiplied by 0.85 for a salary of $92,512. To best reflect our compensation philosophy, company values, and the path we want for Buffer, we have eliminated the Low and Average cost-of-living bands. What we’ve done is brought all Low (.75 multiplier) and Average (.85 multiplier) salaries up to Intermediate (.9 multiplier), which we now call our Global band. This is what resulted in 55 teammates seeing on average an increase to their salary of $10,265. Our two bands are now Global (.9 multiplier) and High (1.0 multiplier). This change is based on my vision for Buffer and how being a part of this team affects each of us as individuals, as well as the direction I believe the world is going. I’m excited about the change first and foremost because it supports our goal of having a transparent, simple, fair, and generous approach to compensation. This is also a move that raised salaries right away for more than half of the team. This point in particular gives me a lot of joy because I want compensation to be one of the incredible parts of working at Buffer. Money isn’t everything, and we all need kind and smart colleagues, a psychologically safe environment, and to work on challenging and interesting problems in order to be fulfilled at work.
Beyond that, however, money really impacts life choices, and that’s ultimately what I want for every Bufferoo; the freedom to choose their own lifestyle and make choices for themselves and their families’ long-term health and happiness. It’s important to me that people who choose to spend their years at Buffer will have the freedom to make their own choices to have a great life. And, for our teammates who live in much lower cost-of-living areas, a Buffer salary could end up being truly life changing. I’m really happy with that outcome. The decision was also impacted by the direction that I believe the world is going (and, the direction we want to help it go). Remote is in full swing, and it’s increasingly breaking down geographical borders. I believe this is a great thing. Looking ahead 10 or even 5 years, it seems to me that we’re going to see a big rebalancing, or correction, that’s going to happen. I believe it’s important to be ahead of these types of shifts, and be proactively choosing the path that’s appropriate and energizing for us. What next? Our plan is to eventually get to one single location band, essentially eliminating the cost-of-living factor from the salary formula altogether. This will be possible once we can afford to make this change and sustain our commitment to profitability. So, this will be driven by the long-term results we create from our hard work, creativity in the market, and commitment to customers. What questions does this spark for you? Send me a tweet with your thoughts. Photo by Javier Allegue Barros on Unsplash.
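As a rough illustration of the formula described in this post (a San Francisco 50th-percentile benchmark multiplied by a cost-of-living band), here is a small Python sketch. It is not Buffer's actual implementation; the numbers are simply the ones quoted above, and the function name is made up:

# Benchmark the role at the SF 50th percentile, then apply the band multiplier.
BAND_MULTIPLIERS = {
    "Global": 0.9,  # replaced the old Low (0.75), Average (0.85), and Intermediate (0.9) bands
    "High": 1.0,
}

def buffer_salary(sf_benchmark, band):
    return round(sf_benchmark * BAND_MULTIPLIERS[band])

# The Product Marketer example from the post: a $108,838 benchmark.
# Under the old Average band (0.85) this worked out to $92,512;
# under the new Global band it becomes:
print(buffer_salary(108838, "Global"))  # 97954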

over a year ago 19 votes

More in programming

ChatGPT Would be a Decent Policy Advisor

Revealed: How the UK tech secretary uses ChatGPT for policy advice by Chris Stokel-Walker for the New Scientist

17 hours ago 3 votes
Setting policy for strategy.

This book’s introduction started by defining strategy as “making decisions.” Then we dug into exploration, diagnosis, and refinement: three chapters where you could argue that we didn’t decide anything at all. Clarifying the problem to be solved is the prerequisite of effective decision making, but eventually decisions do have to be made. Here in this chapter on policy, and the following chapter on operations, we finally start to actually make some decisions.

In this chapter, we’ll dig into:

- How we define policy, and how setting policy differs from operating policy as discussed in the next chapter
- The structured steps for setting policy
- How many policies should you set? Is it preferable to have one policy, many policies, or does it not matter much either way?
- Recurring kinds of policies that appear frequently in strategies
- Why it’s valuable to be intentional about your strategy’s altitude, and how engineers and executives generally maintain different altitudes in their strategies
- Criteria to use for evaluating whether your policies are likely to be impactful
- How to develop novel policies, and why it’s rare
- Why having multiple bundles of alternative policies is generally a phase in strategy development that indicates a gap in your diagnosis
- How policies that ignore constraints sound inspirational, but accomplish little
- Dealing with ambiguity and uncertainty created by missing strategies from cross-functional stakeholders

By the end, you’ll be ready to evaluate why an existing strategy’s policies are struggling to make an impact, and to start iterating on policies for a strategy of your own. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

What is policy?

Policy is interpreting your diagnosis into a concrete plan. That plan will be a collection of decisions, tradeoffs, and approaches. They’ll range from coding practices, to hiring mandates, to architectural decisions, to guidance about how choices are made within your organization. An effective policy solves the entirety of the strategy’s diagnosis, although the diagnosis itself is encouraged to specify which aspects can be ignored. For example, the strategy for working with private equity ownership acknowledges in its diagnosis that they don’t have clear guidance on what kind of reduction to expect: “Based on general practice, it seems likely that our new Private Equity ownership will expect us to reduce R&D headcount costs through a reduction. However, we don’t have any concrete details to make a structured decision on this, and our approach would vary significantly depending on the size of the reduction.” Faced with that uncertainty, the policy simply acknowledges the ambiguity and commits to reconsider when more information becomes available: “We believe our new ownership will provide a specific target for Research and Development (R&D) operating expenses during the upcoming financial year planning. We will revise these policies again once we have explicit targets, and will delay planning around reductions until we have those numbers to avoid running two overlapping processes.”

There are two frequent points of confusion when creating policies that are worth addressing directly: Policy is a subset of strategy, rather than the entirety of strategy, because policy is only meaningful in the context of the strategy’s diagnosis.
For example, the “N-1 backfill policy” makes sense in the context of new, private equity ownership. The policy wouldn’t work well in a rapidly expanding organization. Any strategy without a policy is useless, but you’ll also find policies without context aren’t worth much either. This is particularly unfortunate, because so often strategies are communicated without those critical sections.

Policy describes how tradeoffs should be made, but it doesn’t verify how the tradeoffs are actually being made in practice. The next chapter on operations covers how to inspect an organization’s behavior to ensure policies are followed. When reworking a strategy to be more readable, it often makes sense to merge policy and operation sections together. However, when drafting strategy it’s valuable to keep them separate. Yes, you might use a weekly meeting to review whether the policy is being followed, but whether it’s an effective policy is independent of having such a meeting, and what operational mechanisms you use will vary depending on the number of policies you intend to implement. With this definition in mind, now we can move on to the more interesting discussion of how to set policy.

How to set policy

Every part of writing a strategy feels hard when you’re doing it, but I personally find that writing policy either feels uncomfortably easy or painfully challenging. It’s never a happy medium. Fortunately, the exploration and diagnosis usually come together to make writing your policy simple, although sometimes that simple conclusion may be a difficult one to swallow. The steps I follow to write a strategy’s policy are:

1. Review diagnosis to ensure it captures the most important themes. It doesn’t need to be perfect, but it shouldn’t have omissions so obvious that you can immediately identify them.
2. Select policies that address the diagnosis. Explicitly match each policy to one or more diagnoses that it addresses. Continue adding policies until every diagnosis is covered (a toy sketch of this coverage check follows after this list). This is a broad instruction, but it’s simpler than it sounds because you’ll typically select from policies identified during your exploration phase. However, there certainly is space to tweak those policies, and to reapply familiar policies to new circumstances. If you do find yourself developing a novel policy, there’s a later section in this chapter, Developing novel policies, that addresses that topic in more detail.
3. Consolidate policies in cases where they overlap or adjoin. For example, two policies about specific teams might be generalized into a policy about all teams in the engineering organization.
4. Backtest policy against recent decisions you’ve made. This is particularly effective if you maintain a decision log in your organization.
5. Mine for conflict once again, much as you did in developing your diagnosis. Emphasize feedback from teams and individuals with a different perspective than your own, but don’t wholly eliminate those that you agree with. Just as it’s easy to crowd out opposing views in diagnosis if you don’t solicit their input, it’s possible to accidentally crowd out your own perspective if you anchor too much on others’ perspectives.
6. Consider refinement if you finish writing, and you just aren’t sure your approach works – that’s fine! Return to the refinement phase by deploying one of the refinement techniques to increase your conviction. Remember that we talk about strategy like it’s done in one pass, but almost all real strategy takes many refinement passes.
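The coverage check in step 2 is easy to make concrete. Here is a toy Python sketch (not from the book; every policy and diagnosis name below is hypothetical) of tracking which diagnosis themes are addressed by at least one policy before moving on:

# Toy illustration: map each policy to the diagnosis themes it addresses,
# then flag any theme that no policy covers yet.
policies = {
    "Move to an N-1 backfill policy for departures": ["reduce R&D operating expenses"],
    "Write all new code in the monolith": ["inconsistent service decisions", "platform team stretched thin"],
}
diagnosis_themes = [
    "reduce R&D operating expenses",
    "inconsistent service decisions",
    "platform team stretched thin",
    "slow service provisioning",
]

covered = {theme for themes in policies.values() for theme in themes}
uncovered = [theme for theme in diagnosis_themes if theme not in covered]
if uncovered:
    print("Diagnosis themes without a policy:", uncovered)  # keep selecting policies
else:
    print("Every diagnosis theme is addressed by at least one policy.")

The point is not the code but the discipline it illustrates: every diagnosis theme should be traceable to at least one policy, and anything left over sends you back to step 2.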
The steps of writing policy are relatively pedestrian, largely because you’ve done so much of the work already in the exploration, diagnosis, and refinement steps. If you skip those phases, you’d likely follow the above steps for writing policy, but the expected quality of the policy itself would be far lower. How many policies? Addressing the entirety of the diagnosis is often complex, which is why most strategies feature a set of policies rather than just one. The strategy for decomposing a monolithic application is not one policy deciding not to decompose, but a series of four policies: Business units should always operate in their own code repository and monolith. New integrations across business unit monoliths should be done using gRPC. Except for new business unit monoliths, we don’t allow new services. Merge existing services into business-unit monoliths where you can. Four isn’t universally the right number either. It’s simply the number that was required to solve that strategy’s diagnosis. With an excellent diagnosis, your policies will often feel inevitable, and perhaps even boring. That’s great: what makes a policy good is that it’s effective, not that it’s novel or inspiring. Kinds of policies While there are so many policies you can write, I’ve found they generally fall into one of four major categories: approvals, allocations, direction, and guidance. This section introduces those categories. Approvals define the process for making a recurring decision. This might require invoking an architecture advice process, or it might require involving an authority figure like an executive. In the Index post-acquisition integration strategy, there were a number of complex decisions to be made, and the approval mechanism was: Escalations come to paired leads: given our limited shared context across teams, all escalations must come to both Stripe’s Head of Traffic Engineering and Index’s Head of Engineering. This allowed the acquired and acquiring teams to start building trust between each other by ensuring both were consulted before any decision was finalized. On the other hand, the user data access strategy’s approval strategy was more focused on managing corporate risk: Exceptions must be granted in writing by CISO. While our overarching Engineering Strategy states that we follow an advisory architecture process as described in Facilitating Software Architecture, the customer data access policy is an exception and must be explicitly approved, with documentation, by the CISO. Start that process in the #ciso channel. These two different approval processes had different goals, so they made tradeoffs differently. There are so many ways to tweak approval, allowing for many different tradeoffs between safety, productivity, and trust. Allocations describe how resources are split across multiple potential investments. Allocations are the most concrete statement of organizational priority, and also articulate the organization’s belief about how productivity happens in teams. Some companies believe you go fast by swarming more people onto critical problems. Other companies believe you go fast by forcing teams to solve problems without additional headcount. Both can work, and teach you something important about the company’s beliefs. The strategy on Uber’s service migration has two concrete examples of allocation policies. 
The first describes the Infrastructure engineering team’s allocation between manual provision tasks and investing into creating a self-service provisioning platform: Constrain manual provisioning allocation to maximize investment in self-service provisioning. The service provisioning team will maintain a fixed allocation of one full time engineer on manual service provisioning tasks. We will move the remaining engineers to work on automation to speed up future service provisioning. This will degrade manual provisioning in the short term, but the alternative is permanently degrading provisioning by the influx of new service requests from newly hired product engineers. The second allocation policy is implicitly noted in this strategy’s diagnosis, where it describes the allocation policy in the Engineering organization’s higher altitude strategy: Within infrastructure engineering, there is a team of four engineers responsible for service provisioning today. While our organization is growing at a similar rate as product engineering, none of that additional headcount is being allocated directly to the team working on service provisioning. We do not anticipate this changing. Allocation policies often create a surprising amount of clarity for the team, and I include them in almost every policy I write either explicitly, or implicitly in a higher altitude strategy. Direction provides explicit instruction on how a decision must be made. This is the right tool when you know where you want to go, and exactly the way that you want to get there. Direction is appropriate for problems you understand clearly, and you value consistency more than empowering individual judgment. Direction works well when you need an unambiguous policy that doesn’t leave room for interpretation. For example, Calm’s policy for working in the monolith: We write all code in the monolith. It has been ambiguous if new code (especially new application code) should be written in our JavaScript monolith, or if all new code must be written in a new service outside of the monolith. This is no longer ambiguous: all new code must be written in the monolith. In the rare case that there is a functional requirement that makes writing in the monolith implausible, then you should seek an exception as described below. In that case, the team couldn’t agree on what should go into the monolith. Individuals would often make incompatible decisions, so creating consistency required removing personal judgment from the equation. Sometimes judgment is the issue, and sometimes consistency is difficult due to misaligned incentives. A good example of this comes in strategy on working with new Private Equity ownership: We will move to an “N-1” backfill policy, where departures are backfilled with a less senior level. We will also institute a strict maximum of one Principal Engineer per business unit. It’s likely that hiring managers would simply ignore this backfill policy if it was stated more softly, although sometimes less forceful policies are useful. Guidance provides a recommendation about how a decision should be made. Guidance is useful when there’s enough nuance, ambiguity, or complexity that you can explain the desired destination, but you can’t mandate the path to reaching it. 
One example of guidance comes from the Index acquisition integration strategy: Minimize changes to tokenization environment: because point-of-sale devices directly work with customer payment details, the API that directly supports the point-of-sale device must live within our secured environment where payment details are stored. However, any other functionality must not be added to our tokenization environment. This might read like direction, but it’s clarifying the desired outcome of avoiding unnecessary complexity in the tokenization environment. However, it’s not able to articulate what complexity is necessary, so ultimately it’s guidance because it requires significant judgment to interpret. A second example of guidance comes in the strategy on decomposing a monolithic codebase: Merge existing services into business-unit monoliths where you can. We believe that each choice to move existing services back into a monolith should be made “in the details” rather than from a top-down strategy perspective. Consequently, we generally encourage teams to wind down their existing services outside of their business unit’s monolith, but defer to teams to make the right decision for their local context. This is another case of knowing the desired outcome, but encountering too much uncertainty to direct the team on how to get there. If you ask five engineers about whether it’s possible to merge a given service back into a monolithic codebase, they’ll probably disagree. That’s fine, and highlights the value of guidance: it makes it possible to make incremental progress in areas where more concrete direction would cause confusion. When you’re working on a strategy’s policy section, it’s important to consider all of these categories. Which feel most natural to use will vary depending on your team and role, but they’re all usable: If you’re a developer productivity team, you might have to lean heavily on guidance in your policies and increased support for that guidance within the details of your platform. If you’re an executive, you might lean heavily on direction. Indeed, you might lean too heavily on direction, where guidance often works better for areas where you understand the direction but not the path. If you’re a product engineering organization, you might have to narrow the scope of your direction to the engineers within that organization to deal with the realities of complex cross-organization dynamics. Finally, if you have a clear approach you want to take that doesn’t fit cleanly into any of these categories, then don’t let this framework dissuade you. Give it a try, and adapt if it doesn’t initially work out. Maintaining strategy altitude The chapter on when to write engineering strategy introduced the concept of strategy altitude, which is being deliberate about where certain kinds of policies are created within your organization. Without repeating that section in its entirety, it’s particularly relevant when you set policy to consider how your new policies eliminate flexibility within your organization. Consider these two somewhat opposing strategies: Stripe’s Sorbet strategy only worked in an organization that enforced the use of a single programming language across (essentially) all teams Uber’s service migration strategy worked well in an organization that was unwilling to enforce consistent programming language adoption across teams Stripe’s organization-altitude policy took away the freedom of individual teams to select their preferred technology stack. 
In return, they unlocked the ability to centralize investment in a powerful way. Uber went the opposite way, unlocking the ability of teams to pick their preferred technology stack, while significantly reducing their centralized teams’ leverage. Both altitudes make sense. Both have consequences. Criteria for effective policies In The Engineering Executive’s Primer’s chapter on engineering strategy, I introduced three criteria for evaluating policies. They ought to be applicable, enforced, and create leverage. Defining those a bit: Applicable: it can be used to navigate complex, real scenarios, particularly when making tradeoffs. Enforced: teams will be held accountable for following the guiding policy. Create Leverage: create compounding or multiplicative impact. The last of these three, create leverage, made sense in the context of a book about engineering executives, but probably doesn’t make as much sense here. Some policies certainly should create leverage (e.g. empower developer experience team by restricting new services), but others might not (e.g. moving to an N-1 backfill policy). Outside the executive context, what’s important isn’t necessarily creating leverage, but that a policy solves for part of the diagnosis. That leaves the other two–being applicable and enforced–both of which are necessary for a policy to actually address the diagnosis. Any policy which you can’t determine how to apply, or aren’t willing to enforce, simply won’t be useful. Let’s apply these criteria to a handful of potential policies. First let’s think about policies we might write to improve the talent density of our engineering team: “We only hire world-class engineers.” This isn’t applicable, because it’s unclear what a world-class engineer means. Because there’s no mutually agreeable definition in this policy, it’s also not consistently enforceable. “We only hire engineers that get at least one ‘strong yes’ in scorecards.” This is applicable, because there’s a clear definition. This is enforceable, depending on the willingness of the organization to reject seemingly good candidates who don’t happen to get a strong yes. Next, let’s think about a policy regarding code reuse within a codebase: “We follow a strict Don’t Repeat Yourself policy in our codebase.” There’s room for debate within a team about whether two pieces of code are truly duplicative, but this is generally applicable. Because there’s room for debate, it’s a very context specific determination to decide how to enforce a decision. “Code authors are responsible for determining if their contributions violate Don’t Repeat Yourself, and rewriting them if they do.” This is much more applicable, because now there’s only a single person’s judgment to assess the potential repetition. In some ways, this policy is also more enforceable, because there’s no longer any ambiguity around who is deciding whether a piece of code is a repetition. The challenge is that enforceability now depends on one individual, and making this policy effective will require holding individuals accountable for the quality of their judgement. An organization that’s unwilling to distinguish between good and bad judgment won’t get any value out of the policy. This is a good example of how a good policy in one organization might become a poor policy in another. 
If you ever find yourself wanting to include a policy that for some reason either can’t be applied or can’t be enforced, stop to ask yourself what you’re trying to accomplish and ponder if there’s a different policy that might be better suited to that goal.

Developing novel policies

My experience is that there are vanishingly few truly novel policies to write. There’s almost always someone else who has already done something similar to your intended approach. Calm’s engineering strategy is such a case: the details are particular to the company, but the general approach is common across the industry. The most likely place to find truly novel policies is during the adoption phase of a new widespread technology, such as the rise of ubiquitous mobile phones, cloud computing, or large language models. Even then, as explored in the strategy for adopting large-language models, the new technology can be engaged with as a generic technology: “Develop an LLM-backed process for reactivating departed and suspended drivers in mature markets. Through modeling our driver lifecycle, we determined that improving onboarding time will have little impact on the total number of active drivers. Instead, we are focusing on mechanisms to reactivate departed and suspended drivers, which is the only opportunity to meaningfully impact active drivers.” You could simply replace “LLM” with “data-driven” and it would be equally readable. In this way, policy can generally sidestep areas of uncertainty by being a bit abstract. This avoids being overly specific about topics you simply don’t know much about.

However, even if your policy isn’t novel to the industry, it might still be novel to you or your organization. The steps that I’ve found useful to debug novel policies are the same steps as running a condensed version of the strategy process, with a focus on exploration and refinement:

1. Collect a number of similar policies, with a focus on how those policies differ from the policy you are creating.
2. Create a systems model to articulate how this policy will work, and also how it will differ from the similar policies you’re considering.
3. Run a strategy testing cycle for your proto-policy to discover any unknown-unknowns about how it works in practice.

Whether you run into this scenario is largely a function of the extent of your, and your organization’s, experience. Early in my career, I found myself doing novel (for me) strategy work very frequently, and these days I rarely find myself doing novel work, instead focusing on adaptation of well-known policies to new circumstances.

Are competing policy proposals an anti-pattern?

When creating policy, you’ll often have to engage with the question of whether you should develop one preferred policy or a series of potential strategies to pick from. Developing these is a useful stage of setting policy, but rather than helping you refine your policy, I’d encourage you to think of this as exposing gaps in your diagnosis. For example, when Stripe developed the Sorbet ruby-typing tooling, there was debate between two policies:

1. Should we build a ruby-typing tool to allow a centralized team to gradually migrate the company to a typed codebase?
2. Should we migrate the codebase to a preexisting strongly typed language like Golang or Java?

These were, initially, equally valid hypotheses. It was only by clarifying our diagnosis around resourcing that it became clear that incurring the bulk of costs in a centralized team was clearly preferable to spreading the costs across many teams.
Specifically, recognizing that we wanted to prioritize short-term product engineering velocity, even if it led to a longer migration overall. If you do develop multiple policy options, I encourage you to move the alternatives into an appendix rather than including them in the core of your strategy document. This will make it easier for readers of your final version to understand how to follow your policies, and they are the most important long-term user of your written strategy. Recognizing constraints A similar problem to competing solutions is developing a policy that you cannot possibly fund. It’s easy to get enamored with policies that you can’t meaningfully enforce, but that’s bad policy, even if it would work in an alternate universe where it was possible to enforce or resource it. To consider a few examples: The strategy for controlling access to user data might have proposed requiring manual approval by a second party of every access to customer data. However, that would have gone nowhere. Our approach to Uber’s service migration might have required more staffing for the infrastructure engineering team, but we knew that wasn’t going to happen, so it was a meaningless policy proposal to make. The strategy for navigating private equity ownership might have argued that new ownership should not hold engineering accountable to a new standard on spending. But they would have just invalidated that strategy in the next financial planning period. If you find a policy that contemplates an impractical approach, it doesn’t only indicate that the policy is a poor one, it also suggests your policy is missing an important pillar. Rather than debating the policy options, the fastest path to resolution is to align on the diagnosis that would invalidate potential paths forward. In cases where aligning on the diagnosis isn’t possible, for example because you simply don’t understand the possibilities of a new technology as encountered in the strategy for adopting LLMs, then you’ve typically found a valuable opportunity to use strategy refinement to build alignment. Dealing with missing strategies At a recent company offsite, we were debating which policies we might adopt to deal with annual plans that kept getting derailed after less than a month. Someone remarked that this would be much easier if we could get the executive team to commit to a clearer, written strategy about which business units we were prioritizing. They were, of course, right. It would be much easier. Unfortunately, it goes back to the problem we discussed in the diagnosis chapter about reframing blockers into diagnosis. If a strategy from the company or a peer function is missing, the empowering thing to do is to include the absence in your diagnosis and move forward. Sometimes, even when you do this, it’s easy to fall back into the belief that you cannot set a policy because a peer function might set a conflicting policy in the future. Whether you’re an executive or an engineer, you’ll never have the details you want to make the ideal policy. Meaningful leadership requires taking meaningful risks, which is never something that gets comfortable. Summary After working through this chapter, you know how to develop policy, how to assemble policies to solve your diagnosis, and how to avoid a number of the frequent challenges that policy writers encounter. At this point, there’s only one phase of strategy left to dig into, operating the policies you’ve created.

22 hours ago 3 votes
Fast and random sampling in SQLite

I was building a small feature for the Flickr Commons Explorer today: show a random selection of photos from the entire collection. I wanted a fast and varied set of photos. This meant getting a random sample of rows from a SQLite table (because the Explorer stores all its data in SQLite). I'm happy with the code I settled on, but it took several attempts to get right.

Approach #1: ORDER BY RANDOM()

My first attempt was pretty naïve – I used an ORDER BY RANDOM() clause to sort the table, then limit the results:

SELECT * FROM photos ORDER BY random() LIMIT 10

This query works, but it was slow – about half a second to sample a table with 2 million photos (which is very small by SQLite standards). This query would run on every request for the homepage, so that latency is unacceptable. It's slow because it forces SQLite to generate a value for every row, then sort all the rows, and only then does it apply the limit. SQLite is fast, but there's only so fast you can sort millions of values. I found a suggestion from Stack Overflow user Ali to do a random sort on the id column first, pick my IDs from that, and only fetch the whole row for the photos I'm selecting:

SELECT * FROM photos WHERE id IN (
    SELECT id FROM photos ORDER BY RANDOM() LIMIT 10
)

This means SQLite only has to load the rows it's returning, not every row in the database. This query was over three times faster – about 0.15s – but that's still slower than I wanted.

Approach #2: WHERE rowid > (…)

Scrolling down the Stack Overflow page, I found an answer by Max Shenfield with a different approach:

SELECT * FROM photos WHERE rowid > (
    ABS(RANDOM()) % (SELECT max(rowid) FROM photos)
)
LIMIT 10

The rowid is a unique identifier that's used as a primary key in most SQLite tables, and it can be looked up very quickly. SQLite automatically assigns a unique rowid unless you explicitly tell it not to, or create your own integer primary key. This query works by picking a point between the biggest and smallest rowid values used in the table, then getting the rows with rowids which are higher than that point. If you want to know more, Max's answer has a more detailed explanation. This query is much faster – around 0.0008s – but I didn't go this route. The result is more like a random slice than a random sample. In my testing, it always returned contiguous rows – 101, 102, 103, … – which isn't what I want. The photos in the Commons Explorer database were inserted in upload order, so photos with adjacent row IDs were uploaded at around the same time and are probably quite similar. I'd get one photo of an old plane, then nine more photos of other planes. I want more variety!

(This behaviour isn't guaranteed – if you don't add an ORDER BY clause to a SELECT query, then the order of results is undefined. SQLite is returning rows in rowid order in my table, and a quick Google suggests that's pretty common, but that may not be true in all cases. It doesn't affect whether I want to use this approach, but I mention it here because I was confused about the ordering when I read this code.)

Approach #3: Select random rowid values outside SQLite

Max's answer was the first time I'd heard of rowid, and it gave me an idea – what if I chose random rowid values outside SQLite? This is a less “pure” approach because I'm not doing everything in the database, but I'm happy with that if it gets the result I want. Here's the procedure I came up with:

1. Create an empty list to store our sample.
2. Find the highest rowid that's currently in use:

   sqlite> SELECT MAX(rowid) FROM photos;
   1913389

3. Use a random number generator to pick a rowid between 1 and the highest rowid:

   >>> import random
   >>> random.randint(1, max_rowid)
   196476

   If we've already got this rowid, discard it and generate a new one. (The rowid is a signed, 64-bit integer, so the minimum possible value is always 1.)

4. Look for a row with that rowid:

   SELECT * FROM photos WHERE rowid = 196476

5. If such a row exists, add it to our sample. If we have enough items in our sample, we're done. Otherwise, return to step 3 and generate another rowid.

6. If such a row doesn't exist, return to step 3 and generate another rowid.

This requires a bit more code, but it returns a diverse sample of photos, which is what I really care about. (A Python sketch of this procedure appears at the end of this post.) It's a bit slower, but still plenty fast enough (about 0.001s). This approach is best for tables where the rowid values are mostly contiguous – it would be slower if there are lots of rowids between 1 and the max that don't exist. If there are large gaps in rowid values, you might try multiple missing entries before finding a valid row, slowing down the query. You might want to try something different, like tracking valid rowid values separately. This is a good fit for my use case, because photos don't get removed from Flickr Commons very often. Once a row is written, it sticks around, and over 97% of the possible rowid values do exist.

Summary

Here are the four approaches I tried:

Approach | Performance (for 2M rows) | Notes
ORDER BY RANDOM() | ~0.5s | Slowest, easiest to read
WHERE id IN (SELECT id …) | ~0.15s | Faster, still fairly easy to understand
WHERE rowid > ... | ~0.0008s | Returns clustered results
Random rowid in Python | ~0.001s | Fast and returns varied results, requires code outside SQL

I'm using the random rowid in Python in the Commons Explorer, trading code complexity for speed. I'm using this random sample to render a web page, so it's important that it returns quickly – when I was testing ORDER BY RANDOM(), I could feel myself waiting for the page to load. But I've used ORDER BY RANDOM() in the past, especially for asynchronous data pipelines where I don't care about absolute performance. It's simpler to read and easier to see what's going on. Now it's your turn – visit the Commons Explorer and see what random gems you can find. Let me know if you spot anything cool!

[If the formatting of this post looks odd in your feed reader, visit the original article]
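Here is a minimal Python sketch of the approach #3 procedure above, using the standard library's sqlite3 module. It isn't the Commons Explorer's actual code; the table name (photos) follows the post, while the database path and sample size are assumptions:

import random
import sqlite3

def random_sample(db_path, sample_size=10):
    # Pick random rowids outside SQLite and fetch only the rows that exist.
    conn = sqlite3.connect(db_path)
    max_rowid = conn.execute("SELECT MAX(rowid) FROM photos").fetchone()[0]

    sample = []
    tried = set()
    while len(sample) < sample_size:
        rowid = random.randint(1, max_rowid)
        if rowid in tried:
            continue  # already tried this rowid; generate a new one
        tried.add(rowid)
        row = conn.execute(
            "SELECT * FROM photos WHERE rowid = ?", (rowid,)
        ).fetchone()
        if row is not None:
            sample.append(row)  # the rowid exists, so keep this row

    conn.close()
    return sample

# Hypothetical usage:
# photos = random_sample("explorer.db", sample_size=10)

As the post notes, this works best when most rowids between 1 and the maximum actually exist; a table with many gaps (or fewer rows than the requested sample size) would need a different strategy.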

13 hours ago 2 votes
Choosing Languages
yesterday 3 votes
05 · Syncing Keyhive

How we sync Keyhive and Automerge

yesterday 1 vote