Fourteen years

It's a little hard to believe. Fourteen years ago today, I launched Buffer from my apartment in Birmingham, in the UK. The launch came seven weeks after I started working on the project on the side as a contract web developer.

For a few weeks, I called it bfffr until I realized that no one knew how to pronounce it. Sometimes it's better to be clear than clever. So it became bufferapp.com. Even then, people thought we were called Buffer App for a while! Eventually we were able to acquire buffer.com and clear up the confusion altogether.

When I started Buffer, I had no idea how far it could come. This was a case where the dream formed over time, rather than being fully formed on day one. There's a dogma that you need to have complete clarity of the vision and outcome before you even start (and go all-in and full-time, which I also disagree with). I think there's a beauty in starting with a small dream. It just so happens that every big thing started small. Early on, my...


More from Joel Gascoigne's blog

The significance of Bluesky and decentralized social media

I'm delighted to share that we have introduced support for Bluesky in Buffer. This is an important moment for us as a company, and there are a number of reasons that adding Bluesky is personally meaningful for me. With Bluesky, we now support the three major social networks pushing forward a new era of decentralized social media: Mastodon, Threads and Bluesky. We have been intentional about moving fast to add these channels to our tool.

Supporting independence and ownership in social media

Buffer has now existed for almost 14 years, and throughout that time I've seen a lot change in social media, and in our space of tools to support people and businesses with social. We're an outlier as a product and company that has existed for that kind of timeframe with leadership and values left intact. We've had to work hard at times to maintain control over our destiny. In 2018, we made the decision to spend $3.3M to buy out the majority of our VC investors so that we could go our own long-term path. We have continued to carry out buybacks each year since 2018, and at this stage we are majority founder and team owned.

One of the things I'm proudest of is that we still wholeheartedly serve individuals and creators, and have not gone up-market as many other long-running companies in our space have done. We've been fortunate to be able to scale to 56,000 paying customers and over $18M in annual revenue while taking our own unique path. Through intentional choices over the years, we have maintained a level of optionality over our future that most do not have.

This independence is something I don't take for granted. Keeping ownership of our company, and through that ownership having an ability to boldly go in the direction we believe is best for customers and the team, is very important to me.
This is why, as a business, we feel so philosophically aligned with rising new decentralized social media networks, such as Bluesky and Mastodon. These networks have been started with a belief that individuals should maintain ownership over their content and the connection to their audience. They have data portability baked in from the beginning. When you use these networks, you are much more likely to be able to maintain control over your content and audience than if you use social networks owned by large corporations with complex ownership structures of their own, and often with public markets to answer to.

The larger social networks provide a level of distribution that's worth tapping into, but I strongly encourage investing a portion of your energy into networks where you will be able to maintain ownership long-term. At Buffer, we will be doing everything we can to support the growth of new decentralized social media options, because we believe that individuals and small businesses should maintain control over their content and the connection to their audience.

The resurgence of the open web with social media protocols

I have been eagerly observing the emergence and growth of social media protocols, in particular ActivityPub (with Mastodon as the prominent implementation) and the AT Protocol from Bluesky. Open standards in social media could be as powerful as open standards have been for direct and private communication (email). What I find exciting about the development of these open standards, and more importantly the adoption of them and the traction of social networks which support them, is that they can bring forth a new era of open standards for the web. The Internet was built upon open standards — HTTP, URL, TCP/IP, DNS, HTML. A vast number of valuable internet businesses have been built on these "shoulders of giants."
ActivityPub and AT Protocol are built with open standards philosophies, and could similarly enable a new playground of innovation, with openness, ownership and interoperability at their core. I personally miss the earlier days of social media, when the APIs had much greater parity with what could be done natively on the platforms. When I started Buffer, the Twitter and Facebook APIs were close to feature-complete, and brought about a lot of innovation in third-party development on top of those APIs. This is how Buffer was born, along with many other products in our space. Over time, we saw an era of closed APIs with reduced transparency and ownership of content and audiences. Mastodon and Bluesky bring the opportunity for a new era of innovation in our space, which I am welcoming with open arms. More innovation in the social media management space will be better for customers, and frankly makes for more exciting work to do.

Bluesky is bringing innovation back to social media

If you haven't had a chance to take a look at some of Bluesky's recent product and platform announcements, I highly recommend that you go and read them. In particular, what they've done with introducing custom feeds as well as starter packs gets me very excited about some real innovation from a social network. When I saw starter packs introduced, it immediately felt like a no-brainer feature for a social network, and such a powerful thing, especially for an emerging social network, to offer.

Starter packs allow anyone to create a "getting started pack" for a new Bluesky user. This can include a set of recommended follows, and up to three recommended custom feeds (more on those below). This enables their passionate users to personalize an introduction for people not yet on Bluesky. It's a smart way to activate users to play a meaningful role in onboarding new people to the network and grounding them with an existing community to interact with.
Of course, Bluesky benefits by likely getting more people onto their new network than they would otherwise. Custom feeds are an incredible innovation that puts the choice of algorithm for the social network in the hands of the wide range of users and different niche communities that exist on the network. The way the Bluesky team has built custom feeds enables a ton of flexibility in the types of content algorithms can serve up, and creates a marketplace for browsing and enabling different custom feeds you can choose to view.

Something I've observed from the Bluesky team is their commitment to, and intentionality around, building tools for the governance of the network itself. It's very meaningful that on Bluesky you can choose your own algorithm: you can adopt an algorithm that someone else has written, or create your own algorithm for what content shows up in your feed. And I think it's very smart that Bluesky has done this, because it's both innovation and strong strategy: a highly defensible move which many of the other networks would not be able to offer. It would be very unlikely for the commercial social networks to move away from the company behind the network holding on to ownership of the algorithm and what is served up to you.

I had a wonderful conversation with Rose Wang from the Bluesky team a couple of weeks ago, and one of the topics we got into was the values embedded in the Bluesky team and the work they're trying to do. It was clear to me how thoughtful and intentional they are being around the governance of the network and the flexibility they're building in to allow users to really shape the community and what is important to them. Something I appreciate about Bluesky is that their goal is to create a social network not controlled by a single company, while also ensuring that it comes together as a cohesive and easy-to-use experience.
Decentralized social media can initially feel daunting, complex, and inaccessible to people, so I think intentional work going into the simplicity of the experience is paramount. With great innovation from the Bluesky team such as starter packs and custom feeds, along with their focus on simplicity, I strongly encourage you to go and take a look at this new social network. This is a platform and community that's worth taking a deeper look at, participating in, and investing time into.

Join us in participating in a new era of decentralized social media

By supporting Bluesky, along with Mastodon and Threads, we are playing our part in moving forward this promising new era of social media. Many of us on the team have been personally drawn to these networks for their special and supportive communities. We're here to see decentralized social media grow and become more meaningful for more people across the world. That's why we've put our scale, brand and resources into building awareness and providing tools to make participating on these new social networks more streamlined.

I encourage you to add Bluesky to your channels in Buffer, and start participating in the social network today. Learn more and get started by visiting our Bluesky page.

Photo by Kumiko SHIMIZU on Unsplash.

Build Week at Buffer: What it is and how we’re approaching it

Note: this was originally posted on the Buffer blog.

We’ve dedicated the week of August 22nd to a brand new internal initiative called Build Week. We’ll all be putting aside our regular work for a single week to come together in small groups and work on ideas that can benefit customers or us as a company, ideally with something of value shipped or in place by the end.

The inspiration for Build Week

Before building Buffer, I had several formative experiences attending “build a startup in a weekend”-type events. Two I attended were run by Launch48, and another was Startup Weekend. Anyone could sign up to attend, no matter what skill set or experience level they would bring. As long as you were willing to roll up your sleeves, build something, and contribute in any way, you’d be very welcome. The focus was on building something rapidly from end to end, within the space of a weekend. Teams would be capped at a small number, around three to five people per team, so the groups could move quickly with decision making. Once the teams were formed, you’d get to work and start doing research, building, and marketing (often all in parallel) to move as fast as possible in building a minimum viable product and achieving a level of validation. At the end of the weekend, teams would present what they achieved, what they validated, and what they learned.

Through these events, I met people, formed strong bonds, and stayed in contact with them for years afterward. Some teams even became startups. It felt like highly accelerated learning, and it was intense but fun, very energizing and inspiring. I’ve been thinking about how this could translate to Buffer and why it would be so powerful for us in our current season, which is where Build Week comes in.

What is Build Week?

Build Week is a week at Buffer where we’ll form teams, work with people we don’t typically work with, and work together on an idea we feel called towards.
The highest-level goals of Build Week are to inject into the company and team a spirit of shipping, creativity, and innovation; making progress and decisions rapidly; comfort with uncertainty; and ultimately going from idea to usable value out in the world in the space of a week.

When it comes to the type of projects we’ll work on and the skill sets required to accomplish them, the goal is for those to be far-reaching. While it may seem like Build Week would be more suited to engineers specifically, our goal is for everyone to realize that they are, and can be, a Builder. Ultimately, being a Builder in Buffer Build Week will mean that you are part of a team that successfully makes a change that brings value, and it happens in the short period of a week. Everyone on the team has something to bring to this goal, and I'm excited by the various projects that will be worked on.

How we’re approaching Build Week

With our high-level vision and ideas for Build Week, several months ago we got to work to bring this concept to life and make it happen. The first thing we did was form a team to plan and design Build Week itself. Staying true to our vision for Build Week, where we want small teams of people who don’t normally work together, this is also how we approached forming the Build Week Planning team. With this team in place, we started meeting weekly. Overall, it has been a small time commitment of 45 minutes per week to plan and design Build Week. As we got closer to the actual week, we started meeting for longer and having real working sessions.

Our final design for Build Week consisted of three key stages: Idea Gathering, Team Formation, and Build Week. For the Idea Gathering stage, we created a Trello board where anyone on the team could contribute an idea. We used voting and commenting on the cards, which helped narrow the ideas to those that would be worked on during Build Week.
We gave people a few days to submit ideas and received 78 total contributions. This was a big win and a clear indication of a big appetite for Build Week within the company.

The Team Formation stage was a trickier problem to design a process for. Initially, we had hoped that this could be entirely organic, with people gravitating towards an idea and joining up with people who are also excited to work on that idea. Ultimately, we realized that if we approached it this way, we would likely struggle with our goal of having people work with folks they don’t normally work with, and we wouldn’t have enough control over other aspects, such as the time zones within each team. All of this could jeopardize the success of Build Week itself. So we arrived at a hybrid, where we created a Google Form for people to submit their top 3 choices of ideas they’d like to work on. With that information, we determined the teams and made every effort to put people in a team they had put down as a choice.

And the final stage is, of course, Build Week itself! The teams have now been formed, and we created a Slack channel for each team to start organizing themselves. We are providing some very lightweight guidance, and we will have a few required deliverables, but other than that, we are leaving it to each team to determine the best way to work together to create value during the week.

If you're a Buffer customer, one small note: as we embrace this company-wide event and time together, we will be shifting our focus slightly away from the support inbox. We will still be responding to your questions and problems with Buffer; however, we may be slightly slower than usual. We also won't be publishing any new content on the blog. We’re confident that this time for the team to bond and build various projects of value will ultimately benefit all Buffer customers.

Why right now is the time for Build Week at Buffer

2022 has been a different year for Buffer.
We’re in a position of flat-to-declining revenue, and we’ve been working hard to find our path back to healthy, sustainable growth. One key element of this effort has been actively embracing being a smaller company. We’re still a small company, and we serve small businesses. Unless we lean into this, we will lose many of our advantages.

We want to drive more connection across the team at a time when we’ve felt it lacking for the past couple of years. While we’ve been remote for most of our 11+ years of existence, we’ve always found a ton of value in company retreats where we all meet in person, and we’ve suffered during the pandemic when we’ve not been able to have these events. Build Week is an opportunity for us to do that with a whole new concept and event, rather than trying to do it with something like a virtual retreat, which would likely never be able to live up to our previous retreat experiences.

There’s a big opportunity for exchanging context and ideas about current Buffer challenges within teams that are cross-functional and made up of people who don’t normally work together. This could help us for months afterward. Build Week can also be a time where strong bonds, both in work and personally, are formed. My dream would be that after Build Week, people within their teams hit each other up in Slack and jump on a spontaneous catch-up call once in a while because they’ve become close during the week.

We’ve had engineering hack weeks for a long time now. Those have been awesome in their way, but they have been very contained to engineering. And while those events created a lot of value, they often lacked perspectives that would have enhanced the work, such as customer advocacy, design, culture, or operational perspectives. As a company, we want to challenge some of the processes we have built up over the past few years.
Build Week is like a blank canvas – we clear out a whole week and then diligently decide what we need in terms of structure and process to make this concept thrive, and no more. This can act as inspiration for us going forward, where we can use the week as an example of rethinking process and questioning the ways we do things.

The opportunity that comes with Build Week

If we are successful with Build Week, I am confident that we will surprise ourselves with just how much value is created by the whole company in that one week alone. In embracing being a small company, we’re currently striving to challenge ourselves by moving at a faster pace without over-working. I think this is possible, and the completely different nature of how we work together in Build Week could give us ideas for what we can adjust to work more effectively and productively together in our regular flow of work.

The opportunity for value creation within Build Week goes far beyond product features or improvements. Build Week will be a time for us to build anything that serves either customers or the team in pursuit of our vision and mission, or strengthens and upholds our values. We can stretch ourselves in the possibilities – there could be a marketing campaign, a data report, improving an existing process in the company, rethinking our tools, creating a new element of transparency, bringing our customers together, etc.

Wish us luck! I believe Build Week can be one of the most fun, high-energy weeks we’ve had in years. I expect we can come out of the week on a high that can fuel us with motivation and enjoyment of our work for months. That is a worthy goal and something I think we can achieve with a little creativity and the right group of people designing and planning the event. Of course, part of the beauty of Build Week itself is that, just like all the ideas and the freedom to choose how you work in a team, we don’t know everything we’ll learn as a company by doing this.
It could be chaotic, there could be challenges, and there will undoubtedly be many insights, but we will be better off for having gone through the process. Please wish us all luck as we head into next week. There’s a lot of excitement in the company to create value. We hope to have new features to share with you in the coming weeks, and we’ll be back soon with a post sharing how it went.

Have you tried something like Build Week before? If so, how did it go? I’d love to hear from you on Twitter.

Photo by C Dustin on Unsplash.

Our vision for location-independent salaries at Buffer

Note: this was originally posted on the Buffer blog.

I’m happy to share that we’ve established a long-term goal that salaries at Buffer will not be based on location. We made our first step towards this last year, when we moved from four cost-of-living based location bands for salaries to two bands. We did this by eliminating the lower two location bands. The change we made resulted in salary increases for 55 of 85 team members, with the increase being on average $10,265. When the time is right, we will be eliminating the concept of cost-of-living based location bands entirely, which will lead to a simpler approach to providing generous, fair and transparent salaries at Buffer. In this post I’m sharing my thinking behind this change and our approach to pay overall.

Location and Salaries

It’s been interesting to see the conversation about location and salaries unfold both within Buffer and beyond. We’ve heard from many teammates over the years about the pros and cons of the location factor, and of course we’ve watched with interest as this became a regular topic of conversation within the larger remote work community. I've had many healthy debates with other remote leaders, and there are arguments for eliminating a location component which I haven’t agreed with. I don’t believe pay differences across locations are unethical, and the location factor has made a lot of sense for us in the past. However, the last few years have seen a lot of change for remote teams. A change like this isn't to be made lightly, and at our scale it comes with considerations.

Our Compensation Philosophy

Compensation is always slowly evolving as companies and markets mature and change. We’ve been through several major iterations of our salary formula, and myriad small tweaks, throughout the last 8 or so years since we launched the initial version.
Part of the fun of having a salary formula is knowing that it’s never going to be “done.” Knowing that the iterations would continue, Caryn, our VP of Finance, and I worked together to establish our compensation philosophy and document our principles on compensation, to help us determine what should always be true even as the salary formula changes over time. We arrived at four principles that guide our decisions around compensation. We strive for Buffer’s approach to salary, equity, and benefits to be:

Transparent
Simple
Fair
Generous

These are the tenets that have guided us through compensation decisions over the years. After we articulated them as our compensation principles, we were able to look at the location factor of our formula with new clarity. There are a few key considerations that were part of our discussions and my decision to put Buffer on a path towards removing the location factor from salaries, which I'll go into in more detail next.

Transparency, Simplicity, and Trust

Our salary formula is one of the fundamental reasons that we can share our salaries transparently. Having a spreadsheet of team salaries is a huge step toward transparency, but true transparency is reached when the formula is simple, straightforward, easy to understand, and, importantly, easy to use. In one of our earlier versions of the salary formula, we calculated the cost-of-living multiplier for every new location when we made an offer. That was cumbersome, and it meant that a candidate couldn’t truly know their salary range until we calculated that. This improved greatly when we moved to the concept of “cost-of-living bands.” After that, different cities and towns could more easily be classified into each band. This massively increased the transparency of the formula, and I think it helped create a lot more trust in this system.
Anyone could relatively easily understand which band their location fit into, and with that knowledge understand the exact salary they'd receive at Buffer. This type of immediate understanding of the salary formula, and the ability to run calculations yourself, is where transparency really gains an extra level of impact and drives trust within and beyond the team. However, with our four cost-of-living bands, there were still decisions to be made around where locations fall, and this has been the topic of much healthy and productive debate over the years. The conversations around locations falling between the Average and High bands are what led us to introduce the Intermediate band. And with four choices of location band, there has been some disparity in salaries across the team. With the benefits that come from the powerful combination of transparency and simplicity, alongside the increased trust that is fostered with more parity across the team, I’m choosing to drive Buffer’s salary formula in the direction of eventually having no cost-of-living factor.

Freedom and Flexibility

We’ve long taken approaches to work which are grounded in the ideal of an increased level of freedom and flexibility as a team member. When I started Buffer, I wanted greater freedom and a better quality of life than I felt would be possible by working at a company. That came in various forms, including location freedom, flexibility of working hours, and financial freedom. And as we’ve built the company, I’ve been proud that we’ve built a culture where every single team member can experience an unusual and refreshing level of freedom and flexibility. Since the earliest days, one of our most fondly held values has been to Improve Consistently, and in particular this line: “We choose to be where we are the happiest and most productive.”
This is a value that has supported and encouraged teammates to travel and try living in different cities, in search of that “happiest and most productive” place. It has enabled people to find work they love and great co-workers from a hometown near family, where it would be hard to find a local company that could offer the same experience and challenge. It has also enabled people to travel in order to support their partner in an important career change involving a move, which allows an often stressful change to happen much more smoothly, since you can keep working at Buffer from anywhere in the world.

Having a culture that supports moving freely across the globe has been a powerful level of freedom and flexibility. That freedom has been matched with a salary system which adjusts compensation to accommodate those changes in a fair and appropriate way. However, knowing that your salary will fluctuate, and can decrease due to a choice to be somewhere else, does limit that freedom and the ability to make a decision to move. Moving towards a salary formula with parity across all locations will enable an even greater level of freedom and flexibility. It feels clear to me that choosing to move is a personal or a family decision, and it is ideal if Buffer salaries are structured in a way that honors and supports that reality. I’m excited that working towards removing our cost-of-living differences will help significantly reduce the friction involved in making a potentially positively life-changing decision to live in a different city or country.

Results, Independence, and Reward

At Buffer, we are not on the typical hyper-growth VC path. This comes with some constraints: we don’t have tens of millions in funding and unlimited capital to deploy in an attempt to find a rapid path to $100M and going public (thankfully, that’s not our goal).
This path also means that our experiences as teammates are, in a variety of ways, directly tied to whether we are successfully serving existing and new customers. For example, the level of benefits, the ability to travel (in normal times), and the competitiveness of compensation are very much driven by our revenue growth and profitability. But this is independence too. The thing we often need to remind ourselves of is that while we may feel more constrained at times, we have full freedom over what we do with the success we achieve. Making a choice like this is one example of that. It is my intention as founder / CEO that as we succeed together as a company, we all benefit from that success and see adjustments that improve our quality of life and create wealth. We are in a position of profitability which allows us to take a significant step towards removing the cost-of-living factor from our salary framework, which I believe serves those goals. Removing it entirely will be determined by us successfully executing on our strategy and serving customers well.

Reducing Cost-of-Living Bands

The way our salary formula works is that we benchmark a teammate’s role based on market data at the 50th percentile for the software industry in San Francisco, and then multiply that by the cost-of-living band. So, a Product Marketer benchmark at the 50th percentile of the San Francisco market data is $108,838. Depending on the teammate’s location, this would be multiplied by a cost-of-living band (Low, Average, Intermediate or High). For example, if they lived in Boulder, Colorado, a city with Average cost-of-living, the benchmark would be multiplied by 0.85 for a salary of $92,512. To best reflect our compensation philosophy, company values, and the path we want for Buffer, we have eliminated the Low and Average cost-of-living bands. What we’ve done is bring all Low (0.75 multiplier) and Average (0.85 multiplier) salaries up to Intermediate (0.9 multiplier), which we now call our Global band.
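The formula described here reduces to a single multiplication. A minimal Python sketch (the benchmark figure and band multipliers are the ones quoted in this post; the function and dictionary names are my own, for illustration):

```python
# Cost-of-living band multipliers as described in this post
# (Intermediate was renamed Global; Low and Average were later eliminated).
BANDS = {"Low": 0.75, "Average": 0.85, "Global": 0.90, "High": 1.00}

def salary(sf_benchmark: float, band: str) -> int:
    """San Francisco 50th-percentile benchmark times the band multiplier."""
    return round(sf_benchmark * BANDS[band])

# The Product Marketer example: a $108,838 benchmark at the old Average band
print(salary(108_838, "Average"))  # 92512
# The same role at the new Global band
print(salary(108_838, "Global"))   # 97954
```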
This is what resulted in 55 teammates seeing an average increase to their salary of $10,265. Our two bands are now Global (0.9 multiplier) and High (1.0 multiplier).

This change is based on my vision for Buffer and how being a part of this team affects each of us individually, as well as the direction I believe the world is going. I’m excited about the change first and foremost because it supports our goal of having a transparent, simple, fair, and generous approach to compensation. This is also a move that raised salaries right away for more than half of the team. This point in particular gives me a lot of joy, because I want compensation to be one of the incredible parts of working at Buffer. Money isn’t everything, and we all need kind and smart colleagues, a psychologically safe environment, and challenging and interesting problems to work on in order to be fulfilled at work. Beyond that, however, money really impacts life choices, and that’s ultimately what I want for every Bufferoo: the freedom to choose their own lifestyle and make choices for themselves and their families’ long-term health and happiness. It’s important to me that people who choose to spend their years at Buffer will have the freedom to make their own choices to have a great life. And, for our teammates who live in much lower cost-of-living areas, a Buffer salary could end up being truly life-changing. I’m really happy with that outcome.

The decision was also impacted by the direction that I believe the world is going (and the direction we want to help it go). Remote work is in full swing, and it’s increasingly breaking down geographical borders. I believe this is a great thing. Looking ahead 10 or even 5 years, it seems to me that we’re going to see a big rebalancing, or correction. I believe it’s important to be ahead of these types of shifts, and to proactively choose the path that’s appropriate and energizing for us.

What next?
Our plan is to eventually get to one single location band, essentially eliminating the cost-of-living factor from the salary formula altogether. This will be possible once we can afford to make this change and sustain our commitment to profitability. So, this will be driven by the long-term results we create from our hard work, creativity in the market, and commitment to customers. What questions does this spark for you? Send me a tweet with your thoughts. Photo by Javier Allegue Barros on Unsplash.


More in programming

clamp / median / range

Here are a few tangentially-related ideas vaguely near the theme of comparison operators:

- comparison style
- clamp style
- clamp is median
- clamp in range
- range style
- style clash?

comparison style

Some languages such as BCPL, Icon, Python have chained comparison operators, like

```python
if min <= x <= max: ...
```

In languages without chained comparison, I like to write comparisons as if they were chained, like,

```
if min <= x && x <= max { // ... }
```

A rule of thumb is to prefer less than (or equal) operators and avoid greater than. In a sequence of comparisons, order values from (expected) least to greatest.

clamp style

The clamp() function ensures a value is between some min and max:

```python
def clamp(min, x, max):
    if x < min: return min
    if max < x: return max
    return x
```

I like to order its arguments matching the expected order of the values, following my rule of thumb for comparisons. (I used that flavour of clamp() in my article about GCRA.) But I seem to be unusual in this preference, based on a few examples I have seen recently.

clamp is median

Last month, Fabian Giesen pointed out a way to resolve this difference of opinion: a function that returns the median of three values is equivalent to a clamp() function that doesn’t care about the order of its arguments. This version is written so that it returns NaN if any of its arguments is NaN. (When an argument is NaN, both of its comparisons will be false.)
```rust
fn med3(a: f64, b: f64, c: f64) -> f64 {
    match (a <= b, b <= c, c <= a) {
        (false, false, false) => f64::NAN,
        (false, false, true)  => b, // a > b > c
        (false, true,  false) => a, // c > a > b
        (false, true,  true)  => c, // b <= c <= a
        (true,  false, false) => c, // b > c > a
        (true,  false, true)  => a, // c <= a <= b
        (true,  true,  false) => b, // a <= b <= c
        (true,  true,  true)  => b, // a == b == c
    }
}
```

When two of its arguments are constant, med3() should compile to the same code as a simple clamp(); but med3()’s misuse-resistance comes at a small cost when the arguments are not known at compile time.

clamp in range

If your language has proper range types, there is a nicer way to make clamp() resistant to misuse:

```rust
fn clamp(x: f64, r: RangeInclusive<f64>) -> f64 {
    let (&min, &max) = (r.start(), r.end());
    if x < min { return min }
    if max < x { return max }
    return x;
}

let x = clamp(x, MIN..=MAX);
```

range style

For a long time I have been fond of the idea of a simple counting for loop that matches the syntax of chained comparisons, like

```
for min <= x <= max: ...
```

By itself this is silly: too cute and too ad-hoc. I’m also dissatisfied with the range or slice syntax in basically every programming language I’ve seen. I thought it might be nice if the cute comparison and iteration syntaxes were aspects of a more generally useful range syntax, but I couldn’t make it work. Until recently, when I realised I could make use of prefix or mixfix syntax, instead of confining myself to infix. So now my fantasy pet range syntax looks like

```
>= min <  max  // half-open
>= min <= max  // inclusive
```

And you might use it in a pattern match

```
if x is >= min < max { // ... }
```

Or as an iterator

```
for x in >= min < max { // ... }
```

Or to take a slice

```
xs[>= min < max]
```

style clash?

It’s kind of ironic that these range examples don’t follow the left-to-right, lesser-to-greater rule of thumb that this post started off with. (x is not lexically between min and max!)
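As a quick numerical sanity check of the clamp-is-median equivalence (a Python sketch of my own, not from the post; the sorted-based med3 sidesteps the NaN handling discussed above):

```python
def clamp(lo, x, hi):
    # argument order matters: expects lo <= hi
    if x < lo:
        return lo
    if hi < x:
        return hi
    return x

def med3(a, b, c):
    # median of three: argument order does not matter
    return sorted((a, b, c))[1]

# med3 agrees with clamp no matter where the bounds appear
for x in range(-5, 15):
    assert med3(3, x, 9) == med3(x, 9, 3) == clamp(3, x, 9)
```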
But that rule of thumb is really intended for languages such as C that don’t have ranges. Careful stylistic conventions can help to avoid mistakes in nontrivial conditional expressions. It’s much better if language and library features reduce the need for nontrivial conditions and catch mistakes automatically.

Digital hygiene: Emails

Email is your most important online account, so keep it clean.

Building a container orchestrator

Kubernetes is not exactly the most fun piece of technology around. Learning it isn’t easy, and learning the surrounding ecosystem is even harder. Even those who have managed to tame it are still afraid of getting paged by an ETCD cluster corruption, a Kubelet certificate expiration, or the DNS breaking down (and somehow, it’s always the DNS). If you’re like me, the thought of making your own orchestrator has crossed your mind a few times. The result would, of course, be a magical piece of technology that is both simple to learn and wouldn’t break down every weekend. Sadly, the task seems daunting. Kubernetes is a multi-million-line project that has been worked on for more than a decade. The good thing is someone wrote a book that can serve as a good starting point to explore the idea of building our own container orchestrator: “Build an Orchestrator in Go”, written by Tim Boring and published by Manning.

The tasks

The basic unit of our container orchestrator is called a “task”. A task represents a single container. It contains configuration data, like the container’s name, image and exposed ports. Most importantly, it indicates the container’s state, and so acts as a state machine. The state of a task can be Pending, Scheduled, Running, Completed or Failed. Each task will need to interact with a container runtime through a client. In the book, we use Docker (aka Moby). The client will get its configuration from the task and then proceed to pull the image, create the container and start it. When it is time to finish the task, it will stop the container and remove it.

The workers

Above the task, we have workers. Each machine in the cluster runs a worker. Workers expose an API through which they receive commands. Those commands are added to a queue to be processed asynchronously. When the queue gets processed, the worker will start or stop tasks using the container client.
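The task-as-state-machine idea can be sketched like this (a Python sketch; the book’s code is Go, and the set of allowed transitions here is my assumption):

```python
from enum import Enum, auto

class State(Enum):
    PENDING = auto()
    SCHEDULED = auto()
    RUNNING = auto()
    COMPLETED = auto()
    FAILED = auto()

# Assumed legal transitions between the states named in the book
TRANSITIONS = {
    State.PENDING:   {State.SCHEDULED},
    State.SCHEDULED: {State.RUNNING, State.FAILED},
    State.RUNNING:   {State.COMPLETED, State.FAILED},
    State.COMPLETED: set(),
    State.FAILED:    set(),
}

class Task:
    def __init__(self, name: str, image: str):
        self.name, self.image = name, image
        self.state = State.PENDING

    def transition(self, new: State) -> None:
        if new not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new}")
        self.state = new

t = Task("web", "nginx:latest")
t.transition(State.SCHEDULED)
t.transition(State.RUNNING)
t.transition(State.COMPLETED)
```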
In addition to exposing the ability to start and stop tasks, the worker must be able to list all the tasks running on it. This demands keeping a task database in the worker’s memory and updating it every time a task changes state. The worker also needs to be able to provide information about its resources, like the available CPU and memory. The book suggests reading the /proc Linux file system using goprocinfo, but since I use a Mac, I used gopsutil.

The manager

On top of our cluster of workers, we have the manager. The manager also exposes an API, which allows us to start, stop, and list tasks on the cluster. Every time we want to create a new task, the manager will call a scheduler component. The scheduler has to list the workers that can accept more tasks, assign them a score by suitability and return the best one. When this is done, the manager will send the work to be done using the worker’s API. In the book, the author also suggests that the manager component should keep track of every task’s state by performing regular health checks. Health checks typically consist of querying an HTTP endpoint (i.e. /ready) and checking if it returns 200. In case a health check fails, the manager asks the worker to restart the task. I’m not sure I agree with this idea. It could lead to the manager and worker having differing opinions about a task’s state. It would also cause scaling issues: the manager workload would have to grow linearly as we add tasks, and not just when we add workers. As far as I know, in Kubernetes, Kubelet (the equivalent of the worker here) is responsible for performing health checks.

The CLI

The last part of the project is to create a CLI to make sure our new orchestrator can be used without having to resort to firing up curl. The CLI needs to implement the following features: start a worker, start a manager, run a task in the cluster, stop a task, get the task status, and get the worker node status. Using cobra makes this part fairly straightforward.
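The scheduler’s contract — filter the workers that can accept the task, score them, return the best one — can be sketched as follows (a Python sketch; the headroom-based score is purely illustrative, not the book’s actual formula):

```python
def pick_worker(workers, cpu_needed, mem_needed):
    """Return the name of the best-scoring worker that can fit
    the task, or None if no worker has enough free resources."""
    candidates = [
        w for w in workers
        if w["free_cpu"] >= cpu_needed and w["free_mem"] >= mem_needed
    ]
    if not candidates:
        return None
    # Illustrative score: prefer the worker with the most headroom
    best = max(candidates, key=lambda w: w["free_cpu"] + w["free_mem"])
    return best["name"]

workers = [
    {"name": "w1", "free_cpu": 2, "free_mem": 4},
    {"name": "w2", "free_cpu": 8, "free_mem": 16},
]
print(pick_worker(workers, 1, 2))  # → w2, the worker with the most headroom
```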
It lets you create very modern-feeling command-line apps, with properly formatted help commands and easy argument parsing. Once this is done, we almost have a fully functional orchestrator. We just need to add authentication. And maybe some kind of DaemonSet implementation would be nice. And a way to handle mounting volumes…
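cobra is a Go library; to show the shape of such a CLI without repeating the book’s code, here is a rough equivalent using Python’s argparse (every command and argument name here is illustrative):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="orchestrator")
    sub = parser.add_subparsers(dest="command", required=True)

    # Subcommands mirror the feature list above
    sub.add_parser("worker", help="start a worker")
    sub.add_parser("manager", help="start a manager")

    run = sub.add_parser("run", help="run a task in the cluster")
    run.add_argument("image", help="container image to run")

    stop = sub.add_parser("stop", help="stop a task")
    stop.add_argument("task_id")

    status = sub.add_parser("status", help="get task or worker node status")
    status.add_argument("task_id", nargs="?")

    return parser

args = build_parser().parse_args(["run", "nginx:latest"])
print(args.command, args.image)  # → run nginx:latest
```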

Bugs I fixed in SumatraPDF

“The unexamined life is not worth living,” said Socrates. I don’t know about that, but to become a better, faster, more productive programmer it pays to examine what makes you un-productive. Fixing bugs is one of those un-productive activities. You have to fix them, but it would be even better if you didn’t write them in the first place. Therefore it’s good to reflect after fixing a bug. Why did the bug happen? Could I have done something to not write the bug in the first place? If I did write the bug, could I do something to diagnose or fix it faster? This seems like a great idea that I wasn’t doing. Until now. Here’s a random selection of bugs I found and fixed in SumatraPDF, with some reflections. SumatraPDF is a C++ win32 Windows app. It’s a small, fast, open-source, multi-format PDF/eBook/Comic Book reader. To keep the app small and fast I generally avoid using other people’s code. As a result most code is mine and most bugs are mine. Let’s reflect on those bugs.

TabWidth doesn’t work

A user reported that the TabWidth advanced setting doesn’t work in 3.5.2 but worked in 3.4.6. I looked at the code and indeed: the setting was not used anywhere. The fix was to use it. Why did the bug happen? It was a refactoring. I heavily refactored the tabs control. Somehow during the rewrite I forgot to use the advanced setting when creating the new tabs control, even though I did write the code to support it in the control. I guess you could call it sloppiness. How could I not write the bug? I could review the changes more carefully. There’s no one else working on this project, so there’s no one else to do additional code reviews. I typically do a code review by myself with webdiff, but let’s face it: reviewing changes right after writing them is the worst possible time. I’m biased to think that the code I just wrote is correct and I’m often mentally exhausted. Maybe I should adopt a process where I review changes made yesterday with fresh, un-tired eyes? How could I detect the bug earlier?
The 3.5.2 release happened over a year ago. Could I have found it sooner? I knew I was refactoring the tabs code. I knew I had a setting for changing the look of tabs. If I had connected the dots at the time, I could have tested whether the setting still worked. I don’t make releases too often. I could do more testing before each release and at the very least verify all advanced settings work as expected.

The real problem

In retrospect, I shouldn’t have implemented that feature at all. I like Sumatra’s customizability and I think it’s a non-trivial contributor to its popularity, but it took over a year for someone to notice and report that particular bug. It’s clear it’s not a frequently used feature. I implemented it because someone asked and it was easy. I should have said no to that particular request.

Fix printing crash by correctly ref-counting engine

Bugs can crash your program. Users rarely report crashes even though I put effort into making it easy. When a crash happens, I have a crash handler that saves the diagnostic info to a file and shows a message box asking users to report the crash; with a press of a button I launch a notepad with diagnostic info and a browser with a page describing how to submit that as a GitHub issue. The other button is to ignore my pleas for help. Most users overwhelmingly choose to ignore. I know that because I also have a crash reporting system that sends me a crash report. I get thousands of crash reports for every crash reported by a user. Therefore I’m convinced that the single most impactful thing for making software that doesn’t crash is to have a crash reporting system, look at the crashes and fix them. This is not a perfect system because all I have is a call stack of the crashed thread, info about the computer and very limited logs. Nevertheless, sometimes all it takes is a look at the crash call stack and inspection of the code. I saw a crash in printing code which I fixed after some code inspection.
The clue was that I was accessing a seemingly destroyed instance of Engine. That was easy to diagnose because I had just refactored the code to add ref-counting to Engine, so it was easy to connect the dots. I’m not a fan of ref-counting. It’s easy to mess up (add too many refs, which leads to memory leaks, or too many releases, which leads to premature destruction). I’ve seen codebases where developers were crazy in love with ref-counting: every little thing, even objects with obvious lifetimes. In contrast, that was the first ref-counted object in over 100k loc of SumatraPDF code. It was necessary in this case because I would potentially hand off the object to a printing thread, so its lifetime could outlast the lifetime of the window for which it was created. How could I not write the bug? It’s another case of sloppiness, but I don’t feel bad. I think the bug existed there before the refactoring, and this is the hard part about programming: complex interactions between parts of the program that are distant in space and time. Again, more time spent reviewing the change could have prevented it. As a bonus, I managed to simplify the logic a bit. Writing software is an incremental process. I could feel bad about not writing perfect code from the beginning, but I choose to enjoy the process of finding and implementing improvements, making the code and the program better over time.

Tracking down a chm thumbnail crash

Not all crashes can be fixed given the information in a crash report. I saw a report with a crash related to creating a chm thumbnail. I couldn’t figure out why it crashes, but I could add more logging to help figure out the issue if it happens again. If it doesn’t happen again, then I win. If it does happen again, I will have more context in the log to help me figure out the issue. Update: I did fix the crash.

Fix crash when viewing favorites menu

A user reported a crash. I was able to reproduce the crash and fix it.
This is the best case scenario: a bug report with instructions to reproduce a crash. If I can reproduce the crash when running a debug build under the debugger, it’s typically very easy to figure out the problem and fix it. In this case I had recently implemented an improved version of the StrVec (vector of strings) class. It had a compatibility bug compared to the previous implementation in that StrVec::InsertAt(0) into an empty vector would crash. Arguably it’s not correct usage, but existing code used it, so I added support for InsertAt() at the end of the vector. How could I not write the bug? I should have written a unit test (which I did in the fix). I don’t blindly advocate unit tests. Writing tests has a productivity cost, but for such low-level, relatively tricky code, unit tests are good. I don’t feel too bad about it. I did write lots of tests for StrVec, and arguably this particular usage of InsertAt() was borderline correct, so it didn’t occur to me to test that condition.

Use after free

I saw a crash in crash reports, close to DeleteThumbnailForFile(). I looked at the code:

```cpp
if (!fs->favorites->IsEmpty()) {
    // only hide documents with favorites
    gFileHistory.MarkFileInexistent(fs->filePath, true);
} else {
    gFileHistory.Remove(fs);
    DeleteDisplayState(fs);
}
DeleteThumbnailForFile(fs->filePath);
```

I immediately spotted the suspicious part: we call DeleteDisplayState(fs) and then might use fs->filePath. I looked at DeleteDisplayState and it does, in fact, delete fs and all its data, including filePath. So we use freed data in a classic use-after-free bug. The fix was simple: make a copy of fs->filePath before calling DeleteDisplayState and use that. How could I not write the bug? Same story: be more careful when reviewing the changes, test the changes more. If I fail at that, crash reporting saves my ass. The bug didn’t last more than a few days and affected only one user. I immediately fixed it and published an update.
Summary of being more productive and writing bug-free software

If many people use your software, a crash reporting system is a must. Crashes happen and few of them are reported by users. Code reviews can catch bugs, but they are also costly, and reviewing your own code right after you write it is not a good time: you’re tired and biased to think your code is correct. Maybe reviewing the code a day later, with fresh eyes, would be better. I don’t know, I haven’t tried it.

An Analysis of Links From The White House’s “Wire” Website

A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site’s purpose was to serve as “a place for supporters of the president’s agenda to get the real news all in one place”. So a link blog, if you will. As a self-professed connoisseur of websites and link blogs, this got me thinking: “I wonder what kind of links they’re considering as ‘real news’ and what they’re linking to?” So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns. I wrote some JavaScript to:

- Fetch the HTML page at whitehouse.gov/wire
- Parse it with cheerio
- Select all the external links on the page
- Return a list of links and their headline text

In a few minutes I had a quick analysis of what kind of links were on the page. This immediately sparked my curiosity to know more about the meta information around the links, like:

- If you grouped all the links together, which sites get linked to the most?
- What kind of interesting data could you pull from the headlines they’re writing, like the most frequently used words?
- What if you did this analysis, but with snapshots of the website over time (rather than just the current moment)?

So I got to building. Quadratic today doesn’t yet have the ability for your spreadsheet to run in the background on a schedule and append data. So I had to look elsewhere for a little extra functionality. My mind went to val.town, which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API. After a quick read of their docs, I figured out how to write a little script that’ll run once a day, scrape the site, and save the resulting HTML page in their key/value storage.
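The actual analysis used JavaScript with cheerio inside Quadratic; roughly the same select-links-and-group-by-domain step can be sketched with Python’s standard library (the sample HTML below is made up, and urlparse’s netloc stands in for the proper TLD extraction that tldts does):

```python
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect href values from external (http/https) anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and href.startswith("http"):
                self.links.append(href)

def domains_by_occurrence(html: str) -> Counter:
    collector = LinkCollector()
    collector.feed(html)
    return Counter(urlparse(link).netloc for link in collector.links)

sample = """
<a href="https://youtube.com/watch?v=1">clip</a>
<a href="https://foxnews.com/story">story</a>
<a href="https://youtube.com/watch?v=2">clip</a>
<a href="/internal">skip me</a>
"""
print(domains_by_occurrence(sample).most_common())
# → [('youtube.com', 2), ('foxnews.com', 1)]
```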
From there, I was back in Quadratic, writing code to talk to val.town’s API and retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

- Fine-tune how I select all the editorial links on the page from the source HTML (I didn’t want, for example, to include external links to the White House’s social pages which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page.
- Parse the links and pull out the top-level domains so I could group links by domain occurrence.
- Create charts and graphs to visualize the structured data I had created.

Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2d grid in real time, which made for a super fast feedback loop! Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating! It’s been about a month and a half since I started this and I have about fifty days’ worth of data. The results? Here are the top 10 domains that the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

- youtube.com (133)
- foxnews.com (72)
- thepostmillennial.com (67)
- foxbusiness.com (66)
- breitbart.com (64)
- x.com (63)
- reuters.com (51)
- truthsocial.com (48)
- nypost.com (47)
- dailywire.com (36)

From the links, here’s a word cloud of the most commonly recurring words in the link headlines:

- “trump” (343)
- “president” (145)
- “us” (134)
- “big” (131)
- “bill” (127)
- “beautiful” (113)
- “trumps” (92)
- “one” (72)
- “million” (57)
- “house” (56)

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool!
If you want to check out the spreadsheet — sorry! My API key for val.town is in it (“secrets management” is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it) which you can check out here at your convenience. Email · Mastodon · Bluesky
