While many companies build out an elaborate Engineering onboarding program, the process for onboarding new executives tends to be an ad-hoc, chaotic affair. There usually is an executive onboarding process, but it’s used too infrequently to ever get excellent. Part of the problem is similar to that of an executive job search: every executive and role is unique, and it’s hard to create a repeatable program to handle one-of-a-kind onboardings. The other part of the problem is that the executive’s manager, the CEO, usually hires a new executive to solve a specific problem. Once the executive starts, the CEO will often turn to focus on their next biggest problem. This happens so frequently that the two most common frustrations I hear from executives about onboarding their peers are “Isn’t this the CEO’s job, why should I worry about it?” followed shortly thereafter by “Why didn’t I worry about it more? I knew leaving it to the CEO was a mistake.” Even when the CEO is engaged, most new...
over a year ago


More from Irrational Exuberance

Public company comparables.

A few years ago I wrote about reading a Profit & Loss statement, which is a foundational executive skill. I also subsequently wrote about ways to measure your engineering organization. Despite having written those, I still spend a lot of time wondering about effective ways to represent an engineering organization to your board of directors.

Over the past few years, one of the most useful charts I’ve found for explaining an R&D organization is a scatterplot of R&D spend as a % of margin versus YoY growth of last twelve months (LTM) revenue. Unlike so many other measures, this is an explicit measure of your R&D organization’s value as an investment relative to peer organizations.

Until recently, I assumed building this dataset required reading financial filings, but my strategic finance partner at Carta, Tyler Braslow, pointed out that you can get all of this data for the tech sector from Meritech Analytics, for free. When you log in to Meritech, you’re dropped into a table of public company comparables for tech companies. This is the exact dataset I’d been looking for to build this chart.

After logging in, you can copy the contents of that table into Google Sheets, Excel, or whatever you’re most comfortable with. Within that sheet, the columns you care about are:

% YoY Growth LTM Rev (column Q for me) – how much “last twelve months revenue” has grown year over year, as a percentage

% LTM Margins for R&D (column U for me) – how much R&D spend is as a percentage of last twelve months margin

LTM Revenue (column O for me) – although I don’t show this in the scatterplot, I find it useful for debugging outlier values

Hiding the other columns gives you a much simpler table. From that table, you’re then able to build the scatterplot. Note that being “higher” means your R&D spend as a percentage of LTM margin is higher, which is a bad thing. The best companies are to the bottom and to the right; the worst companies are to the top and to the left.

With this chart as a starting point, you can then plot your own company and show where you stand. You could also show how your company’s position in the chart has evolved over time: hopefully improving. Finally, you might want to cull some of these data points to better determine your public company comparables. The Meritech dataset has 106 entries, but you might prefer a more representative thirty.
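If you’d rather build the scatterplot in code than in a spreadsheet, here’s a minimal sketch using pandas and matplotlib. It assumes you’ve exported the Meritech table to a comparables.csv file; the column header strings and the highlighted “your company” point are placeholders you’d adjust for your own export.

```python
# Minimal sketch: scatterplot of R&D spend (% of LTM margin) vs. YoY LTM revenue growth.
# Column names are assumptions about your CSV export; rename to match your sheet.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("comparables.csv")

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(df["% YoY Growth LTM Rev"], df["% LTM Margins for R&D"])
ax.set_xlabel("YoY growth of LTM revenue (%)")
ax.set_ylabel("R&D spend as % of LTM margin")
# Lower-right is better, upper-left is worse. Plot your own company for comparison
# (the 35 / 40 values below are placeholders).
ax.scatter([35], [40], color="red", label="Your company")
ax.legend()
plt.show()
```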

a week ago 3 votes
How to filter out old email from inbox

Every few years I take a pass at reducing the chaos in my personal inboxes. There are simply too many emails to deal with, and that generally leads to me increasingly failing to follow up on important email. Up to this point, my strategy has largely been filtering out emails that I never want to read. But there’s another category of email: stuff I often want to read when it’s fresh, but never want to read after it’s fresh. For example, calendar reminders, some mailing lists, some newsletters, etc.

I decided to figure out how I could set up a system where I could mark a number of things as “filter three days after receipt.” This is a nice compromise, because I do want to see those things, but I don’t want to have to remember to archive them after the fact.

You can write a search query for this in GMail: from:(calendar-notification@google.com) older_than:3d However, if you try to create a GMail filter using that, it turns the older_than:3d into a fixed point in time rather than doing what you want. It seems that this is unsolvable within GMail itself. However, some quick searching suggested it was possible to create a Google Apps Script to solve this, so I asked Claude to write the script for me.

Following those instructions, I went to script.google.com, which I have not gone to in many years. I edited the generated script from Claude to use the tag “TempMsg”, to archive messages (as originally it had those commented out), and to limit itself to the first fifty items matching that tag. You can find the full code in this gist.

I attempted to run this as is, and got an error message that I needed to grant permissions. That requires three clicks within the Google Scripts UI. This also requires approving the somewhat scary message that I trust myself. From there I tried to run the script, and it failed because the TempMsg tag didn’t exist in my inbox. So I went ahead and created that tag, and set up some filters to assign it to certain email senders. After that, I was able to run the script and it worked properly.

Note that I convinced myself it was failing for a bit, because it doesn’t remove messages from the past three days. That is exactly how it’s supposed to work, but I would run it, see messages with the tag still there, and think it was failing. Whoops.

After convincing myself it was working, I added a periodic trigger to run it. I now have it running on a daily basis, and it’s given me a nice new tool for managing my email a bit better. After verifying it, I also used the tag manager to “hide” this tag in the inbox, so I don’t have to see the TempMsg tag everywhere. If I ever need to debug things, I can always make it visible again.
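The post’s actual script lives in the linked gist and runs as Google Apps Script. As a rough sketch of the same idea in a different environment, here’s what the cleanup could look like in Python against the Gmail API (google-api-python-client), assuming you’ve already obtained authorized credentials with the gmail.modify scope and reuse the post’s TempMsg label. This illustrates the approach; it is not the author’s script.

```python
# Sketch: archive labeled messages older than three days via the Gmail API.
# `creds` is assumed to be an already-authorized google.oauth2 credentials object.
from googleapiclient.discovery import build

def archive_stale_tempmsg(creds, label="TempMsg", max_items=50):
    service = build("gmail", "v1", credentials=creds)
    # Same query idea as the post: labeled messages older than three days.
    result = service.users().messages().list(
        userId="me", q=f"label:{label} older_than:3d", maxResults=max_items
    ).execute()
    messages = result.get("messages", [])
    if not messages:
        return 0
    # Archiving in Gmail means removing the INBOX label.
    service.users().messages().batchModify(
        userId="me",
        body={"ids": [m["id"] for m in messages], "removeLabelIds": ["INBOX"]},
    ).execute()
    return len(messages)
```

You’d then schedule this however you like (cron, for example), which plays the role of the post’s time-driven Apps Script trigger.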

a week ago 2 votes
How should Stripe deprecate APIs? (~2016)

While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe’s most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn’t just a technical design quirk, it’s a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe’s business success.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document: to apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

Design for long API lifetime. APIs are not inherently durable. Instead we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn’t use this API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you’ve developed? At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussions to ensure API Review members have sufficient context before making a decision.

We never deprecate APIs without an unavoidable requirement to do so. Even if it’s technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by the API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support subscription workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe’s Strong Customer Authentication requirements. Even in that case the charge API continued to work as it did previously, albeit only for non-European Union payments.

We manage this policy’s implied technical debt via an API translation layer. We release changed APIs as versions, tracked in our API version changelog. However, we only maintain one implementation internally, which is the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn’t eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions.

In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future, it may be possible for us to make more backwards incompatible changes because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers—or a larger enterprise involving numerous stakeholders—handling external API changes can be particularly challenging. Even if this is only marginally true, we’ve modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn reduction work.

While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. Without an API change forcing them to change their integration, we believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.

We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can’t assume that companies that make API changes are ill-informed. Rather, it appears that they experience a meaningful technical debt tradeoff between the API provider and API consumers, and aren’t willing to consistently absorb that technical debt internally.

Future compliance or security requirements—along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI—may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.
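To make the version-translation idea concrete, here’s a toy sketch of how such a layer can be structured. It is not Stripe’s implementation; the version label, field names, and transforms are invented purely to illustrate keeping a single modern code path plus per-version request upgrades and response downgrades.

```python
# A toy sketch (not Stripe's code) of an API version translation layer: a single
# modern implementation, plus per-version transforms that upgrade incoming
# requests and downgrade outgoing responses for callers pinned to old versions.

def upgrade_req_2019_03_14(req):
    # Hypothetical change in this version: `amount_in_cents` renamed to `amount`.
    req = dict(req)
    if "amount_in_cents" in req:
        req["amount"] = req.pop("amount_in_cents")
    return req

def downgrade_resp_2019_03_14(resp):
    resp = dict(resp)
    resp["amount_in_cents"] = resp.pop("amount")
    return resp

# Ordered oldest -> newest; each entry moves a payload one version forward/back.
TRANSFORMS = [
    ("2019-03-14", upgrade_req_2019_03_14, downgrade_resp_2019_03_14),
]

def modern_implementation(req):
    # The single maintained code path, written against the latest version only.
    return {"id": "ch_123", "amount": req["amount"], "status": "succeeded"}

def handle(req, pinned_version):
    # Upgrade the request from the caller's pinned version to the latest...
    for version, upgrade, _ in TRANSFORMS:
        if version > pinned_version:
            req = upgrade(req)
    resp = modern_implementation(req)
    # ...then downgrade the response to what that pinned version expects.
    for version, _, downgrade in reversed(TRANSFORMS):
        if version > pinned_version:
            resp = downgrade(resp)
    return resp

print(handle({"amount_in_cents": 500}, pinned_version="2018-01-01"))
# -> {'id': 'ch_123', 'status': 'succeeded', 'amount_in_cents': 500}
```

The point of the sketch is the shape of the policy: old versions exist only as transformations over the latest implementation, so there is just one code path to actually maintain.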

2 weeks ago 19 votes
library-mcp: working with Markdown knowledge bases

At work, we’ve been building agentic workflows to support our internal Delivery team on various accounting, cash reconciliation, and operational tasks. To better guide that project, I wrote my own simple workflow tool as a learning project in January. Since then, the Model Context Protocol (MCP) has become a prominent solution for writing tools for agents, and I decided to spend some time writing an MCP server over the weekend to build a better intuition. The output of that project is library-mcp, a simple MCP server that you can use locally with tools like Claude Desktop to explore Markdown knowledge bases.

I’m increasingly enamored with the idea of “datapacks” that I load into context windows with relevant work, and I am currently working to release my upcoming book in a “datapack” format that’s optimized for usage with LLMs. library-mcp allows any author to dynamically build datapacks relevant to their current question, as long as they have access to their content in Markdown files.

A few screenshots tell the story. First, here’s a list of the tools provided by this server. These tools give a variety of ways to search through content and pull that content into your context window. Each time you access a tool for the first time in a chat, Claude Desktop prompts you to verify you want that tool to operate. This is a nice feature, and I think it’s particularly important that approval is done at the application layer, not at the agent layer. If agents approve their own usage, well, security is going to be quite a mess.

Here’s an example of retrieving all the tags to figure out what I’ve written about. You could do a follow-up like, “Get me posts I’ve written about ‘python’” after seeing the tags. The interesting thing here is that you can combine retrieval and intelligence. For example, you could ask “Get me all the tags I’ve written, and find those that seem related to software architecture” and it does a good job of filtering.

Finally, here’s an example of actually using a datapack to answer a question. In this case, it’s evaluating how my writing has changed between 2018 and 2025. More practically, I’ve already experimented with friends writing their CTO onboarding plans with Your first 90 days as CTO as a datapack in the context window, and you can imagine the right datapacks allowing you to go much further. Writing a company policy with all the existing policies in a datapack, along with a document about how to write policies effectively, for example, would improve consistency and be likely to identify conflicting policies.

Altogether, I am currently enamored with the vision of useful datapacks facilitating creation, and hope that library-mcp is a useful tool for folks as we experiment our way towards this idea.
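For a sense of what a Markdown-oriented MCP server involves, here’s a minimal sketch using the official MCP Python SDK’s FastMCP helper. It is not library-mcp itself; the tool, the knowledge-base path, and the search behavior are simplified stand-ins.

```python
# A minimal sketch (not library-mcp itself) of an MCP server over a Markdown
# knowledge base, using the official MCP Python SDK's FastMCP helper.
# KB_DIR and the tool's behavior are simplified placeholders.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

KB_DIR = Path("~/notes").expanduser()  # hypothetical knowledge-base location
mcp = FastMCP("markdown-library")

@mcp.tool()
def search_posts(query: str, limit: int = 5) -> str:
    """Return the contents of Markdown files whose text contains `query`."""
    hits = []
    for path in sorted(KB_DIR.rglob("*.md")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if query.lower() in text.lower():
            hits.append(f"# {path.name}\n\n{text}")
        if len(hits) >= limit:
            break
    return "\n\n".join(hits) if hits else "No matches."

if __name__ == "__main__":
    mcp.run()  # Claude Desktop launches this over stdio
```

Registered like this, the tool shows up in Claude Desktop’s tool list and returns Markdown directly into the context window, which is essentially the datapack flow described above.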

2 weeks ago 15 votes
Refreshed StaffEng.com and a few other sites

Ahead of announcing the title and publisher of my thus-far-untitled book on engineering strategy in the next week or two, I put together a website for its content. That site is pretty much the same format as this blog, but with some improvements like better mobile rendering on / than this blog has historically had. After finishing that work, I ported the improvements back to lethain.com, but also decided to bring them to staffeng.com.

That was slightly trickier because, unlike this blog, StaffEng was historically a Gatsby app. (Why a Gatsby app? Because Calm was using Gatsby for our web frontend and I wanted to get some experience with it.) Over the weekend, I took some time to migrate to Hugo and apply the same enhancements, which you can now see in the lethain:staff-eng repository or on staffeng.com. Here’s a screenshot of the old version. Then here’s a screenshot of the updated version. Overall, I think it’s slightly easier to read, and I took it as a chance to update the various links. For example, I removed the newsletter link and pointed that to this blog’s newsletter instead, given that one’s mailing list went quiet a long time ago.

Speaking of going quiet, I also brought these updates to infraeng.dev, which is the very-stuck-in-time site for the book I may-or-may-not one day write about infrastructure engineering. That means that I now have four essentially equivalent Hugo sites running different content websites: this blog, staffeng.com, infraeng.dev, and the site for the upcoming book. All of these build and deploy automatically onto GitHub Pages, which has been an extremely easy, reliable workflow for me.

While I was working on this, someone asked me why I don’t just write my own blog server to host my blogs. The answer here is pretty straightforward. I’ve written three blog servers for my blog over the years. The first two were in Python, and the last one was in Go. They all worked well enough, but maintaining them was eventually a pain point, because they required a real build pipeline and dealing with libraries that could have security issues. Even in the best case, the containers they ran in would get end-of-lifed periodically as Ubuntu versions got deprecated.

What I’ve slowly learned from that is that, as a frequent writer, you really want your content to live somewhere that can work properly for decades. Even small maintenance costs can be prohibitive over time, and I’ve seen some good blogs disappear rather than e.g. figure out a WordPress upgrade. Individually, these are all minor, but over decades they can really add up. This is also my argument against using hosted providers: I’m sure Substack will be around in five years, but I have no idea whether it will be around in twenty years. I know that I’ll still be writing then, and will also want my previous writing to still be accessible.

2 weeks ago 14 votes

More in programming

How Cursor Indexes Codebases Fast

Merkle Trees in the real world

18 hours ago 4 votes
a whippet waypoint

Hey peoples! Tonight, some meta-words. As you know I am fascinated by compilers and language implementations, and I just want to know all the things and implement all the fun stuff: intermediate representations, flow-sensitive source-to-source optimization passes, register allocation, instruction selection, garbage collection, all of that. It started long ago with a combination of curiosity and a hubris to satisfy that curiosity. The usual way to slake such a thirst is structured higher education followed by industry apprenticeship, but for whatever reason my path sent me through a nuclear engineering bachelor’s program instead of computer science, and continuing that path was so distasteful that I noped out all the way to rural Namibia for a couple years.

Fast-forward, after 20 years in the programming industry, and having picked up some language implementation experience, a few years ago I returned to garbage collection. I have a good level of language implementation chops but never wrote a memory manager, and Guile’s performance was limited by its use of the Boehm collector. I had been on the lookout for something that could help, and when I learned of Immix it seemed to me that the only thing missing was an appropriate implementation for Guile, and hey I could do that!

I started with the idea of an MMTk-style interface to a memory manager that was abstract enough to be implemented by a variety of different collection algorithms. This kind of abstraction is important, because in this domain it’s easy to convince oneself that a given algorithm is amazing, just based on vibes; to stay grounded, I find I always need to compare what I am doing to some fixed point of reference. This GC implementation effort grew into Whippet, but as it did so a funny thing happened: the mark-sweep collector that I prototyped as a direct replacement for the Boehm collector maintained mark bits in a side table, which I realized was a suitable substrate for Immix-inspired bump-pointer allocation into holes. I ended up building on that to develop an Immix collector, but without lines: instead each granule of allocation (16 bytes for a 64-bit system) is its own line.

The Immix paper is funny, because it defines itself as a new class of collector, mark-region, fundamentally different from the three other fundamental algorithms (mark-sweep, mark-compact, and evacuation). Immix’s regions are blocks (64kB coarse-grained heap divisions) and lines (128B “fine-grained” divisions); the innovation (for me) is the optimistic evacuation discipline by which one can potentially defragment a block without a second pass over the heap, while also allowing for bump-pointer allocation. See the paper for the deets!

However what, really, are the regions referred to by mark-region? If they are blocks, then the concept is trivial: everyone has a block-structured heap these days. If they are spans of lines, well, how does one choose a line size? As I understand it, Immix’s choice of 128 bytes was to be fine-grained enough to not lose too much space to fragmentation, while also being coarse enough to be eagerly swept during the GC pause. This constraint was odd, to me; all of the mark-sweep systems I have ever dealt with have had lazy or concurrent sweeping, so the lower bound on the line size to me had little meaning. Indeed, as one reads papers in this domain, it is hard to know the real from the rhetorical; the review process prizes novelty over nuance. Anyway.

What if we cranked the precision dial to 16 instead, and had a line per granule? That was the process that led me to Nofl. It is a space in a collector that came from mark-sweep with a side table, but instead uses the side table for bump-pointer allocation. Or you could see it as an Immix whose line size is 16 bytes; it’s certainly easier to explain it that way, and that’s the tack I took in a recent paper submission to ISMM’25.

Wait what! I have a fine job in industry and a blog, why write a paper? Gosh I have meditated on this for a long time and the answers are very silly. Firstly, one of my language communities is Scheme, which was a research hotbed some 20-25 years ago, which means many practitioners—people I would be pleased to call peers—came up through the PhD factories and published many interesting results in academic venues. These are the folks I like to hang out with! This is also what academic conferences are, chances to shoot the shit with far-flung fellows. In Scheme this is fine, my work on Guile is enough to pay the intellectual cover charge, but I need more, and in the field of GC I am not a proven player. So I did an atypical thing, which is to cosplay at being an independent researcher without having first been a dependent researcher, and just solo-submit a paper. Kids: if you see yourself here, just go get a doctorate. It is not easy but I can only think it is a much more direct path to goal.

And the result? Well, friends, it is this blog post :) I got the usual assortment of review feedback, from the very sympathetic to the less so, but ultimately people were confused by leading with a comparison to Immix but ending without an evaluation against Immix. This is fair and the paper does not mention that, you know, I don’t have an Immix lying around. To my eyes it was a good paper, an 80% paper, but, you know, just a try. I’ll try again sometime.

In the meantime, I am driving towards getting Whippet into Guile. I am hoping that sometime next week I will have excised all the uses of the BDW (Boehm GC) API in Guile, which will finally allow for testing Nofl in more than a laboratory environment. Onwards and upwards!
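To make the “line per granule” idea more tangible, here’s a toy sketch (in Python purely for illustration; Whippet itself is a C library) of bump-pointer allocation into holes found by scanning a side table with one mark byte per 16-byte granule. The data structures are simplifications, not Whippet’s actual ones.

```python
# Toy model of the Nofl idea: mark state lives in a side table (one byte per
# 16-byte granule) and the allocator bump-allocates into "holes" -- runs of
# unmarked granules -- found by scanning that table. Purely illustrative.
GRANULE = 16  # bytes per granule on a 64-bit system, as in the post

def find_hole(marks, start, granules_needed):
    """Return the index of the first run of >= granules_needed unmarked granules
    at or after `start`, or None if no such hole exists."""
    i = start
    while i < len(marks):
        if marks[i]:
            i += 1
            continue
        end = i
        while end < len(marks) and not marks[end]:
            end += 1
        if end - i >= granules_needed:
            return i
        i = end
    return None

def bump_allocate(marks, cursor, size_bytes):
    """Allocate size_bytes; return (byte_offset, new_cursor) or (None, cursor)."""
    granules = -(-size_bytes // GRANULE)  # round up to whole granules
    hole = find_hole(marks, cursor, granules)
    if hole is None:
        return None, cursor  # a real collector would trigger a GC here
    for g in range(hole, hole + granules):
        marks[g] = 1  # the new object now occupies these granules
    return hole * GRANULE, hole + granules

# A 64-granule block with a few surviving (marked) objects left after a GC.
marks = [0] * 64
for survivor in (3, 4, 20):
    marks[survivor] = 1
print(bump_allocate(marks, 0, 48))  # fits in the hole before granules 3-4: (0, 3)
```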

2 days ago 5 votes
Stress And Programming

Having spent four decades as a programmer in various industries and situations, I know that modern software development processes are far more stressful than when I started. It's not simply that developing software today is more complex than it was back in 1981. In that early decade, none

2 days ago 5 votes
Espressif’s Automatic Reset

In previous articles, we saw how to use “real” UART, and looked into the trick used by Arduino to automatically reset boards when uploading firmware. Today, we’ll look into how Espressif does something similar, using even more tricks. “Real” UART on the Saola As usual, let’s first simply connect the UART adapter. Again, we connect …

2 days ago 4 votes
How I built a chatbot with my dog

Lessons for AI prompting and retrieval

2 days ago 4 votes