More from Ferd.ca
This is a re-publishing of a blog post I originally wrote for work, but wanted on my own blog as well.
AI is everywhere, and its impressive claims are leading to rapid adoption. At this stage, I’d qualify it as charismatic technology—something that under-delivers on what it promises, but promises so much that the industry still leverages it because we believe it will eventually deliver on these claims. This is a known pattern. In this post, I’ll use the example of automation deployments to go over known patterns and risks in order to provide you with a list of questions to ask about potential AI solutions. I’ll first cover a short list of base assumptions, and then borrow from scholars of cognitive systems engineering and resilience engineering to list said criteria. At the core of it is the idea that when we say we want humans in the loop, it really matters where in the loop they are.
My base assumptions
The first thing I’m going to say is that we currently do not have Artificial General Intelligence (AGI). I don’t care whether we have it in 2 years or 40 years or never; if I’m looking to deploy a tool (or an agent) that is supposed to do stuff to my production environments, it has to be able to do it now. I am not looking to be impressed, I am looking to make my life and the system better.
Another mechanism I want you to keep in mind is something called the context gap. In a nutshell, any model or automation is constructed from a narrow definition of a controlled environment, which can expand as it gains autonomy, but remains limited. By comparison, people in a system start from a broad situation and narrow definitions down and add constraints to make problem-solving tractable. One side starts from a narrow context, and one starts from a wide one—so in practice, with humans and machines, you end up seeing a type of teamwork where one constantly updates the other:
The optimal solution of a model is not an optimal solution of a problem unless the model is a perfect representation of the problem, which it never is. — Ackoff (1979, p. 97)
Because of that mindset, I will disregard all arguments of “it’s coming soon” and “it’s getting better real fast” and instead frame what current LLM solutions are shaped like: tools and automation. As it turns out, there are lots of studies about ergonomics, tool design, collaborative design, where semi-autonomous components fit into sociotechnical systems, and how they tend to fail. Additionally, I’ll borrow from the framing used by people who study joint cognitive systems: rather than looking only at the abilities of what a single person or tool can do, we’re going to look at the overall performance of the joint system. This is important because if you have a tool that is built to be operated like an autonomous agent, you can get weird results in your integration. You’re essentially building an interface for the wrong kind of component—like using a joystick to ride a bicycle. This lens will assist us in establishing general criteria about where the problems will likely be without having to test for every single one and evaluate them on benchmarks against each other.
Questions you'll want to ask
The following list of questions is meant to act as reminders—abstracting away all the theory from research papers you’d need to read—to let you think through some of the important stuff your teams should track, whether they are engineers using code generation, SREs using AIOps, or managers and execs making the call to adopt new tooling.
Are you better even after the tool is taken away? An interesting warning comes from studying how LLMs function as learning aides. The researchers found that people who trained using LLMs tended to fail tests more when the LLMs were taken away compared to people who never studied with them, except if the prompts were specifically (and successfully) designed to help people learn. Likewise, it’s been known for decades that when automation handles standard challenges, the operators expected to take over when the automation reaches its limits end up worse off and generally require more training to keep the overall system performant. While people can feel like they’re getting better and more productive with tool assistance, it doesn’t necessarily follow that they are learning or improving. Over time, there’s a serious risk that your overall system’s performance will be limited to what the automation can do—because without proper design, people keeping the automation in check will gradually lose the skills they had previously developed. Are you augmenting the person or the computer? Traditionally, successful tools tend to work on the principle that they improve the physical or mental abilities of their operator: search tools let you go through more data than you could on your own and shift demands to external memory, a bicycle more effectively transmits force for locomotion, a blind spot alert on your car can extend your ability to pay attention to your surroundings, and so on. Automation that augments users therefore tends to be easier to direct, and sort of extends the person’s abilities, rather than acting based on preset goals and framing. Automation that augments a machine tends to broaden the device’s scope and control by leveraging some known effects of its environment and successfully hiding them away. For software folks, an autoscaling controller is a good example of the latter. Neither is fundamentally better nor worse than the other—but you should figure out what kind of automation you’re getting, because they fail differently. Augmenting the user implies that they can tackle a broader variety of challenges effectively. Augmenting the computer tends to mean that when the component reaches its limits, the challenges are worse for the operator. Is it turning you into a monitor rather than helping build an understanding? If your job is to watch the tool go and then say whether it was doing a good or bad job (and maybe take over if it does a bad job), you’re going to have problems. It has long been known that people adapt to their tools, and automation can create complacency. Self-driving cars that generally drive themselves well but still require a human monitor are not effectively monitored. Instead, having AI that supports people or adds perspectives to the work an operator is already doing tends to yield better long-term results than patterns where the human learns to mostly delegate and focus elsewhere. (As a side note, this is why I tend to dislike incident summarizers. Don’t make it so people stop trying to piece together what happened! Instead, I prefer seeing tools that look at your summaries to remind you of items you may have forgotten, or that look for linguistic cues that point to biases or reductive points of view.) Does it pigeonhole what you can look at? When evaluating a tool, you should ask questions about where the automation lands: Does it let you look at the world more effectively? Does it tell you where to look in the world? Does it force you to look somewhere specific?
Does it tell you to do something specific? Does it force you to do something? This is a bit of a hybrid between “Does it extend you?” and “Is it turning you into a monitor?” The five questions above let you figure that out. As the tool becomes a source of assertions or constraints (rather than a source of information and options), the operator becomes someone who interacts with the world from inside the tool rather than someone who interacts with the world with the tool’s help. The tool stops being a tool and becomes a representation of the whole system, which means whatever limitations and internal constraints it has are then transmitted to your users. Is it a built-in distraction? People tend to do multiple tasks over many contexts. Some automated systems are built with alarms or alerts that require stealing someone’s focus, and unless they truly are the most critical thing their users could give attention to, they are going to be an annoyance that can lower the effectiveness of the overall system. What perspectives does it bake in? Tools tend to embody a given perspective. For example, AIOps tools that are built to find a root cause will likely carry the conceptual framework behind root causes in their design. More subtly, these perspectives are sometimes hidden in the type of data you get: if your AIOps agent can only see alerts, your telemetry data, and maybe your code, it will rarely be a source of suggestions on how to improve your workflows because that isn’t part of its world. In roles that are inherently about pulling context from many disconnected sources, how on earth is automation going to make the right decisions? And moreover, who’s accountable for when it makes a poor decision on incomplete data? Surely not the buyer who installed it! This is also one of the many ways in which automation can reinforce biases—not just based on what is in its training data, but also based on its own structure and what inputs were considered most important at design time. The tool can itself become a keyhole through which your conclusions are guided. Is it going to become a hero? A common trope in incident response is heroes—the few people who know everything inside and out, and who end up being necessary bottlenecks to all emergencies. They can’t go away for vacation, they’re too busy to train others, they develop blind spots that nobody can fix, and they can’t be replaced. To avoid this, you have to maintain a continuous awareness of who knows what, and crosstrain each other to always have enough redundancy. If you have a team of multiple engineers and you add AI to it, having it do all of the tasks of a specific kind means it becomes a de facto hero to your team. If that’s okay, be aware that any outages or dysfunction in the AI agent would likely have no practical workaround. You will essentially have offshored part of your ops. Do you need it to be perfect? What a thing promises to be is never what it is—otherwise AWS would be enough, and Kubernetes would be enough, and JIRA would be enough, and the software would work fine with no one needing to fix things. That just doesn’t happen. Ever. Even if it’s really, really good, it’s gonna have outages and surprises, and it’ll mess up here and there, no matter what it is. We aren’t building an omnipotent computer god, we’re building imperfect software. You’ll want to seriously consider whether the tradeoffs you’d make in terms of quality and cost are worth it, and this is going to be a case-by-case basis. 
Just be careful not to fix the problem by adding a human in the loop that acts as a monitor! Is it doing the whole job or a fraction of it? We don’t notice major parts of our own jobs because they feel natural. A classic pattern here is one of AIs getting better at diagnosing patients, except the benchmarks are usually run on a patient chart where most of the relevant observations have already been made by someone else. Similarly, we often see AI pass a test with flying colors while it still can’t be productive at the job the test represents. People in general have adopted a model of cognition based on information processing that’s very similar to how computers work (get data in, think, output stuff, rinse and repeat), but for decades, there have been multiple disciplines that looked harder at situated work and cognition, moving past that model. Key patterns of cognition are not just in the mind, but are also embedded in the environment and in the interactions we have with each other. Be wary of acquiring a solution that solves what you think the problem is rather than what it actually is. We routinely show we don’t accurately know the latter. What if we have more than one? You probably know how straightforward it can be to write a toy project on your own, with full control of every refactor. You probably also know how this stops being true as your team grows. As it stands today, a lot of AI agents are built within a snapshot of the current world: one or few AI tools added to teams that are mostly made up of people. By analogy, this would be like everyone selling you a computer assuming it were the first and only electronic device inside your household. Problems arise when you go beyond these assumptions: maybe AI that writes code has to go through a code review process, but what if that code review is done by another unrelated AI agent? What happens when you get to operations and common mode failures impact components from various teams that all have agents empowered to go fix things to the best of their ability with the available data? Are they going to clash with people, or even with each other? Humans also have that ability and tend to solve it via processes and procedures, explicit coordination, announcing what they’ll do before they do it, and calling upon each other when they need help. Will multiple agents require something equivalent, and if so, do you have it in place? How do they cope with limited context? Some changes that cause issues might be safe to roll back, some not (maybe they include database migrations, maybe it is better to be down than corrupting data), and some may contain changes that rolling back wouldn’t fix (maybe the workload is controlled by one or more feature flags). Knowing what to do in these situations can sometimes be understood from code or release notes, but some situations can require different workflows involving broader parts of the organization. A risk of automation without context is that if you have situations where waiting or doing little is the best option, then you’ll need to either have automation that requires input to act, or a set of actions to quickly disable multiple types of automation as fast as possible. Many of these may exist at the same time, and it becomes the operators’ jobs to not only maintain their own context, but also maintain a mental model of the context each of these pieces of automation has access to. The fancier your agents, the fancier your operators’ understanding and abilities must be to properly orchestrate them. 
The more surprising your landscape is, the harder it can become to manage with semi-autonomous elements roaming around. After an outage or incident, who does the learning and who does the fixing? One way to track accountability in a system is to figure out who ends up having to learn lessons and change how things are done. It’s not always the same people or teams, and generally, learning will happen whether you want it or not. This is more of a rhetorical question right now, because I expect that in most cases, when things go wrong, whoever is expected to monitor the AI tool is going to have to steer it in a better direction and fix it (if they can); if it can’t be fixed, then the expectation will be that the automation, as a tool, will be used more judiciously in the future. In a nutshell, if the expectation is that your engineers are going to be doing the learning and tweaking, your AI isn’t an independent agent—it’s a tool that cosplays as an independent agent.
Do what you will—just be mindful
All in all, none of the above questions flat out say you should not use AI, nor where exactly in the loop you should put people. The key point is that you should ask that question and be aware that just adding whatever to your system is not going to substitute workers away. It will, instead, transform work and create new patterns and weaknesses. Some of these patterns are known and well-studied. We don’t have to go rushing to rediscover them all through failures as if we were the first to ever automate something. If AI ever gets so good and so smart that it’s better than all your engineers, it won’t make a difference whether you only adopt it once it has actually gotten that good. In the meantime, these things do matter and have real impacts, so please design your systems responsibly.
If you’re interested to know more about the theoretical elements underpinning this post, the following references—on top of whatever was already linked in the text—might be of interest:
Books:
Joint Cognitive Systems: Foundations of Cognitive Systems Engineering by Erik Hollnagel
Joint Cognitive Systems: Patterns in Cognitive Systems Engineering by David D. Woods
Cognition in the Wild by Edwin Hutchins
Behind Human Error by David D. Woods, Sidney Dekker, Richard Cook, Leila Johannesen, Nadine Sarter
Papers:
Ironies of Automation by Lisanne Bainbridge
The French-Speaking Ergonomists’ Approach to Work Activity by Daniellou
How in the World Did We Ever Get into That Mode? Mode Error and Awareness in Supervisory Control by Nadine Sarter
Can We Ever Escape from Data Overload? A Cognitive Systems Diagnosis by David D. Woods
Ten Challenges for Making Automation a “Team Player” in Joint Human-Agent Activity by Gary Klein and David D. Woods
MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination by Sidney Dekker
Managing the Hidden Costs of Coordination by Laura Maguire
Designing for Expertise by David D. Woods
The Impact of Generative AI on Critical Thinking by Lee et al.
This blog post originally appeared on the LFI blog, but I decided to post it on my own as well. Every organization has to contend with limits: scarcity of resources, people, attention, or funding; friction from scaling; inertia from previous code bases; or a quickly shifting ecosystem. And of course there are more, like time, quality, effort, or how much can fit in anyone's mind. There are so many ways for things to go wrong; your ongoing success comes in no small part from the people within your system constantly navigating that space, making sacrifice decisions and trading off some things to buy runway elsewhere. From time to time, these come to a head in what we call a goal conflict, where two important attributes clash with each other. These are not avoidable, and in fact are just assumed to be so in many cases, such as "cheap, fast, and good; pick two". But somehow, when it comes to more specific details of our work, that clarity hides itself or gets obscured by the veil of normative judgments. It is easy after an incident to think of what people could have done differently, of signals they should have listened to, or of consequences they would have foreseen had they just been a little bit more careful. From this point of view, the idea of reinforcing desired behaviors through incentives, both positive (bonuses, public praise, promotions) and negative (demerits, re-certification, disciplinary reviews), can feel attractive. (Do note here that I am specifically talking of incentives around specific decision-making or performance, rather than broader ones such as wages, perks, overtime or hazard pay, or employment benefits, even though effects may sometimes overlap.) But this perspective itself is a trap. Hindsight bias—where we overestimate how predictable outcomes were after the fact—and its close relative outcome bias—where knowing the results after the fact tints how we judge the decisions made—both serve as good reminders that we should ideally look at decisions as they were being made, with the information known and pressures present then. This is generally made easier by assuming people were trying to do a good job and get good results; a judgment that seems to make no sense asks us to figure out how it seemed reasonable at the time. Events were likely challenging, resources were limited (including cognitive bandwidth), and context was probably uncertain. If you were looking for goal conflicts and difficult trade-offs, this is certainly a promising area in which they can be found. Taking people's desire for good outcomes for granted forces you to shift your perspective. It demands you move away from thinking that somehow more pressure toward succeeding would help. It makes you ask what aid could be given to navigate the situation better, how the context could be changed for the trade-offs to be negotiated differently next time around. It lets us move away from wondering how we can prevent mistakes and move toward how we could better support our participants. Hell, the idea of rewarding desired behavior feels enticing even in cases where your review process does not fall into the traps mentioned here, where you take a more just approach. But the core idea here is that you can't really expect different outcomes if the pressures and goals that gave rise to them don't change either.
During incidents, the priorities already in play are things like "I've got to fix this to keep this business alive", stabilizing the system to prevent large cascades, or trying to prevent harm to users or customers. They come with stress, adrenaline, and sometimes a sense of panic or shock. These are likely to rank higher in the minds of people than “what’s my bonus gonna be?” or “am I losing a gift card or some plaque if I fail?” Adding incentives, whether positive or negative, does not clarify the situation. It does not address goal conflicts. It adds more variables to the equation, complexifies the situation, and likely makes it more challenging. Chances are that people will keep making the same decisions they would have made (and have been making continuously) in the past—the ones that were already obtaining the desired outcomes. What will change instead is what they report later, in subtle ways: they will either tweak or hide information to protect themselves, or gradually lose trust in the process you've put in place. These effects can be amplified when teams are given hard-to-meet abstract targets such as lowering incident counts, which can actively interfere with incident response by creating new decision points in people's mental flows. If responders have to discuss and classify the nature of an incident to fit an accounting system unrelated to solving it right now, their response is likely to be slower and more challenging. This is not to say all attempts at structure and classification would hinder proper response, though. Clarifying the critical elements to salvage first, creating cues and language for patterns that will be encountered, and agreeing on strategies that support effective coordination across participants can all be really productive. It needs to be done with a deeper understanding of how your incident response actually works, and that sometimes means unpleasant feedback about how people perceive your priorities. I've been in reviews where people stated things like "we know that we get yelled at more for delivering features late than for broken code, so we just shipped broken code since we were out of time", or admitted ignoring execs who made a habit of coming down from above to scold employees into fixing things they were pressured into doing anyway. These can be hurtful for an organization to consider, but they are nevertheless a real part of how people deal with exceptional situations. By trying to properly understand the challenges, by clarifying the goal conflicts that arise in systems and result in sometimes frustrating trade-offs, and by making learning from these experiences an objective of its own, we can hopefully make things a bit better. Grounding our interventions within a richer, more naturalistic understanding of incident response and all its challenges is a small—albeit critical—part of it all.
From time to time, people ask me what I use to power my blog, maybe because they like the minimalist form it has. I tell them it’s a bad idea and that I use the Erlang compiler infrastructure for it, and they agree to look elsewhere. After launching my notes section, I had to fully clean up my engine. I thought I could write about how it works because it’s fairly unique and interesting, even if you should probably not use it.
The Requirements
I first started my blog 14 years ago. It had roughly the same structure as it does at the time of writing this: a list of links and text with nothing else. It did poorly with mobile (which was still sort of new back then, but which I should really work to improve these days), but okay with screen readers. It’s gotta be minimal enough to load fast on old devices. There’s absolutely nothing dynamic on here. No JavaScript, no comments, no tracking, and I’m pretty sure I’ve disabled most logging and retention. I write into a void, either transcribing talks or putting down rants I’ve repeated 2-3 times to other people so it becomes faster to just link things in the future. I mostly don’t know what gets read or not, but over time I found this kept the experience better for me than chasing readers or views. Basically, a static site is the best technology for me, but from time to time it’s nice to be able to update the layout or add some features (like syntax highlighting or an RSS feed), so it needs to be better than flat HTML files. Internally it runs with erlydtl, an Erlang implementation of Django Templates, which I really liked a decade and a half ago. It supports template inheritance, which is really neat to minimize files I have to edit. All I have is a bunch of files containing my posts, a few of these templates, and a little bit of Rebar3 config tying them together. There are some features that erlydtl doesn’t support but that I wanted anyway, notably syntax highlighting (without JavaScript), markdown support, and including subsections of HTML files (a weird corner case to support RSS feeds without powering them with a database). The feature I want to discuss here is “only rebuild what you strictly need to,” which I covered by using the Rebar3 compiler.
Rebar3’s Compiler
Rebar3 is the Erlang community’s build tool, which Tristan and I launched over 10 years ago, as a successor to the classic rebar 2.x script. A funny requirement for Rebar3 is that Erlang has multiple compilers: one for Erlang, but also one for MIB files (for SNMP), the Leex lexical analyzer generator, and the Yecc parser generator. It is also plugin-friendly in order to compile Elixir modules and other BEAM languages, like LFE, or very early versions of Gleam. We needed to support at least four compilers out of the box, and to properly track everything such that we only rebuild what we must. This is done using a Directed Acyclic Graph (DAG) to track all files, including build artifacts. The Rebar3 compiler infrastructure works by breaking up the flow of compilation into a generic and a specific subset.
The specific subset will:
Define which file types and paths must be considered by the compiler.
Define which files are dependencies of other files.
Be given a graph of all files and their artifacts with their last modified times (and metadata), and specify which of them need rebuilding.
Compile individual files and provide metadata to track the artifacts.
The generic subset will:
Scan files and update their timestamps in a graph for the last modifications.
Use the dependency information to complete the dependency graph.
Propagate the timestamps of source file modifications transitively through the graph (assume you update header A, included by header B, applied by macro C, on file D; then B, C, and D are all marked as modified as recently as A in the DAG).
Pass this updated graph to the specific part to get a list of files to build (usually by comparing which source files are newer than their artifacts, but also if build options changed).
Schedule sequential or parallel compilation based on what the specific part specified.
Update the DAG with the artifacts and build metadata, and persist the data to disk.
In short, you build a compiler plugin that can name directories, file extensions, dependencies, and can compare timestamps and metadata. Then make sure this plugin can compile individual files, and the rest is handled for you.
The blog engine
Since I’m currently the most active Rebar3 maintainer, I’ve definitely got to maintain the compiler infrastructure described earlier. Since my blog needed to rebuild the fewest static files possible and I already used a template compiler, plugging it into Rebar3 became the solution demanding the least effort. It requires a few hundred lines of code to write the plugin and a bit of config looking like this:

{blog3r, [
    {vars, [
        {url, [
            {base, "https://ferd.ca/"},
            {notes, "https://ferd.ca/notes/"},
            {img, "https://ferd.ca/static/img/"},
            ...
        ]}
    ]},
    %% Main site
    {index, #{template => "index.tpl", out => "index.html", section => main}},
    {index, #{template => "rss.tpl", out => "feed.rss", section => main}},
    %% Notes section
    {index, #{template => "index-notes.tpl", out => "notes/index.html", section => notes}},
    {index, #{template => "rss-notes.tpl", out => "notes/feed.rss", section => notes}},
    %% All sections' pages.
    {sections, #{
        main => {"posts/", "./", [
            {"Mon, 02 Sep 2024 11:00:00 EDT", "My Blog Engine is the Erlang Build Tool", "blog-engine-erlang-build-tool.md.tpl"},
            {"Thu, 30 May 2024 15:00:00 EDT", "The Review Is the Action Item", "the-review-is-the-action-item.md.tpl"},
            {"Tue, 19 Mar 2024 11:00:00 EDT", "A Commentary on Defining Observability", "a-commentary-on-defining-observability.md.tpl"},
            {"Wed, 07 Feb 2024 19:00:00 EST", "A Distributed Systems Reading List", "distsys-reading-list.md.tpl"},
            ...
        ]},
        notes => {"notes/", "notes/", [
            {"Fri, 16 Aug 2024 10:30:00 EDT", "Paper: Psychological Safety: The History, Renaissance, and Future of an Interpersonal Construct", "papers/psychological-safety-interpersonal-construct.md.tpl"},
            {"Fri, 02 Aug 2024 09:30:00 EDT", "Atomic Accidents and Uneven Blame", "atomic-accidents-and-uneven-blame.md.tpl"},
            {"Sat, 27 Jul 2024 12:00:00 EDT", "Paper: Moral Crumple Zones", "papers/moral-crumple-zones.md.tpl"},
            {"Tue, 16 Jul 2024 19:00:00 EDT", "Hutchins' Distributed Cognition in The Wild", "hutchins-distributed-cognition-in-the-wild.md.tpl"},
            ...
        ]}
    }}
]}.

And blog post entry files like this:

{% extends "base.tpl" %}
{% block content %}
<p>I like cats. I like food. <br />
I don't especially like catfood though.</p>

{% markdown %}
### Have a subtitle

And then _all sorts_ of content!

- lists
- other lists
- [links]({{ url.base }}section/page))
- and whatever fits a demo

> Have a quote to close this out
{% endmarkdown %}
{% endblock %}

These call out to a parent template (see base.tpl for the structure) to inject their content. The whole site gets generated that way.
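To make the split between the generic and specific parts more concrete, here is a minimal sketch of what the specific half of such a plugin can look like. It is shaped after Rebar3's rebar_compiler behaviour, but treat the callback names and return shapes as approximations from memory rather than the exact interface; the module name and the helper below are illustrative, not the actual blog3r code.

-module(blog3r_compiler_sketch).
%% Illustrative only: approximates the rebar_compiler callbacks; the real
%% interface and the real blog3r plugin may differ in their details.
-export([context/1, needed_files/4, dependencies/3, compile/4]).

%% 1. Which file types and paths this compiler cares about, and where artifacts go.
context(_AppInfo) ->
    #{src_dirs     => ["posts", "templates"],
      include_dirs => ["templates"],
      src_ext      => ".tpl",
      out_mappings => [{".html", "out"}]}.

%% 2. Which files a source depends on: here, the parent template named in
%%    {% extends "..." %}, so that editing base.tpl marks every page as stale.
dependencies(Source, _SourceDir, IncludeDirs) ->
    {ok, Bin} = file:read_file(Source),
    case re:run(Bin, "{%\\s*extends\\s+\"([^\"]+)\"", [{capture, all_but_first, list}]) of
        {match, [Parent]} -> [filename:join(Dir, Parent) || Dir <- IncludeDirs];
        nomatch -> []
    end.

%% 3. Given the graph-refreshed file list, pick what needs rebuilding: any
%%    source whose artifact is older than the source itself.
needed_files(_Graph, FoundFiles, [{".html", OutDir}], AppInfo) ->
    Opts = rebar_app_info:opts(AppInfo),
    Stale = [F || F <- FoundFiles,
                  filelib:last_modified(F) > filelib:last_modified(artifact(F, OutDir))],
    {{[], Opts}, {Stale, Opts}}.

%% 4. Compile one file; the generic part records the artifact in the DAG.
compile(Source, [{".html", OutDir}], _Config, _Opts) ->
    {ok, Rendered} = file:read_file(Source), % the real plugin renders via erlydtl here
    ok = file:write_file(artifact(Source, OutDir), Rendered).

artifact(Source, OutDir) ->
    filename:join(OutDir, filename:basename(Source, ".tpl") ++ ".html").

The generic side then takes care of everything in the second list above: scanning, completing and propagating the DAG, scheduling the builds, and persisting the graph to disk.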
Even compiler error messages are lifted from the Rebar3 libraries (although I haven't wrapped everything perfectly yet), with the following coming up when I forgot to close an if tag before closing a for loop:

$ rebar3 compile
===> Verifying dependencies...
===> Analyzing applications...
===> Compiling ferd_ca
===> template error:
     ┌─ /home/ferd/code/ferd-ca/templates/rss.tpl:
     │
  24 │     {% endfor %}
     │     ╰── syntax error before: "endfor"
===> Compiling templates/rss.tpl failed

As you can see, I build my blog by calling rebar3 compile, the same command as I do for any Erlang project. I find it interesting that on one hand, this is pretty much the best design possible for me given that it represents almost no new code, no new tools, and no new costs. It’s quite optimal. On the other hand, it’s possibly the worst possible tool chain imaginable for a blog engine for almost anybody else.
2024/05/30 The Review Is the Action Item I like to consider running an incident review to be its own action item. Other follow-ups emerging from it are a plus, but the point is to learn from incidents, and the review gives room for that to happen. This is not surprising advice if you’ve read material from the LFI community and related disciplines. However, there are specific perspectives required to make this work, and some assumptions necessary for it, without which things can break down. How can it work? In a more traditional view, the system is believed to be stable, then disrupted into an incident. The system gets stabilized, and we must look for weaknesses that can be removed or barriers that could be added in order to prevent such disruption in the future. Other perspectives for systems include views where they are never truly stable. Things change constantly; uncertainty is normal. Under that lens, systems can’t be forced into stability by control or authority. They can be influenced and adapt on an ongoing basis, and possibly kept in balance through constant effort. Once you adopt a socio-technical perspective, the hard-to-model nature of humans becomes a desirable trait to cope with chaos. Rather than a messy variable to stamp out, you’ll want to give them more tools and ways to keep all the moving parts of the subsystems going. There, an incident review becomes an arena where misalignment in objectives can be repaired, where strategies and tactics can be discussed, where mental models can be corrected and enriched, where voices can be heard when they wouldn’t be, and where we are free to reflect on the messy reality that drove us here. This is valuable work, and establishing an environment where it takes place is a key action item on its own. People who want to keep things working will jump on this opportunity if they see any value in it. Rather than giving them tickets to work on, we’re giving them a safe context to surface and discuss useful information. They’ll carry that information with them in the future, and it may influence the decisions they make, here and elsewhere. If the stories that come out of reviews are good enough, they will be retold to others, and the organization will have learned something. That belief people will do better over time as they learn, to me, tends to be worth more than focusing on making room for a few tickets in the backlog. How can it break down? One of the unnamed assumptions with this whole approach is that teams should have the ability to influence their own roadmap and choose some of their own work. A staunchly top-down organization may leverage incident reviews as a way to let people change the established course with a high priority. That use of incident reviews can’t be denied in these contexts. We want to give people the information and the perspectives they need to come up with fixes that are effective. Good reviews with action items ought to make sense, particularly in these orgs where most of the work is normally driven by folks outside of the engineering teams. But if the maintainers do not have the opportunity to schedule work they think needs doing outside of the aftermath of an incident—work that is by definition reactive—then they have no real power to schedule preventive work on their own. And so that’s a place where learning being its own purpose breaks down: when the learnings can’t be applied. 
Maybe it feels like “good” reviews focused on learning apply to a surprisingly narrow set of teams then, because most teams don’t have that much control. The question here really boils down to “who is it that can apply things they learned, and when?” If the answer is “almost no one, and only when things explode,” that’s maybe a good lesson already. That’s maybe where you’d want to start remediating. Note that even this perspective is a bit reductionist, which is also another way in which learning reviews may break down. By narrowing knowledge’s utility only to when it gets applied in measurable scheduled work, we stop finding value outside of this context, and eventually stop giving space for it. It’s easy to forget that we don’t control what people learn. We don’t choose what the takeaways are. Everyone does it for themselves based on their own lived experience. More importantly, we can’t stop people from using the information they learned, whether at work or in their personal life. Lessons learned can be applied anywhere and any time, and they can become critically useful at unexpected times. Narrowing the scope of your reviews such that they only aim to prevent bad accidents indirectly hinders creating fertile grounds for good surprises as well.
Going for better
While the need for action items is almost always there, a key element of improving incident reviews is to not make corrections the focal point. Consider the incident review as a preliminary step, the data-gathering stage before writing down the ideas. You’re using recent events as a study of what’s surprising within the system, but also of how it is that things usually work well. Only once that perspective is established does it make sense to start thinking of ways of modifying things. Try it with only one or two reviews at first. Minor incidents are usually good, because following the methods outlined in docs like the Etsy Debriefing Facilitation Guide and the Howie guide tends to reveal many useful insights in incidents people would have otherwise overlooked as not very interesting. As you and your teams see value, expand to more and more incidents. It also helps to set the tone before and during the meetings. I’ve written a set of “ground rules” we use at Honeycomb and that my colleague Lex Neva has transcribed, commented, and published. See if something like that could adequately frame the session. If abandoning the idea of action items seems irresponsible or impractical to you, keep them. But keep them with some distance; the common tip given by the LFI community is to schedule another meeting after the review to discuss them in isolation. At some point, that follow-up meeting may become disjoint from the reviews. There’s not necessarily a reason why every incident needs a dedicated set of fixes (longer-term changes impacting them could already be in progress, for example), nor is there a reason to wait for an incident to fix things and improve them. That’s when you decouple understanding from fixing, and the incident review becomes its own sufficient action item.
More in programming
The world is waking to the fact that talk therapy is neither the only nor the best way to cure a garden-variety petite depression. Something many people will encounter at some point in their lives. Studies have shown that exercise, for example, is a more effective treatment than talk therapy (and pharmaceuticals!) when dealing with such episodes. But I'm just as interested in the role building competence can have in warding off the demons. And partly because of this meme: I've talked about it before, but I keep coming back to the fact that it's exactly backwards. That signing up for an educational quest into Linux, history, or motorcycle repair actually is an incredibly effective alternative to therapy! At least for men who'd prefer to feel useful over being listened to, which, in my experience, is most of them. This is why I find it so misguided when people who undertake those quests sell their journey short with self-effacing jibes about how much of an unattractive nerd it makes them to care about their hobby. Mihaly Csikszentmihalyi detailed back in 1990 how peak human happiness arrives exactly in these moments of flow when your competence is stretched by a difficult-but-doable challenge. Don't tell me those endorphins don't also help counter the darkness. But what seals the deal is that these pursuits of competence usually offer a great opportunity for community as well. I've found time and again that people are starved for the kind of topic-based connections that, say, learning about Linux offers in spades. You're not just learning, you're learning with others. That is a time-tested antidote to depression: Forming and cultivating meaningful human connections. Yes, doing so over the internet isn't as powerful as doing it in person, but it's still powerful. It still offers community, involvement, and plenty of invitation to carry a meaningful burden. Open source nails this trifecta of motivations to a T. There are endless paths of discovery and mastery available. There are tons of fellow travelers with whom to connect and collaborate. And you'll find an unlimited number of meaningful burdens in maintainerships open for the taking. So next time you see that meme, you should cheer that the talk therapy table is empty. Leave it available for the severe, pathological cases that exercise and the pursuit of competence can't cure. Most people just don't need therapy, they need purpose, they need competence, they need exercise, and they need community.
The excellent-but-defunct blog Programming in the 21st Century defines "puzzle languages" as languages where part of the appeal is in figuring out how to express a program idiomatically, like a puzzle. As examples, he lists Haskell, Erlang, and J. All puzzle languages, the author says, have an "escape" out of the puzzle model that is pragmatic but stigmatized. But many mainstream languages have escape hatches, too. Languages have a lot of properties. One of these properties is the language's capabilities, roughly the set of things you can do in the language. Capability is desirable but comes into conflict with a lot of other desirable properties, like simplicity or efficiency. In particular, reducing the capability of a language means that all remaining programs share more in common, meaning there's more assumptions the compiler and programmer can make ("tractability"). Assumptions are generally used to reason about correctness, but can also be about things like optimization: J's assumption that everything is an array leads to high-performance "special combinations". Rust is the most famous example of a mainstream language that trades capability for tractability.1 Rust has a lot of rules designed to prevent common memory errors, like keeping a reference to deallocated memory or modifying memory while something else is reading it. As a consequence, there's a lot of things that cannot be done in (safe) Rust, like interfacing with an external C function (as it doesn't have these guarantees). To do this, you need to use unsafe Rust, which lets you do additional things forbidden by safe Rust, such as dereferencing a raw pointer. Everybody tells you not to use unsafe unless you absolutely 100% know what you're doing, and possibly not even then. Sounds like an escape hatch to me! To extrapolate, an escape hatch is a feature (either in the language itself or a particular implementation) that deliberately breaks core assumptions about the language in order to add capabilities. This explains both Rust and most of the so-called "puzzle languages": they need escape hatches because they have very strong conceptual models of the language, which lead to lots of assumptions about programs. But plenty of "kitchen sink" mainstream languages have escape hatches, too: Some compilers let C++ code embed inline assembly. Languages built on .NET or the JVM have some sort of interop with C# or Java, and many of those languages make assumptions about programs that C#/Java do not. The SQL language has stored procedures as an escape hatch, and vendors create a second escape hatch of user-defined functions. Ruby lets you bypass any form of encapsulation with send. Frameworks have escape hatches, too! React has an entire page on them. (Does eval in interpreted languages count as an escape hatch? It feels different, but it does add a lot of capability. Maybe they don't "break assumptions" in the same way?)
The problem with escape hatches
In all languages with escape hatches, the rule is "use this as carefully and sparingly as possible", to the point where a messy solution without an escape hatch is preferable to a clean solution with one. Breaking a core assumption is a big deal! If the language is operating as if it's still true, it's going to do incorrect things. I recently had this problem in a TLA+ contract. TLA+ is a language for modeling complicated systems, and assumes that the model is a self-contained universe. The client wanted to use TLA+ to test a real system.
The model checker should send commands to a test device and check that the next states were the same. This is straightforward to set up with the IOExec escape hatch.2 But the model checker assumed that state exploration was pure and that it could skip around the state space randomly, meaning it would do things like set x = 10, then skip to set x = 1, then skip back to inc x; assert x == 11. Oops! We eventually found workarounds but it took a lot of clever tricks to pull off. I'll probably write up the technique when I'm less busy with The Book. The other problem with escape hatches is that the rest of the language is designed around not having said capabilities, meaning it can't support them as well as a language designed for them from the start. Even if your escape hatch code is clean, it might not cleanly integrate with the rest of your code. This is why people complain about unsafe Rust so often.
It should be noted though that all languages with automatic memory management are trading capability for tractability, too. If you can't dereference pointers, you can't dereference null pointers. ↩
From the Community Modules (which come default with the VSCode extension). ↩
I wrote a lot of blog posts over my time at Parse, but they all evaporated after Facebook killed the product. Most of them I didn’t care about (there were, ahem, a lot of status updates and “service reliability announcements”, but I was mad about losing this one in particular, a deceptively casual retrospective of […]
It's only been two months since I discovered the power and joy of this new generation of mini PCs. My journey started out with a Minisforum UM870, which is a lovely machine, but since then, I've come to really appreciate the work of Beelink. In a crowded market for mini PCs, Beelink stands out with their superior build quality, their class-leading cooling and silent operation, and their use of fully Linux-compatible components (the UM870 shipped with a MediaTek bluetooth/wifi card that doesn't work with Linux!). It's the complete package at three super compelling price points. For $289, you can get the EQR5, which runs an 8-core AMD Zen3 5825U that puts out 1723/6419 in Geekbench, and comes with 16GB RAM and 500GB NVMe. I've run Omarchy on it, and it flies. For me, the main drawback was the lack of a DisplayPort, which kept me from using it with an Apple display, and the fact that the SER8 exists. But if you're on a budget, and you're fine with HDMI only, it's a wild bargain. For $499, you can get the SER8. That's the price-to-performance sweet spot in the range. It uses the excellent 8-core AMD Zen4 8745HS that puts out 2595/12985 in Geekbench (~M4 multi-core numbers!), and runs our HEY test suite with 30,000 assertions almost as fast as an M4 Max! At that price, you get 32GB RAM + 1TB NVMe, as well as a DisplayPort, so it works with both the Apple 5K Studio Display and the Apple 6K XDR Display (you just need the right cable). Main drawback is limited wifi/bluetooth range, but Beelink tells me there's a fix on the way for that. For $929, you can get the SER9 HX370. This is the top dog in this form factor. It uses the incredible 12-core AMD Zen5 HX370 that hits 2990/15611 in Geekbench, and runs our HEY test suite faster than any Apple M chip I've ever tested. The built-in graphics are also very capable. Enough to play a ton of games at 1080p. It also sorted the SER8's current wifi/bluetooth range issue. I ran the SER8 as my main computer for a while, but now I'm using the SER9, and I just about never feel like I need anything more. Yes, the Framework Desktop, with its insane AMD Max 395+ chip, is even more bonkers. It almost cuts the HEY test suite time in half(!), but it's also $1,795, and not yet generally available. (But preorders are open for the ballers!). Whichever machine fits your budget, it's frankly incredible that we have this kind of performance and efficiency available at these prices with all of these Beelinks drawing less than 10 watt at idle and no more than 100 watt at peak! So it's no wonder that Beelink has been selling these units like hotcakes since I started talking about them on X as the ideal, cheap Omarchy desktop computers. It's such a symbiotic relationship. There are a ton of programmers who have become Linux curious, and Beelink offers no-brainer options to give that a try at a bargain. I just love when that happens. The perfect intersection of hardware, software, and timing. That's what we got here. It's a Beelink, baby! (And no, before you ask, I don't get any royalties, there's no affiliate link, and I don't own any shares in Beelink. I just love discovering great technology and seeing people start their Linux journey with an awesome, affordable computer!)
“One of the comments that sparked this article,” our founder Paul McMahon told me, “was someone saying, ‘I don’t really want to do networking because it seems kind of sleazy. I’m not that kind of person.’” I guess that’s the key misconception people have when they hear ‘networking.’ They think it’s like some used car salesman kind of approach where you have to go and get something out of the person. That’s a serious error, according to Paul, and it worries him that so many developers share that mindset. Instead, Paul considers networking a mix of making new friends, growing a community, and enjoying serendipitous connections that might not bear fruit until years later, but which could prove to be make-or-break career moments. It’s something that you don’t get quick results on and that doesn’t make a difference at all until it does. And it’s just because of the one connection you happen to make at an event you went to once, this rainy Tuesday night when you didn’t really feel like going, but told yourself you have to go—and that can make all the difference. As Paul has previously shared, he can attribute much of his own career success—and, interestingly enough, his peace of mind—to the huge amount of networking he’s done over the years. This is despite the fact that Paul is, in his own words, “not such a talkative person when it comes to small talk or whatever.” Recently I sat down with Paul to discuss exactly how developers are networking “wrong,” and how they can get it right instead. In our conversation, we covered: What networking really is, and why you need to start ASAP Paul’s top tip for anyone who wants to network Advice for networking as an introvert Online vs offline networking—which is more effective? And how to network in Japan, even when you don’t speak Japanese What is networking, really, and why should you start now? “Sometimes,” Paul explained, “people think of hiring fairs and various exhibitions as the way to network, but that’s not networking to me. It’s purely transactional. Job seekers are focused on getting interviews, recruiters on making hires. There’s no chance to make friends or help people outside of your defined role.” Networking is getting to know other people, understanding how maybe you can help them and how they can help you. And sometime down the road, maybe something comes out of it, maybe it doesn’t, but it’s just expanding your connections to people. One reason developers often avoid or delay networking is that, at its core, networking is a long game. Dramatic impacts on your business or career are possible—even probable—but they don’t come to fruition immediately. “A very specific example would be TokyoDev,” said Paul. “One of our initial clients that posted to the list came through networking.” Sounds like a straightforward result? It’s a bit more complicated than that. “There was a Belgian guy, Peter, whom I had known through the Ruby and tech community in Japan for a while,” Paul explained. “We knew each other, and Peter had met another Canadian guy, Jack, who [was] looking to hire a Ruby developer. “So Peter knew about me and TokyoDev and introduced me to Jack, and that was the founder of Degica, who became one of our first clients. . . . And that just happened because I had known Peter through attending events over the years.” Another example is how Paul’s connection to the Ruby community helped him launch Doorkeeper. 
His participation in Ruby events played a critical role in helping the product succeed, but only because he’d already volunteered at them for years. “Because I knew those people,” he said, “they wanted to support me, and I guess they also saw that I was genuine about this stuff, and I wasn’t participating in these events with some big plan about, ‘If I do this, then they’re going to use my system,’ or whatever. Again, it was people helping each other out.” These delayed and indirect impacts are why Paul thinks you should start networking right now. “You need to do it in advance of when you actually need it,” he said. “People say they’re looking for a job, and they’re told ‘You could network!’ Yeah, that could potentially help, but it’s almost too late.” You should have been networking a couple years ago when you didn’t need to be doing it, because then you’ve already built up the relationships. You can have this karma you’re building over time. . . . Networking has given me a lot of wealth. I don’t mean so much in money per se, but more it’s given me a safety net. “Now I’m confident,” he said, “that if tomorrow TokyoDev disappeared, I could easily find something just through my connections. I don’t think I’ll, at least in Japan, ever have to apply for a job again.” “I think my success with networking is something that anyone can replicate,” Paul went on, “provided they put in the time. I don’t consider myself to be especially skilled in networking, it’s just that I’ve spent over a decade making connections with people.” How to network (the non-sleazy way) Paul has a fair amount of advice for those who want to network in an effective, yet genuine fashion. His first and most important tip: Be interested in other people. Asking questions rather than delivering your own talking points is Paul’s number one method for forging connections. It also helps avoid those “used car salesman” vibes. “ That’s why, at TokyoDev,” Paul explained, “we typically bar recruiters from attending our developer events. Because there are these kinds of people who are just going around wanting to get business cards from everyone, wanting to get their contact information, wanting to then sell them on something later. It’s quite obvious that they’re like that, and that leads to a bad environment, [if] someone’s trying to sell you on something.” Networking for introverts The other reason Paul likes asking questions is that it helps him to network as an introvert. “That’s actually one of the things that makes networking easier for someone who isn’t naturally so talkative. . . . When you meet new people, there are some standard questions you can ask them, and it’s like a blank slate where you’re filling in the details about this person.” He explained further that going to events and being social can be fun for him, but he doesn’t exactly find it relaxing. “When it comes to talking about something I’m really interested in, I can do it, but I stumble in these social situations. Despite that, I think I have been pretty successful when it comes to networking.” “What has worked well for me,” he went on, “has been putting myself in those situations that require me to do some networking, like going to an event.” Even if you aren’t that proactive, you’re going to meet a couple of people there. You’re making more connections than you would if you stayed home and played video games. The more often you do it, the easier it gets, and not just because of practice: there’s a cumulative effect to making connections. 
“Say you’re going to an event, and maybe last time you met a couple of people, you could just say ‘Hi’ to those people again. And maybe they are talking with someone else they can introduce you to.” Or, you can be the one making the introductions. “What has also worked well for me, is that I like to introduce other people,” Paul said. It’s always a great feeling when I’m talking to someone at an event, and I hear about what they’re doing or what they’re wanting to do, and then I can introduce someone else who maybe matches that. “And it’s also good for me, then I can just be kind of passive there,” Paul joked. “I don’t have to be out there myself so much, if they’re talking to each other.” His last piece of advice for introverts is somewhat counterintuitive. “Paradoxically,” he told me, “it helps if you’re in some sort of leadership position.” If you’re an introvert, my advice would be one, just do it, but then also look for opportunities for helping in some more formal capacity, whether it’s organizing an event yourself, volunteering at an event . . . [or] making presentations. “Like for me, when I’ve organized a Tokyo Rubyist Meetup,” Paul said, “[then] naturally as the organizer there people come to talk to me and ask me questions. . . . And it’s been similar when I’ve presented at an event, because then people have something that they know that you know something about, and maybe they want to know more about it, and so then they can ask you more questions and lead the conversation that way.” Offline vs online networking When it comes to offline vs online networking, Paul prefers offline. In-person events are great for networking because they create serendipity. You meet people through events you wouldn’t meet otherwise just because you’re in the same physical space as them. Those time and space constraints add pressure to make conversation—in a good way. “It’s natural when you are meeting someone, you ask about what they’re doing, and you make that small connection there. Then, after seeing them at multiple different events, you get a bit of a stronger connection to them.” “Physical events are [also] much more constrained in the number of people, so it’s easier to help people,” he added. “Like with TokyoDev, I can’t help every single person online there, but if someone meets me at the event [and is] asking for advice or something like that, of course I’ve got to answer them. And I have more time for them there, because we’re in the same place at the same time.” As humans, we’re more likely to help other people we have met in person, I think just because that’s how our brains work. That being said, Paul’s also found success with online networking. For example, several TokyoDev contributors—myself included—started working with Paul after interacting with him online. I commented on TokyoDev’s Dungeons and Dragons article, which led to Paul checking my profile and asking to chat about my experience. Scott, our community moderator and editor, joined TokyoDev in a paid position after being active on the TokyoDev Discord. Michelle was also active on the Discord, and Paul initially asked her to write an article for TokyoDev on being a woman software engineer in Japan, before later bringing her onto the team. Key to these results was that they involved no stereotypical “networking” strategies on either side: we all connected simply by playing a role in a shared, online community. 
Consistent interactions with others, particularly over a longer period of time, build mutual trust and understanding. Your online presence can help with offline networking.

As TokyoDev became bigger and more people knew about me through my blog, it became a lot easier to network with people at events because they’re like, ‘Hey, you’re Paul from TokyoDev. I like that site.’

“It just leads to more opportunities,” he continued. “If you’ve interacted with someone before online, and then you meet them offline, you already do have a bit of a relationship with them, so you’re more likely to have a place to start the conversation. [And] if you’re someone who is struggling with doing in-person networking, the more you can produce or put out there [online], the more opportunities that can lead to.”

Networking in Japanese

While there are a number of events throughout Japan that are primarily in English, for best networking results developers should take advantage of Japanese events as well—even if their Japanese isn’t that good.

In 2010, Paul created the Tokyo Rubyist Meetup, with the intention of bringing together Japanese and international developers. To ensure it succeeded, he knew he needed more connections to the Japanese development community.

“So I started attending a lot of Japanese developer events where I was the only non-Japanese person there,” said Paul. “I didn’t have such great Japanese skills. I couldn’t understand all the presentations. But it still gave me a chance to make lots of connections, both with people who would later present at [Tokyo Rubyist Meetup], but also with other Japanese developers whom I would work with either on my own products or also on other client projects.”

I think it helped being kind of a visible minority. People were curious about me, about why I was attending these events.

Their curiosity not only helped him network, but also gave him a helping hand when it came to Japanese conversation. “It’s a lot easier for me in Japanese to be asked questions and answer them,” he admitted.

But Paul wasn’t just attending those seminars and events passively. He soon started delivering presentations himself, usually as part of Lightning Talks—again, despite his relatively low level of Japanese. “It doesn’t matter if you do a bad job of it,” he said.

Japanese people I think are really receptive to people trying to speak in Japanese and making an effort. I think they’re happy to have someone who isn’t Japanese present, even if they don’t do a great job.

He also quickly learned that the most important networking doesn’t take place at the meetup itself. “At least in the past,” he explained, “it was really split . . . [there’s the] seminar time where everyone goes and watches someone present. Everyone’s pretty passive there and there isn’t much conversation going on between attendees.

“Then afterwards—and maybe less than half of the people attend—they go to a restaurant and have drinks after the event. And that’s where all the real socialization happens, and so that’s where I was able to really make the most connections.”

That said, Paul noted that the actual “drinking” part of the process has noticeably diminished. “Drinking culture in Japan is changing a lot,” he told me. “I noticed that even when hosting the Tokyo Rubyist Meetup. When we were first hosting it, we [had] an average of 2.5 beers per participant. And more recently, the average is one or less per participant there.
“I think there is not so much of an expectation for people to drink a lot. Young Japanese people don’t drink at the same rate, so don’t feel like you actually have to get drunk at these events. You probably shouldn’t,” he added with a laugh.

What you should do is be persistent and patient. It took Paul about a year of very regularly attending events before he felt he was treated as a member of the community. “Literally I was attending more than the typical Japanese person,” he said. “At the peak, there were a couple events per week.”

His hard work paid off, though. “I think one thing about Japanese culture,” he said, “is that it’s really group based.”

Initially, as foreigners, we see ourselves in the foreign group versus the Japanese group, and there’s kind of a barrier there. But if you can find some other connection, like in my case Ruby, then with these developers I became part of the “Ruby developer group,” and then I felt much more accepted.

Eventually he experienced another benefit. “I think it was after a year of volunteering, maybe two years. . . . RubyKaigi, the biggest Ruby conference in Japan and one of the biggest developer conferences in Japan [in general], used Doorkeeper, the event registration system [I created], to manage their event.

“That was a big win for us because it showed that we were a serious system to lots of people there. It exposed us to lots of potential users and was one of the things that I think led to us, for a time, being the most popular event registration system among the tech community in Japan.”

Based on his experiences, Paul would urge more developers to try attending Japanese dev events. “Because I think a lot of non-Japanese people are still too intimidated to go to these events, even if they have better Japanese ability than I did.

“If you look at most of the Japanese developer events happening now, I think the participants are almost exclusively Japanese, but still, that doesn’t need to be the case.”

Takeaways

What Paul hopes other developers will take away from this article is that networking shouldn’t feel sleazy. Instead, good networking looks like:

- Being interested in other people. Asking them questions is the easiest way to start a conversation and make a genuine connection.
- Occasionally just making yourself go to that in-person event. Serendipity can’t happen if you don’t create opportunities for it.
- Introducing people to each other—it’s a fast and stress-free way to make more connections.
- Volunteering for events or organizing your own.
- Supporting offline events with a solid online presence as well.
- Not being afraid to attend Japanese events, even if your Japanese isn’t good.

Above all, Paul stressed, don’t overcomplicate what networking is at its core.

Really what networking comes down to is learning about what other people are doing, and how you can help them or how they can help you.

Whether you’re online, offline, or doing it in Japanese, that mindset can turn networking from an awkward, sleazy-feeling experience into something you actually enjoy—even on a rainy Tuesday night.