I debuted these principles in my axe-con 2025 talk, It is designed to break your heart: Cultivating a harm reduction mindset as an accessibility practitioner. They are adapted from the National Harm Reduction Coalition's original eight principles, and they reflect philosophical and behavioral changes I've been cultivating to offset and defend against systemic trauma and its resultant depression, burnout, and other negative experiences you can incur when doing digital accessibility work.

If you have the time, I'd advise reading the original eight principles. I also recommend watching or reading the talk. I say this not to self-promote, but because there is a lot of context that will be helpful in understanding:

How these adapted principles came to be, and also
The larger mindset shifts and practices that led to their creation.

The principles

There are eight principles in total. They are delivered in the context of how to approach...
2 weeks ago


More from Eric Bailey

Tag, you’re it

I've been seeing, and enjoying reading, these posts as they pop up in my RSS reader. Dave Rupert tagged me into the chain, so here we go!

Why did you start blogging in the first place?

With the gift of hindsight, I guess I came up being blog-adjacent. Like Dave, I also had a background in publishing as a youth. I worked for my high school newspaper, and had a part- and then later full-time job at my local newspaper. I also published a weirdo, monkey cheese nerd zine. Its main claims to fame were both pissing off the principal and preventing me from getting dates. Zines are cool and embracing cringe will set you free.

I read a ton of blogs, but I never initially thought I'd be someone who published one. This was due to fear of dog-piling criticism, as well as not thinking I had anything meaningful to contribute. Then I got Kivikoskied. Reader, I strongly encourage you to get Kivikoskied yourself.

The first post I put on my site was a reaction to the WebAIM Millions report. Reading through it generated enough feelings that I needed a place to put them in a constructive way.

What platform are you using to manage your blog and why did you choose it?

The reaction to the WebAIM Millions report was originally just an HTML page with a dream. That page seemed to resonate with people, so with that encouragement I had to build blogging infrastructure after the fact. That infrastructure wound up being Eleventy. I love Eleventy, and it's only gotten better since that initial adoption. Zach Leatherman is a mensch, and I sing the praises of his project every chance I can get.

I love blogging with Eleventy because it prioritizes speed, stability, and performance. Static web pages generated via Markdown are easy enough to wrangle, and it means I can spend the majority of my time focusing on writing, not managing dependencies or database updates.

Have you blogged on other platforms before?

WordPress, Jekyll, thoughtbot's homegrown CMS, and a few others. May you never have to work with Méthode.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that's part of your blog?

I've evaluated countless writing apps, but find myself coming back to Dropbox Paper. I'm highly distractible, and love to fiddle and tinker. Because of this, I find that Paper's intentional, designed simplicity keeps me focused and on-task. It's a shame that we live in the rot economy, where innovation is synonymous with value extraction, and there is apparently no longer enough incentive to maintain it.

The post is then exported from Paper as a Markdown file, has its contents pasted into VS Code, cleaned up a little bit, given metadata, and merged into GitHub. Voilà! Blog post! There are more efficient ways to do this, but I find the ritual of it all soothing.

When do you feel most inspired to write?

I'm going to share a little secret with you: nearly every technical blog post I write is a longform subtweet. By this, I mean I use writing as a way to channel a lot of my anxieties and frustrations into something constructive. I wish I wrote more silly posts, but it's difficult to prioritize them given the state of things.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

I'll chip away at a draft for weeks, moving sections around and tweaking language until the entire thing feels cohesive. It's less perfectionism and more wanting to be sure I'm communicating my thoughts as clearly as I can.

There is also the inevitable flurry of edits that follows hitting publish. I'd bottle that feeling of sudden, panicked clarity if I could.

What are you generally interested in writing about?

The intersection of accessibility, usability, design systems, and the web platform. I'm also a sucker for CSS, tech culture, and a good metaphor.

Who are you writing for?

I write for people who are curious about the web, accessibility, and frontend technology at a medium-to-high level of familiarity. It has been so liberating to no longer have to explain the basics of accessibility and why it matters.

I also write for myself, as augmented memory. This, along with services like Pinboard, helps with my memory. Blog posts are also conversations, and it would be a disservice to both audiences if I didn't weave a lot of contextually relevant voices into the work as outgoing links.

What's your favorite post on your blog?

My favorite post on my website is my opus, Accessibility annotation kits only annotate. It took forever to put those thoughts into words. My favorite post on another website is Consider the Tomato. thoughtbot tolerated and encouraged a lot of my shenanigans, and I'm thankful for that.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

This website is in desperate need of a redesign, and the "updating in the open" banner is an albatross around my neck. Ironically, the time I should spend on that is spent writing blog posts. I'm now at the point where I fantasize about taking a month off of work to make said redesign happen. Grinning face with sweat emoji.

Tag 'em

I'd tag everyone in my RSS reader if I could. Until then:

Adrian Roselli. I'm more or less contractually obligated to include a link to Adrian's site any time I write about accessibility, as chances are he's already covered it.
Ben Myers. Another favorite accessibility author. I really enjoy his takes on disability and digital accessibility.
Jan Maarten. Coworker and samebrain friend, whose longform pieces are always worth reading.
Jim Nielsen.
Melanie Richards. Melanie is, in a word, prolific. I'm in awe of her digital gardening efforts.
Miriam Suzanne. Less a triple threat and more a, uh, quintuple threat? Brilliance at every turn.

5 days ago 4 votes
Evaluating overlay-adjacent accessibility products

I get asked about my opinion on overlay-adjacent accessibility products with enough frequency that I thought it could be helpful to write about it.

There's a category of third-party products out there that are almost, but not quite, an accessibility overlay. By this I mean that they seem a little less predatory, and a little more grounded in terms of the promises they make. Some of these products are widgets. Some are browser extensions. Some are apps. Some are an odd fourth thing. Sometimes it's a case of a solutioneering disability dongle grift, sometimes it's a case of good intentions executed in a less-than-optimal way, and sometimes it's something legitimately helpful. Oftentimes it's something that lies in the middle of all of this. Many of them also have some sort of "AI" integration, which is the unfortunate upsell du jour we have to collectively endure for the time being.

The rubric I use to evaluate these products remains very similar to how I scrutinize overlays. Hopefully it's something that can be helpful for your own efforts.

Should the product's functionality be patented?

I'm not very happy with the idea that the mechanism to operate something in an accessible way is inhibited by way of legal restriction. This artificially limits who can use it, which is in opposition to the overall mission of digital accessibility. Ideally the technology is the free bit, and the service that facilitates it is what generates the profit.

Do I need to subscribe to use it?

A subscription-based model is a great way to run a business, but you don't need to pay a recurring fee to use an accessible website. The nature of the web's technology means it can be operated via keyboard, voice control, and other assistive technology if constructed properly. Workarounds and community support also exist for some things where it's not built well. Here I'd also like you to consider the disability tax, and how that factors into a rental model. It's not great.

Does the browser or operating system already have this functionality?

A lot of the time this boils down to an issue of discovery, digital literacy, or identity. As touched on in the previous section, browsers and operating systems offer a lot to help you self-serve. Notable examples are reading mode, on-screen narration, color filters, interface and text zoom, and forced color inversion.

Can it be used across multiple experiences, or just one website?

Stability and predictability of operation and output are vital for technology like this. It's why I am so bullish on utilizing existing browser and operating system features. Products built to "enhance" the accessibility of a single website or app can't contribute towards this. Ironically, their presence may actually add friction to someone's existing method of using things. A tricky little twist here is that products targeting a single website are often advertised to the website owner, and not the people who will be using said website.

Can I use the keyboard to operate it?

I've gotten in the habit of pressing Tab a few times when I first check out the product's website to see if anything happens. It's a quick and easy test to see if the company walks the walk in addition to talking the talk. Here, I regrettably encounter missing focus indicators and non-semantic interactive controls more often than not. I might also run the homepage through axe DevTools to see if there are other egregious errors. I then try to use the product itself with a keyboard if a demo is offered. I am usually left wanting here.

How reliable is the AI?

There are two broad considerations here: how reliable is the output, and how can bias affect someone's interpretation of things? While I am a skeptic, I can also acknowledge that there are some good use cases for LLMs and related technology when it comes to disability.

I think about the reliability of the output in terms of the "assistive" part of assistive technology. By this, I mean it actually helps you do what you need to get done. Here, I'd point to Salma Alam-Naylor's experience with newer startups in this space versus established, community-supported solutions.

Then consider LLM-based image description products. Here we want to make sure the content is accurate and relevant. Remember that image descriptions are the mechanism some people rely on to help them understand the world. If a description is not accurate, it impacts how they form an understanding of their environment. A step past that thought is the biases inherent in, and perpetuated by, LLM-based technology. I recall Ben Myers' thoughts on implicit, hegemonic normalization, as well as the sobering truth that this technology can exert influence over its users' worldviews at scale.

Can the company be trusted with your data?

A lot of assistive technology is purposely designed to not announce the fact that it is being used. This is to stave off things like discrimination or ineffective, separate-yet-equal "accessibility only" sites. There's also the murky world of data brokerage, and whether the company is selling off this information or not. AccessiBe comes to mind here, and not in a good way.

Also consider whether the product has access to everything you visit and interact with, and who has access to that information. As a companion concern, it is worth considering the product's data security practices, or lack thereof. Here, I would like to point out that startups tend to deprioritize this boring kind of infrastructure work in favor of feature creation. Not having any personal information present in a system is the best way to guard against its theft. Also know that there is no way to undo a data breach once it occurs. Leaked information stays leaked.

Will the company last?

Speaking of startups, know that more fail than succeed. Are you prepared for an outcome where the product you rely on is no longer updated or supported because the company that made it went out of business? It could also be a case where the company still exists, but ceases to support the product you use. Here, know that sometimes these companies will actively squash attempts at community-based resurrection and support of the service, because it represents potential liability. This concern is another reason why I'm bullish on operating system and browser functionality. They have a lot more resiliency and focus on the long view in this particular area.

But also, I'm not the arbiter of who can use what. In the spirit of "the best camera is the one you have on you": if something works for your specific access needs, by all means use it.
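A minimal sketch of how that quick spot check could be scripted, assuming Node with the puppeteer and @axe-core/puppeteer packages installed. The axe DevTools product mentioned above is a browser extension; this sketch uses the underlying axe-core engine instead, and the URL is a placeholder:

```ts
// Sketch: scan a product homepage with axe-core, then press Tab a few times
// and log what receives focus. Assumes `npm install puppeteer @axe-core/puppeteer`.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";

async function spotCheck(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Automated scan: catches the egregious, machine-detectable problems only.
  const results = await new AxePuppeteer(page).analyze();
  console.log(`${results.violations.length} axe violation(s) found on ${url}`);

  // Crude keyboard check: does anything actually receive focus?
  for (let i = 1; i <= 5; i++) {
    await page.keyboard.press("Tab");
    const focused = await page.evaluate(
      () => document.activeElement?.tagName ?? "BODY"
    );
    console.log(`Tab press ${i}: focus is on <${focused.toLowerCase()}>`);
  }

  await browser.close();
}

spotCheck("https://example.com").catch(console.error);
```

An automated pass like this is only a first filter; it says nothing about whether focus is visible or whether the controls are semantically meaningful, which is why the manual Tab test described above still matters.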

a month ago 23 votes
Stanislav Petrov

A lieutenant colonel in the Soviet Air Defense Forces prevented the end of human civilization on September 26th, 1983. His name was Stanislav Petrov.

Protocol dictated that the Soviet Union would retaliate against any nuclear strikes sent by the United States. This was a policy of mutually assured destruction, a doctrine that compels a horrifying logical conclusion. The second and third stage effects of this type of exchange would be even more catastrophic. Allies for each side would likely be pulled into the conflict. The resulting nuclear winter was projected to lead to 2 billion deaths due to starvation. This is to say nothing about those who would have been unfortunate enough to have survived.

Petrov's job was to monitor Oko, the computerized warning system built to centralize Soviet satellite communications. Around midnight, he received a report that one of the satellites had detected the infrared signature of a single launch of a United States ICBM. While Petrov was deciding what to do about this report, the system detected four more incoming missile launches. He had minutes to make a choice about what to do. It is impossible to imagine the amount of pressure placed on him at this moment.

Source: Stanislav Petrov, Soviet officer credited with averting nuclear war, dies at 77 by Schwartzreport.

Petrov lived in a world of deterministic systems. The technologies that powered these warning systems have outputs that are guaranteed, provided the proper inputs are supplied. However, deterministic does not mean infallible.

The only reason you are alive and reading this is because Petrov understood that the systems he observed were capable of error. He was suspicious of what he was seeing reported, and chose not to escalate a retaliatory strike. There were two factors guiding his decision: a surprise attack would most likely have used hundreds of missiles, not just five, and the allegedly foolproof Oko system was new and prone to errors.

An error in a deterministic system can still lead to expected outputs being generated. For the Oko system, infrared reflections of the sun shining off the tops of clouds created a false positive that was interpreted as detection of a nuclear launch event.

Source: US-K History by Kosmonavtika.

The concept of erroneous truth is a deep thing to internalize, as computerized systems are presented as omniscient, indefective, and absolute.

Petrov's rewards for this action were reprimands, reassignment, and denial of promotion. This was likely for embarrassing his superiors by the politically inconvenient shedding of light on issues with the Oko system. A coerced early retirement caused a nervous breakdown, likely from having to grapple with the weight of his decision. It was only in the 1990s, after the fall of the Soviet Union, that his actions were discovered internationally and celebrated.

Stanislav Petrov was given the recognition he deserved, including being honored by the United Nations, awarded the Dresden Peace Prize, featured in a documentary, and being able to visit a Minuteman missile silo in the United States.

On January 31st, 2025, OpenAI struck a deal with the United States government to use its AI product for nuclear weapon security. It is unclear how this technology will be used, where, and to what extent. It is also unclear how OpenAI's systems function, as they are black box technologies. What is known is that LLM-generated responses (the product OpenAI sells) are non-deterministic.

Non-deterministic systems don't have guaranteed outputs for their inputs. In addition, LLM-based technology hallucinates: it invents content with no self-knowledge that the content is a falsehood. Yet non-deterministic computerized systems are perceived as being just as authoritative as their deterministic peers. It is not a question of how the output is generated; it is one of the output being perceived to come from a machine.

These are terrifying things to know. Consider not only the systems this technology is being applied to, but also the thoughtless speed of their integration. Then consider how we've historically been conditioned and rewarded to interpret the output of these systems, and then how we perceive and treat skeptics. We don't live in a purely deterministic world of technology anymore.

Stanislav Petrov died on September 18th, 2017, before this change occurred. I would be incredibly curious to know his thoughts about our current reality, as well as the increasing abdication of human monitoring of automated systems in favor of notably biased, supposed "AI solutions."

In acknowledging Petrov's skepticism in a time of mania and political instability, we acknowledge a quote from former U.S. Secretary of Defense William J. Perry's memoir about the incident:

[Oko's false positives] illustrates the immense danger of placing our fate in the hands of automated systems that are susceptible to failure and human beings who are fallible.

a month ago 23 votes
GitHub’s updated Commits page and the interactive list component

GitHub has updated the page template used to list Commits on a repository. Central to this experience is an interactive list component that I was responsible for architecting. This work was done alongside input from James Scholes, whose guidance was instrumental to the effort's success.

An interactive list is a construct that's more commonplace in desktop applications than on the web. That does not mean its approach is forbidden from being used for web experiences, however.

What concerns does an interactive list address?

The main concern an interactive list addresses is when each discrete item in a series contains multiple interactive child elements. Navigating through every interactive child element placed within each parent list item can be a tedious enough chore that it makes the effort a non-starter. For example, if the list has ten items and each item has seven interactive child elements, someone may need to perform up to seventy Tab keypresses to get what they need. That's an exhausting experience to endure.

It could also be agonizing. Think of motor control disabilities, where individual movements in aggregate can exceed someone's pain tolerance threshold.

Making each list item's container itself focusable and traversable addresses this problem, as it lowers the number of keypresses someone needs to use. It also supports quickly jumping to the start or end of the list for even more navigation options. On GitHub, navigating an interactive list via your keyboard can be accomplished by pressing:

Tab: Places focus on the interactive list item that last received focus. Defaults to the first item in the list if the list was previously not interacted with.
Down: Moves focus to the next list item, if present.
Up: Moves focus to the previous list item, if present.
End: Moves focus to the last list item in the interactive list.
Home: Moves focus to the first list item in the interactive list.

(A sketch of this focus-management pattern appears at the end of this post.)

There's a trick here: we want to make sure each list item's announcement contains enough information that someone can make an informed choice when navigating via a screen reader. We also do not want to make the announcement so verbose that it slows down the navigation process. For example, we only include the commit title when navigating by list item on the Commits page. For an Issue, we use:

The Issue title,
Its status, and
Its author (there is currently a bug here, we're working on fixing it).

There is an intentionality behind the order of content in this announcement, as we want to include the most pertinent information first. This, in turn, helps people navigating by list item announcement make more informed choices faster. This lets us know:

What the problem is,
Has it been dealt with yet, and
Who found the problem?

We also use the phrase "More information available below" to signal that someone can explore the list item's child content in more detail. This is accomplished by pressing:

Tab: Navigates forwards through each child interactive element in sequence.
Shift + Tab: Navigates backwards through each child interactive element in sequence.
Esc: Moves focus out of the child interactive elements and places it back on the parent list item itself.

Examples of child content that someone could encounter are an Issue's author, its labels, linked Pull Requests, comment tally, and assignees.

Problems

The use of the phrase "More information available below" does not sit well with me, despite being the person who oversaw its inclusion. There are a couple of reasons here.

First, I'm normally loath to hardcode interaction hints for screen readers. The interactive list component is a bit of an exception to that rule. It is an uncommon interaction pattern on the web, so the hint needs to be included until efforts to formalize it both:

Manifest, and
Get widespread support from assistive technology vendors.

Without these two things, I fear that blind and low vision individuals will not be able to fully utilize the experience the same way their peers can.

Second, the hint phrasing itself isn't that great. The location-based term "below" is shorthand to try and communicate that there's subsequent child content that is related to the list item's main content. While "subsequent child content that is related to the list item's main content" is more descriptive, it's an earful. I am very much open to suggestions for a replacement phrase. And this potential for change sets up other things that weigh on me.

Bigger problems

Using this interactive list component on the Commits page template means there are now two main areas on GitHub where the component is present, the second being the lists of repository Issues for logged-in accounts.

Large, structural changes to a design's underlying semantics disrupt the mental model and muscle memory of many people who use screen readers to operate an experience. It's an act that I'm always nervous about undertaking. The calculated bet here is that the prominence of the components in these high-traffic areas means that understanding how to operate them becomes easier over time. I've also hedged that bet by including alternate ways of navigating the interactive list, including baking headings into each Commit and Issue title, which tools like the HeadingsMap browser extension can surface.

I do think that this update to each page's semantic structure is a net improvement over what came before it. However, it is still going to manifest as a large and sudden change for people who use screen readers. And for the record, I view changing the "More information available below" phrasing as another large and disruptive change. Subsequent large and sudden changes are what I want to avoid at all costs.

That said, we're running out the clock on a situation where an interactive list will someday contain non-interactive content. The component's current approach does not have a great way for people to be aware of, and subsequently read, that kind of content. That's not great.

Because of this inevitability, I would like to replace the list's interaction approach with the one we're using for nested/sub-Issues. There are a few reasons for this, but the main ones are:

Improving consistency and uniformity of interaction across all of GitHub for this kind of clustering of content.
Leaning on more well-known interaction techniques for secondary content within an item by using dialogs instead of Tab keypresses.
Providing a mechanism that can more easily handle exploring non-interactive content placed within a list item.

Making these changes would mean a drastic update on top of another drastic update. While I do think it would be a better overall experience, rolling it out would require a lot of careful effort and planning.

Even bigger problems

In many ways, GitHub is a battleship. It is slow to turn just by virtue of the sheer size and scale of concerns it needs to cover. Enacting my goal of replacing and unifying these kinds of interactions would take time. It would mean petitioning for heavy investment in something that may be perceived as an already "solved" problem. It also would require collaboration across multiple siloed product areas, each with their own pre-existing and planned objectives and priorities.

I have the gift of hindsight in writing this. The interactive list was originally intended to address just the list of repository Issues. Its usage has since grown to cover more use cases, not all of them actually applicable. This is one of the existential problems of a design system. You can write all the documentation you want, but people are ultimately going to use what they're going to use, regardless of whether it's appropriate or not. Replacing or excising misapplied components is another effort that runs counter to organizational priorities. That truth lives hand-in-hand with the need to maintain the overall state of usability for everyone who uses the service.

You're gonna carry that weight

Making dramatic changes to core parts of GitHub's assistive technology user experience, followed by more dramatic changes, then potentially followed by even more dramatic changes, is an outcome we're potentially facing. It is the nature of software, especially websites and web apps, to change. That said, I worry about the overall churn this all could represent.

I feel the weight of that responsibility as the person who set this course. I also feel the consequent pressure it exerts. I'll continue to write about it and plead the case internally. However, I worry that I've blown my one chance to get things right. I know my colleagues who produce visual designs may also feel this way, but I also think it's a more acute problem for digital accessibility. I also don't think that this sort of situation is talked about that often in accessibility spaces, hence me writing about it. This is to say nothing about quantifying it, either.

Centering

I'm pretty proud of what we accomplished, but those feelings are moot if all this effort does not serve the people it was intended to. It's also not about me. Our efforts to be more inclusive may ironically work against us here. How much churn is too much, to the point where people are pushed away?

To that point, feedback helps. Constructive reports on access barriers and friction can bypass the internal perception of the things I've outlined as being non-problems. I am twice heartened when I see reports. First, it is a signal that means someone is still present and cares. Second, there has been renewed internal interest in investing in acting on these user-reported accessibility problems.

The work never stops

This post is about interactive lists on GitHub, and how to use them. It's also about:

The responsibilities, pressures, and politics of creating complex components like the interactive list and ensuring they are accessible,
How these types of components affect the larger, holistic experience of GitHub as a whole,
The need to ensure these components actually work for the people they serve, and
The value of providing feedback if they don't.

These are powerful things to internalize if you also do this sort of work, but also valuable to keep in mind if you don't. They have served me well in my journey at GitHub, and I hope they help serve you too.
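As promised above, here is a minimal sketch of the focus-management pattern the interactive list relies on, the roving tabindex technique. This is an illustration rather than GitHub's actual implementation; the markup hook (a ul with a data-interactive-list attribute holding plain li items) is an assumption for the example:

```ts
// Roving tabindex sketch: only one list item is in the Tab order at a time,
// and arrow keys move focus between items. Assumed markup:
// <ul data-interactive-list><li>…</li><li>…</li></ul>
const list = document.querySelector<HTMLUListElement>("[data-interactive-list]");

if (list) {
  const items = Array.from(list.querySelectorAll<HTMLLIElement>(":scope > li"));
  let current = 0;

  // Only the first item starts in the Tab order; pressing Tab re-enters the
  // list at whichever item last held focus.
  items.forEach((item, i) => (item.tabIndex = i === 0 ? 0 : -1));

  const focusItem = (index: number): void => {
    items[current].tabIndex = -1;
    current = Math.max(0, Math.min(index, items.length - 1));
    items[current].tabIndex = 0;
    items[current].focus();
  };

  // Keydown events from focused list items bubble up to the list itself.
  list.addEventListener("keydown", (event) => {
    switch (event.key) {
      case "ArrowDown": focusItem(current + 1); break;
      case "ArrowUp":   focusItem(current - 1); break;
      case "Home":      focusItem(0); break;
      case "End":       focusItem(items.length - 1); break;
      default:          return; // Tab and Esc keep their component-defined behavior
    }
    event.preventDefault();
  });
}
```

The real component layers more on top of this (entering child content with Tab, exiting with Esc, and the screen reader announcements discussed above), but the core idea is the same: one stop in the Tab order per list, with item-to-item movement handled by arrow keys.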

3 months ago 22 votes

More in programming

Hardware-Aware Coding: CPU Architecture Concepts Every Developer Should Know

Write faster code by understanding how it flows through your CPU

15 hours ago 3 votes
The Road Not Taken is Guaranteed Minimum Income

The dream is incomplete until we share it with our fellow Americans.

2 days ago 4 votes
Operational mechanisms for strategy.

Even the best policies fail if they aren't adopted by the teams they're intended to serve. Can we persistently change our company's behaviors with a one-time announcement? No, probably not. I refer to the art of making policies work as "operations" or "strategy operations." The good news is that effectively operating a policy is two-thirds avoiding common practices that simply don't work. The other one-third takes some practice, but can be practiced in any engineering role: there's no need to wait until you're an executive to start building mastery.

This chapter will dig into those mechanisms, with particular focus on:

How policies are supported by operations, and how operations are composed of mechanisms that ensure they work well
Evaluating operational mechanisms to select between different options, and determine which mechanisms are unlikely to be an effective choice
Composing an operational plan for the specific set of policies that you are looking to support
Common varieties of effective mechanisms such as approval forums, inspection mechanisms, nudges, and so on. We'll also explore the sorts of mechanisms that tend to work poorly
How to adjust your approach to operations if you are in an engineering role rather than an executive role
How cargo-culting remains the largest threat to effective strategy operations

Let's unpack the details of turning your potentially good policy into an impactful policy.

This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

What are operational mechanisms?

Operations are how a policy is implemented and reinforced. Effective operations ensure that your policies actually accomplish something. They can range from a recurring weekly meeting, to an alert that notifies the team when a threshold is exceeded, to a promotion rubric requiring a certain behavior to be promoted.

In the strategy for working with new private equity ownership, we introduce a policy to backfill hires at a lower level, and also limit the maximum number of principal engineers:

We will move to an "N-1" backfill policy, where departures are backfilled with a less senior level. We will also institute a strict maximum of one Principal Engineer per business unit, with any exceptions approved in writing by the CTO. This applies for both promotions and external hires.

That introduces an explicit operational mechanism of escalations going to the CTO, but it also introduces an implicit and undefined mechanism: how do we ensure the backfills are actually down-leveled as the policy instructs? It might be a group chat with engineering recruiting where the CTO approves the level of backfilled roles. Instead, it might be the responsibility of recruiting to enforce that down-leveling. In a third approach, it might be taken on trust that hiring managers will do the right thing.

Each of those three scenarios is a potential operational solution to implementing this policy. Operations is picking the right one for your circumstances, and then tweaking it as you learn from running it.

Operations in government

For another interesting take on how critical operations are, Recoding America by Jennifer Pahlka is well worth the read. It explores how well-intended government legislation often isn't implementable, which results in policies that require massive IT investments but provide little benefit to constituents.
How to evaluate mechanisms

In order to determine the most effective operational mechanisms for the problems you're working on, it's useful to have a standardized rubric for evaluating mechanisms. While this rubric isn't perfectly universal (customize it for your needs), having any rubric will make it easier to evaluate your options consistently. The rubric I use to evaluate whether an operational mechanism will be effective is:

Measurability: Can you measure both leading and lagging indicators to inspect the mechanism's impact? If you have to choose between the two, measuring leading indicators allows much quicker evaluation and iteration on your mechanisms.
Adoption cost: How much work will migrating to this mechanism require? Can this work be done incrementally, or does it require a major, coordinated shift?
User ease (or burden): After adopting this policy, how much easier (or harder) will it be for users to perform their work? If things will be harder, are those users able to tolerate the additional time?
Provider ease (or burden): How much additional ongoing maintenance will this mechanism require from the centralized or platform team providing it? For example, if every new architecture proposal requires a thorough review by your Security team, does the Security team have the actual ability to support those reviews?
Reliance on authority: How much does this mechanism depend on a top-down authority's active support? If the sponsoring executive departs, will this mechanism remain effective? Is that an effective tradeoff in this case?
Cultural alignment: Is this something that your organization is going to do, or something that it is going to fight against each step of the way? Is there a way you can adjust the framing to make it more acceptable to your organization's culture?

Generally, I find folks are good at evaluating mechanisms against these criteria, but somewhat worse at accepting the consequences of their evaluation. For example, falling in love with a particular mechanism and then trying to force the organization to accept one whose adoption cost is unbearably high, or introducing a mechanism that places significant user burden onto a team that is already struggling with tight efficiency goals, like a customer support team. Self-awareness helps here, but so does consulting others to point out the errors in your reasoning, which is a core part of how I've found success in adopting operational mechanisms.

Composing an operational plan

Your operational plan is the sum of the mechanisms used to support your policies. While evaluating each individual mechanism in isolation is part of creating an operations plan, it's also valuable to consider how the mechanisms will work together:

Review the policies you've developed. What sort of mechanisms seem most likely to support these policies? How might these mechanisms be pooled together to avoid redundancy?
Review the operational mechanisms that have worked in your organization. Which mechanisms have been used to best effect, and which have left a sufficiently bad taste in the organization's collective memory that they'll be hard to reuse effectively?
Which new mechanisms showed up in your exploration? In your exploration phase, you'll frequently encounter mechanisms that your organization hasn't previously tried. If any of them seem particularly well-suited to the policies you're considering, and none of your organization's frequently used mechanisms are good fits, then consider testing a new one.
Evaluate mechanisms against the evaluation rubric. For each of the mechanisms you're considering, apply the rubric from "How to evaluate mechanisms" above to validate that they're good fits.
Consolidate into an operational plan. Now that you've determined the mechanisms you want to consider, work on fitting the full set of mechanisms into one coherent plan. Be particularly mindful of the ease, or burden, the integrated plan creates for both your users and platform providers.
Validate the plan with users and providers. Many plans make sense from afar, but fail due to imposing an unreasonable burden. Or the burden might be acceptable, but the actual workflow simply won't work at all.
Consider validating via strategy testing. If you run the above process and can't come to an agreement with stakeholders on your proposed plan, then simply commit to running a strategy testing process including the plan. This will create space for everyone to build confidence in the approach before they feel forced to make a long-term commitment to following it. Even if you don't use strategy testing for your plan, at least commit to scheduling a review in three months reflecting on how things have worked out.

Your operational plan is the vehicle that delivers your policies to your organization. It's extremely tempting to skip refining the details here, but it's a relatively quick step and will completely change your strategy's outcomes.

Common mechanisms

Most companies have a handful of frequently used operational mechanisms. Some of those mechanisms are company-specific, such as Amazon's weekly business review, and others repeat across companies, like requiring executive approval. Across the many mechanisms you'll encounter, you can generally cluster them into recurring categories. This section covers the mechanisms I've found consistently effective.

Approval and advice forums

At a high level, new policies are obvious, simple, and apply cleanly to the problem they are intended to solve. However, when you apply those policies to detailed, complex circumstances, it's often ambiguous how to stay loyal to a policy's intentions. Approval and advice forums are a common solution to that problem.

Calm's product engineering strategy shows what the simplest, and most common, approval forum looks like in practice:

Exceptions are granted by the CTO, and must be in writing. The above policies are deliberately restrictive. Sometimes they may be wrong, and we will make exceptions to them. However, each exception should be deliberate and grounded in concrete problems we are aligned both on solving and how we solve them. If we all scatter towards our preferred solution, then we'll create negative leverage for Calm rather than serving as the engine that advances our product. All exceptions must be written. If they are not written, then you should operate as if it has not been granted. Our goal is to avoid ambiguity around whether an exception has, or has not, been approved. If there's no written record that the CTO approved it, then it's not approved.

This example also has several weaknesses that happen in many approval forums. Most importantly, it doesn't make clear how to get approvals. It would be stronger if it explicitly explained how to get an approval (perhaps go ask in #cto-approvals), and where to find prior approvals to help someone considering requesting an exception calibrate their request.

Approvals don't necessarily need to come from senior leadership. Instead, senior leadership can loan their authority on a topic to another group.
The LLM adoption strategy provides a good example of this:

Start with Anthropic. We use Anthropic models, which are available through our existing cloud provider via AWS Bedrock. To avoid maintaining multiple implementations, where we view the underlying foundational model quality to be somewhat undifferentiated, we are not looking to adopt a broad set of LLMs at this point. This is anchored in our Wardley map of the LLM ecosystem. Exceptions will be reviewed by the Machine Learning Review in #ml-review

In a more community-minded organization, the approval forums might not require senior leadership involvement at all. Instead, the culture might create an environment where the forums' feedback is taken seriously on its own merits. Every company does approval forums a bit differently, ranging from our experiments at Carta with Navigators (granting executive authority for technical decisions to named engineers in each area) to Andrew Harmel-Law's discussion of this topic in Facilitating Software Architecture. You can spend a lot of time arguing the details here; my experience is that having the right participants and a good executive sponsor matter a lot, and the other pieces matter a lot less.

Inspection

While even the best policies can fail, the more common scenario is that a policy will sort-of work, and need some modest adjustments to be more successful. An inspection mechanism allows you to evaluate whether your policy is succeeding and whether you need to make adjustments. The user-data access strategy provides an example:

Measure progress on percentage of customer data access requests justified by a user-comprehensible, automated rationale. This will anchor our approach on simultaneously improving the security of user data and the usability of our colleagues' internal tools. If we only expand requirements for accessing customer data, we won't view this as progress because it's not automated (and consequently is likely to encourage workarounds as teams try to solve problems quickly). Similarly, if we only improve usability, charts won't represent this as progress, because we won't have increased the number of supported requests. As part of this effort, we will create a private channel where the security and compliance team has visibility into all manual rationales for user-data access, and will directly message the manager of any individual who relies on a manual justification for accessing user data.

This example is a good start, but fully realizing an inspection mechanism requires concretely specifying where and how the data will be tracked. A better version would include a link to the dashboard you'll look at, and a commitment to reviewing the data at a certain frequency. For a recent inspection mechanism, I created a recurring invite with a link to the relevant data dashboard and a specific chat channel for discussion, and invited the working group who had agreed to review the data on that cadence. This wasn't a synchronous meeting, but rather a commitment to independently review, and discuss anything that felt surprising.

Your particular mechanism could be a threshold-triggered alert, something you fold into an existing metrics review meeting, a script you commit to running and reviewing periodically, or something else. The most important thing is that it cannot silently fail. A sketch of what that might look like follows.
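Here is a minimal sketch of such a threshold-triggered inspection job, in TypeScript. The metric, channel name, and helper functions are hypothetical stand-ins rather than anything from the strategy quoted above; the point is the shape: report the number on a cadence, flag it when it misses the target, and report loudly when the job itself breaks:

```ts
// Hypothetical inspection mechanism for the user-data access policy.
// fetchUserDataAccessStats and postToChannel stand in for your metrics
// store and chat integration; they are not a real API.

type AccessStats = { automatedRationales: number; totalRequests: number };

declare function fetchUserDataAccessStats(): Promise<AccessStats>;
declare function postToChannel(channel: string, message: string): Promise<void>;

export async function runWeeklyInspection(): Promise<void> {
  try {
    const stats = await fetchUserDataAccessStats();
    const pct = (100 * stats.automatedRationales) / stats.totalRequests;

    // Always report, even when things look fine, so silence means breakage.
    await postToChannel(
      "#user-data-access",
      `Automated rationale coverage this week: ${pct.toFixed(1)}%`
    );

    if (pct < 90) {
      await postToChannel(
        "#user-data-access",
        "Coverage is below the 90% target; flagging for the working group."
      );
    }
  } catch (error) {
    // An inspection that fails silently is worse than no inspection at all.
    await postToChannel(
      "#user-data-access",
      `Inspection job failed: ${String(error)}`
    );
    throw error;
  }
}
```

Scheduling this weekly and pointing the channel at the group who agreed to review the data reproduces the recurring-review commitment described above, without requiring a synchronous meeting.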
Nudges

While it's common to hear complaints about how a team isn't following a new policy, as if it were a deliberate choice they'd made, I find it more common that people want to do things the new way, but rarely take time to learn how to do it. Nudges provide individuals with context that informs them about a better way they might do something, and they are an exceptionally effective mechanism.

Grounding this in an example: at Stripe we had a policy of allowing teams to self-authorize introducing new cloud hosting costs. This worked well almost all the time. However, sometimes teams would accidentally introduce large cost increases without realizing it, and teams that introduced those spikes almost never had any awareness that they had caused the problem. Even if we'd told them they must not introduce unapproved spending spikes, they simply didn't perceive that they'd done it.

We had the choice between preventing all teams from introducing new spend, or trying a nudge. The nudge we added informed teams when their cloud spend accelerated month over month, directed them to charts that explained the acceleration, and told them where to go to ask questions. Nudges pair well with inspections, and there was also a monthly review by the Efficiency Engineering team to review any spikes and reach out where necessary. Maybe we could have forced all teams to review new spend, but this nudge approach didn't require an authoritative mandate to implement. It also meant we only spent time advising teams that actually spent too much, instead of having to discuss with every team that might spend too much.

As another example making that point, a working group at Carta added a nudge to inform managers of untested pull requests merged by their team. Some managers had previously said they simply didn't know when and why their team had merged untested pull requests, and this nudge made that easy to detect. The nudge also respected their attention by not sending a notification at all if there wasn't a new, untested pull request.

With poor ergonomics, nudges can be an overwhelming assault on your colleagues' attention, but done well, I continue to believe they are the most effective operational mechanism.

Documentation

Policies can't be enforced by people who don't know they exist, or by people who don't know how to follow them. In my experience, nudges are the most effective way of solving both of those problems, because nudges bring information to people at exactly the moment that information would be useful. At most companies, well-done nudges are relatively uncommon, and the far more common solution to lack of information is documentation and training.

There are so many approaches to both of these topics, and I've not found my own approaches here particularly effective. Consequently, I am hesitant to give much advice on what will work best for you. The best I can offer is that following standard practices for your company, even if the outcomes seem imperfect, is probably your best bet. Internal knowledge bases tend to rot quickly, and introducing yet another knowledge base is almost always the illusion of progress rather than real progress. Even when you really don't like the current one.

Finally, remember that success for documentation and training is not necessarily that everyone in the company knows how a new policy works.
Instead, as discussed in the chapter on whether strategy is useful, a more useful goal is informational herd immunity: as long as someone on each team understands your policy, the team will generally be capable of following it.

Automation

Relying on humans to respond is slow, and the quality of human response is highly varied. In many cases, automation provides the most effective and most scalable mechanism to support your policies' rollout. Automation was key in the Uber service migration strategy, moving us out of a manual, slow process that was taking up a great deal of user and provider time:

Move to structured requests, and out of tickets. Missing or incorrect information in provisioning requests creates significant delays in provisioning. Further, collecting this information is the first step of moving to a self-service process. As such, we can get paid twice by reducing errors in manual provisioning while also creating the interface for self-service workflows.

In that case, better automation allowed us to eliminate a series of back-and-forth negotiations to collect data, and to instead get the necessary information in a single step. Occasionally we still ran into users who couldn't fill in the form, but now we could focus on providing a good manual experience for those rare exceptions.

As you use automation as a core strategy mechanism, it's important to recognize that designing an effective user experience is a prerequisite to automation having a positive impact. If you view the user experience of your automation as a secondary concern, then you are unlikely to make much impact with automation.

Deferment to future work

Sometimes there's something you really want a policy to do, but you also know that you have no reasonable mechanism to do it. In that case, you may find it useful to explicitly defer action on the topic. The strategy for integrating the Index acquisition at Stripe uses this mechanism:

Defer making a decision regarding the introduction of Java to a later date: the introduction of Java is incompatible with our existing engineering strategy, but at this point we've also been unable to align stakeholders on how to address this decision. Further, we see attempting to address this issue as a distraction from our timely goal of launching a joint product within six months. We will take up this discussion after launching the initial release.

As did the strategy for working with a private equity acquirer:

We believe there are significant opportunities to reduce R&D maintenance investments, but we don't have conviction about which particular efforts we should prioritize. We will kick off a working group to identify the features with the highest support load.

There's no shame in deferral. As much as you want to make progress on a certain area, it's better to explicitly acknowledge that you can't make progress on it, and clarify when you will be able to, than to allow the organization to churn on an intractable problem.

Meetings

Meetings are the final mechanism, and you can fit any and all of the above mechanisms into a meeting. They are a universal mechanism, although frequently overused because they can do an adequate job of operating almost any policy. The most common form is a reporting meeting, such as reporting progress in the Executive Weekly Meeting as suggested in the LLM adoption strategy:

Develop an LLM-backed process for reactivating departed and suspended drivers in mature markets.
Through modeling our driver lifecycle, we determined that improving onboarding time will have little impact on the total number of active drivers. Instead, we are focusing on mechanisms to reactivate departed and suspended drivers, which is the only opportunity to meaningfully impact active drivers. Report on progress monthly in Exec Weekly Meeting, coordinated in #exec-weekly

The other common meeting archetype is the weekly working meeting introduced in the chapter on strategy testing. Meetings are almost always the most expensive mechanism you can find to solve a problem, but they are easy to suggest, run, and iterate on. If you can't find any other mechanism you believe in, then a meeting is a decent starting point. Just don't get too fond of them, and try to iterate your way to canceling every meeting that you start.

Anti-patterns

In addition to the effective operational methods discussed above, there are a number of mechanisms that are frequently used, but which I consider anti-patterns. They can provide some value, but there's almost always a better alternative.

Top-down pronouncements: Sometimes a policy is operationalized by simply declaring it must be followed. It's common to see a leader declare that a policy is now in effect, assuming that the announcement is a useful way to implement the new policy. For example, some "return to office" policies dictate that the team must work from their office, but driving a real change requires motivating those individuals to actually return.

Education-as-announcements rollouts: The default way that many companies roll out policies is through one-time "education," often as an all-company announcement for existing employees. They might follow up by updating training for onboarding new hires. Education sounds great, but a couple of trainings will never change organizational behavior. Changing behavior requires ongoing reminders, visible role models, inspection to understand why some teams are not adopting the behavior, and so on. Education can be a good component of operationalizing a policy, but it cannot stand on its own.

Mandatory recurring trainings: These are a staple of compliance-driven policies, generally because of laws which require providing a certain number of hours of relevant training each year. There are two deep challenges with mandatory trainings. First, because attendance is required, people tend to make little effort to make the content good. Second, many folks don't pay attention because they expect the content to be low quality. It's not uncommon to hear people say that they've never heard of a policy whose annual training they have completed for multiple years running. It's possible to overcome these barriers, but in a situation where you're accountable for changing outcomes, as opposed to shifting legal obligations away from the company, these tend to work poorly.

Just change the culture: Some leaders frame most problems as cultural problems, which is a reasonable frame: most things can be usefully viewed as a cultural problem. Unfortunately, it's common for those who rely heavily on the cultural frame to also have a simplistic view of how culture is changed. Changing an organization's culture is tricky, and requires a combination of many techniques to create visible leaders role modeling the new behavior, and reinforcement mechanisms to ensure pockets of dissent are weeded out. Anyone who frames culture change as a simple or instant change is living in an imaginary world.
If you're using one of these approaches, it isn't necessarily a bad choice. Instead, you should just make sure you can explain why you're using it, and then you need to also make sure you believe that explanation. If you don't, look for a mechanism from the earlier sections.

What if you're not an executive?

It's easy to get discouraged when you think about which operational mechanisms are available to you as a non-executive. So many of the frequently seen mechanisms, like running mandatory recurring meetings or a binding architecture review process, are not accessible to you.

That is true: they're not accessible to you. However, there's always a related mechanism that can be implemented with less authority. The binding architecture process can be replaced with an architectural advice process. The mandatory review of pull requests can be replaced with a nudge. Although it may be more common to see the authoritative mechanisms in the companies you work in, my experience working as an executive is that these authoritative mechanisms don't work particularly well. They do a great job of technically shifting accountability to the wider organization, but they often don't change behavior at all.

So, instead of getting frustrated by what you can't do, focus on the mechanisms that are available to you today. Add nudges, focus on the real dynamics of how colleagues do work in your organization, and build a real dataset. It's very hard to get an executive to support your initiative before the mechanisms and data exist to support it, and very easy to get their support once they do. Once you've done what you can without authority to build confidence, if you really do need more authority, then you're in a good place to escalate to get an executive to support your policies.

Beware cargo-culting

The longer I am in the industry, the more I am surprised by how few strategists seem to care whether their approach actually works. Instead, they seem focused on doing something that might work, offloading accountability to either the organization or some team, and then moving on to the next problem. Perhaps this is driven by an unfortunate reality that leaders are often evaluated by how they appear, rather than by what they accomplish. Whether or not that's the underlying reason, it does make it surprisingly difficult to know which patterns to borrow from strategy rollouts and implementations. The best advice, unfortunately, is to remain skeptically optimistic: collect ideas widely, but force the ideas to prove their merit.

Summary

Now that you've finished this chapter, you're significantly more qualified to write a complete, useful strategy than I was a decade into my career. Often skipped, the operations behind your strategy are at least as essential as any other step, and any strategy without them will fade quietly into your organization's history. In addition to being able to roll out a strategy of your own, this chapter also provides a useful rescue toolkit you can use to put an existing, floundering strategy back on track. If you don't see an opportunity to write new strategy within your organization, then there's still probably room to flex your operational skill.

2 days ago 2 votes
Age is a problem at Apple

The average age of Apple's board members is 68! Nearly half are over 70, and the youngest is 63. It's not much better with the executive team, where the average age hovers around 60. I'm all for the wisdom of our elders, but it's ridiculous that the world's premier tech company is now run by a gerontocracy. And I think it's starting to show.

The AI debacle is just the latest example. I can picture the board presentation on Genmoji: "It's what the kids want these days!!". It's a dumb feature because nobody on Apple's board or in its leadership has probably ever used it outside a quick demo.

I'm not saying older people can't be an asset. Hell, at 45, I'm no spring chicken myself in technology circles! But you need a mix. You need to blend fluid and crystallized intelligence. You need some people with a finger on the pulse, not just some barely keeping one. Once you see this, it's hard not to view slogans like "AI for the rest of us" through that lens. It's as if AI is like programming a VCR, and you need the grandkids to come over and set it up for you.

By comparison, the average age on Meta's board is 55. They have three members in their 40s. Steve Jobs was 42 when he returned to Apple in 1997. He was 51 when he introduced the iPhone. And he was gone, from Apple and the world, at 56.

Apple literally needs some fresh blood to turn the ship around.

3 days ago 3 votes
Upgrading to Raspberry Pi OS 2024-11-19

I upgraded my Raspberry Pi 400 to 64-bit Raspberry Pi OS 2024-11-19, based on Debian Bookworm 12.9.

The desktop of 64-bit Raspberry Pi OS 2024-11-19 on a Raspberry Pi 400.

Since I had no files to preserve, the process was surprisingly easy, as I went with a full installation. And this time I finally used the Raspberry Pi Imager. When I first set up the Pi 400, my only other desktop computer was a Chromebox that couldn't run the Imager on Crostini Linux. This imposed a less convenient network installation which, combined with a subtle bug, made me waste a couple of hours over three installation attempts. Now I have a real Linux PC that runs the Imager just fine.

Downloading Raspberry Pi OS, configuring it, and flashing the microSD card went smoothly. When I booted the Pi 400 from the card I was greeted by a ready-to-run system.

On the newly upgraded system, building Medley Interlisp from source for X11 took an hour or so. The environment still runs well with the labwc Wayland compositor that now ships with Raspberry Pi OS. But, like the previous Raspberry Pi OS release, Medley doesn't run under TigerVNC because of a connection issue.

#pi400 #linux

4 days ago 4 votes