I debuted these principles in my axe-con 2025 talk, “It is designed to break your heart: Cultivating a harm reduction mindset as an accessibility practitioner.” They are adapted from The National Harm Reduction Coalition’s original eight principles. My adapted principles reflect philosophical and behavioral changes I’ve been cultivating to try to offset, and defend against, the systemic trauma—and the resulting depression, burnout, and other negative experiences—you can incur when doing digital accessibility work. If you have the time, I’d advise reading the original eight principles. I also recommend watching or reading the talk. I say this not to be self-promotional, but because there is a lot of context that will be helpful in understanding: How these adapted principles came to be, and also The larger mindset shifts and practices that led to their creation. The principles There are eight principles in total. They are delivered in the context of how to approach...
3 days ago


More from Eric Bailey

Evaluating overlay-adjacent accessibility products

I get asked about my opinion on overlay-adjacent accessibility products with enough frequency that I thought it could be helpful to write about it. There’s a category of third party products out there that are almost, but not quite, an accessibility overlay. By this I mean that they seem a little less predatory, and a little more grounded in terms of the promises they make. Some of these products are widgets. Some are browser extensions. Some are apps. Some are an odd fourth thing. Sometimes it’s a case of a solutioneering disability dongle grift, sometimes it’s a case of good intentions executed in a less-than-optimal way, and sometimes it’s something legitimately helpful. Oftentimes it’s something that lies in the middle area of all of this. Many of them also have some sort of “AI” integration, which is the unfortunate upsell du jour we have to collectively endure for the time being. The rubric I use to evaluate these products remains very similar to how I scrutinize overlays. Hopefully it’s something that can be helpful for your own efforts. Should the product’s functionality be patented? I’m not very happy with the idea that the mechanism to operate something in an accessible way is inhibited by way of legal restriction. This artificially limits who can use it, which is in opposition to the overall mission of digital accessibility. Ideally the technology is the free bit, and the service that facilitates it is what generates the profit. Do I need to subscribe to use it? A subscription-based model is a great way to run a business, but you don’t need to pay a recurring fee to use an accessible website. The nature of the web’s technology means it can be operated via keyboard, voice control, and other assistive technology if constructed properly. Workarounds and community support also exist for some things where it’s not built well. Here I’d also like you to consider the disability tax, and how that factors into a rental model. It’s not great. Does the browser or operating system already have this functionality? A lot of the time this boils down to an issue of discovery, digital literacy, or identity. As touched on in the previous section, browsers and operating systems offer a lot to help you self-serve. Notable examples are reading mode, on-screen narration, color filters, interface and text zoom, and forced color inversion. Can it be used across multiple experiences, or just one website? Stability and predictability of operation and output are vital for technology like this. It’s why I am so bullish on utilizing existing browser and operating system features. Products built to “enhance” the accessibility of a single website or app can’t contribute towards this. Ironically, their presence may actually contribute friction towards someone’s existing method of using things. A tricky little twist here is that products that target a single website are often advertised towards the website owner, and not the people who will be using said website. Can I use the keyboard to operate it? I’ve gotten in the habit of pressing Tab a few times when I first check out the product’s website and see if anything happens. It’s a quick and easy test to see if the company walks the walk in addition to talking the talk. Here, I regrettably encounter missing focus indicators and non-semantic interactive controls more often than not. I might also sometimes run the homepage through axe DevTools, to see if there are other egregious errors. I then try to use the product itself with a keyboard if a demo is offered. 
I am usually left wanting here. How reliable is the AI? There are two broad considerations here: How reliable is the output? How can bias affect someone’s interpretation of things? While I am a skeptic, I can also acknowledge that there are some good use cases for LLMs and related technology when it comes to disability. I think about the reliability of the output in terms of the “assistive” part of assistive technology. By this, I mean it actually helps you do what you need to get done. Here, I’d point to Salma Alam-Naylor’s experience with newer startups in this space versus established, community-supported solutions. Then consider LLM-based image description products. Here we want to make sure the content is accurate and relevant. Remember that image descriptions are the mechanism that some people rely on to help them understand the world. If that description is not accurate, it impacts how they form an understanding of their environment. A step past that thought is the biases inherent in, and perpetuated by, LLM-based technology. I recall Ben Myers’ thoughts on implicit, hegemonic normalization, as well as the sobering truth that this technology can exert influence over its users’ worldviews at scale. Can the company be trusted with your data? A lot of assistive technology is purposely designed to not announce the fact that it is being used. This is to stave off things like discrimination or ineffective, separate-yet-equal “accessibility only” sites. There’s also the murky world of data brokerage, and whether the company is selling off this information or not. AccessiBe comes to mind here, and not in a good way. Also consider if the product has access to everything you visit and interact with, and who has access to that information. As a companion concern, it is also worth considering the product’s data security practices—or lack thereof. Here, I would like to point out that startups tend to deprioritize this boring kind of infrastructure work in favor of feature creation. Not having any personal information present in a system is the best way to guard against its theft. Also know that there is no way to undo a data breach once it occurs. Leaked information stays leaked. Will the company last? Speaking of startups, know that more fail than succeed. Are you prepared for an outcome where the product you rely on is no longer updated or supported because the company that made it went out of business? It could also be a case where the company still exists, but ceases to support the product you use. Here, know that sometimes these companies will actively squash attempts at community-based resurrection and support of the service because it represents potential liability. This concern is another reason why I’m bullish on operating system and browser functionality. They have a lot more resiliency and focus on the long view in this particular area. But also I’m not the arbiter of who can use what. In the spirit of “the best camera is the one you have on you:” if something works for your specific access needs, by all means use it.

3 weeks ago 18 votes
Stanislav Petrov

A lieutenant colonel in the Soviet Air Defense Forces prevented the end of human civilization on September 26th, 1983. His name was Stanislav Petrov. Protocol dictated that the Soviet Union would retaliate against any nuclear strikes sent by the United States. This was a policy of mutually assured destruction, a doctrine that compels a horrifying logical conclusion. The second- and third-stage effects of this type of exchange would be even more catastrophic. Allies for each side would likely be pulled into the conflict. The resulting nuclear winter was projected to lead to 2 billion deaths due to starvation. This is to say nothing about those who would have been unfortunate enough to have survived. Petrov’s job was to monitor Oko, the computerized warning system built to centralize Soviet satellite communications. Around midnight, he received a report that one of the satellites had detected the infrared signature of a single launch of a United States ICBM. While Petrov was deciding what to do about this report, the system detected four more incoming missile launches. He had minutes to make a choice about what to do. It is impossible to imagine the amount of pressure placed on him at this moment. Source: Stanislav Petrov, Soviet officer credited with averting nuclear war, dies at 77 by Schwartzreport. Petrov lived in a world of deterministic systems. The technologies that powered these warning systems have outputs that are guaranteed, provided the proper inputs are supplied. However, deterministic does not mean infallible. The only reason you are alive and reading this is because Petrov understood that the systems he observed were capable of error. He was suspicious of what he was seeing reported, and chose not to escalate a retaliatory strike. There were two factors guiding his decision: A surprise attack would most likely have used hundreds of missiles, and not just five. The allegedly foolproof Oko system was new and prone to errors. An error in a deterministic system can still lead to expected outputs being generated. For the Oko system, infrared reflections of the sun shining off the tops of clouds created a false positive that was interpreted as detection of a nuclear launch event. Source: US-K History by Kosmonavtika. The concept of erroneous truth is a deep thing to internalize, as computerized systems are presented as omniscient, indefective, and absolute. Petrov’s rewards for this action were reprimands, reassignment, and denial of promotion. This was likely for embarrassing his superiors by the politically inconvenient shedding of light on issues with the Oko system. A coerced early retirement caused a nervous breakdown, likely from having to grapple with the weight of his decision. It was only in the 1990s—after the fall of the Soviet Union—that his actions were discovered internationally and celebrated. Stanislav Petrov was given the recognition that he deserved, including being honored by the United Nations, being awarded the Dresden Peace Prize, being featured in a documentary, and being able to visit a Minuteman Missile silo in the United States. On January 31st, 2025, OpenAI struck a deal with the United States government to use its AI product for nuclear weapon security. It is unclear how this technology will be used, where, and to what extent. It is also unclear how OpenAI’s systems function, as they are black box technologies. What is known is that LLM-generated responses—the product OpenAI sells—are non-deterministic. 
Non-deterministic systems don’t have guaranteed outputs from their inputs. In addition, LLM-based technology hallucinates—it invents content with no self-knowledge that it is a falsehood. Computerized non-deterministic systems are also perceived as authoritative, the same as their deterministic peers. It is not a question of how the output is generated; it is one of the output being perceived to come from a machine. These are terrifying things to know. Consider not only the systems this technology is being applied to, but also the thoughtless speed of their integration. Then consider how we’ve historically been conditioned and rewarded to interpret the output of these systems, and then how we perceive and treat skeptics. We don’t live in a purely deterministic world of technology anymore. Stanislav Petrov died on September 18th, 2017, before this change occurred. I would be incredibly curious to know his thoughts about our current reality, as well as the increasing abdication of human monitoring of automated systems in favor of notably biased, supposed “AI solutions.” In acknowledging Petrov’s skepticism in a time of mania and political instability, we would do well to remember a quote from former U.S. Secretary of Defense William J. Perry’s memoir about the incident: [Oko’s false positives] illustrates the immense danger of placing our fate in the hands of automated systems that are susceptible to failure and human beings who are fallible.

a month ago 20 votes
GitHub’s updated Commits page and the interactive list component

GitHub has updated the page template used to list Commits on a repository. Central to this experience is an interactive list component that I was responsible for architecting. This work was done alongside input from James Scholes, whose guidance was instrumental to the effort’s success. An interactive list is a construct that’s more commonplace on desktop applications than the web. That does not mean its approach is forbidden from being used for web experiences, however. What concerns does an interactive list address? The main concern an interactive list addresses is when each discrete item in a series contains multiple interactive child elements. Navigating through every interactive child element placed within each parent list item can be a tedious enough chore that it makes the effort a non-starter. For example, if the list has ten items and each item has seven interactive child elements, someone may need to perform up to seventy Tab keypresses to get what they need. That’s an exhausting experience to endure. It could also be agonizing. Think motor control disabilities, where individual movements in aggregate can exceed someone’s pain tolerance threshold. Making each list item’s container itself focusable and traversable addresses this problem, as it lowers the number of keypresses someone needs to use. It also allows you to quickly jump to the start or end of the list for even more navigation options. On GitHub, navigating an interactive list via your keyboard can be accomplished by pressing: Tab: Places focus on the interactive list item that last received focus. Defaults to the first item in the list if the list was previously not interacted with. Down: Moves focus to the next list item, if present. Up: Moves focus to the previous list item, if present. End: Moves focus to the last list item in the interactive list. Home: Moves focus to the first list item in the interactive list. There’s a trick here: We want to make sure each list item’s announcement contains enough information that someone can make an informed choice when navigating via a screen reader. We also do not want to make the announcement so verbose that it slows down the navigation process. For example, we only include the commit title when navigating via list item on the Commits page. For an Issue, we use: The Issue title, Its status, and Its author (there is currently a bug here, we’re working on fixing it). There is an intentionality behind the order of content in this announcement, as we want to include the most pertinent information first. This, in turn, helps people navigating by list item announcement make more informed choices faster. This lets us know: what the problem is, whether it has been dealt with yet, and who found the problem. We also use the term “More information available below” to signal that someone can explore the list item’s child content in more detail. This is accomplished via pressing: Tab: Navigates forwards through each child interactive element in sequence. Shift + Tab: Navigates backwards through each child interactive element in sequence. Esc: Moves focus out of the child interactive elements and places it back on the parent list item itself. Examples of child content that someone could encounter are an Issue’s author, its labels, linked Pull Requests, comment tally, and assignees. Problems The use of the phrase “More information available below” does not sit well with me, despite being the person who oversaw its inclusion. 
There are a couple of reasons here: First, I’m normally loath to hardcode interaction hints for screen readers. The interactive list component is a bit of an exception to that rule. It is an uncommon interaction pattern on the web, so the hint needs to be included until efforts to formalize it both: Manifest, and Get widespread support from assistive technology vendors. Without these two things, I fear that blind and low vision individuals will not be able to fully utilize the experience the same way their peers can. Second, the hint phrasing itself isn’t that great. The location-based term “below” is shorthand to try and communicate that there’s subsequent child content that is related to the list item’s main content. While “subsequent child content that is related to the list item’s main content” is more descriptive, it’s an earful. I am very much open to suggestions for a replacement phrase. And this potential for change sets up other things that weigh on me. Bigger problems Using this interactive list component on the Commits page template means there are now two main areas on GitHub where the component is present. The second is the lists of repository Issues for logged-in accounts. Large, structural changes to a design’s underlying semantics disrupt the mental model and muscle memory of the many people who use screen readers to operate an experience. It’s an act that I’m always nervous about undertaking. The calculated bet here is that the prominence of the components on these high-traffic areas means that understanding how to operate them becomes easier over time. I’ve also hedged that bet by including alternate ways of navigating the interactive list, including baking headings into each Commit and Issue title (the structure of which can be viewed with a tool like HeadingsMap). I do think that this update to each page’s semantic structure is net better than what came before it. However, it is still going to manifest as a large and sudden change for people who use screen readers. And for the record, I view changing the “More information available below” phrasing as another large and disruptive change. Subsequent large and sudden changes are what I want to avoid at all costs. That said, we’re running out the clock on a situation where an interactive list will someday contain non-interactive content. The component’s current approach does not have a great way for people to be aware of, and subsequently read, that kind of content. That’s not great. Because of this inevitability, I would like to replace the list’s interaction approach with the one we’re using for nested/sub-Issues. There are a few reasons for this, but the main ones are: Improving consistency and uniformity of interaction across all of GitHub for this kind of clustering of content. Leaning on more well-known interaction techniques for secondary content within an item by using dialogs instead of Tab keypresses. Providing a mechanism that can more easily handle exploring non-interactive content being placed within a list item. Making these changes would mean a drastic update on top of another drastic update. While I do think it would be a better overall experience, rolling it out would require a lot of careful effort and planning. Even bigger problems In many ways, GitHub is a battleship. It is slow to turn just by virtue of the sheer size and scale of concerns it needs to cover. Enacting my goal of replacing and unifying these kinds of interactions would take time: It would mean petitioning for heavy investment in something that may be perceived as an already “solved” problem. 
It also would require collaboration across multiple siloed product areas, each with their own pre-existing and planned objectives and priorities. I have the gift of hindsight in writing this. The interactive list was originally intended to address just the list of repository Issues. Its usage has since grown to cover more use cases—not all of them actually applicable. This is one of the existential problems of a design system. You can write all the documentation you want, but people are ultimately going to use what they’re going to use, regardless of whether it’s appropriate or not. Replacing or excising misapplied components is another effort that runs counter to organizational priorities. That truth lives hand-in-hand with the need to maintain the overall state of usability for everyone who uses the service. You’re gonna carry that weight Making dramatic changes to core parts of GitHub’s assistive technology user experience, followed by more dramatic changes, then potentially followed by even more dramatic changes is an outcome we’re potentially facing. It is the nature of software—especially websites and web apps—to change. That said, I worry about the overall churn this all could represent. I feel the weight of that responsibility as the person who set this course. I also feel the consequent pressure it exerts. I’ll continue to write about and plead the case internally. However, I worry that I’ve blown my one chance to get things right. I know my colleagues who produce visual designs may also feel this way, but I also think it’s a more acute problem for digital accessibility. I also don’t think that this sort of situation is one that’s talked about that often in accessibility spaces, hence me writing about it. This is to say nothing about quantifying it, either. Centering I’m pretty proud of what we accomplished, but those feelings are moot if all this effort does not serve the people it was intended to. It’s also not about me. Our efforts to be more inclusive may ironically work against us here. How much churn is the point where it’s too much and people are pushed away? To that point, feedback helps. Constructive reports on access barriers and friction are something that can bypass the internal perception of the things I’ve outlined as being seen as non-problems. I am twice heartened when I see reports. First, it is a signal that means someone is still present and cares. Second, there has been renewed internal interest in investing in acting on these user-reported accessibility problems. The work never stops This post is about interactive lists on GitHub, and how to use them. It’s also about: The responsibilities, pressures, and politics of creating complex components like the interactive list and ensuring they are accessible, How these types of components affect the larger, holistic experience of GitHub as a whole, The need to ensure these components actually work for the people they serve, and The value of providing feedback if they don’t. These are powerful things to internalize if you also do this sort of work, but also valuable to keep in mind if you don’t. They have served me well in my journey at GitHub, and I hope they help serve you too.

2 months ago 20 votes
Don’t forget to localize your icons

Former United States president and war criminal George W. Bush gave a speech in Australia, directing a v-for-victory hand gesture at the assembled crowd. It wasn’t received the way he intended. What he failed to realize is that this gesture means a lot of different things to a lot of different people. In Australia, the v-for-victory gesture means the same as giving someone the middle finger in the United States. This is all to say that localization is difficult. Localizing your app, web app, or website is more than just running all your text through Google Translate and hoping for the best. Creating effective, trustworthy communication with language communities means doing the work to make sure your content meets them where they are. A big part of this is learning about, and incorporating, cultural norms into your efforts. Doing so will help you avoid committing any number of unintentional faux pas. In the best case scenario these goofs will create an awkward and potentially funny outcome. In the worst case, they will eradicate any sense of trust you’re attempting to build. Trust There is no magic number for how many mistranslated pieces of content flip the switch from tolerant bemusement to mistrust and anger. Each person running into these mistakes has a different tolerance threshold. Additionally, that threshold is also variable depending on factors such as level of stress, seriousness of the task at hand, prior interactions, etc. If you’re operating a business, loss of trust may mean fewer sales. Loss of trust may have far more serious ramifications if it’s a government service. Let’s also not forget that we are talking about language communities, and not individuals. Word-of-mouth does a lot of heavy lifting here, especially for underserved and historically discriminated-against populations. To that point, reputational harm is also a thing you need to contend with. Because of this, we need to remember all the things that are frequently left out of translation and localization efforts. For this post, I’d like to focus on icons. Iconic We tend to think of icons as immutable glyphs whose metaphors convey platonic functionality and purpose. A little box with an abstract mountain and a rising sun? I bet that lets you insert a picture. And how about a right-facing triangle? Five dollars says it plays something. However, these metaphors start to fall apart when not handled with care and discretion. If your imagery is too abstract it might not read the way it is intended to, especially for more obscure or niche functionality. Similarly, objects or concepts that don’t exist in the demographics you are serving won’t directly translate well. It will take work, but the results can be amazing. An excellent example of accommodation is Firefox OS’ localization efforts with the Fula people. Culture impacts how icons are interpreted, understood, and used, just like all other content. Here, I’d specifically like to call attention to three commonly-found icons whose meanings can be vastly different depending on the person using them. I would also like to highlight something that all three of these icons have in common: they use hand gestures to represent functionality. This makes a lot of sense! We humans have been using our hands to communicate things for about as long as humanity itself has existed. It’s natural to take this communication and apply it to a digital medium. 
That said, we also need to acknowledge that due to their widespread use, these gestures—and therefore the icons that use them—can be interpreted differently by cultures and language communities different from the one that added the icons to the experience. The three icons themselves are thumb’s up, thumb’s down, and the okay hand symbol. Let’s unpack them: Thumb’s up What it’s intended to be used for This icon usually means expressing favor for something. It is typically also a tally, used as a signal for how popular the content is with an audience. Facebook did a lot of heavy lifting here with its Like button. In the same breath I’d also like to say that Facebook is a great example of how ignoring culture when serving a global audience can lead to disastrous outcomes. Who could be insulted by it In addition to expressing favor or approval, a thumb’s up can also be insulting in cultures originating from the following regions (not a comprehensive list): Bangladesh, Some parts of West Africa, Iran, Iraq, Afghanistan, Some parts of Russia, Some parts of Latin America, and Australia, if you also waggle it up and down. It was also not a great gesture to be on the receiving end of in Rome, specifically if you were a downed gladiator at the mercy of the crowd. What you could use instead If it’s a binary “I like this/I don’t like this” choice, consider symbols like stars and hearts. Sparkles are out, because AI has ruined them. I’m also quite partial to just naming the action—after all, the best icon is a text label. Thumb’s down What it’s intended to be used for This icon is commonly paired with a thumb’s up as part of a tally-based rating system. People can express their dislike of the content, which in turn can signal if the content failed to find a welcome reception. Who could be insulted by it A thumb’s down has a near-universal negative connotation, even in cultures where its use is intentional. It is also straight-up insulting in Japan. It may also have gang-related connotations. I’m hesitant to comment on that given how prevalent misinformation is about that sort of thing, but it’s also a good reminder of how symbolism can be adapted in ways we may not initially consider outside of “traditional” channels. Like the thumb’s up gesture, this is also not a comprehensive list. I’m a designer, not an ethnographic researcher. What you could use instead Consider removing outrage-based metrics. They’re easy to abuse and subvert, exploitative, and not psychologically healthy. If you well and truly need that quant data, consider going with a rating scale instead of a combination of thumb’s up and thumb’s down icons. You might also want to consider ditching ratings altogether if you want people to actually read your content, or if you want to encourage more diversity of expression. Okay What it’s intended to be used for This symbol is usually used to represent acceptance or approval. Who could be insulted by it People from Greece may take offense to an okay hand symbol. The gesture might have also offended people in France and Spain when performed by hand, but that may have passed. Who could be threatened by it The okay hand sign has also been subverted by 4chan and co-opted by the White supremacy movement. An okay hand sign’s presence could be read as a threat by a population who is targeted by White supremacist hate. Here, it could be someone using it without knowing. It could also be a dogwhistle put in place by either a bad actor within an organization, or the entire organization itself. 
Thanks to the problem of other minds, the person on the receiving end cannot be sure about the underlying intent. Because of this, the safest option is to just up and leave. What you could use instead Terms like “I understand”, “I accept”, and “acknowledged” all work well here. I’d also be wary of using checkmarks, in that their meaning also isn’t guaranteed. So, what symbols can I use? There is no one true answer here, only degrees of certainty. Knowing what ideas, terms, and images are understood by, accepted by, or offend a culture requires doing research. There is also the fact that the interpretation of these symbols can change over time. To that point, I’d like to point out that pejorative imagery can sometimes become accepted due to constant, unending mass exposure. We won’t go back to using a Swastika to indicate good luck any time soon. However, the homogenization effect of the web’s implicit Western bias means that things like thumb’s up icons everywhere are just something people begrudgingly get used to. This doesn’t mean that we have to capitulate, however! Adapting your iconography to meet a language culture where it’s at can go a long way to demonstrating deep care. Just be sure that the rest of your localization efforts match the care you put into your icons and images. Otherwise it will leave the experience feeling off. An example of this is using imagery that feels natural in the language culture you’re serving, but having awkward and stilted text content. This disharmonious mismatch in tone will be noticed and felt, even if it isn’t concretely tied to any one thing. Different things mean different things in different ways Effective, clear communication that is interpreted as intended is a complicated thing to do. This gets even more intricate when factors like language, culture, and community enter the mix. Taking the time to do research, and also perform outreach to the communities you wish to communicate with, can take a lot of work. But doing so will lead to better experiences, and therefore outcomes, for all involved. Take stock of the images and icons you use as you undertake, or revisit, your localization efforts. There may be more to it than you initially thought.

3 months ago 31 votes

More in programming

AI: Where in the Loop Should Humans Go?

This is a re-publishing of a blog post I originally wrote for work, but wanted on my own blog as well. AI is everywhere, and its impressive claims are leading to rapid adoption. At this stage, I’d qualify it as charismatic technology—something that under-delivers on what it promises, but promises so much that the industry still leverages it because we believe it will eventually deliver on these claims. This is a known pattern. In this post, I’ll use the example of automation deployments to go over known patterns and risks in order to provide you with a list of questions to ask about potential AI solutions. I’ll first cover a short list of base assumptions, and then borrow from scholars of cognitive systems engineering and resilience engineering to list said criteria. At the core of it is the idea that when we say we want humans in the loop, it really matters where in the loop they are. My base assumptions The first thing I’m going to say is that we currently do not have Artificial General Intelligence (AGI). I don’t care whether we have it in 2 years or 40 years or never; if I’m looking to deploy a tool (or an agent) that is supposed to do stuff to my production environments, it has to be able to do it now. I am not looking to be impressed, I am looking to make my life and the system better. Another mechanism I want you to keep in mind is something called the context gap. In a nutshell, any model or automation is constructed from a narrow definition of a controlled environment, which can expand as it gains autonomy, but remains limited. By comparison, people in a system start from a broad situation and narrow definitions down and add constraints to make problem-solving tractable. One side starts from a narrow context, and one starts from a wide one—so in practice, with humans and machines, you end up seeing a type of teamwork where one constantly updates the other: The optimal solution of a model is not an optimal solution of a problem unless the model is a perfect representation of the problem, which it never is.  — Ackoff (1979, p. 97) Because of that mindset, I will disregard all arguments of “it’s coming soon” and “it’s getting better real fast” and instead frame what current LLM solutions are shaped like: tools and automation. As it turns out, there are lots of studies about ergonomics, tool design, collaborative design, where semi-autonomous components fit into sociotechnical systems, and how they tend to fail. Additionally, I’ll borrow from the framing used by people who study joint cognitive systems: rather than looking only at the abilities of what a single person or tool can do, we’re going to look at the overall performance of the joint system. This is important because if you have a tool that is built to be operated like an autonomous agent, you can get weird results in your integration. You’re essentially building an interface for the wrong kind of component—like using a joystick to ride a bicycle. This lens will assist us in establishing general criteria about where the problems will likely be without having to test for every single one and evaluate them on benchmarks against each other. Questions you'll want to ask The following list of questions is meant to act as reminders—abstracting away all the theory from research papers you’d need to read—to let you think through some of the important stuff your teams should track, whether they are engineers using code generation, SREs using AIOps, or managers and execs making the call to adopt new tooling. 
Are you better even after the tool is taken away? An interesting warning comes from studying how LLMs function as learning aides. The researchers found that people who trained using LLMs tended to fail tests more when the LLMs were taken away compared to people who never studied with them, except if the prompts were specifically (and successfully) designed to help people learn. Likewise, it’s been known for decades that when automation handles standard challenges, the operators expected to take over when they reach their limits end up worse off and generally require more training to keep the overall system performant. While people can feel like they’re getting better and more productive with tool assistance, it doesn’t necessarily follow that they are learning or improving. Over time, there’s a serious risk that your overall system’s performance will be limited to what the automation can do—because without proper design, people keeping the automation in check will gradually lose the skills they had developed prior. Are you augmenting the person or the computer? Traditionally successful tools tend to work on the principle that they improve the physical or mental abilities of their operator: search tools let you go through more data than you could on your own and shift demands to external memory, a bicycle more effectively transmits force for locomotion, a blind spot alert on your car can extend your ability to pay attention to your surroundings, and so on. Automation that augments users therefore tends to be easier to direct, and sort of extends the person’s abilities, rather than acting based on preset goals and framing. Automation that augments a machine tends to broaden the device’s scope and control by leveraging some known effects of their environment and successfully hiding them away. For software folks, an autoscaling controller is a good example of the latter. Neither is fundamentally better nor worse than the other—but you should figure out what kind of automation you’re getting, because they fail differently. Augmenting the user implies that they can tackle a broader variety of challenges effectively. Augmenting the computers tends to mean that when the component reaches its limits, the challenges are worse for the operator. Is it turning you into a monitor rather than helping build an understanding? If your job is to look at the tool go and then say whether it was doing a good or bad job (and maybe take over if it does a bad job), you’re going to have problems. It has long been known that people adapt to their tools, and automation can create complacency. Self-driving cars that generally self-drive themselves well but still require a monitor are not effectively monitored. Instead, having AI that supports people or adds perspectives to the work an operator is already doing tends to yield better long-term results than patterns where the human learns to mostly delegate and focus elsewhere. (As a side note, this is why I tend to dislike incident summarizers. Don’t make it so people stop trying to piece together what happened! Instead, I prefer seeing tools that look at your summaries to remind you of items you may have forgotten, or that look for linguistic cues that point to biases or reductive points of view.) Does it pigeonhole what you can look at? When evaluating a tool, you should ask questions about where the automation lands: Does it let you look at the world more effectively? Does it tell you where to look in the world? Does it force you to look somewhere specific? 
Does it tell you to do something specific? Does it force you to do something? This is a bit of a hybrid between “Does it extend you?” and “Is it turning you into a monitor?” The five questions above let you figure that out. As the tool becomes a source of assertions or constraints (rather than a source of information and options), the operator becomes someone who interacts with the world from inside the tool rather than someone who interacts with the world with the tool’s help. The tool stops being a tool and becomes a representation of the whole system, which means whatever limitations and internal constraints it has are then transmitted to your users. Is it a built-in distraction? People tend to do multiple tasks over many contexts. Some automated systems are built with alarms or alerts that require stealing someone’s focus, and unless they truly are the most critical thing their users could give attention to, they are going to be an annoyance that can lower the effectiveness of the overall system. What perspectives does it bake in? Tools tend to embody a given perspective. For example, AIOps tools that are built to find a root cause will likely carry the conceptual framework behind root causes in their design. More subtly, these perspectives are sometimes hidden in the type of data you get: if your AIOps agent can only see alerts, your telemetry data, and maybe your code, it will rarely be a source of suggestions on how to improve your workflows because that isn’t part of its world. In roles that are inherently about pulling context from many disconnected sources, how on earth is automation going to make the right decisions? And moreover, who’s accountable for when it makes a poor decision on incomplete data? Surely not the buyer who installed it! This is also one of the many ways in which automation can reinforce biases—not just based on what is in its training data, but also based on its own structure and what inputs were considered most important at design time. The tool can itself become a keyhole through which your conclusions are guided. Is it going to become a hero? A common trope in incident response is heroes—the few people who know everything inside and out, and who end up being necessary bottlenecks to all emergencies. They can’t go away for vacation, they’re too busy to train others, they develop blind spots that nobody can fix, and they can’t be replaced. To avoid this, you have to maintain a continuous awareness of who knows what, and crosstrain each other to always have enough redundancy. If you have a team of multiple engineers and you add AI to it, having it do all of the tasks of a specific kind means it becomes a de facto hero to your team. If that’s okay, be aware that any outages or dysfunction in the AI agent would likely have no practical workaround. You will essentially have offshored part of your ops. Do you need it to be perfect? What a thing promises to be is never what it is—otherwise AWS would be enough, and Kubernetes would be enough, and JIRA would be enough, and the software would work fine with no one needing to fix things. That just doesn’t happen. Ever. Even if it’s really, really good, it’s gonna have outages and surprises, and it’ll mess up here and there, no matter what it is. We aren’t building an omnipotent computer god, we’re building imperfect software. You’ll want to seriously consider whether the tradeoffs you’d make in terms of quality and cost are worth it, and this is going to be a case-by-case basis. 
Just be careful not to fix the problem by adding a human in the loop that acts as a monitor! Is it doing the whole job or a fraction of it? We don’t notice major parts of our own jobs because they feel natural. A classic pattern here is one of AIs getting better at diagnosing patients, except the benchmarks are usually run on a patient chart where most of the relevant observations have already been made by someone else. Similarly, we often see AI pass a test with flying colors while it still can’t be productive at the job the test represents. People in general have adopted a model of cognition based on information processing that’s very similar to how computers work (get data in, think, output stuff, rinse and repeat), but for decades, there have been multiple disciplines that looked harder at situated work and cognition, moving past that model. Key patterns of cognition are not just in the mind, but are also embedded in the environment and in the interactions we have with each other. Be wary of acquiring a solution that solves what you think the problem is rather than what it actually is. We routinely show we don’t accurately know the latter. What if we have more than one? You probably know how straightforward it can be to write a toy project on your own, with full control of every refactor. You probably also know how this stops being true as your team grows. As it stands today, a lot of AI agents are built within a snapshot of the current world: one or few AI tools added to teams that are mostly made up of people. By analogy, this would be like everyone selling you a computer assuming it were the first and only electronic device inside your household. Problems arise when you go beyond these assumptions: maybe AI that writes code has to go through a code review process, but what if that code review is done by another unrelated AI agent? What happens when you get to operations and common mode failures impact components from various teams that all have agents empowered to go fix things to the best of their ability with the available data? Are they going to clash with people, or even with each other? Humans also have that ability and tend to solve it via processes and procedures, explicit coordination, announcing what they’ll do before they do it, and calling upon each other when they need help. Will multiple agents require something equivalent, and if so, do you have it in place? How do they cope with limited context? Some changes that cause issues might be safe to roll back, some not (maybe they include database migrations, maybe it is better to be down than corrupting data), and some may contain changes that rolling back wouldn’t fix (maybe the workload is controlled by one or more feature flags). Knowing what to do in these situations can sometimes be understood from code or release notes, but some situations can require different workflows involving broader parts of the organization. A risk of automation without context is that if you have situations where waiting or doing little is the best option, then you’ll need to either have automation that requires input to act, or a set of actions to quickly disable multiple types of automation as fast as possible. Many of these may exist at the same time, and it becomes the operators’ jobs to not only maintain their own context, but also maintain a mental model of the context each of these pieces of automation has access to. The fancier your agents, the fancier your operators’ understanding and abilities must be to properly orchestrate them. 
The more surprising your landscape is, the harder it can become to manage with semi-autonomous elements roaming around. After an outage or incident, who does the learning and who does the fixing? One way to track accountability in a system is to figure out who ends up having to learn lessons and change how things are done. It’s not always the same people or teams, and generally, learning will happen whether you want it or not. This is more of a rhetorical question right now, because I expect that in most cases, when things go wrong, whoever is expected to monitor the AI tool is going to have to steer it in a better direction and fix it (if they can); if it can’t be fixed, then the expectation will be that the automation, as a tool, will be used more judiciously in the future. In a nutshell, if the expectation is that your engineers are going to be doing the learning and tweaking, your AI isn’t an independent agent—it’s a tool that cosplays as an independent agent. Do what you will—just be mindful All in all, none of the above questions flat out say you should not use AI, nor where exactly in the loop you should put people. The key point is that you should ask that question and be aware that just adding whatever to your system is not going to substitute workers away. It will, instead, transform work and create new patterns and weaknesses. Some of these patterns are known and well-studied. We don’t have to go rushing to rediscover them all through failures as if we were the first to ever automate something. If AI ever gets so good and so smart that it’s better than all your engineers, it won’t make a difference whether you adopt it only once it’s good. In the meanwhile, these things do matter and have real impacts, so please design your systems responsibly. If you’re interested to know more about the theoretical elements underpinning this post, the following references—on top of whatever was already linked in the text—might be of interest: Books: Joint Cognitive Systems: Foundations of Cognitive Systems Engineering by Erik Hollnagel Joint Cognitive Systems: Patterns in Cognitive Systems Engineering by David D. Woods Cognition in the Wild by Edwin Hutchins Behind Human Error by David D. Woods, Sydney Dekker, Richard Cook, Leila Johannesen, Nadine Sarter Papers: Ironies of Automation by Lisanne Bainbridge The French-Speaking Ergonomists’ Approach to Work Activity by Daniellou How in the World Did We Ever Get into That Mode? Mode Error and Awareness in Supervisory Control by Nadine Sarter Can We Ever Escape from Data Overload? A Cognitive Systems Diagnosis by David D. Woods Ten Challenges for Making Automation a “Team Player” in Joint Human-Agent Activity by Gary Klein and David D. Woods MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination by Sidney Dekker Managing the Hidden Costs of Coordination by Laura Maguire Designing for Expertise by David D. Woods The Impact of Generative AI on Critical Thinking by Lee et al.

2 days ago 5 votes
AMD YOLO

AMD is sending us the two MI300X boxes we asked for. They are in the mail. It took a bit, but AMD passed my cultural test. I now believe they aren’t going to shoot themselves in the foot on software, and if that’s true, there’s absolutely no reason they should be worth 1/16th of NVIDIA. CUDA isn’t really the moat people think it is, it was just an early ecosystem. tiny corp has a fully sovereign AMD stack, and soon we’ll port it to the MI300X. You won’t even have to use tinygrad proper, tinygrad has a torch frontend now. Either NVIDIA is super overvalued or AMD is undervalued. If the petaflop gets commoditized (tiny corp’s mission), the current situation doesn’t make any sense. The hardware is similar, AMD even got the double throughput Tensor Cores on RDNA4 (NVIDIA artificially halves this on their cards, soon they won’t be able to). I’m betting on AMD being undervalued, and that the demand for AI has barely started. With good software, the MI300X should outperform the H100. In for a quarter million. Long term. It can always dip short term, but check back in 5 years.

2 days ago 2 votes
whippet lab notebook: untagged mallocs, bis

Earlier this week I wrote about Guile, Whippet, and untagged allocations, and at the time I didn’t have a solution in hand. But now I do! Today’s note is about how we can support untagged allocations of a few different kinds in Whippet’s mostly-marking collector.

Why bother supporting untagged allocations at all? Well, if I had my way, I wouldn’t; I would just slog through Guile and fix all uses to be tagged. There are only a finite number of use sites and I could get to them all in a month or so. The problem comes for uses of scm_gc_malloc from outside libguile itself, in C extensions and embedding programs. These users are loath to adapt to any kind of change, and garbage-collection-related changes are the worst. So, somehow, we need to support these users if we are not to break the Guile community.

The problem with scm_gc_malloc, though, is that it is missing an expression of intent, notably as regards tagging. You can use it to allocate an object that has a tag and thus can be traced precisely, or you can use it to allocate, well, anything else. I think we will have to add an API for the tagged case and assume that anything that goes through scm_gc_malloc is requesting an untagged, conservatively-scanned block of memory. Similarly for scm_gc_malloc_pointerless: you could be allocating a tagged object that happens to not contain pointers, or you could be allocating an untagged array of whatever. A new API is needed there too for pointerless untagged allocations.

Recall that the mostly-marking collector can be built in a number of different ways: it can support conservative and/or precise roots, it can trace the heap precisely or conservatively, it can be generational or not, and the collector can use multiple threads during pauses or not. Consider a basic configuration with precise roots. You can make tagged pointerless allocations just fine: the trace function for that tag is just trivial. You would like to extend the collector with the ability to make untagged pointerless allocations, for raw data. How to do this?

Consider first that when the collector goes to trace an object, it can’t use bits inside the object to discriminate between the tagged and untagged cases. Fortunately though, the main space of the mostly-marking collector has one metadata byte for each 16 bytes of payload. Of those 8 bits, 3 are used for the mark (five different states, allowing for future concurrent tracing), two for the precise field-logging write barrier, one to indicate whether the object is pinned or not, and one to indicate the end of the object, so that we can determine object bounds just by scanning the metadata byte array. That leaves 1 bit, and we can use it to indicate untagged pointerless allocations. Hooray!

However there is a wrinkle: when Whippet decides that it should evacuate an object, it tracks the evacuation state in the object itself; the embedder has to provide an implementation of a little state machine, allowing the collector to detect whether an object is forwarded or not, to claim an object for forwarding, to commit a forwarding pointer, and so on. We can’t do that for raw data, because all bit states belong to the object, not the collector or the embedder. So, we have to set the “pinned” bit on the object, indicating that these objects can’t move. We could in theory manage the forwarding state in the metadata byte, but we don’t have the bits to do that currently; maybe some day. For now, untagged pointerless allocations are pinned.
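To make the metadata-byte layout above a little more concrete, here is a small illustrative sketch in Python (Whippet itself is written in C). The bit counts come from the description above, but the specific bit positions, constant names, and helper functions are my own assumptions, not the collector’s actual layout:

MARK_MASK        = 0b0000_0111  # 3 bits: five mark states, with room for concurrent tracing
BARRIER_MASK     = 0b0001_1000  # 2 bits: precise field-logging write barrier
PINNED_BIT       = 0b0010_0000  # 1 bit: object may not be moved by evacuation
END_BIT          = 0b0100_0000  # 1 bit: marks the end of an object in the metadata byte array
UNTAGGED_RAW_BIT = 0b1000_0000  # the 1 remaining bit: untagged, pointerless (raw) data

def allocate_untagged_pointerless(meta: int) -> int:
    # Raw data can't participate in the embedder's forwarding state machine,
    # so these allocations are also pinned.
    return meta | UNTAGGED_RAW_BIT | PINNED_BIT

def needs_field_tracing(meta: int) -> bool:
    # Tracing can skip known-pointerless objects entirely.
    return not (meta & UNTAGGED_RAW_BIT)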
You might also want to support untagged allocations that contain pointers to other GC-managed objects. In this case you would want these untagged allocations to be scanned conservatively. We can do this, but if we do, it will pin all objects.

Thing is, conservative stack roots is a kind of a sweet spot in language run-time design. You get to avoid constraining your compiler, you avoid a class of bugs related to rooting, but you can still support compaction of the heap. How is this, you ask? Well, consider that you can move any object for which we can precisely enumerate the incoming references. This is trivially the case for precise roots and precise tracing. For conservative roots, we don’t know whether a given edge is really an object reference or not, so we have to conservatively avoid moving those objects. But once you are done tracing conservative edges, any live object that hasn’t yet been traced is fair game for evacuation, because none of its predecessors have yet been visited. But once you add conservatively-traced objects back into the mix, you don’t know when you are done tracing conservative edges; you could always discover another conservatively-traced object later in the trace, so you have to pin everything.

The good news, though, is that we have gained an easier migration path. I can now shove Whippet into Guile and get it running even before I have removed untagged allocations. Once I have done so, I will be able to allow for compaction / evacuation; things only get better from here. Also as a side benefit, the mostly-marking collector’s heap-conservative configurations are now faster, because we have metadata attached to objects which allows tracing to skip known-pointerless objects. This regains an optimization that BDW has long had via its GC_malloc_atomic, used in Guile since time out of mind.

With support for untagged allocations, I think I am finally ready to start getting Whippet into Guile itself. Happy hacking, and see you on the other side!

2 days ago 2 votes
Creating static map images with OpenStreetMap, Web Mercator, and Pillow

I’ve been working on a project where I need to plot points on a map. I don’t need an interactive or dynamic visualisation – just a static map with coloured dots for each coordinate. I’ve created maps on the web using Leaflet.js, which load map data from OpenStreetMap (OSM) and support zooming and panning – but for this project, I want a standalone image rather than something I embed in a web page. I want to put in coordinates, and get a PNG image back.

This feels like it should be straightforward. There are lots of Python libraries for data visualisation, but it’s not an area I’ve ever explored in detail. I don’t know how to use these libraries, and despite trying I couldn’t work out how to accomplish this seemingly simple task. I made several attempts with libraries like matplotlib and plotly, but I felt like I was fighting the tools. Rather than persist, I wrote my own solution with “lower level” tools.

The key was a page on the OpenStreetMap wiki explaining how to convert lat/lon coordinates into the pixel system used by OSM tiles. In particular, it allowed me to break the process into two steps:

1. Get a “base map” image that covers the entire world
2. Convert lat/lon coordinates into xy coordinates that can be overlaid on this image

Let’s go through those steps.

Get a “base map” image that covers the entire world

Let’s talk about how OpenStreetMap works, and in particular their image tiles. If you start at the most zoomed-out level, OSM represents the entire world with a single 256×256 pixel square. This is the Web Mercator projection, and you don’t get much detail – just a rough outline of the world. We can zoom in, and this tile splits into four new tiles of the same size. There are twice as many pixels along each edge, and each tile has more detail. Notice that country boundaries are visible now, but we can’t see any names yet. We can zoom in even further, and each of these tiles split again. There still aren’t any text labels, but the map is getting more detailed and we can see small features that weren’t visible before.

You get the idea – we could keep zooming, and we’d get more and more tiles, each with more detail. This tile system means you can get detailed information for a specific area, without loading the entire world. For example, if I’m looking at street information in Britain, I only need the detailed tiles for that part of the world. I don’t need the detailed tiles for Bolivia at the same time.

OpenStreetMap will only give you 256×256 pixels at a time, but we can download every tile and stitch them together, one-by-one. Here’s a Python script that enumerates all the tiles at a particular zoom level, downloads them, and uses the Pillow library to combine them into a single large image:

#!/usr/bin/env python3
"""
Download all the map tiles for a particular zoom level from OpenStreetMap,
and stitch them into a single image.
"""

import io
import itertools

import httpx
from PIL import Image

zoom_level = 2

width = 256 * 2**zoom_level
height = 256 * (2**zoom_level)

im = Image.new("RGB", (width, height))

for x, y in itertools.product(range(2**zoom_level), range(2**zoom_level)):
    resp = httpx.get(f"https://tile.openstreetmap.org/{zoom_level}/{x}/{y}.png", timeout=50)
    resp.raise_for_status()

    im_buffer = Image.open(io.BytesIO(resp.content))
    im.paste(im_buffer, (x * 256, y * 256))

out_path = f"map_{zoom_level}.png"
im.save(out_path)

print(out_path)

The higher the zoom level, the more tiles you need to download, and the larger the final image will be. 
I ran the stitching script up to zoom level 6, and this is the data involved:

Zoom level    Number of tiles    Pixels         File size
0             1                  256×256        17.1 kB
1             4                  512×512        56.3 kB
2             16                 1024×1024      155.2 kB
3             64                 2048×2048      506.4 kB
4             256                4096×4096      2.7 MB
5             1,024              8192×8192      13.9 MB
6             4,096              16384×16384    46.1 MB

I can just about open that zoom level 6 image on my computer, but it’s struggling. I didn’t try opening zoom level 7 – that includes 16,384 tiles, and I’d probably run out of memory. For most static images, zoom level 3 or 4 should be sufficient – I ended up using a base map from zoom level 4 for my project.

It takes a minute or so to download all the tiles from OpenStreetMap, but you only need to request them once, and then you have a static image you can use again and again. This is a particularly good approach if you want to draw a lot of maps. OpenStreetMap is provided for free, and we want to be respectful users of the service. Downloading all the map tiles once is more efficient than making repeated requests for the same data.

Overlay lat/lon coordinates on this base map

Now that we have an image with a map of the whole world, we need to overlay our lat/lon coordinates as points on this map.

I found instructions on the OpenStreetMap wiki which explain how to convert GPS coordinates into a position on the unit square, which we can in turn add to our map. They outline a straightforward algorithm, which I implemented in Python:

import math


def convert_gps_coordinates_to_unit_xy(
    *, latitude: float, longitude: float
) -> tuple[float, float]:
    """
    Convert GPS coordinates to positions on the unit square, which
    can be plotted on a Web Mercator projection of the world.

    This expects the coordinates to be specified in **degrees**.

    The result will be (x, y) coordinates:

    -   x will fall in the range (0, 1).
        x=0 is the left (180° west) edge of the map.
        x=1 is the right (180° east) edge of the map.
        x=0.5 is the middle, the prime meridian.

    -   y will fall in the range (0, 1).
        y=0 is the top (north) edge of the map, at 85.0511 °N.
        y=1 is the bottom (south) edge of the map, at 85.0511 °S.
        y=0.5 is the middle, the equator.
    """
    # This is based on instructions from the OpenStreetMap Wiki:
    # https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#Example:_Convert_a_GPS_coordinate_to_a_pixel_position_in_a_Web_Mercator_tile
    # (Retrieved 16 January 2025)

    # Convert the coordinate to the Web Mercator projection
    # (https://epsg.io/3857)
    #
    #     x = longitude
    #     y = arsinh(tan(latitude))
    #
    x_webm = longitude
    y_webm = math.asinh(math.tan(math.radians(latitude)))

    # Transform the projected point onto the unit square
    #
    #     x = 0.5 + x / 360
    #     y = 0.5 - y / 2π
    #
    x_unit = 0.5 + x_webm / 360
    y_unit = 0.5 - y_webm / (2 * math.pi)

    return x_unit, y_unit

Their documentation includes a worked example using the coordinates of the Hachiko Statue. We can run our code, and check we get the same results:

>>> convert_gps_coordinates_to_unit_xy(latitude=35.6590699, longitude=139.7006793)
(0.8880574425, 0.39385379958274735)

Most users of OpenStreetMap tiles will use these unit positions to select the tiles they need, and then download those images – but we can also position these points directly on the global map. I wrote some more Pillow code that converts GPS coordinates to these unit positions, scales those unit positions to the size of the entire map, then draws a coloured circle at each point on the map.
Here’s the code:

from PIL import Image, ImageDraw

gps_coordinates = [
    # Hachiko Memorial Statue in Tokyo
    {"latitude": 35.6590699, "longitude": 139.7006793},
    # Greyfriars Bobby in Edinburgh
    {"latitude": 55.9469224, "longitude": -3.1913043},
    # Fido Statue in Tuscany
    {"latitude": 43.955101, "longitude": 11.388186},
]

im = Image.open("base_map.png")
draw = ImageDraw.Draw(im)

for coord in gps_coordinates:
    x, y = convert_gps_coordinates_to_unit_xy(**coord)

    radius = 32

    draw.ellipse(
        [
            x * im.width - radius,
            y * im.height - radius,
            x * im.width + radius,
            y * im.height + radius,
        ],
        fill="red",
    )

im.save("map_with_dots.png")

and here’s the map it produces:

[the base map with a red dot at each of the three coordinates]

The nice thing about writing this code in Pillow is that it’s a library I already know how to use, and so I can customise it if I need to. I can change the shape and colour of the points, or crop to specific regions, or add text to the image. I’m sure more sophisticated data visualisation libraries can do all this, and more – but I wouldn’t know how.

The downside is that if I need more advanced features, I’ll have to write them myself. I’m okay with that – trading sophistication for simplicity. I didn’t need to learn a complex visualisation library – I was able to write code I can read and understand. In a world full of AI-generated code, writing something I know I understand feels more important than ever.
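To give one example of that kind of customisation, here’s a rough sketch of cropping the finished map to a region. It assumes the convert_gps_coordinates_to_unit_xy function from earlier is already in scope, and the bounding-box coordinates are illustrative values I’ve picked, not something from the original code.

from PIL import Image

# Assumes convert_gps_coordinates_to_unit_xy() from earlier is defined.

# A rough bounding box around Britain (illustrative values only).
top_left = {"latitude": 59.5, "longitude": -11.0}
bottom_right = {"latitude": 49.5, "longitude": 2.5}

im = Image.open("map_with_dots.png")

x0, y0 = convert_gps_coordinates_to_unit_xy(**top_left)
x1, y1 = convert_gps_coordinates_to_unit_xy(**bottom_right)

# Scale the unit positions to pixel positions; Pillow's crop() takes a
# (left, upper, right, lower) box.
cropped = im.crop((
    int(x0 * im.width),
    int(y0 * im.height),
    int(x1 * im.width),
    int(y1 * im.height),
))
cropped.save("map_britain.png")

Because everything is plain pixel arithmetic on a single image, the same pattern extends to the other tweaks mentioned above.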

2 days ago 5 votes
Introducing the blogroll

This website has a new section: blogroll.opml! A blogroll is a list of blogs - a lightweight way of people recommending other people’s writing on the indieweb.

What it includes

The blogs that I included are just sampled from my many RSS subscriptions that I keep in my Feedbin reader. I’m subscribed to about 200 RSS feeds, the majority of which are dead or only publish once a year. I like that about blogs: there’s no expectation of getting a post out every single day, like there is in more algorithmically-driven media. If someone who I interacted with on the internet years ago decides to restart their writing, that’s great! There’s no reason to prune all the quiet feeds.

The picks are oriented toward what I’m into: niches, blogs that have a loose topic but don’t try to be general-interest, people with distinctive writing. If you import all of the feeds into your RSS reader, you’ll probably end up unsubscribing from some of them, because some of the experimental electric guitar design or bonsai news is not what you’re into. Seems fine, or you’ll discover a new interest!

How it works

Ruben Schade figured out a brilliant way to show blogrolls, and I copied him. Check out his post on styling OPML and RSS with XSLT to XHTML for how it works. My only additions to that scheme were making the blogroll page blend into the rest of the website, by using an include tag with Jekyll to add the basic site skeleton, and adding a link with the download attribute to provide a simple way to download the OPML file.

Oddly, if you try to save the OPML page using Save as… in Firefox, Firefox will save the output of the XSLT transformation rather than the raw source code. XSLT is such an odd and rare part of the web ecosystem that I had to use it.

2 days ago 3 votes