A lot of work goes into making every page and view of a website or webapp look consistent with every other page or view. It’s just good design. Smaller, newer experiences tend to be more uniform than not. This makes sense in that the bulk of the experience is created at the same time and orchestrated by a small group of people. Larger, older sites tend to become slightly less uniform. If you have a keen eye you can spot the differences: an odd tint or shade here, contrasting border radii there, an errant button style unearthing itself from a past redesign, over here a microsite made by a third party vendor, and here the abandoned pet project of a stakeholder who has long since moved on. It is sort of like counting the rings on a tree: here’s where flat design overtook skeuomorphism, here’s where the brand’s primary color went from royal purple to cornflower blue, here’s where we left the harbor of web safe fonts to download some WOFF files, etc. There’s...
a year ago


More from Eric Bailey

Article pitch for your consideration

A thing you should know is that you get put on a lot of lists if you spend a decent chunk of time publishing blog posts on your website. Your website and contact information will be shared around on these lists for the purpose of soliciting you for guest posts. If you’re not familiar with the concept, guest posts are a way for other people to take advantage of your website’s search ranking as a way to divert traffic to other websites.

There are benefits to doing this. The most straightforward one is SEO. Here, outgoing links serve as a heuristic that web search engines look to for quality when weighing results. Guest posts can also have some additional gray hat goals, including audience segmenting and identification via things like UTM-driven campaigns. There are also straight-up cons, such as linking to spyware, cryptominers and other forms of malware, and browser-based zero day exploits.

Curiouser and curiouser

I’ve always been curious about what exactly you get when you agree to a guest post offer. So, I dredged my spam folder and found one that sounded more direct and sincere. Here’s the cold call email pitch:

Subject:

Body: Keeping up with annual home and property maintenance is essential for preserving value and preventing costly repairs down the line. Whether it's inspecting your roof, cleaning gutters, or checking heating systems, regular upkeep can save homeowners time, money, and stress. I’m putting together an article that highlights key tasks for effective yearly maintenance, offering tips to help homeowners protect their biggest investment. I think this piece could really resonate with your audience! Let me know if you'd be interested in featuring it on your website. Thank you so much for your time today! Erin Reynolds P.S. If you’d like to propose an alternative topic, please do so. I would be happy to write on a topic that best suits your website. Don’t want to hear from me again? Please let me know.

My reply reads: Hi Erin, This might be a weird one, but bear with me: My blog is a personal site, and its content is focused on web development and internet culture. I've always wanted to take someone up on this sort of offer, with the resulting article presented in the context of being something you get if you accept a guest post pitch. Is this something you'd be interested in?

Erin took me up on my offer, and wrote about annual home and property maintenance. To her credit, she also did ask me if there was another subject I was interested in, but I figured we could stay the course of the original pitch. She was also prompt and communicative throughout the process, and delivered exactly what was promised. Here is the article in question:

By Erin Reynolds, [diymama.net](https://diymama.net/)

There's a quiet rhythm to living in a well-loved home. If you listen closely, your house speaks to you. Whispers, mostly. The soft drip of a tired faucet, the groan of an HVAC unit that's been running too long, or the gentle scold of a clogged dryer vent. These aren't just annoyances. They're the language of upkeep, and whether you're in your first place or celebrating twenty years in the same four walls, learning to listen—and act—is everything. Annual maintenance isn't just about fixing what's broken. It's about stewardship, about being the kind of homeowner who doesn't wait for the ceiling to leak before checking the roof.
There's something incredibly satisfying about having all your home maintenance documents in one tidy digital folder: no more rummaging through drawers for that appliance manual or the roof warranty. Digitizing receipts, inspection reports, and service invoices gives you a clear, accessible record of everything that's been done and when. Saving these as PDFs makes them universally readable and easy to share, whether you're selling your home or just need to reference them quickly. When you use a tool to create PDF files, you can convert virtually any document into a neat, portable format.

You might not think much about gutters unless they're sagging or spilling over during a thunderstorm, but they play a quiet hero's role in protecting your home. Clean them out once a year—twice if you're under heavy tree cover—and you'll avoid water damage, foundation cracks, and even basement flooding. Take a Saturday with a sturdy ladder, some gloves, and a hose; it's oddly meditative work, like adult sandbox play. And if climbing rooftops isn't your thing, call in the pros; your future self will thank you during the next torrential downpour.

That whoosh of warm or cool air we all take for granted? It comes at a price if neglected. Your heating and cooling system needs a checkup at least once a year, ideally before the seasons shift. A technician can clean the coils, swap the filter, and make sure it's all running like a symphony, not the death rattle of a dying compressor. Skipping this task means flirting with energy inefficiency and sudden breakdowns during a July heatwave or a January cold snap, and no one wants that call to the emergency repair guy at 2 a.m.

Keep Your Appliances Running Like Clockwork

Your appliances work hard, so giving them a little yearly attention goes a long way. Cleaning refrigerator coils, checking for clogged dryer vents, and running cleaning cycles on dishwashers and washing machines helps extend their lifespan and keep things humming. But even with routine care, breakdowns happen, which is why investing in a home warranty can provide peace of mind when repairs crop up. Be sure to research home warranty appliance coverage that includes not only repair costs, but also removal of faulty units and protection against damage caused by previous poor installations.

It's easy to forget the trees in your yard when they're not blooming or dropping leaves, but they're worth an annual walkaround. Look for branches that hang a little too close to power lines or seem precariously poised above your roof. Dead limbs are more than an eyesore; they're projectiles in a windstorm, liabilities when it comes to insurance, and threats to your peace of mind. Hiring an arborist to prune and assess health may not be the most glamorous expense, but it's a strategic one.

This one's for all the window-ledge neglecters and bathroom corner deniers. Every year, old caulk shrinks and cracks, and when it does, water starts to creep in—under tubs, around sinks, behind tile. The same goes for gaps around doors and windows that let in drafts, bugs, and rising utility bills. Re-caulking is a humble chore that wields mighty results, and it's deeply satisfying to peel away the old and lay down a clean bead like you're frosting a cake. A tube of silicone sealant and an hour of your time buys you protection and a crisp finish.

Sediment buildup is sneaky—it collects at the bottom of your water heater like sand in a jar, slowly choking its efficiency and shortening its life. Once a year, flush it out.
It's not hard: a hose, a few steps, and maybe a YouTube video or two for moral support. You'll end up with cleaner water, faster heating, and a unit that isn't harboring the mineral equivalent of a brick in its belly. This is the kind of maintenance no one talks about at dinner parties but everyone should be doing.

Roof problems rarely introduce themselves politely. They crash in during a storm or reveal themselves as creeping stains on the ceiling. But if you check your roof annually—scan for missing shingles, flashing that's come loose, or signs of moss and algae—you stand a better chance of catching issues while they're still small. If you're uneasy climbing up there, a good drone or a pair of binoculars can give you a decent read. Think of it like checking your teeth: do it regularly, and you'll avoid the root canal of roof repair.

There's an entire category of small, often-overlooked chores that quietly hold your house together. Replacing smoke detector batteries, testing GFCI outlets, tightening loose deck boards, cleaning behind the refrigerator, checking for signs of mice in the attic. These aren't major jobs, but ignoring them year after year adds up like debt. Spend a weekend with a checklist and a good podcast and knock them out; it's as much about peace of mind as it is about safety.

Being a homeowner isn't just about mortgages, paint colors, and patio furniture. It's about stewardship, a kind of quiet attentiveness to the place that holds your life. Annual maintenance doesn't come with applause or Instagram likes, but it keeps the scaffolding of your world solid and serene. When you walk into a home that's been cared for, you can feel it—the air is calmer, the floors don't squeak quite as loud, and the house seems to breathe easier, knowing someone's listening.

Explore the world of inclusive design with Eric W. Bailey, where insightful articles, engaging talks, and innovative projects await to inspire your next digital creation!

I mean, this is objectively solid advice!

The appearance of trust

What was nice to note here is that none of the links contained any UTM parameters, and the sites linked out to looked relatively on the up and up. It could be a case of providing relevant and actionable results, or maybe some sort of coordinated quid-pro-quo personal or professional networking.

That said: Be the villain. The deliverable was a Microsoft Word document attached to an email. On the surface this seems completely innocuous—a ton of people write in Word, compared to Markdown. However, in the wrong hands it could definitely be a vector for bad things. Appearing legitimate is a good tactic to build a sense of trust and get me to open that file. From there, all sorts of terrible things could happen. To address this, I extracted the text via a non-Windows operating system installed on a Virtual Machine (VM). I also used a copy of LibreOffice to open the Word document. The idea was to take advantage of the VM’s sandboxing, as well as the less-sophisticated interoperability between the two word processing apps. This allowed for sanitized plain text extraction, without enabling anything else more nefarious.

Sometimes a cigar is just a cigar

I also searched select phrases from the guest post to see if this content was repeated anywhere else, and didn’t find anything. I found other guest posts written by Erin on the web, but that’s the whole point, isn’t it? The internet is getting choked out by LLM-generated slop. Writing was already a tough job, and now it’s gotten even more thankless.
It’s always important to keep in mind that there are people behind the technology. I choose to believe that this is an article written in earnest by someone who cares about DIY home repair and wants to get the word out. So, to Erin: Here’s to your article! And to you, the reader: I hope you learned something new about taking care of the place you live in.

3 months ago 30 votes
Tag, you’re it

I’ve been seeing, and enjoying reading, these posts as they pop up in my RSS reader. Dave Rupert tagged me into the chain, so here we go!

Why did you start blogging in the first place?

With the gift of hindsight, I guess I came up being blog-adjacent. Like Dave, I also had a background in publishing as a youth. I worked for my high school newspaper, and had a part- and then later full-time job at my local newspaper. I also published a weirdo, monkey cheese nerd zine. Its main claims to fame were both pissing off the principal and preventing me from getting dates. Zines are cool and embracing cringe will set you free. I read a ton of blogs, but I never initially thought I’d be someone who published one. This was due to fear of dog-piling criticism, as well as not thinking I had anything meaningful to contribute. Then I got Kivikoskied. Reader, I strongly encourage you to get Kivikoskied yourself. The first post I put on my site was a reaction to the WebAIM Millions report. Reading through it generated enough feelings that I needed a place to put them in a constructive way.

What platform are you using to manage your blog and why did you choose it?

The reaction to the WebAIM Millions report was originally just an HTML page with a dream. That page seemed to resonate with people, so with that encouragement I had to build blogging infrastructure after the fact. That infrastructure wound up being Eleventy. I love Eleventy, and it’s only gotten better since that initial adoption. Zach Leatherman is a mensch, and I sing the praises of his project every chance I can get. I love blogging with Eleventy because it prioritizes speed, stability, and performance. Static web pages generated via Markdown are easy enough to wrangle, and it means I can spend the majority of my time focusing on writing, and not managing dependencies or database updates.

Have you blogged on other platforms before?

WordPress, Jekyll, thoughtbot’s homegrown CMS, and a few others. May you never have to work with Méthode.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?

I’ve evaluated countless writing apps, but find myself coming back to Dropbox Paper. I’m highly distractible, and love to fiddle and tinker. Because of this, I find that Paper’s intentional, designed simplicity keeps me focused and on-task. It’s a shame that we live in the rot economy—where innovation is synonymous with value extraction—and there is apparently no longer enough incentive to maintain it. The post is then exported from Paper as a Markdown file, its contents pasted into VS Code and cleaned up a little bit, metadata added, merged into GitHub, and voilà! Blog post! There are more efficient ways to do this, but I find the ritual of it all soothing.

When do you feel most inspired to write?

I’m going to share a little secret with you: nearly every technical blog post I write is a longform subtweet. By this, I mean I use writing as a way to channel a lot of my anxieties and frustrations into something constructive. I wish I wrote more silly posts, but it’s difficult to prioritize them given the state of things.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

I’ll chip away at a draft for weeks, moving sections around and tweaking language until the entire thing feels cohesive. It’s less perfectionism and more wanting to be sure I’m communicating my thoughts as clearly as I can.
There is also the inevitable flurry of edits that follow hitting publish. I’d bottle that feeling of sudden, panicked clarity if I could.

What are you generally interested in writing about?

The intersection of accessibility, usability, design systems, and the web platform. I’m also a sucker for CSS, tech culture, and a good metaphor.

Who are you writing for?

I write for people who are curious about the web, accessibility, and frontend technology at a medium-to-high level of familiarity. It has been so liberating to not have to explain the basics of accessibility and why it matters anymore. I also write for myself as augmented memory. This, along with services like Pinboard, helps with my memory. Blog posts are also conversations. It is a disservice to both audiences if I’m not weaving a lot of contextually relevant voices into the work as outgoing links.

What’s your favorite post on your blog?

My favorite post on my website is my opus, Accessibility annotation kits only annotate. It took forever to put those thoughts into words. My favorite post on another website is Consider the Tomato. thoughtbot tolerated and encouraged a lot of my shenanigans, and I’m thankful for that.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

This website is in desperate need of a redesign, and the “updating in the open” banner is an albatross around my neck. Ironically, the time I should spend on that is spent writing blog posts. I’m now at the point where I fantasize about taking a month off of work to make said redesign happen. Grinning face with sweat emoji.

Tag ‘em

I’d tag everyone on my RSS reader, if I could. Until then: Adrian Roselli. I’m more or less contractually obligated to include a link to Adrian’s site any time I write about accessibility, as chances are he’s already covered it. Ben Myers. Another favorite accessibility author. I really enjoy his takes on disability and digital accessibility. Jan Maarten. Coworker and samebrain friend, whose longform pieces are always worth reading. Jim Nielsen. Melanie Richards. Melanie is, in a word, prolific. I’m in awe of her digital gardening efforts. Miriam Suzanne. Less a triple threat and more a, uh, quintuple threat? Brilliance at every turn.

3 months ago 33 votes
Harm reduction principles for digital accessibility practitioners

I debuted these principles in my axe-con 2025 talk, It is designed to break your heart: Cultivating a harm reduction mindset as an accessibility practitioner. They are adapted from The National Harm Reduction Coalition’s original eight principles. My adapted principles reflect philosophical and behavioral changes I’ve been cultivating. This is done to try to offset and defend against systemic trauma and its resultant depression, burnout, and other negative experiences you can incur when doing digital accessibility work. If you have the time, I’d advise reading the original eight principles. I also recommend watching or reading the talk. I say this not in a self-promotional way, but because there is a lot of context that will be helpful in understanding: how these adapted principles came to be, and also the larger mindset shifts and practices that led to their creation.

The principles

There are eight principles in total. They are delivered in the context of how to approach evaluating a team’s efforts, and are:

Accepting ableism and minimizing it

Accepting, for better or worse, that ableism is a part of our world and choosing to work to minimize its harmful effects, rather than simply ignoring or condemning it. The original principle this is derived from is: “Accepts, for better or worse, that licit and illicit drug use is part of our world and chooses to work to minimize its harmful effects rather than simply ignore or condemn them.”

Provisioning of resources is non-judgemental

Calling for the non-judgemental provision of services and resources for people who create access barriers within the disciplines in which they work, in order to assist them in reducing harm. The original principle this is derived from is: “Calls for the non-judgmental, non-coercive provision of services and resources to people who use drugs and the communities in which they live in order to assist them in reducing attendant harm.”

Do not minimize or ignore real harm

Does not attempt to minimize or ignore the real and tragic harm and danger that can be created by inaccessible experiences. The original principle this is derived from is: “Does not attempt to minimize or ignore the real and tragic harm and danger that can be associated with illicit drug use.”

Some barriers are worse than others

Understands that how access barriers are created is a complex, multi-faceted phenomenon that encompasses a range of severities from life-endangering to annoying, and acknowledges that some barriers are clearly worse than others. The original principle this is derived from is: “Understands drug use as a complex, multi-faceted phenomenon that encompasses a continuum of behaviors from severe use to total abstinence, and acknowledges that some ways of using drugs are clearly safer than others.”

Social inequalities affect vulnerability

Recognizes that the realities of poverty, class, racism, social isolation, past trauma, sex-based discrimination, and other social inequalities affect both people’s vulnerability to, and capacity for, effectively dealing with creating inaccessible experiences.
The original principle this is derived from is: “Recognizes that the realities of poverty, class, racism, social isolation, past trauma, sex-based discrimination, and other social inequalities affect both people’s vulnerability to and capacity for effectively dealing with drug-related harm.”

Improvement of quality is success

Establishes quality of individual and team life and well-being—not necessarily cessation of all current workflows—as the criteria for successful interventions and policies. The original principle this is derived from is: “Establishes quality of individual and community life and well-being—not necessarily cessation of all drug use—as the criteria for successful interventions and policies.”

Empowering people also helps their peers

Affirms people who create access barriers themselves as the primary agents of reducing the harms of their efforts, and seeks to empower them to share information and support each other in creating and using remediation strategies that are effective for their daily workflows. The original principle this is derived from is: “Affirms people who use drugs themselves as the primary agents of reducing the harms of their drug use and seeks to empower people who use drugs to share information and support each other in strategies which meet their actual conditions of use.”

Ensure that disabled people have a voice in change

Ensures that people who are affected by access barriers, and those who have been affected by your organization’s access barriers, have a real voice in the creation of features and services designed to serve them. The original principle this is derived from is: “Ensures that people who use drugs and those with a history of drug use routinely have a real voice in the creation of programs and policies designed to serve them.”

Reframe

My talk digs deeper into the parallels between the adapted and original principles, as well as the similarities between digital accessibility and harm reduction work. This is in the service of attempting to reframe our efforts. By this, I mean that we are miscategorized participants in imperfect, trauma-generating systems. The change in perspective I am advocating for also compels changes in behavior in order to not only survive, but also flourish as digital accessibility practitioners. The adapted principles are integral to making this effort successful.

4 months ago 40 votes
Evaluating overlay-adjacent accessibility products

I get asked about my opinion on overlay-adjacent accessibility products with enough frequency that I thought it could be helpful to write about it. There’s a category of third party products out there that are almost, but not quite, an accessibility overlay. By this I mean that they seem a little less predatory, and a little more grounded in terms of the promises they make. Some of these products are widgets. Some are browser extensions. Some are apps. Some are an odd fourth thing. Sometimes it’s a case of a solutioneering disability dongle grift, sometimes it’s a case of good intentions executed in a less-than-optimal way, and sometimes it’s something legitimately helpful. Oftentimes it’s something that lies in the middle area of all of this. Many of them also have some sort of “AI” integration, which is the unfortunate upsell du jour we have to collectively endure for the time being. The rubric I use to evaluate these products remains very similar to how I scrutinize overlays. Hopefully it’s something that can be helpful for your own efforts.

Should the product’s functionality be patented?

I’m not very happy with the idea that the mechanism to operate something in an accessible way is inhibited by way of legal restriction. This artificially limits who can use it, which is in opposition to the overall mission of digital accessibility. Ideally the technology is the free bit, and the service that facilitates it is what generates the profit.

Do I need to subscribe to use it?

A subscription-based model is a great way to run a business, but you don’t need to pay a recurring fee to use an accessible website. The nature of the web’s technology means it can be operated via keyboard, voice control, and other assistive technology if constructed properly. Workarounds and community support also exist for some things where it’s not built well. Here I’d also like you to consider the disability tax, and how that factors into a rental model. It’s not great.

Does the browser or operating system already have this functionality?

A lot of the time this boils down to an issue of discovery, digital literacy, or identity. As touched on in the previous section, browsers and operating systems offer a lot to help you self-serve. Notable examples are reading mode, on-screen narration, color filters, interface and text zoom, and forced color inversion.

Can it be used across multiple experiences, or just one website?

Stability and predictability of operation and output are vital for technology like this. It’s why I am so bullish on utilizing existing browser and operating system features. Products built to “enhance” the accessibility of a single website or app can’t contribute towards this. Ironically, their presence may actually contribute friction towards someone’s existing method of using things. A tricky little twist here is that products that target a single website are often advertised towards the website owner, and not the people who will be using said website.

Can I use the keyboard to operate it?

I’ve gotten in the habit of pressing Tab a few times when I first check out the product’s website and see if anything happens. It’s a quick and easy test to see if the company walks the walk in addition to talking the talk. Here, I regrettably encounter missing focus indicators and non-semantic interactive controls more often than not. I might also sometimes run the homepage through axe DevTools, to see if there are other egregious errors. I then try to use the product itself with a keyboard if a demo is offered.
I usually come away wanting here.

How reliable is the AI?

There are two broad considerations here: How reliable is the output? How can bias affect someone’s interpretation of things? While I am a skeptic, I can also acknowledge that there are some good use cases for LLMs and related technology when it comes to disability. I think about the reliability of the output in terms of the “assistive” part of assistive technology. By this, I mean it actually helps you do what you need to get done. Here, I’d point to Salma Alam-Naylor’s experience with newer startups in this space versus established, community supported solutions. Then consider LLM-based image description products. Here we want to make sure the content is accurate and relevant. Remember that image descriptions are the mechanism that some people rely on to help them understand the world. If that description is not accurate, it impacts how they form an understanding of their environment. A step past that thought is the biases inherent in, and perpetuated by, LLM-based technology. I recall Ben Myers’ thoughts on implicit, hegemonic normalization, as well as the sobering truth that this technology can exert influence over its users’ worldview at scale.

Can the company be trusted with your data?

A lot of assistive technology is purposely designed to not announce the fact that it is being used. This is to stave off things like discrimination or ineffective, separate-yet-equal “accessibility only” sites. There’s also the murky world of data brokerage, and whether the company is selling off this information or not. AccessiBe comes to mind here, and not in a good way. Also consider if the product has access to everything you visit and interact with, and who has access to that information. As a companion concern, it is also worth considering the product’s data security practices—or lack thereof. Here, I would like to point out that startups tend to deprioritize this boring kind of infrastructure work in favor of feature creation. Not having any personal information present in a system is the best way to guard against its theft. Also know that there is no way to undo a data breach once it occurs. Leaked information stays leaked.

Will the company last?

Speaking of startups, know that more fail than succeed. Are you prepared for an outcome where the product you rely on is no longer updated or supported because the company that made it went out of business? It could also be a case where the company still exists, but ceases to support the product you use. Here, know that sometimes these companies will actively squash attempts at community-based resurrection and support of the service, because it represents potential liability. This concern is another reason why I’m bullish on operating system and browser functionality. They have a lot more resiliency and focus on the long view in this particular area.

But also: I’m not the arbiter of who can use what. In the spirit of “the best camera is the one you have on you:” if something works for your specific access needs, by all means use it.

4 months ago 47 votes
Stanislav Petrov

A lieutenant colonel in the Soviet Air Defense Forces prevented the end of human civilization on September 26th, 1983. His name was Stanislav Petrov. Protocol dictated that the Soviet Union would retaliate against any nuclear strikes sent by the United States. This was a policy of mutually assured destruction, a doctrine that compels a horrifying logical conclusion. The second- and third-order effects of this type of exchange would be even more catastrophic. Allies for each side would likely be pulled into the conflict. The resulting nuclear winter was projected to lead to 2 billion deaths due to starvation. This is to say nothing of those who would have been unfortunate enough to survive. Petrov’s job was to monitor Oko, the computerized warning system built to centralize Soviet satellite communications. Around midnight, he received a report that one of the satellites had detected the infrared signature of a single launch of a United States ICBM. While Petrov was deciding what to do about this report, the system detected four more incoming missile launches. He had minutes to make a choice about what to do. It is impossible to imagine the amount of pressure placed on him at this moment. Source: Stanislav Petrov, Soviet officer credited with averting nuclear war, dies at 77 by Schwartzreport. Petrov lived in a world of deterministic systems. The technologies that powered these warning systems have outputs that are guaranteed, provided the proper inputs are supplied. However, deterministic does not mean infallible. The only reason you are alive and reading this is because Petrov understood that the systems he observed were capable of error. He was suspicious of what he was seeing reported, and chose not to escalate a retaliatory strike. There were two factors guiding his decision: a surprise attack would most likely have used hundreds of missiles, not just five, and the allegedly foolproof Oko system was new and prone to errors. An error in a deterministic system can still lead to expected outputs being generated. For the Oko system, infrared reflections of the sun shining off the tops of clouds created a false positive that was interpreted as detection of a nuclear launch event. Source: US-K History by Kosmonavtika. The concept of erroneous truth is a deep thing to internalize, as computerized systems are presented as omniscient, indefective, and absolute. Petrov’s rewards for this action were reprimands, reassignment, and denial of promotion. This was likely for embarrassing his superiors by the politically inconvenient shedding of light on issues with the Oko system. A coerced early retirement caused a nervous breakdown, likely from having to grapple with the weight of his decision. It was only in the 1990s—after the fall of the Soviet Union—that his actions were discovered internationally and celebrated. Stanislav Petrov was given the recognition that he deserved, including being honored by the United Nations, awarded the Dresden Peace Prize, featured in a documentary, and being able to visit a Minuteman Missile silo in the United States. On January 31st, 2025, OpenAI struck a deal with the United States government to use its AI product for nuclear weapon security. It is unclear how this technology will be used, where, and to what extent. It is also unclear how OpenAI’s systems function, as they are black box technologies. What is known is that LLM-generated responses—the product OpenAI sells—are non-deterministic.
Non-deterministic systems don’t have guaranteed outputs from their inputs. In addition, LLM-based technology hallucinates—it invents content with no self-knowledge that it is a falsehood. Computerized non-deterministic systems are also perceived as authoritative, the same as their deterministic peers. It is not a question of how the output is generated; it is one of the output being perceived to come from a machine. These are terrifying things to know. Consider not only the systems this technology is being applied to, but also the thoughtless speed of their integration. Then consider how we’ve historically been conditioned and rewarded to interpret the output of these systems, and then how we perceive and treat skeptics. We don’t live in a purely deterministic world of technology anymore. Stanislav Petrov died on September 18th, 2017, before this change occurred. I would be incredibly curious to know his thoughts about our current reality, as well as the increasing abdication of human monitoring of automated systems in favor of notably biased, supposed “AI solutions.” In acknowledging Petrov’s skepticism in a time of mania and political instability, we can look to a quote from former U.S. Secretary of Defense William J. Perry’s memoir about the incident: [Oko’s false positives] illustrates the immense danger of placing our fate in the hands of automated systems that are susceptible to failure and human beings who are fallible.

5 months ago 44 votes

More in programming

My first year since coming back to Linux

It has been a year since I set up my System76 Merkaat with Linux Mint. In July of 2024 I migrated from ChromeOS and the Merkaat has been my daily driver on the desktop. A year later I have nothing major to report, which is the point. Despite the occasional unplanned reinstallation I have been enjoying the stability of Linux and just using the PC. This stability finally enabled me to burn bridges with mainstream operating systems and fully embrace Linux and open systems. I'm ready to handle the worst and get back to work. Just a few years ago the frustration of troubleshooting a broken system would have made me seriously consider the switch to a proprietary solution. But a year of regular use, with an ordinary mix of quiet moments and glitches, gave me the confidence to stop worrying and learn to love Linux.

18 hours ago 3 votes
Overanalyzing a minor quirk of Espressif’s reset circuit

The mystery

In the previous article, I briefly mentioned a slight difference between the ESP-Prog and the reproduced circuit, when it comes to EN: Focusing on EN, it looks like the voltage level goes back to 3.3V much faster on the ESP-Prog than on the breadboard circuit. The grid is horizontally spaced at 2ms, so …

19 hours ago 2 votes
What can agents actually do?

There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they’re hard to act on. Sort of like saying that your company could be much better if you merely adopted software. That’s certainly true, but it’s not a particularly helpful claim. This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and make the case that the potential of AI agents is equivalent to the potential of this generation of AI. By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company.

How do agents work?

At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic’s /v1/messages with a prompt: How should I adopt LLMs in my company? That prompt is used to fill the LLM’s context window, which conditions the model to generate certain kinds of responses. This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result. Prompt engineering, or context engineering as it’s being called now, is deciding what to put into the context window to best generate the responses you’re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely. However, composing the perfect context window is very time intensive, benefiting from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, Who is going to become the next mayor of New York City?, then you are unable to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you’re asking the question to begin with! This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don’t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window. This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response. However, it’s important to clarify how “tool usage” actually works. An LLM does not actually call a tool. (You can skim OpenAI’s function calling documentation if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive:

1. The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.
2. Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend.

3. The response from the API call with defined functions is either generated text, as any other call to an LLM might provide, or a recommendation to call a specific tool with a specific set of parameters. For example, an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response: [{ "type": "function_call", "name": "get_weather", "arguments": "{\"location\":\"Paris, France\"}" }]

4. The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it’s been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it’s a premium-only tool), or it might check if the parameters match the user’s role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).

5. If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call’s context window.

The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools’ output to the LLM to generate more context. What’s magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context. This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let’s think about three different scenarios:

Flow control via rules has concrete rules about how tools can be used. Some examples: it might only allow a given tool to be used once in a given workflow (or a usage limit of a tool for each user, etc); it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds more than $100 require human approval); it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails); it might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data); a tool to escalate to a human representative can only be called after five back-and-forths with the LLM agent.

Flow control via statistics can use statistics to identify and act on abnormal behavior: if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human; if a user has used a tool more than 99% of other users, then you might want to reject usage for the rest of the day; it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent.

The third scenario is flow control via the LLM itself, and here the rule is simple: LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue.
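To make the five-step loop and the rules-based flow control concrete, here is a minimal sketch in Python. It is illustrative only: `call_llm` is a hypothetical stand-in for a real provider client, and the `get_weather` tool mirrors the example response above.

```python
# Minimal agent loop (illustrative sketch, not a real SDK).
# `call_llm` is a hypothetical client: it takes a context window plus the
# set of allowed tools, and returns either generated text or a tool-call
# suggestion shaped like the get_weather example above.
import json

def fetch_weather(location: str) -> str:
    # Stand-in for a real weather lookup.
    return f"18C and cloudy in {location}"

TOOLS = {"get_weather": fetch_weather}
MAX_TOOL_CALLS = 5  # rules-based flow control: cap tool usage per workflow

def run_agent(context: list[dict], call_llm) -> str:
    tool_calls = 0
    while True:
        response = call_llm(context, tools=list(TOOLS))
        if response["type"] != "function_call":
            # Plain generated text: the workflow is done.
            return response["text"]
        if tool_calls >= MAX_TOOL_CALLS:
            # The agent, not the LLM, enforces this rule.
            return "Escalating to a human: tool budget exhausted."
        # The LLM only *suggested* this call; the program decides to honor it.
        name = response["name"]
        args = json.loads(response["arguments"])
        result = TOOLS[name](**args)
        # Enrich the context window with the tool output and loop again.
        context.append({"role": "tool", "name": name, "content": result})
        tool_calls += 1
```

The statistical checks described above slot into the same place as the tool-budget rule: any anomaly detector can decide to reject the call or escalate instead.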
As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters. At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include: building general context to add to the context window, sometimes thought of as maintaining memory; initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc; periodically initiating workflows at a certain time, such as hourly review of incoming tickets.

Alright, we’ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:

1. Use an LLM to evaluate a context window and get a result
2. Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response
3. Manage flow control for tool usage via rules or statistical analysis
4. Agents are software programs, and can do anything other software programs do

Armed with these four capabilities, we’ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.

Use Case 1: Customer Support Agent

One of the first scenarios that people often talk about deploying AI agents in is customer support, so let’s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let’s set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact. Our approach might be:

Allow tickets (or support chats) to flow into an AI agent.

Provide a variety of tools to the agent to support: retrieving information about the user (recent customer support tickets, account history, account state, and so on); escalating to the next tier of customer support; refunding a purchase (almost certainly implemented as “refund purchase” referencing a specific purchase by the user, rather than “refund amount” to prevent scenarios where the agent can be fooled into refunding too much); closing the user account on request.

Include customer support guidelines in the context window, describe customer problems, and map those problems to specific tools that should be used to solve them.

Add flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, after a certain number of back-and-forth exchanges, or if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience. (A minimal sketch of such an escalation policy follows after this list.)

Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to human) and some degree of random sampling.

Review hourly, then daily, and then weekly metrics of agent performance.

Based on your learnings from the metric reviews, set baselines for alerts which require more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodic review.
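Here is a minimal sketch of the escalation policy referenced in the list above, again in Python. The thresholds are assumptions made up for the example; treat it as an illustration of rules-based flow control, not a prescription.

```python
# Illustrative escalation rules for a support agent. The thresholds are
# assumptions for the example; tune them to your own product and data.
import time

class EscalationPolicy:
    def __init__(self, max_exchanges: int = 5, max_seconds: float = 600.0):
        self.max_exchanges = max_exchanges  # back-and-forths before a human steps in
        self.max_seconds = max_seconds      # wall-clock budget for the conversation
        self.started = time.monotonic()
        self.exchanges = 0

    def record_exchange(self) -> None:
        self.exchanges += 1

    def should_escalate(self, had_error: bool = False) -> bool:
        # Escalate on agent errors, too many exchanges, or a blown time budget.
        return (
            had_error
            or self.exchanges >= self.max_exchanges
            or time.monotonic() - self.started > self.max_seconds
        )
```

The agent checks `should_escalate` after every exchange and hands the conversation to a human tier the moment it returns true, which keeps the failure mode graceful even when the rules have gaps.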
Note that even when you’ve moved customer support to AI agents, you still have: a tier of human agents dealing with the most complex calls; humans reviewing the periodic performance statistics; humans performing quality control on AI agent-customer interactions. You absolutely can replace each of those downstream steps (reviewing performance statistics, etc.) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn’t how they work, it’s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones. Applied with care, the above series of actions will work successfully. However, it’s important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.

Use Case 2: Triaging incoming bug reports

When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it’s potentially quite severe, then you want on-call engineers immediately investigating; if it’s certainly not severe, then you want to triage it in a less urgent process of some sort. It’s interesting to think about how an AI agent might support this triaging workflow. The process might work as follows:

Pipe all created incidents and all created tickets to this agent for review.

Expose these tools to the agent: open an incident; retrieve current incidents; retrieve recently created tickets; retrieve production metrics; retrieve deployment logs; retrieve feature flag change logs; toggle known-safe feature flags; propose merging an incident with another for human approval; propose merging a ticket with another ticket for human approval.

Use redundant LLM providers for critical workflows. If the LLM provider’s API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first; if unavailable, try OpenAI), and then finally create an incident that the triaging mechanism is unavailable. For critical workflows, we can’t simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues.

Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.

Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it’s high priority, open an incident. If it’s low priority, create a ticket.

Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g.
spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.

Apply known-safe feature flags. Establish an allow list of known-safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them; e.g. restricting pagination through deeper search results when under load might be a reasonable tradeoff between stability and user experience.

Defer to humans. At this point, rely on humans to drive incident, or ticket, remediation to completion.

Draft the initial incident report. If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the human involved in the incident.

Run the incident review. Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.

Safeguard to reenable feature flags. Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to reenable the “known safe” feature flags if there isn’t an ongoing incident, to avoid accidentally disabling them for long periods of time.

This is another AI agent that will absolutely work, as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes:

The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it’s about actively learning what to change based on the incident. You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low-risk changes (e.g. a copy change) being automatically added to a ticket for human approval. Doing this for more complex, or riskier, changes is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of “just add agents and things get easy.” Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL.

Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed.

Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn’t driving a reduction in MTTR, then something is rotten in the details of the implementation.

Even a very effective agent doesn’t relieve the responsibility of careful system design.
Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they’ll only amplify your problems even more widely.

Do AI Agents Represent the Entirety of this Generation of AI?

If you accept my definition that AI agents are any combination of LLMs and software, then I think it’s true that there’s not much this generation of AI can express that doesn’t fit this definition. I’d readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.

Closing thoughts

LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work. Software isn’t magic, it’s very logical, but what it can accomplish is magical. The same goes for agents and LLMs. The more we can accelerate that learning curve, the better for our industry.

16 hours ago 2 votes
Can tinygrad win?

This is not going to be a cakewalk like self driving cars. Most of comma’s competition is now out of business, taking billions and billions of dollars with it. Re: Tesla and FSD, we always expected Tesla to have the lead, but it’s not a winner-take-all market; it will look more like iOS vs Android. comma has been around for 10 years, is profitable, and is now growing rapidly. In self driving, most of the competition wasn’t even playing the right game. This isn’t how it is for ML frameworks. tinygrad’s competition is playing the right game, is open source, and is run by some quite smart people. But this is my second startup, so hopefully taking a bit more risk is appropriate. For comma to win, all it would take is people in 2016 being wrong about LIDAR, mapping, end to end, and hand coding, which hopefully we all agree now that they were. For tinygrad to win, it requires something much deeper to be wrong about software development in general. As it stands now, tinygrad is 14556 lines. Line count is not a perfect proxy for complexity, but when you have differences of multiple orders of magnitude, it might mean something. I asked ChatGPT to estimate the lines of code in PyTorch, JAX, and MLIR: JAX = 400k, MLIR = 950k, PyTorch = 3300k. They range from one to two orders of magnitude larger. And this isn’t even including all the libraries and drivers the other frameworks rely on: CUDA, cuBLAS, Triton, nccl, LLVM, etc. tinygrad includes every single piece of code needed to drive an AMD RDNA3 GPU except for LLVM, and we plan to remove LLVM in a year or two as well. But so what? What does line count matter? One hypothesis is that tinygrad is only smaller because it’s not speed or feature competitive, and that if and when it becomes competitive, it will also be that many lines. But I just don’t think that’s true. tinygrad is already feature competitive, and for speed, I think the bitter lesson also applies to software. When you look at the machine learning ecosystem, you realize it’s just the same problems over and over again. The problem of multi-machine, multi-GPU, multi-SM, multi-ALU, cross-machine memory scheduling, DRAM scheduling, SRAM scheduling, register scheduling: it’s all the same underlying problem at different scales. And yet, in all the current ecosystems, there are completely different codebases and libraries at each scale. I don’t think this stands. I suspect there is a simple formulation of the problem underlying all of the scheduling. Of course, this problem will be in NP and hard to optimize, but I’m betting the bitter lesson wins here. The goal of the tinygrad project is to abstract away everything except the absolute core problem in the cleanest way possible. This is why we need to replace everything. A model for the hardware is simple compared to a model for CUDA. If we succeed, tinygrad will not only be the fastest NN framework, but it will be under 25k lines all in, GPT-5 scale training job to MMIO on the PCIe bus! Here are the steps to get there:

1. Expose the underlying search problem spanning several orders of magnitude. Due to the execution of neural networks not being data dependent, this problem is very amenable to search.

2. Make sure your formulation is simple and complete. Fully capture all dimensions of the search space. The optimization goal is simple: run faster.

3. Apply the state of the art in search. Burn compute. Use LLMs to guide. Use SAT solvers. Reinforcement learning. It doesn’t matter; there’s no way to cheat this goal. Just see if it runs faster.
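As a toy illustration of step three, a search loop whose only objective is measured runtime might look like the following sketch; `candidates` and `compile_and_time` are hypothetical stand-ins for enumerating scheduling actions and timing real execution.

```python
import random

# Toy greedy search over scheduling actions. The objective is purely
# measured runtime, so there is no way to cheat it: just see if it runs
# faster. `candidates` and `compile_and_time` are hypothetical stand-ins.
def search_schedule(kernel, candidates, compile_and_time, steps: int = 100):
    best, best_time = kernel, compile_and_time(kernel)
    for _ in range(steps):
        trial = random.choice(candidates(best))  # apply one scheduling action
        t = compile_and_time(trial)              # burn compute, just time it
        if t < best_time:                        # keep whatever runs faster
            best, best_time = trial, t
    return best
```

Swap the random choice for beam search, a SAT solver, or an RL policy and the objective stays the same.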
If this works, not only do we win with tinygrad, but hopefully people begin to rethink software in general. Of course, it’s a big if; this isn’t like comma, where it was hard to lose. But if it wins… The main thing to watch is development speed. Our bet has to be that tinygrad’s development speed is outpacing the others. We have the AMD contract to train LLaMA 405B as fast as NVIDIA due in a year; let’s see if we succeed.

20 hours ago 1 votes
Do You Even Personalize, Bro?

There’s a video on YouTube from “Technology Connections” — who I’ve never heard of or watched until now — called Algorithms are breaking how we think. I learned of this video from Gedeon Maheux of The Iconfactory fame. Speaking in the context of why they made Tapestry, he said the ideas in this video would be their manifesto. So I gave it a watch. Generally speaking, the video asks: does anyone care to have a self-directed experience online, or with a computer more generally?

I'm not sure how infrequently we’re actually deciding for ourselves these days [how we decide what we want to see, watch, and do on the internet]

Ironically we spend more time than ever on computing devices, but less time than ever curating our own experiences with them. Which — again ironically — is the inverse of many things in our lives. Generally speaking, the more time we spend with something, the more we invest in making it our own — customizing it to our own idiosyncrasies. But how much time do you spend curating, customizing, and personalizing your digital experience? (If you’re reading this in an RSS reader, high five!) I’m not talking about “I liked that post, or saved that video, so the algorithm is personalizing things for me”. Do you know how to get yourself more of what you want? Do you know where to find it? Do you even ask yourself these questions? “That sounds like too much work,” you might say. And you’re right, it is work. As the guy in the video says: “I'm one of those weirdos who think the most rewarding things in life take effort.” Me too.

8 hours ago 1 votes