
More from Eric Bailey

Article pitch for your consideration

A thing you should know is that you get put on a lot of lists if you spend a decent chunk of time publishing blog posts on your website. Your website and contact information will be shared around on these lists for the purpose of soliciting you for guest posts. If you're not familiar with the concept, guest posts are a way for other people to take advantage of your website's search ranking to divert traffic to other websites.

There are benefits to doing this. The most straightforward one is SEO. Here, outgoing links serve as a heuristic that web search engines look to for quality when weighing results. Guest posts can also have some additional gray hat goals, including audience segmenting and identification via things like UTM-driven campaigns. There are also straight-up cons, such as linking to spyware, cryptominers and other forms of malware, and browser-based zero day exploits.

Curiouser and curiouser

I've always been curious about what exactly you get when you agree to a guest post offer. So, I dredged my spam folder and found one that sounded more direct and sincere. Here's the cold call email pitch:

Subject:

Body: Keeping up with annual home and property maintenance is essential for preserving value and preventing costly repairs down the line. Whether it's inspecting your roof, cleaning gutters, or checking heating systems, regular upkeep can save homeowners time, money, and stress. I'm putting together an article that highlights key tasks for effective yearly maintenance, offering tips to help homeowners protect their biggest investment. I think this piece could really resonate with your audience! Let me know if you'd be interested in featuring it on your website.

Thank you so much for your time today!

Erin Reynolds

P.S. If you'd like to propose an alternative topic, please do so. I would be happy to write on a topic that best suits your website. Don't want to hear from me again? Please let me know.

My reply reads:

Hi Erin,

This might be a weird one, but bear with me: My blog is a personal site, and its content is focused on web development and internet culture. I've always wanted to take someone up on this sort of offer, and publish the resulting article in the context of it being what you get when you accept a guest post pitch.

Is this something you'd be interested in?

Erin took me up on my offer, and wrote about annual home and property maintenance. To her credit, she also did ask me if there was another subject I was interested in, but I figured we could stay the course of the original pitch. She was also prompt and communicative throughout the process, and delivered exactly what was promised. Here is the article in question:

By Erin Reynolds, [diymama.net](https://diymama.net/)

There's a quiet rhythm to living in a well-loved home. If you listen closely, your house speaks to you. Whispers, mostly. The soft drip of a tired faucet, the groan of an HVAC unit that's been running too long, or the gentle scold of a clogged dryer vent. These aren't just annoyances. They're the language of upkeep, and whether you're in your first place or celebrating twenty years in the same four walls, learning to listen—and act—is everything. Annual maintenance isn't just about fixing what's broken. It's about stewardship, about being the kind of homeowner who doesn't wait for the ceiling to leak before checking the roof.
There's something incredibly satisfying about having all your home maintenance documents in one tidy digital folder: no more rummaging through drawers for that appliance manual or the roof warranty. Digitizing receipts, inspection reports, and service invoices gives you a clear, accessible record of everything that's been done and when. Saving these as PDFs makes them universally readable and easy to share, whether you're selling your home or just need to reference them quickly. When you use a tool to create PDF files, you can convert virtually any document into a neat, portable format.

You might not think much about gutters unless they're sagging or spilling over during a thunderstorm, but they play a quiet hero's role in protecting your home. Clean them out once a year—twice if you're under heavy tree cover—and you'll avoid water damage, foundation cracks, and even basement flooding. Take a Saturday with a sturdy ladder, some gloves, and a hose; it's oddly meditative work, like adult sandbox play. And if climbing rooftops isn't your thing, call in the pros; your future self will thank you during the next torrential downpour.

That whoosh of warm or cool air we all take for granted? It comes at a price if neglected. Your heating and cooling system needs a checkup at least once a year, ideally before the seasons shift. A technician can clean the coils, swap the filter, and make sure it's all running like a symphony, not the death rattle of a dying compressor. Skipping this task means flirting with energy inefficiency and sudden breakdowns during a July heatwave or a January cold snap, and no one wants that call to the emergency repair guy at 2 a.m.

Keep Your Appliances Running Like Clockwork

Your appliances work hard, so giving them a little yearly attention goes a long way. Cleaning refrigerator coils, checking for clogged dryer vents, and running cleaning cycles on dishwashers and washing machines helps extend their lifespan and keep things humming. But even with routine care, breakdowns happen, which is why investing in a home warranty can provide peace of mind when repairs crop up. Be sure to research home warranty appliance coverage that includes not only repair costs, but also removal of faulty units and protection against damage caused by previous poor installations.

It's easy to forget the trees in your yard when they're not blooming or dropping leaves, but they're worth an annual walkaround. Look for branches that hang a little too close to power lines or seem precariously poised above your roof. Dead limbs are more than an eyesore; they're projectiles in a windstorm, liabilities when it comes to insurance, and threats to your peace of mind. Hiring an arborist to prune and assess health may not be the most glamorous expense, but it's a strategic one.

This one's for all the window-ledge neglecters and bathroom corner deniers. Every year, old caulk shrinks and cracks, and when it does, water starts to creep in—under tubs, around sinks, behind tile. The same goes for gaps around doors and windows that let in drafts, bugs, and rising utility bills. Re-caulking is a humble chore that wields mighty results, and it's deeply satisfying to peel away the old and lay down a clean bead like you're frosting a cake. A tube of silicone sealant and an hour of your time buys you protection and a crisp finish.

Sediment buildup is sneaky—it collects at the bottom of your water heater like sand in a jar, slowly choking its efficiency and shortening its life. Once a year, flush it out.
It's not hard: a hose, a few steps, and maybe a YouTube video or two for moral support. You'll end up with cleaner water, faster heating, and a unit that isn't harboring the mineral equivalent of a brick in its belly. This is the kind of maintenance no one talks about at dinner parties but everyone should be doing.

Roof problems rarely introduce themselves politely. They crash in during a storm or reveal themselves as creeping stains on the ceiling. But if you check your roof annually—scan for missing shingles, flashing that's come loose, or signs of moss and algae—you stand a better chance of catching issues while they're still small. If you're uneasy climbing up there, a good drone or a pair of binoculars can give you a decent read. Think of it like checking your teeth: do it regularly, and you'll avoid the root canal of roof repair.

There's an entire category of small, often-overlooked chores that quietly hold your house together. Replacing smoke detector batteries, testing GFCI outlets, tightening loose deck boards, cleaning behind the refrigerator, checking for signs of mice in the attic. These aren't major jobs, but ignoring them year after year adds up like debt. Spend a weekend with a checklist and a good podcast and knock them out; it's as much about peace of mind as it is about safety.

Being a homeowner isn't just about mortgages, paint colors, and patio furniture. It's about stewardship, a kind of quiet attentiveness to the place that holds your life. Annual maintenance doesn't come with applause or Instagram likes, but it keeps the scaffolding of your world solid and serene. When you walk into a home that's been cared for, you can feel it—the air is calmer, the floors don't squeak quite as loud, and the house seems to breathe easier, knowing someone's listening.

Explore the world of inclusive design with Eric W. Bailey, where insightful articles, engaging talks, and innovative projects await to inspire your next digital creation!

I mean, this is objectively solid advice!

The appearance of trust

What was nice to note here is that none of the links contained any UTM parameters, and the sites linked out to looked relatively on the up and up. It could be relevant and actionable results, or maybe some sort of coordinated quid-pro-quo personal or professional networking.

That said: Be the villain. The deliverable was a Microsoft Word document attached to an email. On the surface this seems completely innocuous—compared to Markdown, a ton of people use Word to write. However, in the wrong hands it could definitely be a vector for bad things. Appearing legitimate is a good tactic to build a sense of trust and get me to open that file. From there, all sorts of terrible things could happen.

To address this, I extracted the text via a non-Windows operating system installed on a virtual machine (VM). I also used a copy of LibreOffice to open the Word document. The idea was to take advantage of the VM's sandboxing, as well as the less-sophisticated interoperability between the two word processing apps. This allowed for sanitized plain text extraction, without enabling anything else more nefarious.

Sometimes a cigar is just a cigar

I also searched certain select phrases from the guest post to see if this content was repeated anywhere else, and didn't find anything. I found other guest posts written by Erin on the web, but that's the whole point, isn't it? The internet is getting choked out by LLM-generated slop. Writing was already a tough job, and now it's gotten even more thankless.
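For the curious, that sandboxed extraction step boils down to something like the following. This is a rough sketch rather than a hardened procedure; it assumes LibreOffice is installed inside the VM, and the file name is only illustrative:

```sh
# Inside the sandboxed VM: convert the attachment to plain text with LibreOffice
# in headless mode, then read the result. The file name is illustrative.
soffice --headless --convert-to txt guest-post.docx
cat guest-post.txt
```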
It's always important to keep in mind that there are people behind the technology. I choose to believe that this is an article written in earnest by someone who cares about DIY home repair and wants to get the word out. So, to Erin: Here's to your article! And to you, the reader: I hope you learned something new about taking care of the place you live in.

Tag, you’re it

I've been seeing, and enjoying reading, these posts as they pop up in my RSS reader. Dave Rupert tagged me into the chain, so here we go!

Why did you start blogging in the first place?

With the gift of hindsight, I guess I came up being blog-adjacent. Like Dave, I also had a background in publishing as a youth. I worked for my high school newspaper, and had a part- and then later full-time job at my local newspaper. I also published a weirdo, monkey cheese nerd zine. Its main claims to fame were both pissing off the principal and preventing me from getting dates. Zines are cool and embracing cringe will set you free.

I read a ton of blogs, but I never initially thought I'd be someone who published one. This was due to fear of dog-piling criticism, as well as not thinking I had anything meaningful to contribute. Then I got Kivikoskied. Reader, I strongly encourage you to get Kivikoskied yourself.

The first post I put on my site was a reaction to the WebAIM Millions report. Reading through it generated enough feelings that I needed a place to put them in a constructive way.

What platform are you using to manage your blog and why did you choose it?

The reaction to the WebAIM Millions report was originally just an HTML page with a dream. That page seemed to resonate with people, so with that encouragement I had to build blogging infrastructure after the fact. That infrastructure wound up being Eleventy.

I love Eleventy, and it's only gotten better since that initial adoption. Zach Leatherman is a mensch, and I sing the praises of his project every chance I can get. I love blogging with Eleventy because it prioritizes speed, stability, and performance. Static web pages generated via Markdown are easy enough to wrangle, and it means I can spend the majority of my time focusing on writing, and not managing dependencies or database updates.

Have you blogged on other platforms before?

WordPress, Jekyll, thoughtbot's homegrown CMS, and a few others. May you never have to work with Méthode.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that's part of your blog?

I've evaluated countless writing apps, but find myself coming back to Dropbox Paper. I'm highly distractible, and love to fiddle and tinker. Because of this, I find that Paper's intentional, designed simplicity keeps me focused and on-task. It's a shame that we live in the rot economy—where innovation is synonymous with value extraction—and there is apparently no longer enough incentive to maintain it.

The post is then exported as a Markdown file from Paper, has its contents pasted into VS Code, cleaned up a little bit, metadata is added, merged into GitHub, and voilà! Blog post! There are more efficient ways to do this, but I find the ritual of it all soothing.

When do you feel most inspired to write?

I'm going to share a little secret with you: Nearly every technical blog post I write is a longform subtweet. By this, I mean I use writing as a way to channel a lot of my anxieties and frustrations into something constructive. I wish I wrote more silly posts, but it's difficult to prioritize them given the state of things.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

I'll chip away at a draft for weeks, moving sections around and tweaking language until the entire thing feels cohesive. It's less perfectionism and more wanting to be sure I'm communicating my thoughts as clearly as I can.
There is also the inevitable flurry of edits that follow hitting publish. I'd bottle that feeling of sudden, panicked clarity if I could.

What are you generally interested in writing about?

The intersection of accessibility, usability, design systems, and the web platform. I'm also a sucker for CSS, tech culture, and a good metaphor.

Who are you writing for?

I write for people who are curious about the web, accessibility, and frontend technology at a medium-to-high level of familiarity. It has been so liberating to not have to explain the basics of accessibility and why it matters anymore.

I also write for myself as augmented memory. This, along with services like Pinboard, helps with my memory. Blog posts are also conversations. It is also a disservice to both audiences if I'm not weaving a lot of contextually relevant voices into the work as outgoing links.

What's your favorite post on your blog?

My favorite post on my website is my opus, Accessibility annotation kits only annotate. It took forever to put those thoughts into words. My favorite post on another website is Consider the Tomato. thoughtbot tolerated and encouraged a lot of my shenanigans, and I'm thankful for that.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

This website is in desperate need of a redesign, and the "updating in the open" banner is an albatross around my neck. Ironically, the time I should spend on that is spent writing blog posts. I'm now at the point where I fantasize about taking a month off of work to make said redesign happen. Grinning face with sweat emoji.

Tag 'em

I'd tag everyone on my RSS reader, if I could. Until then:

Adrian Roselli. I'm more or less contractually obligated to include a link to Adrian's site any time I write about accessibility, as chances are he's already covered it.

Ben Myers. Another favorite accessibility author. I really enjoy his takes on disability and digital accessibility.

Jan Maarten. Coworker and samebrain friend, whose longform pieces are always worth reading.

Jim Nielsen. A

Melanie Richards. Melanie is, in a word, prolific. I'm in awe of her digital gardening efforts.

Miriam Suzanne. Less a triple threat and more a, uh, quintuple threat? Brilliance at every turn.

Harm reduction principles for digital accessibility practitioners

I debuted these principles in my axe-con 2025 talk, It is designed to break your heart: Cultivating a harm reduction mindset as an accessibility practitioner. They are adapted from The National Harm Reduction Coalition's original eight principles.

My adapted principles reflect philosophical and behavioral changes I've been cultivating. This is done to try to offset, and defend against, systemic trauma and its resultant depression, burnout, and other negative experiences you can incur when doing digital accessibility work.

If you have the time, I'd advise reading the original eight principles. I also recommend watching or reading the talk. I say this not in a self-promotional way, but because there is a lot of context that will be helpful in understanding how these adapted principles came to be, as well as the larger mindset shifts and practices that led to their creation.

The principles

There are eight principles in total. They are delivered in the context of how to approach evaluating a team's efforts, and are:

Accepting ableism and minimizing it

Accepting, for better or worse, that ableism is a part of our world and choosing to work to minimize its harmful effects, rather than simply ignoring or condemning it.

The original principle this is derived from is: "Accepts, for better or worse, that licit and illicit drug use is part of our world and chooses to work to minimize its harmful effects rather than simply ignore or condemn them."

Provisioning of resources is non-judgemental

Calling for the non-judgemental provision of services and resources for people who create access barriers within the disciplines in which they work, in order to assist them in reducing harm.

The original principle this is derived from is: "Calls for the non-judgmental, non-coercive provision of services and resources to people who use drugs and the communities in which they live in order to assist them in reducing attendant harm."

Do not minimize or ignore real harm

Does not attempt to minimize or ignore the real and tragic harm and danger that can be created by inaccessible experiences.

The original principle this is derived from is: "Does not attempt to minimize or ignore the real and tragic harm and danger that can be associated with illicit drug use."

Some barriers are worse than others

Understands that how access barriers are created is a complex, multi-faceted phenomenon that encompasses a range of severities from life-endangering to annoying, and acknowledges that some barriers are clearly worse than others.

The original principle this is derived from is: "Understands drug use as a complex, multi-faceted phenomenon that encompasses a continuum of behaviors from severe use to total abstinence, and acknowledges that some ways of using drugs are clearly safer than others."

Social inequalities affect vulnerability

Recognizes that the realities of poverty, class, racism, social isolation, past trauma, sex-based discrimination, and other social inequalities affect both people's vulnerability to, and capacity for, effectively dealing with creating inaccessible experiences.
The original principle this is derived from is: "Recognizes that the realities of poverty, class, racism, social isolation, past trauma, sex-based discrimination, and other social inequalities affect both people's vulnerability to and capacity for effectively dealing with drug-related harm."

Improvement of quality is success

Establishes quality of individual and team life and well-being—not necessarily cessation of all current workflows—as the criteria for successful interventions and policies.

The original principle this is derived from is: "Establishes quality of individual and community life and well-being—not necessarily cessation of all drug use—as the criteria for successful interventions and policies."

Empowering people also helps their peers

Affirms people who create access barriers themselves as the primary agents of reducing the harms of their efforts, and seeks to empower them to share information and support each other in creating and using remediation strategies that are effective for their daily workflows.

The original principle this is derived from is: "Affirms people who use drugs themselves as the primary agents of reducing the harms of their drug use and seeks to empower people who use drugs to share information and support each other in strategies which meet their actual conditions of use."

Ensure that disabled people have a voice in change

Ensures that people who are affected by access barriers, and those who have been affected by your organization's access barriers, have a real voice in the creation of features and services designed to serve them.

The original principle this is derived from is: "Ensures that people who use drugs and those with a history of drug use routinely have a real voice in the creation of programs and policies designed to serve them."

Reframe

My talk digs deeper into the parallels between the adapted and original principles, as well as the similarities between digital accessibility and harm reduction work. This is in the service of attempting to reframe our efforts. By this, I mean that we are miscategorized participants in imperfect, trauma-generating systems.

The change in perspective I am advocating for also compels changes in behavior in order to not only survive, but also flourish as digital accessibility practitioners. The adapted principles are integral to making this effort successful.

Evaluating overlay-adjacent accessibility products

I get asked about my opinion on overlay-adjacent accessibility products with enough frequency that I thought it could be helpful to write about it.

There's a category of third party products out there that are almost, but not quite, an accessibility overlay. By this I mean that they seem a little less predatory, and a little more grounded in terms of the promises they make. Some of these products are widgets. Some are browser extensions. Some are apps. Some are an odd fourth thing. Sometimes it's a case of a solutioneering disability dongle grift, sometimes it's a case of good intentions executed in a less-than-optimal way, and sometimes it's something legitimately helpful. Oftentimes it's something that lies in the middle area of all of this. Many of them also have some sort of "AI" integration, which is the unfortunate upsell du jour we have to collectively endure for the time being.

The rubric I use to evaluate these products remains very similar to how I scrutinize overlays. Hopefully it's something that can be helpful for your own efforts.

Should the product's functionality be patented?

I'm not very happy with the idea that the mechanism to operate something in an accessible way is inhibited by way of legal restriction. This artificially limits who can use it, which is in opposition to the overall mission of digital accessibility. Ideally the technology is the free bit, and the service that facilitates it is what generates the profit.

Do I need to subscribe to use it?

A subscription-based model is a great way to run a business, but you don't need to pay a recurring fee to use an accessible website. The nature of the web's technology means it can be operated via keyboard, voice control, and other assistive technology if constructed properly. Workarounds and community support also exist for some things where it's not built well. Here I'd also like you to consider the disability tax, and how that factors into a rental model. It's not great.

Does the browser or operating system already have this functionality?

A lot of the time this boils down to an issue of discovery, digital literacy, or identity. As touched on in the previous section, browsers and operating systems offer a lot to help you self-serve. Notable examples are reading mode, on-screen narration, color filters, interface and text zoom, and forced color inversion.

Can it be used across multiple experiences, or just one website?

Stability and predictability of operation and output are vital for technology like this. It's why I am so bullish on utilizing existing browser and operating system features. Products built to "enhance" the accessibility of a single website or app can't contribute towards this. Ironically, their presence may actually contribute friction towards someone's existing method of using things. A tricky little twist here is that products targeting a single website are often advertised towards the website owner, and not the people who will be using said website.

Can I use the keyboard to operate it?

I've gotten in the habit of pressing Tab a few times when I first check out the product's website to see if anything happens. It's a quick and easy test to see if the company walks the walk in addition to talking the talk. Here, I regrettably encounter missing focus indicators and non-semantic interactive controls more often than not. I might also sometimes run the homepage through axe DevTools, to see if there are other egregious errors. I then try to use the product itself with a keyboard if a demo is offered.
I am usually found wanting here.

How reliable is the AI?

There are two broad considerations here:

How reliable is the output?
How can bias affect someone's interpretation of things?

While I am a skeptic, I can also acknowledge that there are some good use cases for LLMs and related technology when it comes to disability.

I think about the reliability of the output in terms of the "assistive" part of assistive technology. By this, I mean it actually helps you do what you need to get done. Here, I'd point to Salma Alam-Naylor's experience with newer startups in this space versus established, community supported solutions.

Then consider LLM-based image description products. Here we want to make sure the content is accurate and relevant. Remember that image descriptions are the mechanism that some people rely on to help them understand the world. If that description is not accurate, it impacts how they form an understanding of their environment.

A step past that thought is the biases inherent in, and perpetuated by, LLM-based technology. I recall Ben Myers' thoughts on implicit, hegemonic normalization, as well as the sobering truth that this technology can exert influence over its users' worldview at scale.

Can the company be trusted with your data?

A lot of assistive technology is purposely designed to not announce the fact that it is being used. This is to stave off things like discrimination or ineffective, separate-yet-equal "accessibility only" sites. There's also the murky world of data brokerage, and whether the company is selling off this information or not. AccessiBe comes to mind here, and not in a good way.

Also consider if the product has access to everything you visit and interact with, and who has access to that information. As a companion concern, it is also worth considering the product's data security practices—or lack thereof. Here, I would like to point out that startups tend to deprioritize this boring kind of infrastructure work in favor of feature creation. Not having any personal information present in a system is the best way to guard against its theft. Also know that there is no way to undo a data breach once it occurs. Leaked information stays leaked.

Will the company last?

Speaking of startups, know that more fail than succeed. Are you prepared for an outcome where the product you rely on is no longer updated or supported because the company that made it went out of business? It could also be a case where the company still exists, but ceases to support the product you use. Here, know that sometimes these companies will actively squash attempts at community-based resurrection and support of the service because it represents potential liability.

This concern is another reason why I'm bullish on operating system and browser functionality. They have a lot more resiliency and focus on the long view in this particular area.

But also I'm not the arbiter of who can use what. In the spirit of "the best camera is the one you have on you": if something works for your specific access needs, by all means use it.
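For what it's worth, the quick automated check I mentioned above can also be run from the command line. This is a rough sketch, not an endorsement of automation as a substitute for manual keyboard and assistive technology testing; it assumes Node.js and a local Chrome install are available, and the URL is a placeholder:

```sh
# Rough sketch of a quick automated scan with Deque's axe CLI.
# Not a substitute for manual testing with a keyboard or assistive technology.
# The URL is a placeholder.
npx @axe-core/cli https://product-homepage.example.com
```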

Stanislav Petrov

A lieutenant colonel in the Soviet Air Defense Forces prevented the end of human civilization on September 26th, 1983. His name was Stanislav Petrov.

Protocol dictated that the Soviet Union would retaliate against any nuclear strikes sent by the United States. This was a policy of mutually assured destruction, a doctrine that compels a horrifying logical conclusion. The second and third stage effects of this type of exchange would be even more catastrophic. Allies for each side would likely be pulled into the conflict. The resulting nuclear winter was projected to lead to 2 billion deaths due to starvation. This is to say nothing about those who would have been unfortunate enough to have survived.

Petrov's job was to monitor Oko, the computerized warning system built to centralize Soviet satellite communications. Around midnight, he received a report that one of the satellites had detected the infrared signature of a single launch of a United States ICBM. While Petrov was deciding what to do about this report, the system detected four more incoming missile launches. He had minutes to make a choice about what to do. It is impossible to imagine the amount of pressure placed on him at this moment.

Source: Stanislav Petrov, Soviet officer credited with averting nuclear war, dies at 77 by Schwartzreport.

Petrov lived in a world of deterministic systems. The technologies that powered these warning systems have outputs that are guaranteed, provided the proper inputs are supplied. However, deterministic does not mean infallible. The only reason you are alive and reading this is because Petrov understood that the systems he observed were capable of error. He was suspicious of what he was seeing reported, and chose not to escalate a retaliatory strike. There were two factors guiding his decision:

A surprise attack would most likely have used hundreds of missiles, and not just five.
The allegedly foolproof Oko system was new and prone to errors.

An error in a deterministic system can still lead to expected outputs being generated. For the Oko system, infrared reflections of the sun shining off the tops of clouds created a false positive that was interpreted as detection of a nuclear launch event.

Source: US-K History by Kosmonavtika.

The concept of erroneous truth is a deep thing to internalize, as computerized systems are presented as omniscient, indefective, and absolute.

Petrov's rewards for this action were reprimands, reassignment, and denial of promotion. This was likely for embarrassing his superiors by the politically inconvenient shedding of light on issues with the Oko system. A coerced early retirement caused a nervous breakdown, likely from having to grapple with the weight of his decision.

It was only in the 1990s—after the fall of the Soviet Union—that his actions were discovered internationally and celebrated. Stanislav Petrov was given the recognition that he deserved, including being honored by the United Nations, awarded the Dresden Peace Prize, featured in a documentary, and being able to visit a Minuteman Missile silo in the United States.

On January 31st, 2025, OpenAI struck a deal with the United States government to use its AI product for nuclear weapon security. It is unclear how this technology will be used, where, and to what extent. It is also unclear how OpenAI's systems function, as they are black box technologies. What is known is that LLM-generated responses—the product OpenAI sells—are non-deterministic.
Non-deterministic systems don't have guaranteed outputs from their inputs. In addition, LLM-based technology hallucinates—it invents content with no self-knowledge that it is a falsehood. Yet non-deterministic computerized systems are perceived as authoritative, the same as their deterministic peers. It is not a question of how the output is generated; it is one of the output being perceived to come from a machine.

These are terrifying things to know. Consider not only the systems this technology is being applied to, but also the thoughtless speed of their integration. Then consider how we've historically been conditioned and rewarded to interpret the output of these systems, and then how we perceive and treat skeptics. We don't live in a purely deterministic world of technology anymore.

Stanislav Petrov died on September 18th, 2017, before this change occurred. I would be incredibly curious to know his thoughts about our current reality, as well as the increasing abdication of human monitoring of automated systems in favor of notably biased, supposed "AI solutions."

In acknowledging Petrov's skepticism in a time of mania and political instability, we acknowledge a quote from former U.S. Secretary of Defense William J. Perry's memoir about the incident:

[Oko's false positives] illustrates the immense danger of placing our fate in the hands of automated systems that are susceptible to failure and human beings who are fallible.


More in programming

The future of large files in Git is Git

If Git had a nemesis, it'd be large files. Large files bloat Git's storage, slow down git clone, and wreak havoc on Git forges. In 2015, GitHub released Git LFS—a Git extension that hacked around problems with large files. But Git LFS added new complications and storage costs.

Meanwhile, the Git project has been quietly working on large files. And while LFS ain't dead yet, the latest Git release shows the path towards a future where LFS is, finally, obsolete.

What you can do today: replace Git LFS with Git partial clone

Git LFS works by storing large files outside your repo. When you clone a project via LFS, you get the repo's history and small files, but skip large files. Instead, Git LFS downloads only the large files you need for your working copy.

In 2017, the Git project introduced partial clones that provide the same benefits as Git LFS:

Partial clone allows us to avoid downloading [large binary assets] in advance during clone and fetch operations and thereby reduce download times and disk usage. – Partial Clone Design Notes, git-scm.com

Git's partial clone and LFS both make for:

Small checkouts – On clone, you get the latest copy of big files instead of every copy.
Fast clones – Because you avoid downloading large files, each clone is fast.
Quick setup – Unlike shallow clones, you get the entire history of the project—you can get to work right away.

What is a partial clone?

A Git partial clone is a clone with a --filter. For example, to avoid downloading files bigger than 100KB, you'd use:

git clone --filter=blob:limit=100k <repo>

Later, Git will lazily download any files over 100KB you need for your checkout.

By default, if I git clone a repo with many revisions of a noisome 25 MB PNG file, then cloning is slow and the checkout is obnoxiously large:

$ time git clone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (153/153), 1.19 GiB

real 3m49.052s

Almost four minutes to check out a single 25MB file!

$ du --max-depth=0 --human-readable noise-over-git/.
1.3G noise-over-git/.
$ ^ 🤬

And 50 revisions of that single 25MB file eat 1.3GB of space. But a partial clone side-steps these problems:

$ git config --global alias.pclone 'clone --filter=blob:limit=100k'
$ time git pclone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (1/1), 24.03 MiB

real 0m6.132s

$ du --max-depth=0 --human-readable noise-over-git/.
49M noise-over-git/
$ ^ 😻 (the same size as a git lfs checkout)

My filter made cloning 97% faster (3m 49s → 6s), and it reduced my checkout size by 96% (1.3GB → 49M)!

But there are still some caveats here. If you run a command that needs data you filtered out, Git will need to make a trip to the server to get it. So, commands like git diff, git blame, and git checkout will require a trip to your Git host to run. But, for large files, this is the same behavior as Git LFS. Plus, I can't remember the last time I ran git blame on a PNG 🙃.

Why go to the trouble? What's wrong with Git LFS?

Git LFS foists Git's problems with large files onto users. And the problems are significant:

🖕 High vendor lock-in – When GitHub wrote Git LFS, the other large file systems—Git Fat, Git Annex, and Git Media—were agnostic about the server side. But GitHub locked users to their proprietary server implementation and charged folks to use it.1

💸 Costly – GitHub won because it let users host repositories for free.
But Git LFS started as a paid product. Nowadays, there's a free tier, but you're dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage. In contrast, storing 50GB on Amazon's S3 standard storage is $13/year.

😰 Hard to undo – Once you've moved to Git LFS, it's impossible to undo the move without rewriting history.

🌀 Ongoing set-up costs – All your collaborators need to install Git LFS. Without Git LFS installed, your collaborators will get confusing, metadata-filled text files instead of the large files they expect.

The future: Git large object promisors

Large files create problems for Git forges, too. GitHub and GitLab put limits on file size2 because big files cost more money to host. Git LFS keeps server-side costs low by offloading large files to CDNs. But the Git project has a new solution.

Earlier this year, Git merged a new feature: large object promisors. Large object promisors aim to provide the same server-side benefits as LFS, minus the hassle to users.

This effort aims to especially improve things on the server side, and especially for large blobs that are already compressed in a binary format. This effort aims to provide an alternative to Git LFS – Large Object Promisors, git-scm.com

What is a large object promisor?

Large object promisors are special Git remotes that only house large files. In the bright, shiny future, large object promisors will work like this:

You push a large file to your Git host.
In the background, your Git host offloads that large file to a large object promisor.
When you clone, the Git host tells your Git client about the promisor.
Your client will clone from the Git host, and automagically nab large files from the promisor remote.

But we're still a ways off from that bright, shiny future. Git large object promisors are still a work in progress. Pieces of large object promisors merged to Git in March of 2025. But there's more to do and open questions yet to answer. And so, for today, you're stuck with Git LFS for giant files. But once large object promisors see broad adoption, maybe GitHub will let you push files bigger than 100MB.

The future of large files in Git is Git. The Git project is thinking hard about large files, so you don't have to. Today, we're stuck with Git LFS. But soon, the only obstacle for large files in Git will be your half-remembered, ominous hunch that it's a bad idea to stow your MP3 library in Git.

Edited by Refactoring English

1. Later, other Git forges made their own LFS servers. Today, you can push to multiple Git forges or use an LFS transfer agent, but all this makes set up harder for contributors. You're pretty much locked-in unless you put in extra effort to get unlocked.↩︎
2. File size limits: 100MB for GitHub, 100MB for GitLab.com↩︎
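If you want to try the partial clone workflow end to end, here's a minimal sketch using the demo repository from above. The blob:limit filter is Git's documented syntax; the checkout of an older revision is only there to show where lazy fetching kicks in:

```sh
# Minimal sketch of the partial clone workflow described above.
git clone --filter=blob:limit=100k https://github.com/thcipriani/noise-over-git
cd noise-over-git

# Day-to-day commands behave normally; when one needs a blob that was filtered
# out, Git fetches it from the remote on demand.
git log --oneline
git checkout HEAD~10   # checking out an older revision may trigger a lazy fetch
```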

Just a Little More Context Bro, I Promise, and It’ll Fix Everything

Conrad Irwin has an article on the Zed blog, "Why LLMs Can't Really Build Software". He says it boils down to:

the distinguishing factor of effective engineers is their ability to build and maintain clear mental models

We do this by:

Building a mental model of what you want to do
Building a mental model of what the code does
Reducing the difference between the two

It's kind of an interesting observation about how we (as humans) problem solve vs. how we use LLMs to problem solve:

With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution.
With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself.

One adds information — complexity, you might even say — to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the "solution" comes through installing more and more things — adding more and more context, i.e. more and more packages.

LLM: Problem? Add more context.
NPM: Problem? There's a package for that.

Contrast that with a solution that comes through simplification. You don't add more context. You simplify your mental model so you need less to solve a problem — if you solve it at all; perhaps you eliminate the problem entirely! Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem you had in the first place — and with it the need to solve any problem at all, or to add any additional context or complexity (or dependency).

As I'm typing this, I'm thinking of that image of the evolution of the Raptor engine, where it evolved in simplicity. This stands in contrast to my experience working with LLMs, which often demand more and more context from me to get to a generated solution.

I know, I know. There's probably a false equivalence here. This entire post started as a note and I just kept going. This post itself needs further thought and simplification. But that'll have to come in a subsequent post, otherwise this never gets published lol.

How to Leverage the CPU’s Micro-Op Cache for Faster Loops

Measuring, analyzing, and optimizing loops using Linux perf, Top-Down Microarchitectural Analysis, and the CPU’s micro-op cache

Omarchy micro-forks Chromium

You can just change things! That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome? Well, yes! We've made a micro fork of Chromium for Omarchy (our new 37signals Linux distribution). Just to add one feature needed for live theming. And now it's released as a package anyone can install on any flavor of Arch using the AUR (Arch User Repository). We got it all done in just four days. From idea, to solicitation, to successful patch, to release, to incorporation. And now it'll be part of the next release of Omarchy. There are no speed limits in open source. Nobody to ask for permission. You have the code, so you can make the change. All you need is skill and will (and maybe, if you need someone else to do it for you, a $5,000 incentive 😄).
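For anyone on Arch who hasn't pulled from the AUR before, installing a package from it goes something like this. A sketch only: the package name below is a placeholder, not necessarily the real one:

```sh
# Generic AUR install sketch. "omarchy-chromium" is a placeholder name,
# not necessarily the actual package.
git clone https://aur.archlinux.org/omarchy-chromium.git
cd omarchy-chromium
makepkg -si   # build the package and install it with pacman
```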

Choosing Tools To Make Websites

Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what's happening "under the hood":

A framework's marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can't achieve the beautiful range of results you see in the framework's sample site gallery.

This is a great callout. Tools will say, "You don't have to worry about the details." But the reality is, you end up worrying about the details — at least to some degree. Why? Because what you want to build is full of personalization. That's how you differentiate yourself, which means you're going to need a tool that's expressive enough to help you. So the question becomes: how hard is it to understand the details that are being intentionally hidden away?

A lot of the time those details are not exposed directly. Instead they're exposed through configuration. But configuration doesn't really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

Configuration can lead to as many problems as it solves

Nailed it. He continues:

Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework's transformation than to write a program that directly implements that transformation itself.

I'm not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website's architecture simple and decoupled from your hosting provider (something I've been a long time proponent of):

These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it's forever. Ten years from now you'll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears.

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it's a good read from Jan and lays out his vision for why he's building Web Origami: a tool that encourages you to understand (and customize) how you transform content to a website. He just launched version 0.4.0, which has some stuff I'm excited to try out further (I'll have to write about all that later).
