We will never agree about AI until we agree about what it means to live a good life. Current debates about artificial intelligence circle endlessly around questions of capability, economic impact, and resource allocation – not to mention language. Is AI truly useful? What will it do to jobs? How much should we invest in its development? And what do we mean by AI? What’s the difference between machine learning and large language modeling? What can one do that the other cannot? What happens when we mix them up? These discussions are necessary, but they will remain maddening unless we back up and widen the scope. It often feels like we’re arguing about the objective merit of a right-hand turn over a left without first agreeing where we’re trying to go. The real questions about AI are actually questions about human flourishing. How much of a person’s life should be determined by work? What level of labor should be necessary to meet basic needs? Where do we...
5 days ago


More from Christopher Butler

A New Kind of Wholeness

…AI effectively, but to understand how it fits into the larger patterns of human creativity and purpose. That’s a good thing — designers are good observers. No matter the tech, we notice patterns, and we notice the lack of them. So in the midst of what is likely a major, AI-driven transition for us all, it’s worth considering that the future of design won’t be about human versus machine, but about understanding the pattern language that emerges when both intelligences work together in a system. As Christopher Alexander and his cohort might have said, it will be about creating a new kind of wholeness — one that honors both the computational power of AI and the nuanced wisdom of human experience.

3 days ago 3 votes
The Productive Void

What enabled us to create AI is the very thing it has the power to erase. I still have dozens of sketchbooks, many filled with ideas for logos, layouts, and other designs. These pages don’t just capture fully realized images, albeit in sketch form. They capture an entire process of thinking – the false starts, the unexpected discoveries, the gradual, iterative refinement of ideas. Today, those same explorations can, if you choose, be reduced to a single prompt: “Generate a minimal, modern wordmark for a SaaS product called _____.”

The immediacy is seductive. Recently, I found myself experimenting with one of the latest AI tools promising simple “logo generation.” Within minutes, I had generated dozens of logomarks — each one passable, but each one missing something essential that I couldn’t quite name. My eight-year-old daughter, looking on, asked a telling question: “Dad, are you playing a game?”

Her question has stayed with me. After twenty years in design, I’ve watched our tools evolve from physical to digital to algorithmic, each transition sold with somewhat mixed messages: more simplicity and more features; more efficiency and more reach; more speed and more possibility. But as we race toward a future where AI can generate endless variations with a few keystrokes, I’m increasingly conscious of what happens – or in the case of generative work, what doesn’t happen — in the spaces between. Those spaces are the vital territory of human creativity that resists compression.

Yes, I realize it may sound as if I am arguing against a straw man. One needn’t stop at the generated logo, which is, after all, just a single image. It’s simply a sketch to launch from. It speeds up the process of concepting. I suppose I don’t have a significant problem accepting the role of AI in ideation. But I have already seen how the immediacy of AI sets the expectation for process collapse. During a recent meeting, someone demonstrated generating a logo in seconds. When I asked whether the tool could produce an editable file (knowing, of course, that it couldn’t), the answer I got was, “Does that matter?” Well, of course it does! To take a logo forward, to truly make it functional as the cornerstone of an identity system, a flat, uneditable image isn’t enough. When ends justify the means, the means too easily become invisible. But that’s another article. (Note to self: Process Collapse is the Silent Spring of AI.)

Back to hands and paper, then. Knowing full well that it’s become rather trite for people of my age to repeatedly bring up the merits of “analog” materials, I am going to do it anyway. That is because the resistance of a surface against a pen or pencil creates an important friction that works its way through your body and into your mind. That resistance is valuable; it forces us to slow down, which gives our minds time to process each move and consider the next. The best digital tools preserve some of this productive friction. Whether you’re working in Illustrator, Figma, or some other creative composition environment, there is likely a pen tool, and its virtue is not in being fast. AI tools offer no such thing, by design. They collapse the space between intention and result, between thinking and making. They eliminate the productive void — that space where uncertainty leads to discovery. It’s the same void we experience when waiting in lines, walking from one place to the next, showering, washing dishes, stopped in traffic.

These are the places where solutions to difficult problems bloom in our minds, not from toiling over them, but from letting go, albeit briefly. The latency between mind and machine, whether that machine is a digital pen or one filled with ink, is a feature, not a bug. It’s to be preserved as fertile ground for observation and consideration. AI scorches that earth, at least in the context of image and design generation, at least right now.

AI is undeniable; the tools have already changed how I work in ways both subtle and profound. But as someone who has watched design trends come and go since the late 1990s — from glitchy bitmap Geocities chic to skeuomorphism to flat design; from Dreamweaver to Photoshop to Figma to AI — I’m as willing to change as I am wary of how quickly we can mistake convenience for improvement.

My son is three years old, and very much at the stage of development where his awareness of what he could have is not yet tempered by an understanding of how it arrives before him. A shrill demand for “more apple!” is repeated instantly because, well, toddlers have no patience. The current stage of AI has me thinking about that growth stage quite a bit. He is, after all, growing up in a world where, thanks to 21st-century technology, the space between wanting and having grows ever shorter. He doesn’t know that yet, but I do. And I worry about what kind of person he might become if he doesn’t experience some friction. This is, of course, a reflection of my concern about what happens to society when it no longer has to wait for anything. When frictions are sanded down by the mill of innovation to such a point that we — what? — lose the will to do much of anything? Obviously, we are not yet at that point — there is much good that AI could equip us to do — but one point leads to the next. What will the next be?

There is a certain irony to this line of thinking, I know. I’m writing this in a space surrounded by digital tools that would have seemed magical during my sketchbooking days back in college. Each technological shift in design since that time has imposed some kind of creative compression, each giving something and taking something away. AI will do that, too. But it’s the speed of AI that worries me most as a maker. It’s the thing about AI that has already prepared me to be surprised by my own world not too long from now. And though I try to reserve judgement — I feel it’s the intellectually honest thing to do at this moment – it seems we are at risk of losing access to the spaces between things, spaces we may not fully value until they are gone. These spaces are the productive void.

The thing to do isn’t to reject AI completely. I can already see that sort of resistance is futile. But can we preserve the spaces between things? Can we protect the natural resource of friction, of waiting, of gaps and iteration? In my own practice, I’m learning to use AI not as a tool of compression, but one of expansion. Prompts become not endpoints but starting points. They create new voids to explore, new territories for my mind to inhabit. My accumulating stacks of sketchbooks remind me that design has always been about more than just the outcome. It’s about the journey, the resistance, the productive uncertainty that leads to discovery. As we rush to embrace the power of AI, we might do well to remember that what enabled us to create it is the very thing it has the power to erase.

a week ago 7 votes
Shut up, Siri

There will be no monoculture of human-computer interaction. Every day I see a new thinkpiece on “the post-screen future” or “UI-less design” or “the end of the click.” I even used to write things like that. But that’s because I had less experience with human-computer interaction than I have now. You see, there’s this contagion of belief that new technologies not only open new doors, but definitively close old ones. That’s rarely true. The internet didn’t end radio. The iPhone didn’t end laptops or even desktop computers. And voice interfaces won’t end screens and manual interactions. There will be no monoculture of human-computer interaction. We may have the technology to make the click an unnecessary interaction convention; I doubt we have the desire. That is a good thing. Sure, we’ll talk to our machines, just not all the time.

The definition of the click as “a mechanical act requiring decision, precision, and a split-second negotiation between choice and commitment” is a good one, because it details all the reasons why the click is so useful and effective. However, some might imagine that a sophisticated enough machine would obviate the need for any direct, physical interaction. After all, didn’t the characters in Star Trek walk around the ship, constantly invoking the ship’s Computer to give them answers by just speaking “Computer…” into the room and waiting for its response? They did! But they also had many screens and panels, and did a lot of tapping and pressing that might as well have been clicking. Sure, Star Trek was made long before we had a good sense of what advanced computing might actually be capable of, and what it might actually be like to use. But it might also be that the creators of Star Trek held some insight into human-computer interaction that shaped their world-building.

Consider how your brain processes information. The eye-brain connection is one of the most sophisticated and efficient systems in human biology. You can scan a list of options and make comparisons in fractions of a second — far faster than listening to those same options read aloud. Suppose we found ourselves ordering dinner at a restaurant in a purely voice-command future. I imagine that would be a lot like the moment when your server reads off the evening’s specials — what was that first one again? — but for the entire time, and for everyone at the table. It would take too long, and it would be very annoying.

That’s the thing about how our senses interact with the brain — they don’t all work in the same way. You can view more than one thing at a time, identify them, react to them, and process them virtually simultaneously, but you cannot come close to that kind of performance with sound. Imagine sitting across from two friends who both show you a picture at the same time. You’ll likely be able to identify both right away. Now imagine those two friends telling you something important at the same time. You’re almost certain to ask them to tell you again, one at a time.

What’s more, our brains develop sophisticated spatial memory for visual interfaces. Regular users of any application know exactly where their favorite functions are located — they can navigate complex interfaces almost unconsciously, their cursor moving to the right spot without conscious thought. This kind of spatial memory simply doesn’t exist for voice commands, where every interaction requires active recall of the correct verbal command.

Now imagine an office or public space where everyone is speaking commands to their devices. The cacophony would be unbearable. This highlights another crucial advantage of visual interfaces and direct selection: they’re silent. Sometimes we need to interact with our devices without broadcasting our actions to everyone around us. Voice interfaces remove this option for privacy and discretion in public spaces.

The screen, by the way, tends to get the blame for all the negative things that have come with our increasingly digital lives — the distractions, intrusions, manipulations, and so on — but the screen itself isn’t to blame. In fact, the screen exists because of how incredibly useful it is as a memory surrogate. The screen is a surface for information and interaction, much like a whiteboard, a chalkboard, a canvas, a scroll, or a patch of dirt once was long ago. The function it serves is to hold information for us — so that we don’t have to retain it in perfect memory. That’s why screens are useful, and that’s why — I think — they were still present on an imagined starship three centuries from now, alongside a conversant AI.

“Clicking” — which is really just shorthand for some direct selection method — is incredibly efficient, and increasingly so as the number of options increases. Imagine a list of three items, which is probably the simplest scenario. Speaking a selection command like “the third one, please” is just as efficient as manually selecting the third item in the list. And this is probably true up to somewhere around six or seven items — there’s an old principle, Miller’s “magical number seven,” having to do with our inability to hold more than about that many individual pieces of information in our minds at once. But beyond that number, it gets more difficult without just being able to point. Saying you want the ninth one in a list, for example, requires that you know it’s the ninth one in the list, which might take you a moment to figure out — certainly longer than just pointing at it.

Consider also the computational efficiency. A click or tap requires minimal processing power — it’s a simple input with precise coordinates. Voice commands, on the other hand, require constant audio processing, speech recognition, and AI-driven interpretation. In a world increasingly concerned with energy consumption and computational resources, the efficiency of direct selection becomes even more relevant.

It’s also worth noting that different interface modes serve different accessibility needs. While voice interfaces can be crucial for users with certain physical limitations, visual interfaces with direct selection are essential for users with hearing impairments or speech difficulties. The future isn’t about replacing one mode with another — it’s about ensuring that multiple modes of interaction are available to serve diverse needs.

Perhaps each item in the list is clearly different. In that case, you might just be able to speak aloud which one you want. But what if they’re not that different? What if you aren’t sure what each item is? Perhaps these items aren’t even words, in which case you now have to describe them in a way that the machine can disambiguate. What if there are three dozen in a grid? At that level of density, tracking with your eye and some kind of pointer helps you move more rapidly through the information, to say nothing of making a final selection.

Instead of imagining a wholesale replacement of visual interfaces, we should be thinking about how to better integrate different modes of interaction. How can voice and AI augment visual interfaces rather than replace them? How can we preserve the efficiency of visual processing while adding the convenience of voice commands? The click isn’t just a technological artifact — it’s a reflection of how humans process and interact with information. As long as we have eyes and spatial reasoning, we’ll need interfaces that leverage these capabilities. The future isn’t clickless; it’s multi-modal.
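A rough way to quantify the click’s advantage (my gloss, not from the original post): Hick’s law models visual choice time as growing only logarithmically with the number of options, while listening to options read aloud is inherently serial, hence roughly linear:

\[
T_{\text{visual}} \approx b\,\log_2(n+1)
\qquad\text{vs.}\qquad
T_{\text{voice}} \approx c\,n
\]

where n is the number of options, b is an empirically fitted constant from Hick’s law, and the linear voice model simply reflects hearing items one at a time. Both constants are illustrative, not measured.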

a week ago 8 votes
Dedigitization

When quantum computing breaks everything. You probably know someone who still keeps essential passwords scrawled on a post-it note stuck someplace. And you’ve probably urged them to set up a password manager and shred the note, for God’s sake! But what if they’re on to something? They say that even a stopped watch is right twice a day. What if the post-it’s time has come back around again? What if paper is safer than code?

As bitrot-prone as it is, I think we’ve been lulled into a sense of digital permanence: whatever we save will last forever, preserved in perfect fidelity, accessible only to those who have permission to see it. This assumption underlies the way nearly every aspect of modern life has come to work, from banking to healthcare to personal communications. But here comes quantum computing, and it threatens to undo all of it. When sufficiently powerful quantum computers arrive, they’ll be able to break most of the encryption we use today. And when that happens, our most precious secrets might find their safest home in an unexpected place: paper.

It sounds like the pitch for a TV show, I know. But this isn’t fantasy. Quantum computers leverage the strange properties of quantum mechanics to solve certain problems exponentially faster than classical computers. Among these problems is factoring large numbers — the mathematical operation that underlies most modern encryption. While current quantum computers aren’t yet powerful enough to break encryption, experts predict they will be within a decade. More concerning is that adversaries are already harvesting encrypted data, waiting for the day they can decrypt it. (For a sobering assessment of where quantum computing and encryption stand today, see Shor’s Algorithm, D-Wave, quantum-resistant algorithms, the NSA’s Commercial National Security Algorithm Suite 2.0, and RAND’s forecast of how this will all go down.)

This creates a new calculus around digital security: how long does information need to stay secret? For communications like text messages or emails, maybe a few years is enough. But what about medical records that should remain private for a lifetime? Or state secrets that need to remain confidential for generations? Or corporate intellectual property that must never be revealed? For information that needs permanent protection, we might need to look backward to move forward.

Paper — or other physical storage media — offers something digital storage cannot: security through physical rather than mathematical barriers. You can’t hack paper. You can’t decrypt it. You can’t harvest it now and crack it later. The only way to access information stored on paper is to physically acquire it. They’re going to have to start screening Tinker Tailor Soldier Spy at CIA training again.

Consider cryptocurrency as a bit of a harbinger of this future. Bitcoin, despite being entirely digital, already requires physical security solutions. Hardware wallets — physical devices that store cryptographic keys — are considered the most secure way to protect digital assets. But even this hybrid approach depends upon encryption that quantum computers could eventually break. The very existence of these physical intermediaries hints at a fundamental truth: purely digital security may be impossible in a post-quantum world.

Many organizations already maintain their most sensitive information in physical form. The U.S. military keeps certain critical systems air-gapped and documented on paper. Some banking systems still rely on physical ledgers as backups. Corporate lawyers often prefer paper for their most sensitive documents. These aren’t antiquated holdovers — they’re pragmatic solutions to security concerns that quantum computing will only make more relevant.

But returning to paper doesn’t mean abandoning digital convenience entirely. A hybrid approach might emerge, where routine operations remain digital while truly sensitive information returns to physical form. This could lead to new systems and practices: secure physical storage facilities might become as common as data centers; document destruction might become as critical as data deletion; physical security might become as sophisticated as cybersecurity. Every sensitive government or corporate decision is made in conclave.

The future might also see novel solutions beyond traditional paper. Biological storage — encoding information in DNA — could offer physical security with digital density. New materials might be developed specifically for secure information storage (of course, if you can put it in there, someone can probably get it out). We might even see the emergence of new forms of encryption based on physical rather than mathematical properties. Good lord, what if you have to dance your password…

The rise of quantum computing doesn’t mean the end of privacy, but it might mean the end of our assumption that digital is forever. In a world where no encryption is permanently secure, the most enduring secrets might be those written on paper, locked in a drawer, protected by physical rather than mathematical barriers. That person with the post-it note might just be ahead of their time — though perhaps they should consider moving it from the wall to a safe. This isn’t regression — it’s adaptation. Just as quantum computing represents a fundamental shift in how we process information, we might need a fundamental shift in how we protect it. It may be that the future of security looks a lot like its past.
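For a rough sense of the asymmetry behind that prediction (my gloss, not from the original post): the best known classical factoring algorithm, the general number field sieve, runs in sub-exponential time in the size of the integer n, while Shor’s algorithm on a quantum computer runs in polynomial time in its bit-length, roughly:

\[
\underbrace{\exp\!\left(\left(\tfrac{64}{9}\right)^{1/3}(\ln n)^{1/3}(\ln\ln n)^{2/3}\right)}_{\text{GNFS (classical)}}
\quad\text{vs.}\quad
\underbrace{O\!\left((\log n)^{3}\right)}_{\text{Shor (quantum)}}
\]

That gap, sub-exponential versus polynomial, is what makes “harvest now, decrypt later” a credible strategy rather than paranoia.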

a week ago 10 votes

More in design

Kanzaki Restaurant

“Open” is the key word, and it is unalterable. The open kitchen, the open bar counter, the open display space — all these...

12 hours ago 2 votes
CSS Space Toggles

I’ve been working on a transition to using the light-dark() function in CSS. What this boils down to is, rather than CSS that looks like this:

:root {
  color-scheme: light;
  --text: #000;
}
@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
    --text: #fff;
  }
}

I now have this:

:root {
  color-scheme: light;
  --text: light-dark(#000, #fff);
}
@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
  }
}

That probably doesn’t look that interesting. That’s what I thought when I first learned about light-dark() — “Oh hey, that’s cool, but it’s just different syntax. Six of one, half a dozen of the other kind of thing.” But it does unlock some interesting ways of handling theming, which I will have to cover in another post. Suffice it to say, I think I’m starting to drink the light-dark() koolaid.

Anyhow, using the above pattern, I want to compose CSS variables to make a light/dark theme based on a configurable hue. Something like this:

:root {
  color-scheme: light;
  /* configurable via JS */
  --accent-hue: 56;
  /* which then cascades to other derivations */
  --accent: light-dark(
    hsl(var(--accent-hue) 50% 100%),
    hsl(var(--accent-hue) 50% 0%)
  );
}
@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
  }
}

The problem is that the --accent-hue value doesn’t quite look right in dark mode. It needs more contrast. I need a slightly different hue for dark mode. So my thought is: I’ll put that value in a light-dark() function.

:root {
  --accent-hue: light-dark(56, 47);
  --my-color: light-dark(
    hsl(var(--accent-hue) 50% 100%),
    hsl(var(--accent-hue) 50% 0%)
  );
}

Unfortunately, that doesn’t work. You can’t put arbitrary values in light-dark(). It only accepts color values. I asked what you could do instead, and Roma Komarov told me about CSS “space toggles”. I’d never heard of these, so I looked them up. First I found Chris Coyier’s article, which made me feel good because even Chris admits he didn’t fully understand them. Then Christopher Kirk-Nielsen linked me to his article, which helped me understand this idea of “space toggles” even more. I ended up following the pattern Christopher mentions in his article, and it works like a charm in my implementation!

The gist of the code works like this: when the user hasn’t specified a theme, default to “system”, which is light by default, or dark if they’re on a device that supports prefers-color-scheme. When a user explicitly sets the color theme, set an attribute on the root element to denote that.

/* Default preferences when "unset" or "system" */
:root {
  --LIGHT: initial;
  --DARK: ;
  color-scheme: light;
}
@media (prefers-color-scheme: dark) {
  :root {
    --LIGHT: ;
    --DARK: initial;
    color-scheme: dark;
  }
}

/* Handle explicit user overrides */
:root[data-theme-appearance="light"] {
  --LIGHT: initial;
  --DARK: ;
  color-scheme: light;
}
:root[data-theme-appearance="dark"] {
  --LIGHT: ;
  --DARK: initial;
  color-scheme: dark;
}

/* Now set my variables */
:root {
  /* Set the "space toggles" */
  --accent-hue: var(--LIGHT, 56) var(--DARK, 47);
  /* Then use them */
  --my-color: light-dark(
    hsl(var(--accent-hue) 50% 90%),
    hsl(var(--accent-hue) 50% 10%)
  );
}

So what is the value of --accent-hue? That line sort of reads like this: if the light toggle is on, return 56; else if the dark toggle is on, return 47. (Mechanically, initial is the guaranteed-invalid value for a custom property, so var() falls back to the number, while the space-valued toggle resolves to nothing.) And it works like a charm! Now I can set arbitrary values for things like accent color hue, saturation, and lightness, then leverage them elsewhere. And when the color scheme or accent color change, all these values recalculate and cascade through the entire website — cool!
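As an aside, that last point suggests one pair of toggles can drive several channels at once. A minimal sketch (the --accent-sat and --accent-light names are my own illustration, not from the post):

:root {
  /* One pair of toggles, several derived channels */
  --accent-hue: var(--LIGHT, 56) var(--DARK, 47);
  --accent-sat: var(--LIGHT, 50%) var(--DARK, 65%);
  --accent-light: var(--LIGHT, 90%) var(--DARK, 15%);
  /* Any color built from these recalculates when the toggles flip */
  --accent: hsl(var(--accent-hue) var(--accent-sat) var(--accent-light));
}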
A Note on Minification

A quick tip: if you’re minifying your HTML and you’re using this space toggle trick, beware of minifying your CSS! Stuff like this:

selector {
  --ON: ;
  --OFF: initial;
}

could get minified to:

selector{--OFF:initial}

and this “space toggles trick” won’t work at all. Trust me, I learned from experience.

2 days ago 4 votes
Doğuş Çay Packaging Redesign by katkagraphics

Last semester at university we were given a really cool task. We had to choose an existing company that distributes...

2 days ago 3 votes
Aspect Ratio Changes With CSS View Transitions

So here I am playing with CSS view transitions (again). I’ve got Dave Rupert’s post open in one tab, which serves as my recurring reference for the question, “How do you get these things to work again?” I’ve followed Dave’s instructions for transitioning the page generally and am now working on individual pieces of UI specifically. I feel like I’m 98% of the way there; I’ve just hit a small bug. It’s small. Many people might not even notice it. But I do, and it’s bugging me.

When I transition from one page to the next, I expect this “active page” outline to transition nicely from the old page to the new one. But it doesn’t. Not quite. Did you notice it? It’s subtle and fast, but it’s there. I have to slow my ::view-transition-old() animation timing waaaaay down to catch it. The outline grows proportionally in width but not in height as it transitions from one element to the next.

I kill myself trying to figure out what this bug is. Dave mentions in his post how he had to use fit-content to fix some issues with container changes between pages. I don’t fully understand what he’s getting at, but I think maybe that’s where my issue is? I try sticking fit-content on different things, but none of it works. I ask AI and it’s totally worthless, synthesizing disparate topics about CSS into an answer that seems right on the surface but is totally wrong. So I sit and think about it. What’s happening almost looks like some kind of screwy side effect of a transform: scale() operation. Perhaps it’s something about how the default user agent styles for these things animate the before/after state? No, that can’t be it… Honestly, I have no idea. I don’t know much about CSS view transitions, but I know enough to know that I don’t know enough to even formulate the right set of keywords for a decent question. I feel stuck.

I consider reaching out on the socials for help, but at the last minute I somehow stumble on this perfectly wonderful blog post from Jake Archibald: “View transitions: Handling aspect ratio changes” — and he’s got a one-line fix in my hands in seconds! The article is beautiful. It not only gives me an answer, but it provides really wonderful visuals that help describe why the problem I’m seeing is a problem in the first place. It really helps fill out my understanding of how this feature works. I absolutely love finding writing like this on the web.

So now my problem is fixed — no more weirdness! If you’re playing with CSS view transitions these days, Jake’s article is a must-read to help shape your understanding of how the feature works. Go give it a read.
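For the curious, the one-line fix Jake’s article arrives at has roughly this shape; a sketch, assuming a transition named card (the name is a placeholder, not from the post):

/* The default UA styles let each snapshot keep its own aspect ratio
   (full width, auto height), so when the old and new elements have
   different proportions the animation stretches in width but not in
   height. Forcing both snapshots to fill the group's height keeps
   them scaling together. */
::view-transition-old(card),
::view-transition-new(card) {
  height: 100%;
}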

4 days ago 7 votes