More from Christopher Butler
Check out the light in my office right now 🤩

…AI effectively, but to understand how it fits into the larger patterns of human creativity and purpose. That’s a good thing — designers are good observers. No matter the tech, we notice patterns, and we notice the lack of them. So in the midst of what is likely a major, AI-driven transition for us all, it’s worth considering that the future of design won’t be about human versus machine, but about understanding the pattern language that emerges when both intelligences work together in a system. As Christopher Alexander and his cohort might have said, it will be about creating a new kind of wholeness — one that honors both the computational power of AI and the nuanced wisdom of human experience.
We will never agree about AI until we agree about what it means to live a good life.

Current debates about artificial intelligence circle endlessly around questions of capability, economic impact, and resource allocation – not to mention language. Is AI truly useful? What will it do to jobs? How much should we invest in its development? And what do we mean by AI? What’s the difference between machine learning and large language modeling? What can one do that the other cannot? What happens when we mix them up? These discussions are necessary, but they will remain maddening unless we back up and widen the scope. It often feels like we’re arguing about the objective merit of a right-hand turn over a left without first agreeing where we’re trying to go.

The real questions about AI are actually questions about human flourishing. How much of a person’s life should be determined by work? What level of labor should be necessary to meet basic needs? Where do we draw the line between necessity and luxury? How should people derive contentment and meaning? Without wrestling with these fundamental questions, our AI debates are just technical discussions floating free of human context.

Consider how differently we might approach AI development if we had clear answers about what constitutes a good human life. If we believed that meaningful work is essential to human flourishing, we’d focus AI development on human augmentation while remaining vigilant about how it might replace human function. We’d carefully choose how it is applied, leveraging machine-learning systems to analyze datasets beyond our comprehension and move scientific investigation forward, but withholding its use in areas that derive their value from human creativity. If we thought that freedom from labor was the path to human fulfillment, we’d push AI toward maximum automation and do the work of transitioning from a labor- and resource-driven capitalist system to a completely different structure.
We would completely remake the world, starting with the lives of those who inhabit it. But without this philosophical foundation, we’re left with market forces, technological momentum, and environmental pressures shaping our future by default. The details of human life become outputs of these systems rather than conscious choices guided by shared values.

As abstract as this may sound, it is as essential as any technical detail that differentiates one model from another. Every investment in AI derives from a worldview which, at best, prefers maintaining the structural status quo or, at worst, desires a further widening of the gap between economic power and poverty. Every layer of large language model adoption reinforces the picture of society drawn by just one piece of it — the internet — and as dependence upon these systems increases, so does the reality distortion. The transition from index-driven search engines to AI-driven research engines reaches a nearly gaslighting level of affirming a certain kind of truth; a referral, after all, is a different kind of truth-builder than an answer. And though both systems draw from exactly the same information, one will persuade its users more directly. Its perception will be reality. Unless, of course, we say otherwise.

We’re building the infrastructure of future human experience without explicitly discussing what that experience should be. To be sure, many humans have shared worldviews. Some are metaphysical in nature, if not explicitly religious. Some are maintained independent of economic and technological forces, if not in direct rejection of them. Among the many pockets of human civilization rooted in pre-digital traditions, the inexorable supremacy of AI likely looks like an apocalypse they’d prefer to avoid. I am not saying we all must live and believe as others do. A shared picture of human flourishing does not require a totalitarian, trickle-down demand on every detail of day-to-day life.
But it must be defined enough to help answer questions, particularly about technology, that are relevant to anyone alive. The most urgent conversation about AI isn’t about its capabilities or risks, but about the kind of life we want it to help us create. Until we grapple with these deeper questions about human flourishing, our technical debates will continue to miss the point and further alienate us from one another. This from Robin Sloan vs. this from Baldur Bjarnason vs. this from Michelle Barker, for example. All thoughtful, offering nuance and good points, but also missing one another.
What enabled us to create AI is the very thing it has the power to erase.

I still have dozens of sketchbooks, many filled with ideas for logos, layouts, and other designs. These pages don’t just capture finished images in sketch form. They capture an entire process of thinking – the false starts, the unexpected discoveries, the gradual, iterative refinement of ideas. Today, those same explorations can, if you choose, be reduced to a single prompt: “Generate a minimal, modern wordmark for a SaaS product called _____.” The immediacy is seductive.

Recently, I found myself experimenting with one of the latest AI tools promising simple “logo generation.” Within minutes, I had generated dozens of logomarks — each one passable, but each one missing something essential that I couldn’t quite name. My eight-year-old daughter, looking on, asked a telling question: “Dad, are you playing a game?”

Her question has stayed with me. After twenty years in design, I’ve watched our tools evolve from physical to digital to algorithmic, each transition sold with somewhat mixed messages: more simplicity and more features; more efficiency and more reach; more speed and more possibility. But as we race toward a future where AI can generate endless variations with a few keystrokes, I’m increasingly conscious of what happens – or, in the case of generative work, what doesn’t happen — in the spaces between. These spaces are the vital territory of human creativity that resists compression.

Yes, I realize it may sound as if I am arguing against a straw man. One needn’t stop at the generated logo, which is, after all, just a single image. It’s simply a sketch to launch from. It speeds up the process of concepting. I suppose I don’t have a significant problem accepting the role of AI in ideation. But I have already seen how the immediacy of AI sets the expectation for process collapse. During a recent meeting, someone demonstrated generating a logo in seconds.
When I asked whether the tool could produce an editable file (knowing, of course, that it couldn’t), the answer I got was, “Does that matter?” Well, of course it does! To take a logo forward, to truly make it functional as the cornerstone of an identity system, a flat, uneditable image isn’t enough. When ends justify the means, the means too easily become invisible. But that’s another article. (Note to self: Process Collapse is the Silent Spring of AI.)

Back to hands and paper, then. Knowing full well that it’s become rather trite for people of my age to repeatedly bring up the merits of “analog” materials, I am going to do it anyway. And that is because the resistance of a surface against a pen or pencil creates an important friction that works its way through your body and into your mind. That resistance is valuable; it forces us to slow down. That gives our minds time to process each move and consider the next. The best digital tools preserve some of this productive friction. Whether you’re working in Illustrator, Figma, or some other creative composition environment, there is likely a pen tool, and its virtue is not in being fast.

AI tools offer no such thing, by design. They collapse the space between intention and result, between thinking and making. They eliminate the productive void — that space where uncertainty leads to discovery. It’s the same void we experience when waiting in lines, walking from one place to the next, showering, washing dishes, stopped in traffic. These are the places where solutions to difficult problems bloom in our minds, not from toiling over them, but from letting go, albeit briefly. The latency between mind and machine, whether that machine is a digital pen or one filled with ink, is a feature, not a bug. It’s to be preserved as fertile ground for observation and consideration. AI scorches that earth, at least in the context of image and design generation, at least right now.
AI is undeniable; the tools have already changed how I work in ways both subtle and profound. But as someone who has watched design trends come and go since the late 1990s — from glitchy bitmap Geocities chic to skeuomorphism to flat design; from Dreamweaver to Photoshop to Figma to AI — I’m as willing to change as I am wary of how quickly we can mistake convenience for improvement.

My son is three years old, and very much at the stage of development where his awareness of what he could have is not yet tempered by an understanding of how it arrives before him. A shrill demand for “more apple!” is repeated instantly because, well, toddlers have no patience. The current stage of AI has me thinking about that growth stage quite a bit. He is, after all, growing up in a world where, thanks to 21st-century technology, the space between wanting and having grows ever shorter. He doesn’t know that yet, but I do. And I worry about what kind of person he might become if he doesn’t experience some friction.

And this is, of course, a reflection of my concern about what happens to a society when it no longer has to wait for anything. When frictions are sanded down by the mill of innovation to such a point that we — what? — lose the will to do much of anything? Obviously, we are not yet at that point — there is much good that AI could equip us to do — but one point leads to the next. What will the next be?

There is a certain irony to this line of thinking, I know. I’m writing this in a space surrounded by digital tools that would have seemed magical during my sketchbooking days back in college. Each technological shift in design since that time has imposed some kind of creative compression, each giving something and taking something away. AI will do that, too. But it’s the speed of AI that worries me most as a maker. It’s the thing about AI that has already prepared me to be surprised by my own world not too long from now.
And though I try to reserve judgement — I feel it’s the intellectually honest thing to do at this moment – it seems we are at risk of losing access to the spaces between things, spaces we may not fully value until they are gone. These spaces are the productive void.

The thing to do isn’t to reject AI completely. I can already see that sort of resistance is futile. But can we preserve the spaces between things? Can we protect the natural resource of friction, of waiting, of gaps and iteration? In my own practice, I’m learning to use AI not as a tool of compression, but as one of expansion. Prompts become not endpoints but starting points. They create new voids to explore, new territories for my mind to inhabit.

My accumulating stacks of sketchbooks remind me that design has always been about more than just the outcome. It’s about the journey, the resistance, the productive uncertainty that leads to discovery. As we rush to embrace the power of AI, we might do well to remember that what enabled us to create it is the very thing it has the power to erase.
When quantum computing breaks everything.

You probably know someone who still keeps essential passwords scrawled on a post-it note stuck someplace. And you’ve probably urged them to set up a password manager and shred the note, for God’s sake! But what if they’re on to something? They say that even a stopped watch is right twice a day. What if the post-it’s time has come back around again? What if paper is safer than code?

As bitrot-prone as it is, I think we’ve been lulled into a sense of digital permanence. Whatever we save will last forever, preserved in perfect fidelity, accessible only to those who have permission to see it. This assumption underlies the way nearly every aspect of modern life has come to work, from banking to healthcare to personal communications. But here comes quantum computing, and it threatens to undo all of it. When sufficiently powerful quantum computers arrive, they’ll be able to break most of the encryption we use today. And when that happens, our most precious secrets might find their safest home in an unexpected place: paper.

It sounds like the pitch for a TV show, I know. But this isn’t fantasy. Quantum computers leverage the strange properties of quantum mechanics to solve certain problems exponentially faster than classical computers. Among these problems is factoring large numbers — the mathematical operation that underlies most modern encryption. While current quantum computers aren’t yet powerful enough to break encryption, experts predict they will be within a decade. More concerning is that adversaries are already harvesting encrypted data, waiting for the day they can decrypt it. (For a sobering assessment of where quantum computing and encryption stand today, see Shor’s Algorithm, D-Wave, Quantum-Resistant Algorithms, the NSA’s Commercial National Security Algorithm Suite 2.0, and RAND’s forecast of how this will all go down.)

This creates a new calculus around digital security: How long does information need to stay secret?
For communications like text messages or emails, maybe a few years is enough. But what about medical records that should remain private for a lifetime? Or state secrets that need to remain confidential for generations? Or corporate intellectual property that must never be revealed? For information that needs permanent protection, we might need to look backward to move forward.

Paper — or other physical storage media — offers something digital storage cannot: security through physical rather than mathematical barriers. You can’t hack paper. You can’t decrypt it. You can’t harvest it now and crack it later. The only way to access information stored on paper is to physically acquire it. They’re going to have to start screening Tinker Tailor Soldier Spy at CIA training again.

Consider cryptocurrency as a bit of a harbinger of this future. Bitcoin, despite being entirely digital, already requires physical security solutions. Hardware wallets — physical devices that store cryptographic keys — are considered the most secure way to protect digital assets. But even this hybrid approach depends upon encryption that quantum computers could eventually break. The very existence of these physical intermediaries hints at a fundamental truth: purely digital security may be impossible in a post-quantum world.

Many organizations already maintain their most sensitive information in physical form. The U.S. military keeps certain critical systems air-gapped and documented on paper. Some banking systems still rely on physical ledgers as backups. Corporate lawyers often prefer paper for their most sensitive documents. These aren’t antiquated holdovers — they’re pragmatic solutions to security concerns that quantum computing will only make more relevant.

But returning to paper doesn’t mean abandoning digital convenience entirely. A hybrid approach might emerge, where routine operations remain digital while truly sensitive information returns to physical form.
This could lead to new systems and practices: secure physical storage facilities might become as common as data centers; document destruction might become as critical as data deletion; physical security might become as sophisticated as cybersecurity. Every sensitive government or corporate decision is made in conclave.

The future might also see novel solutions beyond traditional paper. Biological storage — encoding information in DNA — could offer physical security with digital density. New materials might be developed specifically for secure information storage (of course, if you can put it in there, someone can probably get it out). We might even see the emergence of new forms of encryption based on physical rather than mathematical properties. Good lord, what if you have to dance your password…

The rise of quantum computing doesn’t mean the end of privacy, but it might mean the end of our assumption that digital is forever. In a world where no encryption is permanently secure, the most enduring secrets might be those written on paper, locked in a drawer, protected by physical rather than mathematical barriers. That person with the post-it note might just be ahead of their time — though perhaps they should consider moving it from the wall to a safe.

This isn’t regression — it’s adaptation. Just as quantum computing represents a fundamental shift in how we process information, we might need a fundamental shift in how we protect it. It may be that the future of security looks a lot like its past.
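To make the factoring point above concrete: the security of widely used public-key schemes like RSA rests on the classical difficulty of splitting a large number into its prime factors. The sketch below is an illustration only, not real cryptography; it shows why size is the whole game: trial division answers instantly for small numbers but is hopeless for the hundreds-of-digits moduli used in real keys, which is exactly the barrier Shor’s algorithm would remove.

```javascript
// Toy factoring by trial division: fine for small n, hopeless for the
// 600+-digit moduli used in real RSA keys. Shor's algorithm on a large
// enough quantum computer would factor those in polynomial time.
function factor(n) {
  for (let p = 2; p * p <= n; p++) {
    if (n % p === 0) return [p, n / p]; // smallest prime factor found
  }
  return [n]; // n is prime
}

console.log(factor(3233)); // [53, 61] — 3233 is the classic textbook RSA modulus
```

Each extra bit of key length roughly doubles the classical search space, which is why "harvest now, decrypt later" is only a waiting game rather than an impossibility.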
More in design
I’ve been working on a transition to using the light-dark() function in CSS. What this boils down to is, rather than CSS that looks like this:

```css
:root {
  color-scheme: light;
  --text: #000;
}

@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
    --text: #fff;
  }
}
```

I now have this:

```css
:root {
  color-scheme: light;
  --text: light-dark(#000, #fff);
}

@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
  }
}
```

That probably doesn’t look that interesting. That’s what I thought when I first learned about light-dark() — “Oh hey, that’s cool, but it’s just different syntax. Six of one, half a dozen of the other kind of thing.” But it does unlock some interesting ways of handling theming, which I will have to cover in another post. Suffice it to say, I think I’m starting to drink the light-dark() Kool-Aid.

Anyhow, using the above pattern, I want to compose CSS variables to make a light/dark theme based on a configurable hue. Something like this:

```css
:root {
  color-scheme: light;
  /* configurable via JS */
  --accent-hue: 56;
  /* which then cascades to other derivations */
  --accent: light-dark(
    hsl(var(--accent-hue) 50% 100%),
    hsl(var(--accent-hue) 50% 0%)
  );
}

@media (prefers-color-scheme: dark) {
  :root {
    color-scheme: dark;
  }
}
```

The problem is that the --accent-hue value doesn’t quite look right in dark mode. It needs more contrast. I need a slightly different hue for dark mode. So my thought is: I’ll put that value in a light-dark() function.

```css
:root {
  --accent-hue: light-dark(56, 47);
  --my-color: light-dark(
    hsl(var(--accent-hue) 50% 100%),
    hsl(var(--accent-hue) 50% 0%)
  );
}
```

Unfortunately, that doesn’t work. You can’t put arbitrary values in light-dark(). It only accepts color values. I asked what you could do instead, and Roma Komarov told me about CSS “space toggles”. I’d never heard of these, so I looked them up. First I found Chris Coyier’s article, which made me feel good because even Chris admits he didn’t fully understand them.
Then Christopher Kirk-Nielsen linked me to his article, which helped me understand this idea of “space toggles” even more. I ended up following the pattern Christopher mentions in his article, and it works like a charm in my implementation! The gist of the code works like this: when the user hasn’t specified a theme, default to “system,” which is light by default, or dark if they’re on a device that supports prefers-color-scheme. When a user explicitly sets the color theme, set an attribute on the root element to denote that.

```css
/* Default preferences when "unset" or "system" */
:root {
  --LIGHT: initial;
  --DARK: ;
  color-scheme: light;
}

@media (prefers-color-scheme: dark) {
  :root {
    --LIGHT: ;
    --DARK: initial;
    color-scheme: dark;
  }
}

/* Handle explicit user overrides */
:root[data-theme-appearance="light"] {
  --LIGHT: initial;
  --DARK: ;
  color-scheme: light;
}

:root[data-theme-appearance="dark"] {
  --LIGHT: ;
  --DARK: initial;
  color-scheme: dark;
}

/* Now set my variables */
:root {
  /* Set the "space toggles" */
  --accent-hue: var(--LIGHT, 56) var(--DARK, 47);

  /* Then use them */
  --my-color: light-dark(
    hsl(var(--accent-hue) 50% 90%),
    hsl(var(--accent-hue) 50% 10%)
  );
}
```

So what is the value of --accent-hue? That line sort of reads like this: if --LIGHT has a value, return 56; else, if --DARK has a value, return 47. And it works like a charm! Now I can set arbitrary values for things like accent color hue, saturation, and lightness, then leverage them elsewhere. And when the color scheme or accent color changes, all these values recalculate and cascade through the entire website — cool!

A Note on Minification

A quick tip: if you’re using this space toggle trick, beware of minifying your CSS! Stuff like this:

```css
selector {
  --ON: ;
  --OFF: initial;
}
```

could get minified to:

```css
selector{--OFF:initial}
```

and this “space toggles trick” won’t work at all. Trust me, I learned from experience.

Email · Mastodon · Bluesky
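The space-toggle resolution described above can be modeled in plain JavaScript. This is a hypothetical helper purely to illustrate the logic, not anything the browser runs: in the CSS, `--LIGHT: initial` is guaranteed-invalid, so `var(--LIGHT, 56)` falls back to 56, while `--LIGHT: ;` (a lone space) makes `var()` resolve to an empty token sequence; the two `var()` results then concatenate into the final custom-property value.

```javascript
// Model of: --accent-hue: var(--LIGHT, 56) var(--DARK, 47);
// A toggle set to "initial" is invalid, so var() uses its fallback;
// a toggle set to a space contributes nothing to the value.
function resolveVar(toggle, fallback) {
  return toggle === "initial" ? String(fallback) : "";
}

function accentHue(scheme) {
  // Mirrors the :root rules: exactly one toggle is "initial" at a time.
  const LIGHT = scheme === "light" ? "initial" : " ";
  const DARK = scheme === "dark" ? "initial" : " ";
  return (resolveVar(LIGHT, 56) + resolveVar(DARK, 47)).trim();
}

console.log(accentHue("light")); // "56"
console.log(accentHue("dark"));  // "47"
```

The same two toggles can gate any number of non-color values (saturation, lightness, spacing), which is exactly what light-dark() alone cannot do.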
So here I am playing with CSS view transitions (again). I’ve got Dave Rupert’s post open in one tab, which serves as my recurring reference for the question, “How do you get these things to work again?” I’ve followed Dave’s instructions for transitioning the page generally and am now working on individual pieces of UI specifically. I feel like I’m 98% of the way there; I’ve just hit a small bug.

It’s small. Many people might not even notice it. But I do, and it’s bugging me. When I transition from one page to the next, I expect this “active page” outline to transition nicely from the old page to the new one. But it doesn’t. Not quite. Did you notice it? It’s subtle and fast, but it’s there. I have to slow my ::view-transition-old() animation timing waaaaay down to catch it. The outline grows proportionally in width but not in height as it transitions from one element to the next.

I kill myself trying to figure out what this bug is. Dave mentions in his post how he had to use fit-content to fix some issues with container changes between pages. I don’t fully understand what he’s getting at, but I think maybe that’s where my issue is? I try sticking fit-content on different things, but none of it works. I ask AI, and it’s totally worthless, synthesizing disparate topics about CSS into an answer that seems right on the surface but is totally wrong. So I sit and think about it. What’s happening almost looks like some kind of screwy side effect of a transform: scale() operation. Perhaps it’s something about how the default user agent styles for these things animate the before/after state? No, that can’t be it… Honestly, I have no idea. I don’t know much about CSS view transitions, but I know enough to know that I don’t know enough to even formulate the right set of keywords for a decent question. I feel stuck.
I consider reaching out on the socials for help, but at the last minute I somehow stumble on this perfectly wonderful blog post from Jake Archibald, “View transitions: Handling aspect ratio changes,” and he’s got a one-line fix in my hands in seconds! The article is beautiful. It not only gives me an answer, but it provides really wonderful visuals that help describe why the problem I’m seeing is a problem in the first place. It really helps fill out my understanding of how this feature works. I absolutely love finding writing like this on the web.

So now my problem is fixed — no more weirdness! If you’re playing with CSS view transitions these days, Jake’s article is a must-read to help shape your understanding of how the feature works. Go give it a read.