The AI Designer is a powerful notion, but it is incomplete. For creative craftspeople, AI is more of an imposed thought experiment than it is a utility. Yes, there are countless AI tools that can do parts of the job — make an image for you, create a layout, automate a workflow, even predict eye-tracking — but when you start using them for deadline-driven work, the quality isn’t there, and neither are the time-savings. It is often more than reasonable to conclude that the outcome would be better and faster done yourself. Meanwhile, though, there is the idea of the AI Designer. It is in every room we are in, and the ones we are not — where decisions about design’s role in an operation are made. The idea of the AI Designer promises more than it delivers at the moment, but the promise is so irresistible that it forces everyone into considering a world where everything is different. That kind of thing can lead to bad decision-making and mess with a designer’s head. Times like this...
8 months ago

More from Christopher Butler

In Defense of Text Labels

Why Icons Alone Aren’t Enough

I’m a firm believer in text labels. Interfaces are over-stuffed with icons. The more icons we have to scan over, the more brain power we put toward making sense of them rather than using the tools they represent. This slows us down, not just once, but over and over again. While it may feel duplicative to add a text label, the reality is that few icons are self-sufficient in communicating meaning.

The Problems that Icons Create

1. Few icons communicate a clear, singular meaning immediately.

It’s easy to say that a good icon will communicate meaning — or that if an icon needs a text label, it’s not doing its job. But that doesn’t take into consideration the burden that icons — good or bad — put on people trying to navigate interfaces. Even the simplest icons can create ambiguity. While a trash can icon reliably communicates “delete,” what about the common pencil icon? Does it mean create? Edit? Write? Draw? Context can help with disambiguation, but not always, and that contextual interpretation requires additional cognitive effort. When an icon’s meaning isn’t immediately clear, it slows down our orientation within an interface and the use of its features. Each encounter requires a split-second of processing that might seem negligible but accumulates across interactions.

2. The more icons within an interface, the more difficult it can be to navigate.

As feature sets grow, we often resort to increasingly abstract or subtle visual distinctions between icons. What might have worked with 5-7 core functions becomes unmanageable at 15-20 features. Users must differentiate between various forms of creation, sharing, saving, and organizing, all through pictorial representation alone. The burden of communication increases for each individual icon as an interface’s feature set expands. It becomes increasingly difficult to communicate specific functions with icons alone, especially when distinguishing between similar actions like creating and editing, saving and archiving, or uploading and downloading.

3. Icons function as an interface-specific language within a broader ecosystem.

Interfaces operate within other interfaces. Your application may run within a browser that also runs within an operating system. Users must navigate multiple levels of interface complexity, most of which you cannot control. When creating bespoke icons, you force users to learn a new visual language while still maintaining awareness of established conventions. This creates particular challenges with standardized icon sets. When we use established systems like Google’s Material Design, an icon that represents one function in our interface might represent something entirely different in another application. This cross-context confusion adds to the cognitive load of icon interpretation.

Why Text Labeling Helps Your Interface

1. Text alone is usually more efficient.

Our brains process familiar words holistically rather than letter-by-letter, making them incredibly efficient information carriers. We’ve spent our lives learning to recognize words instantly, while most app icons require new visual vocabulary. Scanning text is fundamentally easier than scanning icons. A stacked list of text requires only a one-directional scan (top-to-bottom), while icon grids demand bi-directional scanning (top-to-bottom and left-to-right). This efficiency becomes particularly apparent in mobile interfaces, where similar-looking app icons can create a visually confusing grid.

2. Text can make icons more efficient.

The example above comes from Magnolia, an application I designed. On the left is the side navigation panel without labels. On the right is the same panel with text labels. Magnolia is an extremely niche tool with highly specific features that align with the needs of research and planning teams who develop account briefs. Without the labels, the people we created Magnolia for would likely find the navigation system confusing. Adding text labels to icons serves two purposes: it clarifies meaning and provides greater creative freedom. When an icon’s meaning is reinforced by text, users can scan more quickly and confidently. Additionally, designers can focus more on the unity of their interface’s visual language when they’re not relying on icons alone to communicate function.

3. Icons are effective anchors in text-heavy applications.

Above is another example from Magnolia. Notice how the list of options on the right (Export, Regenerate, and History) stands out because of the icons, but the text labels make it immediately clear what these things do. See, this isn’t an argument for eliminating icons entirely. Icons serve an important role as visual landmarks, helping to differentiate functional areas from content areas. Especially in text-heavy applications, icons help pull the eye toward interactive elements. The combination of icon and text label creates clearer affordances than either element alone.

Finding the Balance

Every time we choose between an icon and a text label, we’re making a choice about cognitive load. We’re deciding how much mental energy people will spend interpreting our interfaces rather than using them. While a purely iconic interface might seem simpler and more attractive, it often creates an invisible tax on attention and understanding. The solution, of course, isn’t found in a “perfect” icon, nor in abandoning icons entirely. Icons remain powerful tools for creating visual hierarchy and differentiation. Instead, we need to be more thoughtful about when and how we deploy them. The best interfaces recognize that icons and text aren’t competing approaches but complementary tools that work best in harmony. This means considering not just the immediate context of our own interfaces, but the broader ecosystem in which they exist. Our applications don’t exist in isolation — they’re part of a complex digital environment where users are constantly switching between different contexts, each with its own visual language.

The next time you’re tempted to create yet another icon, or to remove text labels, remember: the most elegant solution isn’t always the one that looks simple — it’s the one that makes communication and understanding feel simple.
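As a rough sketch of the icon-plus-label pattern in practice (hypothetical markup and class names, not Magnolia’s actual code), pairing a visible label with a decorative icon can look like this:

    <!-- A hypothetical nav item: the icon is decorative, the label carries the meaning. -->
    <button class="nav-item">
      <svg class="nav-icon" aria-hidden="true" viewBox="0 0 16 16">
        <!-- pencil glyph path would go here -->
      </svg>
      <span class="nav-label">Edit</span>
    </button>

    <style>
      /* Stack the icon above its label, as in a side navigation panel. */
      .nav-item {
        display: flex;
        flex-direction: column;
        align-items: center;
        gap: 0.25rem;
      }
      .nav-icon { width: 1.25rem; height: 1.25rem; }
      .nav-label { font-size: 0.75rem; }
    </style>

Because the label is visible text, sighted users and screen readers get the same meaning, and the icon is free to act purely as a visual anchor.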

17 hours ago 1 votes
A New Kind of Wholeness

Check out the light in my office right now 🤩 … AI effectively, but to understand how it fits into the larger patterns of human creativity and purpose. That’s a good thing — designers are good observers. No matter what the tech, we notice patterns, and we notice the lack of them. So in the midst of what is likely a major, AI-driven transition for us all, it’s worth considering that the future of design won’t be about human versus machine, but about understanding the pattern language that emerges when both intelligences work together in a system. As Christopher Alexander and his cohort might have said, it will be about creating a new kind of wholeness — one that honors both the computational power of AI and the nuanced wisdom of human experience.

4 days ago 6 votes
The AI Debate We're Not Having

We will never agree about AI until we agree about what it means to live a good life. Current debates* about artificial intelligence circle endlessly around questions of capability, economic impact, and resource allocation – not to mention language. Is AI truly useful? What will it do to jobs? How much should we invest in its development? And what do we mean by AI? What’s the difference between machine learning and large language modeling? What can one do that the other cannot? What happens when we mix them up? These discussions are necessary, but they will continue to be maddening unless we back up a bit and widen the scope. As it stands, it often feels like we’re arguing about the objective merit of a right-hand turn over a left without first agreeing where we’re trying to go.

The real questions about AI are actually questions about human flourishing. How much of a person’s life should be determined by work? What level of labor should be necessary to meet basic needs? Where do we draw the line between necessity and luxury? How should people derive contentment and meaning? Without wrestling with these fundamental questions, our AI debates are just technical discussions floating free of human context.

Consider how differently we might approach AI development if we had clear answers about what constitutes a good human life. If we believed that meaningful work is essential to human flourishing, we’d focus the development of AI on human augmentation while being vigilant about how it might replace human function. We’d carefully choose how it is applied, leveraging machine-learning systems to analyze datasets beyond our comprehension and move scientific investigations forward, but withholding its use in areas that derive value from human creativity. If we thought that freedom from labor was the path to human fulfillment, we’d push AI toward maximum automation and do the work of transitioning from a labor- and resource-driven capitalist system to a completely different structure. We would completely remake the world, starting with the lives of those who inhabit it.

But without this philosophical foundation, we’re left with market forces, technological momentum, and environmental pressures shaping our future by default. The details of human life become outputs of these systems rather than conscious choices guided by shared values. As abstract as this may sound, it is as essential as any technical detail that differentiates one model from another. Every investment in AI derives from a worldview which, at best, prefers maintaining the structural status quo, or, at worst, desires a further widening of the gap between economic power and poverty. Every new layer of large language model adoption reinforces the picture of society drawn by just one piece of it — the internet — and as dependence upon these systems increases, so does the reality distortion. The transition from index-driven search engines to AI-driven research engines reaches a nearly gaslighting level of affirming a certain kind of truth; a referral, after all, is a different kind of truth-builder than an answer. And though both systems draw from exactly the same information, one will persuade its users more directly. Its perception will be reality. Unless, of course, we say otherwise.

We’re building the infrastructure of future human experience without explicitly discussing what that experience should be. To be sure, many humans have shared worldviews. Some are metaphysical in nature, if not explicitly religious. Some are maintained independent of economic and technological forces, if not in direct rejection of them. Among the many pockets of human civilization rooted in pre-digital traditions, the inexorable supremacy of AI likely looks like an apocalypse they’d prefer to avoid. I am not saying we all must live and believe as others do. A shared picture of human flourishing does not require a totalitarian, trickle-down demand on every detail of day-to-day life. But it must be defined enough to help answer questions, particularly about technology, that are relevant to anyone alive.

The most urgent conversation about AI isn’t about its capabilities or risks, but about the kind of life we want it to help us create. Until we grapple with these deeper questions about human flourishing, our technical debates will continue to miss the point and further alienate us from one another.

* This from Robin Sloan vs. this from Baldur Bjarnason vs. this from Michelle Barker, for example. All thoughtful, offering nuance and good points, but also missing one another.

6 days ago 8 votes
The Productive Void

What enabled us to create AI is the very thing it has the power to erase. I still have dozens of sketchbooks, many filled with ideas for logos, layouts, and other designs. These pages don’t just capture fully realized images, albeit in sketch form. They capture an entire process of thinking — the false starts, the unexpected discoveries, the gradual, iterative refinement of ideas. Today, those same explorations can, if you choose, be reduced to a single prompt: “Generate a minimal, modern wordmark for a SaaS product called _____.” The immediacy is seductive.

Recently, I found myself experimenting with one of the latest AI tools promising simple “logo generation.” Within minutes, I had generated dozens of logomarks — each one passable, but each one missing something essential that I couldn’t quite name. My eight-year-old daughter, looking on, asked a telling question: “Dad, are you playing a game?”

Her question has stayed with me. After twenty years in design, I’ve watched our tools evolve from physical to digital to algorithmic, each transition sold with somewhat mixed messages: more simplicity and more features; more efficiency and more reach; more speed and more possibility. But as we race toward a future where AI can generate endless variations with a few keystrokes, I’m increasingly conscious of what happens — or, in the case of generative work, what doesn’t happen — in the spaces between. These are the vital territory of human creativity that resists compression.

Yes, I realize it may sound as if I am arguing against a straw man. One needn’t stop at the generated logo, which is, after all, just a single image. It’s simply a sketch to launch from. It speeds up the process of concepting. I suppose I don’t have a significant problem accepting the role of AI in ideation. But I have already seen how the immediacy of AI sets the expectation for process collapse. During a recent meeting, someone demonstrated generating a logo in seconds. When I asked whether the tool could produce an editable file (knowing, of course, that it couldn’t), the answer I got was, “Does that matter?” Well, of course it does! To take a logo forward, to truly make it functional as the cornerstone of an identity system, a flat, uneditable image isn’t enough. When ends justify the means, the means too easily become invisible. But that’s another article. (Note to self: Process Collapse is the Silent Spring of AI.)

Back to hands and paper, then. Knowing full well that it’s become rather trite for people of my age to repeatedly bring up the merits of “analog” materials, I am going to do it anyway. And that is because the resistance of a surface against a pen or pencil creates an important friction that works its way through your body and into your mind. That resistance is valuable; it forces us to slow down. That gives our mind time to process each move and consider the next. The best digital tools preserved some of this productive friction. Whether you’re working in Illustrator, Figma, or some other creative composition environment, there is likely a pen tool, and its virtue is not in being fast. AI tools offer no such thing, by design. They collapse the space between intention and result, between thinking and making. They eliminate the productive void — that space where uncertainty leads to discovery. It’s the same void we experience when waiting in lines, walking from one place to the next, showering, washing dishes, stopped in traffic. These are the places where solutions to difficult problems bloom in our minds, not from toiling over them, but from letting go, albeit briefly.

The latency between mind and machine, whether that machine is a digital pen or one filled with ink, is a feature, not a bug. It’s to be preserved as fertile ground for observation and consideration. AI scorches that earth, at least in the context of image and design generation, at least right now. AI is undeniable; the tools have already changed how I work in ways both subtle and profound. But as someone who has watched design trends come and go since the late 1990s — from glitchy bitmap Geocities chic to skeuomorphism to flat design; from Dreamweaver to Photoshop to Figma to AI — I’m as willing to change as I am wary of how quickly we can mistake convenience for improvement.

My son is three years old, and very much at the stage of development where his awareness of what he could have is not yet tempered by an understanding of how it arrives before him. A shrill demand for “more apple!” is repeated instantly because, well, toddlers have no patience. The current stage of AI has me thinking about that growth stage quite a bit. He is, after all, growing up in a world where, thanks to 21st-century technology, the space between wanting and having grows ever shorter. He doesn’t know that yet, but I do. And I worry about what kind of a person he might become if he doesn’t experience some friction. And this is, of course, a reflection of my concern about what happens to society when it no longer has to wait for anything. When frictions are sanded down by the mill of innovation to such a point that we — what? — lose the will to do much of anything? Obviously, we are not yet at that point — there is much good that AI could equip us to do — but also, one point leads to the next. What will the next be?

There is a certain irony to this line of thinking, I know. I’m writing this in a space surrounded by digital tools that would have seemed magical during my sketchbooking days back in college. Each technological shift in design since that time has imposed some kind of creative compression, each giving something and taking something away. AI will do that, too. But it’s the speed of AI that worries me most as a maker. It’s the thing about AI that has already prepared me to be surprised by my own world not too long from now. And though I try to reserve judgement — I feel it’s the intellectually honest thing to do at this moment — it seems we are at risk of losing access to the spaces between things, spaces we may not fully value until they are gone. These spaces are the productive void.

The thing to do isn’t to reject AI completely. I can already see that sort of resistance is futile. But can we preserve the spaces between things? Can we protect the natural resource of friction, of waiting, of gaps and iteration? In my own practice, I’m learning to use AI not as a tool of compression, but one of expansion. Prompts become not endpoints but starting points. They create new voids to explore, new territories for my mind to inhabit. My accumulating stacks of sketchbooks remind me that design has always been about more than just the outcome. It’s about the journey, the resistance, the productive uncertainty that leads to discovery. As we rush to embrace the power of AI, we might do well to remember that what enabled us to create it is the very thing it has the power to erase.

a week ago 8 votes
Shut up, Siri

There will be no monoculture of human-computer interaction. Every day I see a new thinkpiece on “the post-screen future” or “UI-less design” or “the end of the click.” I even used to write things like that. But that’s because I had less experience with human-computer interaction than I have now. You see, there’s this contagion of belief that new technologies not only open new doors, but definitively close old ones. But that’s rarely true. The internet didn’t end radio. The iPhone didn’t end laptops or even desktop computers. And voice interfaces won’t end screens and manual interactions. There will be no monoculture of human-computer interaction. We may have the technology to make the click an unnecessary interaction convention; I doubt we have the desire. That is a good thing. Sure, we’ll talk to our machines, just not all the time.

The definition of the click as “a mechanical act requiring decision, precision, and a split-second negotiation between choice and commitment” is a good one, because it details all the reasons why the click is so useful and effective. However, some might imagine that a sophisticated enough machine would obviate the need for any direct, physical interaction. After all, didn’t the characters in Star Trek walk around the ship, constantly invoking the ship’s Computer to give them answers by just speaking “Computer…” into the room and waiting for its response? They did! But they also had many screens and panels and did a lot of tapping and pressing that might as well have been clicking. Sure, Star Trek was made long before we had a good sense of what advanced computing might actually be capable of, and what it might actually be like to use. But it also might be that the creators of Star Trek held some insight into human-computer interaction that shaped their world building.

Consider how your brain processes information. The eye-brain connection is one of the most sophisticated and efficient systems in human biology. You can scan a list of options and make comparisons in fractions of a second — far faster than listening to those same options read aloud. Suppose we found ourselves ordering dinner at a restaurant in a purely voice-command future. I imagine that would be a lot like the moment when your server reads off the evening’s specials — what was that first one again? — but for the entire time and for everyone at the table. It would take too long, and it would be very annoying.

That’s the thing about how our senses interact with the brain — they don’t all work in the same way. You can view more than one thing at a time, identify them, react to them, and process them virtually simultaneously, but you cannot come close to that kind of performance with sound. Imagine sitting across from two friends who both show you a picture at the same time. You’ll likely be able to identify both right away. Now imagine those two friends telling you something important at the same time. You’re almost certain to ask them to tell you again, one at a time.

What’s more, our brains develop sophisticated spatial memory for visual interfaces. Regular users of any application know exactly where their favorite functions are located — they can navigate complex interfaces almost unconsciously, their cursor moving to the right spot without conscious thought. This kind of spatial memory simply doesn’t exist for voice commands, where every interaction requires active recall of the correct verbal command.

Now imagine an office or public space where everyone is speaking commands to their devices. The cacophony would be unbearable. This highlights another crucial advantage of visual interfaces and direct selection: they’re silent. Sometimes we need to interact with our devices without broadcasting our actions to everyone around us. Voice interfaces remove this option for privacy and discretion in public spaces.

The screen, by the way, tends to get the blame for all the negative things that have come with our increasingly digital lives — the distractions, intrusions, manipulations, and so on — but the screen itself isn’t to blame. In fact, the screen exists because of how incredibly useful it is as a memory surrogate. The screen is a surface for information and interaction, much like a whiteboard, a chalkboard, a canvas, a scroll, or a patch of dirt once was long ago. The function it serves is to hold information for us — so that we don’t have to retain it in perfect memory. That’s why screens are useful, and that’s why — I think — they were still present on an imagined starship three centuries from now, along with a conversant AI.

“Clicking” — which is really just a shorthand for some direct selection method — is incredibly efficient, and increasingly so as the number of options increases. Imagine a list of three items, which is probably the simplest scenario. Speaking a selection command like “the third one, please” is just as efficient as manually selecting the third one in the list. And this is probably true up to somewhere around 6 or 7 items — there’s an old principle, Miller’s “magical number seven, plus or minus two,” having to do with our ability to hold no more than about that many individual pieces of information in our minds. But beyond that number, it gets more difficult without just being able to point. Saying you want the ninth one in a list, for example, requires that you know it’s the ninth one in the list, which might take you a moment to figure out — certainly longer than just pointing at it.

Consider also the computational efficiency. A click or tap requires minimal processing power — it’s a simple input with precise coordinates. Voice commands, on the other hand, require constant audio processing, speech recognition, and AI-driven interpretation. In a world increasingly concerned with energy consumption and computational resources, the efficiency of direct selection becomes even more relevant.

It’s also worth noting that different interface modes serve different accessibility needs. While voice interfaces can be crucial for users with certain physical limitations, visual interfaces with direct selection are essential for users with hearing impairments or speech difficulties. The future isn’t about replacing one mode with another — it’s about ensuring that multiple modes of interaction are available to serve diverse needs.

Perhaps each item in the list is clearly different. In that case, you might just be able to speak aloud which one you want. But what if they’re not that different? What if you aren’t sure what each item is? Perhaps these items aren’t even words, in which case you now have to describe them in a way that the machine can disambiguate. What if there are three dozen in a grid? At that level of density, tracking with your eye and some kind of pointer helps you move more rapidly through the information, to say nothing of making a final selection.

Instead of imagining a wholesale replacement of visual interfaces, we should be thinking about how to better integrate different modes of interaction. How can voice and AI augment visual interfaces rather than replace them? How can we preserve the efficiency of visual processing while adding the convenience of voice commands? The click isn’t just a technological artifact — it’s a reflection of how humans process and interact with information. As long as we have eyes and spatial reasoning, we’ll need interfaces that leverage these capabilities. The future isn’t clickless; it’s multi-modal.

a week ago 9 votes

More in design

KaDeWe: Private Label by Studio Chapeaux

Challenge: Create a private label for the legendary sixth-floor Food Hall at KaDeWe — also known as the culinary heaven...

yesterday 3 votes
CSS Space Toggles

I’ve been working on a transition to using the light-dark() function in CSS. What this boils down to is, rather than CSS that looks like this:

    :root {
      color-scheme: light;
      --text: #000;
    }
    @media (prefers-color-scheme: dark) {
      :root {
        color-scheme: dark;
        --text: #fff;
      }
    }

I now have this:

    :root {
      color-scheme: light;
      --text: light-dark(#000, #fff);
    }
    @media (prefers-color-scheme: dark) {
      :root {
        color-scheme: dark;
      }
    }

That probably doesn’t look that interesting. That’s what I thought when I first learned about light-dark() — “Oh hey, that’s cool, but it’s just different syntax. Six of one, half dozen of another kind of thing.” But it does unlock some interesting ways of handling theming, which I will have to cover in another post. Suffice it to say, I think I’m starting to drink the light-dark() koolaid.

Anyhow, using the above pattern, I want to compose CSS variables to make a light/dark theme based on a configurable hue. Something like this:

    :root {
      color-scheme: light;
      /* configurable via JS */
      --accent-hue: 56;
      /* which then cascades to other derivations */
      --accent: light-dark(
        hsl(var(--accent-hue) 50% 100%),
        hsl(var(--accent-hue) 50% 0%)
      );
    }
    @media (prefers-color-scheme: dark) {
      :root {
        color-scheme: dark;
      }
    }

The problem is that the --accent-hue value doesn’t quite look right in dark mode. It needs more contrast. I need a slightly different hue for dark mode. So my thought is: I’ll put that value in a light-dark() function.

    :root {
      --accent-hue: light-dark(56, 47);
      --my-color: light-dark(
        hsl(var(--accent-hue) 50% 100%),
        hsl(var(--accent-hue) 50% 0%)
      );
    }

Unfortunately, that doesn’t work. You can’t put arbitrary values in light-dark(). It only accepts color values. I asked what you could do instead, and Roma Komarov told me about CSS “space toggles”. I’d never heard about these, so I looked them up. First I found Chris Coyier’s article, which made me feel good because even Chris admits he didn’t fully understand them. Then Christopher Kirk-Nielsen linked me to his article, which helped me understand this idea of “space toggles” even more. I ended up following the pattern Christopher mentions in his article and it works like a charm in my implementation!

The gist of the code works like this: when the user hasn’t specified a theme, default to “system”, which is light by default, or dark if their device reports a dark preference via prefers-color-scheme. When a user explicitly sets the color theme, set an attribute on the root element to denote that.

    /* Default preferences when "unset" or "system" */
    :root {
      --LIGHT: initial;
      --DARK: ;
      color-scheme: light;
    }
    @media (prefers-color-scheme: dark) {
      :root {
        --LIGHT: ;
        --DARK: initial;
        color-scheme: dark;
      }
    }

    /* Handle explicit user overrides */
    :root[data-theme-appearance="light"] {
      --LIGHT: initial;
      --DARK: ;
      color-scheme: light;
    }
    :root[data-theme-appearance="dark"] {
      --LIGHT: ;
      --DARK: initial;
      color-scheme: dark;
    }

    /* Now set my variables */
    :root {
      /* Set the “space toggles” */
      --accent-hue: var(--LIGHT, 56) var(--DARK, 47);

      /* Then use them */
      --my-color: light-dark(
        hsl(var(--accent-hue) 50% 90%),
        hsl(var(--accent-hue) 50% 10%)
      );
    }

So what is the value of --accent-hue? That line sort of reads like this: if --LIGHT has a value, return 56; else if --DARK has a value, return 47. And it works like a charm! Now I can set arbitrary values for things like accent color hue, saturation, and lightness, then leverage them elsewhere. And when the color scheme or accent color changes, all these values recalculate and cascade through the entire website — cool!
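The post denotes the explicit theme choice with an attribute on the root element but doesn’t show the JavaScript that writes it. A minimal sketch of that side, assuming the same data-theme-appearance attribute as the CSS above (the function name is my own, not from the post):

    <script>
      // Hypothetical switcher: writes the attribute the CSS above keys off of.
      function setThemeAppearance(theme) {
        const root = document.documentElement;
        if (theme === "system") {
          // Drop the override so prefers-color-scheme takes over again.
          root.removeAttribute("data-theme-appearance");
        } else {
          // theme is "light" or "dark".
          root.setAttribute("data-theme-appearance", theme);
        }
      }

      setThemeAppearance("dark"); // e.g., wired to a theme-picker control
    </script>

Because --accent-hue and everything derived from it live in the cascade, flipping this one attribute is enough to recalculate every space-toggled value on the page.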
A Note on Minification

A quick tip: if you’re minifying your HTML or CSS and you’re using this space toggle trick, beware! Stuff like this:

    selector {
      --ON: ;
      --OFF: initial;
    }

could get minified to:

    selector{--OFF:initial}

and this “space toggles trick” won’t work at all. Trust me, I learned from experience.

3 days ago 7 votes
Doğuş Çay Packaging Redesign by katkagraphics

Last semester at university we were given a really cool task. We had to choose an existing company that distributes...

3 days ago 4 votes