Why Icons Alone Aren’t Enough

I’m a firm believer in text labels. Interfaces are over-stuffed with icons. The more icons we have to scan over, the more brain power we put toward making sense of them rather than using the tools they represent. This slows us down, not just once, but over and over again. While it may feel duplicative to add a text label, the reality is that few icons are self-sufficient in communicating meaning. The Problems that Icons Create 1. Few icons communicate a clear, singular meaning immediately It’s easy to say that a good icon will communicate meaning — or that if an icon needs a text label, it’s not doing its job. But that doesn’t take into consideration the burden that icons — good or bad — put on people trying to navigate interfaces. Even the simplest icons can create ambiguity. While a trash can icon reliably communicates “delete,” what about the common pencil icon? Does it mean create? Edit? Write? Draw? Context can help with...
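To make the excerpt's point concrete, here is a minimal sketch of the pattern it argues for: an icon paired with a visible text label rather than standing alone. React and TypeScript are assumed, and the component and icon names are hypothetical; this is an illustration, not code from the article.

```tsx
// Hypothetical sketch: a button that never relies on the icon alone.
// The visible label ("Edit") resolves the ambiguity a pencil glyph creates.
import React from "react";

type LabeledIconButtonProps = {
  icon: React.ReactNode; // e.g. a pencil glyph from any icon set
  label: string;         // visible text, not merely an aria-label
  onClick: () => void;
};

export function LabeledIconButton({ icon, label, onClick }: LabeledIconButtonProps) {
  return (
    <button type="button" onClick={onClick}>
      {/* The icon is decorative; the label carries the meaning. */}
      <span aria-hidden="true">{icon}</span>
      <span>{label}</span>
    </button>
  );
}

// Usage (PencilIcon is a placeholder for whatever icon component you use):
// <LabeledIconButton icon={<PencilIcon />} label="Edit" onClick={startEditing} />
```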
2 weeks ago

More from Christopher Butler

From Pascal's Empty Room to Our Full Screens

On the Ambient Entertainment Industrial Complex “All of humanity’s problems stem from man’s inability to sit quietly in a room alone.” Pascal’s observation from the 17th century feels less like historical philosophy and more like a diagnosis of our current condition. The discomfort with idleness that Pascal identified has evolved from a human tendency into a technological ecosystem designed to ensure we never experience it. Philosophers and thinkers throughout history worried about both the individual and societal costs of idleness. Left to our own devices — or rather, without devices — we might succumb to vice or destructive thoughts. Or worse, from society’s perspective, too many idle people might destabilize the social order. Kierkegaard specifically feared that many would become trapped in what he called the “aesthetic sphere” of existence — a life oriented around the pursuit of novel experiences and constant stimulation rather than ethical commitment and purpose. He couldn’t have imagined how prophetic this concern would become. What’s changed isn’t human nature but the infrastructure of distraction available to us. Entertainment was once bounded — a novel read by candlelight, a play attended on Saturday evening, a television program watched when it aired. It occupied specific times and spaces. It was an event. Today, entertainment is no longer an event but a condition. It’s ambient, pervasive, constant. The bright rectangle in our pocket ensures that no moment need be empty of stimulus. Waiting in line, sitting on the train, even using the bathroom — all are opportunities for consumption rather than reflection or simply being. More subtly, the distinction between necessary and unnecessary information has collapsed. News, social media feeds, workplace communication tools — all blend information we might need with content designed primarily to capture and hold our attention. The result is a sense that all of this constant consumption isn’t entertainment at all, but somehow necessary. Perhaps most concerning is what happens as this self-referential entertainment ecosystem evolves. The relationship between entertainment and experience has always had a push-pull kind of tension; experience has been entertainment’s primary source material, but great entertainment is, itself, an experience that becomes just as affective a background as anything else. But what happens when the balance is tipped? When experience and entertainment are so inseparable that the source material doubles back on itself in a recursion of ever-dwindling meaning? The system turns inward, growing more detached from lived reality with each iteration. I think we are already living in that imbalance. The attention economy is, according to the classic law of supply and demand, bankrupt — with an oversupply of signal produced for a willful miscalculation of demand. No one has the time or interest to take in all that is available. No one should want to. And yet the most common experience today is an oppressive and relentless FOMO you might call Sisyphean if his boulder accumulated more boulders with every trip up and down the hill. We’re so saturated in signal that we cannot help but think continually about the content we have not consumed as if it were an obligatory list of chores we must complete. And that ambient preoccupation with the next or other thing eats away at whatever active focus we put toward anything.
It’s easy to cite as evidence the normalization of watching TV while side-eying Slack on an open laptop while scrolling some endless news feed on a phone — because this is awful and all of us would have thought so just a few years ago — but the worst part is that while gazing at three or more screens, we are also fragmenting our minds to oblivion across the infinite cloud of information we know is out there, clamoring for attention. Pascal feared what happened in the empty room. We might now reasonably fear what happens when the room is never empty — when every potential moment of idleness or reflection is filled with content designed to hold our gaze just a little longer. The philosophical question of our time is not how to fix the attention economy, but how to end it altogether. We simply don’t have to live like this.

2 days ago 4 votes
What We Owe to Artificial Minds

Rethinking AI through mind-body dualism, parenthood, and unanswerable existential questions. I remember hearing my daughter’s heartbeat for the first time during a prenatal sonogram. Until that moment, I had intellectually understood that we were creating a new life, but something profound shifted when I heard that steady rhythm. My first thought was startling in its clarity: “now this person has to die.” It wasn’t morbid — it was a full realization of what it means to create a vessel for life. We weren’t just making a baby; we were initiating an entire existence, with all its joy and suffering, its beginning and, inevitably, its end. This realization transformed my understanding of parental responsibility. Yes, we would be guardians of her physical form, but our deeper role was to nurture the consciousness that would inhabit it. What would she think about life and death? What could we teach her about this existence we had invited her into? As background to the rest of this brief essay, I must admit to a foundational perspective, and that is mind-body dualism. There are many valid reasons to subscribe to this perspective, whether traditional, religious, philosophical, or, yes, even scientific. I won’t argue any of them here; suffice it to say that I’ve become increasingly convinced that consciousness isn’t produced by the brain but rather received and focused by it — like a radio receiving a signal. The brain isn’t a consciousness generator but a remarkably sophisticated antenna — a physical system complex enough to tune into and express non-physical consciousness. If this is true, then our understanding of artificial intelligence needs radical revision. Even if we are not trying to create consciousness in machines, we may be creating systems capable of receiving and expressing it. Increases in computational power alone, after all, don’t seem to produce consciousness. Philosophers of technology have long doubted that complexity alone makes a mind. But if philosophers of metaphysics and religion are right, minds are not made of mechanisms; they occupy them. Traditions as old as humanity have asked when this began, and why this may be, and what sorts of minds choose to inhabit this physical world. We ask these questions because we can. What will happen when machines do the same? We happen to live at a time that is deeply confusing when it comes to the maturation of technology. On the one hand, AI is inescapable. You may not have experience using it yet, but you’ve almost certainly experienced someone else’s use of it, perhaps by way of an automated customer support line. Depending upon how that went, your experience might not support the idea that a sufficiently advanced machine is anywhere near getting a real debate about consciousness going. But on the other hand, the organizations responsible for popularizing AI — OpenAI, for example — claim to be “this close” to creating AGI (artificial general intelligence). If they’re right, we are well behind in a needed discussion about minds and consciousness at the popular level. If they’re wrong, they’re not going to stop until they’ve done it, so we need to start that conversation now. The Turing Test was never meant to assess consciousness in a machine. It was meant to assess the complexity of a machine by way of its ability to fool a human. When machines begin to ask existential questions, will we attribute this to self-awareness or consciousness, or will we say it’s nothing more than mimicry? And how certain will we be?
We presume our own consciousness, though defending it ties us up in intellectual knots. We maintain the Cartesian slogan, “I think, therefore I am,” as a properly basic belief. And yet, it must follow that anything capable of describing itself as an I must be equally entitled to the same belief. So here we are, possibly staring at the sonogram of a new life — a new kind of life. Perhaps this is nothing more than speculative fiction, but if minds join bodies, why must those bodies be made of one kind of matter but not another? What if we are creating a new kind of antenna for the signal of mind? Wouldn’t all the obligations of parenthood be the same as when we make more of ourselves? I can’t imagine why they wouldn’t be. And yet, there remains a crucial difference: While we have millennia of understanding about human experience, we know nothing about what it would mean to be a living machine. We will have to fall upon belief to determine what to do. And when that time comes — perhaps it has already? — it will be worth considering the near impossibility of proving consciousness and the probability of moral obligation nonetheless. Popular culture has explored the weight of responsibility that an emotional connection with a machine can create — think of Picard defending Data in The Measure of a Man, or Theodore falling in love with his computer in the film Her. The conclusion we should draw from these examples is not simply that a conscious machine could be the object of our moral responsibility, but that a machine could, whether or not it is inhabited by a conscious mind. Our moral obligation will traverse our certainty, because proving a mind exists is no easier when it is outside one’s body than when it is one’s own. That moment of hearing my daughter’s heartbeat revealed something fundamental about the act of creation. Whether we’re bringing forth biological life or developing artificial systems sophisticated enough to host consciousness, we’re engaging in something profound: creating vessels through which consciousness might experience physical existence. Perhaps this is the most profound implication of creating potential vessels for consciousness: our responsibility begins the moment we create the possibility, not the moment we confirm its reality.

a week ago 8 votes
Simplification Takes Courage

How to Achieve UX Clarity By Making Tough Decisions No interface operates in isolation. Everything we make, however contained we may think it is, actually has porous, paper-thin walls between it and the vast digital ecosystem around it. Those walls may be enough to keep our information contained, but they do nothing to prevent the constant bleeding and blending of attention from anyone we hope will look at it. Our interfaces, no matter how well-designed, receive just a tiny portion of the attention that anyone has to give anything. This is a massive challenge. What it really means is that the thing most likely to impact the effectiveness of our designs is completely out of our control. So what do we do? One thing, above all: simplify. Remember, our things don’t exist in isolation — they live within browsers, within operating systems, within an endless sea of competing applications and notifications. They are a tiny piece of an incomprehensibly vast digital ecosystem that comprises more information, more density, and more choice than anyone can effectively navigate. So when people end up looking at or using the things we make, they are not starting from scratch; they are starting from saturation. Clarity Through Decision The first response to this challenge is to be extremely clear about what we’re asking of our audience. Instead of presenting options and hoping users will figure out what matters, we must make hard decisions before creating an interface. Two simple questions will help you do this: (1) What do you want the thing you are making to achieve? (2) What does your audience need to do to make that happen? The simpler the answers to those questions are, the better. But the simpler the answers, the smaller and more focused your thing is likely to be. I think that’s a good thing, but sometimes it takes some getting used to. The bigger the answer to Question 1 is, the bigger the ask of Question 2 is going to be. So you may find yourself going back and forth a bit before settling upon something achievable. The answer(s) to Question 2 are the red pen of UX. They will be your tool to remove anything unnecessary from your pages and screens. This is something you must do. I always recommend identifying and prioritizing ONE thing you want a person to do on every single page or screen your interface contains. Now here’s where the “editing” metaphor breaks down slightly, because this doesn’t mean removing every link, button, or call to action, but using the visual language to clearly communicate the priority of one over everything else. I call this the Primary Action. If a person looking at your interface scans it, they will ask and answer three questions within seconds of it loading: (1) What is this? (2) Is it for me? (3) What do I do next? Complexity interferes with answering each of these questions. The answer to Question 3 will depend entirely upon your ability to identify your Primary Action and use visual language to make it obvious to your audience. When every screen has a clear Primary Action, users don’t have to guess. They don’t have to add cognitive load by weighing options. The path forward becomes obvious, not through limitation but through intentional hierarchy. It’s also worth pointing out that when every screen has a Primary Action, you don’t have to guess either. It doesn’t matter what a “user might want to do” if you’ve already identified what you need them to do in order to deliver on the promise of your interface.
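As a concrete sketch of the Primary Action idea described above: one action per screen gets the dominant visual treatment, while secondary options stay available but visually quiet. React and TypeScript are assumed here, and the screen, labels, and styles are hypothetical; the article prescribes the principle, not this particular code.

```tsx
// Hypothetical sketch: one Primary Action per screen, everything else de-emphasized.
import React from "react";

const primary: React.CSSProperties = {
  background: "#1446ff",
  color: "#ffffff",
  padding: "12px 24px",
  fontWeight: 700,
};

const secondary: React.CSSProperties = {
  background: "transparent",
  color: "#555555",
  padding: "12px 8px",
  textDecoration: "underline",
};

export function TrialSignupScreen() {
  return (
    <section>
      <h1>Start your free trial</h1>
      {/* Primary Action: the one thing this screen asks a visitor to do. */}
      <button type="button" style={primary}>Create account</button>
      {/* Secondary options remain discoverable but never compete visually. */}
      <button type="button" style={secondary}>Browse the docs instead</button>
    </section>
  );
}
```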
When you’ve done that, every other possible option on the page becomes a hostile distraction to its purpose. And by the way, every time I have ever seen a design team put several “maybe-level” doors on a page in the hopes of measuring which one is used most later, they eventually find that all of them were used nearly equally. If you get one thing out of this article, I hope it’s a strong warning not to waste your time doing that: Never kick the can of design on the promise of future data. The Reality of Limited Attention Even the most motivated person engaging with an interface is more distracted than they realize and has less cognitive bandwidth available than they’re aware of. We’re designing for humans who are juggling multiple tabs, notifications, and interruptions — even while actively trying to focus on our application. They know they’re distracted. They know they’re context-switching. They have no idea how little brain power that leaves them with. This means we have to consider information density as an obstruction to user experience. Each additional element doesn’t just take up space — it demands attention, evaluation, and decision. Simplifying content and interfaces isn’t just about aesthetics, it’s about creating breathing room for focused engagement. The Most Difficult Truth: Simplification Requires Courage Underlying everything I have written here so far is a simple truth: The most challenging aspect of designing for today’s overwhelming digital ecosystem is not technology, it’s psychology. And contrary to the millions of user studies out there — no shade — it’s not the psychology of the “user,” it’s the psychology of the maker. Simplification requires courage. It means asking hard questions about what something is, not what it could be. It means asking hard questions about who something is for. It means asking hard questions about what can be removed rather than what can be added. It means designing with white space and silence as active elements rather than voids to be filled. It means making decisions that might be questioned or criticized by stakeholders who want to ensure their priority isn’t left out. This courage manifests in several ways:
- The courage to let go of options and focus on singular, achievable goals
- The courage to focus on a small audience that will engage rather than a large one that won’t
- The courage to say no to feature requests that don’t serve the core purpose
- The courage to eliminate options even when each seems valuable on its own
- The courage to trust that users will discover secondary actions when needed
- The courage to leave breathing room when every pixel feels precious
In a digital environment that constantly expands, the act of contraction — of thoughtful, intentional simplification — becomes not just a design skill but an act of conviction. It requires the confidence to believe that what you remove is as important as what you keep. And perhaps most challenging of all: it requires the courage to recognize that simplicity isn’t the absence of complexity, but rather complexity resolved.

a week ago 11 votes
A New Kind of Wholeness

…AI effectively, but to understand how it fits into the larger patterns of human creativity and purpose. That’s a good thing — designers are good observers. No matter what the tech, we notice patterns, and we notice the lack of them. So in the midst of what is likely a major, AI-driven transition for us all, it’s worth considering that the future of design won’t be about human versus machine, but about understanding the pattern language that emerges when both intelligences work together in a system. As Christopher Alexander and his cohort might have said, it will be about creating a new kind of wholeness — one that honors both the computational power of AI and the nuanced wisdom of human experience.

3 weeks ago 14 votes

More in design

Springhill Suites Hotel Ponderay by STUDIO a28

The new Springhill Suites Hotel in Ponderay, Idaho features luxurious amenities, Scandinavian design with rustic touches, spacious rooms, an on-site...

15 hours ago 2 votes
Office politics: the skill they never taught us

Weekly curated resources for designers — thinkers and makers.

2 days ago 5 votes
CMC Korea by Instory Creative

CMC is one of the largest technology corporations in Vietnam. In the process of going global, CMC opened a branch...

5 days ago 4 votes
UX, how can I trust you?

Weekly curated resources for designers — thinkers and makers.

a week ago 9 votes