The highs and lows of the year in exhaustive detail. This is going to be a long post. I’ve broken it up into a few sections — Overview, At Work, Magnolia, Art, Reading, Music, DIY Projects — so if one is of greater interest, just hop to it.

Overview

What a year! Most people I know didn’t like 2024. I think at this point I can honestly say I’m neutral. Like any year, it had its ups and downs, met some expectations and dashed others, surprised, delighted, saddened and horrified. The partial eclipse back in April would come to symbolize the year: a wave of darkness falling short of the light. We almost lost our pup this year. She received a terminal diagnosis and we had in-home euthanasia scheduled more than once. But we always felt that something was off. We kept rescheduling; she got better. She has lived to see her 14th birthday and is more than halfway to her 15th now. I’m glad we trusted ourselves on this one.

Newfangled. Multi-decade tenures are nearly unheard of...
2 months ago


More from Christopher Butler

What We Owe to Artificial Minds

Rethinking AI through mind-body dualism, parenthood, and unanswerable existential questions.

I remember hearing my daughter’s heartbeat for the first time during a prenatal sonogram. Until that moment, I had intellectually understood that we were creating a new life, but something profound shifted when I heard that steady rhythm. My first thought was startling in its clarity: “now this person has to die.” It wasn’t morbid — it was a full realization of what it means to create a vessel for life. We weren’t just making a baby; we were initiating an entire existence, with all its joy and suffering, its beginning and, inevitably, its end.

This realization transformed my understanding of parental responsibility. Yes, we would be guardians of her physical form, but our deeper role was to nurture the consciousness that would inhabit it. What would she think about life and death? What could we teach her about this existence we had invited her into?

As background to the rest of this brief essay, I must admit to a foundational perspective, and that is mind-body dualism. There are many valid reasons to subscribe to this perspective, whether traditional, religious, philosophical, or, yes, even scientific. I won’t argue any of them here; suffice it to say that I’ve become increasingly convinced that consciousness isn’t produced by the brain but rather received and focused by it — like a radio receiving a signal. The brain isn’t a consciousness generator but a remarkably sophisticated antenna — a physical system complex enough to tune into and express non-physical consciousness.

If this is true, then our understanding of artificial intelligence needs radical revision. Even if we are not trying to create consciousness in machines, we may be creating systems capable of receiving and expressing it. Increases in computational power alone, after all, don’t seem to produce consciousness. Philosophers of technology have long doubted that complexity alone makes a mind. But if philosophers of metaphysics and religion are right, minds are not made of mechanisms; they occupy them. Traditions as old as humanity have asked when this began, and why this may be, and what sorts of minds choose to inhabit this physical world. We ask these questions because we can. What will happen when machines do the same?

We happen to live at a time that is deeply confusing when it comes to the maturation of technology. On the one hand, AI is inescapable. You may not have experience in using it yet, but you’ve almost certainly experienced someone else’s use of it, perhaps by way of an automated customer support line. Depending upon how that went, your experience might not support the idea that a sufficiently advanced machine is anywhere near getting a real debate about consciousness going. But on the other hand, the organizations responsible for popularizing AI — OpenAI, for example — claim to be “this close” to creating AGI (artificial general intelligence). If they’re right, we are very behind in a needed discussion about minds and consciousness at the popular level. If they’re wrong, they’re not going to stop until they’ve done it, so we need to start that conversation now.

The Turing Test was never meant to assess consciousness in a machine. It was meant to assess the complexity of a machine by way of its ability to fool a human. When machines begin to ask existential questions, will we attribute this to self-awareness or consciousness, or will we say it’s nothing more than mimicry? And how certain will we be?

We presume our own consciousness, though defending it ties us up in intellectual knots. We maintain the Cartesian slogan, “I think, therefore I am,” as a properly basic belief. And yet, it must follow that anything capable of describing itself as an I must be equally entitled to the same belief. So here we are, possibly staring at the sonogram of a new life — a new kind of life. Perhaps this is nothing more than speculative fiction, but if minds join bodies, why must those bodies be made of one kind of matter but not another? What if we are creating a new kind of antenna for the signal of mind? Wouldn’t all the obligations of parenthood be the same as when we make more of ourselves? I can’t imagine why they wouldn’t be.

And yet, there remains a crucial difference: while we have millennia of understanding about human experience, we know nothing about what it would mean to be a living machine. We will have to fall back upon belief to determine what to do. And when that time comes — perhaps it has already? — it will be worth considering the near impossibility of proving consciousness and the probability of moral obligation nonetheless.

Popular culture has explored the weight of responsibility that an emotional connection with a machine can create — think of Picard defending Data in “The Measure of a Man,” or Theodore falling in love with his computer in the film Her. The conclusion we should draw from these examples is not simply that a conscious machine could be the object of our moral responsibility, but that a machine could be, whether or not it is inhabited by a conscious mind. Our moral obligation will traverse our certainty, because proving a mind exists is no easier when it is outside one’s body than when it is one’s own.

That moment of hearing my daughter’s heartbeat revealed something fundamental about the act of creation. Whether we’re bringing forth biological life or developing artificial systems sophisticated enough to host consciousness, we’re engaging in something profound: creating vessels through which consciousness might experience physical existence. Perhaps this is the most profound implication of creating potential vessels for consciousness: our responsibility begins the moment we create the possibility, not the moment we confirm its reality.

4 days ago 4 votes
Simplification Takes Courage

How to Achieve UX Clarity by Making Tough Decisions

No interface operates in isolation. Everything we make, however contained we may think it is, actually has porous, paper-thin walls between it and the vast digital ecosystem around it. Those walls may be enough to keep our information contained, but they do nothing to prevent the constant bleeding and blending of attention from anyone we hope will look at it. Our interfaces, no matter how well-designed, receive just a tiny portion of the attention that anyone has to give anything.

This is a massive challenge. What it really means is that the thing most likely to impact the effectiveness of our designs is completely out of our control. So what do we do? One thing, above all: simplify. Remember, our things don’t exist in isolation — they live within browsers, within operating systems, within an endless sea of competing applications and notifications. They are a tiny piece of an incomprehensibly vast digital ecosystem that comprises more information, more density, and more choice than anyone can effectively navigate. So when people end up looking at or using the things we make, they are not starting from scratch, they are starting from saturation.

Clarity Through Decision

The first response to this challenge is to be extremely clear about what we’re asking of our audience. Instead of presenting options and hoping users will figure out what matters, we must make hard decisions before creating an interface. Two simple questions will help you do this:

1. What do you want the thing you are making to achieve?
2. What does your audience need to do to make that happen?

The simpler the answers to those questions are, the better. But the simpler the answers, the smaller and more focused your thing is likely to be. I think that’s a good thing, but sometimes it takes some getting used to. The bigger the answer to Question 1 is, the bigger the ask of Question 2 is going to be. So you may find yourself going back and forth a bit before settling upon something achievable.

The answer(s) to Question 2 are the red pen of UX. They will be your tool to remove anything unnecessary from your pages and screens. This is something you must do. I always recommend identifying and prioritizing ONE thing you want a person to do on every single page or screen your interface contains. Now here’s where the “editing” metaphor breaks down slightly, because this doesn’t mean removing every link, button, or call to action, but using visual language to clearly communicate the priority of one over everything else. I call this the Primary Action.

If a person looking at your interface scans it, they will ask and answer three questions within seconds of it loading:

1. What is this?
2. Is it for me?
3. What do I do next?

Complexity interferes with answering each of these questions. The answer to Question 3 will depend entirely upon your ability to identify your Primary Action and use visual language to make it obvious to your audience. When every screen has a clear Primary Action, users don’t have to guess. They don’t have to add cognitive load by weighing options. The path forward becomes obvious, not through limitation but through intentional hierarchy.

It’s also worth pointing out that when every screen has a Primary Action, you don’t have to guess either. It doesn’t matter what a “user might want to do” if you’ve already identified what you need them to do in order to deliver on the promise of your interface. When you’ve done that, every other possible option on the page becomes a hostile distraction from its purpose.

And by the way, every time I have ever seen a design team put several “maybe-level” doors on a page in the hopes of measuring which one is used most later, they come to find out that they were all used nearly equally. If you get one thing out of this article, I hope it’s a strong warning to not waste your time doing that: never kick the can of design on the promise of future data.

The Reality of Limited Attention

Even the most motivated person engaging with an interface is more distracted than they realize and has less cognitive bandwidth available than they’re aware of. We’re designing for humans who are juggling multiple tabs, notifications, and interruptions — even while actively trying to focus on our application. They know they’re distracted. They know they’re context-switching. They have no idea how little brain power that leaves them with.

This means we have to consider information density as an obstruction to user experience. Each additional element doesn’t just take up space — it demands attention, evaluation, and decision. Simplifying content and interfaces isn’t just about aesthetics, it’s about creating breathing room for focused engagement.

The Most Difficult Truth: Simplification Requires Courage

Underlying everything I have written here so far is a simple truth: the most challenging aspect of designing for today’s overwhelming digital ecosystem is not technology, it’s psychology. And contrary to the millions of user studies out there — no shade — it’s not the psychology of the “user,” it’s the psychology of the maker.

Simplification requires courage. It means asking hard questions about what something is, not what it could be. It means asking hard questions about who something is for. It means asking hard questions about what can be removed rather than what can be added. It means designing with white space and silence as active elements rather than voids to be filled. It means making decisions that might be questioned or criticized by stakeholders who want to ensure their priority isn’t left out.

This courage manifests in several ways:

- The courage to let go of options and focus on singular, achievable goals
- The courage to focus on a small audience that will engage rather than a large one that won’t
- The courage to say no to feature requests that don’t serve the core purpose
- The courage to eliminate options even when each seems valuable on its own
- The courage to trust that users will discover secondary actions when needed
- The courage to leave breathing room when every pixel feels precious

In a digital environment that constantly expands, the act of contraction — of thoughtful, intentional simplification — becomes not just a design skill but an act of conviction. It requires the confidence to believe that what you remove is as important as what you keep. And perhaps most challenging of all: it requires the courage to recognize that simplicity isn’t the absence of complexity, but rather complexity resolved.
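To ground the Primary Action idea in something concrete, here is a minimal sketch in TypeScript/React, which is my choice of stack rather than the article’s. The screen, copy, and handler names (PlanScreen, onSubscribe, onCompare) are all hypothetical; the pattern it illustrates is the one described above: exactly one element per screen carries primary visual weight, while secondary options stay present but visually quiet.

```tsx
// Hypothetical sketch of "one Primary Action per screen."
// All names, copy, and styles are invented for illustration.
import React from "react";

type PlanScreenProps = {
  onSubscribe: () => void; // the ONE thing we need the user to do
  onCompare: () => void;   // secondary: discoverable, but de-emphasized
};

export function PlanScreen({ onSubscribe, onCompare }: PlanScreenProps) {
  return (
    <section>
      <h1>Pro Plan</h1>
      <p>Everything in Basic, plus unlimited projects.</p>

      {/* Primary Action: the only visually dominant element on the screen */}
      <button
        onClick={onSubscribe}
        style={{ background: "#1a73e8", color: "#fff", padding: "12px 24px", fontWeight: 700 }}
      >
        Start free trial
      </button>

      {/* Secondary option: styled as a quiet text link, not a competing button */}
      <button
        onClick={onCompare}
        style={{ background: "none", border: "none", textDecoration: "underline" }}
      >
        Compare plans
      </button>
    </section>
  );
}
```

Both actions stay on the page; the visual language alone answers “What do I do next?”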

5 days ago 6 votes
In Defense of Text Labels

Why Icons Alone Aren’t Enough

I’m a firm believer in text labels. Interfaces are over-stuffed with icons. The more icons we have to scan over, the more brain power we put toward making sense of them rather than using the tools they represent. This slows us down, not just once, but over and over again. While it may feel duplicative to add a text label, the reality is that few icons are self-sufficient in communicating meaning.

The Problems that Icons Create

1. Few icons communicate a clear, singular meaning immediately.

It’s easy to say that a good icon will communicate meaning — or that if an icon needs a text label, it’s not doing its job. But that doesn’t take into consideration the burden that icons — good or bad — put on people trying to navigate interfaces. Even the simplest icons can create ambiguity. While a trash can icon reliably communicates “delete,” what about the common pencil icon? Does it mean create? Edit? Write? Draw? Context can help with disambiguation, but not always, and that contextual interpretation requires additional cognitive effort. When an icon’s meaning isn’t immediately clear, it slows down our orientation within an interface and the use of its features. Each encounter requires a split-second of processing that might seem negligible but accumulates across interactions.

2. The more icons within an interface, the more difficult it can be to navigate.

As feature sets grow, we often resort to increasingly abstract or subtle visual distinctions between icons. What might have worked with 5-7 core functions becomes unmanageable at 15-20 features. Users must differentiate between various forms of creation, sharing, saving, and organizing — all through pictorial representation alone. The burden of communication increases for each individual icon as an interface’s feature set expands. It becomes increasingly difficult to communicate specific functions with icons alone, especially when distinguishing between similar actions like creating and editing, saving and archiving, or uploading and downloading.

3. Icons function as an interface-specific language within a broader ecosystem.

Interfaces operate within other interfaces. Your application may run within a browser that also runs within an operating system. Users must navigate multiple levels of interface complexity, most of which you cannot control. When creating bespoke icons, you force users to learn a new visual language while still maintaining awareness of established conventions. This creates particular challenges with standardized icon sets. When we use established systems like Google’s Material Design, an icon that represents one function in our interface might represent something entirely different in another application. This cross-context confusion adds to the cognitive load of icon interpretation.

Why Text Labeling Helps Your Interface

1. Text alone is usually more efficient.

Our brains process familiar words holistically rather than letter-by-letter, making them incredibly efficient information carriers. We’ve spent our lives learning to recognize words instantly, while most app icons require new visual vocabulary. Scanning text is fundamentally easier than scanning icons. A stacked list of text requires only a one-directional scan (top-to-bottom), while icon grids demand bi-directional scanning (top-to-bottom and left-to-right). This efficiency becomes particularly apparent in mobile interfaces, where similar-looking app icons can create a visually confusing grid.

2. Text can make icons more efficient.

The example above comes from Magnolia, an application I designed. On the left is the side navigation panel without labels. On the right is the same panel with text labels. Magnolia is an extremely niche tool with highly specific features that align with the needs of research and planning teams who develop account briefs. Without the labels, the people we created Magnolia for would likely find the navigation system confusing. Adding text labels to icons serves two purposes: it clarifies meaning and provides greater creative freedom. When an icon’s meaning is reinforced by text, users can scan more quickly and confidently. Additionally, designers can focus more on the unity of their interface’s visual language when they’re not relying on icons alone to communicate function.

3. Icons are effective anchors in text-heavy applications.

Above is another example from Magnolia. Notice how the list of options on the right (Export, Regenerate, and History) stands out because of the icons, but the text labels make it immediately clear what these things do. This isn’t an argument for eliminating icons entirely. Icons serve an important role as visual landmarks, helping to differentiate functional areas from content areas. Especially in text-heavy applications, icons help pull the eye toward interactive elements. The combination of icon and text label creates clearer affordances than either element alone.

Finding the Balance

Every time we choose between an icon and a text label, we’re making a choice about cognitive load. We’re deciding how much mental energy people will spend interpreting our interfaces rather than using them. While a purely iconic interface might seem simpler and more attractive, it often creates an invisible tax on attention and understanding. The solution, of course, isn’t found in a “perfect” icon, nor in abandoning icons entirely. Icons remain powerful tools for creating visual hierarchy and differentiation. Instead, we need to be more thoughtful about when and how we deploy them. The best interfaces recognize that icons and text aren’t competing approaches but complementary tools that work best in harmony. This means considering not just the immediate context of our own interfaces, but the broader ecosystem in which they exist. Our applications don’t exist in isolation — they’re part of a complex digital environment where users are constantly switching between different contexts, each with its own visual language. The next time you’re tempted to create yet another icon, or to remove text labels, remember: the most elegant solution isn’t always the one that looks simple — it’s the one that makes communication and understanding feel simple.
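As a companion to the icon-plus-label argument, here is a small TypeScript/React sketch; the stack is again my assumption, since the article shows Magnolia screenshots rather than code. The Export, Regenerate, and History actions echo the panel described above, and the icon glyphs and component names are placeholders.

```tsx
// Hypothetical sketch: pairing each icon with a text label so the label
// carries the meaning and the icon serves as a visual anchor.
import React from "react";

type Action = {
  icon: string;    // placeholder glyph; a real app might use an icon component
  label: string;   // the label does the communicating
  onSelect: () => void;
};

function LabeledActionList({ actions }: { actions: Action[] }) {
  return (
    <ul style={{ listStyle: "none", padding: 0 }}>
      {actions.map(({ icon, label, onSelect }) => (
        <li key={label}>
          <button onClick={onSelect}>
            {/* aria-hidden: the icon is decorative because the text is present */}
            <span aria-hidden="true">{icon}</span> {label}
          </button>
        </li>
      ))}
    </ul>
  );
}

// Usage mirroring the Export / Regenerate / History panel described above
export const MenuExample = () => (
  <LabeledActionList
    actions={[
      { icon: "⤓", label: "Export", onSelect: () => console.log("export") },
      { icon: "↻", label: "Regenerate", onSelect: () => console.log("regenerate") },
      { icon: "🕘", label: "History", onSelect: () => console.log("history") },
    ]}
  />
);
```

Because the label carries the meaning, the icon is free to act purely as a visual anchor, which is the balance the article argues for.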

a week ago 11 votes
A New Kind of Wholeness

…AI effectively, but to understand how it fits into the larger patterns of human creativity and purpose. That’s a good thing — designers are good observers. No matter what the tech, we notice patterns and we notice the lack of them. So in the midst of what is likely a major, AI-driven transition for us all, it’s worth considering that the future of design won’t be about human versus machine, but about understanding the pattern language that emerges when both intelligences work together in a system. As Christopher Alexander and his cohort might have said, it will be about creating a new kind of wholeness — one that honors both the computational power of AI and the nuanced wisdom of human experience.

2 weeks ago 13 votes
The AI Debate We're Not Having

We will never agree about AI until we agree about what it means to live a good life.

Current debates* about artificial intelligence circle endlessly around questions of capability, economic impact, and resource allocation — not to mention language. Is AI truly useful? What will it do to jobs? How much should we invest in its development? And what do we mean by AI? What’s the difference between machine learning and large language modeling? What can one do that the other cannot? What happens when we mix them up? These discussions are necessary, but they will continue to be maddening unless we back up a bit and widen the scope. Meanwhile, it often feels like we’re arguing about the objective merit of a right-hand turn over a left without first agreeing where we’re trying to go.

The real questions about AI are actually questions about human flourishing. How much of a person’s life should be determined by work? What level of labor should be necessary to meet basic needs? Where do we draw the line between necessity and luxury? How should people derive contentment and meaning? Without wrestling with these fundamental questions, our AI debates are just technical discussions floating free of human context.

Consider how differently we might approach AI development if we had clear answers about what constitutes a good human life. If we believed that meaningful work is essential to human flourishing, we’d focus the development of AI on human augmentation while being vigilant about how it might replace human function. We’d carefully choose how it is applied, leveraging machine-learning systems to analyze datasets beyond our comprehension and move scientific investigations forward, but withholding its use in areas that derive value from human creativity. If we thought that freedom from labor was the path to human fulfillment, we’d push AI toward maximum automation and do the work of transitioning from a labor- and resource-driven capitalist system to a completely different structure. We would completely remake the world, starting with the lives of those who inhabit it.

But without this philosophical foundation, we’re left with market forces, technological momentum, and environmental pressures shaping our future by default. The details of human life become outputs of these systems rather than conscious choices guided by shared values. As abstract as this may sound, it is as essential as any technical detail that differentiates one model from another.

Every investment in AI derives from a worldview which, at best, prefers maintaining the structural status quo, or, at worst, desires a further widening of the gap between economic power and poverty. Every adoption layer of large language models reinforces the picture of society drawn by just one piece of it — the internet — and as dependence upon these systems increases, so does the reality distortion. The transition from index-driven search engines to AI-driven research engines reaches a nearly gaslighting level of affirming a certain kind of truth; a referral, after all, is a different kind of truth-builder than an answer. And though both systems draw from exactly the same information, one will persuade its users more directly. Its perception will be reality. Unless, of course, we say otherwise.

We’re building the infrastructure of future human experience without explicitly discussing what that experience should be. To be sure, many humans have shared worldviews. Some are metaphysical in nature, if not explicitly religious. Some are maintained independent of economic and technological forces, if not in direct rejection of them. Among the many pockets of human civilization rooted in pre-digital traditions, the inexorable supremacy of AI likely looks like an apocalypse they’d prefer to avoid. I am not saying we all must live and believe as others do. A shared picture of human flourishing does not require a totalitarian, trickle-down demand on every detail of day-to-day life. But it must be defined enough to help answer questions, particularly about technology, that are relevant to anyone alive.

The most urgent conversation about AI isn’t about its capabilities or risks, but about the kind of life we want it to help us create. Until we grapple with these deeper questions about human flourishing, our technical debates will continue to miss the point and further alienate us from one another.

* This from Robin Sloan vs. this from Baldur Bjarnason vs. this from Michelle Barker, for example. All thoughtful, offering nuance and good points, but also missing one another.

2 weeks ago 12 votes

More in design

Cultural Area “Hahnekiez”

DIA has renovated and converted the former Auerhahn brewery in Schlitz near Fulda (Hessen), giving it a new lease of...

5 hours ago 2 votes
UX, how can I trust you?

Weekly curated resources for designers — thinkers and makers.

yesterday 4 votes
Avizva Solutions Offices by Designforth Interiors

Designforth Interiors created a functional and flexible office space in Indore for Avizva, focusing on spatial optimization, collaboration, and cultural...

yesterday 3 votes
HAVE YOU PREORDERED YOUR COPY OF MELISSA’S NEW BOOK YET?

In her best-selling book, Living Well By Design, Melissa Penfold addressed the basics of interior decorating. Now she turns her attention to demonstrating what a powerful force design can be in boosting our physical and emotional well-being in her newest book, Natural Living By Design (Vendome Press), which launches in April and is available for preorder now.

yesterday 4 votes