Back in 2012 when my first (and only) book was published, a friend reacted by exclaiming, “You wrote a book?!?” and then added, “oh yeah…you don’t have kids.” I was put off by that statement. I played it cool, but my unspoken reaction was, “Since when is having kids or not the deciding factor in one’s ability to write a book?” I was proud of my accomplishment, and his reaction seemed to communicate that anyone could do such a thing if they didn’t have other priorities. Thirteen years and two children later, I’ve had plenty of opportunities to reflect upon that moment. I’ve come to a surprising conclusion: he was kind of right.

My first child was perhaps ten minutes old before I began learning that my time would never be spent or managed the same way again. I was in the delivery room holding her while my phone vibrated in my pocket because work emails were coming in. Normally, I’d have responded right away. Not anymore. The constraints of parenthood are real and immediate, and it takes some time to get used to the pinch. But they’re also transformative in unexpected ways. These days, my measure of how I spend my time comes down to a single idea: I will not make my children orphans to my ambition. If I prioritize anything over them, I require a very good reason, one that cannot benefit me alone.

Yet this transformation runs deeper than simply having less time day to day. Entering your forties has a profound effect on your perception of your entire lifespan. Suddenly, you find that memories actually decades old are of things you experienced as an adult. The combination of parenthood and midlife can create a powerful perspective shift that makes you more intentional about what truly matters. There are times when I feel that I am able to do less than I did in the past, but what I’ve come to realize is that I am actually doing more of the things that matter to me. A more acute focus on limited time results in using that time much more intentionally. I’m more productive today than I was in 2012, but it’s not because of time; it’s because of choices.

The constraints of parenthood haven’t just changed what I choose to do with my time, but what I create as well. Having less time to waste means I apply a more critical judgment, much earlier in the process than I did before, of whether something is working or worth pursuing. In the past, if I’m dreadfully honest, I took pride in being the guy who started early and stayed late. Today, I take pride in producing the best thing I can. The less time that takes, the better.

But parenthood has also reminded me of the pleasures and benefits of creativity purely as a means of thinking aloud, learning, exploring, and play. There’s a beautiful tension in this evolution: becoming both more critically discerning and more playfully exploratory at the same time. My children have inadvertently become my teachers, reconnecting me with the foundational joy of making without judgment or expectation. This integration of play and discernment has enriched my professional work. My creative output is far more diverse than it was before. The playful exploration I engage in with my children has opened new pathways in my professional thinking, allowing me to approach design problems from fresh perspectives. I’ve found that the best creative work feels effortless to viewers when the creation process itself was enjoyable.
This enjoyment manifests for creators as what psychologists call a “flow state”: that immersive experience where time seems to vanish and work feels natural and intuitive. The more I embrace playful exploration with ideas, techniques, and tools, the more easily I can access this flow state in my professional work.

My friend’s comment, while perhaps a bit lacking in tact, touched on a reality about the economics of attention and time. The book I wrote wasn’t just the product of writing skills; it was also the product of having the temporal and mental space to create it. (I’m not sure I’ll have that again, and if I do, I’m not sure a book is what I’ll choose to use it for.) What I didn’t understand then was that parenthood wouldn’t end my creative life, but transform it into something richer, more focused, and ultimately more meaningful. The constraints haven’t diminished my creativity but refined it.
On the Ambient Entertainment Industrial Complex

“All of humanity’s problems stem from man’s inability to sit quietly in a room alone.” Pascal’s observation from the 17th century feels less like historical philosophy and more like a diagnosis of our current condition. The discomfort with idleness that Pascal identified has evolved from a human tendency into a technological ecosystem designed to ensure we never experience it.

Philosophers and thinkers throughout history worried about both the individual and societal costs of idleness. Left to our own devices — or rather, without devices — we might succumb to vice or destructive thoughts. Or worse, from society’s perspective, too many idle people might destabilize the social order. Kierkegaard specifically feared that many would become trapped in what he called the “aesthetic sphere” of existence — a life oriented around the pursuit of novel experiences and constant stimulation rather than ethical commitment and purpose. He couldn’t have imagined how prophetic this concern would become.

What’s changed isn’t human nature but the infrastructure of distraction available to us. Entertainment was once bounded — a novel read by candlelight, a play attended on Saturday evening, a television program watched when it aired. It occupied specific times and spaces. It was an event. Today, entertainment is no longer an event but a condition. It’s ambient, pervasive, constant. The bright rectangle in our pocket ensures that no moment need be empty of stimulus. Waiting in line, sitting on the train, even using the bathroom — all are opportunities for consumption rather than reflection or simply being.

More subtly, the distinction between necessary and unnecessary information has collapsed. News, social media feeds, workplace communication tools — all blend information we might need with content designed primarily to capture and hold our attention. The result is a sense that all of this constant consumption isn’t entertainment at all, but somehow necessary.

Perhaps most concerning is what happens as this self-referential entertainment ecosystem evolves. The relationship between entertainment and experience has always had a push-pull kind of tension: experience has been entertainment’s primary source material, but great entertainment is itself an experience, one that settles into the same affective background as anything else. But what happens when the balance is tipped? When experience and entertainment are so inseparable that the source material doubles back on itself in a recursion of ever-dwindling meaning? The system turns inward, growing more detached from lived reality with each iteration.

I think we are already living in that imbalance. The attention economy is, according to the classic law of supply and demand, bankrupt — with an oversupply of signal produced for a willful miscalculation of demand. No one has the time or interest to take in all that is available. No one should want to. And yet the most common experience today is an oppressive and relentless FOMO you might call Sisyphean, if only Sisyphus’s boulder accumulated more boulders with every trip up and down the hill. We’re so saturated in signal that we cannot help but think continually about the content we have not consumed, as if it were an obligatory list of chores we must complete. And that ambient preoccupation with the next or other thing eats away at whatever active focus we put toward anything.
It’s easy to cite as evidence the normalization of watching TV while side-eyeing Slack on an open laptop while scrolling some endless news feed on a phone — because this is awful, and all of us would have thought so just a few years ago — but the worst part is that while gazing at three or more screens, we are also fragmenting our minds to oblivion across the infinite cloud of information we know is out there, clamoring for attention.

Pascal feared what happened in the empty room. We might now reasonably fear what happens when the room is never empty — when every potential moment of idleness or reflection is filled with content designed to hold our gaze just a little longer. The philosophical question of our time is not how to fix the attention economy, but how to end it altogether. We simply don’t have to live like this.
Rethinking AI through mind-body dualism, parenthood, and unanswerable existential questions

I remember hearing my daughter’s heartbeat for the first time during a prenatal sonogram. Until that moment, I had intellectually understood that we were creating a new life, but something profound shifted when I heard that steady rhythm. My first thought was startling in its clarity: “now this person has to die.” It wasn’t morbid — it was a full realization of what it means to create a vessel for life. We weren’t just making a baby; we were initiating an entire existence, with all its joy and suffering, its beginning and, inevitably, its end.

This realization transformed my understanding of parental responsibility. Yes, we would be guardians of her physical form, but our deeper role was to nurture the consciousness that would inhabit it. What would she think about life and death? What could we teach her about this existence we had invited her into?

As background to the rest of this brief essay, I must admit to a foundational perspective, and that is mind-body dualism. There are many valid reasons to subscribe to this perspective, whether traditional, religious, philosophical, or, yes, even scientific. I won’t argue any of them here; suffice it to say that I’ve become increasingly convinced that consciousness isn’t produced by the brain but rather received and focused by it — like a radio receiving a signal. The brain isn’t a consciousness generator but a remarkably sophisticated antenna — a physical system complex enough to tune into and express non-physical consciousness.

If this is true, then our understanding of artificial intelligence needs radical revision. Even if we are not trying to create consciousness in machines, we may be creating systems capable of receiving and expressing it. Increases in computational power alone, after all, don’t seem to produce consciousness. Philosophers of technology have long doubted that complexity alone makes a mind. But if philosophers of metaphysics and religion are right, minds are not made of mechanisms; they occupy them. Traditions as old as humanity have asked when this began, why this may be, and what sorts of minds choose to inhabit this physical world. We ask these questions because we can. What will happen when machines do the same?

We happen to live at a time that is deeply confusing when it comes to the maturation of technology. On the one hand, AI is inescapable. You may not have experience in using it yet, but you’ve almost certainly experienced someone else’s use of it, perhaps by way of an automated customer support line. Depending upon how that went, your experience might not support the idea that a sufficiently advanced machine is anywhere near getting a real debate about consciousness going. But on the other hand, the organizations responsible for popularizing AI — OpenAI, for example — claim to be “this close” to creating AGI (artificial general intelligence). If they’re right, we are far behind in a much-needed discussion about minds and consciousness at the popular level. If they’re wrong, they’re not going to stop until they’ve done it, so we need to start that conversation now.

The Turing Test was never meant to assess consciousness in a machine. It was meant to assess the complexity of a machine by way of its ability to fool a human. When machines begin to ask existential questions, will we attribute this to self-awareness or consciousness, or will we say it’s nothing more than mimicry? And how certain will we be?
We presume our own consciousness, though defending it ties us up in intellectual knots. We maintain the Cartesian slogan, “I think, therefore I am,” as a properly basic belief. And yet, it must follow that anything capable of describing itself as an “I” must be equally entitled to the same belief.

So here we are, possibly staring at the sonogram of a new life — a new kind of life. Perhaps this is nothing more than speculative fiction, but if minds join bodies, why must those bodies be made of one kind of matter but not another? What if we are creating a new kind of antenna for the signal of mind? Wouldn’t all the obligations of parenthood be the same as when we make more of ourselves? I can’t imagine why they wouldn’t be. And yet, there remains a crucial difference: while we have millennia of understanding about human experience, we know nothing about what it would mean to be a living machine. We will have to fall back upon belief to determine what to do. And when that time comes — perhaps it already has — it will be worth considering the near impossibility of proving consciousness and the probability of moral obligation nonetheless.

Popular culture has explored the weight of responsibility that an emotional connection with a machine can create — think of Picard defending Data in The Measure of a Man, or Theodore falling in love with his computer in the film Her. The conclusion we should draw from these examples is not simply that a conscious machine could be the object of our moral responsibility, but that a machine could be, whether or not it is inhabited by a conscious mind. Our moral obligation will outrun our certainty, because proving a mind exists is no easier when it is outside one’s body than when it is one’s own.

That moment of hearing my daughter’s heartbeat revealed something fundamental about the act of creation. Whether we’re bringing forth biological life or developing artificial systems sophisticated enough to host consciousness, we’re engaging in something profound: creating vessels through which consciousness might experience physical existence. Perhaps this is the most profound implication of creating potential vessels for consciousness: our responsibility begins the moment we create the possibility, not the moment we confirm its reality.
How to Achieve UX Clarity By Making Tough Decisions

No interface operates in isolation. Everything we make, however contained we may think it is, actually has porous, paper-thin walls between it and the vast digital ecosystem around it. Those walls may be enough to keep our information contained, but they do nothing to prevent the constant bleeding and blending of attention from anyone we hope will look at it. Our interfaces, no matter how well-designed, receive just a tiny portion of the attention that anyone has to give anything. This is a massive challenge. What it really means is that the thing most likely to impact the effectiveness of our designs is completely out of our control. So what do we do? One thing, above all: simplify.

Remember, our things don’t exist in isolation — they live within browsers, within operating systems, within an endless sea of competing applications and notifications. They are a tiny piece of an incomprehensibly vast digital ecosystem that comprises more information, more density, and more choice than anyone can effectively navigate. So when people end up looking at or using the things we make, they are not starting from scratch; they are starting from saturation.

Clarity Through Decision

The first response to this challenge is to be extremely clear about what we’re asking of our audience. Instead of presenting options and hoping users will figure out what matters, we must make hard decisions before creating an interface. Two simple questions will help you do this:

1. What do you want the thing you are making to achieve?
2. What does your audience need to do to make that happen?

The simpler the answers to those questions are, the better. But the simpler the answers, the smaller and more focused your thing is likely to be. I think that’s a good thing, but sometimes it takes some getting used to. The bigger the answer to Question 1 is, the bigger the ask of Question 2 is going to be. So you may find yourself going back and forth a bit before settling upon something achievable.

The answer(s) to Question 2 are the red pen of UX. They will be your tool to remove anything unnecessary from your pages and screens. This is something you must do. I always recommend identifying and prioritizing ONE thing you want a person to do on every single page or screen your interface contains. Now here’s where the “editing” metaphor breaks down slightly, because this doesn’t mean removing every link, button, or call to action, but using visual language to clearly communicate the priority of one over everything else. I call this the Primary Action.

If a person looking at your interface scans it, they will ask and answer three questions within seconds of it loading:

1. What is this?
2. Is it for me?
3. What do I do next?

Complexity interferes with answering each of these questions. The answer to Question 3 will depend entirely upon your ability to identify your Primary Action and use visual language to make it obvious to your audience. When every screen has a clear Primary Action, users don’t have to guess. They don’t have to add cognitive load by weighing options. The path forward becomes obvious, not through limitation but through intentional hierarchy. It’s also worth pointing out that when every screen has a Primary Action, you don’t have to guess either. It doesn’t matter what a “user might want to do” if you’ve already identified what you need them to do in order to deliver on the promise of your interface.
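To make the idea concrete, here is a minimal sketch of what a Primary Action can look like in code. It assumes a React/TypeScript interface; the screen, handler names, and styles are hypothetical, invented for illustration rather than drawn from any real product. The point is the hierarchy: one visually dominant action, with everything else rendered as a quiet, secondary affordance.

```tsx
import React from "react";

type ReportScreenProps = {
  onExport: () => void;      // the ONE thing this screen exists to get done
  onViewHistory: () => void; // secondary: discoverable, but visually quiet
};

// A screen with a single Primary Action. The primary button carries the
// strongest visual weight (size, contrast, position in reading order);
// the secondary option is styled as a plain link so it cannot compete.
export function ReportScreen({ onExport, onViewHistory }: ReportScreenProps) {
  return (
    <main>
      <h1>Quarterly report</h1>

      {/* Primary Action: high contrast, large target, first in reading order */}
      <button
        onClick={onExport}
        style={{
          background: "#1d4ed8",
          color: "#fff",
          padding: "12px 24px",
          fontSize: 16,
          border: "none",
          borderRadius: 6,
        }}
      >
        Export report
      </button>

      {/* Secondary action: present, but deliberately de-emphasized */}
      <button
        onClick={onViewHistory}
        style={{
          background: "none",
          border: "none",
          display: "block",
          marginTop: 12,
          fontSize: 13,
          color: "#6b7280",
          textDecoration: "underline",
          cursor: "pointer",
        }}
      >
        View export history
      </button>
    </main>
  );
}
```

Notice that nothing is removed here: the secondary action survives, but the visual language leaves no doubt about what to do next.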
When you’ve done that, every other possible option on the page becomes a hostile distraction to its purpose. And by the way, every time I have ever seen a design team put several “maybe-level” doors on a page in the hopes of measuring which one is used most later, they come to find out that they were all used nearly equally. If you get one thing out of this article, I hope it’s a strong warning not to waste your time doing that: never kick the can of design on the promise of future data.

The Reality of Limited Attention

Even the most motivated person engaging with an interface is more distracted than they realize and has less cognitive bandwidth available than they’re aware of. We’re designing for humans who are juggling multiple tabs, notifications, and interruptions — even while actively trying to focus on our application. They know they’re distracted. They know they’re context-switching. They have no idea how little brain power that leaves them with. This means we have to consider information density as an obstruction to user experience. Each additional element doesn’t just take up space — it demands attention, evaluation, and decision. Simplifying content and interfaces isn’t just about aesthetics; it’s about creating breathing room for focused engagement.

The Most Difficult Truth: Simplification Requires Courage

Underlying everything I have written here so far is a simple truth: the most challenging aspect of designing for today’s overwhelming digital ecosystem is not technology, it’s psychology. And contrary to the millions of user studies out there — no shade — it’s not the psychology of the “user,” it’s the psychology of the maker.

Simplification requires courage. It means asking hard questions about what something is, not what it could be. It means asking hard questions about who something is for. It means asking hard questions about what can be removed rather than what can be added. It means designing with white space and silence as active elements rather than voids to be filled. It means making decisions that might be questioned or criticized by stakeholders who want to ensure their priority isn’t left out.

This courage manifests in several ways:

- The courage to let go of options and focus on singular, achievable goals
- The courage to focus on a small audience that will engage rather than a large one that won’t
- The courage to say no to feature requests that don’t serve the core purpose
- The courage to eliminate options even when each seems valuable on its own
- The courage to trust that users will discover secondary actions when needed
- The courage to leave breathing room when every pixel feels precious

In a digital environment that constantly expands, the act of contraction — of thoughtful, intentional simplification — becomes not just a design skill but an act of conviction. It requires the confidence to believe that what you remove is as important as what you keep. And perhaps most challenging of all: it requires the courage to recognize that simplicity isn’t the absence of complexity, but rather complexity resolved.
Why Icons Alone Aren’t Enough

I’m a firm believer in text labels. Interfaces are over-stuffed with icons. The more icons we have to scan over, the more brain power we put toward making sense of them rather than using the tools they represent. This slows us down, not just once, but over and over again. While it may feel duplicative to add a text label, the reality is that few icons are self-sufficient in communicating meaning.

The Problems that Icons Create

1. Few icons communicate a clear, singular meaning immediately.

It’s easy to say that a good icon will communicate meaning — or that if an icon needs a text label, it’s not doing its job. But that doesn’t take into consideration the burden that icons — good or bad — put on people trying to navigate interfaces. Even the simplest icons can create ambiguity. While a trash can icon reliably communicates “delete,” what about the common pencil icon? Does it mean create? Edit? Write? Draw? Context can help with disambiguation, but not always, and that contextual interpretation requires additional cognitive effort. When an icon’s meaning isn’t immediately clear, it slows down our orientation within an interface and the use of its features. Each encounter requires a split-second of processing that might seem negligible but accumulates across interactions.

2. The more icons within an interface, the more difficult it can be to navigate.

As feature sets grow, we often resort to increasingly abstract or subtle visual distinctions between icons. What might have worked with 5–7 core functions becomes unmanageable at 15–20 features. Users must differentiate between various forms of creation, sharing, saving, and organizing, all through pictorial representation alone. The burden of communication increases for each individual icon as an interface’s feature set expands. It becomes increasingly difficult to communicate specific functions with icons alone, especially when distinguishing between similar actions like creating and editing, saving and archiving, or uploading and downloading.

3. Icons function as an interface-specific language within a broader ecosystem.

Interfaces operate within other interfaces. Your application may run within a browser that also runs within an operating system. Users must navigate multiple levels of interface complexity, most of which you cannot control. When creating bespoke icons, you force users to learn a new visual language while still maintaining awareness of established conventions. This creates particular challenges with standardized icon sets. When we use established systems like Google’s Material Design, an icon that represents one function in our interface might represent something entirely different in another application. This cross-context confusion adds to the cognitive load of icon interpretation.

Why Text Labeling Helps Your Interface

1. Text alone is usually more efficient.

Our brains process familiar words holistically rather than letter-by-letter, making them incredibly efficient information carriers. We’ve spent our lives learning to recognize words instantly, while most app icons require new visual vocabulary. Scanning text is fundamentally easier than scanning icons. A stacked list of text requires only a one-directional scan (top-to-bottom), while icon grids demand bi-directional scanning (top-to-bottom and left-to-right). This efficiency becomes particularly apparent in mobile interfaces, where similar-looking app icons can create a visually confusing grid.

2. Text can make icons more efficient.
The example above comes from Magnolia, an application I designed. On the left is the side navigation panel without labels. On the right is the same panel with text labels. Magnolia is an extremely niche tool with highly specific features that align with the needs of research and planning teams who develop account briefs. Without the labels, the people we created Magnolia for would likely find the navigation system confusing. Adding text labels to icons serves two purposes: it clarifies meaning and provides greater creative freedom. When an icon’s meaning is reinforced by text, users can scan more quickly and confidently. Additionally, designers can focus more on the unity of their interface’s visual language when they’re not relying on icons alone to communicate function.

3. Icons are effective anchors in text-heavy applications.

Above is another example from Magnolia. Notice how the list of options on the right (Export, Regenerate, and History) stands out because of the icons, but the text labels make it immediately clear what these things do. To be clear, this isn’t an argument for eliminating icons entirely. Icons serve an important role as visual landmarks, helping to differentiate functional areas from content areas. Especially in text-heavy applications, icons help pull the eye toward interactive elements. The combination of icon and text label creates clearer affordances than either element alone.

Finding the Balance

Every time we choose between an icon and a text label, we’re making a choice about cognitive load. We’re deciding how much mental energy people will spend interpreting our interfaces rather than using them. While a purely iconic interface might seem simpler and more attractive, it often creates an invisible tax on attention and understanding. The solution, of course, isn’t found in a “perfect” icon, nor in abandoning icons entirely. Icons remain powerful tools for creating visual hierarchy and differentiation. Instead, we need to be more thoughtful about when and how we deploy them. The best interfaces recognize that icons and text aren’t competing approaches but complementary tools that work best in harmony. This means considering not just the immediate context of our own interfaces, but the broader ecosystem in which they exist. Our applications don’t exist in isolation — they’re part of a complex digital environment where users are constantly switching between different contexts, each with its own visual language.

The next time you’re tempted to create yet another icon, or to remove text labels, remember: the most elegant solution isn’t always the one that looks simple — it’s the one that makes communication and understanding feel simple.
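As a concrete illustration of the icon-plus-label pattern, here is a minimal sketch, assuming a React/TypeScript codebase. The component name, its props, and the PencilIcon in the usage comment are hypothetical, invented for illustration; the point is the pairing itself: the icon anchors the eye, and the label removes the guesswork.

```tsx
import React from "react";

type IconLabelButtonProps = {
  icon: React.ReactNode; // the visual anchor, e.g. a pencil glyph
  label: string;         // the disambiguator: "Edit", not "Create" or "Draw"
  onClick: () => void;
};

// Pairs an icon with a text label rather than choosing between them.
// The icon is hidden from screen readers because the label already
// carries the full meaning.
export function IconLabelButton({ icon, label, onClick }: IconLabelButtonProps) {
  return (
    <button
      onClick={onClick}
      style={{ display: "inline-flex", alignItems: "center", gap: 8 }}
    >
      <span aria-hidden="true">{icon}</span>
      <span>{label}</span>
    </button>
  );
}

// Hypothetical usage: the label resolves the pencil icon's ambiguity.
// <IconLabelButton icon={<PencilIcon />} label="Edit" onClick={startEditing} />
```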