More from Christopher Butler
Growing Up at the Dawn of Cyberspace

For those of us born around 1980, William Gibson’s Neuromancer might be the most prophetic novel we never read as teenagers. Published in 1984, it predicted the digital world we would inherit: a reality where human consciousness extends into cyberspace, where corporations control the digital commons, and where being “jacked in” to a global information network is the default state of existence.

But it was The Matrix, arriving in 1999 when I was nineteen, that captured something even more fundamental about our generation’s experience. Beyond its surface narrative of machines and simulated reality, beyond its Hot Topic aesthetic, the film tapped into a profound truth about coming of age in the digital era: the experience of ontological shock.

Every generation experiences the disorientation of discovering the world isn’t what they thought it was. But for the last X’ers, this natural coming-of-age shock coincided with a collective technological awakening. Just as we were questioning the nature of reality and our place in it as young adults, the stable physical world of our childhood was being transformed by digital technology. The institutions, social structures, and ways of being that seemed permanent turned out to be as mutable as computer code.

Neo’s journey in The Matrix — discovering his reality is a simulation and learning to see “the code” behind it — paralleled our own experience of watching the physical world become increasingly overlaid and mediated by digital systems. The film’s themes of paranoia and revelation resonated because we were living through our own red pill experience, watching as more and more of human experience moved into the digital realm that Gibson had imagined fifteen years before.

The timing was uncanny. The Matrix arrived at the onset of a perfect storm of millennial anxiety: Y2K fears about computers failing catastrophically, a disputed presidential election that would be decided by the Supreme Court, and then the shocking events of 9/11. For those of us just entering adulthood in the United States, these concurrent disruptions to technological, political, and social stability congealed into a generational dysphoria. The film’s paranoid questioning of reality felt less like science fiction and more like a documentary of our collective psychological state.

This double shock — personal and technological — has shaped how I, and I suspect many of us, think about and design technology today. When you’ve experienced reality becoming suddenly permeable, you come to expect disruption, glitches, and the shock of others. You develop empathy for anyone confronting new technological paradigms. You understand the importance of transparency, of helping people see the systems they’re operating within rather than hiding them.

Perhaps this is why our generation often approaches technology with a mix of fluency and skepticism. We’re comfortable in digital spaces, but we remember what came before. We know firsthand how quickly reality can transform, how easily new layers of mediation can become invisible, how important it is to maintain awareness of the code behind our increasingly digital existence.

The paranoia of The Matrix wasn’t just science fiction — it was a preview of what it means to live in a world where the boundaries between physical and digital reality grow increasingly blurry. For those of us who came of age alongside the internet, that ontological shock never fully faded. Maybe it shouldn’t — I hold on to mine as an asset to my work and thinking.
When AI meets the unconscious…

I have had dreams I will never forget. Long, vivid experiences with plot twists and turns that confound the notion that dreaming is simply the reorganization of day residue. I have discovered and created new things, written essays, stories, and songs. And while I can recall much of what these dreams contain, the depth and detail of these experiences slip away over time. But what if they didn’t?

Sometimes I wish I could go back into these dreams. Now, as AI advances into increasingly intimate territories of human experience, that wish doesn’t seem quite so impossible. And I suspect it’s not very far off.

Researchers have already developed systems that can translate brain activity into words with surprising accuracy. AI models have already been trained to reconstruct visual experiences from brain activity. You could say the machine is already in our heads. We’re approaching a future where dreams might be recorded and replayed like movies, where the mysterious theater of our unconscious mind could become accessible to the waking world.

The designer in me is fascinated by this possibility. After all, what is a dream if not the ultimate personal interface — a world generated entirely by and for a single user? But as someone who has spent decades thinking about the relationship between humans and their machines, I’m also deeply uncertain about the implications of externalizing something so fundamentally internal.

I think about the ways technology has already changed our relationship with memory. My phone holds thousands of photos and videos of my children growing up — far more than my parents could have ever taken of me. Each moment is captured, tagged, searchable. I no longer wonder whether this abundance of external memory has changed how I form internal ones — I know that it has. When everything is recorded, we experience and remember moments very differently.

Dreams could head down a similar path. Imagine a world where every dream is captured, analyzed, archived. Where AI algorithms search for patterns in our unconscious minds, offering insights about our deepest fears and desires. Where therapy sessions include replaying and examining dreams in high definition. Where artists can extract imagery directly from their dream-states into their work.

The potential benefits are obvious. For people suffering from PTSD or recurring nightmares, being able to externalize and examine their dreams could be transformative. Dream recording could open new frontiers in creativity, psychology, and self-understanding. It could help us better understand consciousness itself.

But I keep thinking about what we might lose. Dreams have always been a last refuge of privacy in an increasingly surveilled world. They’re one of the few experiences that remain truly personal, truly unmediated. When I dream, the world I experience exists nowhere else — not in the cloud, not on a server, not in anyone else’s consciousness. It’s mine alone.

What happens when that changes? When dreams become data? When the unconscious mind becomes another surface for algorithms to analyze, another source of patterns to detect, another stream of content to monetize, perhaps even the property of private corporations and insurance companies?
I can already imagine the premium subscription services: “Upload your dreams to our secure cloud storage!” “Analyze your dream patterns with our AI dream interpreter!” “Share your dreams with friends!” “Pay for privacy.”

The marriage of AI and dreaming represents a fascinating frontier in human-computer interaction. But it also forces us to confront difficult questions about the boundaries between technology and human experience. At what point does augmenting our natural capabilities become transforming them into something else entirely? What aspects of human experience should remain unmediated, unrecorded, untranslated into data?

I still wish I could return to my own dreams — how I wish I could extract from them everything I saw, heard, thought, and made within their worlds. But perhaps there’s something beautiful about the fact that I can’t — that my dreams remain untouched by algorithms and interfaces, un-mined even by me. Perhaps some experiences should remain as fleeting, ineffable, and personal as our dreams mostly are, even in an age where technology promises to make everything accessible, everything shareable, everything known.

As we move toward a future where even our dreams might be recorded and analyzed by machines, we’ll need to think carefully about what we gain and what we lose when we externalize our most internal experiences. The challenge won’t be technical — it will be philosophical. Not “Can we do this?” but “Should we?” Not “How do we record dreams?” but “What does it mean for a dream to be recorded?”

These are the questions that keep me up at night. Though perhaps that’s fitting — being awake with questions about dreams.
SEO, Clickless Search, and the AInternet

Imagine designing and building a home while its residents continue to live in it. What you create is highly customized to them, because you observe them living in real time and make what they need. One day, while you’re still working, these residents move out and new ones move in. Now imagine you didn’t realize that for, say, a year or two afterward. This is what it has been like to design things for the internet. People lived here once, then AI moved in. But we’re still building a house for people. I think we might be building the wrong thing.

I’ve been designing interfaces for two decades now, and when I look at the modern web, I see a landscape increasingly shaped not by human needs but by machine logic — a vast network of APIs, algorithms, and automated systems talking to each other in languages we never hear. Yes, “we” wrote those languages, but let’s be honest: “we” isn’t most of us.

Last week, my daughter asked me to help her find information about Greek mythology. She’s been reading books about it and had some specific questions that the books couldn’t answer. As we typed in her questions, I noticed something important: instead of clicking through to websites, we found ourselves staying on the search page as AI-generated answers appeared above the traditional results. Unbeknownst to her, we were witnessing the end of SEO as we know it.

The conventional search engine optimization wisdom is evolving accordingly. The rungs of the SEO ladder aren’t just multiplying — making it more difficult to compete for subject-matter authority via PageRank — they’re changing. Specifically, the flow of benefit from optimization back to the optimizer is about to upend the entire point of the practice.

I’ve already seen advice suggesting that because Google’s AI synthesizes content differently than a comparatively simple indexing bot, we need to begin to structure our content differently so that it will be more likely to appear in Google’s AI summaries. FAQ content, for example, could be elevated in this strategy for a business, as its structure anticipates the kinds of questions that people considering a product or service might ask (a sketch of what that kind of markup looks like appears at the end of this piece). AI, after all, is already training us to change how we search for things. Specifically, queries are aligning with more conversational semantics rather than metadata-focused keywords and phrases.

All fair enough — we can probably gain increased visibility within a search engine’s AI summaries by way of “agentic design.” But…why?

Old-school SEO had a fairly balanced value proposition: Google was really good at giving people sources for the information they need and benefitted by running advertising on websites. Websites benefitted by getting attention delivered to them by Google. In a “clickless search” scenario, though, the scale tips considerably. If Google has its way, users will increasingly stay on google.com, their questions answered by AI that synthesizes information from across the web. Yes, there will be an attribution link to your original content, but have you seen them? They are tiny. Who will click them?

Our motivation to optimize content for Google is transforming from “please send visitors our way” to “please use our content as a source” — but in this new paradigm, what’s really in it for us? Generally, I’d say… not much. And if clickless search makes human attention delivery significantly less likely, one has to wonder: will a website’s visual design even matter anymore? How many humans should we expect to actually see what we create?
For those of us happy with a very small, human audience, none of this matters much, other than that we’ll probably see our numbers continue to drop. For those whose livelihoods depend upon traffic to websites they control and have designed for humans, this matters very much.

So what’s the point of “agentic design”? I can only think of one scenario, and that is when the answer Google’s AI can provide isn’t what you know, but you yourself. Knowledge about things will go entirely to Google, on our backs. Some knowledge, the kind about how to do things a machine cannot, will remain ours. Perhaps the only content worth optimizing for AI will be that which machines cannot replicate or synthesize — unique human experiences, specialized expertise, creative works that resist automation. Everything else — facts, figures, general knowledge — will be absorbed into the AI’s vast knowledge base, built on our collective work but no longer driving visitors to our individual spaces.

This home we’ve been building is now so much bigger than the metaphor can even support. The internet has become a kind of parallel world where machines are the native inhabitants and we humans are more like tourists, guided by AI assistants that translate machine logic into human-readable experiences. Our devices are increasingly less like tools and more like interpreters, mediating our experience of a digital ecosystem that has grown too vast and complex for direct human navigation.

And this has all happened very quickly. To be disoriented is understandable. The interesting question isn’t how to optimize for AI agents, but what kinds of human experiences are worth preserving in a world where machines do most of the talking.
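For anyone who does want to experiment with the FAQ-structuring advice mentioned above, here is a minimal sketch of what that typically means in practice: schema.org’s FAQPage structured data, expressed as JSON-LD. The business questions and answers below are hypothetical placeholders, and whether AI summaries actually privilege this markup remains, as argued above, an open question.

```python
import json

# A minimal sketch of schema.org FAQPage structured data (JSON-LD).
# The questions and answers are hypothetical placeholders.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Does your studio work with early-stage companies?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Most of our engagements begin before a product launches.",
            },
        },
        {
            "@type": "Question",
            "name": "How long does a typical website project take?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Most projects run eight to twelve weeks.",
            },
        },
    ],
}

# On a real site, this JSON would be embedded in the page inside a
# <script type="application/ld+json"> tag so crawlers can parse it.
print(json.dumps(faq_page, indent=2))
```

The structure is the point: each question-and-answer pair mirrors the conversational queries people now type, which is exactly the shift in search behavior described above.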
What technology’s drive for seamlessness gets wrong

I’ve been thinking lately about friction — not the physical force, but the small resistances we encounter in daily life. The tech industry has made eliminating friction its north star, pursuing ever more seamless experiences. Every tap saved, every decision automated, every interface made invisible — all pursued as necessary and celebrated as progress. But I’m increasingly convinced we’re missing something fundamental about human nature in this relentless pursuit of effortlessness.

Consider how different it feels to discover something rather than have it given to you. Finding an unexpected book while looking for a recommended title can feel like destiny. Stumbling upon a great restaurant instead of Uber-ing to the top recommendation on Yelp creates a lasting and special memory. Or consider how the experience of reading changes when you switch from a physical book to an e-reader with infinite options always a tap away. The friction in these “older” experiences isn’t just inefficiency — it’s part of what makes them meaningful.

There’s something deeply human about wanting to earn our outcomes rather than having them bestowed upon us. When we work for something, when we overcome resistance to achieve it, we value it more. This isn’t just nostalgia talking; it’s about how we create meaning through engagement and effort.

This insight has profound implications for how we design technology. In our drive to make everything instant and effortless, we may be undermining the very experiences we’re trying to enhance. When AI can generate any image we describe, write any text we request, or answer any question immediately, something is lost in the space where effort used to be.

The challenge for designers isn’t to eliminate all friction, but to find the right balance — enough resistance to create value and meaning, but not so much as to become genuinely obstructive. This “golden ratio” of friction might be different for each experience, but the principle remains: some friction isn’t just acceptable, it’s essential.

Seamlessness isn’t always the answer. In a world increasingly mediated by technology, we might need more friction, not less — more moments of intentional resistance that remind us we’re human, more opportunities to earn our way to what we desire. After all, the most meaningful experiences in life rarely come without effort.