More from Christopher Butler
Growing Up at the Dawn of Cyberspace

For those of us born around 1980, William Gibson’s Neuromancer might be the most prophetic novel we never read as teenagers. Published in 1984, it predicted the digital world we would inherit: a reality where human consciousness extends into cyberspace, where corporations control the digital commons, and where being “jacked in” to a global information network is the default state of existence.

But it was The Matrix, arriving in 1999 when I was nineteen, that captured something even more fundamental about our generation’s experience. Beyond its surface narrative of machines and simulated reality, beyond its Hot Topic aesthetic, the film tapped into a profound truth about coming of age in the digital era: the experience of ontological shock.

Every generation experiences the disorientation of discovering the world isn’t what they thought it was. But for the last of the X’ers, this natural coming-of-age shock coincided with a collective technological awakening. Just as we were questioning the nature of reality and our place in it as young adults, the stable physical world of our childhood was being transformed by digital technology. The institutions, social structures, and ways of being that seemed permanent turned out to be as mutable as computer code.

Neo’s journey in The Matrix — discovering his reality is a simulation and learning to see “the code” behind it — paralleled our own experience of watching the physical world become increasingly overlaid and mediated by digital systems. The film’s themes of paranoia and revelation resonated because we were living through our own red pill experience, watching as more and more of human experience moved into the digital realm Gibson had imagined fifteen years before.

The timing was uncanny. The Matrix arrived amid a perfect storm of millennial anxiety: Y2K fears about computers failing catastrophically, a disputed presidential election that would be decided by the Supreme Court, and then the shocking events of 9/11. For those of us just entering adulthood in the United States, these concurrent disruptions to technological, political, and social stability congealed into a generational dysphoria. The film’s paranoid questioning of reality felt less like science fiction and more like a documentary of our collective psychological state.

This double shock — personal and technological — has shaped how I, and I suspect many of us, think about and design technology today. When you’ve experienced reality becoming suddenly permeable, you come to expect disruption, glitches, and the shock of others. You develop empathy for anyone confronting new technological paradigms. You understand the importance of transparency, of helping people see the systems they’re operating within rather than hiding them.

Perhaps this is why our generation often approaches technology with a mix of fluency and skepticism. We’re comfortable in digital spaces, but we remember what came before. We know firsthand how quickly reality can transform, how easily new layers of mediation can become invisible, how important it is to maintain awareness of the code behind our increasingly digital existence.

The paranoia of The Matrix wasn’t just science fiction — it was a preview of what it means to live in a world where the boundaries between physical and digital reality grow increasingly blurry. For those of us who came of age alongside the internet, that ontological shock never fully faded. Maybe it shouldn’t — I hold on to mine as an asset to my work and thinking.
A product marketing consultant with over a decade of experience is leaving to pursue art, illustration, and poetry. Another designer, burned out on growing her business, is pivoting to focus on fitness instead. These aren’t just isolated anecdotes — they’re part of an emerging pattern of experienced creative professionals not just changing jobs, but leaving the field entirely. When people who’ve invested years mastering a profession decide to walk away, it’s worth asking why.

There’s a particular kind of exhaustion that comes from trying to create meaning within systems designed to extract value. Creative professionals know this exhaustion intimately. They live in the tension between human connection and mechanical metrics, between authentic communication and algorithmic optimization, between their own values and the relentless machinery of growth.

The challenge isn’t just about workload, though that’s certainly part of it. It’s about existing in a perpetual state of cognitive dissonance. Many of these professionals entered marketing because they believed in the power of communication, in the art of storytelling, in the possibility of connecting people with things that might genuinely improve their lives. Instead, they find themselves serving an industry driven by investment patterns and technological determinism that often clash with their core values.

Then there’s the ever-shifting definition of success. What counts as a “result” in design and marketing has become increasingly abstract and elusive. Engagement metrics, conversion rates, attribution models — these measurements proliferate and mutate faster than anyone can meaningfully interpret them. The tools for measuring success change before we can even agree on what success means.

It’s a peculiarly modern predicament: working harder than ever while feeling the impact of that work dissolve into an increasingly fractured and cynical digital landscape. We are told to be authentic while optimizing for algorithms, to be human while automating everything possible, to be creative while conforming to data-driven best practices. We are expected to master new platforms, tools, and paradigms at an exhausting pace, all while the cultural conversation increasingly dismisses our entire profession as manipulation at best, spam at worst, and in either case entirely automatable.

Given the combination of working more than ever but getting less than ever out of it, while trying to change everything about what you do as the entire world screams at you all day about how worthless it is, burnout should be no surprise to anyone with an active heartbeat.

The exodus to other fields might reveal something deeper: a desire to return to work that produces tangible, meaningful outcomes. When a designer or marketer becomes an artist, they choose to create something that exists in the world, that can be finished, seen, and touched. When they become a fitness instructor, they choose to help people achieve concrete, physical results, perhaps even changing their lives in ways they never thought possible. These shifts suggest a hunger for work that can’t be algorithm-optimized into meaninglessness and can’t (yet) be credibly done by a machine.

What’s particularly striking is that many of these departing marketers aren’t moving to adjacent fields or seeking different roles within the industry. This isn’t a finding-my-unique-ability conversation in the corporate sphere; they’re leaving.
They’re not just tired of their jobs; they’re tired of participating in a system of uninterpreted abstraction that they are, nonetheless, beholden to. Perhaps this trend is a warning sign that we need to fundamentally rethink how we connect people with value in a digital age. The exhaustion of marketers might be a canary in the coal mine, signaling that our current approaches to attention, engagement, and value creation are becoming unsustainable.
When AI meets the unconscious…

I have had dreams I will never forget. Long, vivid experiences with plot twists and turns that confound the notion that dreaming is simply the reorganization of day residue. I have discovered and created new things, written essays, stories, and songs. And while I can recall much of what these dreams contain, the depth and detail of these experiences slip away over time. But what if they didn’t?

Sometimes I wish I could go back into these dreams. Now, as AI advances into increasingly intimate territories of human experience, that wish doesn’t seem quite so impossible. And I suspect it’s not very far off. Researchers have already developed systems that can translate brain activity into words with surprising accuracy. AI models have already been trained to reconstruct visual experiences from brain activity. You could say the machine is already in our heads. We’re approaching a future where dreams might be recorded and replayed like movies, where the mysterious theater of our unconscious mind could become accessible to the waking world.

The designer in me is fascinated by this possibility. After all, what is a dream if not the ultimate personal interface — a world generated entirely by and for a single user? But as someone who has spent decades thinking about the relationship between humans and their machines, I’m also deeply uncertain about the implications of externalizing something so fundamentally internal.

I think about the ways technology has already changed our relationship with memory. My phone holds thousands of photos and videos of my children growing up — far more than my parents could have ever taken of me. Each moment is captured, tagged, searchable. I no longer wonder whether this abundance of external memory has changed how I form internal ones — I know that it has. When everything is recorded, we experience and remember moments very differently.

Dreams could head down a similar path. Imagine a world where every dream is captured, analyzed, archived. Where AI algorithms search for patterns in our unconscious minds, offering insights about our deepest fears and desires. Where therapy sessions include replaying and examining dreams in high definition. Where artists can extract imagery directly from their dream-states into their work.

The potential benefits are obvious. For people suffering from PTSD or recurring nightmares, being able to externalize and examine their dreams could be transformative. Dream recording could open new frontiers in creativity, psychology, and self-understanding. It could help us better understand consciousness itself.

But I keep thinking about what we might lose. Dreams have always been a last refuge of privacy in an increasingly surveilled world. They’re one of the few experiences that remain truly personal, truly unmediated. When I dream, the world I experience exists nowhere else — not in the cloud, not on a server, not in anyone else’s consciousness. It’s mine alone.

What happens when that changes? When dreams become data? When the unconscious mind becomes another surface for algorithms to analyze, another source of patterns to detect, another stream of content to monetize, perhaps even the property of private corporations and insurance companies?
I can already imagine the premium subscription services: “Upload your dreams to our secure cloud storage!” “Analyze your dream patterns with our AI dream interpreter!” “Share your dreams with friends!” “Pay for privacy.”

The marriage of AI and dreaming represents a fascinating frontier in human-computer interaction. But it also forces us to confront difficult questions about the boundaries between technology and human experience. At what point does augmenting our natural capabilities become transforming them into something else entirely? What aspects of human experience should remain unmediated, unrecorded, untranslated into data?

I still wish I could return to my own dreams — how I wish I could extract from them everything I saw, heard, thought, and made within their worlds. But perhaps there’s something beautiful about the fact that I can’t — that my dreams remain untouched by algorithms and interfaces, un-mined even by me. Perhaps some experiences should remain as fleeting and ineffable and personal as our dreams mostly are, even in an age where technology promises to make everything accessible, everything shareable, everything known.

As we move toward a future where even our dreams might be recorded and analyzed by machines, we’ll need to think carefully about what we gain and what we lose when we externalize our most internal experiences. The challenge won’t be technical — it will be philosophical. Not “Can we do this?” but “Should we?” Not “How do we record dreams?” but “What does it mean for a dream to be recorded?”

These are the questions that keep me up at night. Though perhaps that’s fitting — being awake with questions about dreams.
SEO, Clickless Search, and the AInternet

Imagine designing and building a home while its residents continue living in it. What you create is highly customized to them, because you observe them living in real time and make what they need. One day, while you’re still working, those residents move out and new ones move in. Now imagine you don’t realize that for, say, a year or two afterward.

This is what it has been like to design things for the internet. People lived here once; then AI moved in. But we’re still building a house for people. I think we might be building the wrong thing.

I’ve been designing interfaces for two decades now, and when I look at the modern web, I see a landscape increasingly shaped not by human needs but by machine logic — a vast network of APIs, algorithms, and automated systems talking to each other in languages we never hear. Yes, “we” wrote those languages, but let’s be honest: “we” isn’t most of us.

Last week, my daughter asked me to help her find information about Greek mythology. She’s been reading books about it and had some specific questions the books couldn’t answer. As we typed in her questions, I noticed something important: instead of clicking through to websites, we found ourselves staying on the search page as AI-generated answers appeared above the traditional results. Unbeknownst to her, we were witnessing the end of SEO as we know it.

The conventional search engine optimization wisdom is evolving accordingly. The rungs of the SEO ladder aren’t just multiplying — making it harder to compete for subject-matter authority via PageRank — they’re changing. More to the point, the exchange that once returned the optimizer’s effort as the optimizer’s benefit is about to break down, upending the entire point of optimizing. I’ve already seen advice suggesting that because Google’s AI synthesizes content differently than a comparatively simple indexing bot does, we need to structure our content differently so that it’s more likely to appear in Google’s AI summaries. FAQ content, for example, could be elevated in this strategy for a business, since its structure anticipates the kinds of questions that people considering a product or service might ask (a rough sketch of what that markup might look like follows below). AI, after all, is already training us to change how we search for things: queries are aligning with conversational phrasing rather than metadata-focused keywords.

All fair enough — we can probably gain increased visibility within a search engine’s AI summaries by way of “agentic design.” But… why? Old-school SEO had a fairly balanced value proposition: Google was really good at giving people sources for the information they needed, and it benefitted by running advertising on those websites; the websites benefitted from the attention Google delivered to them. In a “clickless search” scenario, though, the scale tips considerably. If Google has its way, users will increasingly stay on google.com, their questions answered by AI that synthesizes information from across the web. Yes, there will be an attribution link to your original content, but have you seen those links? They are tiny. Who will click them?

Our motivation to optimize content for Google is transforming from “please send visitors our way” to “please use our content as a source” — but in this new paradigm, what’s really in it for us? Generally, I’d say… not much. And if clickless search makes the delivery of human attention significantly less likely, one has to wonder: will a website’s visual design even matter anymore? How many humans should we expect to actually see what we create?
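As an aside, here is roughly what that FAQ-structuring advice looks like in practice. The sketch below assumes schema.org’s FAQPage vocabulary (one common way of marking up questions and answers for machines, and my assumption rather than anything the advice I saw prescribed), expressed as a small TypeScript helper; the example content is invented.

```typescript
// Hypothetical sketch: generating schema.org FAQPage JSON-LD for a page,
// so a crawler (or a synthesizing AI) can lift questions and answers cleanly.

interface FaqItem {
  question: string;
  answer: string;
}

// Builds the JSON-LD string to embed in a <script type="application/ld+json"> tag.
function faqJsonLd(items: FaqItem[]): string {
  return JSON.stringify(
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      mainEntity: items.map((item) => ({
        "@type": "Question",
        name: item.question,
        acceptedAnswer: { "@type": "Answer", text: item.answer },
      })),
    },
    null,
    2
  );
}

// Invented example content, for illustration only.
console.log(
  faqJsonLd([
    {
      question: "Do you ship internationally?",
      answer: "Yes, we ship to most countries; see our shipping page for details.",
    },
  ])
);
```

Whether markup like this actually earns a citation in an AI summary, and whether that citation earns a click, is exactly the question at hand.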
For those of us happy with a very small, human audience, none of this matters much, except that we’ll probably see our numbers continue to drop. For those whose livelihoods depend upon traffic to websites they control and have designed for humans, this matters very much.

So what’s the point of “agentic design”? I can only think of one scenario: when the answer Google’s AI can provide isn’t what you know, but you yourself. Knowledge about things will go entirely to Google, on our backs. Some knowledge about how to do things a machine cannot do will be retained by us. Perhaps the only content worth optimizing for AI will be that which machines cannot replicate or synthesize — unique human experiences, specialized expertise, creative works that resist automation. Everything else — facts, figures, general knowledge — will be absorbed into the AI’s vast knowledge base, built on our collective work but no longer driving visitors to our individual spaces.

This home we’ve been building is so much bigger than the metaphor can even support. The internet has become a kind of parallel world where machines are the native inhabitants and we humans are more like tourists, guided by AI assistants that translate machine logic into human-readable experiences. Our devices are increasingly less like tools and more like interpreters, mediating our experience of a digital ecosystem that has grown too vast and complex for direct human navigation.

And this has all happened very quickly. To be disoriented is understandable. The interesting question isn’t how to optimize for AI agents, but what kinds of human experiences are worth preserving in a world where machines do most of the talking.
More in design
Weekly curated resources for designers — thinkers and makers.
Amouage, the Omani House of High Perfumery, expands its global presence with its first Asian flagship in Zhang Yuan, Shanghai’s...
After designing a few gadget-related projects, I decided to take on a new challenge: designing a lamp from scratch. Lighting is an area of fascination for me. I have an ongoing draft post about the various designer lamps in my home that I plan to publish soon. In the meantime,
Switchup designed Nanobébé’s office with a focus on simplicity, natural light, and glass dividers, creating a modern, collaborative space that...