Meaningful design decisions flow from clear intent, not from data. “We don’t know what people want. We don’t even know what they do.” This confession — which so many clients never truly say but should — drives an almost compulsive need for testing and validation. Heat maps, A/B tests, user surveys — we’ve built an entire industry around the promise that enough data can bridge the gap between uncertainty and understanding. But here’s what testing actually tells us: what users do in artificial scenarios. It doesn’t tell us what they want, and it certainly doesn’t tell us what we should want them to do. We’ve confused observation with insight. A heat map might show where users click, but it won’t reveal why they click there or whether those clicks align with your business objectives. User testing might reveal pain points in your interface, but it won’t tell you if solving those pain points serves your strategic goals. The uncomfortable truth is that meaningful design...
2 days ago

More from Christopher Butler

The Empty Hours

AI promises to automate both work and leisure. What will we do then? In 2005, I lived high up on a hill in Penang, from where I could literally watch the tech industry reshape the island and the nearby mainland. The common wisdom then was that automation would soon empty the factories across the country. Today, those same factories not only buzz with human activity — they’ve expanded dramatically, with manufacturing output up 130% and still employing 16% of Malaysia’s workforce. The work has shifted, evolved, adapted. We’re remarkably good at finding new things to do. I think about this often as I navigate my own relationship with AI tools. Last week, I asked an AI to generate some initial concepts for a client project — work that would have once filled pages of my sketchbook. As I watched the results populate my screen, my daughter asked what I was doing. “Letting the computer do some drawing for me,” I said. She considered this for a moment, then asked, “But you can draw already. If the computer does it for you, what will you do?” It’s the question of our age, isn’t it? As AI promises to take over not just routine tasks but creative ones — writing, design, music composition — we’re facing a prolonged period of anxiety. Not just about losing our jobs, but about losing our purpose. The industrial revolution promised to free us from physical labor and the digital revolution promised to free us from mental drudgery. Yet somehow we’ve ended up more stretched, more scheduled, more occupied than ever. Both were very real technological transitional periods; both had significant, measurable impacts on the economies of their time; neither ushered in a golden age of leisure. History shows that we — in the broadest sense — adapt. But here’s something to consider: adaptation takes time. At the height of the pre-industrial textile industry, in the late 18th century, 20% of all women and children in England were employed hand-spinning textile fibers. 
Over the course of the following forty years, a process of mechanization took place that almost completely obviated the need for that particular workforce. But children working as hand-spinners at the pre-industrial height would have been well past middle age by the time child employment was no longer common. The transitional period would have lasted nearly the entirety of their working lives. Similarly, the decline of manufacturing in the United States unfolded over nearly fifty years, from its peak in the mid-1960s to 2019, when a net loss of 3.5 million jobs was measured. Again, this transition was career-length — generational. In both transitions, new forms of work became available that would have been unforeseeable before the change was underway. We are only a handful of years into what we may someday know as the AI Revolution. It seems to be moving at a faster pace than either of its historical antecedents. Perhaps it truly is. Nevertheless, the history of adaptation suggests we can look forward to the new kinds of work this transition will make possible. I wonder what they may be. AI, after all, isn’t just a faster way to accomplish specific tasks; investment in it suggests an expectation of something much grander: the automation of anything that can be reduced to pattern recognition and reproduction. As it turns out, that’s most of what we do. So what’s left? What remains uniquely human when machines can answer our questions, organize and optimize our world, entertain us, and create our art? The answer might lie in the spaces between productivity — in the meaningful inefficiencies that machines are designed to eliminate. AI might be able to prove this someday, but anecdotally, it’s in the various moments of friction and delay throughout my day that I do my most active and creative thinking. While waiting for the water to heat up. Walking my dog. Brewing coffee. Standing in line. 
Maybe we’re approaching a grand reversal: after centuries of humans doing machine-like work, perhaps it’s time for humans to become more distinctly human. To focus not on what’s efficient or productive, but on what’s meaningful precisely because it can’t be automated: connection, contemplation, play. But this requires a radical shift in how we think about time and purpose. For generations, we’ve defined ourselves by our work, measured our days by our output. As AI threatens to take both our labor and our creative outlets, we will need to learn — or remember — how to exist without constant production and how to separate our basic human needs from economies of scale. The factories of Malaysia taught me something important: automation doesn’t move in a straight line. Human ingenuity finds new problems to solve, new work to do, new ways to be useful. But as AI promises to automate not just our labor but our leisure, we might finally be forced to confront the question my daughter so innocently posed: what will we do instead? This will not be easy. The answer, I hope, lies not just in finding new forms of work to replace the old, but in discovering what it means to be meaningfully unoccupied. The real challenge of the AI age might not be technological at all, but existential: learning to value the empty hours not for what we can fill them with, but for what they are. I believe in the intrinsic value of human life; one’s worth is no greater after years of labor and the accumulation of wealth and status than it was at its beginning. Life cannot be earned, just lived. This is a hard lesson. Wouldn’t it be strange if the most able teacher was not human but machine?

14 hours ago 1 votes
Digital Reality Digital Shock

Growing Up at the Dawn of Cyberspace For those of us born around 1980, William Gibson’s Neuromancer might be the most prophetic novel we never read as teenagers. Published in 1984, it predicted the digital world we would inherit: a reality where human consciousness extends into cyberspace, where corporations control the digital commons, and where being “jacked in” to a global information network is the default state of existence. But it was The Matrix, arriving in 1999 when I was nineteen, that captured something even more fundamental about our generation’s experience. Beyond its surface narrative of machines and simulated reality, beyond its Hot Topic aesthetic, the film tapped into a profound truth about coming of age in the digital era: the experience of ontological shock. Every generation experiences the disorientation of discovering the world isn’t what they thought it was. But for the last X’ers, this natural coming-of-age shock coincided with a collective technological awakening. Just as we were questioning the nature of reality and our place in it as young adults, the stable physical world of our childhood was being transformed by digital technology. The institutions, social structures, and ways of being that seemed permanent turned out to be as mutable as computer code. Neo’s journey in The Matrix — discovering his reality is a simulation and learning to see “the code” behind it — paralleled our own experience of watching the physical world become increasingly overlaid and mediated by digital systems. The film’s themes of paranoia and revelation resonated because we were living through our own red pill experience, watching as more and more of human experience moved into the digital realm that Gibson had imagined fifteen years before. The timing was uncanny. 
The Matrix arrived amid a perfect storm of millennial anxiety: Y2K fears about computers failing catastrophically, a disputed presidential election that would be decided by the Supreme Court, and then the shocking events of 9/11. For those of us just entering adulthood in the United States, these concurrent disruptions to technological, political, and social stability congealed into a generational dysphoria. The film’s paranoid questioning of reality felt less like science fiction and more like a documentary of our collective psychological state. This double shock — personal and technological — has shaped how I, and I suspect many of us, think about and design technology today. When you’ve experienced reality becoming suddenly permeable, you assume disruption, glitches, and the shock of others. You develop empathy for anyone confronting new technological paradigms. You understand the importance of transparency, of helping people see the systems they’re operating within rather than hiding them. Perhaps this is why our generation often approaches technology with a mix of fluency and skepticism. We’re comfortable in digital spaces, but we remember what came before. We know firsthand how quickly reality can transform, how easily new layers of mediation can become invisible, how important it is to maintain awareness of the code behind our increasingly digital existence. The paranoia of The Matrix wasn’t just science fiction — it was a preview of what it means to live in a world where the boundaries between physical and digital reality grow increasingly blurry. For those of us who came of age alongside the internet, that ontological shock never fully faded. Maybe it shouldn’t — I hold on to mine as an asset to my work and thinking.

3 days ago 3 votes
The Exodus

A product marketing consultant with over a decade of experience is leaving to pursue art, illustration, and poetry. Another designer, burned out on growing her business, is pivoting to focus on fitness instead. These aren’t just isolated anecdotes — they’re part of an emerging pattern of experienced creative professionals not just changing jobs, but leaving the field entirely. When people who’ve invested years mastering a profession decide to walk away, it’s worth asking why. There’s a particular kind of exhaustion that comes from trying to create meaning within systems designed to extract value. Creative professionals know this exhaustion intimately. They live in the tension between human connection and mechanical metrics, between authentic communication and algorithmic optimization, between their own values and the relentless machinery of growth. The challenge isn’t just about workload, though that’s certainly part of it. It’s about existing in a perpetual state of cognitive dissonance. Many of these professionals entered marketing because they believed in the power of communication, in the art of storytelling, in the possibility of connecting people with things that might genuinely improve their lives. Instead, they find themselves serving an industry driven by investment patterns and technological determinism that often clash with their core values. Then there’s the ever-shifting definition of success. What counts as a “result” in design and marketing has become increasingly abstract and elusive. Engagement metrics, conversion rates, attribution models — these measurements proliferate and mutate faster than anyone can meaningfully interpret them. The tools for measuring success change before we can even agree on what success means. It’s a peculiarly modern predicament: working harder than ever while feeling the impact of that work dissolve into an increasingly fractured and cynical digital landscape. 
We are told to be authentic while optimizing for algorithms, to be human while automating everything possible, to be creative while conforming to data-driven best practices. We are expected to master new platforms, tools, and paradigms at an exhausting pace, all while the cultural conversation increasingly dismisses our entire profession as manipulation at best, spam at worst, and in either case, entirely automatable. Given the combination of working more than ever while getting less out of it than ever, and trying to change everything about what you do as the entire world screams at you all day about how worthless that work is, burnout should be no surprise to anyone with an active heartbeat. The exodus to other fields might reveal something deeper: a desire to return to work that produces tangible, meaningful outcomes. When a designer or marketer becomes an artist, they choose to create something that exists in the world, that can be finished, seen, and touched. When they become a fitness instructor, they choose to help people achieve concrete, physical results, perhaps even changing their lives in ways they never thought possible. These shifts suggest a hunger for work that can’t be algorithm-optimized into meaninglessness and can’t (yet) be credibly done by a machine. What’s particularly striking is that many of these departing marketers aren’t moving to adjacent fields or seeking different roles within the industry. This isn’t a finding-my-unique-ability conversation in the corporate sphere; they’re leaving. They’re not just tired of their jobs; they’re tired of participating in a system of uninterpreted abstraction that they are, nonetheless, beholden to. Perhaps this trend is a warning sign that we need to fundamentally rethink how we connect people with value in a digital age. The exhaustion of marketers might be a canary in the coal mine, signaling that our current approaches to attention, engagement, and value creation are becoming unsustainable.

5 days ago 9 votes
Dreams in the Machine

When AI meets the unconscious… I have had dreams I will never forget. Long, vivid experiences with plot twists and turns that confound the notion that dreaming is simply the reorganization of day residue. I have discovered and created new things, written essays, stories, and songs. And while I can recall much of what these dreams contain, the depth and detail of these experiences slips away over time. But what if it didn’t? Sometimes I wish I could go back into these dreams. Now, as AI advances into increasingly intimate territories of human experience, that wish doesn’t seem quite so impossible. And I suspect it’s not very far off. Researchers have already developed systems that can translate brain activity into words with surprising accuracy. AI models have already been trained to reconstruct visual experiences from brain activity. You could say the machine is already in our heads. We’re approaching a future where dreams might be recorded and replayed like movies, where the mysterious theater of our unconscious mind could become accessible to the waking world. The designer in me is fascinated by this possibility. After all, what is a dream if not the ultimate personal interface — a world generated entirely by and for a single user? But as someone who has spent decades thinking about the relationship between humans and their machines, I’m also deeply uncertain about the implications of externalizing something so fundamentally internal. I think about the ways technology has already changed our relationship with memory. My phone holds thousands of photos and videos of my children growing up — far more than my parents could have ever taken of me. Each moment is captured, tagged, searchable. I no longer wonder whether this abundance of external memory has changed how I form internal ones — I know that it has. When everything is recorded, we experience and remember moments very differently. Dreams could head down a similar path. 
Imagine a world where every dream is captured, analyzed, archived. Where AI algorithms search for patterns in our unconscious minds, offering insights about our deepest fears and desires. Where therapy sessions include replaying and examining dreams in high definition. Where artists can extract imagery directly from their dream-states into their work. The potential benefits are obvious. For people suffering from PTSD or recurring nightmares, being able to externalize and examine their dreams could be transformative. Dream recording could open new frontiers in creativity, psychology, and self-understanding. It could help us better understand consciousness itself. But I keep thinking about what we might lose. Dreams have always been a last refuge of privacy in an increasingly surveilled world. They’re one of the few experiences that remain truly personal, truly unmediated. When I dream, the world I experience exists nowhere else — not in the cloud, not on a server, not in anyone else’s consciousness. It’s mine alone. What happens when that changes? When dreams become data? When the unconscious mind becomes another surface for algorithms to analyze, another source of patterns to detect, another stream of content to monetize, perhaps even the property of private corporations and insurance companies? I can already imagine the premium subscription services: “Upload your dreams to our secure cloud storage!” “Analyze your dream patterns with our AI dream interpreter!” “Share your dreams with friends!” “Pay for privacy.” The marriage of AI and dreaming represents a fascinating frontier in human-computer interaction. But it also forces us to confront difficult questions about the boundaries between technology and human experience. At what point does augmenting our natural capabilities become transforming them into something else entirely? What aspects of human experience should remain unmediated, unrecorded, untranslated into data? 
I still wish I could return to my own dreams — how I wish I could extract from them everything I saw, heard, thought, and made within their worlds. But perhaps there’s something beautiful about the fact that I can’t — that my dreams remain untouched by algorithms and interfaces, un-mined even by me. Perhaps some experiences should remain as fleeting, ineffable, and personal as our dreams mostly are, even in an age where technology promises to make everything accessible, everything shareable, everything known. As we move toward a future where even our dreams might be recorded and analyzed by machines, we’ll need to think carefully about what we gain and what we lose when we externalize our most internal experiences. The challenge won’t be technical — it will be philosophical. Not “Can we do this?” but “Should we?” Not “How do we record dreams?” but “What does it mean for a dream to be recorded?” These are the questions that keep me up at night. Though perhaps that’s fitting — being awake with questions about dreams.

6 days ago 12 votes

More in design

Hana Bank by Indiesalon

While a bank is a space with a clear purpose, the waiting built into its processes often...

11 hours ago 2 votes
Gram Games office by Park Studio

Park Studio collaborated with Üçadam to design Gram Games’ Istanbul office, incorporating industrial elements like brick walls and wooden accents...

7 hours ago 1 votes
Notes on Google Search Now Requiring JavaScript

John Gruber has a post about how Google’s search results now require JavaScript[1]. Why? Here’s Google: the change is intended to “better protect” Google Search against malicious activity, such as bots and spam. Lol, the irony. Let’s turn to JavaScript for protection, as if the entire ad-based tracking/analytics world born out of JavaScript’s capabilities isn’t precisely what led to a less secure, less private, more exploited web. But whatever, “the web” is Google’s product so they can do what they want with it — right? Here’s John: Old original Google was a company of and for the open web. Post 2010-or-so Google is a company that sees the web as a de facto proprietary platform that it owns and controls. Those who experience the web through Google Chrome and Google Search are on that proprietary not-closed-per-se-but-not-really-open web. Search that requires JavaScript won’t cause the web to die. But it’s a sign of what’s to come (emphasis mine): Requiring JavaScript for Google Search is not about the fact that 99.9 percent of humans surfing the web have JavaScript enabled in their browsers. It’s about taking advantage of that fact to tightly control client access to Google Search results. But the nature of the true open web is that the server sticks to the specs for the HTTP protocol and the HTML content format, and clients are free to interpret that as they see fit. Original, novel, clever ways to do things with website output is what made the web so thrilling, fun, useful, and amazing. This JavaScript mandate is Google’s attempt at asserting that it will only serve search results to exactly the client software that it sees fit to serve. Requiring JavaScript is all about control. The web was founded on the idea of open access for all. But since that’s been completely and utterly abused (see LLM training datasets) we’re gonna lose it. 
The whole “freemium with ads” model that underpins the web was exploited for profit by AI at an industrial scale and that’s causing the “free and open web” to become the “paid and private web”. Universal access is quickly becoming select access — Google search results included. If you want to go down a rabbit hole of reading more about this, there’s the TechCrunch article John cites, a Hacker News thread, and this post from a company founded on providing search APIs.
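The open-web model John describes — the server sticks to the HTTP and HTML specs, and any client is free to interpret the output as it sees fit — can be made concrete with a toy HTML-only client. This is an illustrative sketch using only Python’s standard library; the sample pages are hypothetical, not real search responses. A client like this sees exactly what the server puts in the initial HTML, which is why a JavaScript requirement shuts it out entirely:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """One 'original, clever' way a client might interpret server-sent HTML:
    collect the visible text, ignoring script and style blocks."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # inside <script> or <style> when > 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def visible_text(html: str) -> str:
    """What a JavaScript-free client can read from a page."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# A server-rendered page yields its content to any spec-following client:
print(visible_text("<html><body><h1>Results</h1><p>10 links</p></body></html>"))
# → Results 10 links

# A page that only renders via JavaScript yields nothing to an HTML-only
# client — which is the control the post is describing:
print(visible_text("<html><body><script>render()</script></body></html>"))
# → (empty)
```

The point of the sketch isn’t the parser; it’s that nothing in HTTP or HTML obliges a client to run scripts, so moving content behind mandatory JavaScript is a policy choice about which clients get served, not a technical necessity.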

46 minutes ago 1 votes
Kedrovka cedar milk by Maria Korneva

Kedrovka is a brand of plant-based milk crafted for those who care about their health, value natural ingredients, and seek...

5 hours ago 1 votes