When AI meets the unconscious… I have had dreams I will never forget. Long, vivid experiences with plot twists and turns that confound the notion that dreaming is simply the reorganization of day residue. I have discovered and created new things, written essays, stories, and songs. And while I can recall much of what these dreams contain, the depth and detail of these experiences slip away over time. But what if it didn’t? Sometimes I wish I could go back into these dreams. Now, as AI advances into increasingly intimate territories of human experience, that wish doesn’t seem quite so impossible. And I suspect it’s not very far off. Researchers have already developed systems that can translate brain activity into words with surprising accuracy. AI models have already been trained to reconstruct visual experiences from brain activity. You could say the machine is already in our heads. We’re approaching a future where dreams might be recorded and replayed like movies, where the...
a week ago

More from Christopher Butler

The Empty Hours

AI promises to automate both work and leisure. What will we do then? In 2005, I lived high up on a hill in Penang, from where I could literally watch the tech industry reshape the island and the nearby mainland. The common wisdom then was that automation would soon empty the factories across the country. Today, those same factories not only buzz with human activity — they’ve expanded dramatically, with manufacturing output up 130% and still employing 16% of Malaysia’s workforce. The work has shifted, evolved, adapted. We’re remarkably good at finding new things to do. I think about this often as I navigate my own relationship with AI tools. Last week, I asked an AI to generate some initial concepts for a client project — work that would have once filled pages of my sketchbook. As I watched the results populate my screen, my daughter asked what I was doing. “Letting the computer do some drawing for me,” I said. She considered this for a moment, then asked, “But you can draw already. If the computer does it for you, what will you do?” It’s the question of our age, isn’t it? As AI promises to take over not just routine tasks but creative ones — writing, design, music composition — we’re facing a prolonged period of anxiety. Not just about losing our jobs, but about losing our purpose. The industrial revolution promised to free us from physical labor and the digital revolution promised to free us from mental drudgery. Yet somehow we’ve ended up more stretched, more scheduled, more occupied than ever. Both were very real technological transitional periods; both had significant, measurable impacts on the economies of their time; neither ushered in a golden age of leisure. History shows that we — in the broadest sense — adapt. But here’s something to consider: adaptation takes time. At the height of the pre-industrial textile industry in the late 18th century, 20% of all women and children in England were employed hand-spinning textile fibers. 
Over the course of the following forty years, a process of mechanization took place that almost completely obviated the need for that particular workforce. But children working as hand-spinners at the pre-industrial height would have been well past middle age by the time child employment was no longer common. The transitional period would have lasted nearly the entirety of their working lives. Similarly, the decline of manufacturing in the United States unfolded over a period of nearly fifty years, from its peak in the mid-1960s to 2019, by which point a net loss of 3.5 million jobs had been measured. Again, this transition was career-length — generational. In both transitions, new forms of work became available that would have been unforeseeable before the change was underway. We are only a handful of years into what we may someday know as the AI Revolution. It seems to be moving at a faster pace than either of its historical antecedents. Perhaps it truly is. Nevertheless, history suggests that we can look forward to the new kinds of work this transition will make way for. I wonder what they may be. AI, after all, isn’t just a faster way to accomplish specific tasks; the investment in it suggests an expectation of something much grander than that: the automation of anything that can be reduced to pattern recognition and reproduction. As it turns out, that’s most of what we do. So what’s left? What remains uniquely human when machines can answer our questions, organize and optimize our world, entertain us, and create our art? The answer might lie in the spaces between productivity — in the meaningful inefficiencies that machines are designed to eliminate. AI might be able to prove this someday, but anecdotally, it’s in the various moments of friction and delay throughout my day that I do my most active and creative thinking. While waiting for the water to heat up. Walking my dog. Brewing coffee. Standing in line. 
Maybe we’re approaching a grand reversal: after centuries of humans doing machine-like work, perhaps it’s time for humans to become more distinctly human. To focus not on what’s efficient or productive, but on what’s meaningful precisely because it can’t be automated: connection, contemplation, play. But this requires a radical shift in how we think about time and purpose. For generations, we’ve defined ourselves by our work, measured our days by our output. As AI threatens to take both our labor and our creative outlets, we will need to learn — or remember — how to exist without constant production and how to separate our basic human needs from economies of scale. The factories of Malaysia taught me something important: automation doesn’t move in a straight line. Human ingenuity finds new problems to solve, new work to do, new ways to be useful. But as AI promises to automate not just our labor but our leisure, we might finally be forced to confront the question my daughter so innocently posed: what will we do instead? This will not be easy. The answer, I hope, lies not just in finding new forms of work to replace the old, but in discovering what it means to be meaningfully unoccupied. The real challenge of the AI age might not be technological at all, but existential: learning to value the empty hours not for what we can fill them with, but for what they are. I believe in the intrinsic value of human life; one’s worth is no greater after years of labor and the accumulation of wealth and status than it was at its beginning. Life cannot be earned, just lived. This is a hard lesson. Wouldn’t it be strange if the most able teacher were not human but a machine?

yesterday 2 votes
The Testing Trap

Meaningful design decisions flow from clear intent, not from data. “We don’t know what people want. We don’t even know what they do.” This confession — which so many clients never truly say but should — drives an almost compulsive need for testing and validation. Heat maps, A/B tests, user surveys — we’ve built an entire industry around the promise that enough data can bridge the gap between uncertainty and understanding. But here’s what testing actually tells us: what users do in artificial scenarios. It doesn’t tell us what they want, and it certainly doesn’t tell us what we should want them to do. We’ve confused observation with insight. A heat map might show where users click, but it won’t reveal why they click there or whether those clicks align with your business objectives. User testing might reveal pain points in your interface, but it won’t tell you if solving those pain points serves your strategic goals. The uncomfortable truth is that meaningful design decisions flow from clear intent, not from data. If you know exactly what outcome you want to achieve, you can design toward that outcome without needing to validate every decision with testing. This isn’t an argument against testing entirely. It’s an argument for testing with purpose. Before running any test, ask yourself: Do you have the intent to act on what you find? Do you have the means to act on what you find? If the answer to either question is no, you’re not testing for insight — you’re testing for comfort. You’re seeking permission to make decisions you should be making based on clear strategic intent. The most successful digital products weren’t built by following heat maps. They were built by teams with crystal-clear visions of what they wanted users to achieve. Testing can refine the path to that vision, but it can’t replace the vision itself.

3 days ago 3 votes
Digital Reality Digital Shock

Growing Up at the Dawn of Cyberspace For those of us born around 1980, William Gibson’s Neuromancer might be the most prophetic novel we never read as teenagers. Published in 1984, it predicted the digital world we would inherit: a reality where human consciousness extends into cyberspace, where corporations control the digital commons, and where being “jacked in” to a global information network is the default state of existence. But it was The Matrix, arriving in 1999 when I was nineteen, that captured something even more fundamental about our generation’s experience. Beyond its surface narrative of machines and simulated reality, beyond its Hot Topic aesthetic, the film tapped into a profound truth about coming of age in the digital era: the experience of ontological shock. Every generation experiences the disorientation of discovering the world isn’t what they thought it was. But for the last X’ers, this natural coming-of-age shock coincided with a collective technological awakening. Just as we were questioning the nature of reality and our place in it as young adults, the stable physical world of our childhood was being transformed by digital technology. The institutions, social structures, and ways of being that seemed permanent turned out to be as mutable as computer code. Neo’s journey in The Matrix — discovering his reality is a simulation and learning to see “the code” behind it — paralleled our own experience of watching the physical world become increasingly overlaid and mediated by digital systems. The film’s themes of paranoia and revelation resonated because we were living through our own red pill experience, watching as more and more of human experience moved into the digital realm that Gibson had imagined fifteen years before. The timing was uncanny. 
The Matrix arrived amid a perfect storm of millennial anxiety: Y2K fears about computers failing catastrophically, a disputed presidential election that would be decided by the Supreme Court, and then the shocking events of 9/11. For those of us just entering adulthood in the United States, these concurrent disruptions to technological, political, and social stability congealed into a generational dysphoria. The film’s paranoid questioning of reality felt less like science fiction and more like a documentary of our collective psychological state. This double shock — personal and technological — has shaped how I, and I suspect many of us, think about and design technology today. When you’ve experienced reality becoming suddenly permeable, you assume disruption, glitches, and the shock of others. You develop empathy for anyone confronting new technological paradigms. You understand the importance of transparency, of helping people see the systems they’re operating within rather than hiding them. Perhaps this is why our generation often approaches technology with a mix of fluency and skepticism. We’re comfortable in digital spaces, but we remember what came before. We know firsthand how quickly reality can transform, how easily new layers of mediation can become invisible, how important it is to maintain awareness of the code behind our increasingly digital existence. The paranoia of The Matrix wasn’t just science fiction — it was a preview of what it means to live in a world where the boundaries between physical and digital reality grow increasingly blurry. For those of us who came of age alongside the internet, that ontological shock never fully faded. Maybe it shouldn’t — I hold on to mine as an asset to my work and thinking.

4 days ago 3 votes
The Exodus

A product marketing consultant with over a decade of experience is leaving to pursue art, illustration, and poetry. Another designer, burned out on growing her business, is pivoting to focus on fitness instead. These aren’t just isolated anecdotes — they’re part of an emerging pattern of experienced creative professionals not just changing jobs, but leaving the field entirely. When people who’ve invested years mastering a profession decide to walk away, it’s worth asking why. There’s a particular kind of exhaustion that comes from trying to create meaning within systems designed to extract value. Creative professionals know this exhaustion intimately. They live in the tension between human connection and mechanical metrics, between authentic communication and algorithmic optimization, between their own values and the relentless machinery of growth. The challenge isn’t just about workload, though that’s certainly part of it. It’s about existing in a perpetual state of cognitive dissonance. Many of these professionals entered marketing because they believed in the power of communication, in the art of storytelling, in the possibility of connecting people with things that might genuinely improve their lives. Instead, they find themselves serving an industry driven by investment patterns and technological determinism that often clash with their core values. Then there’s the ever-shifting definition of success. What counts as a “result” in design and marketing has become increasingly abstract and elusive. Engagement metrics, conversion rates, attribution models — these measurements proliferate and mutate faster than anyone can meaningfully interpret them. The tools for measuring success change before we can even agree on what success means. It’s a peculiarly modern predicament: working harder than ever while feeling the impact of that work dissolve into an increasingly fractured and cynical digital landscape. 
We are told to be authentic while optimizing for algorithms, to be human while automating everything possible, to be creative while conforming to data-driven best practices. We are expected to master new platforms, tools, and paradigms at an exhausting pace, all while the cultural conversation increasingly dismisses our entire profession as manipulation at best, spam at worst, and in either case, entirely automatable. Given the combination of working more than ever, getting less out of it than ever, and having to reinvent everything about what you do while the entire world screams at you about how worthless it all is, burnout should be no surprise to anyone with an active heartbeat. The exodus to other fields might reveal something deeper: a desire to return to work that produces tangible, meaningful outcomes. When a designer or marketer becomes an artist, they choose to create something that exists in the world, that can be finished, seen, and touched. When they become a fitness instructor, they choose to help people achieve concrete, physical results, perhaps even changing their lives in ways they never thought possible. These shifts suggest a hunger for work that can’t be algorithm-optimized into meaninglessness and isn’t (yet) credibly done by a machine. What’s particularly striking is that many of these departing marketers aren’t moving to adjacent fields or seeking different roles within the industry. This isn’t a finding-my-unique-ability conversation in the corporate sphere; they’re leaving. They’re not just tired of their jobs; they’re tired of participating in a system of uninterpreted abstraction that they are, nonetheless, beholden to. Perhaps this trend is a warning sign that we need to fundamentally rethink how we connect people with value in a digital age. The exhaustion of marketers might be a canary in the coal mine, signaling that our current approaches to attention, engagement, and value creation are becoming unsustainable.

6 days ago 11 votes

More in design

Notes on Google Search Now Requiring JavaScript

John Gruber has a post about how Google’s search results now require JavaScript[1]. Why? Here’s Google: the change is intended to “better protect” Google Search against malicious activity, such as bots and spam Lol, the irony. Let’s turn to JavaScript for protection, as if the entire ad-based tracking/analytics world born out of JavaScript’s capabilities isn’t precisely what led to a less secure, less private, more exploited web. But whatever, “the web” is Google’s product so they can do what they want with it — right? Here’s John: Old original Google was a company of and for the open web. Post 2010-or-so Google is a company that sees the web as a de facto proprietary platform that it owns and controls. Those who experience the web through Google Chrome and Google Search are on that proprietary not-closed-per-se-but-not-really-open web. Search that requires JavaScript won’t cause the web to die. But it’s a sign of what’s to come (emphasis mine): Requiring JavaScript for Google Search is not about the fact that 99.9 percent of humans surfing the web have JavaScript enabled in their browsers. It’s about taking advantage of that fact to tightly control client access to Google Search results. But the nature of the true open web is that the server sticks to the specs for the HTTP protocol and the HTML content format, and clients are free to interpret that as they see fit. Original, novel, clever ways to do things with website output is what made the web so thrilling, fun, useful, and amazing. This JavaScript mandate is Google’s attempt at asserting that it will only serve search results to exactly the client software that it sees fit to serve. Requiring JavaScript is all about control. The web was founded on the idea of open access for all. But since that’s been completely and utterly abused (see LLM training datasets) we’re gonna lose it. 
The whole “freemium with ads” model that underpins the web was exploited for profit by AI at an industrial scale and that’s causing the “free and open web” to become the “paid and private web”. Universal access is quickly becoming select access — Google search results included. If you want to go down a rabbit hole of reading more about this, there’s the TechCrunch article John cites, a Hacker News thread, and this post from a company founded on providing search APIs.

20 hours ago 2 votes
The Zettelkasten note-taking methodology

My thoughts about the Zettelkasten (slip box) note-taking methodology invented by the German sociologist Niklas Luhmann.

2 hours ago 1 vote
KEPCO Community Lounge by STUDIO I’ll

When I first visited the site, I felt overwhelmed. The entrance from the parking lot was elevated, exposing the interior,...

7 hours ago 1 vote
Missed Connections

Let me tell you about one of the best feelings. You have a problem. You bang your head on it for a while. Through the banging, you formulate a string of keywords describing the problem. You put those words into a search engine. You land on a forum or a blog post and read someone else’s words containing those keywords and more. Their words resonate with you deeply. They’re saying the exact same things you were saying to yourself in your head. You immediately know, “This person gets it!” You know they have an answer to your problem. They’ve seen what you’re seeing. And on top of it all, they provide a solution which fixes your problem! A sense of connection is now formed. You feel validated, understood, seen. They’ve been through what you’re going through, and they wrote about it to reach out to you — across time and space. I fell in love with the web for this reason, this feeling of connection. You could search the world and find someone who saw what you see, felt what you feel, went through what you’re going through. Contrast that with today. Today you have a problem. You bang your head on it. You ask a question in a prompt. And you get back something. But there’s no human behind it. Just a machine which takes human voices and de-personalizes them until the individual point of view is annihilated. And so too with it the sense of connection — the feeling of being validated, understood, seen. Every prompt a connection that could have been. A world of missed connections.

2 days ago 3 votes