What technology’s drive for seamlessness gets wrong. I’ve been thinking lately about friction — not the physical force, but the small resistances we encounter in daily life. The tech industry has made eliminating friction its north star, pursuing ever more seamless experiences. Every tap saved, every decision automated, every interface made invisible — all pursued as necessary and celebrated as progress. But I’m increasingly convinced we’re missing something fundamental about human nature in this relentless pursuit of effortlessness. Consider how different it feels to discover something rather than have it given to you. Finding an unexpected book while looking for a recommended title can feel like destiny. Stumbling upon a great restaurant instead of Uber-ing to the top recommendation on Yelp creates a lasting and special memory. Or consider how the experience of reading changes when you switch from a physical book to an e-reader with infinite options always a tap away. The friction in...
a week ago

More from Christopher Butler

Ten Books About AI Written Before the Year 2000

This is by no means a definitive list, so don’t @ me! AI is an inescapable subject. There’s obviously an incredible tailwind behind the computing progress of the last handful of years — not to mention the usual avarice — but there has also been nearly a century of thought put toward artificial intelligence. If you want a more robust understanding of what is at work beneath, say, the OpenAI chat box, pick any one of these texts. Each one would be worth a read — even a skim (this is by no means light reading). At the very least, familiarizing yourself with the intellectual path leading to now will help you navigate the funhouse of overblown marketing bullshit filling the internet right now, especially as it pertains to AGI. Read what the heavyweights had to say about it and you’ll see how many semantic games are being played and how often the goalposts are moved.

Steps to an Ecology of Mind (1972) — Gregory Bateson. Through imagined dialogues with his daughter, Bateson explores how minds emerge from systems of information and communication, providing crucial insights for understanding artificial intelligence.

The Sciences of the Artificial (1969) — Herbert Simon. Simon examines how artificial systems, including AI, differ from natural ones and introduces key concepts about bounded rationality.

The Emperor’s New Mind (1989) — Roger Penrose. While arguing against strong AI, Penrose provides valuable insights into consciousness and computation that remain relevant to current AI discussions.

Gödel, Escher, Bach: An Eternal Golden Braid (1979) — Douglas Hofstadter. Hofstadter weaves together mathematics, art, and music to explore consciousness, self-reference, and emergent intelligence. Though not explicitly about AI, it provides fundamental insights into how complex cognition might emerge from simple rules and patterns.

Perceptrons (1969) — Marvin Minsky & Seymour Papert. This controversial critique of neural networks temporarily halted research in the field but ultimately helped establish its theoretical foundations. Minsky and Papert’s mathematical analysis revealed both the limitations and potential of early neural networks.

The Society of Mind (1986) — Marvin Minsky. Minsky proposes that intelligence emerges from the interaction of simple agents working together, rather than from a single unified system. This theoretical framework remains relevant to understanding both human cognition and artificial intelligence.

Computers and Thought (1963) — Edward Feigenbaum & Julian Feldman (editors). This is the first collection of articles about artificial intelligence, featuring contributions from pioneers like Herbert Simon and Allen Newell. It captures the foundational ideas and optimism of early AI research.

Artificial Intelligence: A Modern Approach (1995) — Stuart Russell & Peter Norvig. This comprehensive textbook defined how AI would be taught for decades. It presents AI as rational agent design rather than human intelligence simulation, a framework that still influences the field.

Computing Machinery and Intelligence (1950) — Alan Turing. Turing’s paper introduces the Turing Test and addresses fundamental questions about machine intelligence that we’re still grappling with today. It’s remarkable how many current AI debates were anticipated in this work.

Cybernetics: Or Control and Communication in the Animal and the Machine (1948) — Norbert Wiener. Wiener established the theoretical groundwork for understanding control systems in both machines and living things. His insights about feedback loops and communication remain crucial to understanding AI systems.

13 hours ago 2 votes
From Pages to Scenes

The Evolution of Digital Space

The metaphors we use to describe digital spaces shape how we design them. When we moved from “pages” to “screens,” we were acknowledging a shift from static information to dynamic display. But even “screen” feels increasingly inadequate for describing what we’re actually creating.

Modern digital experiences are more like scenes in a play — dynamic spaces where multiple elements interact based on context, user state, and system conditions. A user’s dashboard isn’t just a screen displaying information; it’s a scene where data, notifications, and interface elements play their roles according to complex choreography.

As much as it may sound that way, this isn’t just semantic drift. When we design for “pages,” we think in terms of layout and arrangement. When we design for “screens,” we think in terms of display and responsiveness. But when we design for “scenes,” we think in terms of relationships and conditions — how elements interact, how states change, how context affects behavior.

Consider a typical social media feed. It’s not really a screen of content — it’s a scene where various actors (posts, advertisements, notifications, user actions) interact according to multiple variables (time, engagement, user preferences, algorithmic decisions). Each element has its own behavioral logic, its own relationship to other elements, its own way of responding to user interaction.

What’s more, these actors don’t just behave differently based on context — they can look radically different too. A data visualization widget might expand in size when it detects important changes in its data stream, or adopt animated behaviors when it interacts with related elements. A notification might shift its visual treatment entirely based on urgency or relationship to other active elements. Even something as simple as a status indicator might transform its appearance, motion, and sound based on complex conditions involving multiple scene elements.

This layered complexity — where both behavior and appearance shift based on intricate interplays between scene elements — means we’re no longer just choreographing interactions. We’re directing a performance where every actor can transform both its role and its costume based on the unfolding drama.

The evolution from page to screen to scene reflects a deeper shift in interaction design. We are no longer adapting static information for digital display. We’re choreographing complex interactions between dynamic elements, each responding to its own set of conditions and rules.

This new metaphor demands different questions from designers. Instead of asking “How should this look?” we need to ask “How should this behave?” Instead of “Where should this go?” we need to ask “What role does this play?” The scene becomes our new unit of design — a space where interface elements aren’t just arranged, but directed.
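To make the "scene" framing concrete, here is a minimal TypeScript sketch of what it might look like for interface elements to derive both behavior and appearance from shared scene conditions. This is an illustration of the idea, not anything from the original post; names such as SceneElement, SceneContext, and the specific thresholds are hypothetical.

```typescript
// Hypothetical model of a "scene": each element decides its own presentation
// from shared context, rather than being laid out once on a static page.
interface SceneContext {
  focusedElementId: string | null; // which element the user is attending to
  unreadNotifications: number;
  dataChangeRate: number;          // e.g. updates per minute on a data stream
}

interface Presentation {
  size: "compact" | "expanded";
  animated: boolean;
  tone: "neutral" | "alert";
}

interface SceneElement {
  id: string;
  render(context: SceneContext): Presentation;
}

// A data widget that expands and animates when its stream gets busy.
const dataWidget: SceneElement = {
  id: "revenue-chart",
  render(ctx) {
    const busy = ctx.dataChangeRate > 10; // threshold is an arbitrary example
    return {
      size: busy ? "expanded" : "compact",
      animated: busy || ctx.focusedElementId === "revenue-chart",
      tone: "neutral",
    };
  },
};

// A notification badge whose visual treatment shifts with urgency.
const notificationBadge: SceneElement = {
  id: "notifications",
  render(ctx) {
    const urgent = ctx.unreadNotifications > 5;
    return {
      size: "compact",
      animated: urgent,
      tone: urgent ? "alert" : "neutral",
    };
  },
};

// "Directing" the scene is evaluating every element against the same context.
const scene: SceneElement[] = [dataWidget, notificationBadge];
const context: SceneContext = {
  focusedElementId: "revenue-chart",
  unreadNotifications: 7,
  dataChangeRate: 14,
};
for (const element of scene) {
  console.log(element.id, element.render(context));
}
```

The point of the sketch is the shape of the question it answers: not "where does this element go?" but "given the current state of the scene, what role does this element play and how should it present itself?"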

yesterday 2 votes
The Empty Hours

AI promises to automate both work and leisure. What will we do then?

In 2005, I lived high up on a hill in Penang, from where I could literally watch the tech industry reshape the island and the nearby mainland. The common wisdom then was that automation would soon empty the factories across the country. Today, those same factories not only buzz with human activity — they’ve expanded dramatically, with manufacturing output up 130% and still employing 16% of Malaysia’s workforce. The work has shifted, evolved, adapted. We’re remarkably good at finding new things to do.

I think about this often as I navigate my own relationship with AI tools. Last week, I asked an AI to generate some initial concepts for a client project — work that would have once filled pages of my sketchbook. As I watched the results populate my screen, my daughter asked what I was doing. “Letting the computer do some drawing for me,” I said. She considered this for a moment, then asked, “But you can draw already. If the computer does it for you, what will you do?”

It’s the question of our age, isn’t it? As AI promises to take over not just routine tasks but creative ones — writing, design, music composition — we’re facing a prolonged period of anxiety. Not just about losing our jobs, but about losing our purpose. The industrial revolution promised to free us from physical labor and the digital revolution promised to free us from mental drudgery. Yet somehow we’ve ended up more stretched, more scheduled, more occupied than ever. Both were very real technological transitions; both had significant, measurable impacts on the economies of their time; neither ushered in a golden age of leisure.

History shows that we — in the broadest sense — adapt. But here’s something to consider: adaptation takes time. At the height of the pre-industrial textile industry, in the late 18th century, 20% of all women and children in England were employed hand-spinning textile fibers. Over the course of the following forty years, mechanization almost completely obviated the need for that particular workforce. But children working as hand-spinners at the pre-industrial height would have been well past middle age by the time child employment was no longer common. The transitional period would have lasted nearly the entirety of their working lives.

Similarly, the decline of manufacturing in the United States unfolded over a period of nearly fifty years, from its peak in the mid-1960s to 2019, when a net loss of 3.5 million jobs was measured. Again, this transition was career-length — generational. In both transitions, new forms of work became available that would have been unforeseen before the change was underway.

We are only a handful of years into what we may someday know as the AI Revolution. It seems to be moving at a faster pace than either of its historical antecedents. Perhaps it truly is. Nevertheless, the history of adaptation suggests we can look forward to the new kinds of work this transition will open up for us. I wonder what they may be. AI, after all, isn’t just a faster way to accomplish specific tasks; the investment in it suggests an expectation of something much grander: the automation of anything that can be reduced to pattern recognition and reproduction. As it turns out, that’s most of what we do. So what’s left? What remains uniquely human when machines can answer our questions, organize and optimize our world, entertain us, and create our art?

The answer might lie in the spaces between productivity — in the meaningful inefficiencies that machines are designed to eliminate. AI might be able to prove this someday, but anecdotally, it’s in the various moments of friction and delay throughout my day that I do my most active and creative thinking. While waiting for the water to heat up. Walking my dog. Brewing coffee. Standing in line.

Maybe we’re approaching a grand reversal: after centuries of humans doing machine-like work, perhaps it’s time for humans to become more distinctly human. To focus not on what’s efficient or productive, but on what’s meaningful precisely because it can’t be automated: connection, contemplation, play. But this requires a radical shift in how we think about time and purpose. For generations, we’ve defined ourselves by our work, measured our days by our output. As AI threatens to take both our labor and our creative outlets, we will need to learn — or remember — how to exist without constant production, and how to separate our basic human needs from economies of scale.

The factories of Malaysia taught me something important: automation doesn’t move in a straight line. Human ingenuity finds new problems to solve, new work to do, new ways to be useful. But as AI promises to automate not just our labor but our leisure, we might finally be forced to confront the question my daughter so innocently posed: what will we do instead?

This will not be easy. The answer, I hope, lies not just in finding new forms of work to replace the old, but in discovering what it means to be meaningfully unoccupied. The real challenge of the AI age might not be technological at all, but existential: learning to value the empty hours not for what we can fill them with, but for what they are. I believe in the intrinsic value of human life; one’s worth is no greater after years of labor and the accumulation of wealth and status than it was at its beginning. Life cannot be earned, just lived. This is a hard lesson. Wouldn’t it be strange if the most able teacher were not human but machine?

3 days ago 4 votes
The Testing Trap

Meaningful design decisions flow from clear intent, not from data.

“We don’t know what people want. We don’t even know what they do.” This confession — one so many clients never quite make, but should — drives an almost compulsive need for testing and validation. Heat maps, A/B tests, user surveys — we’ve built an entire industry around the promise that enough data can bridge the gap between uncertainty and understanding.

But here’s what testing actually tells us: what users do in artificial scenarios. It doesn’t tell us what they want, and it certainly doesn’t tell us what we should want them to do. We’ve confused observation with insight. A heat map might show where users click, but it won’t reveal why they click there or whether those clicks align with your business objectives. User testing might reveal pain points in your interface, but it won’t tell you if solving those pain points serves your strategic goals.

The uncomfortable truth is that meaningful design decisions flow from clear intent, not from data. If you know exactly what outcome you want to achieve, you can design toward that outcome without needing to validate every decision with testing.

This isn’t an argument against testing entirely. It’s an argument for testing with purpose. Before running any test, ask yourself: Do you have the intent to act on what you find? Do you have the means to act on what you find? If the answer to either question is no, you’re not testing for insight — you’re testing for comfort. You’re seeking permission to make decisions you should be making based on clear strategic intent.

The most successful digital products weren’t built by following heat maps. They were built by teams with crystal-clear visions of what they wanted users to achieve. Testing can refine the path to that vision, but it can’t replace the vision itself.

5 days ago 4 votes
Digital Reality Digital Shock

Growing Up at the Dawn of Cyberspace

For those of us born around 1980, William Gibson’s Neuromancer might be the most prophetic novel we never read as teenagers. Published in 1984, it predicted the digital world we would inherit: a reality where human consciousness extends into cyberspace, where corporations control the digital commons, and where being “jacked in” to a global information network is the default state of existence.

But it was The Matrix, arriving in 1999 when I was nineteen, that captured something even more fundamental about our generation’s experience. Beyond its surface narrative of machines and simulated reality, beyond its Hot Topic aesthetic, the film tapped into a profound truth about coming of age in the digital era: the experience of ontological shock.

Every generation experiences the disorientation of discovering the world isn’t what they thought it was. But for the last X’ers, this natural coming-of-age shock coincided with a collective technological awakening. Just as we were questioning the nature of reality and our place in it as young adults, the stable physical world of our childhood was being transformed by digital technology. The institutions, social structures, and ways of being that seemed permanent turned out to be as mutable as computer code.

Neo’s journey in The Matrix — discovering his reality is a simulation and learning to see “the code” behind it — paralleled our own experience of watching the physical world become increasingly overlaid and mediated by digital systems. The film’s themes of paranoia and revelation resonated because we were living through our own red pill experience, watching as more and more of human experience moved into the digital realm that Gibson had imagined fifteen years before.

The timing was uncanny. The Matrix arrived amid a perfect storm of millennial anxiety: Y2K fears about computers failing catastrophically, a disputed presidential election that would be decided by the Supreme Court, and then the shocking events of 9/11. For those of us just entering adulthood in the United States, these concurrent disruptions to technological, political, and social stability congealed into a generational dysphoria. The film’s paranoid questioning of reality felt less like science fiction and more like a documentary of our collective psychological state.

This double shock — personal and technological — has shaped how I, and I suspect many of us, think about and design technology today. When you’ve experienced reality becoming suddenly permeable, you assume disruption, glitches, and the shock of others. You develop empathy for anyone confronting new technological paradigms. You understand the importance of transparency, of helping people see the systems they’re operating within rather than hiding them.

Perhaps this is why our generation often approaches technology with a mix of fluency and skepticism. We’re comfortable in digital spaces, but we remember what came before. We know firsthand how quickly reality can transform, how easily new layers of mediation can become invisible, how important it is to maintain awareness of the code behind our increasingly digital existence.

The paranoia of The Matrix wasn’t just science fiction — it was a preview of what it means to live in a world where the boundaries between physical and digital reality grow increasingly blurry. For those of us who came of age alongside the internet, that ontological shock never fully faded. Maybe it shouldn’t — I hold on to mine as an asset to my work and thinking.

6 days ago 5 votes

More in design

The Zettelkasten note taking methodology.

My thoughts on the Zettelkasten (slip box) note-taking methodology invented by the German sociologist Niklas Luhmann.

2 days ago 8 votes
DJI flagship store by Various Associates

Chinese interior studio Various Associates has completed an irregular pyramid-shaped flagship store for drone brand DJI in Shenzhen, China. Located...

2 days ago 3 votes
Notes on Google Search Now Requiring JavaScript

John Gruber has a post about how Google’s search results now require JavaScript[1]. Why? Here’s Google:

“the change is intended to ‘better protect’ Google Search against malicious activity, such as bots and spam”

Lol, the irony. Let’s turn to JavaScript for protection, as if the entire ad-based tracking/analytics world born out of JavaScript’s capabilities isn’t precisely what led to a less secure, less private, more exploited web. But whatever, “the web” is Google’s product so they can do what they want with it — right? Here’s John:

“Old original Google was a company of and for the open web. Post 2010-or-so Google is a company that sees the web as a de facto proprietary platform that it owns and controls. Those who experience the web through Google Chrome and Google Search are on that proprietary not-closed-per-se-but-not-really-open web.”

Search that requires JavaScript won’t cause the web to die. But it’s a sign of what’s to come (emphasis mine):

“Requiring JavaScript for Google Search is not about the fact that 99.9 percent of humans surfing the web have JavaScript enabled in their browsers. It’s about taking advantage of that fact to tightly control client access to Google Search results. But the nature of the true open web is that the server sticks to the specs for the HTTP protocol and the HTML content format, and clients are free to interpret that as they see fit. Original, novel, clever ways to do things with website output is what made the web so thrilling, fun, useful, and amazing. This JavaScript mandate is Google’s attempt at asserting that it will only serve search results to exactly the client software that it sees fit to serve. Requiring JavaScript is all about control.”

The web was founded on the idea of open access for all. But since that’s been completely and utterly abused (see LLM training datasets), we’re gonna lose it. The whole “freemium with ads” model that underpins the web was exploited for profit by AI at an industrial scale, and that’s causing the “free and open web” to become the “paid and private web.” Universal access is quickly becoming select access — Google search results included.

If you want to go down a rabbit hole of reading more about this, there’s the TechCrunch article John cites, a Hacker News thread, and this post from a company founded on providing search APIs.
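To illustrate what “clients are free to interpret that as they see fit” looks like in practice, here is a small TypeScript sketch of a spec-following client: a plain HTTP GET with no JavaScript engine, which then checks how much readable text came back. The URL and the tag-stripping heuristic are illustrative assumptions, not Gruber’s or Google’s code; a page that requires JavaScript typically hands a client like this little more than a “please enable JavaScript” stub.

```typescript
// Minimal sketch of an open-web client: plain HTTP request, no script execution.
// Runs on Node 18+ (global fetch). URL and heuristics are illustrative only.
async function fetchWithoutJs(url: string): Promise<void> {
  const response = await fetch(url, {
    headers: { "User-Agent": "plain-http-client/0.1" },
  });
  const html = await response.text();

  // Drop scripts and tags to approximate what a non-JS client can actually read.
  const visibleText = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();

  console.log(`status: ${response.status}`);
  console.log(`bytes of HTML received: ${html.length}`);
  console.log(`readable text without running JS: ${visibleText.length} chars`);
  // When the server insists on JavaScript, the readable text is usually a stub,
  // no matter how well-formed the client's HTTP request was.
}

fetchWithoutJs("https://www.google.com/search?q=open+web").catch(console.error);
```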

3 days ago 9 votes
Kedrovka cedar milk by Maria Korneva

Kedrovka is a brand of plant-based milk crafted for those who care about their health, value natural ingredients, and seek...

3 days ago 3 votes