AI promises to automate both work and leisure. What will we do then? In 2005, I lived high up on a hill in Penang, from where I could literally watch the tech industry reshape the island and the nearby mainland. The common wisdom then was that automation would soon empty the factories across the country. Today, those same factories not only buzz with human activity — they’ve expanded dramatically, with manufacturing output up 130% and still employing 16% of Malaysia’s workforce. The work has shifted, evolved, adapted. We’re remarkably good at finding new things to do. I think about this often as I navigate my own relationship with AI tools. Last week, I asked an AI to generate some initial concepts for a client project — work that would have once filled pages of my sketchbook. As I watched the results populate my screen, my daughter asked what I was doing. “Letting the computer do some drawing for me,” I said. She considered this for a moment, then asked, “But you can draw...
2 weeks ago

More from Christopher Butler

Shut up, Siri

There will be no monoculture of human-computer interaction. Every day I see a new thinkpiece on “the post-screen future” or “UI-less design” or “the end of the click.” I even used to write things like that. But that’s because I had less experience with human-computer interaction than I have now. You see, there’s this contagion of belief that new technologies not only open new doors, but definitively close old ones. But that’s rarely true. The internet didn’t end radio. The iPhone didn’t end laptops or even desktop computers. And voice interfaces won’t end screens and manual interactions. There will be no monoculture of human-computer interaction. We may have the technology to make the click an unnecessary interaction convention; I doubt we have the desire. That is a good thing. Sure, we’ll talk to our machines, just not all the time. The definition of the click as “a mechanical act requiring decision, precision, and a split-second negotiation between choice and commitment” is a good one, because it details all the reasons why the click is so useful and effective. However, some might imagine that a sophisticated enough machine would obviate the need for any direct, physical interaction. After all, didn’t the characters in Star Trek walk around the ship, constantly invoking the ship’s Computer to give them answers by just speaking “Computer…” into the room and waiting for its response? They did! But they also had many screens and panels and did a lot of tapping and pressing that might as well have been clicking. Sure, Star Trek was made long before we had a good sense of what advanced computing might actually be capable of, and what it might actually be like to use. But it also might be that the creators of Star Trek held some insight into human-computer interaction that shaped their world building. Consider how your brain processes information. The eye-brain connection is one of the most sophisticated and efficient systems in human biology. 
You can scan a list of options and make comparisons in fractions of a second — far faster than listening to those same options read aloud. Suppose we found ourselves ordering dinner at a restaurant in a purely voice-command future. I imagine that would be a lot like the moment when your server reads off the evening’s specials — what was that first one again? — but for the entire time and for everyone at the table. It would take too long, and it would be very annoying. That’s the thing about how our senses interact with the brain — they don’t all work in the same way. You can view more than one thing at a time, identify them, react to them, and process them virtually simultaneously, but you cannot come close to that kind of performance with sound. Imagine sitting across from two friends who both show you a picture at the same time. You’ll likely be able to identify both right away. Now imagine those two friends telling you something important at the same time. You’re almost certain to ask them to tell you again, one at a time. What’s more, our brains develop sophisticated spatial memory for visual interfaces. Regular users of any application know exactly where their favorite functions are located — they can navigate complex interfaces almost unconsciously, their cursor moving to the right spot without conscious thought. This kind of spatial memory simply doesn’t exist for voice commands, where every interaction requires active recall of the correct verbal command. Now imagine an office or public space where everyone is speaking commands to their devices. The cacophony would be unbearable. This highlights another crucial advantage of visual interfaces and direct selection: they’re silent. Sometimes we need to interact with our devices without broadcasting our actions to everyone around us. Voice interfaces remove this option for privacy and discretion in public spaces. 
The screen, by the way, tends to get the blame for all the negative things that have come with our increasingly digital lives — the distractions, intrusions, manipulations, and so on — but the screen itself isn’t to blame. In fact, the screen exists because of how incredibly useful it is as a memory surrogate. The screen is a surface for information and interaction, much like a whiteboard, a chalkboard, a canvas, a scroll, or a patch of dirt once was long ago. The function it serves is to hold information for us — so that we don’t have to retain it in perfect memory. That’s why screens are useful, and that’s why — I think — they were still present on an imagined starship three centuries from now along with a conversant AI. “Clicking” — which is really just a shorthand for some direct selection method — is incredibly efficient, and increasingly so as the number of options increases. Imagine a list of three items, which is probably the simplest scenario. Speaking a selection command like “the third one, please” is just as efficient as manually selecting the third one in the list. And this is probably true up to somewhere around 6 or 7 items — there’s an old principle, Miller’s “seven, plus or minus two,” having to do with our ability to hold no more than that number of individual pieces of information in our minds. But beyond that number, it gets more difficult without just being able to point. Saying you want the ninth item in a list, for example, requires that you know it’s the ninth one in the list, which might take you a moment to figure out — certainly longer than just pointing at it. Consider also the computational efficiency. A click or tap requires minimal processing power — it’s a simple input with precise coordinates. Voice commands, on the other hand, require constant audio processing, speech recognition, and AI-driven interpretation. In a world increasingly concerned with energy consumption and computational resources, the efficiency of direct selection becomes even more relevant. 
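The shape of that argument can be sketched as a back-of-the-envelope model: Fitts’s law says pointing time grows with the log of distance over target size, while naming the nth item requires a roughly linear scan to count your way to it. The constants below are invented for illustration, not measured — what matters is the linear-versus-logarithmic shape, not the absolute numbers:

```python
import math

def voice_select_time(position, per_item_scan=0.4, utterance=1.2):
    # Naming the nth item: first count your way to it (linear scan),
    # then speak the command. Grows linearly with list position.
    return per_item_scan * position + utterance

def point_select_time(distance, width, a=0.2, b=0.15):
    # Fitts's law: movement time grows with the log of distance/size.
    return a + b * math.log2(distance / width + 1)

# Assume each item is 40px tall, so item i sits roughly 40*i px away.
for i in (3, 9, 36):
    print(i, round(voice_select_time(i), 2),
             round(point_select_time(40 * i, 40), 2))
```

With these made-up constants, the voice time triples and then triples again as the list grows from three items to a grid of thirty-six, while the pointing time barely doubles — which is the argument in miniature.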
It’s also worth noting that different interface modes serve different accessibility needs. While voice interfaces can be crucial for users with certain physical limitations, visual interfaces with direct selection are essential for users with hearing impairments or speech difficulties. The future isn’t about replacing one mode with another — it’s about ensuring that multiple modes of interaction are available to serve diverse needs. Perhaps each item in the list is clearly different. In that case, you might just be able to speak aloud which one you want. But what if they’re not that different? What if you aren’t sure what each item is? Perhaps these items aren’t even words, in which case you now have to describe them in a way that the machine can disambiguate. What if there are three dozen in a grid? At that level of density, tracking with your eye and some kind of pointer helps you move more rapidly through the information, to say nothing of making a final selection. Instead of imagining a wholesale replacement of visual interfaces, we should be thinking about how to better integrate different modes of interaction. How can voice and AI augment visual interfaces rather than replace them? How can we preserve the efficiency of visual processing while adding the convenience of voice commands? The click isn’t just a technological artifact — it’s a reflection of how humans process and interact with information. As long as we have eyes and spatial reasoning, we’ll need interfaces that leverage these capabilities. The future isn’t clickless; it’s multi-modal.

yesterday 2 votes
Dedigitization

When quantum computing breaks everything. You probably know someone who still keeps essential passwords scrawled on a post-it note stuck someplace. And you’ve probably urged them to set up a password manager and shred the note for God’s sake! But what if they’re on to something? They say that even a stopped watch is right twice a day. What if the post-it’s time has come back around again? What if paper is safer than code? As bitrot-prone as it is, I think we’ve been lulled into a sense of digital permanence. Whatever we save will last forever, preserved in perfect fidelity, accessible only to those who have permission to see it. This assumption underlies the way nearly every aspect of modern life has come to work, from banking to healthcare to personal communications. But here comes quantum computing, and it threatens to undo all of it. When sufficiently powerful quantum computers arrive, they’ll be able to break most encryption we use today. And when that happens, our most precious secrets might find their safest home in an unexpected place: paper. It sounds like the pitch for a TV show, I know. But this isn’t fantasy. Quantum computers leverage the strange properties of quantum mechanics to solve certain problems exponentially faster than classical computers. Among these problems is factoring large numbers — the mathematical operation that underlies most modern encryption. While current quantum computers aren’t yet powerful enough to break encryption, experts predict they will be within a decade. More concerning is that adversaries are already harvesting encrypted data, waiting for the day they can decrypt it. (For a sobering assessment of where quantum computing and encryption stand today, see Shor’s Algorithm, D-Wave, Quantum-Resistant Algorithms, the NSA’s Commercial National Security Algorithm Suite 2.0, and RAND’s forecast of how this will all go down.) This creates a new calculus around digital security: How long does information need to stay secret? 
For communications like text messages or emails, maybe a few years is enough. But what about medical records that should remain private for a lifetime? Or state secrets that need to remain confidential for generations? Or corporate intellectual property that must never be revealed? For information that needs permanent protection, we might need to look backward to move forward. Paper — or other physical storage media — offers something digital storage cannot: security through physical rather than mathematical barriers. You can’t hack paper. You can’t decrypt it. You can’t harvest it now and crack it later. The only way to access information stored on paper is to physically acquire it. They’re going to have to start screening Tinker Tailor Soldier Spy at CIA training again. Consider cryptocurrency as a bit of a harbinger of this future. Bitcoin, despite being entirely digital, already requires physical security solutions. Hardware wallets — physical devices that store cryptographic keys — are considered the most secure way to protect digital assets. But even this hybrid approach depends upon encryption that quantum computers could eventually break. The very existence of these physical intermediaries hints at a fundamental truth: purely digital security may be impossible in a post-quantum world. Many organizations already maintain their most sensitive information in physical form. The U.S. military keeps certain critical systems air-gapped and documented on paper. Some banking systems still rely on physical ledgers as backups. Corporate lawyers often prefer paper for their most sensitive documents. These aren’t antiquated holdovers — they’re pragmatic solutions to security concerns that quantum computing will only make more relevant. But returning to paper doesn’t mean abandoning digital convenience entirely. A hybrid approach might emerge, where routine operations remain digital while truly sensitive information returns to physical form. 
This could lead to new systems and practices: secure physical storage facilities might become as common as data centers; document destruction might become as critical as data deletion; physical security might become as sophisticated as cybersecurity. Every sensitive government or corporate decision might be made in conclave. The future might also see novel solutions beyond traditional paper. Biological storage — encoding information in DNA — could offer physical security with digital density. New materials might be developed specifically for secure information storage (of course, if you can put it in there, someone can probably get it out). We might even see the emergence of new forms of encryption based on physical rather than mathematical properties. Good lord, what if you have to dance your password… The rise of quantum computing doesn’t mean the end of privacy, but it might mean the end of our assumption that digital is forever. In a world where no encryption is permanently secure, the most enduring secrets might be those written on paper, locked in a drawer, protected by physical rather than mathematical barriers. That person with the post-it note might just be ahead of their time — though perhaps they should consider moving it from the wall to a safe. This isn’t regression — it’s adaptation. Just as quantum computing represents a fundamental shift in how we process information, we might need a fundamental shift in how we protect it. It may be that the future of security looks a lot like its past.
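To make the central premise concrete, here’s a toy sketch of why factoring breaks RSA-style encryption. The primes are deliberately tiny, and brute-force trial division stands in for Shor’s algorithm; nothing here resembles real key sizes or a secure implementation:

```python
# Toy RSA with deliberately tiny primes -- an illustration, not a secure scheme.
p, q = 61, 53              # secret primes
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # kept secret; deriving it requires the factors of n
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse; Python 3.8+)

msg = 42
cipher = pow(msg, e, n)            # anyone can encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg    # only the private-key holder can decrypt

# An attacker who can factor n recovers the private key outright.
def factor(n):
    # Brute-force trial division stands in for Shor's algorithm at this scale.
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f, n // f

fp, fq = factor(n)
cracked_d = pow(e, -1, (fp - 1) * (fq - 1))
assert pow(cipher, cracked_d, n) == msg    # the harvested ciphertext falls
```

At real key sizes (2048-bit moduli), trial division is hopeless and classical factoring is infeasible, which is the whole bet; Shor’s algorithm on a large enough quantum computer would change that, and harvested ciphertext would fall exactly as in the last line.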

2 days ago 4 votes
The Internet Can't Discover: A Case for New Technologies

The internet reflects us. But new technologies must explore the world beyond. We’ve spent more than two generations and trillions of dollars building the internet. It is, arguably, humanity’s most ambitious technological project. And yet, for all its power to process and reflect information, the internet cannot tell us a single new thing about the physical world around us. The internet cannot detect an approaching asteroid, discover a new species in the deep ocean, or sense changes in the Earth’s magnetic field. It cannot discover a new material or invent something new from it. The internet is, fundamentally, an introspective technology; it is a mirror, showing us only what we’ve already put into it. What we desperately need are more extrospective technologies — windows into the unknown. Consider what the internet actually does: it processes and transmits information that humans have already discovered, documented, and digitized. Even the most sophisticated applications it enables — like artificial intelligence — can only recombine and reinterpret existing information about ourselves and our own creations. They cannot generate genuinely new knowledge about the physical world. (Let’s debate whether pattern recognition truly fits the bill in another piece.) When we marvel at AI’s capabilities, we’re really just admiring an increasingly sophisticated form of introspection. So what is an example of an extrospective technology? The James Webb Space Telescope is a good one. It offers a striking contrast to the internet’s introspective limitations. While we spend billions optimizing ways to look at ourselves, Webb actually shows us something new about our universe — it shows us more of it than we’ve ever seen before. Every image it sends back is a discovery — whether it’s the atmospheric composition of distant planets, the formation of early galaxies, or phenomena we hadn’t even thought to look for. 
Unlike the internet, which can only reflect what we already know about ourselves, Webb is quite literally a window for looking outward. It extends our vision not just beyond our natural capabilities, but beyond what any human has ever seen before. The imbalance between introspective and extrospective technologies in our society is striking. The entire James Webb Space Telescope project cost about $10 billion — roughly what Meta spends on its metaverse project in a single year. We have more engineers working on optimizing our digital reflections than on all of our outward-looking scientific instruments combined. The most talented minds of our generation are focused on perfecting the mirror rather than opening new windows to the universe. Of course, this isn’t just about how we spend our money — it’s about our relationship with discovery itself. The internet has made us incredibly efficient at examining and reprocessing our own knowledge, but it may have dulled our appetite for looking outward. We’ve become so accustomed to the immediate gratification of accessing existing information that we’ve forgotten the importance of the long, patient work of discovery. The technologies that can actually tell us something new about our world often operate on different timescales than introspective ones. A deep ocean sensor might need years to reveal meaningful patterns. A space telescope might require decades of operation before making a groundbreaking discovery. These extrospective technologies demand patience and sustained investment — qualities that our internet-shaped attention spans increasingly struggle with. Yet these are exactly the technologies we need most urgently. As we face unprecedented environmental challenges and explore new frontiers in space, we need more windows into the physical world, not better mirrors of our digital one. We need technologies that can warn us of environmental changes, reveal new resources, and help us understand our place in the universe. 
As enthusiastic as I am about things like the James Webb Telescope, I’m also fascinated by how much we still don’t know about this planet. Only 26% of the ocean floor has been mapped using high-resolution sonar technology. Only 5% of the ocean has actually been physically explored. It has been estimated that the ocean’s ecosystem is home to somewhere between 700,000 and 1 million unique species. The vast majority of them — greater than 60% — remain undiscovered. Isn’t it astonishing how little we know about this thing that covers 70% of our planet’s surface? What could we create to learn more about the ocean? It has to be something truly novel — something other than a better submarine. The ocean is just one place to point an extrospective technology. Think of the greatest unknowns. How many of them will be illuminated by the internet? Will the internet expand our understanding of physics, or will it document what we find by some other means? Will the internet ever be more than a catalog of discoveries? Will the internet explain the nature of time, or will it just continue to draw from our finite supply of it? The internet will remain vital infrastructure, but it’s time to rebalance our technological investment. We need to direct more resources and talent toward extrospective technologies — tools that can tell us something new about the world beyond our screens. Only then can we move beyond endless introspection and toward genuine discovery of the universe that exists whether we look at it or not.

a week ago 7 votes
AI is here. I'm doing Office Hours.

Here’s why: I think we need a safe, private place to talk about what AI means for design and, more broadly, the future of work. AI looms as a double-layered threat. We are right to wonder, will AI take my job? But we also worry that talking openly about our concerns and questions will hasten the day — that we’ll look like stubborn luddites, unreliable leaders and teammates, or weak links in the chain. AI, and especially the conversation about it, is moving so quickly that simply keeping up with it takes more energy than we can imagine putting toward even starting to explore it. I’m hearing how stressful it feels to be expected to discover, learn, and use every new tool that becomes available and demonstrate ROI on that effort immediately. I think we’re all scrambling and worrying and wondering how long we can keep this up. And yet I remain optimistic. I’ve kept up with and explored AI exhaustively, and I’ve felt every feeling and secretly thought every thought I’ve listed above. And as I believe about everything, it’s better to bring it into the light. I want to do that with you, and I think I might even be able to help. I think there’s a future for thinkers, designers, and firms of all kinds. There’s opportunity in discovery.

How it Will Work

Here’s the thing. Lots of people do this. In fact, several people whom I admire greatly have set up office hours just like this. But I never make use of them. Here are the reasons why — perhaps you can relate: Intimidation (even though they’ve offered, will they think I’m lame for taking them up on it?). Impostor Syndrome (they’ll realize I know nothing). Introversion (wait, I have to talk to someone? Like right at their actual face?). So, a few notes: Ask me anything. I guarantee I have had the same question or worry at some point. Hey, wait, who says I know everything? I have something to learn from you, too. Our meeting will be private. Our secret. 100% confidential. We don’t have to go on video if you don’t want to. 
Nothing will be recorded. Slots are available on Wednesdays from 9am-10am and 1pm-2pm EST, and on Fridays from 9am-10am EST. You can book a time right now, starting as early as tomorrow. 👇 I just took this picture — this is how I look and what it will be like to chat. Friendly is what I’m going for :)

a week ago 7 votes
The Interface Is Always There

There’s no such thing as UI-less anything. If it’s not one of your five senses, it’s an interface. This might seem like a bold claim in an era obsessed with “invisible” and “UI-less” design, but consider what an interface actually is: any mediating layer between you and information. When you use a voice assistant, you’re not experiencing a UI-less interaction — you’re using an audio interface. When you use gesture controls, you’re not bypassing an interface — you’re using a kinetic one. Even when you’re using a “seamless” AR experience, you’re still interfacing through visual overlays and spatial tracking. The dream of UI-less design is sold as a magically unmediated experience, but in reality it’s just something other than a couple of boxes with a screen. Don’t get me wrong; I have no problem with unboxing the computer. Let’s just not play marketing games with what that is. Call it distributed computing. Call it what it is. An unmediated digital experience is an oxymoron. Digital information must be translated into human-perceivable form through some kind of interface, whether that’s visual, auditory, tactile, or some combination thereof. This isn’t a limitation — it’s a fundamental aspect of how we process information. Our brains are wired to understand the world through sensory interfaces. We can’t directly perceive radio waves, but we can interface with them through devices that translate them into something our senses can process. Perhaps instead of chasing the impossible dream of invisible interfaces, we should focus on designing interfaces that work with our natural ways of processing information. After all, the best interfaces aren’t invisible — they’re intuitive. They don’t disappear; they become extensions of our sensory experience. The next time someone talks about creating a UI-less experience, ask them: Through which of the five senses do they expect users to perceive their product? Whatever the answer, that’s their interface.   P.S. 
You might think that a direct brain-computer interface will checkmate every point I’ve made here and then some. Perhaps. But I sincerely doubt that. A direct interface like that will open us up to receiving input at a volume, diversity, and simultaneity that we’ve never experienced before. As plastic as the brain can be, I think such a thing will take time — perhaps on a generational scale of adaptation — to take root in human society.

a week ago 16 votes

More in design

Inside the dark forests of the internet

This is the second part of a series on the identity of social networks:

8 hours ago 3 votes
OZBE Café by KKOL Studio

OZBE is a new café brand occupying four floors that offers premium fruits and serves them by directly juicing or...

5 hours ago 2 votes
Reactive Haptics

When people think of haptics, they usually think of typing on mobile keyboards or tapping on trackpads. While impressive, these are fairly limited uses of haptics, both attempting to recreate a simple “click.” These are one-shot user events that don’t respond dynamically to the user. On the Android team, I explored a range of interactive […]

yesterday 6 votes
EMKA store

EMKA Store in Salaris is a unique project by Sinitsa Bureau, spanning 250 m². The concept emerged from a deep...

yesterday 3 votes