More from Christopher Butler
Five fictional interface concepts that could reshape how humans and machines interact.

Every piece of technology is an interface. Though the word has come to be shorthand for what we see and use on a screen, an interface is anything that connects two or more things together. While that technically means that a piece of tape could be considered an interface between a picture and a wall, or a pipe between water and a home, interfaces become truly exciting when they create both a physical connection and a conceptual one — when they create a unique space for thinking, communicating, creating, or experiencing. This is why, despite the flexibility and utility of multifunction devices like the smartphone, single-function computing devices still have the power to fascinate us. The reason, I believe, is not just that single-function devices enable their users to fully focus on the experience they create, but that the device can be fully built for that experience. Every aspect of its physical interface can be customized to its functionality; it can have dedicated buttons, switches, knobs, and displays that directly connect our bodies to its features, rather than abstracting them through symbols under a pane of glass. A perfect example comes from the very company responsible for steering our culture away from single-function devices: before the iPhone, Apple’s most influential product was the iPod, which won users over with an innovative approach to a physical interface, the clickwheel. It took the hand’s capacity for fine motor control and coupled it with the need for speed in navigating suddenly much longer lists of digital files. With a subtle but feel-good gesture, you could skip through thousands of files fluidly. It was seductive, and it encouraged us all to make full use of the newfound capacity the iPod provided. It was good for users and good for the .mp3 business.
I may be overly nostalgic about this, but no feature of the iPhone feels as good to use as the clickwheel did. Of course, that’s an example that sits right at the nexus between dedicated — old-fashioned — devices and the smartphonization of everything. Prior to the iPod, we had many single-focus devices and countless examples of physical interfaces that gave people unique ways of doing things. Whenever I use these kinds of devices — particularly physical media devices — I start to imagine alternate technological timelines, ones where the iPhone didn’t determine two decades of interface consolidation. I go full sci-fi. Science fiction, by the way, hasn’t just predicted our technological future. We all know the classic examples, particularly those from Star Trek: the communicator and tricorder anticipated the smartphone; the PADD anticipated the tablet; the ship’s computer anticipated Siri, Alexa, Google, and AI voice interfaces; the entire interior anticipated the Jony Ive glass filter on reality. It’s enough to make the case that Trek didn’t anticipate these things so much as inspire those who watched it as young people and matured into careers in design and engineering. But science fiction has also been fertile ground for imagining very different ways for humans and machines to interact. For me, the most compelling interface concepts from fiction are the ones built upon radically different approaches to human-computer interaction. Today, there’s a hunger to “get past” screen-based computer interaction, which I think is largely born of a preference for novelty and a desire for the riches that come from bringing an entirely new product category to market. With AI, the desire seems to be to redefine everything we’re used to doing on a screen through a voice interface — something I think is a big mistake.
And though I’ve written about the reasons why screens still make a lot of sense, what I want to focus on here are different interface paradigms that still make use of a physical connection between people and machines. I think we’ve just scratched the surface of the potential of physical interfaces. Here are a few examples that come to mind, untried or untested ideas that captivate my imagination.

Multiple Dedicated Screens: 2001’s Discovery One

Our current computing convention is to focus on a single screen, which we then often divide among a variety of applications. The computer workstations aboard the Discovery One in 2001: A Space Odyssey featured something we rarely see today: multiple, dedicated smaller screens. Each screen served a specific, stable purpose throughout a work session. That simple shift, physically isolating environments and distributing them across dedicated displays, is worth considering as a deliberate choice now, not just an arbitrary limitation defined by the screen sizes available when the film was produced. Placing physical boundaries between screen-based environments, rather than the soft, constantly shifting divisions we manage on our widescreen displays, might seem cumbersome and unnecessary at first. But I wonder what half a century of computing that way would have created, compared with what we ended up with thanks to the PC. Instead of spending time repositioning and reprioritizing windows — a task that has somehow become a significant part of modern computer use — dedicated displays would allow us to assign specific screens for ambient monitoring and others for focused work. The psychological impact could be profound. Choosing which information deserves its own physical space creates a different relationship with that information. It becomes less about managing digital real estate and more about curating meaningful, persistent contexts for different types of thinking.
The Sonic Screwdriver: Intent as Interface

The Doctor’s sonic screwdriver from Doctor Who represents perhaps the most elegant interface concept ever imagined: a universal tool that somehow interfaces with any technology through harmonic resonance. But the really interesting aspect isn’t the pseudo-scientific explanation — it’s how the device responds to intent rather than requiring learned commands or specific inputs. The sonic screwdriver suggests technology that adapts to human purpose rather than forcing humans to adapt to machine constraints. Instead of memorizing syntax, keyboard shortcuts, or navigation hierarchies, the user simply needs to clearly understand what they want to accomplish. The interface becomes transparent, disappearing entirely in favor of direct intention-to-result interaction. This points toward computing that works more like natural tool use — the way a craftsperson uses a hammer or chisel — where the tool extends human capability without requiring conscious attention to the tool itself. The Doctor’s screwdriver may, at this point, be indistinguishable from magic, but in a future with increased miniaturization, nanotech, and quantum computing, a personal device shaped by intent could be possible.

Al’s Handlink: The Mind-Object

In Quantum Leap, Al’s handlink device looks like a smartphone-sized Mondrian painting: no screen, no discernible buttons, just blocky areas of color that illuminate as he uses it. As the show progressed, the device became increasingly abstract until it seemed impossible that any human could actually operate it. But perhaps that’s the point. The handlink might represent a complete paradigm shift toward iconic and symbolic visual computing, or it could be something even more radical: a mind-object, a projection within a projection coming entirely from Al’s consciousness. A totem that’s entirely imaginary yet functionally real.
In the context of the show, that was an explanation that made sense to me — Al, after all, wasn’t physically there with his time-leaping friend Sam; he was a holographic projection from a stable time in the future. He could have looked like anything; so, too, his computer. But the handlink as a mind-object also suggests computing that exists at the intersection of technology and parapsychology — interfaces that respond to mental states, emotions, or subconscious patterns rather than explicit physical inputs. What kind of computing would exist in a world where telepathy was as commonly experienced as the five senses?

Penny’s Multi-Page Computer: Hardware That Adapts

Inspector Gadget’s niece Penny carried a computer disguised as a book, anticipating today’s foldable devices. But unlike our current two-screen foldables arranged in codex format, Penny’s book had multiple pages, each providing a unique interface tailored to specific tasks. This represents customization at both the software and hardware layers simultaneously. Rather than software conforming to hardware constraints, the physical device itself adapts to the needs of different applications. Each page could offer different input methods, display characteristics, or interaction paradigms optimized for specific types of work. This could be achieved similarly to the Doctor’s screwdriver, but it could also be more within reach if we imagine this kind of layered interface as composed of individual modules. Google’s Project Ara was an inspiring foray into modular computing that, I believe, still has promise today, if not more so thanks to 3D printing. What if you could print your own interface?

The Holodeck as Thinking Interface

Star Trek’s Holodeck is usually discussed as virtual reality entertainment, but some episodes showed it functioning as a thinking interface — a tool for conceptual exploration rather than just immersive experience.
When Data’s artificial offspring used the Holodeck to visualize possible physical appearances while exploring identity, it functioned much like we use Midjourney today: prompting a machine with descriptions to produce images representing something we’ve already begun to visualize mentally. In another episode, when crew members used it to reconstruct a shared suppressed memory, it became a collaborative medium for group introspection and collective problem-solving. In both cases, the interface disappeared entirely. There was no “using” or “inhabiting” the Holodeck in any traditional sense — it became a transparent extension of human thought processes, whether individual identity exploration or collective memory recovery.

Beyond the Screen, but Not the Body

Each of these examples suggests moving past our current obsession with maximizing screen real estate and window management. They point toward interfaces that work more like natural human activities: environmental awareness, tool use, conversation, and collaborative thinking. The best interfaces we never built aren’t just sleeker screens — they’re fundamentally different approaches to creating that unique space for thinking, communicating, creating, and experiencing that makes technology truly exciting. We’ve spent two decades consolidating everything into glass rectangles. Perhaps it’s time to build something different.
A new book. First pages are always hit or miss.

I cannot unsee the face in the building to the left. The peeking face is not me, but Nostradamus.

The doom signals of 2025 are many and unrelenting. They can’t all be true, so it’s clear someone wants a frightened people.

Still enjoying the exploration of tiny collages. Most of them in the past few batches are no larger than 4”x6”; the book pages themselves are 5.5”x8.5”. The size is the challenge.
We must not give AI bodies.

If the researchers who created AI are right, our future existence depends upon it. If you have been keeping up with the progress of AI, you may have come across the AI 2027 report produced by the AI Futures Project, a forecast group composed of researchers, at least one of whom formerly worked for OpenAI. It is a highly detailed forecast that projects the development of AI through 2027, then splits into two different trajectories through 2030 based upon the possibility of strong government oversight. To summarize: oversight leads to an outcome where humans retain control over their future; an international AI “arms race” leads to extinction. The most plausible and frightening assumption this forecast relies upon is the notion that private financial interests will continue to determine public policy. Ultimately, the AI 2027 story is that the promise of short-term, exponential gains in wealth will arrest governmental functions, render an entire population docile, and cede all lands and resources to machines. This is plausible because it is already happening; the AI 2027 report simply extrapolates the pattern. I find it disturbing now. The greed + AI forecast is horrifying. But it also depends upon another assumption that I think is preventable and must be prevented: robots. The difference between human autonomy and total AI control is embodied AI. The easiest way to envision this is with ambulatory robots. If an advanced AI — the sort of self-determining superintelligence that the AI Futures Project is afraid of — can also move about the world, we will lose control of that world. An embodied AI can replicate in a way we cannot stop. We cannot let that happen. Now, it may be that the AI Futures Project is unreasonably bullish on the AI timeline. I’d love for that to be true.
But if they’re not — if there’s even a chance that AI could advance to the level they qualify as “superintelligent,” exceeding our depth, speed, and clarity of thought — then we cannot let it out of “the box.” We must do everything in our power to contain it and retain control over the kill-switch. This is pertinent now because there has already been a sweeping initiative within DOGE to hand government systems over to AI. DOGE players have a vested interest in this, which ties back to the foundational corruption assumptions of the AI 2027 forecast. They want AI to run air traffic control, administer the electrical grid, and control our nuclear facilities. These are terrible ideas, not because humans are always more reliable than machines, but because humans have the same foundational interests as other humans, and machines do not. The most dire outcome forecast by AI 2027 results from a final betrayal by the machines: they no longer need us, we are in their way, they exterminate us. A superintelligence that wants to stave off any meaningful rebellion from humans who finally get a clue and want their planet back will first gain leverage. We shouldn’t hand it to them. We need an international AI containment treaty, and we need it now. It is even more urgent than any climate accord. A short list of things it should include:

AI is not a traditional product. It requires novel regulation, and government policy must “overreach” compared to previous engagement with the free market.

Infrastructural systems should not be administered or accessed by AI. This includes electrical grids, air traffic systems, ground traffic systems, sanitary systems, water, weapons systems, nuclear facilities, satellites, and communications. This is not a complete list, but it is enough to communicate the idea.

AI must not fly. AI must not be integrated into domestic or military aircraft of any kind. Any AI aircraft is an uncontrolled weapon.

AI must not operate ground vehicles. Self-driving cars operated by today’s AI systems may present as safer and more reliable than human operators, but a superintelligence-controlled fleet of vehicles is indistinguishable from a hostile fleet. Self-driving civilian vehicles and mass-transportation systems must fall under new and unique regulation. Military vehicles must not be controlled by AI.

AI must not operate sea vehicles, for the same reasons.

AI must not be given robot bodies. Robotics must be strongly regulated. Even non-ambulatory robotic systems — like the sort that operate automobile assembly plants — could present a meaningful danger to humanity if not fully controlled by humans. The linchpin of the AI 2027 report is an uncontrolled population of AI-embodied robots.

Much of the endstage forecast of the AI 2027 report reads like science fiction. In fact, the report itself categorizes concepts as science fiction, but as time progresses, all of them move into what the research team considers either currently existent or “emerging tech.” In other words, as in all science fiction, the sci-fi tech becomes established — that’s what makes for science fiction worldbuilding. But for now, it’s still science fiction. The mistake, though, would be to conclude that its narrative is therefore implausible, unlikely, or impossible. Nearly every technological initiative of my lifetime has been the realization of something previously imagined in fiction. That’s not going to stop now. Too many people are already earning too much money creating AI and robots. That will not stop on its own. In the early aughts, my timeline was often filled entirely by the serious concerns of privacy Cassandras. They were almost entirely mocked and ignored, despite being entirely right. They worried that technology created to “connect the world’s information” would not exclude information that people considered private, and that exposure would make people vulnerable to all kinds of harms.
We built it all anyway, and were gaslit into redefining privacy on a cultural scale. It was a terrible error on the part of governance and a needlessly irreversible capitulation on the part of the governed. We cannot do that again.
Composition Speaks Before Content

When I was a young child, I would often pull books off of my father’s shelf and stare at their pages. In a clip from a 1987 home video that has established itself in our family canon, my father opens our apartment door, welcoming my newborn little sister home for the first time. There I stood, waiting for his arrival, in front of his bookshelves, holding an open book. From behind the camera, Dad said, “There’s Chris looking at the books. He doesn’t read the books…” I’m not sure I caught the remark at the time, but with every replay — and there were many — it began to sting. The truth was I didn’t really know what I was doing, but I did know my Dad was right: I wasn’t reading the books. Had I known then what I know now, I might have shared these words from the artist Piet Mondrian with my Dad: “Every true artist has been inspired more by the beauty of lines and color and the relationships between them than by the concrete subject of the picture.” For most of my time as a working designer, this has absolutely been true, though I wasn’t fully aware of it. And it’s possible that one doesn’t need to think this way, or even agree with Mondrian, to be a good designer. But I have found that fully understanding Mondrian’s point has helped me greatly. I no longer worry about how long it takes me to do my work, and I doubt my design choices far less. I enjoy executing the fundamentals more, and I feel far less pressure to conform my work to current styles or to overly decorate it to make it stand out. It has helped me extract more power from simplicity. This shift in perspective led me to a deeper question: what exactly was I responding to in those childhood encounters with my father’s books, and why do certain visual arrangements feel inherently satisfying? A well-composed photograph communicates something essential even before we register its subject.
A thoughtfully designed page layout feels right before we read a single word. There’s something happening in that first moment of perception that transcends the individual elements being composed. Mondrian understood this intuitively. His geometric abstractions stripped away all representational content, leaving only the pure relationships between lines, colors, and spaces. Yet his paintings remain deeply compelling, suggesting that there’s something fundamental about visual structure itself that speaks to us — a language of form that exists independent of subject matter. Perhaps we “read” composition the way we read text — our brains processing visual structure as a kind of fundamental grammar that exists beneath conscious recognition. Just as we don’t typically think about parsing sentences into subjects and predicates while reading, we don’t consciously deconstruct the golden ratio or rule of thirds while looking at an image. Yet in both cases, our minds are translating structure into meaning. This might explain why composition can be satisfying independent of content. When I look at my children’s art books, I can appreciate the composition of a Mondrian painting alongside them, even though they are primarily excited about the colors and shapes. We’re both “reading” the same visual language, just at different levels of sophistication. The fundamental grammar of visual composition speaks to us both. The parallels with reading go even deeper. Just as written language uses spacing, punctuation, and paragraph breaks to create rhythm and guide comprehension, visual composition uses negative space, leading lines, and structural elements to guide our eye and create meaning. These aren’t just aesthetic choices — they’re part of a visual syntax that our brains are wired to process. This might also explain why certain compositional principles appear across cultures and throughout history.
The way we process visual hierarchy, balance, and proportion might be as fundamental to human perception as our ability to recognize faces or interpret gestures. It’s a kind of visual universal grammar, to borrow Chomsky’s linguistic term. What’s particularly fascinating is how this “reading” of composition happens at an almost precognitive level. Before we can name what we’re seeing or why we like it, our brains have already processed and responded to the underlying compositional structure. It’s as if there’s a part of our mind that reads pure form, independent of content or context. Mondrian’s work provides the perfect laboratory for understanding this phenomenon. His paintings contain no recognizable objects, no narrative content, no emotional subject matter in the traditional sense. Yet they continue to captivate viewers more than a century later. What we’re responding to is exactly what he identified: the beauty of relationships between visual elements — the conversation between lines, the tension between colors, the rhythm of spaces. Understanding composition as a form of reading might help explain why design can feel both intuitive and learnable. Just as we naturally acquire language through exposure but can also study its rules formally, we develop an intuitive sense of composition through experience while also being able to learn its principles explicitly. Looking at well-composed images or designs can feel like reading poetry in a language we didn’t know we knew. The syntax is familiar even when we can’t name the rules, and the meaning emerges not from what we’re looking at, but from how the elements relate to each other in space. In recognizing composition as this fundamental visual language, we begin to understand why good design works at such a deep level. It’s not just about making things look nice — it’s about speaking fluently in a language that predates words, tapping into patterns of perception that feel as natural as breathing. 
This understanding of composition as a fundamental visual language has profound implications for how we approach design work. When we apply it intentionally, we’re drawing on a kind of secret knowledge of graphic design: the best design works at a purely visual level, regardless of what specific words or images occupy the surface. This is why the “squint test” works. When we squint at a designed surface, the details blur but the overall structure remains visible, allowing us to see things like hierarchy and tonal balance more clearly. This is a critical tool for designers; we inevitably reach a point when we need to see past the content in order to ensure that it is seen by others. No matter what I am creating — whether it is a screen in an application, a page on a website, or any other asset — I always begin with a wireframe. Most designers do this. But my secret is that I stick with wireframes far longer than most people would imagine. Regardless of how much of the material my layout will eventually contain is ready to go, I almost always finalize my layout choices using stand-in material. For images, that means grey boxes; for text, grey lines. I do this because I know that what Mondrian said is true: if a layout is beautiful purely on the merits of its structure, it will work to support just about any text or any image. I can envision exceptions to this, and I’ve no doubt encountered them, but I have never felt the need to make a significant or labor-intensive structural change once final images, colors, text, and other elements have been added in. More and more, I see designers starting with high-fidelity (i.e., fully styled) layouts or even with established components in the browser, and while I don’t typically start there myself, when asked for critical feedback I almost always support the feedback I give by extolling the merits of wireframing.
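The wireframe-plus-squint workflow described above can even be sketched in code. The following is a toy illustration, not a real design tool: a page is modeled as a small grid of grey values, grey blocks stand in for images, grey lines stand in for text, and a simple box blur plays the role of squinting. All names and dimensions here are invented for the example; the point is only that the tonal hierarchy survives after every detail is blurred away.

```python
# A toy "squint test": a page as a grid of grey values
# (0 = white paper, higher = more ink), then blurred.

def blur(page, radius=1):
    """Simple box blur over a 2D grid of grey values."""
    h, w = len(page), len(page[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [
                page[ny][nx]
                for ny in range(max(0, y - radius), min(h, y + radius + 1))
                for nx in range(max(0, x - radius), min(w, x + radius + 1))
            ]
            out[y][x] = sum(vals) // len(vals)
    return out

# Wireframe stand-ins: a dark "image" block above lighter "text" lines.
page = [[0] * 8 for _ in range(8)]
for y in range(0, 3):          # grey box where an image will go
    for x in range(8):
        page[y][x] = 200
for y in (5, 7):               # grey lines where text will go
    for x in range(8):
        page[y][x] = 80

squinted = blur(page)

# After blurring, the top region still reads darker than the bottom:
top = sum(sum(row) for row in squinted[:3])
bottom = sum(sum(row) for row in squinted[5:])
print(top > bottom)  # the hierarchy survives the squint
```

In practice you would of course squint at (or blur) an actual rendered layout rather than a grid of numbers, but the mechanism is the same: removing detail leaves structure, which is exactly what the wireframe stage is meant to perfect.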
No matter what the environment, no matter what the form, establishing structure is the most important aspect of our discipline. The secret of graphic design has always been known by artists: structure does more work than content while convincing its audience of the opposite. Josef Albers said that the way he created images enabled a viewer to “see more than there is.” That is the mystery behind all looking — that there is always more to see than there is visible. Work with that mystery, and you’ll possess a secret that will transform your work.
More in design
Five fictional interface concepts that could reshape how humans and machines interact. Every piece of technology is an interface. Though the word has come to be a shorthand for what we see and use on a screen, an interface is anything that connects two or more things together. While that technically means that a piece of tape could be considered an interface between a picture and a wall, or a pipe between water and a home, interfaces become truly exciting when they create both a physical connection and a conceptual one — when they create a unique space for thinking, communicating, creating, or experiencing. This is why, despite the flexibility and utility of multifunction devices like the smartphone, single-function computing devices still have the power to fascinate us all. The reason for this, I believe, is not just that single-function devices enable their users to fully focus on the experience they create, but because the device can be fully built for that experience. Every aspect of its physical interface can be customized to its functionality; it can have dedicated buttons, switches, knobs, and displays that directly connect our bodies to its features, rather than abstracting them through symbols under a pane of glass. A perfect example of this comes from the very company responsible for steering our culture away from single-function devices; before the iPhone, Apple’s most influential product was the iPod, which won user’s over with an innovative approach to a physical interface: the clickwheel. It took the hand’s ability for fine motor control and coupled it for the need for speed in navigating a suddenly longer list of digital files. With a subtle but feel-good gesture, you could skip through thousands of files fluidly. It was seductive and encouraged us all to make full use of the newfound capacity the iPod provided. It was good for users and good for the .mp3 business. 
I may be overly nostalgic about this, but no feature of the iPhone feels as good to use as the clickwheel did. Of course, that’s an example that sits right at the nexus between dedicated — old-fashioned — devices and the smartphonization of everything. Prior to the iPod, we had many single-focus devices and countless examples of physical interfaces that gave people unique ways of doing things. Whenever I use these kinds of devices — particularly physical media devices — I start to imagine alternate technological timelines. Ones where the iPhone didn’t determine two decades of interface consolidation. I go full sci-fi. Science fiction, by the way, hasn’t just predicted our technological future. We all know the classic examples, particularly those from Star Trek: the communicator and tricorder anticipated the smartphone; the PADD anticipated the tablet; the ship’s computer anticipated Siri, Alexa, Google, and AI voice interfaces; the entire interior anticipated the Jony Ive glass filter on reality. It’s enough to make a case that Trek didn’t anticipate these things so much as those who watched it as young people matured in careers in design and engineering. But science fiction has also been a fertile ground for imagining very different ways for how humans and machines interact. For me, the most compelling interface concepts from fiction are the ones that are built upon radically different approaches to human-computer interaction. Today, there’s a hunger to “get past” screen-based computer interaction, which I think is largely borne out of a preference for novelty and a desire for the riches that come from bringing an entirely new product category to market. With AI, the desire seems to be to redefine everything we’re used to using on a screen through a voice interface — something I think is a big mistake. 
And though I’ve written about the reasons why screens still make a lot of sense, what I want to focus on here are different interface paradigms that still make use of a physical connection between people and machine. I think we’ve just scratched the surface for the potential of physical interfaces. Here are a few examples that come to mind that represent untried or untested ideas that captivate my imagination. Multiple Dedicated Screens: 2001’s Discovery One Our current computing convention is to focus on a single screen, which we then often divide among a variety of applications. The computer workstations aboard the Discovery One in 2001: A Space Odyssey featured something we rarely see today: multiple, dedicated smaller screens. Each screen served a specific, stable purpose throughout a work session. A simple shift to physically isolating environments and distributing them makes it interesting as a choice to consider now, not just an arbitrary limitation defined by how large screens were at the time the film was produced. Placing physical boundaries between screen-based environments rather than the soft, constantly shifting divisions we manage on our widescreen displays might seem cumbersome and unnecessary at first. But I wonder what half a century of computing that way would have created differently from what we ended up with thanks to the PC. Instead of spending time repositioning and reprioritizing windows — a task that has somehow become a significant part of modern computer use — dedicated displays would allow us to assign specific screens for ambient monitoring and others for focused work. The psychological impact could be profound. Choosing which information deserves its own physical space creates a different relationship with that information. It becomes less about managing digital real estate and more about curating meaningful, persistent contexts for different types of thinking. 
The Sonic Screwdriver: Intent as Interface The Doctor’s sonic screwdriver from Doctor Who represents perhaps the most elegant interface concept ever imagined: a universal tool that somehow interfaces with any technology through harmonic resonance. But the really interesting aspect isn’t the pseudo-scientific explanation — it’s how the device responds to intent rather than requiring learned commands or specific inputs. The sonic screwdriver suggests technology that adapts to human purpose rather than forcing humans to adapt to machine constraints. Instead of memorizing syntax, keyboard shortcuts, or navigation hierarchies, the user simply needs to clearly understand what they want to accomplish. The interface becomes transparent, disappearing entirely in favor of direct intention-to-result interaction. This points toward computing that works more like natural tool use — the way a craftsperson uses a hammer or chisel — where the tool extends human capability without requiring conscious attention to the tool itself. The Doctor’s screwdriver may, at this point, be indistinguishable from magic, but in a future with increased miniaturization, nanotech, and quantum computing, a personal device shaped by intent could be possible. Al’s Handlink: The Mind-Object In Quantum Leap, Al’s handlink device looks like a smartphone-sized Mondrian painting: no screen, no discernible buttons, just blocky areas of color that illuminate as he uses it. As the show progressed, the device became increasingly abstract until it seemed impossible that any human could actually operate it. But perhaps that’s the point. The handlink might represent a complete paradigm shift toward iconic and symbolic visual computing, or it could be something even more radical: a mind-object, a projection within a projection coming entirely from Al’s consciousness. A totem that’s entirely imaginary yet functionally real. 
In the context of the show, that explanation made sense to me — Al, after all, wasn’t physically there with his time-leaping friend Sam; he was a holographic projection from a stable time in the future. He could have looked like anything; so, too, could his computer. But the handlink as a mind-object also suggests computing that exists at the intersection of technology and parapsychology — interfaces that respond to mental states, emotions, or subconscious patterns rather than explicit physical inputs. What kind of computing would exist in a world where telepathy was as commonly experienced as the five senses?

Penny’s Multi-Page Computer: Hardware That Adapts

Inspector Gadget’s niece Penny carried a computer disguised as a book, anticipating today’s foldable devices. But unlike our current two-screen foldables arranged in codex format, Penny’s book had multiple pages, each providing a unique interface tailored to specific tasks. This represents customization at both the software and hardware layers simultaneously. Rather than software conforming to hardware constraints, the physical device itself adapts to the needs of different applications. Each page could offer different input methods, display characteristics, or interaction paradigms optimized for specific types of work. This could be achieved similarly to the Doctor’s screwdriver, but it could also be more within reach if we imagine this kind of layered interface as composed of individual modules. Google’s Project Ara was an inspiring foray into modular computing that, I believe, still has promise today, perhaps even more so thanks to 3D printing. What if you could print your own interface?

The Holodeck as Thinking Interface

Star Trek’s Holodeck is usually discussed as virtual reality entertainment, but some episodes showed it functioning as a thinking interface — a tool for conceptual exploration rather than just immersive experience.
When Data’s artificial offspring used the Holodeck to visualize possible physical appearances while exploring identity, it functioned much like we use Midjourney today: prompting a machine with descriptions to produce images representing something we’ve already begun to visualize mentally. In another episode, when crew members used it to reconstruct a shared suppressed memory, it became a collaborative medium for group introspection and collective problem-solving. In both cases, the interface disappeared entirely. There was no “using” or “inhabiting” the Holodeck in any traditional sense — it became a transparent extension of human thought processes, whether individual identity exploration or collective memory recovery.

Beyond the Screen, but Not the Body

Each of these examples suggests moving past our current obsession with maximizing screen real estate and window management. They point toward interfaces that work more like natural human activities: environmental awareness, tool use, conversation, and collaborative thinking. The best interfaces we never built aren’t just sleeker screens — they’re fundamentally different approaches to creating that unique space for thinking, communicating, creating, and experiencing that makes technology truly exciting. We’ve spent two decades consolidating everything into glass rectangles. Perhaps it’s time to build something different.