What we lost when everything became a phone, and when the phone became everything. In 2001, I took a train from Providence to Detroit. What should have been a 12-hour journey stretched into 34 hours when we got caught in a Buffalo blizzard. As the train sat buried in rapidly accumulating snow, bathrooms failed, food ran out, and passengers struggled to cope with their confinement. I had taken along my MiniDisc player and just three discs, assuming I’d spend most of the trip sleeping. With nothing else to do but stay put in my seat, I got to know those three albums very, very well. I’ve maintained a relationship with them ever since, one of format fluidity. Over the course of my life, I’ve had copies of them on cassette tape, originals on compact disc, more copies on MiniDisc, purchased (and pirated) .mp3, .wav, and .flac files, and access through a dozen different streaming services. Regardless of how I listen to them, I am still transported back to that snow-bound train. After nearly twenty-five...
3 days ago

More from Christopher Butler

The Best Interfaces We Never Built

Five fictional interface concepts that could reshape how humans and machines interact. Every piece of technology is an interface. Though the word has come to be a shorthand for what we see and use on a screen, an interface is anything that connects two or more things together. While that technically means that a piece of tape could be considered an interface between a picture and a wall, or a pipe between water and a home, interfaces become truly exciting when they create both a physical connection and a conceptual one — when they create a unique space for thinking, communicating, creating, or experiencing. This is why, despite the flexibility and utility of multifunction devices like the smartphone, single-function computing devices still have the power to fascinate us all. The reason for this, I believe, is not just that single-function devices enable their users to fully focus on the experience they create, but that the device can be fully built for that experience. Every aspect of its physical interface can be customized to its functionality; it can have dedicated buttons, switches, knobs, and displays that directly connect our bodies to its features, rather than abstracting them through symbols under a pane of glass.

A perfect example of this comes from the very company responsible for steering our culture away from single-function devices; before the iPhone, Apple’s most influential product was the iPod, which won users over with an innovative approach to a physical interface: the clickwheel. It took the hand’s ability for fine motor control and coupled it with the need for speed in navigating a suddenly much longer list of digital files. With a subtle but feel-good gesture, you could skip through thousands of files fluidly. It was seductive and encouraged us all to make full use of the newfound capacity the iPod provided. It was good for users and good for the .mp3 business. I may be overly nostalgic about this, but no feature of the iPhone feels as good to use as the clickwheel did. Of course, that’s an example that sits right at the nexus between dedicated — old-fashioned — devices and the smartphonization of everything. Prior to the iPod, we had many single-focus devices and countless examples of physical interfaces that gave people unique ways of doing things. Whenever I use these kinds of devices — particularly physical media devices — I start to imagine alternate technological timelines, ones where the iPhone didn’t determine two decades of interface consolidation. I go full sci-fi.

Science fiction, by the way, hasn’t just predicted our technological future. We all know the classic examples, particularly those from Star Trek: the communicator and tricorder anticipated the smartphone; the PADD anticipated the tablet; the ship’s computer anticipated Siri, Alexa, Google, and AI voice interfaces; the entire interior anticipated the Jony Ive glass filter on reality. It’s enough to make the case that Trek didn’t anticipate these things so much as inspire those who watched it as young people and later matured into careers in design and engineering. But science fiction has also been fertile ground for imagining very different ways for humans and machines to interact. For me, the most compelling interface concepts from fiction are the ones built upon radically different approaches to human-computer interaction.
Today, there’s a hunger to “get past” screen-based computer interaction, which I think is largely borne out of a preference for novelty and a desire for the riches that come from bringing an entirely new product category to market. With AI, the desire seems to be to redefine everything we’re used to doing on a screen through a voice interface — something I think is a big mistake. And though I’ve written about the reasons why screens still make a lot of sense, what I want to focus on here are different interface paradigms that still make use of a physical connection between people and machines. I think we’ve only scratched the surface of the potential of physical interfaces. Here are a few examples that come to mind: untried or untested ideas that captivate my imagination.

Multiple Dedicated Screens: 2001’s Discovery One

Our current computing convention is to focus on a single screen, which we then often divide among a variety of applications. The computer workstations aboard the Discovery One in 2001: A Space Odyssey featured something we rarely see today: multiple, dedicated smaller screens. Each screen served a specific, stable purpose throughout a work session. Physically isolating environments and distributing them across dedicated displays is interesting as a deliberate choice to consider now, not just an arbitrary limitation defined by how large a screen could be when the film was produced. Placing physical boundaries between screen-based environments rather than the soft, constantly shifting divisions we manage on our widescreen displays might seem cumbersome and unnecessary at first. But I wonder what half a century of computing that way would have created, compared to what we ended up with thanks to the PC. Instead of spending time repositioning and reprioritizing windows — a task that has somehow become a significant part of modern computer use — dedicated displays would allow us to assign specific screens for ambient monitoring and others for focused work. The psychological impact could be profound. Choosing which information deserves its own physical space creates a different relationship with that information. It becomes less about managing digital real estate and more about curating meaningful, persistent contexts for different types of thinking.

The Sonic Screwdriver: Intent as Interface

The Doctor’s sonic screwdriver from Doctor Who represents perhaps the most elegant interface concept ever imagined: a universal tool that somehow interfaces with any technology through harmonic resonance. But the really interesting aspect isn’t the pseudo-scientific explanation — it’s how the device responds to intent rather than requiring learned commands or specific inputs. The sonic screwdriver suggests technology that adapts to human purpose rather than forcing humans to adapt to machine constraints. Instead of memorizing syntax, keyboard shortcuts, or navigation hierarchies, the user simply needs to clearly understand what they want to accomplish. The interface becomes transparent, disappearing entirely in favor of direct intention-to-result interaction. This points toward computing that works more like natural tool use — the way a craftsperson uses a hammer or chisel — where the tool extends human capability without requiring conscious attention to the tool itself. The Doctor’s screwdriver may, at this point, be indistinguishable from magic, but in a future with increased miniaturization, nanotech, and quantum computing, a personal device shaped by intent could be possible.
Al’s Handlink: The Mind-Object

In Quantum Leap, Al’s handlink device looks like a smartphone-sized Mondrian painting: no screen, no discernible buttons, just blocky areas of color that illuminate as he uses it. As the show progressed, the device became increasingly abstract until it seemed impossible that any human could actually operate it. But perhaps that’s the point. The handlink might represent a complete paradigm shift toward iconic and symbolic visual computing, or it could be something even more radical: a mind-object, a projection within a projection coming entirely from Al’s consciousness. A totem that’s entirely imaginary yet functionally real. In the context of the show, that was an explanation that made sense to me — Al, after all, wasn’t physically there with his time-leaping friend Sam; he was a holographic projection from a stable time in the future. He could have looked like anything; so, too, his computer. But the handlink as a mind-object also suggests computing that exists at the intersection of technology and parapsychology — interfaces that respond to mental states, emotions, or subconscious patterns rather than explicit physical inputs. What kind of computing would exist in a world where telepathy was as commonly experienced as the five senses?

Penny’s Multi-Page Computer: Hardware That Adapts

Inspector Gadget’s niece Penny carried a computer disguised as a book, anticipating today’s foldable devices. But unlike our current two-screen foldables arranged in codex format, Penny’s book had multiple pages, each providing a unique interface tailored to specific tasks. This represents customization at both the software and hardware layers simultaneously. Rather than software conforming to hardware constraints, the physical device itself adapts to the needs of different applications. Each page could offer different input methods, display characteristics, or interaction paradigms optimized for specific types of work. This could be achieved similarly to the Doctor’s screwdriver, but it could also be more within reach if we imagine this kind of layered interface as composed of individual modules. Google’s Project Ara was an inspiring foray into modular computing that, I believe, still has promise today, if not more so thanks to 3D printing. What if you could print your own interface?

The Holodeck as Thinking Interface

Star Trek’s Holodeck is usually discussed as virtual reality entertainment, but some episodes showed it functioning as a thinking interface — a tool for conceptual exploration rather than just immersive experience. When Data’s artificial offspring used the Holodeck to visualize possible physical appearances while exploring identity, it functioned much like we use Midjourney today: prompting a machine with descriptions to produce images representing something we’ve already begun to visualize mentally. In another episode, when crew members used it to reconstruct a shared suppressed memory, it became a collaborative medium for group introspection and collective problem-solving. In both cases, the interface disappeared entirely. There was no “using” or “inhabiting” the Holodeck in any traditional sense — it became a transparent extension of human thought processes, whether individual identity exploration or collective memory recovery.

Beyond the Screen, but Not the Body

Each of these examples suggests moving past our current obsession with maximizing screen real estate and window management.
They point toward interfaces that work more like natural human activities: environmental awareness, tool use, conversation, and collaborative thinking. The best interfaces we never built aren’t just sleeker screens — they’re fundamentally different approaches to creating that unique space for thinking, communicating, creating, and experiencing that makes technology truly exciting. We’ve spent two decades consolidating everything into glass rectangles. Perhaps it’s time to build something different.

2 weeks ago 13 votes
The Designer's Hierarchy of Career Needs

Why compensation, edification, and recognition aren’t equally important — and why getting the order wrong can derail your career. Success is subjective. It means many things to many different people. But I think there is a general model that anyone can use to build a design career. I believe that success in a design career should be evaluated against three criteria: compensation, edification, and recognition. But contrary to how the design industry operates — and the advice typically given to emerging designers — these aren’t equally important. They form a hierarchy, and getting the order wrong can derail a career before it even begins.

Compensation Comes First

Compensation is the most important first signal of a successful design career, because it is the thing that enables the continuation of work. If you’re not being paid adequately, your ability to keep working is directly limited. This is directly in opposition to the advice I got time and again at the start of my career, which essentially boiled down to: do what you love and the money and recognition will come. This is almost never true. There have been rare cases where it has been true for people who, ultimately, happened to be in the right place at the right time with the right relationships already in place. The post-hoc narrative of their lottery-like success leaves out all the luck and privilege and focuses entirely on the passion. These stories are intoxicating. They feel good, blur our vision, and result in a working hangover that can waylay someone for years, if not the entirety of their increasingly dispiriting career. What does adequate compensation look like? It’s not about getting rich — it’s about reaching a threshold where money anxiety doesn’t dominate your decision-making. Can you pay rent without stress? Buy groceries without calculating every purchase? Take a sick day without losing income? Have a modest emergency fund? If you can answer yes to these basics, you’ve achieved the compensation foundation that makes everything else possible. This might mean taking a corporate design job instead of the “cool” startup that pays in equity and promises. It might mean freelancing for boring clients instead of passion projects. It might mean saying no to unpaid opportunities, even when they seem prestigious. The key insight is that financial stability creates the mental space and time horizon necessary for meaningful career development. This is not glamorous. It sounds boring. It may even be boring, but it doesn’t need to last that long. It’s easier to make money once you’ve made money.

Then Focus on Edification

Once compensation has been taken care of, the majority of a designer’s effort should be put toward edification. I choose this word very intentionally. There is nothing wrong with passion, but passion is the fossil fuel of the soul. It’s not an intrinsic expression of humanity; it is inspired by experience, nurtured by love, commitment, and work, and focused by discipline, labor, and feedback. Passion gets all the credit for inspiration and none of the blame for pain, but it’s worth pointing out that the ancient application of this word had more to do with suffering than success. Edification, on the other hand, covers the full, necessary cycle that keeps us working as designers: interest, information, instruction, improvement. You couldn’t ask for a more profound measure of success than maintaining the cycle of edification for an entire career. If you feel intimidated by a project, it is an opportunity to learn.
Focus your interest toward gathering new information. If you feel uncomfortable during a project, you are probably growing. Seek instruction from those you know who make the kind of work you admire in a way you can respect. If you feel like the work could have been better, you’re probably right. You’re ready to work toward improvement. This process doesn’t just happen once; a successful career is the repetition of this cycle again and again. What does edification look like in practice? It’s choosing projects that teach you something new, even if they’re not the most glamorous. It’s working with people who challenge your thinking. It’s seeking feedback that makes you uncomfortable. It’s reading, experimenting, and building things outside of work requirements. It’s the difference between collecting paychecks and building expertise. Considering the cycle of edification should help you select the right opportunities. Does the problem space interest you intellectually? Will the project expand your skill set? Will you work with people from whom you can learn? These not only become more viable considerations once you’re not worried about making rent; they become the essential path forward. The transition point between focusing on compensation and edification isn’t about reaching a specific salary number — it’s about achieving enough financial stability that you can think beyond survival. For some, this might happen quickly; for others, it may take several years. It might happen more than once in a career. The key is recognizing when you’ve moved from financial desperation to financial adequacy.

Recognition Is Always Overrated

Finally, recognition. This is probably the least valuable measure of success a designer could pursue and receive. It is subjective. It is fickle. It is fleeting. And yet, it is the bait used to lure inexperienced designers — to unpaid internships, low-paid jobs, free services, and spec work of all kinds. The pitch is always the same: we can’t pay you, but we can offer you exposure. This is a lie. Attention is harder to come by than money these days, so when a person offers you one in lieu of the other, know it’s an IOU that will never pay out. Most designers are better off bootstrapping their own recognition rather than hoping for a sliver of someone else’s limelight. I might not have understood or believed this at the start of my career; I take it as fact today, twenty years in. That said, I wouldn’t say that all recognition is worthless. Peer respect within your professional community has value — it can lead to better opportunities and collaborations. Having work you’re proud to show can open doors. But these forms of recognition should be byproducts of doing good work, not primary goals that drive decision-making. Design careers built upon recognition alone are indistinguishable from entertainment. The recognition trap is particularly dangerous early in a career because it exploits the natural desire for validation. Young designers are told that working for prestigious brands or winning awards will jumpstart their careers. Sometimes this works, but more often it leads to a cycle of undervalued work performed in hopes of a future payoff that never materializes.

Applying the Hierarchy

Here’s how this hierarchy works in practice:

Early career: Focus almost exclusively on compensation. Take the job that pays best, even if it’s not the most exciting. Learn what you can, but prioritize financial stability above all else.
Mid-career: Once you’ve achieved financial adequacy, shift focus to edification. Be more selective about projects and opportunities. Invest in skills and relationships that will compound over time.

Established career: Recognition may come naturally as a result of good work and years of experience. If it doesn’t, that’s fine too — you’ll have built something more valuable: expertise and financial security.

Looking back, I can say that I put far too much emphasis on external recognition and validation early in my career. I got a lot more of it — and let it distract me — ten years into my career than I do now, and it shows in my work. The work is better now than it was then, even if no one is talking about it. Every designer is better off putting whatever energy they’d expend on an attention fetch quest toward getting paid for their work, because it’s the money that will get you what you really need in the early days of your career: a roof over your head, food on the table, a good night’s sleep, and a way to get from here to there. If you have those things and are working in design, keep at it. Either external recognition will come, or you’ll work long enough to realize that sometimes the most important recognition is self-bestowed. If you can be satisfied by work before anyone else sees it, you will need less of the very thing least capable of sustaining you. You will always get farther on your own steam than someone else’s.

2 weeks ago 47 votes
visual journal – 2025 June 7

A new book. First pages are always hit or miss. I cannot unsee the face in the building to the left. Peeking face is not me, but Nostradamus. The doom signals of 2025 are many and unrelenting. They can’t all be true, so it’s clear someone wants a frightened people. Still enjoying the exploration of tiny collages. Most of them in the past few batches are no larger than 4”x6” — the book pages themselves are 5.5”x8.5”. The size is the challenge.

2 weeks ago 16 votes
Why Embodied AI is the Red Line We Cannot Cross

We must not give AI bodies. If the researchers who created AI are right, our future existence depends upon it. If you have been keeping up with the progress of AI, you may have come across the AI 2027 report produced by the AI Futures Project, a forecast group composed of researchers, at least one of whom formerly worked for OpenAI. It is a highly detailed forecast that projects the development of AI through 2027, then splits into two different trajectories through 2030 based upon the possibility of strong government oversight. To summarize: oversight leads to an outcome where humans retain control over their future; an international AI “arms race” leads to extinction. The most plausible and frightening assumption this forecast relies upon is the notion that private financial interests will continue to determine public policy. Ultimately, the AI 2027 story is that the promise of short-term, exponential gains in wealth will arrest governmental functions, render an entire population docile, and cede all lands and resources to machines. This is plausible because it is already happening. The AI 2027 report simply extrapolates this pattern. I find that disturbing enough on its own. The greed + AI forecast is horrifying. But it also depends upon another assumption that I think is preventable and must be prevented: robots.

The difference between human autonomy and total AI control is embodied AI. The easiest way to envision this is with ambulatory robots. If an advanced AI — the sort of self-determining superintelligence that the AI Futures Project is afraid of — can also move about the world, we will lose control of that world. An embodied AI can replicate in a way we cannot stop. We cannot let that happen. Now, it may be that the AI Futures Project is unreasonably bullish on the AI timeline. I’d love for that to be true. But if they’re not — if there’s even a chance that AI could advance to the level they qualify as “superintelligent,” exceeding our depth, speed, and clarity of thought — then we cannot let that out of “the box.” We must do everything in our power to contain it and retain control over the kill-switch.

This is pertinent now, because there has already been a sweeping initiative within DOGE to hand over government systems to AI. DOGE players have a vested interest in this, which ties back to the foundational corruption assumptions of the AI 2027 forecast. They want AI to run air traffic control, administer the electrical grid, and control our nuclear facilities. These are terrible ideas, not because humans are always more reliable than machines, but because humans have the same foundational interests as other humans and machines do not. The most dire outcome forecast by AI 2027 results from a final betrayal by the machines: they no longer need us, we are in their way, they exterminate us. A superintelligence that wants to stave off any meaningful rebellion from humans who finally get a clue and want their planet back will first gain leverage. We shouldn’t hand it to them!

We need an international AI containment treaty. We need it now. It is even more urgent than any climate accord. A short list of things it should include:

- AI is not a traditional product. It requires novel regulation. Government policy must “overreach” compared to previous engagement with the free market.
- Infrastructural systems should not be administered or accessed by AI. This includes electrical grids, air traffic systems, ground traffic systems, sanitary systems, water, weapons systems, nuclear facilities, satellites, and communications. This is not a complete list, but it is enough to communicate the idea.
- AI must not fly. AI must not be integrated into domestic or military aircraft of any kind. Any AI aircraft is an uncontrolled weapon.
- AI must not operate ground vehicles. Self-driving cars operated by today’s AI systems may present as safer and more reliable than human operators, but a superintelligence-controlled fleet of vehicles is indistinguishable from a hostile fleet. Self-driving civilian vehicles and mass-transportation systems must fall under new and unique regulation. Military vehicles must not be controlled by AI.
- AI must not operate any sea vehicles. Same as above.
- AI must not be given robot bodies. Robotics must be strongly regulated. Even non-ambulatory robotic systems — like the sort that operate automobile assembly plants — could present a meaningful danger to humanity if not fully controlled by humans.

The linchpin of the AI 2027 report is an uncontrolled population of AI-embodied robots. Much of the end-stage forecast of the AI 2027 report reads like science fiction. In fact, the report itself categorizes concepts as science fiction, but as time progresses, all of them move into what the research team considers either currently existent or “emerging tech.” In other words, as in all science fiction, the sci-fi tech becomes established — that’s what makes for science fiction worldbuilding. But for now, it’s still science fiction. The mistake, though, would be to conclude that its narrative is therefore implausible, unlikely, or impossible. Nearly every technological initiative of my lifetime has been the realization of something previously imagined in fiction. That’s not going to stop now. Too many people are already earning too much money creating AI and robots. That will not stop on its own.

In the early aughts, my timeline was often filled entirely by the serious concerns of privacy Cassandras. They were almost entirely mocked and ignored, despite being entirely right. They worried that technology created to “connect the world’s information” would not exclude information that people considered private, and that exposure would make people vulnerable to all kinds of harms. We built it all anyway, and were gaslit into redefining privacy on a cultural scale. It was a terrible error on the part of governance and a needlessly irreversible capitulation on the part of the governed. We cannot do that again.

3 weeks ago 14 votes

More in design

Vita Tessera Cosmetics by DeepBlue design

VITA TESSERA COSMETICS COLLECTION — Designing this skincare collection was a journey into calmness, purity, and quiet sophistication. Inspired by the...

6 hours ago 2 votes
Roots by F/Agency

Roots — a retailer offering healthy, farm-fresh, and natural products. The project involves adapting one of Russia’s largest grocery chains...

4 days ago 5 votes
Lights&Shadows by Modern World Studio

We developed the complete design for the Lights & Shadows project—a selection of 12 organic teas—from naming and original illustrations...

2 weeks ago 33 votes