More from Ryan Mulligan
Hey there. It has been a minute since my last post. I was semi-recently tagged by Zach Leatherman to (optionally) participate in this year's Blog Questions Challenge. I had planned on doing it then, but life really hit hard as we entered this year and it has not let up. Energy dedicated to my personal webspace has been non-existent. I am tired. Hopefully this post can help shake off some of the rust and bring me back to writing and sharing with you lovely folks. I won't be tagging anyone to do this challenge. However, if you're inspired to write your own after reading mine, I'd love for you to share it with me.

Why did you start blogging in the first place?

Blogging has always been a part of my web experience. The earliest I can remember is building my band a GeoCities website back in high school. I'd share short passages about new song ideas, how last night's show went, stuff like that. I also briefly had a Xanga blog running. My memory is totally faded on what exactly I wrote in there—I'm not eager to dig up high school feelings either—but I'm fairly certain all of those entries are just lost digital history. Having an "online journal" was such a fresh idea at the time. Sharing felt more natural and real before the social media platforms took over. [blows raspberry] I've completely dated myself and probably sound like "old man yells at cloud" right now.

Anyway, I pretty much stopped blogging for a while after high school. I turned my efforts back to pen on paper, keeping journals of lyrics, thoughts, and feelings mostly to myself. My dev-focused blogging that you may be familiar with really only spans the last decade, give or take a couple years.

What platform are you using to manage your blog and why?

At the moment and for the foreseeable future, I'm using 11ty. I published a short post about migrating to 11ty back in 2021. I still feel the same sentiments and still admire those same people. And many new community friends as well!

Have you blogged on other platforms before?

I've definitely used WordPress, but I can't remember what the heck I was even blogging about during that time. Then I switched to just writing posts directly in HTML files and FTP'ing them up to some server somewhere. Pretty silly in retrospect, but boy did I feel alive.

How do you write your posts?

Always via laptop, never on my phone. I manage posts in markdown files, push them up to a GitHub repo, and let that automatically redeploy my site on Netlify. Editing content is done in VSCode. I've debated switching to some lightweight CMS, connecting to Notion or Obsidian, but why introduce any more complexity and mess with what works fine for me?

When do you feel most inspired to write?

Typically I'll write up a post about something new I discovered while on my wild coding escapades, whether at work or in my free time. If I have trouble finding solutions to my particular problem on the world wide webs, I'm even more inclined to post about it. Most of my ideas are pursued on weekends, but I've had some early morning or late night weekday sessions. What I'm trying to say is that anytime is a good time for blogging. It's like pizza when it's on a bagel.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

It depends. If I've been writing for a long period of time, I find it best to take a breather before publishing. When I feel ready, I'll post and share it with a small group for feedback and to catch grammatical errors. Then I eventually add it to whatever social channels feel right. Used to be Twitter, but straight up screw that garbage temple. I'll likely post on Bluesky, toot on Mastodon. Other times I'll slap a new post on this site and not share it on any socials. Let the RSS feeds do their magic.

What's your favorite post on your blog?

I don't know if I have a favorite. Can I love them all equally? Well, besides that CSS Marquee one. Damn that blog post for becoming so popular.

Any future plans for your blog?

Once things settle down in life, I think I'll be ready for a redesign. I had a blast building the current version inspired by Super Mario Wonder. Until then? More blogging. It won't be super soon, but I do have a few zesty article ideas percolating in this old, tired brain.
Once again, here I am, hackin' away on horizontal scroll ideas. This iteration starts with a custom HTML tag. All the necessities for scroll overflow, scroll snapping, and row layout are handled with CSS. Then, as a little progressive enhancement treat, button elements are connected that scroll the previous or next set of items into view when clicked. Behold! The holy grail of scrolling rails... the scrolly-rail!

CodePen demo
GitHub repo

I'm being quite facetious about the "holy grail" part, if that's not clear. 😅 This is an initial try on an idea I'll likely experiment more with. I've shared some thoughts on potential future improvements at the end of the post. With that out of the way, let's explore!

The HTML

Wrap any collection of items with the custom tag:

<scrolly-rail>
  <ul>
    <li>1</li>
    <li>2</li>
    <li>3</li>
    <!-- and so on -->
  </ul>
</scrolly-rail>

The custom element script checks whether the direct child within scrolly-rail is a wrapper element, which is true for the HTML above. It is possible to have items without a wrapper element, but note that when the script runs and button controls are connected, sentinel elements are inserted at the start and end bounds of the scroll container. Wrapping the items makes controlling the spacing between them much easier and avoids undesired gaps caused by these sentinels. We'll discover what the sentinels are for later in the post.

The CSS

Here are the main styles for the component:

scrolly-rail {
  display: flex;
  overflow-x: auto;
  overscroll-behavior-x: contain;
  scroll-snap-type: x mandatory;

  @media (prefers-reduced-motion: no-preference) {
    scroll-behavior: smooth;
  }
}

When JavaScript is enabled, sentinel elements are inserted before and after the unordered list (<ul>) element in the HTML example above. Flexbox ensures that the sentinels are positioned on either side of that element; we'll find out why later in this post. Containing the overscroll behavior prevents us from accidentally triggering browser navigation when scrolling beyond either edge of the scrolly-rail container. scroll-snap-type enforces mandatory scroll snapping. Smooth scrolling behavior applies when items scroll into view on button click, or when interactive elements (links, buttons, etc.) inside items overflowing the visible scroll area are focused.

Finally, scroll-snap-align: start should be set on the elements that will snap into place. This snap position aligns an item to the beginning of the scroll snap container. In the HTML above, this applies to the <li> elements:

scrolly-rail li {
  scroll-snap-align: start;
}

As mentioned earlier, this is everything our component needs for layout, inline scrolling, and scroll snapping. Note that the CodePen demo takes it a step further with some additional padding and margin styles (check out the demo CSS panel). However, if we'd like to wire up controls, we'll need to include the custom element script in our HTML.

The custom element script

Include the script file on the page:

<script type="module" src="scrolly-rail.js"></script>

To connect the previous/next button elements, give each an id value and add those values to the data-control-* attributes on the custom tag:

<scrolly-rail
  data-control-previous="btn-previous"
  data-control-next="btn-next"
>
  <!-- ... -->
</scrolly-rail>

<button id="btn-previous" class="btn-scrolly-rail">Previous</button>
<button id="btn-next" class="btn-scrolly-rail">Next</button>

Now clicking these buttons will pull the previous or next set of items into view.
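The script in the repo is the source of truth here, but to make the wiring concrete, here's a minimal sketch of how a scrolly-rail element might connect its buttons and scroll by the number of fully visible items. The method names, and the assumption that the items are <li> elements, are mine for illustration, not necessarily what the repo does:

```js
// Minimal sketch of a scrolly-rail custom element: button wiring only.
// Names like scrollByVisibleItems are illustrative, not from the repo.
class ScrollyRail extends HTMLElement {
  connectedCallback() {
    // Look up the optional previous/next controls by the ids provided
    // in the data-control-previous / data-control-next attributes.
    this.prevButton = document.getElementById(this.dataset.controlPrevious ?? "");
    this.nextButton = document.getElementById(this.dataset.controlNext ?? "");

    this.prevButton?.addEventListener("click", () => this.scrollByVisibleItems(-1));
    this.nextButton?.addEventListener("click", () => this.scrollByVisibleItems(1));
  }

  // Scroll by however many items are currently fully visible,
  // in the given direction (-1 = previous, 1 = next).
  scrollByVisibleItems(direction) {
    // Assumes the items are <li> elements, as in the markup above.
    const items = this.querySelectorAll("li");
    if (!items.length) return;

    // Approximate: uses the first item's width and ignores any flex gap.
    const itemWidth = items[0].getBoundingClientRect().width;
    const visibleCount = Math.max(1, Math.floor(this.clientWidth / itemWidth));

    this.scrollBy({ left: direction * visibleCount * itemWidth });
  }
}

customElements.define("scrolly-rail", ScrollyRail);
```

The sketch leans on the CSS above: because the element itself is the scroll container with smooth scroll-behavior and mandatory snapping, a plain scrollBy call settles onto a snap point.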
The amount of items to scroll by is based on how many are fully visible in the scroll container. For example, if we see three visible items, clicking the "next" button will scroll the subsequent three items into view.

Observing inline scroll bounds

Notice the "previous" button element in the demo's top component: as we begin to scroll to the right, the button appears. Scrolling to the end causes the "next" button to disappear. Similarly, for the bottom component we can see either button fade when its respective scroll bound is reached.

Recall the sentinels discussed earlier in this post? With a little help from the Intersection Observer API, the component watches for either sentinel intersecting the visible scroll area, indicating that we've reached a boundary. When this happens, a data-bound attribute is toggled on the respective button. This presents the opportunity to alter styles and provide additional visual feedback.

.btn-scrolly-rail {
  /* default styles */
}

.btn-scrolly-rail[data-bound] {
  /* styles to apply to the button at the boundary */
}

Future improvements

I'd love to hear from the community, most specifically on improving the accessibility story here. Here are some general notes:

- I debated whether button clicks should pass feedback to screen readers, such as "Scrolled next three items into view" or "Reached scroll boundary", but felt unsure if that created unforeseen confusion.
- For items that contain interactive elements: if a new set of items scrolls into view and a user tabs into the item list, should the initial focusable element start at the snap target? This could pair well with navigating the list using keyboard arrow keys.
- Is it worth authoring intersecting sentinel "enter/leave" events that we can listen for? Something like: scroll bound reached? Do a thing. Leaving the scroll bound? Revert the thing we just did or do another thing. Side note: prevent these events from firing when the component script initializes. (A rough sketch of this idea follows at the end of the post.)
- How might this code get refactored once scroll snap events are widely available? I imagine we could check for when the first or last element becomes the snap target to handle toggling data-bound attributes. Then we could remove the Intersection Observer functionality.

And if any folks have other scroll component solutions to share, please reach out or open an issue on the repo.
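To make the "enter/leave" events idea above a bit more concrete, here's a hypothetical sketch of how the sentinel observation could toggle data-bound and dispatch custom events a consumer might listen for. The event names, detail shape, and function signature are all invented for illustration and are not part of the current component:

```js
// Hypothetical sketch: observe the start/end sentinels, toggle data-bound
// on the matching button, and surface "enter/leave" events on the element.
function observeBounds(rail, startSentinel, endSentinel, prevButton, nextButton) {
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        const isStart = entry.target === startSentinel;
        const button = isStart ? prevButton : nextButton;

        // data-bound is the styling hook described above.
        button?.toggleAttribute("data-bound", entry.isIntersecting);

        // Invented event names a consumer could listen for.
        rail.dispatchEvent(
          new CustomEvent(
            entry.isIntersecting ? "scrolly-rail:bound-enter" : "scrolly-rail:bound-leave",
            { detail: { bound: isStart ? "start" : "end" } }
          )
        );
      }
    },
    // The rail itself is the scroll container, so it acts as the root.
    { root: rail, threshold: 0 }
  );

  // Note: IntersectionObserver fires once on observe(); per the side note
  // above, a real implementation would likely skip that initial callback.
  observer.observe(startSentinel);
  observer.observe(endSentinel);
  return observer;
}

// A consumer might then react to reaching a boundary:
// document.querySelector("scrolly-rail")
//   .addEventListener("scrolly-rail:bound-enter", (e) => console.log(e.detail.bound));
```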
Over the last few months or so, I have been fairly consistent with getting outside for Sunday morning runs. A series of lower body issues had prevented me from doing so for many years, but it was an exercise I had enjoyed back then. It took time to rebuild that habit and muscle, but I finally got past doing it begrudgingly.

Back in the day (what a weird phrase to say, how old am I?) I would purchase digital copies of full albums. I'd use my run time to digest the songs in the order the artist intended. Admittedly, I've become a lazy listener now, relying on streaming services to surface playlists that I mindlessly select to get going. I want to be better than that, but that's a story for another time. These days, my mood for music on runs can vary: some sessions I'll pop in headphones and throw on some tunes, other times I head out free of devices (besides a watch to track all those sweet, sweaty workout stats) and simply take in the city noise.

Before I headed out for my journey this morning, a friend shared a track from an album of song covers in tribute to Refused's The Shape of Punk to Come. The original is a treasured classic, a staple LP from my younger years, and I can still remember the feeling of the first time it struck my ears. Its magic is reconjured every time I hear it. When that reverb-soaked feedback starts on Worms of the Senses / Faculties of the Skull, my heart rate begins to ascend. The anticipation builds, my entire body well aware of the imminent explosion of sound. As my run began, I wasn't sure if I had goosebumps from the morning chill or the wall of noise about to ensue. My legs were already pumping. I was fully present, listening intently, ready for the blast. The sound abruptly detonated, sending me rocketing down the street towards the rising sun.

My current running goal is 4-in-40: traversing four miles in under forty minutes. I'm certainly no Prefontaine, but it's a fair enough objective for my age and ability. I'll typically finish my journey in that duration or slightly spill over the forty-minute mark. Today was different. Listening to The Shape of Punk to Come sent me cruising an extra quarter mile beyond the four before my workout ended. The unstoppable energy from that album is truly pure runner's fuel.

There's certainly some layer of nostalgia, my younger spirit awakened and reignited by thrashing guitars and frantic rhythms, but many elements and themes on this record were so innovative at the time it was released. New Noise is a prime example that executes the following feeling flawlessly: build anticipation, increase the energy level, and then, right as the song seems prepped to blast off, switch to something unexpected. In this case, the guitars drop out to make way for some syncopated celestial synths layered over a soft drum rhythm. The energy sits in a holding pattern, unsure whether it should burst or cool down, when suddenly—

Can I scream?!

Oh my goodness, yes. Yes you can. I quickly morphed into a runner decades younger. I had erupted, my entire being barreling full speed ahead. The midpoint of this track pulls out the same sequence of build up, drop off, and teasing just long enough before unleashing another loud burst of noise, driving to its explosive outro. As the song wraps up, "The New Beat!" is howled repeatedly to a cheering crowd that, I would imagine, had not been standing still.

I definitely needed a long stretch after this run.
I recently stumbled on a super cool, well-executed hover effect on the clerk.com website, where a bloom of tiny pixels lights up, their glow staggering from the center to the edges of its container. With some available free time over this Thanksgiving break, I hacked together my own version of a pixel canvas background shimmer. It quickly evolved into a pixel-canvas Web Component that can be enjoyed in the demo below. The component script and demo code have also been pushed up to a GitHub repo.

Open CodePen demo

Usage

Include the component script and then insert a pixel-canvas custom element inside the container it should fill:

<script type="module" src="pixel-canvas.js"></script>

<div class="container">
  <pixel-canvas></pixel-canvas>
  <!-- other elements -->
</div>

The pixel-canvas stretches to the edges of the parent container. When the parent is hovered, glimmering pixel fun ensues.

Options

The custom element has a few optional attributes available to customize the effect. Check out the CodePen demo's HTML panel to see how each variation is made.

- data-colors takes a comma-separated list of color values.
- data-gap sets the amount of space between each pixel.
- data-speed controls the general duration of the shimmer. This value is slightly randomized for each pixel, which, in my opinion, adds a little more character.
- data-no-focus is a boolean attribute that tells the Web Component not to run its animation whenever sibling elements are focused. The animation runs on sibling focus by default.

There's likely more testing and tweaking necessary before I'd consider using this anywhere, but my goal was to run with this inspiration simply for the joy of coding. What a mesmerizing concept. I tip my hat to the creative engineers over at Clerk.
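The actual pixel-canvas.js lives in the repo linked above. Purely as a sketch of the general shape, with invented defaults and the shimmer drawing itself stubbed out, a component like this might parse its attributes and size a canvas to its parent container like so:

```js
// Rough sketch of a pixel-canvas style component.
// The real pixel-canvas.js differs; names and defaults here are illustrative.
class PixelCanvas extends HTMLElement {
  connectedCallback() {
    // Parse the optional data-* attributes, falling back to made-up defaults.
    this.colors = (this.dataset.colors ?? "#f8fafc,#f1f5f9,#cbd5e1").split(",");
    this.gap = Number(this.dataset.gap ?? 5);
    this.speed = Number(this.dataset.speed ?? 35);
    this.noFocus = this.hasAttribute("data-no-focus");

    // A canvas that stretches to the edges of the parent container.
    // Assumes the parent is positioned (e.g. position: relative), as in the demo.
    this.canvas = document.createElement("canvas");
    this.canvas.style.cssText = "position: absolute; inset: 0; width: 100%; height: 100%;";
    this.appendChild(this.canvas);

    // Keep the canvas resolution in sync with the parent's size.
    const parent = this.parentElement;
    this.resizeObserver = new ResizeObserver(() => {
      this.canvas.width = parent.clientWidth;
      this.canvas.height = parent.clientHeight;
    });
    this.resizeObserver.observe(parent);

    // Animate on hover and, unless data-no-focus is set, on sibling focus.
    parent.addEventListener("mouseenter", () => this.start());
    parent.addEventListener("mouseleave", () => this.stop());
    if (!this.noFocus) {
      parent.addEventListener("focusin", () => this.start());
      parent.addEventListener("focusout", () => this.stop());
    }
  }

  disconnectedCallback() {
    this.resizeObserver?.disconnect();
  }

  start() { /* kick off the shimmer animation on the canvas */ }
  stop() { /* wind the animation down */ }
}

customElements.define("pixel-canvas", PixelCanvas);
```

Hover and focus listeners sit on the parent so that focusing sibling elements inside the container triggers the shimmer, mirroring the behavior described above.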
More in design
IVIA Natural Skincare, Redefined Ivia is not just skincare—it’s a quiet celebration of self. Crafted with precision and purity, each...
What we lost when everything became a phone, and when the phone became everything.

In 2001, I took a train from Providence to Detroit. What should have been a 12-hour journey stretched into 34 when we got caught in a Buffalo blizzard. As the train sat buried in rapidly accumulating snow, bathrooms failed, food ran out, and passengers struggled to cope with their containment. I had taken along my minidisc player and just three discs, assuming I’d spend most of the trip sleeping. With nothing else to do but stay put in my seat, I got to know those three albums very, very well.

I’ve maintained a relationship with them with format fluidity. Over the course of my life, I’ve had copies of them on cassette tape, originals on compact disc, more copies on MiniDisc, purchased (and pirated) .mp3, .wav, and .flac files, and access through a dozen different streaming services. Regardless of how I listen to them, I am still transported back to that snow-bound train.

After nearly twenty-five years, I had come to assume that this effect would be permanent. But I never expected it to intensify — in a sudden feeling of full return to the body of my youth — like it did when I dug out my old MiniDisc player, recharged its battery, and pressed play on the very same discs I held back in 2001. The momentary flash of being back on that train, of the raw exhilaration of the cold and of being alone in it, of reinhabiting a young mind still reeling from what was formative, culture-wide shock on September 11th — it all came back. This was truly a blast from the past.

In some ways, I am simply describing true nostalgia. I had a sense of return, and a mix of pleasure and pain. But unlike other times, when simply replaying some music would trigger recall, this was as if the physical objects — the player and discs themselves — contained the original moment, and turning it on and pressing play released it back into my mind.

To the Everything Machine and back

When Steve Jobs unveiled the first iPhone, he presented it as three essential devices in one: “an iPod, a phone, and an internet communicator.” The audience cheered at each revelation. Of course they did — who wouldn’t want to carry one device instead of three? For a citizen of the early aughts, a single “everything machine” was the dream. The consolidation seemed like an obvious win for convenience, for progress, for the future itself.

Nearly twenty years later, we can see that this convergence did more than just empty our pockets of multiple devices. It fundamentally transformed our relationship with technology and information. Today’s iPhone isn’t just a unified tool for known purposes; it has become Marshall McLuhan’s medium-as-message, reshaping not just how we do things but what we choose to do and think about, what we know and want to know, what we believe and are. I doubt even Steve Jobs, a man capable of grandiosity to the extreme, could have imagined the epistemological and ontological effects of the iPhone.

This realization has been progressive. Books, films, music, and a near constant conversation have been the public reckoning with the everything machine. We grapple with our newly acquired digital addiction in as many ways as it manifests. We do everything we can to counter the everything machine. One thing I have done, mostly out of curiosity, is to go back to the single-function devices I have accumulated over the years. Some of them have been put away, turned off for longer than they were ever out and on.
Simply turning them back on has been illuminating. Each one has reactivated a memory. Each one has reminded me of what it was like to use it for the first time, back at the time at which it was the latest and greatest — when it hinted at a world to come as much as it achieved something its present required. What started as a backward-looking survey of sorts — sifting through a catalog of dusty devices and once-murky memories — revealed something unexpected: not only did these older, limited devices create a different kind of relationship with technology, catalyzing imagination rather than just consuming attention, there is still a place for them today.

For context, here’s a list of the more interesting devices I have in what is a small, personal museum of technology.

A partial catalog of my personal device library:

Device | Media | Year
Nintendo GameBoy | Video Game Console | 1989
Qualcomm QCP-860 | Mobile Phone | 1999
Sony CMT CP11 | Desktop Audio System | 2000
Sony MXD-D40 | CD/MiniDisc Deck | 2001
Apple iPod 1st Generation | mp3 Player | 2001
Handspring Visor | PDA | 2001
Cybiko Classic | PDA | 2001
Tascam MD-350 | MiniDisc Player/Recorder | 2001
Sony MZ-B10 | Portable MiniDisc Player/Recorder | 2002
Siemens C55 | Mobile Phone | 2002
Sony CLIÉ PEG-SJ22 | PDA | 2003
BlackBerry Quark | Smartphone | 2003
Canon PowerShot A70 | Digital Camera | 2003
Sony Net MD Walkman MZ-N920 | Portable MiniDisc Player/Recorder | 2004
Sony DCR-HC36 | MiniDV Camcorder | 2006
OLPC XO | Laptop Computer | 2007
Sony NWZ-S615F | Digital Media Player | 2007
Sony NWZ-A815 | Digital Media Player | 2007
Sony NWZ-A726 | Digital Media Player | 2008
Cambridge Audio CXC | Compact Disc Transport | 2015
Sony NW-E394 | Digital Media Player | 2016
Sony NW-A105 | Digital Media Player | 2019
Yoto Player 1st Generation | Audio Player | 2020
Yoto Mini | Audio Player | 2021
Cambridge Audio CXA81 | Integrated Amplifier | 2020

easier to use. But if it is a better experience for the writer, who can argue with that? After all, in a world of as many options as we have, ease is not the only measure of value; there are as many measures as there are choices. Subjective experience might as well take the lead.

There is also a common worry that returning to single-purpose devices is risky — that their media is somehow more fragile than cloud-hosted digital content. But I’ve found the opposite to be true. I returned to Blu-ray when favorite shows vanished from streaming services. I started recording voices and broadcasts to MiniDisc when I realized how many digital files I’d lost between phone upgrades. My old MiniDiscs still work perfectly, my MiniDV tapes still play, my GameBoy cartridges still save games. It’s not the media that’s fragile, it’s the platform.

And sometimes the platform wasn’t fragile, the market was. MiniDisc is, again, a great example of this. The discs were more portable, more robust, and more easily recordable than larger Compact Discs, and the players were smaller and more fully featured. But they ran right into mp3 players in the marketplace. The average consumer valued high capacity and convenience over audio quality and recording features. But guess which devices still work just as they did back then, with less effort? The MiniDisc players. Most mp3 players that aren’t also phones require a much greater effort to use today because of their dependence upon another computer and software that hasn’t been maintained. And, unlike most devices made today, older devices are much more easily repaired and modified. Of my list above, not a single device failed to do what it was created to do.
Besides comprising a museum of personal choices, these devices are a fascinating timeline of interface design. Each one represents a unique experiment in human-computer interaction, often feeling alien compared to today’s homogeneous landscape of austere, glass-fronted rectangles. Re-exploring them reminds me that just because an idea was left behind doesn’t mean it wasn’t valuable. Their diversity of approaches to simple problems suggests paths not taken, possibilities still worth considering.

That the interface is physical, and in some cases also graphical, makes for a unique combination of efficiency and sensory pleasure. Analog enthusiasts, particularly in the hi-fi space, will opine on things like “knob-feel,” and they have a point. When a button, switch, or knob has been created to meet our hands and afford us fine-tuning control over something buried within a circuit board, it creates an experience totally unlike tapping a symbol projected onto glass. It’s not that one is objectively better than another — and context obviously matters here — but no haptic engine has replicated what a switch wired with intention can do for a fingertip. Today’s smartphone reviewers will mention button “clickiness,” but if that’s what gets you excited, I encourage you to flip a GameBoy’s switch again and feel the click that precedes the Nintendo chime; eject a MiniDisc and feel the spring-loaded mechanism’s vibration against the palm of your hand; drag the first iPod’s clickwheel with your thumb in a way that turned a lo-fi text list of titles into something with weight.

Physicality is what makes a device an extension of a body. Function is what makes a device an extension of a mind. And single-function devices, I believe, do this better. By doing less, of course, they can only be so distracting. Compared to the everything machine and the permanent state of cognitive fracture it has created, this is something we should look back upon with more than a bit of nostalgia. We still have something to learn from a device that is intentionally limited and can fully embody that limitation.

But the single-function device doesn’t just do less; it creates a different kind of mental space. A GameBoy can only play games, but this limitation means you’re fully present with what you’re doing. A MiniDV camcorder might be less convenient than a smartphone for capturing video, but its dedicated purpose makes you think more intentionally about what you’re recording and why. For many contemporary enthusiasts, the limitations of old media create artifacts and effects that are now aesthetically desirable: they want the lower resolution, the glitchiness, and the blurring of old camcorders, in the same way that modern digital camera users apply software-driven film emulation recipes to synthesize the effects once produced by developing physical film. The limitation heightens the creation.

Each device a doorway

These devices remind us that technological progress isn’t always linear. Sometimes what we gain in convenience, we lose in engagement. The friction of switching between different devices might have been — and remains — inefficient, but it created natural boundaries between different modes of activity. Each device was a doorway to a specific kind of experience, rather than a portal to endless possibility. Modern devices have their place. When it comes to remaining in communication, I wouldn’t trade my current smartphone for the phone I used twenty years ago.
As critical as I am of the everything machine, I’m more inclined to work on building better personal use habits than to replace it with a worse experience of the features I use. But there is also room for rediscovering old devices and maintaining relationships with technologies that do less. I actually prefer playing movies and music on physical media rather than through a streaming interface; I would jump at the chance to reimagine my smartphone with fewer features and a more analog interface.

Limitations expand our experience by engaging our imagination. Unlimited options arrest our imagination by capturing us in the experience of choice. One, I firmly believe, is necessary for creativity, while the other is its opiate. Generally speaking, we don’t need more features. We need more focus. Anyone working in interaction and product design can learn from rediscovering how older devices engaged the mind and body to create an experience far more expansive than their function. The future of computing, I hope, is one that will integrate the concept of intentional limitation. I think our minds and memories will depend upon it.
Roots — a retailer offering healthy, farm-fresh, and natural products. The project involves adapting one of Russia’s largest grocery chains...
Five fictional interface concepts that could reshape how humans and machines interact.

Every piece of technology is an interface. Though the word has come to be a shorthand for what we see and use on a screen, an interface is anything that connects two or more things together. While that technically means that a piece of tape could be considered an interface between a picture and a wall, or a pipe between water and a home, interfaces become truly exciting when they create both a physical connection and a conceptual one — when they create a unique space for thinking, communicating, creating, or experiencing.

This is why, despite the flexibility and utility of multifunction devices like the smartphone, single-function computing devices still have the power to fascinate us all. The reason for this, I believe, is not just that single-function devices enable their users to fully focus on the experience they create, but because the device can be fully built for that experience. Every aspect of its physical interface can be customized to its functionality; it can have dedicated buttons, switches, knobs, and displays that directly connect our bodies to its features, rather than abstracting them through symbols under a pane of glass.

A perfect example of this comes from the very company responsible for steering our culture away from single-function devices; before the iPhone, Apple’s most influential product was the iPod, which won users over with an innovative approach to a physical interface: the clickwheel. It took the hand’s ability for fine motor control and coupled it with the need for speed in navigating a suddenly longer list of digital files. With a subtle but feel-good gesture, you could skip through thousands of files fluidly. It was seductive and encouraged us all to make full use of the newfound capacity the iPod provided. It was good for users and good for the .mp3 business. I may be overly nostalgic about this, but no feature of the iPhone feels as good to use as the clickwheel did.

Of course, that’s an example that sits right at the nexus between dedicated — old-fashioned — devices and the smartphonization of everything. Prior to the iPod, we had many single-focus devices and countless examples of physical interfaces that gave people unique ways of doing things. Whenever I use these kinds of devices — particularly physical media devices — I start to imagine alternate technological timelines. Ones where the iPhone didn’t determine two decades of interface consolidation. I go full sci-fi.

Science fiction, by the way, hasn’t just predicted our technological future. We all know the classic examples, particularly those from Star Trek: the communicator and tricorder anticipated the smartphone; the PADD anticipated the tablet; the ship’s computer anticipated Siri, Alexa, Google, and AI voice interfaces; the entire interior anticipated the Jony Ive glass filter on reality. It’s enough to make a case that Trek didn’t anticipate these things so much as those who watched it as young people matured into careers in design and engineering. But science fiction has also been a fertile ground for imagining very different ways for humans and machines to interact. For me, the most compelling interface concepts from fiction are the ones that are built upon radically different approaches to human-computer interaction.
Today, there’s a hunger to “get past” screen-based computer interaction, which I think is largely borne out of a preference for novelty and a desire for the riches that come from bringing an entirely new product category to market. With AI, the desire seems to be to redefine everything we’re used to using on a screen through a voice interface — something I think is a big mistake. And though I’ve written about the reasons why screens still make a lot of sense, what I want to focus on here are different interface paradigms that still make use of a physical connection between people and machine. I think we’ve just scratched the surface of the potential of physical interfaces. Here are a few examples that come to mind, representing untried or untested ideas that captivate my imagination.

Multiple Dedicated Screens: 2001’s Discovery One

Our current computing convention is to focus on a single screen, which we then often divide among a variety of applications. The computer workstations aboard the Discovery One in 2001: A Space Odyssey featured something we rarely see today: multiple, dedicated smaller screens. Each screen served a specific, stable purpose throughout a work session. This simple shift, physically isolating environments and distributing them across dedicated displays, is worth considering as a deliberate choice today rather than as an arbitrary limitation defined by how large screens were at the time the film was produced.

Placing physical boundaries between screen-based environments, rather than the soft, constantly shifting divisions we manage on our widescreen displays, might seem cumbersome and unnecessary at first. But I wonder what half a century of computing that way would have created differently from what we ended up with thanks to the PC. Instead of spending time repositioning and reprioritizing windows — a task that has somehow become a significant part of modern computer use — dedicated displays would allow us to assign specific screens for ambient monitoring and others for focused work. The psychological impact could be profound. Choosing which information deserves its own physical space creates a different relationship with that information. It becomes less about managing digital real estate and more about curating meaningful, persistent contexts for different types of thinking.

The Sonic Screwdriver: Intent as Interface

The Doctor’s sonic screwdriver from Doctor Who represents perhaps the most elegant interface concept ever imagined: a universal tool that somehow interfaces with any technology through harmonic resonance. But the really interesting aspect isn’t the pseudo-scientific explanation — it’s how the device responds to intent rather than requiring learned commands or specific inputs. The sonic screwdriver suggests technology that adapts to human purpose rather than forcing humans to adapt to machine constraints. Instead of memorizing syntax, keyboard shortcuts, or navigation hierarchies, the user simply needs to clearly understand what they want to accomplish. The interface becomes transparent, disappearing entirely in favor of direct intention-to-result interaction.

This points toward computing that works more like natural tool use — the way a craftsperson uses a hammer or chisel — where the tool extends human capability without requiring conscious attention to the tool itself. The Doctor’s screwdriver may, at this point, be indistinguishable from magic, but in a future with increased miniaturization, nanotech, and quantum computing, a personal device shaped by intent could be possible.
Al’s Handlink: The Mind-Object

In Quantum Leap, Al’s handlink device looks like a smartphone-sized Mondrian painting: no screen, no discernible buttons, just blocky areas of color that illuminate as he uses it. As the show progressed, the device became increasingly abstract until it seemed impossible that any human could actually operate it. But perhaps that’s the point. The handlink might represent a complete paradigm shift toward iconic and symbolic visual computing, or it could be something even more radical: a mind-object, a projection within a projection coming entirely from Al’s consciousness. A totem that’s entirely imaginary yet functionally real.

In the context of the show, that was an explanation that made sense to me — Al, after all, wasn’t physically there with his time-leaping friend Sam; he was a holographic projection from a stable time in the future. He could have looked like anything; so, too, his computer. But the handlink as a mind-object also suggests computing that exists at the intersection of technology and parapsychology — interfaces that respond to mental states, emotions, or subconscious patterns rather than explicit physical inputs. What kind of computing would exist in a world where telepathy was as commonly experienced as the five senses?

Penny’s Multi-Page Computer: Hardware That Adapts

Inspector Gadget’s niece Penny carried a computer disguised as a book, anticipating today’s foldable devices. But unlike our current two-screen foldables arranged in codex format, Penny’s book had multiple pages, each providing a unique interface tailored to specific tasks. This represents customization at both the software and hardware layers simultaneously. Rather than software conforming to hardware constraints, the physical device itself adapts to the needs of different applications. Each page could offer different input methods, display characteristics, or interaction paradigms optimized for specific types of work.

This could be achieved similarly to the Doctor’s screwdriver, but it could also be more within reach if we imagine this kind of layered interface as composed of individual modules. Google’s Project Ara was an inspiring foray into modular computing that, I believe, still has promise today, if not more so thanks to 3D printing. What if you could print your own interface?

The Holodeck as Thinking Interface

Star Trek’s Holodeck is usually discussed as virtual reality entertainment, but some episodes showed it functioning as a thinking interface — a tool for conceptual exploration rather than just immersive experience. When Data’s artificial offspring used the Holodeck to visualize possible physical appearances while exploring identity, it functioned much like we use Midjourney today: prompting a machine with descriptions to produce images representing something we’ve already begun to visualize mentally. In another episode, when crew members used it to reconstruct a shared suppressed memory, it became a collaborative medium for group introspection and collective problem-solving. In both cases, the interface disappeared entirely. There was no “using” or “inhabiting” the Holodeck in any traditional sense — it became a transparent extension of human thought processes, whether individual identity exploration or collective memory recovery.

Beyond the Screen, but Not the Body

Each of these examples suggests moving past our current obsession with maximizing screen real estate and window management.
They point toward interfaces that work more like natural human activities: environmental awareness, tool use, conversation, and collaborative thinking. The best interfaces we never built aren’t just sleeker screens — they’re fundamentally different approaches to creating that unique space for thinking, communicating, creating, and experiencing that makes technology truly exciting. We’ve spent two decades consolidating everything into glass rectangles. Perhaps it’s time to build something different.
We developed the complete design for the Lights & Shadows project—a selection of 12 organic teas—from naming and original illustrations...