My aesthetic sensibilities represented in 100 images. These images are subject to change without warning.
3 months ago

More from Christopher Butler

Order is Always More Important than Action in Design

Before users can meaningfully act, they must understand — a principle our metrics-obsessed design culture has forgotten.

Today’s metrics-obsessed design culture is too fixated on action. Clicks, conversions, and other easily quantified metrics have become our purpose. We’re so focused on outcomes that we’ve lost sight of what makes them valuable and what even makes them possible in the first place: order and understanding.

The primary function of design is not to prompt action. It’s to bring form to intent through order: arranging and prioritizing information so that those who encounter it can see it, perceive it, and understand it.

Why has action become our focus? Simple: it’s easier to measure than understanding. We can track how many people clicked a button but not how many people grasped the meaning behind it. We can measure time spent on a page but not comprehension gained during that time. And so, following the path of least resistance, we’ve collectively decided that what’s easy to measure must be what’s most important to optimize, leaving action metrics the only means by which the success of design is determined.

This is backward. Action without understanding is merely manipulation — a short-term victory that creates long-term problems. Users who take actions without fully comprehending why become confused, frustrated, and ultimately distrustful of both the design and the organization behind it. A dirty little secret of action metrics is how often the success signal — a button click or a form submission — is immediately followed by a meandering session of actions that obviously signals confusion and possibly even regret. Often, confusion is easier to perceive in session data than almost anything else.

Even when action is an appropriate goal, it’s not a guaranteed outcome. Information can be perfectly clear and remain unpersuasive because persuasion is not entirely within the designer’s control. Information is at its most persuasive when it is (1) clear, (2) truthful, and (3) aligned with the intent of the recipient. As designers, we can only directly control the first two factors.

As for alignment with user intent, we can attempt to influence this through audience targeting, but let’s be honest about the limitations. Audience targeting relies on data that we choose to believe is far more accurate than it actually is. We have geolocation, sentiment analysis, rich profiling, and nearly criminally invasive tracking, and yet most networks think I am an entirely different kind of person than I am. And even if they got the facts right, they couldn’t truly promise intent alignment at the accuracy they claim without mind-reading.

The other dirty secret of most marketing is that we attempt to close the gap with manipulation designed to work on most people. We rationalize this by saying, “yeah, it’s cringe, but it works.” Because we prioritize action over understanding, we encourage designs that exploit psychological triggers rather than foster comprehension. Dark patterns, artificial scarcity, misleading comparisons, straight-up negging — these are the tools of action-obsessed design. They may drive short-term metrics, but they erode trust and damage relationships with users.

This misplaced emphasis also distorts our design practice. Specific tactics like button placement and styling, form design, and conventional call-to-action patterns carry disproportionate weight in our approach. These elements are important, but fixating on them distracts designers from the craft of order: information architecture, information design, typography, and layout — the foundational elements essential to clear communication.

What might design look like if we properly valued order over action? First, we would invest more in information architecture and content strategy — the disciplines most directly concerned with creating meaningful order. These would not be phases to rush through, but central aspects of the design process. We would trust words more rather than chasing layout and media trends.

Second, we would develop better ways to evaluate understanding. Qualitative methods like comprehension testing would be given as much weight as conversion rates. We would ask not just “Did users do what we wanted?” but “Did users understand what we were communicating?” This isn’t difficult or labor-intensive, but it does require actually talking to people.

Third, we would respect the user’s right not to act. We would recognize that sometimes the appropriate response to even the clearest information is to walk away or do nothing.

None of this means that action isn’t important. Of course it is. A skeptic might ask: “What is the purpose of understanding if no action is taken?” In many cases, this is a fair question. The entire purpose of certain designs — like landing pages — may be to engage an audience and motivate their action. In such cases, measuring success through clicks and conversions not only makes sense; it’s really the only signal that can be quantified. But this doesn’t diminish the foundational role that understanding plays in supporting meaningful action, or the fact that overemphasis on action metrics can undercut the effectiveness of communication. Actions built on misunderstanding are like houses built on sand — they will inevitably collapse.

When I say that order is more important than action, I don’t mean that action isn’t important. But there is no meaningful action without understanding, and there is no understanding without order. By placing order first in our design priorities, we don’t abandon action — we create the necessary foundation for it. We align our practice with our true purpose: not to trick people into doing things, but to help them see, know, and comprehend so they can make informed decisions about what to do next.
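
To make the second point concrete, here is a minimal, purely hypothetical sketch of what reporting understanding alongside action could look like; the field names, the comprehension-test framing, and the sample numbers are illustrative assumptions, not a method the essay prescribes.

```python
# Hypothetical sketch: surface understanding next to action, rather than letting
# the conversion rate stand in for success by itself. Field names and sample
# numbers are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class PageResults:
    sessions: int
    conversions: int
    comprehension_tests: int    # participants asked to restate the page's point
    comprehension_passes: int   # participants who could do so in their own words

    def report(self) -> str:
        acted = self.conversions / self.sessions
        understood = self.comprehension_passes / self.comprehension_tests
        return f"acted: {acted:.1%} | understood: {understood:.0%}"


print(PageResults(sessions=1000, conversions=30,
                  comprehension_tests=10, comprehension_passes=8).report())
# acted: 3.0% | understood: 80%
```

Keeping the two numbers side by side, rather than blending them, leaves the qualitative signal visible without pretending it can be optimized like a click-through rate.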

22 hours ago 1 vote
Printing Everything and Owning Nothing

Something is starting to happen. As of right now, 3D printer ownership is niche. Many know what it is, but very few people have one. This will change rapidly over the next few years.

Plenty of contemporary sci-fi has depicted futures where everything is “printed.” The exact recipe of the “ink” is very much TBD, but the idea has taken hold. But I’ve been waiting for the consumer-level signals. I just saw one — an article about how Philips, the maker of my electric shaver, will be releasing printable accessories. You won’t be able to print a razor itself, but you will be able to print the blade guards — the fragile plastic snap-ons that enable you to control the depth of your cut.

This seems neat, right? But it’s really an ingenious monthly recurring revenue strategy for Philips. The idea is: how many people own our electric shavers? What’s the lifespan of those shavers? Can we close the gap between purchase events? Obviously, yes. I have many well-worn guards for my shaver. Would I spend, say, a couple of dollars to print a fresh replacement that snaps in like new? Probably. If I had a printer.

That’s going to start to be the pitch. The printer will be a utility. Not having one will be…weird, backward, luddite. Give it a few years.

But the distance between discretionary accessories and the actual thing you need is quite short. Once major manufacturers demonstrate sustainable demand for printables as MRR, it’s going to be a fast transition to printing the actual thing and, therefore, most-objects-as-a-service. Regulating the supply chain will have as much to do with this as all the paranoid plutocrat energy I can muster in my imagination, obviously. I’m not into it; just calling it now.

3 days ago 2 votes
The Compound Interest of Small Ideas

Many Small Ideas Are Worth More Than One Big One

When it comes to thinking, we’ve been sold a high-risk investment strategy. Our cultural narratives around innovation celebrate the breakthrough, the paradigm shift, the disruptive, revolutionary* concept that changes everything overnight. We romanticize the lone genius with the world-altering epiphany — the risk-taker who bets everything on a single, volatile stock rather than building a diverse portfolio of ideas. There are few of those, because most of the time, risk-taking like that does not pay out. But when risk-taking like this does pay out, it’s at a high enough margin to shape culture, both in terms of what it is and the stories we tell about it later. Those stories distort how we think.

What if the most reliable path to value creation isn’t the jackpot but the compound interest of many small, good ideas accumulated over time? What if consistent good thinking — applied daily, generating modest but regular returns — ultimately outperforms the high-risk, high-reward strategy of innovation hunting?

Despite the jackpot narratives, this kind of thinking is what creates progress in every field. The iPhone wasn’t just one big idea; it was thousands of small ideas about interface design, material science, battery technology, and user psychology converging at the right moment and then persisting in its pace. Remember, the first-generation iPhone was exciting, but it took a few versions of enormous collective effort before it fully delivered on its promise. Even something as seemingly revolutionary as penicillin required countless incremental improvements in cultivation, production, and delivery before it could save lives at scale.

The mythology of the big idea is as harmful as it is inaccurate. It creates a distorted view of how meaningful work happens and how careers develop. It suggests that value creation is binary and sudden rather than cumulative and gradual. It implies that if you haven’t had your “big breakthrough” yet, you’re somehow failing or falling behind. What history records as “big ideas” were usually the visible peaks of mountains built from countless smaller insights, most attributed to others or lost to history entirely. When we romanticize the peak, we terraform the mountain with impatience in defiance of reality.

This myth particularly undermines those whose contributions come through consistent, quality thinking rather than abrupt, visible change. The designer who improves a dozen user flows each year might never have a portfolio piece that makes the industry press, but their cumulative impact can transform a product used by millions. The researcher who methodically explores adjacent questions might never publish the headline-grabbing study, but their work builds the foundation upon which others’ insights depend.

I’ve seen this play out in my own career in design. My most successful work has not been built upon single breakthrough concepts but on a series of small, interconnected insights about user needs, technological constraints, and business goals. Nearly every useful idea I have had seems, on its own, quite minor — a tweak to a navigation pattern, a refinement of how information is presented, a subtle shift in terminology — but together, they have created experiences that worked meaningfully better than what came before. And, in my estimation, the greatest value I have created in my career has been by teaching others to think this way too.

The practice of generating many little ideas creates a resilience that big-idea hunting lacks. When your value comes from consistent good thinking rather than occasional brilliance, you’re not dependent on lightning striking. You’re building a renewable resource — your ability to notice problems and formulate potential solutions — rather than extracting a finite one.

The practice aspect of this is critical. The reality is that good ideas come from experience. The more you act on small insights — or just good thinking — the more you learn, which makes your next notions all the more potent. We tend to think of ideas as gems to be mined — rare, unique, coveted, priceless — but they are actually among our most renewable resources. One begets another. You just have to do things — most of them routine — not wait for something entirely new to do. The desire for novelty robs us of repetition’s treasure.

This is particularly true in our rapidly changing technological landscape. A single big idea, even if genuinely novel, can quickly become irrelevant as the context around it shifts. But the ability to consistently produce relevant small ideas adapts alongside changing circumstances. The person who can generate ten thoughtful approaches to a problem will outlast the one still searching for the perfect solution.

Conceptual breakthroughs are, of course, real. But for most of us, in most fields, our contribution will come through consistent, quality thinking applied over time — perhaps in pursuit of a breakthrough, but not always. Often, in pursuit of maintenance, stability, or just getting something basic done well. That’s not settling for less; it’s recognizing where true value often lies.

Instead of asking ourselves “What’s my big idea?” we should be asking “What’s my next good idea?” And the one after that. And after that. The compound interest of regular, thoughtful contribution almost always outperforms the lottery ticket of the big breakthrough. Write every idea down. Most will be useless, late, mimics, and impossible. But some will be good, and you never really know in advance when their time will come. When you do this, you’ll eventually see that compound interest for yourself: the ideas will be all around you.

In a world obsessed with disruption and overnight success, there’s something radical about valuing consistency, incrementalism, and the patient accumulation of small improvements.

* Don’t get me started on how words like “disruption” and “revolutionary” are a form of semantic gaslighting. We used to call these things “theft,” “piracy,” and “racketeering.”
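
As a rough numerical illustration of the compound-interest framing, the sketch below compares steady small gains with a single long-shot bet; the 1% daily improvement, the 250 working days, and the jackpot odds are arbitrary assumptions, not figures from the essay.

```python
# Hypothetical illustration of the "compound interest of small ideas" metaphor.
# The 1% daily gain, 250 working days, and jackpot odds are arbitrary assumptions.

def compounded_value(daily_gain: float = 0.01, days: int = 250) -> float:
    """Value of applying one small improvement every working day for a year."""
    value = 1.0
    for _ in range(days):
        value *= 1 + daily_gain
    return value


def jackpot_expected_value(payoff: float = 10.0, hit_probability: float = 0.05) -> float:
    """Expected value of betting everything on one rare breakthrough."""
    return payoff * hit_probability


if __name__ == "__main__":
    print(f"Many small ideas, compounded: {compounded_value():.1f}x")        # ~12.0x
    print(f"One big idea, in expectation: {jackpot_expected_value():.1f}x")  # 0.5x
```

Under these made-up numbers, the steady accumulation compounds to roughly twelve times its starting value while the long-shot bet is worth about half in expectation; that asymmetry is the essay's argument in miniature.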

4 days ago 4 votes
Screens Are Good, Actually

A screen isn’t a technological distraction to overcome but a powerful cognitive prosthetic for external memory.

Screens get a lot of blame these days. They’re accused of destroying attention spans, ruining sleep, enabling addiction, isolating us from one another, and eroding our capacity for deep thought. “Screen time” has become shorthand for everything wrong with modern technology and its grip on our lives. And as a result, those of us in more design- and technology-focused spheres now face a persistent propaganda that screens are an outmoded interaction device, holding us back from some sort of immersive techno-utopia. They are not, and that utopia is a fantasy.

The screen itself is obviously not to blame — what’s on the screen is. When we use “screen” as a catch-all for our digital dissatisfaction, we’re conflating the surface with what it displays. It’s like blaming paper for misleading news. We might dismiss this simply as a matter of semantics, but language creates understanding and behavior. The more we sum up the culture of what screens display with the word “screens,” the more we push ourselves toward the wrong solution. The most recent version of this is the idea of the “screenless interface” and the recurring nonsense of clickbait platitudes like “The best interface is no interface.”

What we mean when we talk about the “screen” matters. And so it’s worth asking, what is a screen, really? And why can’t we seem to get “past” screens when it comes to human-computer interaction?

For all our talk of ambient computing, voice interfaces, and immersive realities, screens remain central to our digital lives. Even as companies like Apple and Meta pour billions into developing headsets meant to replace screens, what do they actually deliver? Heavy headgear that just places smaller screens closer to our eyes. Sure, they can provide a persistent immersive experience that a stationary panel cannot. But a headset’s persistent immersion doesn’t make a panel’s stationary nature a bug.

What makes a screen especially useful is not what it projects at you, but what happens when you look away from it. It is then that a screen serves a fundamental cognitive purpose that dates back to the earliest human experiences and tools. A screen is a memory surrogate. It’s a surface that holds information so we don’t have to keep it all in our heads. In this way, it’s the direct descendant of some of humanity’s most transformative devices: the dirt patch where our ancestors scratched out the first symbols, the cave wall that preserved their visions, the clay tablet that tracked their trades, the papyrus that extended their memories, the parchment that connected them across distances, the chalkboard that multiplied their teaching.

Think of Einstein’s office at Princeton, with its blackboards covered in equations. Those boards weren’t distractions from his thought — they were extensions of it. They allowed him to externalize complex ideas, manipulate them visually, and free his mind from the burden — the impossibility — of holding every variable simultaneously. Our digital screens serve the same purpose, albeit with far greater complexity and interactivity. They hold vast amounts of information that would overwhelm our working memory. They visualize data in ways our minds can grasp. They show us possibilities we couldn’t otherwise envision. They hold them all in place for us, so that we can look away and then easily find them again when we return our gaze.

Comparing screens to Einstein’s chalkboards, of course, is a limited metaphor. Screens also display endless streams of addictive content designed to capture and hold our attention. But that’s not an inherent property of screens themselves — it’s a consequence of the business models driving what appears on them. The screen isn’t the attention thief; it’s merely the scene of the crime. (And yes, I do think that future generations will think of today’s attention economy in the same way that we think of other past norms as injustices.)

The connection between screens and attention matters, of course, because our brains have evolved to emphasize and prioritize visual processing. We can absorb and interpret visual information with remarkable efficiency; simply scanning a screen can convey more, faster, than listening to the same content read aloud. Visual processing also operates somewhat independently from our verbal reasoning, allowing us to think about what we’re seeing rather than using that cognitive capacity to process incoming language. We can scan at the speed of thought, but we can only listen at the speed of speech.

This is why efforts to create “screenless” interfaces often end up feeling limiting rather than liberating. Voice assistants work beautifully for discrete, simple tasks but become frustrating when dealing with complex information or multiple options. Information conveyed in sound has no place to be held; it can only be repeated. The screen persists because it matches fundamental aspects of human cognition by being a tool that, among other things, offers us persistence: a place to hold information.

None of this is to dismiss legitimate concerns about how we currently use screens. The content displayed, the contexts of use, the business models driving development — all deserve critical examination. But blaming the screen itself misses the point, misdirects our efforts to build healthier relationships with technology, and wastes our time on ridiculous technological fetch-quests for the next big device.

Perhaps instead of dreaming about moving “beyond screens,” we should focus on creating better screens and better screen experiences. “Better screens” is a problem of materials, longevity, energy consumption, light, and heat. There are so many things we could improve! “Better screen experiences” is a matter of cultural evolution, a generational project we can undertake together right now by thinking about what kind of information is worth being held for us by screens, as opposed to what kind of information is capable of holding our gaze captive.

The screen isn’t the problem. It’s one of our most powerful cognitive prosthetics, a brain buffer. Our screens are, together, a platform for cultural creation, the latest in a long line of surfaces that have enriched human existence. De-screening is not just a bad idea that misunderstands how brains work, and not just an insincere sales pitch for a new gadget. It’s an entirely wrong turn toward a worse future with more of the same, only noisier.

6 days ago 8 votes

More in design

ZARA flagship store by AIM Architecture & Art Recherche Industrie

For a mass fashion retailer of many decades’ standing, it’s somewhat surprising that ZARA has quietly pulled the architecture card to...

10 hours ago 1 vote
Glenmorangie whisky collection by Butterfly Cannon

Glenmorangie wanted to celebrate their Head of Whisky Creation’s combined passion for whisky and wine, through the release of three...

2 days ago 3 votes
Fang Eyewear Showroom by M-D Design Studio

The Fang Eyewear Showroom by architecture firm M-D Design Studio, a project which reimagines the traditional showroom in the town...

6 days ago 7 votes