This is the best media player for children. In the month before the pandemic shut everything down, I was in the midst of some research on how designers — and other kinds of creative experts and consultants — can best communicate results. I was looking at a variety of case study models and trying to devise a system that would best suit my clients’ goals and abilities. That’s when I found myself looking at a case study that Pentagram published of their work on the Yoto. I was immediately impressed by the quality of the work — no surprise there — but I was also incredibly intrigued by the product itself. Like many of us at that point in late February 2020, I had a sense that we were about to be stuck at home for a while. This seemed like a really neat thing for a kid who might otherwise defer to a screen. Ten minutes later, I had purchased one. More than three years later, I can say that my first impression was accurate. There are many ways to qualify that — what it actually is...
a year ago

More from Christopher Butler

Screens Are Good, Actually

A screen isn’t a technological distraction to overcome but a powerful cognitive prosthetic for external memory. Screens get a lot of blame these days. They’re accused of destroying attention spans, ruining sleep, enabling addiction, isolating us from one another, and eroding our capacity for deep thought. “Screen time” has become shorthand for everything wrong with modern technology and its grip on our lives. And as a result, those of us in more design and technology-focused spheres now face persistent propaganda that screens are an outmoded interaction device, holding us back from some sort of immersive techno-utopia. They are not, and that utopia is a fantasy.

The screen itself is obviously not to blame — what’s on the screen is. When we use “screen” as a catch-all for our digital dissatisfaction, we’re conflating the surface with what it displays. It’s like blaming paper for misleading news. We might dismiss this simply as a matter of semantics, but language creates understanding and behavior. The more we sum up the culture of what screens display with the word “screens,” the more we push ourselves toward the wrong solution. The most recent version of this is the idea of the “screenless interface” and the recurring nonsense of clickbait platitudes like “The best interface is no interface.” What we mean when we talk about the “screen” matters.

And so it’s worth asking, what is a screen, really? And why can’t we seem to get “past” screens when it comes to human-computer interaction? For all our talk of ambient computing, voice interfaces, and immersive realities, screens remain central to our digital lives. Even as companies like Apple and Meta pour billions into developing headsets meant to replace screens, what do they actually deliver? Heavy headgear that just places smaller screens closer to our eyes. Sure, they can provide a persistent immersive experience that a stationary panel cannot. But a headset’s persistent immersion doesn’t make a panel’s stationary nature a bug.

What makes a screen especially useful is not what it projects at you, but what happens when you look away from it. It is then that a screen serves a fundamental cognitive purpose that dates back to the earliest human experiences and tools. A screen is a memory surrogate. It’s a surface that holds information so we don’t have to keep it all in our heads. In this way, it’s the direct descendant of some of humanity’s most transformative devices: the dirt patch where our ancestors scratched out the first symbols, the cave wall that preserved their visions, the clay tablet that tracked their trades, the papyrus that extended their memories, the parchment that connected them across distances, the chalkboard that multiplied their teaching.

Think of Einstein’s office at Princeton, with its blackboards covered in equations. Those boards weren’t distractions from his thought — they were extensions of it. They allowed him to externalize complex ideas, manipulate them visually, and free his mind from the burden — the impossibility — of holding every variable simultaneously. Our digital screens serve the same purpose, albeit with far greater complexity and interactivity. They hold vast amounts of information that would overwhelm our working memory. They visualize data in ways our minds can grasp. They show us possibilities we couldn’t otherwise envision. They hold them all in place for us, so that we can look away and then easily find them again when we return our gaze.

Comparing screens to Einstein’s chalkboards, of course, is a limited metaphor. Screens also display endless streams of addictive content designed to capture and hold our attention. But that’s not an inherent property of screens themselves — it’s a consequence of the business models driving what appears on them. The screen isn’t the attention thief; it’s merely the scene of the crime. (And yes, I do think that future generations will think of today’s attention economy in the same way that we think of other past norms as injustices.)

The connection between screens and attention matters, of course, because our brains have evolved to emphasize and prioritize visual processing. We can absorb and interpret visual information with remarkable efficiency; simply scanning a screen can convey more, faster, than listening to the same content read aloud. Visual processing also operates somewhat independently from our verbal reasoning, allowing us to think about what we’re seeing rather than using that cognitive capacity to process incoming language. We can scan at the speed of thought, but we can only listen at the speed of speech.

This is why efforts to create “screenless” interfaces often end up feeling limiting rather than liberating. Voice assistants work beautifully for discrete, simple tasks but become frustrating when dealing with complex information or multiple options. Information conveyed in sound has no place to be held; it can only be repeated. The screen persists because it matches fundamental aspects of human cognition by being a tool that, among other things, offers us persistence: a place to hold information.

None of this is to dismiss legitimate concerns about how we currently use screens. The content displayed, the contexts of use, the business models driving development — all deserve critical examination. But blaming the screen itself misses the point, misdirects our efforts to build healthier relationships with technology, and wastes our time on ridiculous technological fetch-quests for the next big device.

Perhaps instead of dreaming about moving “beyond screens,” we should focus on creating better screens and better screen experiences. “Better screens” is a problem of materials, longevity, energy consumption, light, and heat. There are so many things we could improve! “Better screen experiences” is a matter of cultural evolution, a generational project we can undertake together right now by thinking about what kind of information is worth being held for us by screens, as opposed to what kind of information is capable of holding our gaze captive.

The screen isn’t the problem. It’s one of our most powerful cognitive prosthetics, a brain buffer. Our screens are, together, a platform for cultural creation, the latest in a long line of surfaces that have enriched human existence. De-screening is not just a bad idea that misunderstands how brains work, and not just an insincere sales pitch for a new gadget. It’s an entirely wrong turn toward a worse future with more of the same, only noisier.

yesterday 2 votes
We Cannot Talk About AI Without Talking About Capitalism, Fascism, and Liberty

Let me begin with a disambiguation: I’m not talking about AI as some theoretical intelligence emerging from non-biological form — the sentient computer of science fiction. That, I suppose, can be thought about in an intellectual vacuum, to a point. I’m talking about AI, the product. The thing being sold to us daily, packaged in press releases and demo videos, embedded in services and platforms.

AI is, fundamentally, about money. It’s about making promises and raising investment based upon those promises. The promises alone create a future — not necessarily because they’ll come true, but because enough capital, deployed with enough conviction, warps reality around it. When companies raise billions on the promise of AI dominance, they’re not just predicting a future; they’re manufacturing one. Venture capital, at the highest levels, tends to look from the outside more like anti-competitive racketeering than finance. Enough investment, however localized in a handful of companies, can shape an entire industry or even an entire economy, regardless of whether it makes any sense whatsoever. And let’s be clear: the Big Tech firms investing in AI aren’t simply responding to market forces; they’re creating them, defining them, controlling them. Nobody asked for AI; we’ve been told to collaborate.

Which demonstrates that capitalism, like AI, is no longer a theoretical model about nice, tidy ideas like free markets and competition. The reality of modern capitalism reveals it to be, at best, a neutral system made non-neutral by its operators. The invisible hand isn’t invisible because it’s magical; it’s invisible because we’re not supposed to see whose hand it actually is. You want names though? I don’t have them all. That’s the point. It’s easy to blame the CEOs whose names are browbeaten into our heads over and over again, but beyond them is what I think of as The Fear of the Un-captured Dollar and the Unowned Person — a secret society of people who seem to believe that human potential is one thing: owning all the stuff, wielding all the power, seizing all the attention.

We now exist in what people call “late-stage capitalism,” where meaningful competition only occurs among those with the most capital, and their battles wreck the landscape around them. We scatter and dash amidst the rubble like the unseen NPCs of Metropolis while the titans clash in the sky. When capital becomes this concentrated, it exerts power at the level of sovereign nations. This reveals the theater that is the so-called power of governments. Nation-states increasingly seem like local franchises in a global system run by capital. This creates fundamental vulnerabilities in governmental systems that have not yet been tested by the degeneracy of late-stage capitalism. And when that happens, the lack of power of the individual is laid bare — in the chat window, in the browser, on the screen, in the home, in the city, in the state, in the world. The much-lauded “democratic” technology of the early internet has given way to systems of surveillance and manipulation so comprehensive they would make 20th century authoritarians weep with envy, not to mention a fear-induced appeasement of the destruction of norms and legal protections that spreads across our entire culture like an overnight frost of fascism.

AI accelerates this process. It centralizes power by centralizing the capacity to process and act upon information. It creates unprecedented asymmetries between those who own the models and those who are modeled. Every interaction with an AI system becomes a one-way mirror: you see your reflection, while on the other side, entities you cannot see learn about you, categorize you, and make predictions about you.

So when a person resists AI, don’t assume they’re stubbornly digging their heels into the shifting sands of an outmoded ground. Perhaps give them credit for thinking logically and drawing a line between themselves and a future that treats them as nothing more than a bit in the machine. Resistance to AI isn’t necessarily Luddism. It isn’t a fear of progress. It might instead be a clear-eyed assessment of what kind of “progress” is actually being offered — and at what cost.

Liberty in the age of AI requires more than just formal rights. It demands structural changes to how technology is developed, deployed, and governed. It requires us to ask not just “what can this technology do?” but “who benefits from what this technology does?” And that conversation cannot happen if we insist on discussing AI as if it exists in a political and economic vacuum — as if the only questions worth asking are technical ones. The most important questions about AI aren’t about algorithms or capabilities; they’re about power and freedom.

To think about AI without thinking about capitalism, fascism, and liberty isn’t just incomplete — it’s dangerous. It blinds us to the real stakes of the transformation happening around us, encouraging us to focus on the technology rather than the systems that control it and the ends toward which it’s deployed. Is it possible to conceive of AI that is “good” — as in distributed, not centralized; protective of intellectual property, not a light-speed pirate of the world’s creative output; respectful of privacy, not a listening agent of the powers-that-be; selectively and intentionally deployed where humans need the help, not a leveler of human purpose? (Anil Dash has some great points about this.) Perhaps, but such an AI is fundamentally incompatible with the system in which the AI we have has been created.

As AI advances, we face a choice: Will we allow it to become another tool for concentrating power and wealth? Or will we insist upon human dignity and liberty? The answer depends not on technological developments, but on our collective willingness to recognize AI for what it is: not a force of nature, but a product of flawed human choices embedded in vulnerable human systems.

a week ago 2 votes
Periodical 22 – Technological Entitlement

Technological entitlement, knowledge-assumptions, and other things.

Are we entitled to technology? A quick thought experiment: A new technological advance gives humans the ability to fly. Does it also confer upon us the right to fly? Let’s say this isn’t a Rocketeer situation — not a jetpack — but some kind of body-hugging anti-gravitic field, just to make it look and feel ever so much more magical and irresistible. Would that be worthy of study and experimentation? I’d have to say yes. But would it be a good idea to use it? I’d have to say no. We’ve learned this lesson already.

Are we entitled to access to anyone, anytime? That’s a tough one; it tugs on ideas of access itself — what that means, and how — as well as ideas of inaccess, like privacy. But let’s just say I’m walking down the street and see a stranger passing by. Is it my right to cross the street to say hello? I would say so. And I can use that right for many purposes, some polite — such as introducing myself — and some not so — like abruptly sharing some personal belief of mine. Fortunately, this stranger has the right to ignore me and continue on their way. And that’s where my rights end, I think. I don’t have the right to follow them shouting.

It turns out that’s what Twitter was. We got the jetpack of interpersonal communication: a technology that gives us the ability to reach anyone anytime. With it came plenty of good things — good kinds of access, a good kind of leveling. A person could speak past, say, some bureaucratic barrier that would have previously kept them silent. But also, it allowed people with the right measure of influence to inundate millions of other people with lies to the point of warping the reality around them and reducing news to rereading and reprinting those lies just because they were said. Leave something like this in place long enough, and the technology itself becomes an illegitimate proxy for a legitimate right. Free speech, after all, does not equal an unchallenged social media account. Steeper and slicker is the technological slope from can to should to must.

–

Today I learned that before Four Tet, Kieran Hebden was the guitarist for a group called Fridge. I listened to their second album, Semaphore, this morning and it’s a fun mix of noises that feels very connected to the Four Tet I’ve known. The reason I mention this, though, is that it represents a pretty important principle for us all to remember. Don’t assume someone knows something! I’ve been a Four Tet fan ever since a friend included a song from Pause on a mix he made for me back in 2003. Ask me for my top ten records of all time, and I’ll probably include Pause. And yet it was only today, over two decades later, after watching a Four Tet session on YouTube, that I thought to read the Four Tet Wikipedia page.

–

Other Things

I’ve been staring at Pavel Ripley’s sketchbooks this week. It has been especially rare for me to find other people who use sketchbooks in the same way I do — as a means and end, not just a means. If you look at his work you’ll see what I mean. Just completely absorbing.

My bud Blagoj, who has excellent taste, sent this Vercel font called Geist my way a while back. It has everything I like in a font — many weights, many glyphs, and all the little details at its edges and corners. USING IT.

These hand-lettered magazine covers are so good.

I’m vibing with these cosmic watercolors by Lou Benesch.

An Oral History of WIRED’s Original Website is worth reading (paywall tho), and especially in an ad-blocking browser (I endorse Vivaldi), because as much as I love them, WIRED’s website has devolved into a truly hostile environment.

“As a middle-aged man, I would’ve saved loads on therapy if I’d read Baby-Sitters Club books as a kid.” SAME.

Richard Scarry and the art of children’s literature.

If you’re reading this via RSS, that’s really cool! Email me — butler.christopher@proton.me — and let me know!

2 weeks ago 2 votes
You Can Be a Great Designer and Be Completely Unknown

I often find myself contemplating the greatest creators in history — those rare artists, designers, and thinkers whose work transformed how we see the world. What constellation of circumstances made them who they were? Where did their ideas originate? Who mentored them? Would history remember them had they lived in a different time or place?

Leonardo da Vinci stands as perhaps the most singular creative mind in recorded history — the quintessential “Renaissance Man” whose breadth of curiosity and depth of insight seem almost superhuman. Yet examples like Leonardo can create a misleading impression that true greatness emerges only once in a generation or century. Leonardo lived among roughly 10-13 million Italians — was greatness truly as rare as one in ten million? We know several of his contemporaries, but still, the ratio remains vanishingly small. This presents us with two possibilities: either exceptional creative ability is almost impossibly rare, or greatness is more common than we realize and the rarity is recognition.

I believe firmly in the latter. Especially today, when we live in an attention economy that equates visibility with value. Social media follower counts, speaking engagements, press mentions, and industry awards have become the measuring sticks of design success. This creates a distorted picture of what greatness in design actually means. The truth is far simpler and more liberating: you can be a great designer and be completely unknown.

The most elegant designs often fade into the background, becoming invisible through their perfect functionality. Day-to-day life is scattered with the artifacts of unrecognized ingenuity — the comfortable grip of a vegetable peeler, the intuitive layout of a highway sign, or the satisfying click of a well-made light switch. These artifacts represent design excellence precisely because they don’t call attention to themselves or their creators. Who is responsible for them? I don’t know. That doesn’t mean they’re not out there.

This invisibility extends beyond physical objects. The information architect who structures a medical records system that saves lives through its clarity and efficiency may never receive public recognition. The interaction designer who simplifies a complex government form, making essential services accessible to vulnerable populations, might never be celebrated on design blogs or win prestigious awards.

Great design isn’t defined by who knows your name, but by how well your work serves human needs. It’s measured in the problems solved, the frustrations eased, the moments of delight created, and the dignity preserved through thoughtful solutions. These metrics operate independently of fame or recognition.

Our obsession with visibility also creates a troubling dynamic: design that prioritizes being noticed over being useful. This leads to visual pollution, cognitive overload, and solutions that serve the designer’s portfolio more than the user’s needs. When recognition becomes the goal, the work itself often suffers. I was among the few who didn’t immediately recoil at the brash aesthetics of the Tesla Cybertruck, but it turns out that no amount of exterior innovation changes the fact that it is just not a good truck.

There’s something particularly authentic about unknown masters — those who pursue excellence for its own sake, refining their craft out of personal commitment rather than in pursuit of accolades. They understand that their greatest achievements might never be attributed to them, and they create anyway. Their satisfaction comes from the integrity of the work itself.

This isn’t to dismiss the value of recognition when it’s deserved, or to suggest that great designers shouldn’t be celebrated. Rather, it’s a reminder that the correlation between quality and fame is weak at best, and that we should be suspicious of any definition of design excellence that depends on visibility. This is especially so today. The products of digital and interaction design are mayflies; most of what we make is lost to the rapid churn of the industry even before it can be lost to anyone’s memory.

The next time you use something that works so well you barely notice it, remember that somewhere, a designer solved a problem so thoroughly that both the problem and its solution became invisible. That designer might not be famous, might not have thousands of followers, might not be invited to speak at conferences — but they’ve achieved something remarkable: greatness through invisibility. Design greatness is not measured by the recognition of authorship, but by the creation of work so essential it becomes as inevitable as gravity, as unremarkable as air, and as vital as both.

2 weeks ago 2 votes

More in design

Fang Eyewear Showroom by M-D Design Studio

the Fang Eyewear Showroom by architecture firm M-D Design Studio, a project which reimagines the traditional showroom in the town...

17 hours ago 2 votes
Screens Are Good, Actually

A screen isn’t a technological distraction to overcome but a powerful cognitive prosthetic for external memory. Screens get a lot of blame these days. They’re accused of destroying attention spans, ruining sleep, enabling addiction, isolating us from...

yesterday 2 votes
nuvéa body lotion by Aiham Othman

This project involves a packaging series for nuvéa, a brand focused on hydration, softness, and sensory beauty. The design seamlessly...

2 days ago 3 votes
Language Needs Innovation

In his book “The Order of Time,” Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality such as “What is real?” and “What exists?” But those are bad questions, he says. Why?

the adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.” Both answers are correct, because they are using different meanings of the verb “to exist.”

He notes how Pinocchio “exists” and is “real” in terms of a literary character, but not so far as any official Italian registry office is concerned.

To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature.

The point he goes on to make is that our language has to evolve and adapt with our knowledge. Our grammar developed from our limited experience, before we knew what we know now and before we became aware of how imprecise it was in describing the richness of the natural world. Rovelli gives an example of this from a text of antiquity which uses confusing grammar to get at the idea of the Earth having a spherical shape:

For those standing below, things above are below, while things below are above, and this is the case around the entire earth.

On its face, that is a very confusing sentence full of contradictions. But the idea in there is profound: the Earth is round and direction is relative to the observer. Here’s Rovelli:

How is it possible that “things above are below, while things below are above”? It makes no sense… But if we reread it bearing in mind the shape and the physics of the Earth, the phrase becomes clear: its author is saying that for those who live at the Antipodes (in Australia), the direction “upward” is the same as “downward” for those who are in Europe. He is saying, that is, that the direction “above” changes from one place to another on the Earth. He means that what is above with respect to Sydney is below with respect to us. The author of this text, written two thousand years ago, is struggling to adapt his language and his intuition to a new discovery: the fact that the Earth is a sphere, and that “up” and “down” have a meaning that changes between here and there. The terms do not have, as previously thought, a single and universal meaning.

So language needs innovation as much as any technological or scientific achievement. Otherwise we find ourselves arguing over questions of deep import in a way that ultimately amounts to merely a question of grammar.

Email · Mastodon · Bluesky
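A brief aside that is mine, not Rovelli’s: the antipodes point can be stated in one line of vector notation, if we idealize the Earth as a sphere centered at the origin. Local “up” at a surface point \(\vec{x}\) is the outward radial unit vector, so

\[
\hat{u}(\vec{x}) = \frac{\vec{x}}{\lVert \vec{x} \rVert},
\qquad
\hat{u}(-\vec{x}) = \frac{-\vec{x}}{\lVert \vec{x} \rVert} = -\hat{u}(\vec{x}).
\]

At antipodal points the two “up” directions are exactly opposite, which is the fact the ancient author was straining to put into words.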

3 days ago 3 votes
Solid Order Jewelry by ADS

Solid Order is a young fine jewelry brand from China, known for its neutral aesthetic inspired by geometric forms and...

3 days ago 3 votes