More from Jim Nielsen’s Blog
In his book “The Order of Time” Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality, such as “What is real?” and “What exists?” But those are bad questions, he says. Why?

The adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.” Both answers are correct, because they are using different meanings of the verb “to exist.”

He notes how Pinocchio “exists” and is “real” as a literary character, but not so far as any official Italian registry office is concerned.

To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature.

The point he goes on to make is that our language has to evolve and adapt with our knowledge. Our grammar developed from our limited experience, before we knew what we know now and before we became aware of how imprecise it was in describing the richness of the natural world. Rovelli gives an example of this from a text of antiquity which uses confusing grammar to get at the idea of the Earth having a spherical shape:

For those standing below, things above are below, while things below are above, and this is the case around the entire earth.

On its face, that is a very confusing sentence full of contradictions. But the idea in there is profound: the Earth is round and direction is relative to the observer. Here’s Rovelli:

How is it possible that “things above are below, while things below are above”? It makes no sense…But if we reread it bearing in mind the shape and the physics of the Earth, the phrase becomes clear: its author is saying that for those who live at the Antipodes (in Australia), the direction “upward” is the same as “downward” for those who are in Europe. He is saying, that is, that the direction “above” changes from one place to another on the Earth. He means that what is above with respect to Sydney is below with respect to us. The author of this text, written two thousand years ago, is struggling to adapt his language and his intuition to a new discovery: the fact that the Earth is a sphere, and that “up” and “down” have a meaning that changes between here and there. The terms do not have, as previously thought, a single and universal meaning.

So language needs innovation as much as any technological or scientific achievement. Otherwise we find ourselves arguing over questions of deep import in a way that ultimately amounts to merely a question of grammar.
Via Jeremy Keith’s link blog I found this article: Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs. It’s about the disillusionment of designers since the ~2010s. Having ridden that wave myself, there’s a lot of very relatable stuff in there about how design has evolved as a profession. But before we get into the meat of the article, there are some bangers worth acknowledging, like this:

Amazon – the most used website in the world – looks like a bunch of pop-up ads stitched together.

lol, burn. Haven’t heard Amazon described this way, but it’s spot on.

The hard truth, as pointed out in the article, is this: bad design doesn’t hurt profit margins. Or at least there’s no immediately obvious, concrete data or correlation that proves it. So most decision makers don’t care.

You know what does help profit margins? Spending less money. Cost-savings initiatives. Those always provide a direct, immediate, seemingly obvious correlation. So those initiatives get prioritized. Fuzzy human-centered initiatives (humanities-adjacent stuff) are difficult to quantitatively (and monetarily) measure. “Let’s stop printing paper and sending people stuff in the mail. It’s expensive. Send them emails instead.” Boom! Money saved for everyone. That’s easier to prioritize than asking, “How do people want us to communicate with them — if at all?” Nobody ever asks that last part.

Designers quickly realized that in most settings they serve the business first, customers second — or third, or fourth, or... Shar Biggers [says] designers are “realising that much of their work is being used to push for profit rather than change.” Meet the new boss. Same as the old boss.

As students, designers are encouraged to make expressive, nuanced work, and rewarded for experimentation and personal voice. The implication, of course, is that this is what a design career will look like: meaningful, impactful, self-directed. But then graduation hits, and many land their first jobs building out endless Google Slides templates or resizing banner ads...no one prepared them for how constrained and compromised most design jobs actually are.

Reality hits hard. And here’s the part Jeremy quotes:

We trained people to care deeply and then funnelled them into environments that reward detachment. And the longer you stick around, the more disorienting the gap becomes – especially as you rise in seniority. You start doing less actual design and more yapping: pitching to stakeholders, writing brand strategy decks, performing taste. Less craft, more optics; less idealism, more cynicism.

Less work advocating for your customers, more work advocating for yourself and your team within the organization itself. Then the cynicism sets in. We’re not making software for others. We’re making company numbers go up, so our numbers ($$$) will go up.

Which reminds me: Stephanie Stimac wrote about reaching 1 year at Igalia, and what stood out to me in her post was that she didn’t feel a pressing requirement to create visibility into her work and measure (i.e. prove) its impact. I’ve never been good at that. I’ve seen its necessity, but I’m just not good at doing it. Being good at building is great. But being good at the optics of building is often better — for you, your career, and your standing in many orgs.

Anyway, back to Elizabeth’s article. She notes you’ll burn out trying to monetize something you love — especially when it’s in pursuit of maintaining a cost of living:

Once your identity is tied up in the performance, it’s hard to admit when it stops feeling good.

It’s a great article, and if you’ve spent time in the design profession building software, it’s worth your time.
Here’s Sean Voisen writing about how programming is a feeling:

For those of us who enjoy programming, there is a deep satisfaction that comes from solving problems through well-written code, a kind of ineffable joy found in the elegant expression of a system through our favorite syntax. It is akin to the same satisfaction a craftsperson might find at the end of the day after toiling away on a well-made piece of furniture, the culmination of small dopamine hits that come from sweating the details on something and getting them just right. Maybe nobody will notice those details, but it doesn’t matter. We care, we notice, we get joy from the aesthetics of the craft.

This got me thinking about the idea of satisfaction in craft. Where does it come from? In part, I think, it comes from arriving at a deeper and more intimate understanding of, and relationship to, what you’re working with.

For example, I think of a sushi chef. I’m not a sushi chef, but I’ve tried my hand at making rolls and I’ve seen Jiro Dreams of Sushi, so I have a speck of familiarity with the spectrum from beginner to expert. When you first start out, you’re focused on the outcome. “Can I do this? Let’s see if I can pull it off.” Then comes the excitement of, “Hey, I made my own roll!” That’s as far as many of us go. But if you keep going, you end up in a spot where you’re more worried about what goes into the roll than the outcome of the roll itself. Where was the fish sourced from? How was it sourced? Was it ever frozen? A million and one questions about what goes into the process, which inevitably shape what comes out of it. And I think an obsession with the details of what goes in drives your satisfaction with what comes out.

In today’s moment, I wonder whether AI tools help or hinder fostering a sense of wonder in what it means to craft something. When you craft something, you’re driven further into the essence of the materials you work with. But AI can easily reverse this, where you care less about what goes in and only about what comes out. One question I’m asking myself is: do I care more or less about what I’ve made when I’m done using AI to help make it?
I like the job title “Design Engineer”. When required to label myself, I feel partial to that term (I should, I’ve written about it enough). Lately I’ve felt like the term is becoming more mainstream which, don’t get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle between two binaries. But — and I admit this is a me issue — once a title starts becoming mainstream, I want to use it less and less.

I was never totally sure why I felt this way. Shouldn’t I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way?

These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan where he talks about his own penchant for not wanting to be easily defined:

I’ve tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying “this is what you do” then I would be like “Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.” [For example] I used to crouch around on stage all the time and people would go “Oh, he’s the guy who crouches around back and forth.” And I’m like, “I’ll show them, I will stand erect! Now what are you going to say?” And then they would go “You’re the guy who always feels stupid.” So I started [doing other things].

He continues, wondering aloud whether this aversion to being easily defined has actually hurt his career in terms of commercial growth:

I never wanted to be something you could easily define. I think, in some ways, that it’s held me back. I have a nice following, but I’m not huge. There are people who are huge, who are great, and deserve to be huge. I’ve never had that and sometimes I wonder, “Well maybe it’s because I purposely don’t want to be a particular thing you can advertise or push.”

That struck a chord with me. It puts into words my current feelings towards the job title “Design Engineer” — or any job title for that matter. Seven or so years ago, I would’ve enthusiastically said, “I’m a Design Engineer!” To which many folks would’ve said, “What’s that?” But today I hesitate. Nowadays the title elicits fewer follow-up questions and more (presumed) certainty. I think I enjoy a title that elicits a “What’s that?” response, one that allows me to explain myself in more than two or three words without being put in a box. But once a title becomes mainstream, once people begin to assume they know what it means, I don’t like it anymore (speaking for myself, personally).

As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn’t provide a presumed sense of certainty about who I am and what I do. And I get it, that runs counter to the very purpose of a job title, which is why I don’t think it’s good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

Them: “Oh you’re a Designer? So you make mock-ups in Photoshop and somebody else implements them.”

Me: “I’ll show them, I’ll implement them myself! Now what are you gonna do?”

Them: “Oh, so you’re a Design Engineer? You design and build user interfaces on the front-end.”

Me: “I’ll show them, I’ll write a Node server and set up a database that powers my designs and interactions on the front-end. Now what are you gonna do?”

Them: “Oh, well, I’m not sure we have a term for that yet, maybe Full-stack Design Engineer?”

Me: “Oh yeah? I’ll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?”

[As you can see, I have some personal issues I need to work through…]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that’s not so easily definable, something you can’t sum up in two or three words. I’ve felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams where boundaries are much more permeable and crossing them is encouraged rather than discouraged.

All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you’re trying to take something as complicated and nuanced as individual human beings and reduce them to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels. “I’ve never wanted to be something you could easily define” stands at odds with the corporate attitude of, “Here’s the job req. for the role (i.e. cog) we’re looking for.”
More in design
The Fang Eyewear Showroom by architecture firm M-D Design Studio, a project which reimagines the traditional showroom in the town...
A screen isn’t a technological distraction to overcome but a powerful cognitive prosthetic for external memory.

Screens get a lot of blame these days. They’re accused of destroying attention spans, ruining sleep, enabling addiction, isolating us from one another, and eroding our capacity for deep thought. “Screen time” has become shorthand for everything wrong with modern technology and its grip on our lives. And as a result, those of us in more design and technology-focused spheres now face persistent propaganda that screens are an outmoded interaction device, holding us back from some sort of immersive techno-utopia. They are not, and that utopia is a fantasy.

The screen itself is obviously not to blame — what’s on the screen is. When we use “screen” as a catch-all for our digital dissatisfaction, we’re conflating the surface with what it displays. It’s like blaming paper for misleading news. We might dismiss this simply as a matter of semantics, but language creates understanding and behavior. The more we sum up the culture of what screens display with the word “screens,” the more we push ourselves toward the wrong solution. The most recent version of this is the idea of the “screenless interface” and the recurring nonsense of clickbait platitudes like “The best interface is no interface.”

What we mean when we talk about the “screen” matters. And so it’s worth asking, what is a screen, really? And why can’t we seem to get “past” screens when it comes to human-computer interaction? For all our talk of ambient computing, voice interfaces, and immersive realities, screens remain central to our digital lives. Even as companies like Apple and Meta pour billions into developing headsets meant to replace screens, what do they actually deliver? Heavy headgear that just places smaller screens closer to our eyes. Sure, they can provide a persistent immersive experience that a stationary panel cannot. But a headset’s persistent immersion doesn’t make a panel’s stationary nature a bug.

What makes a screen especially useful is not what it projects at you, but what happens when you look away from it. It is then that a screen serves a fundamental cognitive purpose that dates back to the earliest human experiences and tools. A screen is a memory surrogate. It’s a surface that holds information so we don’t have to keep it all in our heads. In this way, it’s the direct descendant of some of humanity’s most transformative devices: the dirt patch where our ancestors scratched out the first symbols, the cave wall that preserved their visions, the clay tablet that tracked their trades, the papyrus that extended their memories, the parchment that connected them across distances, the chalkboard that multiplied their teaching.

Think of Einstein’s office at Princeton, with its blackboards covered in equations. Those boards weren’t distractions from his thought — they were extensions of it. They allowed him to externalize complex ideas, manipulate them visually, and free his mind from the burden — the impossibility — of holding every variable simultaneously. Our digital screens serve the same purpose, albeit with far greater complexity and interactivity. They hold vast amounts of information that would overwhelm our working memory. They visualize data in ways our minds can grasp. They show us possibilities we couldn’t otherwise envision. They hold them all in place for us, so that we can look away and then easily find them again when we return our gaze.

Comparing screens to Einstein’s chalkboards, of course, is a limited metaphor. Screens also display endless streams of addictive content designed to capture and hold our attention. But that’s not an inherent property of screens themselves — it’s a consequence of the business models driving what appears on them. The screen isn’t the attention thief; it’s merely the scene of the crime. (And yes, I do think that future generations will regard today’s attention economy the way we now regard past norms we recognize as injustices.)

The connection between screens and attention matters, of course, because our brains have evolved to emphasize and prioritize visual processing. We can absorb and interpret visual information with remarkable efficiency; simply scanning a screen can convey more, faster, than listening to the same content read aloud. Visual processing also operates somewhat independently from our verbal reasoning, allowing us to think about what we’re seeing rather than using that cognitive capacity to process incoming language. We can scan at the speed of thought, but we can only listen at the speed of speech.

This is why efforts to create “screenless” interfaces often end up feeling limiting rather than liberating. Voice assistants work beautifully for discrete, simple tasks but become frustrating when dealing with complex information or multiple options. Information conveyed in sound has no place to be held; it can only be repeated. The screen persists because it matches fundamental aspects of human cognition by being a tool that, among other things, offers us persistence: a place to hold information.

None of this is to dismiss legitimate concerns about how we currently use screens. The content displayed, the contexts of use, the business models driving development — all deserve critical examination. But blaming the screen itself misses the point, misdirects our efforts to build healthier relationships with technology, and wastes our time on ridiculous technological fetch-quests for the next big device.

Perhaps instead of dreaming about moving “beyond screens,” we should focus on creating better screens and better screen experiences. “Better screens” is a problem of materials, longevity, energy consumption, light, and heat. There are so many things we could improve! “Better screen experiences” is a matter of cultural evolution, a generational project we can undertake together right now by thinking about what kind of information is worth being held for us by screens, as opposed to what kind of information is capable of holding our gaze captive.

The screen isn’t the problem. It’s one of our most powerful cognitive prosthetics, a brain buffer. Our screens are, together, a platform for cultural creation, the latest in a long line of surfaces that have enriched human existence. De-screening is not just a bad idea that misunderstands how brains work, and not just an insincere sales pitch for a new gadget. It’s an entirely wrong turn toward a worse future with more of the same, only noisier.
This project involves a packaging series for nuvéa, a brand focused on hydration, softness, and sensory beauty. The design seamlessly...
Solid Order is a young fine jewelry brand from China, known for its neutral aesthetic inspired by geometric forms and...