More from Christopher Butler
Let me begin with a disambiguation: I’m not talking about AI as some theoretical intelligence emerging from non-biological form — the sentient computer of science fiction. That, I suppose, can be thought about in an intellectual vacuum, to a point. I’m talking about AI, the product. The thing being sold to us daily, packaged in press releases and demo videos, embedded in services and platforms.

AI is, fundamentally, about money. It’s about making promises and raising investment based upon those promises. The promises alone create a future — not necessarily because they’ll come true, but because enough capital, deployed with enough conviction, warps reality around it. When companies raise billions on the promise of AI dominance, they’re not just predicting a future; they’re manufacturing one. Venture capital, at the highest levels, tends to look from the outside more like anti-competitive racketeering than finance. Enough investment, however localized in a handful of companies, can shape an entire industry or even an entire economy, regardless of whether it makes any sense whatsoever. And let’s be clear: the Big Tech firms investing in AI aren’t simply responding to market forces; they’re creating them, defining them, controlling them. Nobody asked for AI; we’ve been told to collaborate.

Which demonstrates that capitalism, like AI, is no longer a theoretical model about nice, tidy ideas like free markets and competition. The reality of modern capitalism reveals it to be, at best, a neutral system made non-neutral by its operators. The invisible hand isn’t invisible because it’s magical; it’s invisible because we’re not supposed to see whose hand it actually is.

You want names though? I don’t have them all. That’s the point. It’s easy to blame the CEOs whose names are browbeaten into our heads over and over again, but beyond them is what I think of as The Fear of the Un-captured Dollar and the Unowned Person — a secret society of people who seem to believe that human potential is one thing: owning all the stuff, wielding all the power, seizing all the attention.

We now exist in what people call “late-stage capitalism,” where meaningful competition only occurs among those with the most capital, and their battles wreck the landscape around them. We scatter and dash amidst the rubble like the unseen NPCs of Metropolis while the titans clash in the sky. When capital becomes this concentrated, it exerts power at the level of sovereign nations. This reveals the theater that is the so-called power of governments. Nation-states increasingly seem like local franchises in a global system run by capital. This creates fundamental vulnerabilities in governmental systems that have not yet been tested by the degeneracy of late-stage capitalism.

And when that happens, the lack of power of the individual is laid bare — in the chat window, in the browser, on the screen, in the home, in the city, in the state, in the world. The much-lauded “democratic” technology of the early internet has given way to systems of surveillance and manipulation so comprehensive they would make 20th-century authoritarians weep with envy, not to mention a fear-induced appeasement of the destruction of norms and legal protections that spreads across our entire culture like an overnight frost of fascism.

AI accelerates this process. It centralizes power by centralizing the capacity to process and act upon information. It creates unprecedented asymmetries between those who own the models and those who are modeled.
Every interaction with an AI system becomes a one-way mirror: you see your reflection, while on the other side, entities you cannot see learn about you, categorize you, and make predictions about you.

So when a person resists AI, don’t assume they’re stubbornly digging their heels into the shifting sands of an outmoded ground. Perhaps give them credit for thinking logically and drawing a line between themselves and a future that treats them as nothing more than a bit in the machine. Resistance to AI isn’t necessarily Luddism. It isn’t a fear of progress. It might instead be a clear-eyed assessment of what kind of “progress” is actually being offered — and at what cost.

Liberty in the age of AI requires more than just formal rights. It demands structural changes to how technology is developed, deployed, and governed. It requires us to ask not just “what can this technology do?” but “who benefits from what this technology does?” And that conversation cannot happen if we insist on discussing AI as if it exists in a political and economic vacuum — as if the only questions worth asking are technical ones. The most important questions about AI aren’t about algorithms or capabilities; they’re about power and freedom.

To think about AI without thinking about capitalism, fascism, and liberty isn’t just incomplete — it’s dangerous. It blinds us to the real stakes of the transformation happening around us, encouraging us to focus on the technology rather than the systems that control it and the ends toward which it’s deployed.

Is it possible to conceive of AI that is “good” — as in distributed, not centralized; protective of intellectual property, not a light-speed pirate of the world’s creative output; respectful of privacy, not a listening agent of the powers-that-be; selectively and intentionally deployed where humans need the help, not a leveler of human purpose? (Anil Dash has some great points about this.) Perhaps, but such an AI is fundamentally incompatible with the system that created the AI we have.

As AI advances, we face a choice: Will we allow it to become another tool for concentrating power and wealth? Or will we insist upon human dignity and liberty? The answer depends not on technological developments, but on our collective willingness to recognize AI for what it is: not a force of nature, but a product of flawed human choices embedded in vulnerable human systems.
Technological entitlement, knowledge-assumptions, and other things.

Are we entitled to technology? A quick thought experiment: A new technological advance gives humans the ability to fly. Does it also confer upon us the right to fly? Let’s say this isn’t a Rocketeer situation — not a jetpack — but some kind of body-hugging anti-gravitic field, just to make it look and feel ever so much more magical and irresistible. Would that be worthy of study and experimentation? I’d have to say yes. But would it be a good idea to use it? I’d have to say no. We’ve learned this lesson already.

Are we entitled to access to anyone, anytime? That’s a tough one; it tugs on ideas of access itself — what that means, and how — as well as ideas of inaccess, like privacy. But let’s just say I’m walking down the street and see a stranger passing by. Is it my right to cross the street to say hello? I would say so. And I can use that right for many purposes, some polite — such as introducing myself — and some not so — like abruptly sharing some personal belief of mine. Fortunately, this stranger has the right to ignore me and continue on their way. And that’s where my rights end, I think. I don’t have the right to follow them shouting.

It turns out that’s what Twitter was. We got the jetpack of interpersonal communication: a technology that gives us the ability to reach anyone anytime. With it came plenty of good things — good kinds of access, a good kind of leveling. A person could speak past, say, some bureaucratic barrier that would have previously kept them silent. But it also allowed people with the right measure of influence to inundate millions of other people with lies to the point of warping the reality around them and reducing news to rereading and reprinting those lies just because they were said. Leave something like this in place long enough, and the technology itself becomes an illegitimate proxy for a legitimate right. Free speech, after all, does not equal an unchallenged social media account. Steeper and slicker is the technological slope from can to should to must.

–

Today I learned that before Four Tet, Kieran Hebden was the guitarist for a group called Fridge. I listened to their second album, Semaphore, this morning and it’s a fun mix of noises that feels very connected to the Four Tet I’ve known. The reason I mention this, though, is that it represents a pretty important principle for us all to remember. Don’t assume someone knows something! I’ve been a Four Tet fan ever since a friend included a song from Pause on a mix he made for me back in 2003. Ask me for my top ten records of all time, and I’ll probably include Pause. And yet it was only today, over two decades later, after watching a Four Tet session on YouTube, that I thought to read the Four Tet Wikipedia page.

–

Other Things

I’ve been staring at Pavel Ripley’s sketchbooks this week. It has been especially rare for me to find other people who use sketchbooks in the same way I do — as a means and end, not just a means. If you look at his work you’ll see what I mean. Just completely absorbing. My bud Blagoj, who has excellent taste, sent this Vercel font called Geist my way a while back. It has everything I like in a font — many weights, many glyphs, and all the little details at its edges and corners. USING IT. These hand-lettered magazine covers are so good. I’m vibing with these cosmic watercolors by Lou Benesch.
An Oral History of WIRED’s Original Website is worth reading (paywall tho), especially in an ad-blocking browser (I endorse Vivaldi), because as much as I love them, WIRED’s website has devolved into a truly hostile environment. “As a middle-aged man, I would’ve saved loads on therapy if I’d read Baby-Sitters Club books as a kid.” SAME. Richard Scarry and the art of children’s literature. If you’re reading this via RSS, that’s really cool! Email me — butler.christopher@proton.me — and let me know!
I often find myself contemplating the greatest creators in history — those rare artists, designers, and thinkers whose work transformed how we see the world. What constellation of circumstances made them who they were? Where did their ideas originate? Who mentored them? Would history remember them had they lived in a different time or place?

Leonardo da Vinci stands as perhaps the most singular creative mind in recorded history — the quintessential “Renaissance Man” whose breadth of curiosity and depth of insight seem almost superhuman. Yet examples like Leonardo can create a misleading impression that true greatness emerges only once in a generation or century. Leonardo lived among roughly 10-13 million Italians — was greatness truly as rare as one in ten million? We know several of his contemporaries, but still, the ratio remains vanishingly small. This presents us with two possibilities: either exceptional creative ability is almost impossibly rare, or greatness is more common than we realize and the rarity lies in recognition. I believe firmly in the latter.

Especially today, when we live in an attention economy that equates visibility with value. Social media follower counts, speaking engagements, press mentions, and industry awards have become the measuring sticks of design success. This creates a distorted picture of what greatness in design actually means. The truth is far simpler and more liberating: you can be a great designer and be completely unknown.

The most elegant designs often fade into the background, becoming invisible through their perfect functionality. Day-to-day life is scattered with the artifacts of unrecognized ingenuity — the comfortable grip of a vegetable peeler, the intuitive layout of a highway sign, or the satisfying click of a well-made light switch. These artifacts represent design excellence precisely because they don’t call attention to themselves or their creators. Who is responsible for them? I don’t know. That doesn’t mean they’re not out there.

This invisibility extends beyond physical objects. The information architect who structures a medical records system that saves lives through its clarity and efficiency may never receive public recognition. The interaction designer who simplifies a complex government form, making essential services accessible to vulnerable populations, might never be celebrated on design blogs or win prestigious awards. Great design isn’t defined by who knows your name, but by how well your work serves human needs. It’s measured in the problems solved, the frustrations eased, the moments of delight created, and the dignity preserved through thoughtful solutions. These metrics operate independently of fame or recognition.

Our obsession with visibility also creates a troubling dynamic: design that prioritizes being noticed over being useful. This leads to visual pollution, cognitive overload, and solutions that serve the designer’s portfolio more than the user’s needs. When recognition becomes the goal, the work itself often suffers. I was among the few who didn’t immediately recoil at the brash aesthetics of the Tesla Cybertruck, but it turns out that no amount of exterior innovation changes the fact that it is just not a good truck.

There’s something particularly authentic about unknown masters — those who pursue excellence for its own sake, refining their craft out of personal commitment rather than in pursuit of accolades. They understand that their greatest achievements might never be attributed to them, and they create anyway.
Their satisfaction comes from the integrity of the work itself.

This isn’t to dismiss the value of recognition when it’s deserved, or to suggest that great designers shouldn’t be celebrated. Rather, it’s a reminder that the correlation between quality and fame is weak at best, and that we should be suspicious of any definition of design excellence that depends on visibility. This is especially so today. The products of digital and interaction design are mayflies; most of what we make is lost to the rapid churn of the industry even before it can be lost to anyone’s memory.

The next time you use something that works so well you barely notice it, remember that somewhere, a designer solved a problem so thoroughly that both the problem and its solution became invisible. That designer might not be famous, might not have thousands of followers, might not be invited to speak at conferences — but they’ve achieved something remarkable: greatness through invisibility. Design greatness is measured not by the recognition of authorship, but by the creation of work so essential it becomes as inevitable as gravity, as unremarkable as air, and as vital as both.
Our world treats information like it’s always good. More data, more content, more inputs — we want it all without thinking twice. To say that the last twenty-five years of culture have centered around info-maximalism wouldn’t be an exaggeration. I hope we’re coming to the end of that phase. More than ever before, it feels like we have to — that we just can’t go on like this. But the solution cannot come from within; it won’t be a better tool or even better information to get us out of this mess. It will be us, feeling and acting differently.

Think about this comparison: Information is to wisdom what pornography is to real intimacy. I’m not here to moralize, so I make the comparison to pornography with all the necessary trepidation. Without judgement, it’s my observation that pornography depicts physical connection while creating emotional distance. I think information is like that. There’s a difference between information and wisdom that hinges on volume. More information promises to show us more of reality, but too much of it can easily hide the truth. Information can be pornography — a simulation that, when consumed without limits, can weaken our ability to experience the real thing.

When we feel overwhelmed by information — anxious and unable to process what we’ve already taken in — we’re realizing that “more” doesn’t help us find truth. But because we have also established information as a fundamental good in our society, failure to keep up with it, make sense of it, and even profit from it feels like a personal moral failure. There is only one way out of that. We don’t need another filter. We need a different emotional response to information. We should not only question why our accepted spectrum of emotional response to information — in the general sense — is mostly limited to the space between curiosity and desire, but actively develop a capacity for disgust when it becomes too much. And it has become too much.

Some people may say that we just need better information skills and tools, not less information. But this misses how fundamentally our minds need space and time to turn information into understanding. When every moment is filled with new inputs, we can’t fully absorb, process, and reflect upon what we’ve consumed. Reflection, not consumption, creates wisdom. Reflection requires quiet, isolation, and inactivity. Some people say that while technology has expanded over the last twenty-five years, culture hasn’t. If they needed a good defense for that idea, well, I think this is it: A world without idleness is truly a world without creativity.

I’m using loaded moral language here for a purpose — to illustrate an imbalance in our information-saturated culture. Idleness is a pejorative these days, though it needn’t be. We don’t refer to compulsive information consumption as gluttony, though we should. And if attention is our most precious resource — as an information-driven economy would imply — why isn’t its commercial exploitation condemned as avarice?

As I ask these questions I’m really looking for where individuals like you and me have leverage. If our attention is our currency, then leverage will come with the capacity to not pay it. To not look, to not listen, to not react, to not share. And as has always been true of us human beings, actions are feelings echoed outside the body. We must learn not just to withhold our attention but to feel disgust at ceaseless claims to it.
How elimination, curation, and optimization can help us see through the technological mirror.

Technology functions as both mirror and lens — reflecting our self-image while simultaneously shaping how we see everything else. This metaphor of recursion, while perhaps obvious once stated, is one that most people instinctively resist. Why this resistance? I think it is because the observation is not only about a kind of recursion, but is itself recursive. The contexts in which we discuss technology’s distorting effects tend to be highly technological — internet-based forums, messaging, social media, and the like. It’s difficult to clarify from within, isn’t it? When we try to analyze or critique a technology while using it to do so, it’s as if we’re critiquing the label from inside the bottle. And these days, the bottle is another apt metaphor; it often feels like technology is something we are trapped within.

And that’s just at the surface — the discussion layer. It goes much deeper. It’s astounding to confront the reality that nearly all the means by which we see and understand ourselves are technological. So much of modern culture is in its artifacts, and the rest couldn’t be described without them. There have been oral traditions, of course, but once we started making things, they grew scarce. For a human in the twenty-first century, self-awareness, cultural identification, and countless other aspects of existence are all, in some way or another, technological. It’s difficult to question the mirror’s image when we’ve never seen ourselves without it. The interfaces through which we perceive ourselves and interpret the world are so integrated into our experience that recognizing their presence, let alone their distorting effects, requires an almost impossible perspective shift.

Almost impossible. Because of course it can be done. In fact, I think it’s a matter of small steps evenly distributed throughout a normal lifestyle. It’s not a matter of secret initiation or withdrawing from society, though I think it can sometimes feel that way. How, then, can one step outside the mirror’s view? I’ve found three categories of action particularly helpful:

Elimination

One option we always have is to simply not use a thing. I often think about how fascinating it is that to not use a particular technology in our era seems radical — truly counter-cultural. The more drastic rejecting any given technology seems, the better an example it is of how dependent we have become upon it. Imagine how difficult a person’s life would be today if they were to entirely reject the internet. There’s no law in our country against opting out of the internet, but the countless day-to-day dependencies upon it nearly amount to a cumulative obligation to be connected to it. Nevertheless, a person could do it. Few would, but they could. This kind of “brute force” response to technology has become a YouTube genre — the “I Went 30 Days Without ____” video is quite popular. And this is obviously because of how much effort it requires to eliminate even comparatively minor technologies from one’s life. Not the entire internet, but just social media, or just streaming services, or just a particular device or app. Elimination isn’t easy, but I’m a fan of it. The Amish are often thought of as simply rejecting modernity, but that’s not an accurate description of what actually motivates their way of life.
Religion plays a foundational role, of course, but each Amish community works together to decide upon many aspects of how they live, including what technologies they adopt. Their guiding principle is whether a thing or practice strengthens their community. And their decision is a collective one. I find that inspiring. When I reject a technology, I do so because I either don’t feel I need it or because I feel that it doesn’t help me live the way I want to live. It’s not forever, and it isn’t with judgement for anyone else but me. These are probably my most radical eliminations: most social media (I still reluctantly have a LinkedIn profile), streaming services (except YouTube), all “smart home” devices of any kind, smartwatches, and for the last decade and counting, laptops. Don’t @ me because you can’t ;)

Curation

What I have in mind here is curation of information, not of technologies. Since it is simply impossible to consume all information, we all curate in some way, whether we’re aware of it or not. For some, though, this might actually be a matter of what technologies they use — for example, if a person only uses Netflix, then they only see what Netflix shows them. That’s curation, but Netflix is doing the work. However, I think it’s a good exercise to do a bit more curation of one’s own. I believe that if curation is going to be beneficial, it must involve being intentional about one’s entire media diet — what information we consume, from which sources, how frequently, and why. This last part requires the additional work of discerning what motivates and funds various information sources. Few, if any, are truly neutral. The reality is that as information grows in volume, the challenge of creating useful filters for it increases to near impossibility. Information environments operated by algorithms filter information for you based upon all kinds of factors, some of which align with your preferences and many of which don’t. There are many ways to avoid this; they are all more inconvenient than a social media news feed, and it is imperative that more people make the effort to do them. They range from subscribing to carefully chosen sources, to using specialized apps, feed readers, ad- and tracking-blocking browsers, and VPNs to control how information gets to you. I recommend all of that and a constant vigilance because, sadly, there is no filter that will only show you the true stuff.

Optimization

Finally, there’s optimization — the fine-tuning you can do to nearly anything and everything you use. I’ve become increasingly active in seeking out and adjusting even the most detailed of application and device settings, shaping my experiences to be quieter, more limited, and aligned with my intentions rather than the manufacturers’ defaults. I recently spent thirty minutes redesigning nearly my entire experience in Slack in ways I had never been aware were even possible. It’s made a world of difference to me. Just the other day, I found a video with several recommendations for altering default settings in macOS that have completely solved daily annoyances I had just tolerated for years. I am always adjusting the way I organize files, the apps I use, and the way I use them because I think optimization is always worthwhile. And if I can’t optimize it, I’m likely to eliminate it.

None of these approaches offers perfect protection from technological mediation, but together they create meaningful space for more direct control over your experience.
But perhaps most important is creating physical spaces that remain relatively untouched by digital technology. I often think back to long trips I took before the era of ubiquitous computing and connection. During a journey from Providence to Malaysia in 2004, I stowed my laptop and cell phone knowing they’d be useless to me during 24 hours of transit. There was no in-cabin wifi, no easy way to have downloaded movies to my machine in advance, no place to even plug anything in. I spent most of that trip looking out the window, counting minutes, and simply thinking — a kind of unoccupied time that has become nearly extinct since then.

What makes technological discernment in the digital age particularly challenging is that we’re drowning in a pit of rope where the only escape is often another rope. Information technology is designed to be a nearly wraparound lens on reality; it often feels like the only way to keep using a thing is to use another thing that limits the first thing. People who know me well have probably heard me rant for years about phone cases — “why do I need a case for my case?!” These days, the sincere answer to many people’s app overwhelm is another app. It’s almost funny.

And yet, I do remain enthusiastic about technology’s creative potential. The ability to shape our world by making new things is an incredible gift. But we’ve gone overboard, creating new technologies simply because we can, without a coherent idea of how they’ll shape the world. This makes us bystanders to what Kevin Kelly describes as “what technology wants” — the agenda inherent in digital technology that makes it far from neutral.

What we ultimately seek isn’t escape from technology itself, but recovery of certain human experiences that technology tends to overwhelm: sustained attention, silence, direct observation, unstructured thought, and the sense of being fully present rather than partially elsewhere. The most valuable skill in our digital age isn’t technical proficiency but technological discernment — the wisdom to know when to engage, when to disconnect, and how to shape our tools to serve our deeper human needs rather than allowing ourselves to be shaped by them.

“It does us no good to make fantastic progress if we do not know how to live with it.” – Thomas Merton
More in design
Founded in 1986 by French entrepreneur Eric Woog, Matsuri introduced kaiten sushi, or conveyor belt sushi, to Paris with...
In his book “The Order of Time” Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality such as “What is real?” and “What exists?” But those are bad questions, he says. Why?

the adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.” Both answers are correct, because they are using different meanings of the verb “to exist.”

He notes how Pinocchio “exists” and is “real” in terms of a literary character, but not so far as any official Italian registry office is concerned.

To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature.

The point he goes on to make is that our language has to evolve and adapt with our knowledge. Our grammar developed from our limited experience, before we knew what we know now and before we became aware of how imprecise it was in describing the richness of the natural world. Rovelli gives an example of this from a text of antiquity which uses confusing grammar to get at the idea of the Earth having a spherical shape:

For those standing below, things above are below, while things below are above, and this is the case around the entire earth.

On its face, that is a very confusing sentence full of contradictions. But the idea in there is profound: the Earth is round and direction is relative to the observer. Here’s Rovelli:

How is it possible that “things above are below, while things below are above"? It makes no sense…But if we reread it bearing in mind the shape and the physics of the Earth, the phrase becomes clear: its author is saying that for those who live at the Antipodes (in Australia), the direction “upward” is the same as “downward” for those who are in Europe. He is saying, that is, that the direction “above” changes from one place to another on the Earth. He means that what is above with respect to Sydney is below with respect to us. The author of this text, written two thousand years ago, is struggling to adapt his language and his intuition to a new discovery: the fact that the Earth is a sphere, and that “up” and “down” have a meaning that changes between here and there. The terms do not have, as previously thought, a single and universal meaning.

So language needs innovation as much as any technological or scientific achievement. Otherwise we find ourselves arguing over questions of deep import in a way that ultimately amounts to merely a question of grammar.

Email · Mastodon · Bluesky
Solid Order is a young fine jewelry brand from China, known for its neutral aesthetic inspired by geometric forms and...
Via Jeremy Keith’s link blog I found this article: Elizabeth Goodspeed on why graphic designers can’t stop joking about hating their jobs. It’s about the disillusionment of designers since the ~2010s. Having ridden that wave myself, there’s a lot of very relatable stuff in there about how design has evolved as a profession. But before we get into the meat of the article, there are some bangers worth acknowledging, like this:

Amazon – the most used website in the world – looks like a bunch of pop-up ads stitched together.

lol, burn. Haven’t heard Amazon described this way, but it’s spot on.

The hard truth, as pointed out in the article, is this: bad design doesn’t hurt profit margins. Or at least there’s no immediately obvious, concrete data or correlation proving that it does. So most decision makers don’t care. You know what does help profit margins? Spending less money. Cost-savings initiatives. Those always provide a direct, immediate, seemingly obvious correlation. So those initiatives get prioritized. Fuzzy human-centered initiatives (humanities-adjacent stuff) are difficult to quantitatively (and monetarily) measure. “Let’s stop printing paper and sending people stuff in the mail. It’s expensive. Send them emails instead.” Boom! Money saved for everyone. That’s easier to prioritize than asking, “How do people want us to communicate with them — if at all?” Nobody ever asks that last part.

Designers quickly realized that in most settings they serve the business first, customers second — or third, or fourth, or... Shar Biggers [says] designers are “realising that much of their work is being used to push for profit rather than change.” Meet the new boss. Same as the old boss.

As students, designers are encouraged to make expressive, nuanced work, and rewarded for experimentation and personal voice. The implication, of course, is that this is what a design career will look like: meaningful, impactful, self-directed. But then graduation hits, and many land their first jobs building out endless Google Slides templates or resizing banner ads...no one prepared them for how constrained and compromised most design jobs actually are.

Reality hits hard. And here’s the part Jeremy quotes:

We trained people to care deeply and then funnelled them into environments that reward detachment. And the longer you stick around, the more disorienting the gap becomes – especially as you rise in seniority. You start doing less actual design and more yapping: pitching to stakeholders, writing brand strategy decks, performing taste.

Less craft, more optics; less idealism, more cynicism. Less work advocating for your customers, more work advocating for yourself and your team within the organization itself. Then the cynicism sets in. We’re not making software for others. We’re making company numbers go up, so our numbers ($$$) will go up.

Which reminds me: Stephanie Stimac wrote about reaching 1 year at Igalia, and what stood out to me in her post was that she didn’t feel a pressing requirement to create visibility into her work and measure (i.e. prove) its impact. I’ve never been good at that. I’ve seen its necessity, but am just not good at doing it. Being good at building is great. But being good at the optics of building is often better — for you, your career, and your standing in many orgs.

Anyway, back to Elizabeth’s article. She notes you’ll burn out trying to monetize something you love — especially when it’s in pursuit of maintaining a cost of living.
Once your identity is tied up in the performance, it’s hard to admit when it stops feeling good.

It’s a great article, and if you’ve been in the design profession building software, it’s worth your time. Email · Mastodon · Bluesky
Weekly curated resources for designers — thinkers and makers.