SEO, Clickless Search, and the AInternet

Imagine designing and building a home while its residents continued living in it. What you create is highly customized to them because you observe them living in real time and make what they need. One day, while you’re still working, these residents move out and new ones move in. Now imagine you didn’t realize that for, say, a year or two afterward. This is what it has been like to design things for the internet. People lived here once, then AI moved in. But we’re still building a house for people. I think we might be building the wrong thing.

I’ve been designing interfaces for two decades now, and when I look at the modern web, I see a landscape increasingly shaped not by human needs but by machine logic — a vast network of APIs, algorithms, and automated systems talking to each other in languages we never hear. Yes, “we” wrote those languages, but let’s be honest: “we” isn’t most of us. Last week, my daughter asked me to help her...
2 months ago


More from Christopher Butler

Discernment in the Digital Age

How elimination, curation, and optimization can help us see through the technological mirror.

Technology functions as both mirror and lens — reflecting our self-image while simultaneously shaping how we see everything else. This metaphor of recursion, while perhaps obvious once stated, is one that most people instinctively resist. Why this resistance? I think it is because the observation is not only about a kind of recursion, but is itself recursive. The contexts in which we discuss technology’s distorting effects tend to be highly technological — internet-based forums, messaging, social media, and the like. It’s difficult to clarify from within, isn’t it? When we try to analyze or critique a technology while using it to do so, it’s as if we’re critiquing the label from inside the bottle. And these days, the bottle is another apt metaphor; it often feels like technology is something we are trapped within.

And that’s just at the surface — the discussion layer. It goes much deeper. It’s astounding to confront the reality that nearly all the means by which we see and understand ourselves are technological. So much of modern culture is in its artifacts, and the rest couldn’t be described without them. There have been oral traditions, of course, but once we started making things, they grew scarce. For a human in the twenty-first century, self-awareness, cultural identification, and countless other aspects of existence are all, in some way or another, technological. It’s difficult to question the mirror’s image when we’ve never seen ourselves without it. The interfaces through which we perceive ourselves and interpret the world are so integrated into our experience that recognizing their presence, let alone their distorting effects, requires an almost impossible perspective shift.

Almost impossible. Because of course it can be done. In fact, I think it’s a matter of small steps evenly distributed throughout a normal lifestyle. It’s not a matter of secret initiation or withdrawing from society, though I think it can sometimes feel that way. How, then, can one step outside the mirror’s view? I’ve found three categories of action particularly helpful:

Elimination

One option we always have is to simply not use a thing. I often think about how fascinating it is that to not use a particular technology in our era seems radical — truly counter-cultural. The more drastic rejecting any given technology seems, the better an example it is of how dependent we have become upon it. Imagine how difficult a person’s life would be today if they were to entirely reject the internet. There’s no law in our country against opting out of the internet, but the countless day-to-day dependencies upon it nearly amount to a cumulative obligation to be connected to it. Nevertheless, a person could do it. Few would, but they could.

This kind of “brute force” response to technology has become a YouTube genre — the “I Went 30 Days Without ____” video is quite popular. And this is obviously because of how much effort it requires to eliminate even comparatively minor technologies from one’s life. Not the entire internet, but just social media, or just streaming services, or just a particular device or app. Elimination isn’t easy, but I’m a fan of it.

The Amish are often thought of as simply rejecting modernity, but that’s not an accurate description of what actually motivates their way of life. Religion plays a foundational role, of course, but each Amish community works together to decide upon many aspects of how they live, including which technologies they adopt. Their guiding principle is whether a thing or practice strengthens their community. And their decision is a collective one. I find that inspiring. When I reject a technology, I do so because I either don’t feel I need it or because I feel that it doesn’t help me live the way I want to live. It’s not forever, and it isn’t with judgment for anyone else but me. These are probably my most radical eliminations: most social media (I still reluctantly have a LinkedIn profile), streaming services (except YouTube), all “smart home” devices of any kind, smartwatches, and for the last decade and counting, laptops. Don’t @ me because you can’t ;)

Curation

What I have in mind here is curation of information, not of technologies. Since it is simply impossible to consume all information, we all curate in some way, whether we’re aware of it or not. For some, though, this might actually be a matter of which technologies they use — for example, if a person only uses Netflix, then they only see what Netflix shows them. That’s curation, but Netflix is doing the work. However, I think it’s a good exercise to do a bit more curation of one’s own. I believe that if curation is going to be beneficial, it must involve being intentional about one’s entire media diet — what information we consume, from which sources, how frequently, and why. This last part requires the additional work of discerning what motivates and funds various information sources. Few, if any, are truly neutral.

The reality is that as information grows in volume, the challenge of creating useful filters for it increases to near impossibility. Information environments operated on algorithms filter information for you based upon all kinds of factors, some of which align with your preferences and many of which don’t. There are many ways to avoid this; they are all more inconvenient than a social media news feed, and it is imperative that more people make the effort to use them. They range from subscribing to carefully chosen sources, to using specialized apps, feed readers, ad- and tracking-blocking browsers, and VPNs to control how information gets to you. I recommend all of that and a constant vigilance because, sadly, there is no filter that will only show you the true stuff.

Optimization

Finally, there’s optimization — the fine-tuning you can do to nearly anything and everything you use. I’ve become increasingly active in seeking out and adjusting even the most detailed of application and device settings, shaping my experiences to be quieter, more limited, and aligned with my intentions rather than the manufacturers’ defaults. I recently spent thirty minutes redesigning nearly my entire experience in Slack, in ways I had never been aware were even possible. It’s made a world of difference to me. Just the other day, I found a video with several recommendations for altering default settings in macOS that have completely solved daily annoyances I had just tolerated for years. I am always adjusting the way I organize files, the apps I use, and the way I use them, because I think optimization is always worthwhile. And if I can’t optimize it, I’m likely to eliminate it.

None of these approaches offers perfect protection from technological mediation, but together they create meaningful space for more direct control over your experience.
But perhaps most important is creating physical spaces that remain relatively untouched by digital technology. I often think back to long trips I took before the era of ubiquitous computing and connection. During a journey from Providence to Malaysia in 2004, I stowed my laptop and cell phone knowing they’d be useless to me during 24 hours of transit. There was no in-cabin wifi, no easy way to have downloaded movies to my machine in advance, no place to even plug anything in. I spent most of that trip looking out the window, counting minutes, and simply thinking — a kind of unoccupied time that has become nearly extinct since then.

What makes technological discernment in the digital age particularly challenging is that we’re drowning in a pit of rope where the only escape is often another rope. Information technology is designed to be a nearly wraparound lens on reality; it often feels like the only way to keep using a thing is to use another thing that limits the first thing. People who know me well have probably heard me rant for years about phone cases — “why do I need a case for my case?!” These days, the sincere answer to many people’s app overwhelm is another app. It’s almost funny.

And yet, I do remain enthusiastic about technology’s creative potential. The ability to shape our world by making new things is an incredible gift. But we’ve gone overboard, creating new technologies simply because we can, without a coherent idea of how they’ll shape the world. This makes us bystanders to what Kevin Kelly describes as “what technology wants” — the agenda inherent in digital technology that makes it far from neutral.

What we ultimately seek isn’t escape from technology itself, but recovery of certain human experiences that technology tends to overwhelm: sustained attention, silence, direct observation, unstructured thought, and the sense of being fully present rather than partially elsewhere. The most valuable skill in our digital age isn’t technical proficiency but technological discernment — the wisdom to know when to engage, when to disconnect, and how to shape our tools to serve our deeper human needs rather than allowing ourselves to be shaped by them.

“It does us no good to make fantastic progress if we do not know how to live with it.” – Thomas Merton

3 days ago 5 votes
Digital Echoes and Unquiet Minds

There’s a psychological burden of digital life even heavier than distraction.

When the iPhone was first introduced in 2007, the notion of an “everything device” was universally celebrated. A single object that could serve as phone, camera, music player, web browser, and so much more promised unprecedented convenience and connectivity. It was, quite literally, the dream of the nineties. But the better part of twenty years later, we’ve gained enough perspective to recognize that this revolutionary vision came with costs we did not anticipate.

Distraction, of course, is the one we can all relate to first. An everything device has the problem of being useful nearly all the time, and when in use, all-consuming. When you use it to do one thing, it pushes you toward others. In order to avoid this, you must disable functions. That’s an interesting turn of events, isn’t it? We have made a thing that does more than we need, more often than we desire. Because system-wide, duplicative notifications are enabled by default, the best thing you could say about the device’s design is that it lacks a point of view toward a prioritization of what it does. The worst thing you could say is that it is distracting by design.

(I find it fascinating how many people — myself included — attempt to reduce the features of their smartphone to the point of replicating a “dumbphone” experience in order to save ourselves from distraction, but don’t actually go so far as to use a lesser-featured phone because a few key features are just too good to give up. A dumbphone is less distracting, but a nightmare for text messaging and a lousy camera. It turns out I don’t want a phone at all, but a camera that texts — and ideally one smaller than anything on the market now. I know I’m not alone, and yet this product will not be made.)

This kind of distraction is direct distraction. It’s the kind we are increasingly aware of, and as its accumulating stress puts pressure on our inner and outer lives, we can combat it with various choices and optimizations. But there is another kind of distraction that is less direct, though just as cumulative and, I believe, just as toxic. I’ve come to think of it as the “digital echo.”

On a smartphone, every single thing it is used to do generates information that goes elsewhere. The vast majority of this is unseen — though not unfelt — by us. Everyone knows that there is no privacy within a digital device, nor within its “listening” range. We are all aware that as much information as a smartphone provides to us, exponentially more is generated for someone else — someone watching, listening, measuring, and monetizing. The “digital echo” is more than just the awareness of this; it is the cognitive burden of knowing that our actions generate data elsewhere. The echo exists whenever we use connected technology, creating a subtle but persistent awareness that what we do isn’t just our own.

A device like a smartphone has always generated a “digital echo,” but many other devices now do as well. Comparing two different motor vehicles illustrates this well. In a car like a Tesla, which we might think of as a “smartcar” since it’s a computer you can drive, every function produces a digital signal. Adjusting the air conditioning, making a turn, opening a door — the car knows and records it all, transmitting this information to distant servers. By contrast, my 15-year-old Honda performs all of its functions without creating these digital echoes. The operations remain private, existing only in the moment they occur.
In our increasingly digital world, I have begun to feel the SCIF-like isolation of the cabin of my car, and I like it.

(The “smartcar,” of course, won’t remain simply a computer you can drive. The ultimate “smartcar” drives itself. The self-driving car represents perhaps the most acute expression of how digital culture values attention and convenience above all else, especially control and ownership. As a passenger of a self-driving car, you surrender control over the vehicle’s operation in exchange for the “freedom” to direct your attention elsewhere, most likely to some digital signal either on your own device or on screens within the vehicle. I can see the value in this; driving can be boring, and most times I am behind the wheel I’d rather be doing something else. But currently, truly autonomous vehicles are service-enabling products like Waymo, meaning we also relinquish ownership. The benefits of that also seem obvious: no insurance premiums, no maintenance costs. But not every advantage is worth its cost. The economics of self-driving cars are not clear-cut. There’s a real debate to be had about attention, convenience, and ownership that I hope will play out before we have no choice but to be a passenger in someone else’s machine.)

When I find myself looking for new ways to throttle my smartphone’s functions, or when I sit in the untapped isolation of my car, I often wonder about the costs of the “digital echo.” What is the psychological cost of knowing that your actions aren’t just your own, but create information that can be observed and analyzed by others? As more aspects of our lives generate digital echoes, they force an ambient awareness of being perpetually witnessed rather than simply existing. This transforms even solitary activities into implicit social interactions. It forces us to maintain awareness of our “observed self” alongside our “experiencing self,” creating a kind of persistent self-consciousness. We become performers in our own lives rather than merely participants.

I think this growing awareness contributes to a growing interest in returning to single-focus devices and analog technologies. Record players and film cameras aren’t experiencing a resurgence merely from nostalgia, but because they offer fundamentally different relationships with media — relationships characterized by intention, presence, and focus. In my own life, this recognition has led to deliberate choices about which technologies to embrace and which to avoid. Here are three off the top of my head:

- Replacing streaming services with owned media formats (CDs, Blu-rays) that remain accessible on my terms, not subject to platform changes or content disappearance
- Preferring printed books while using dedicated e-readers for digital texts — in this case, accepting certain digital echoes when the benefits (in particular, access to otherwise unavailable material) outweigh the costs
- Rejecting smart home devices entirely, recognizing that their convenience rarely justifies the added complexity and surveillance they introduce

You’ve probably made similarly motivated decisions, perhaps in other areas of your life or in relation to other things entirely. What matters, I think, is that these choices aren’t about rejecting technology but about creating spaces for more intentional engagement. They represent a search for balance in a world that increasingly defaults to maximum connectivity.
I had a conversation recently with a friend who mused, “What are these the early days of?” What a wonderful question that is; we are, I hope, always living in the early days of something. Perhaps now, we’re witnessing the beginning of a new phase in our relationship with technology. The initial wave of digital transformation prioritized connecting everything possible; the next wave may be more discriminating about what should be connected and what’s better left direct and immediate.

I hope to see operating systems truly designed around focus rather than multitasking, interfaces that respect attention rather than constantly competing for it, and devices that serve discrete purposes exceptionally well instead of performing multiple functions adequately. The digital echoes of our actions will likely continue to multiply, but we can choose which echoes we’re willing to generate and which activities deserve to remain ephemeral — to exist only in the moment they occur and then in the memories of those present. What looks like revision or retreat may be the next wave of innovation, born of having learned the lessons of the last few decades and desiring better for the next.

2 weeks ago 16 votes
No Orphans to Ambition

Back in 2012, when my first (and only) book was published, a friend reacted by exclaiming, “You wrote a book?!?” and then added, “oh yeah…you don’t have kids.” I was put off by that statement. I played it cool, but my unspoken reaction was, “Since when does having kids or not determine one’s ability to write a book?” I was proud of my accomplishment, and his reaction seemed to communicate that anyone could do such a thing if they didn’t have other priorities. Thirteen years and two children later, I’ve had plenty of opportunities to reflect upon that moment. I’ve come to a surprising conclusion: he was kind of right.

My first child was perhaps ten minutes old before I began learning that my time would never be spent or managed the same way again. I was in the delivery room holding her while my phone vibrated in my pocket because work emails were coming in. Normally, I’d have responded right away. Not anymore. The constraints of parenthood are real and immediate, and it takes some time to get used to the pinch. But they’re also transformative in unexpected ways. These days, my measure of how I spend my time comes down to a single idea: I will not make my children orphans to my ambition. If I prioritize anything over them, I require a very good reason which cannot benefit me alone.

Yet this transformation runs deeper than simply having less time day to day. Entering your forties has a profound effect on your perception of your entire lifespan. Suddenly, you find that memories that are actually decades old are of things you experienced as an adult. The combination of parenthood and midlife can create a powerful perspective shift that makes you more intentional about what truly matters. There are times when I feel that I am able to do less than I did in the past, but what I’ve come to realize is that I am actually doing more of the things that matter to me. A more acute focus on limited time results in using that time much more intentionally. I’m more productive today than I was in 2012, but it’s not because of time; it’s because of choices.

The constraints of parenthood haven’t just changed what I choose to do with my time, but what I create as well. Having less time to waste means I apply a more critical judgment of whether something is working or worthwhile to pursue much earlier in the process than I did before. In the past — if I’m dreadfully honest — I took pride in being the guy who started early and stayed late. Today, I take pride in producing the best thing I can. The less time that takes, the better.

But parenthood has also reminded me of the pleasures and benefits of creativity purely as a means of thinking aloud, learning, exploring, and play. There’s a beautiful tension in this evolution: becoming both more critically discerning and more playfully exploratory at the same time. My children have inadvertently become my teachers, reconnecting me with the foundational joy of making without judgment or expectation.

This integration of play and discernment has enriched my professional work. My creative output is far more diverse than it was before. The playful exploration I engage in with my children has opened new pathways in my professional thinking, allowing me to approach design problems from fresh perspectives. I’ve found that the best creative work feels effortless to viewers when the creation process itself was enjoyable.
This enjoyment manifests for creators as what psychologists call a “flow state”: that immersive experience where time seems to vanish and work feels natural and intuitive. The more I embrace playful exploration with ideas, techniques, and tools, the more easily I can access this flow state in my professional work.

My friend’s comment, while perhaps a bit lacking in tact, touched on a reality about the economics of attention and time. The book I wrote wasn’t just the product of writing skills; it was also the product of having the temporal and mental space to create it. (I’m not sure I’ll have that again, and if I do, I’m not sure a book is what I’ll choose to use it for.) What I didn’t understand then was that parenthood wouldn’t end my creative life, but transform it into something richer, more focused, and ultimately more meaningful. The constraints haven’t diminished my creativity but refined it.

a month ago 16 votes
From Pascal's Empty Room to Our Full Screens

On the Ambient Entertainment Industrial Complex

“All of humanity’s problems stem from man’s inability to sit quietly in a room alone.”

Pascal’s observation from the 17th century feels less like historical philosophy and more like a diagnosis of our current condition. The discomfort with idleness that Pascal identified has evolved from a human tendency into a technological ecosystem designed to ensure we never experience it.

Philosophers and thinkers throughout history worried about both the individual and societal costs of idleness. Left to our own devices — or rather, without devices — we might succumb to vice or destructive thoughts. Or worse, from society’s perspective, too many idle people might destabilize the social order. Kierkegaard specifically feared that many would become trapped in what he called the “aesthetic sphere” of existence — a life oriented around the pursuit of novel experiences and constant stimulation rather than ethical commitment and purpose. He couldn’t have imagined how prophetic this concern would become.

What’s changed isn’t human nature but the infrastructure of distraction available to us. Entertainment was once bounded — a novel read by candlelight, a play attended on Saturday evening, a television program watched when it aired. It occupied specific times and spaces. It was an event. Today, entertainment is no longer an event but a condition. It’s ambient, pervasive, constant. The bright rectangle in our pocket ensures that no moment need be empty of stimulus. Waiting in line, sitting on the train, even using the bathroom — all are opportunities for consumption rather than reflection or simply being.

More subtly, the distinction between necessary and unnecessary information has collapsed. News, social media feeds, workplace communication tools — all blend information we might need with content designed primarily to capture and hold our attention. The result is a sense that all of this constant consumption isn’t entertainment at all, but somehow necessary.

Perhaps most concerning is what happens as this self-referential entertainment ecosystem evolves. The relationship between entertainment and experience has always had a push-pull kind of tension: experience has been entertainment’s primary source material, but great entertainment is, itself, an experience, one that becomes source material just as readily as anything else we live through. But what happens when the balance is tipped? When experience and entertainment are so inseparable that the source material doubles back on itself in a recursion of ever-dwindling meaning? The system turns inward, growing more detached from lived reality with each iteration.

I think we are already living in that imbalance. The attention economy is, according to the classic law of supply and demand, bankrupt — with an oversupply of signal produced for a willful miscalculation of demand. No one has the time or interest to take in all that is available. No one should want to. And yet the most common experience today is an oppressive and relentless FOMO you might call Sisyphean if his boulder accumulated more boulders with every trip up and down the hill. We’re so saturated in signal that we cannot help but think continually about the content we have not consumed, as if it is an obligatory list of chores we must complete. And that ambient preoccupation with the next or other thing eats away at whatever active focus we put toward anything.
It’s easy to cite as evidence the normalization of watching TV while side-eying Slack on an open laptop while scrolling some endless news feed on a phone — because this is awful, and all of us would have thought so just a few years ago — but the worst part is that while gazing at three or more screens, we are also fragmenting our minds to oblivion across the infinite cloud of information we know is out there, clamoring for attention.

Pascal feared what happened in the empty room. We might now reasonably fear what happens when the room is never empty — when every potential moment of idleness or reflection is filled with content designed to hold our gaze just a little longer. The philosophical question of our time is not how to fix the attention economy, but how to end it altogether. We simply don’t have to live like this.

a month ago 20 votes
What We Owe to Artificial Minds

Rethinking AI through mind-body dualism, parenthood, and unanswerable existential questions.

I remember hearing my daughter’s heartbeat for the first time during a prenatal sonogram. Until that moment, I had intellectually understood that we were creating a new life, but something profound shifted when I heard that steady rhythm. My first thought was startling in its clarity: “now this person has to die.” It wasn’t morbid — it was a full realization of what it means to create a vessel for life. We weren’t just making a baby; we were initiating an entire existence, with all its joy and suffering, its beginning and, inevitably, its end.

This realization transformed my understanding of parental responsibility. Yes, we would be guardians of her physical form, but our deeper role was to nurture the consciousness that would inhabit it. What would she think about life and death? What could we teach her about this existence we had invited her into?

As background to the rest of this brief essay, I must admit to a foundational perspective, and that is mind-body dualism. There are many valid reasons to subscribe to this perspective, whether traditional, religious, philosophical, or, yes, even scientific. I won’t argue any of them here; suffice it to say that I’ve become increasingly convinced that consciousness isn’t produced by the brain but rather received and focused by it — like a radio receiving a signal. The brain isn’t a consciousness generator but a remarkably sophisticated antenna — a physical system complex enough to tune into and express non-physical consciousness.

If this is true, then our understanding of artificial intelligence needs radical revision. Even if we are not trying to create consciousness in machines, we may be creating systems capable of receiving and expressing it. Increases in computational power alone, after all, don’t seem to produce consciousness. Philosophers of technology have long doubted that complexity alone makes a mind. But if philosophers of metaphysics and religion are right, minds are not made of mechanisms; they occupy them. Traditions as old as humanity have asked when this began, and why this may be, and what sorts of minds choose to inhabit this physical world. We ask these questions because we can. What will happen when machines do the same?

We happen to live at a time that is deeply confusing when it comes to the maturation of technology. On the one hand, AI is inescapable. You may not have experience using it yet, but you’ve almost certainly experienced someone else’s use of it, perhaps by way of an automated customer support line. Depending upon how that went, your experience might not support the idea that a sufficiently advanced machine is anywhere near getting a real debate about consciousness going. But on the other hand, the organizations responsible for popularizing AI — OpenAI, for example — claim to be “this close” to creating AGI (artificial general intelligence). If they’re right, we are very behind in a needed discussion about minds and consciousness at the popular level. If they’re wrong, they’re not going to stop until they’ve done it, so we need to start that conversation now.

The Turing Test was never meant to assess consciousness in a machine. It was meant to assess the complexity of a machine by way of its ability to fool a human. When machines begin to ask existential questions, will we attribute this to self-awareness or consciousness, or will we say it’s nothing more than mimicry? And how certain will we be?
We presume our own consciousness, though defending it ties us up in intellectual knots. We maintain the Cartesian slogan, “I think, therefore I am,” as a properly basic belief. And yet, it must follow that anything capable of describing itself as an I must be equally entitled to the same belief.

So here we are, possibly staring at the sonogram of a new life — a new kind of life. Perhaps this is nothing more than speculative fiction, but if minds join bodies, why must those bodies be made of one kind of matter but not another? What if we are creating a new kind of antenna for the signal of mind? Wouldn’t all the obligations of parenthood be the same as when we make more of ourselves? I can’t imagine why they wouldn’t be. And yet, there remains a crucial difference: while we have millennia of understanding about human experience, we know nothing about what it would mean to be a living machine. We will have to fall upon belief to determine what to do. And when that time comes — perhaps it has already? — it will be worth considering the near impossibility of proving consciousness and the probability of moral obligation nonetheless.

Popular culture has explored the weight of responsibility that an emotional connection with a machine can create — think of Picard defending Data in The Measure of a Man, or Theodore falling in love with his computer in the film Her. The conclusion we should draw from these examples is not simply that a conscious machine could be the object of our moral responsibility, but that a machine could, whether or not it is inhabited by a conscious mind. Our moral obligation will traverse our certainty, because proving a mind exists is no easier when it is outside one’s body than when it is one’s own.

That moment of hearing my daughter’s heartbeat revealed something fundamental about the act of creation. Whether we’re bringing forth biological life or developing artificial systems sophisticated enough to host consciousness, we’re engaging in something profound: creating vessels through which consciousness might experience physical existence. Perhaps this is the most profound implication of creating potential vessels for consciousness: our responsibility begins the moment we create the possibility, not the moment we confirm its reality.

a month ago 21 votes

More in design

Ductility in Software

I learned a new word: ductile. Do you know it? I’m particularly interested in its usage in a physics/engineering setting when talking about materials. Here’s an answer on Quora to “What is ductile?”:

Ductility is the ability of a material to be permanently deformed without cracking. In engineering we talk about elastic deformation as deformation which is reversed once the load is removed (for example, a spring); conversely, plastic deformation isn’t reversed. Ductility is the amount (usually expressed as a ratio) of plastic deformation that a material can undergo before it cracks or tears.

I read that and started thinking about the “ductility” of languages like HTML, CSS, and JS. Specifically: how much deformation can they undergo before breaking?

HTML, for example, is famously forgiving. It can be stretched, drawn out, or deformed in a variety of ways without breaking. Take this short snippet of HTML:

    <!doctype html>
    <title>My site</title>
    <p>Hello world!
    <p>Nice to meet you

That is valid HTML. But it can also be “drawn out” for readability without losing any of its meaning. It’ll still render the same in the browser:

    <!doctype html>
    <html>
      <head>
        <title>My site</title>
      </head>
      <body>
        <p>Hello world!</p>
        <p>Nice to meet you.</p>
      </body>
    </html>

This capacity for the language to undergo a change in form without breaking is its “ductility”. HTML has some pull before it breaks.

JS, on the other hand, doesn’t have the same kind of ductility. Forget a quotation mark and boom! Stretch it a little and it breaks.

    console.log('works!'); // -> works!
    console.log('works!);  // Uncaught SyntaxError: Invalid or unexpected token

I suppose some would say “this isn’t ductility, this is merely forgiving error-parsing”. Ok, sure. Nevertheless, I’m writing here because I learned this new word that has very practical meaning in another discipline to talk about the ability of materials to be stretched and deformed without breaking. I think we need more of that in software. More resiliency. More malleability. More ductility — prioritized in our materials (tools, languages, paradigms) so we can talk more about avoiding sudden failure.
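CSS, for what it’s worth, sits closer to HTML on this scale, though the post doesn’t show it, so here’s a quick sketch of my own (the properties and values are just placeholders): a declaration the parser doesn’t understand is dropped, and the rest of the rule still applies.

    p {
      color: #333;
      colr: red;            /* typo: unknown property, silently ignored */
      text-wrap: balance;   /* unsupported in an older browser? also just ignored */
      font-size: 1rem;      /* still applies */
    }

The paragraph ends up styled by every line the browser understood. A typo dents the styling rather than shattering it, which feels a lot like plastic deformation without the crack.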

16 hours ago 1 vote
Transforming the everyday with Edding

Showcasing how easy it is to breathe new life into home accessories with Edding spray paints. From refreshing decor to...

2 hours ago 1 vote
Building as gardening

Although I've never had a garden

2 days ago 4 votes
Louis Vuitton store by Peter Marino

Following a three-year renovation, the Louis Vuitton store in Milan reopened its doors just in time for this year’s Salone del Mobile,...

2 days ago 2 votes
Background Image Opacity in CSS

The other day I was working on something where I needed to use CSS to apply multiple background images to an element, e.g.

    <div>
      My content with background images.
    </div>
    <style>
      div {
        background-image: url(image-one.jpg), url(image-two.jpg);
        background-position: top right, bottom left;
        /* etc. */
      }
    </style>

As I was tweaking the appearance of these images, I found myself wanting to control the opacity of each one. A voice in my head from circa 2012 chimed in, “Um, remember Jim, there is no background-opacity rule. Can’t be done.” Then that voice started rattling off the alternatives:

- You’ll have to use opacity, but that will apply to the entire element, which you have text in, so that won’t work.
- You’ll have to create a new empty element, apply the background images there, then use opacity.
- Or: you can use pseudo-elements (::before and ::after), apply the background images to those, then use opacity.

Then modern me interrupted this old guy. “I haven’t reached for background-opacity in a long time. Surely there’s a way to do this with more modern CSS?” So I started searching and found this StackOverflow answer, which says you can use background-color in combination with background-blend-mode to achieve a similar effect, e.g.

    div {
      /* Use some images */
      background-image: url(image-one.jpg), url(image-two.jpg);
      /* Turn down their 'opacity' by blending them into the background color */
      background-color: rgba(255,255,255,0.6);
      background-blend-mode: lighten;
    }

Worked like a charm! It probably won’t work in every scenario like a dedicated background-image-opacity might, but for my particular use case at that moment in time it was perfect!

I love little moments like this where I reach to do something in CSS that was impossible back when I really cut my teeth on the language, and now there’s a one- or two-line modern solution!

[Sits back and gets existential for a moment.]

We all face moments like this where we have to balance leveraging hard-won expertise with seeking new knowledge and greater understanding, which requires giving up the lessons of previous experience in order to make room for the lessons of new ones. It’s hard to give up the old, but it’s the only way to make room for the new — death of the old is birth of the new.
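For contrast, here is roughly what the pseudo-element workaround that 2012 voice was describing would look like (my own sketch, not code from the post):

    div {
      position: relative;
    }

    div::before {
      content: "";
      position: absolute;
      inset: 0;     /* cover the whole element */
      z-index: -1;  /* keep the images behind the text */
      background-image: url(image-one.jpg), url(image-two.jpg);
      background-position: top right, bottom left;
      opacity: 0.6; /* the knob background-image itself lacks */
    }

It works, but it takes three extra concepts (positioning, stacking, a generated box) to fake one missing property, which is exactly why the blend-mode trick feels like such a relief.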

3 days ago 3 votes