More from Jim Nielsen’s Blog
Carson Gross has a post about vendoring which brought back memories of how I used to build websites in ye olden days, back in the dark times before npm.

"Vendoring" is where you copy dependency source files directly into your project (usually in a folder called /vendor) and then link to them — all of this being a manual process. For example:

1. Find jquery.js or reset.css somewhere on the web (usually from the project's respective website; in my case I always pulled jQuery from the big download button on jQuery.com and my CSS reset from Eric Meyer's website).
2. Copy that file into /vendor, e.g. /vendor/jquery@1.2.3.js
3. Pull it in where you need it, e.g. <script src="/vendor/jquery@1.2.3.js"> (see the sketch at the end of this post).

And don't get me started on copying your transitive dependencies (the dependencies of your dependencies). That gets complicated when you're vendoring by hand!

Nowadays package managers and bundlers automate all of this away: npm i what you want, import x from 'pkg', and you're on your way! It's so easy (easy, that is, to take on all that complexity).

But, as the HTMX article points out, a strength can also be a weakness. It's not all net gain (emphasis mine):

    Because dealing with large numbers of dependencies is difficult, vendoring encourages a culture of independence. You get more of what you make easy, and if you make dependencies easy, you get more of them.

I like that — you get more of what you make easy. Therefore: be mindful of what you make easy! As Carson points out, dependency management tools foster a culture of dependence — imagine that!

I know I keep lamenting Deno's move away from HTTP imports by default, but I think this puts a finger on why I'm sad: it perpetuates the status quo, whereas a stance on aligning imports with how the browser works would not perpetuate this dependence on dependency resolution tooling. There's no package manager or dependency resolution algorithm for the browser.

I was thinking about all of this the other day when I came across this thread of thoughts from Dave Rupert on Mastodon. Dave says:

    I prefer to use and make simpler, less complex solutions that intentionally do less. But everyone just wants the thing they have now but faster and crammed with more features (which are juxtaposed)

He continues with this lesson from his startup Luro:

    One of my biggest takeaways from Luro is that it's hard-to-impossible to sell a process change. People will bolt stuff onto their existing workflow (ecosystem) all day long, but it takes a religious conversion to change process.

Which really helped me put words to my feelings regarding HTTP imports in Deno:

    i'm less sad about the technical nature of the feature, and more about what it represented as a potential "religious revival" in the area of dependency management in JS. package.json & dep management has become such an ecosystem unto itself that it seems only a Great Reawakening™️ will change it.

I don't have a punchy point to end this article. It's just me working through my feelings.
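To make those vendoring steps concrete, here's a rough sketch of what a hand-vendored page might have looked like. Only the jquery@1.2.3.js and reset.css file names come from the steps above; the rest of the markup is purely illustrative.

    <!doctype html>
    <html>
      <head>
        <!-- hand-copied into /vendor from Eric Meyer's site -->
        <link rel="stylesheet" href="/vendor/reset.css">
      </head>
      <body>
        <!-- page content goes here -->

        <!-- hand-copied into /vendor from the big download button on jQuery.com -->
        <script src="/vendor/jquery@1.2.3.js"></script>
        <!-- your own code, which can now use the global $ -->
        <script src="/js/site.js"></script>
      </body>
    </html>

No lockfile, no node_modules, no resolution algorithm: the "dependency graph" was whatever files you remembered to copy.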
I like to try different apps. What makes trying different apps incredible is a layer of interoperability — standardized protocols, data formats, etc. When I can bring my data from one app to another, that's cool. Cool apps are interoperable. They work with my data, rather than own it.

For example, the other day I was itching to try a new RSS reader. I've used Reeder (Classic) for ages. But every once in a while I like to try something different. This is super easy because lots of clients support syncing to Feedbin.

It's worth pointing out: Feedbin has their own app. But they don't force you to use it. You're free to use any RSS client you want that supports their service.

So all I have to do is download a new RSS client, log in to Feedbin, and boom! An experience of my data in a totally different app from a totally different developer. That's amazing! And you know how long it took? Seconds. No data export. No account migration.

Doing stuff with my blog is similar. If I want to try a different authoring experience, all my posts are just plain-text markdown files on disk. Any app that can operate on plain-text files is a potential new app to try.

No shade on them, but this is why I personally don't use apps like Bear. Don't get me wrong, I love so much about Bear. But it wants to keep your data in its own proprietary, note-keeping safe. You can't just open your Bear notes in another app. Importing is required. But there's a big difference between apps that import (i.e. copy) your existing data and ones that interoperably work with it.

Email can also be this way. I use Gmail, which supports IMAP, so I can open my mail in lots of different clients — and believe me, I've tried a lot of email clients over the years:

- Sparrow
- Mailbox
- Spark
- Outlook
- Gmail (desktop web, mobile app)
- Apple Mail
- Airmail

This is why I don't use un-standardized email features: I know I can't take them with me. It's also why I haven't tried email providers like HEY! They don't support open protocols, so I can't swap clients when I want. My email is a dataset, and I want to be able to access it with any existing or future client. I don't want to be stuck with the same application for interfacing with my data forever (and have it tied to a company).

I love this way of digital life, where you can easily explore different experiences of your data. I wish it were relevant to other areas of my digital life. I wish I could:

- Download a different app to view/experience my photos
- Download a different app to view/experience my music
- Download a different app to view/read my digital books

In a world like this, applications would compete on an experience of my data, rather than on owning it. The world's a big place. The entire world doesn't need one singular photo experience to Rule Them All. Let's have experiences that are as unique and varied as us.
I learned a new word: ductile. Do you know it? I'm particularly interested in its usage in a physics/engineering setting when talking about materials. Here's an answer on Quora to "What is ductile?":

    Ductility is the ability of a material to be permanently deformed without cracking. In engineering we talk about elastic deformation as deformation which is reversed once the load is removed, for example a spring; conversely, plastic deformation isn't reversed. Ductility is the amount (usually expressed as a ratio) of plastic deformation that a material can undergo before it cracks or tears.

I read that and started thinking about the "ductility" of languages like HTML, CSS, and JS. Specifically: how much deformation can they undergo before breaking?

HTML, for example, is famously forgiving. It can be stretched, drawn out, or deformed in a variety of ways without breaking. Take this short snippet of HTML:

    <!doctype html>
    <title>My site</title>
    <p>Hello world!
    <p>Nice to meet you

That is valid HTML. But it can also be "drawn out" for readability without losing any of its meaning. It'll still render the same in the browser:

    <!doctype html>
    <html>
      <head>
        <title>My site</title>
      </head>
      <body>
        <p>Hello world!</p>
        <p>Nice to meet you.</p>
      </body>
    </html>

This capacity for the language to undergo a change in form without breaking is its "ductility". HTML has some pull before it breaks.

JS, on the other hand, doesn't have the same kind of ductility. Forget a quotation mark and boom! Stretch it a little and it breaks.

    console.log('works!');
    // -> works!

    console.log('works!);
    // Uncaught SyntaxError: Invalid or unexpected token

I suppose some would say "this isn't ductility, this is merely forgiving error-parsing". Ok, sure. Nevertheless, I'm writing here because I learned this new word that has a very practical meaning in another discipline for talking about the ability of materials to be stretched and deformed without breaking.

I think we need more of that in software. More resiliency. More malleability. More ductility — prioritized in our materials (tools, languages, paradigms) so we can talk more about avoiding sudden failure.
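For what it's worth, a rough sketch of where CSS sits on that spectrum: the parser drops a declaration it can't understand and keeps going, so one mistake doesn't take the whole rule (or stylesheet) down with it.

    p {
      color: green;
      colr: red;        /* unknown property: this declaration is dropped */
      font-size: 18px;  /* parsing carries on, so this still applies */
    }

The paragraph still renders green at 18px; only the bad line gets ignored.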
The other day I was working on something where I needed to use CSS to apply multiple background images to an element, e.g.

    <div>
      My content with background images.
    </div>

    <style>
      div {
        background-image: url(image-one.jpg), url(image-two.jpg);
        background-position: top right, bottom left;
        /* etc. */
      }
    </style>

As I was tweaking the appearance of these images, I found myself wanting to control the opacity of each one. A voice in my head from circa 2012 chimed in, "Um, remember Jim, there is no background-opacity rule. Can't be done." Then that voice started rattling off the alternatives:

- You'll have to use opacity, but that will apply to the entire element, which you have text in, so that won't work.
- You'll have to create a new empty element, apply the background images there, then use opacity.
- Or: you can use pseudo-elements (:before & :after), apply the background images to those, then use opacity.

Then modern me interrupted this old guy. "I haven't reached for background-opacity in a long time. Surely there's a way to do this with more modern CSS?"

So I started searching and found this StackOverflow answer which says you can use background-color in combination with background-blend-mode to achieve a similar effect, e.g.

    div {
      /* Use some images */
      background-image: url(image-one.jpg), url(image-two.jpg);

      /* Turn down their 'opacity' by blending them into the background color */
      background-color: rgba(255,255,255,0.6);
      background-blend-mode: lighten;
    }

Worked like a charm! It probably won't work in every scenario like a dedicated background-image-opacity might, but for my particular use case at that moment in time it was perfect!

I love little moments like this where I reach to do something in CSS that was impossible back when I really cut my teeth on the language, and now there's a one- or two-line modern solution!

[Sits back and gets existential for a moment.]

We all face moments like this, where we have to balance leveraging hard-won expertise with seeking new knowledge and greater understanding, which requires giving up the lessons of previous experience in order to make room for new ones. It's hard to give up the old, but it's the only way to make room for the new — death of the old is birth of the new.
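For comparison, here's roughly what that old pseudo-element workaround from the 2012 voice looks like in practice. This is only a sketch, not the approach I ended up using, and the .hero class name is made up for illustration:

    .hero {
      position: relative;
    }

    /* Paint the backgrounds on a pseudo-element so that opacity
       dims only the images, not the element's own text. */
    .hero::before {
      content: "";
      position: absolute;
      top: 0;
      right: 0;
      bottom: 0;
      left: 0;
      z-index: -1;
      background-image: url(image-one.jpg), url(image-two.jpg);
      background-position: top right, bottom left;
      opacity: 0.6;
    }

It works, but it costs you an extra rule, a stacking-context consideration, and a pseudo-element that exists only to carry a background, which is exactly the kind of ceremony background-blend-mode sidesteps.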
I saw these going around, but didn't think I'd ever see myself get tagged — then Eric assuaged my FOMO. As I've done elsewhere talking about how I blog, I'm gonna try and impose a character limit on my answers (~240). I'm not sure if that makes my job as the writer easier or harder, but it should make your job as the reader easier.

Why did you start blogging in the first place?

I think I started because everything I learned about building on the web came from reading other people's blogs online, so I wanted to be a "web person" like them.

What platform are you using to manage your blog and why did you choose it?

At the time of this writing (April 2025): I write in iA Writer. Code for my blog and notes is on GitHub. Deployment/hosting is via Netlify. I've arrived at this setup less from deliberate choice and more from evolution. As me and my writing evolve, my process and tools evolve too.

Have you blogged on other platforms before?

Blogspot, way back in the day. It's no longer up, which is probably for the best. I was posting stuff I made from following "make this in Photoshop" tutorials. Or I'd practice trying to visually express silly puns. Or I'd make visual mashups of culture at the time.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that's part of your blog?

For a detailed history of changes on how I blog, I blog about blogging under #myBlog and I blog about microblogging under #myNotes. Read any of those posts for insights into my ever-changing process.

When do you feel most inspired to write?

When I read other people's thoughts.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

I'm a simmerer. Rarely does a post go from thought to published in one sitting. For example, here's a screenshot of my current simmering drafts (note my sophisticated editorial process of assigning each draft a letter prefix for sorting based on my appetite for finishing it).

What are you generally interested in writing about?

Stuff I make. Or stuff others make. Or thoughts I think while reading thoughts others think. I have a tags page that tries to capture what I write categorically — for example, I blog notes from books I read and podcasts I listen to — but TBH it's not the greatest taxonomy of my writing. Reductively: I blog about web design and development.

Who are you writing for?

Whoa, that question got me more introspective than I expected. Gonna move on before this becomes an existential crisis.

What's your favorite post on your blog?

I used to highlight some of my favs on my home page, but I stopped. Choosing favorites is hard. My blog posts are like my kids: I love them all equally, lol. I suppose my favorite blog post is the one I'll publish next.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

Will I redesign? Lol, the question is: when will I redesign?

Tag 'em

Sorry if I mention someone who's already been tagged:

- Piper Haywood — Love Piper's mix of the personal and professional. Still have her grandma's recipe bookmarked to try.
- Tyler Gaw — Have loved and respected this dude since I met him at my first "real" webdev job in NYC.
- David Bushnell — Been enjoying David's short- and long-form writing a lot as of late. Plus we feel the same about Deno & HTTP modules.
- Katie Langerman — Ah gotcha, that's not a blog link. It's Bluesky. But I've followed Katie on the socials and always enjoy her perspective. Not sure she has a personal blog, so this is a vote of confidence in her starting one :)
- Jan Miksovsky — Jan is doing really cool stuff with Web Origami (also just a super nice guy).

Sorry, I'm not gonna ping any of these folks. If they read my blog, they'll see their names. Otherwise, dear reader, consider it a suggestion to go subscribe to their stuff.
More in design
Our world treats information like it's always good. More data, more content, more inputs — we want it all without thinking twice. To say that the last twenty-five years of culture have centered around info-maximalism wouldn't be an exaggeration. I hope we're coming to the end of that phase. More than ever before, it feels like we have to — that we just can't go on like this. But the solution cannot come from within; it won't be a better tool or even better information that gets us out of this mess. It will be us, feeling and acting differently.

Think about this comparison: Information is to wisdom what pornography is to real intimacy.

I'm not here to moralize, so I make the comparison to pornography with all the necessary trepidation. Without judgement, it's my observation that pornography depicts physical connection while creating emotional distance. I think information is like that. There's a difference between information and wisdom that hinges on volume. More information promises to show us more of reality, but too much of it can easily hide the truth. Information can be pornography — a simulation that, when consumed without limits, can weaken our ability to experience the real thing.

When we feel overwhelmed by information — anxious and unable to process what we've already taken in — we're realizing that "more" doesn't help us find truth. But because we have also established information as a fundamental good in our society, failure to keep up with it, make sense of it, and even profit from it feels like a personal moral failure.

There is only one way out of that. We don't need another filter. We need a different emotional response to information. We should not only question why our accepted spectrum of emotional response to information — in the general sense — is mostly limited to the space between curiosity and desire, but actively develop a capacity for disgust when it becomes too much. And it has become too much.

Some people may say that we just need better information skills and tools, not less information. But this misses how fundamentally our minds need space and time to turn information into understanding. When every moment is filled with new inputs, we can't fully absorb, process, and reflect upon what we've consumed. Reflection, not consumption, creates wisdom. Reflection requires quiet, isolation, and inactivity.

Some people say that while technology has expanded over the last twenty-five years, culture hasn't. If they needed a good defense for that idea, well, I think this is it: a world without idleness is truly a world without creativity.

I'm using loaded moral language here for a purpose — to illustrate an imbalance in our information-saturated culture. Idleness is a pejorative these days, though it needn't be. We don't refer to compulsive information consumption as gluttony, though we should. And if attention is our most precious resource — as an information-driven economy would imply — why isn't its commercial exploitation condemned as avarice?

As I ask these questions I'm really looking for where individuals like you and me have leverage. If our attention is our currency, then leverage will come with the capacity to not pay it. To not look, to not listen, to not react, to not share. And as has always been true of us human beings, actions are feelings echoed outside the body. We must learn not just to withhold our attention but to feel disgust at ceaseless claims to it.