Carson Gross has a post about vendoring which brought back memories of how I used to build websites in ye olden days, back in the dark times before npm. “Vendoring” is where you copy dependency source files directly into your project (usually in a folder called /vendor) and then link to them — all of this being a manual process. For example:

1. Find jquery.js or reset.css somewhere on the web (usually from the project’s respective website; in my case I always pulled jQuery from the big download button on jQuery.com and my CSS reset from Eric Meyer’s website).
2. Copy that file into /vendor, e.g. /vendor/jquery@1.2.3.js
3. Pull it in where you need it, e.g. <script src="/vendor/jquery@1.2.3.js">

And don’t get me started on copying your transitive dependencies (the dependencies of your dependencies). That gets complicated when you’re vendoring by hand! Nowadays package managers and bundlers automate all of this away: npm i what you want, import x from 'pkg', and you’re on your way! It’s so easy...
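To make the contrast with the npm era concrete, here’s a minimal sketch of the hand-vendored workflow (the /vendor path and jQuery version echo the ones in the post; the little bit of page code is purely illustrative):

```html
<!-- Vendoring by hand: the dependency file lives in your repo and you link to it directly -->
<script src="/vendor/jquery@1.2.3.js"></script>
<script>
  // The vendored script defines the global jQuery/$ object, so page code can use it right away
  $(function () {
    $("body").addClass("js-ready");
  });
</script>
```

The package-manager equivalent of all that manual copying is roughly npm i jquery followed by import $ from 'jquery', with the bundler resolving the file (and its transitive dependencies) for you.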
4 months ago


More from Jim Nielsen’s Blog

Trying to Make Sense of Casing Conventions on the Web

(I present to you my stream of consciousness on the topic of casing as it applies to the web platform.)

I’m reading about the new command and commandfor attributes — which I’m super excited about, declarative behavior invocation in HTML? YES PLEASE!! — and one thing that strikes me is the casing in these APIs. For example, the command attribute has a variety of values in HTML which correspond to APIs in JavaScript. The show-popover attribute value maps to .showPopover() in JavaScript, hide-popover maps to .hidePopover(), etc. So what we have is:

- lowercase attribute names, e.g. commandfor="..."
- kebab-case attribute values, e.g. show-popover
- camelCase for the JS counterparts, e.g. showPopover()

After thinking about this a little more, I remember that HTML attribute names are case-insensitive, so the browser will normalize them to lowercase during parsing. Given that, I suppose you could write commandFor="..." but it’s effectively the same. Ok, lowercase attribute names in HTML make sense. The related popover attributes follow the same convention: popovertarget and popovertargetaction. And there are many other attribute names in HTML that are lowercase, e.g. maxlength, novalidate, contenteditable, autocomplete, formenctype. So that all makes sense.

But wait, there are some attribute names with hyphens in them, like aria-label="..." and data-value="...". So why isn’t it command-for="..."? Well, upon further reflection, I suppose those attributes were named that way for extensibility’s sake: they are essentially wildcard attributes that represent a family of attributes all under the same namespace: aria-* and data-*. But wait, isn’t that an argument for doing popover-target and popover-target-action? Or command and command-for? But wait (I keep saying that), there are kebab-case attribute names in HTML — like http-equiv on the <meta> tag, or accept-charset on the form tag — but those seem more like legacy exceptions.

It seems like the only answer here is: there is no rule. Naming is driven by convention and decisions are made on a case-by-case basis. But if I had to summarize, it would probably be that the default casing for new APIs tends to follow the rules I outlined at the start (and what’s reflected in the new command APIs):

- lowercase for HTML attribute names
- kebab-case for HTML attribute values
- camelCase for the JS counterparts

Let’s not even get into SVG attribute names. We need one of those “bless this mess” signs that we can hang over the World Wide Web.

Email · Mastodon · Bluesky
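To see all three casings side by side, here’s a small sketch of how I understand the new invoker commands pairing with the popover API (the id and button text are made up, and browser support for command/commandfor is still rolling out):

```html
<!-- lowercase attribute names, kebab-case attribute values -->
<button command="show-popover" commandfor="notes">Show notes</button>
<div id="notes" popover>Some notes</div>

<script>
  // camelCase counterparts on the JavaScript side
  const notes = document.getElementById("notes");
  notes.showPopover(); // imperative equivalent of command="show-popover"
  notes.hidePopover();
</script>
```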

2 days ago 10 votes
Successive Prototypes Bridge the Gap Between Idea and Reality

Dismissing an idea because it doesn’t work in your head is doing a disservice to the idea. (Same for dismissing someone else’s idea because it doesn’t work in your head.) The only way to truly know if an idea works is to test it. The gap between an idea and reality is the work. You can’t dismiss something as “not working” without doing the work.

When collaborating with others, different ideas can be put forward which end up in competition with each other. We debate which is best, but verbal descriptions don’t do justice to ideas — so the idea that wins is the one whose champion is the most persuasive (or has the most institutional authority). You don’t want that. You want an environment where ideas can be evaluated based on their substance and not on the personal attributes of the person advocating them. This is the value of prototypes.

We can’t visualize or predict how our own ideas will play out, let alone other people’s. This is why it’s necessary to bring them to life, have them take concrete form. It’s the only way to do them justice. (Picture a cute puppy in your head. I’ve got one too. Now how do we determine who’s imagining the cuter puppy? We can’t. We have to produce a concrete manifestation for contrast and comparison.)

Prototypes are how we bridge the gap between idea and reality. They’re an iterative, evolutionary, exploratory form of birthing ideas that test their substance. People will bow out to a good persuasive argument. They’ll bow out to their boss saying it should be one way or another. But it’s hard to bow out to a good idea you can see, taste, touch, smell, or use.

Email · Mastodon · Bluesky

a week ago 14 votes
Consistent Navigation Across My Inconsistent Websites, Part II

I refreshed the little thing that lets you navigate consistently between my inconsistent subdomains (video recording). Here’s the tl;dr on the update:

- I had to remove some features on each site to make this feel right. Takeaway: adding stuff is easy, removing stuff is hard.
- The element is a web component and not even under source control (🤫). I serve it directly from my CDN. If I want to make an update, I tweak the file on disk and re-deploy. Takeaway: cowboy codin’, yee-haw! Live free and die hard.
- So. Many. Iterations. All of which led to what? A small, iterative evolution. Takeaway: it’s ok for design explorations to culminate in updates that look more like an evolution than a mutation.

Want more info on the behind-the-scenes work? Read on!

Design Explorations

It might look like a simple iteration on what I previously had, but that doesn’t mean I didn’t explore the universe of possibilities first before coming back to the current iteration.

v0: Tabs!

A tab-like experience seemed the most natural, but how to represent it? I tried a few different ideas. On top. On bottom. Different visual styles, etc. And of course, gotta explore how that plays out on desktop too. Some I liked, some I didn’t. As much as I wanted to play with going to the edges of the viewport, I realized that every browser is different and you won't be able to get a consistent “bleed-like” visual experience across browsers. For example, if you try to make tabs that bleed to the edges, it looks nice in a frame in Figma, and even in some browsers. But it won’t look right in all browsers, like iOS Safari. So I couldn’t reliably leverage the idea of a bounded canvas as a design element — which, I should’ve known, has always been the case with the web.

v1: Bottom Tabs With a Site Theme

I really like this pattern on mobile devices, so I thought maybe I’d consider it for navigating between my sites. But how to theme across differently-styled sites? The favicon styles seemed like a good bet! And, of course, what to do on larger devices? Just stacking it felt like overkill, so I explored moving it to the edge. I actually prototyped this in code, but I didn’t like how it felt so I scratched the idea and went other directions.

v2: The Unification

The more I explored what to do with this element, the more it started taking on additional responsibility. “What if I unified its position with site-specific navigation?” I thought. This led to design explorations where the disparate subdomains began to take on not just a unified navigational element, but a unified header. And I made small, stylistic explorations with the tabs themselves too. You can see how I toyed with the idea of a consistent header across all my sites (not an intended goal, but ya know, scope creep gets us all). As I began to explore more possibilities than I planned for, things started to get out of hand.

v3: Do More. MORE. MORE!!

Questions I began asking: Why aren’t these all under the same domain?! What if I had a single domain for feeds across all of them, e.g. feeds.jim-nielsen.com? What about icons instead of words? Wait, wait, wait Jim. Consistent navigation across inconsistent sites. That’s the goal. Pare it back a little.

v4: Reining It Back In

To counter my exploratory ambitions, I told myself I needed to ship something without the need to modify the entire design style of all my sites. So how do I do that? That got me back to a simpler premise: consistent navigation across my inconsistent sites. Better — and implementable.
Technical Details

The implementation was pretty simple. I basically just forked my previous web component and changed some styles. That’s it. The only thing I did differently was move the web component JS file from being part of my www.jim-nielsen.com git repository to a standalone file (not under git control) on my CDN. This felt like one of the exceptions to the rule of always keeping stuff under version control. It’s more of the classic FTP-style approach to web development. Granted, it’s riskier, but it’s also way more flexible. And I’m good with that trade-off for now. (Ask me again in a few months if I’ve done anything terrible and now have regrets.)

Each site implements the component like this (with a different subdomain attribute for each site):

<script type="module" src="https://cdn.jim-nielsen.com/shared/jim-site-switcher.js"></script>
<jim-site-switcher subdomain="blog"></jim-site-switcher>

That’s really all there is to say. Thanks to Zach for prodding me to make this post.

Email · Mastodon · Bluesky
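For the curious, here’s a minimal sketch of what a custom element in that spirit could look like. To be clear, this is an illustrative guess and not the actual jim-site-switcher.js source (the post doesn’t show it); the list of subdomains and the rendered markup are assumptions.

```js
// Hypothetical sketch of a site-switcher custom element, not the real
// jim-site-switcher.js. It reads the `subdomain` attribute and renders
// one link per subdomain, marking the current site.
class JimSiteSwitcher extends HTMLElement {
  // Assumed list of subdomains; the real component's list may differ.
  static sites = ["www", "blog", "notes", "feeds"];

  connectedCallback() {
    const current = this.getAttribute("subdomain") || "www";
    const links = JimSiteSwitcher.sites
      .map((site) => {
        const attrs = site === current ? ' aria-current="page"' : "";
        return `<a href="https://${site}.jim-nielsen.com"${attrs}>${site}</a>`;
      })
      .join(" ");
    this.innerHTML = `<nav>${links}</nav>`;
  }
}

customElements.define("jim-site-switcher", JimSiteSwitcher);
```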

2 weeks ago 13 votes
Bottomless Subtleties

Jason Fried writes in his post “Knives and battleships”:

Specific tools and familiar ingredients combined in different ratios, different molds, for different purposes. Like a baker working from the same tight set of pantry ingredients to make a hundred distinct recipes. You wouldn't turn to them and say "enough with the butter, flour, sugar, baking powder, and eggs already!" Getting the same few things right in different ways is a career's worth of work. Mastery comes from a lifetime of putting together the basics in different combinations.

I think of Beethoven’s 5th and its famous “short-short-short-long” motif. The entire symphony is essentially the same core idea repeated and developed relentlessly! The same four notes (da-da-da-DAH!) moving between instruments, changing keys, etc. Beethoven took something basic — a four-note motif — and extracted an enormous set of variations. Its genius is in illustrating how much can be explored and expressed within constraints (rather than piling on “more and more” novel stuff).

Back to Jason’s point: the simplest building blocks in any form — music, code, paint, cooking — implemented with restraint can be combined in an almost infinite set of pleasing ways. As Devine noted (and I constantly link back to): we haven’t even begun to scratch the surface of what we can do with less.

Email · Mastodon · Bluesky

2 weeks ago 16 votes
Just a Little More Context Bro, I Promise, and It’ll Fix Everything

Conrad Irwin has an article on the Zed blog, “Why LLMs Can't Really Build Software”. He says it boils down to: the distinguishing factor of effective engineers is their ability to build and maintain clear mental models. We do this by:

- Building a mental model of what you want to do
- Building a mental model of what the code does
- Reducing the difference between the two

It’s kind of an interesting observation about how we (as humans) problem solve vs. how we use LLMs to problem solve: with LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution. With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself. One adds information — complexity you might even say — to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the “solution” comes through installing more and more things — adding more and more context, i.e. more and more packages. LLM: Problem? Add more context. NPM: Problem? There’s a package for that.

Contrast that with a solution that comes through simplification. You don’t add more context. You simplify your mental model so you need less to solve a problem — if you solve it at all, perhaps you eliminate the problem entirely! Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem you had in the first place; thus eliminating the need to solve any problem at all, or to add any additional context or complexity (or dependency).

As I’m typing this, I’m thinking of that image of the evolution of the Raptor engine, where it evolved in simplicity. This stands in contrast to my working with LLMs, which often want more and more context from me to get to a generative solution.

I know, I know. There’s probably a false equivalence here. This entire post started as a note and I just kept going. This post itself needs further thought and simplification. But that’ll have to come in a subsequent post, otherwise this never gets published lol.

Email · Mastodon · Bluesky

3 weeks ago 16 votes

More in programming

Engineering excellence starts on edge

The best engineering teams take control of their tools. They help develop the frameworks and libraries they depend on, and they do this by running production code on edge — the unreleased next version. That's where progress is made, that's where participation matters most.

This sounds scary at first. Edge? Isn't that just another word for danger? What if there's a bug?! Yes, what if? Do you think bugs either just magically appear or disappear? No, they're put there by programmers and removed by the very same. If you want bug-free frameworks and libraries, you have to work for it, but if you do, the reward for your responsibility is increased engineering excellence.

Take Rails 8.1 as an example. We just released the first beta version at Rails World, but Shopify, GitHub, 37signals, and a handful of other frontier teams have already been running this code in production for almost a year. Of course, there were bugs along the way, but good automated testing and diligent programmers caught virtually all of them before they went to production.

It didn't always use to be this way. Once upon a time, I felt like I had one of the only teams running Rails on edge in production. But now two of the most important web apps in the world are doing the same! At an incredible scale and criticality. This has allowed both of them, and the few others with the same frontier ambition, to foster a truly elite engineering culture. One that isn't just a consumer of open source software, but a real-time co-creator. This is a step function in competence and prowess for any team.

It's also an incredible motivation boost. When your programmers are able to directly influence the tools they're working with, they're far more likely to do so, and thus they go deeper, learn more, and create connections to experts in the same situation elsewhere. But this requires being able to immediately use the improvements or bug fixes they help devise. It doesn't work if you sit around waiting patiently for the next release before you dare dive in.

Far more companies could do this. Far more companies should do this. Whether it's with Ruby, Rails, Omarchy, or whatever you're using, your team could level up by getting more involved, taking responsibility for finding issues on edge, and reaping the reward of excellence in the process. So what are you waiting on?
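For the Rails case specifically, “running on edge” in practice typically means pointing your Gemfile at the framework's main branch rather than a released gem, something like this sketch (one common way to do it, not a prescription from the post):

```ruby
# Gemfile: track unreleased ("edge") Rails straight from the main branch.
# Run `bundle update rails` to pull in the latest commits when you want to move forward.
gem "rails", github: "rails/rails", branch: "main"
```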

14 hours ago 4 votes
Dreams of Late Summer

Here on a summer night in the grass and lilac smell
Drunk on the crickets and the starry sky,
Oh what fine stories we could tell
With this moonlight to tell them by.
A summer night, and you, and paradise,
So lovely and so filled with grace,
Above your head, the universe has hung its … Continue reading Dreams of Late Summer →

an hour ago 2 votes
Apologies and forgiveness

The first in a series of posts about doing things the right way

yesterday 7 votes
Understanding Bazel remote caching

A deep dive into the Action Cache, the CAS, and the security issues that arise from using Bazel with a remote cache but without remote execution

yesterday 8 votes