More from Jim Nielsen’s Blog
After shipping my work transforming HTML with Netlify’s edge functions, I realized I have a little bug: the order of the icons specified in the URL doesn’t match the order in which they are displayed on screen.

Why’s this happening?

I have a bunch of links in my HTML document, like this:

```html
<icon-list>
  <a href="/1/">…</a>
  <a href="/2/">…</a>
  <a href="/3/">…</a>
  <!-- 2000+ more -->
</icon-list>
```

I use html-rewriter in my edge function to strip out the HTML for icons not specified in the URL. So for a request to:

/lookup?id=1&id=2

My HTML will be transformed like so:

```html
<icon-list>
  <!-- Parser keeps these two -->
  <a href="/1/">…</a>
  <a href="/2/">…</a>
  <!-- But removes this one -->
  <a href="/3/">…</a>
</icon-list>
```

Resulting in less HTML over the wire to the client.

But what about the order of the IDs in the URL? What if the request is to:

/lookup?id=2&id=1

Instead of:

/lookup?id=1&id=2

In the source HTML document containing all the icons, they’re marked up in reverse chronological order. But the request for this page may specify a different order for the icons in the URL. So how do I rewrite the HTML to match the URL’s ordering?

The problem is that html-rewriter doesn’t give me a fully-parsed DOM to work with. I can’t do things like “move this node to the top” or “move this node to position x”. With html-rewriter, you only “see” each element as it streams past. Once it passes by, your chance at modifying it is gone. (It seems that’s just the way these edge function tools are designed to work: it keeps them lean and performant, and I can’t shoot myself in the foot.)

So how do I change the icons’ display order to match what’s in the URL if I can’t modify the order of the elements in the HTML?

CSS to the rescue!

Because my markup is just a bunch of <a> tags inside a custom element and I’m using CSS grid for layout, I can use the order property in CSS! All the IDs are in the URL, and their position as parameters has meaning, so I assign their ordering to each element as it passes by html-rewriter. Here’s some pseudo code:

```js
// Get all the IDs in the URL
const ids = url.searchParams.getAll("id");

// Select all the icons in the HTML
rewriter.on("icon-list a", {
  element: (element) => {
    // Get the ID
    const id = element.getAttribute("id");

    // If it's in our list, set its order
    // position from the URL
    if (ids.includes(id)) {
      const order = ids.indexOf(id);
      element.setAttribute(
        "style",
        `order: ${order}`
      );
    // Otherwise, remove it
    } else {
      element.remove();
    }
  },
});
```

Boom! I didn’t have to change the order in the source HTML document, but I can still get the display order to match what’s in the URL. I love shifty little workarounds like this!
A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site’s purpose was to serve as “a place for supporters of the president’s agenda to get the real news all in one place”.

So a link blog, if you will.

As a self-professed connoisseur of websites and link blogs, this got me thinking: “I wonder what kind of links they’re considering as ‘real news’ and what they’re linking to?”

So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns. I wrote some JavaScript to:

- Fetch the HTML page at whitehouse.gov/wire
- Parse it with cheerio
- Select all the external links on the page
- Return a list of links and their headline text

In a few minutes I had a quick analysis of what kind of links were on the page. This immediately sparked my curiosity to know more about the meta information around the links, like:

- If you grouped all the links together, which sites get linked to the most?
- What kind of interesting data could you pull from the headlines they’re writing, like the most frequently used words?
- What if you did this analysis, but with snapshots of the website over time (rather than just the current moment)?

So I got to building.

Quadratic today doesn’t yet have the ability for your spreadsheet to run in the background on a schedule and append data, so I had to look elsewhere for a little extra functionality. My mind went to val.town, which lets you write little scripts that can 1) run on a schedule (cron), 2) store information (blobs), and 3) retrieve stored information via their API.

After a quick read of their docs, I figured out how to write a little script that’ll run once a day, scrape the site, and save the resulting HTML page in their key/value storage.

From there, I was back in Quadratic writing code to talk to val.town’s API, retrieve my HTML, parse it, and turn it into good, structured data. There were some things I had to do, like:

- Fine-tune how I select all the editorial links on the page from the source HTML (I didn’t want, for example, to include external links to the White House’s social pages which appear on every page). This required a little finessing, but I eventually got a collection of links that corresponded to what I was seeing on the page.
- Parse the links and pull out the top-level domains so I could group links by domain occurrence.
- Create charts and graphs to visualize the structured data I had created.

Selfish plug: Quadratic made this all super easy, as I could program in JavaScript and use third-party tools like tldts to do the analysis, all while visualizing my output on a 2d grid in real-time, which made for a super fast feedback loop!

Once I got all that done, I just had to sit back and wait for the HTML snapshots to begin accumulating! It’s been about a month and a half since I started this and I have about fifty days’ worth of data.

The results?
Here are the top 10 domains that the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

1. youtube.com (133)
2. foxnews.com (72)
3. thepostmillennial.com (67)
4. foxbusiness.com (66)
5. breitbart.com (64)
6. x.com (63)
7. reuters.com (51)
8. truthsocial.com (48)
9. nypost.com (47)
10. dailywire.com (36)

From the links, here’s a word cloud of the most commonly recurring words in the link headlines:

- “trump” (343)
- “president” (145)
- “us” (134)
- “big” (131)
- “bill” (127)
- “beautiful” (113)
- “trumps” (92)
- “one” (72)
- “million” (57)
- “house” (56)

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool!

If you want to check out the spreadsheet — sorry! My API key for val.town is in it (“secrets management” is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it), which you can check out here at your convenience.
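For the curious, here’s a rough sketch of what the link-extraction and domain-grouping step described above might look like with cheerio and tldts. It’s not the actual spreadsheet code: the selector, the exclusion check, and the snapshot source are illustrative assumptions, and the val.town storage/retrieval side is left out entirely.

```js
// A minimal sketch (not the real analysis code): pull external links out of a
// White House Wire HTML snapshot and count them by top-level domain.
import * as cheerio from "cheerio";
import { getDomain } from "tldts";

function analyzeWireSnapshot(html) {
  const $ = cheerio.load(html);
  const links = [];

  // Assumption: editorial links are external <a> tags. The real selection logic
  // needed finessing to exclude the social links that appear on every page.
  $("a[href^='http']").each((_, el) => {
    const url = $(el).attr("href");
    const headline = $(el).text().trim();
    const domain = getDomain(url); // e.g. "foxnews.com"
    if (domain && domain !== "whitehouse.gov") {
      links.push({ url, headline, domain });
    }
  });

  // Group by domain occurrence, most-linked first.
  const counts = {};
  for (const { domain } of links) {
    counts[domain] = (counts[domain] ?? 0) + 1;
  }
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}

// Usage: analyze a snapshot (here, the live page rather than a stored one).
const res = await fetch("https://www.whitehouse.gov/wire/");
console.log(analyzeWireSnapshot(await res.text()));
```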
I’ve long wanted the ability to create custom collections of icons from my icon gallery. Today I can browse collections of icons that share pre-defined metadata (e.g. “Show me all icons tagged as blue”) but I can’t create my own arbitrary collections of icons.

That is, until now!

I created a page at /lookup that allows you to specify however many id search params you want and it will pull all the matching icons into a single page. Here’s an example of macOS icons that follow the squircle shape but break out of it ever-so-slightly (something we’ll lose with macOS Tahoe). It requires a little know-how to construct the URL, something I’ll address later, but it works for my own personal purposes at the moment.

So how did I build it?

Implementation

The sites are built with a static site generator, but this feature requires an ability to dynamically construct a page based on the icons specified in the URL, e.g.

/lookup?id=foo&id=bar&id=baz

How do I get that to work? I can’t statically pre-generate every possible combination[1], so what are my options?

1. Create a “shell” page that uses JavaScript to read the search params, query a JSON API, and render whichever icons are specified in the URL.
2. Send an HTML page with all icons over the wire, then use JavaScript to reach into the DOM and remove all icons whose IDs aren’t specified in the page URL.
3. Render the page on the server with just the icons specified in the request URL.

No. 1: this is fine, but I don’t have a JSON API for clients to query and I don’t want to create one. Plus I have to duplicate template logic, etc. I’m already rendering lists of icons in my static site generator, so can’t I just do that? Which leads me to:

No. 2: this works, but I do have 2000+ icons, so the resulting HTML page (I tried it) is almost 2MB if I render everything (whereas that same request for ~4 icons, but filtered by the server, would be like 11kb). There’s gotta be a way to make that smaller, which leads me to:

No. 3: this is great, but it does require I have a “server” to construct pages at request time.

Enter Netlify’s Edge Functions, which allow you to easily transform an existing HTML page before it gets to the client. To get this working in my case, I:

1. Create /lookup/index.html that has all 2000+ icons on it (trivial with my current static site generator).
2. Create a lookup.ts edge function that intercepts the request to /lookup/index.html.
3. Read the search params for the request and get all specified icon IDs, e.g. /lookup?id=a&id=b&id=c turns into ['a','b','c'].
4. Following Netlify’s example of transforming an HTML response, use HTMLRewriter to parse my HTML with all 2000+ icons in it, then remove all icons that aren’t in my list of IDs, e.g. <a id='a'>…</a><a id='z'>…</a> might get pruned down to <a id='a'>…</a>.
5. Transform the parsed HTML back into a Response and return it to the client from the function.

It took me a second to get all the Netlify-specific configurations right (put the function in ./netlify/edge-functions not ./netlify/functions, duh), but once I strictly followed all of Netlify’s rules it was working! (You gotta use their CLI tool to get things working on localhost and test it yourself.)
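To make those steps concrete, here’s a minimal sketch of what such an edge function could look like. This isn’t the exact lookup.ts I shipped — the HTMLRewriter import, the selector, and the config path here are assumptions based on Netlify’s documented pattern of calling context.next() and transforming the response.

```js
// A hypothetical sketch of ./netlify/edge-functions/lookup.ts, not the actual code.
// Assumption: HTMLRewriter isn't built into Netlify's runtime, so this imports the
// polyfill that Netlify's HTML-transformation example points at (URL may differ).
import { HTMLRewriter } from "https://ghuc.cc/worker-tools/html-rewriter/index.ts";

export default async (request, context) => {
  const ids = new URL(request.url).searchParams.getAll("id");

  // Let the request fall through to the statically generated /lookup/index.html
  // (the one with all 2000+ icons), then transform that response.
  const response = await context.next();

  return new HTMLRewriter()
    .on("icon-list a", {
      element(element) {
        const id = element.getAttribute("id");
        // Prune any icon whose id wasn't specified in the URL.
        if (!id || !ids.includes(id)) {
          element.remove();
        }
      },
    })
    .transform(response);
};

// Run this function for requests to /lookup.
export const config = { path: "/lookup" };
```

The nice part of this shape is that the function only ever deletes markup from a page the static site generator already built; it never has to construct HTML itself.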
Con-clusions

I don’t particularly love that this ties me to a bespoke feature of Netlify’s platform — even though it works really well! But that said, if I ever switched hosts this wouldn’t be too difficult to change. If my new host provided control over the server, nothing changes about the URL for this page (/lookup?id=…). And if I had to move it all to the client, I could do that too. In that sense, I’m tying myself to Netlify from a developer point of view but not from an end-user point of view (everything still works at the URL level), and I’m good with that trade-off.

[1]: Just out of curiosity, I asked ChatGPT: if you have approximately 2,000 unique items, how many possible combinations of those IDs can be passed in a URL like /lookup?id=1&id=2? It said the number is 2^2000, which is “astronomically large” and “far more than atoms in the universe”. So statically pre-generating them is out of the question.
Here’s a screenshot of my inbox from when I was on the last leg of my flight home from family summer vacation:

That’s pretty representative of the flurry of emails I get when I fly, e.g.:

- Check in now
- Track your bags
- Your flight will soon depart
- Your flight will soon board
- Your flight is boarding
- Information on your connecting flight
- Tell us how we did

In addition to email, the airline has my mobile number and I have its app, so a large portion of my email notifications are also sent as 1) push notifications to my devices, as well as 2) messages to my mobile phone number. So when the plane begins boarding, for example, I’m told about it with an email, a text, and a push notification.

I put up with it because I’ve tried pruning my stream of notifications from the airlines in the past, only to lose out on a vital notification about a change or delay. It feels like my two options are:

- Get all notifications multiple times via email, text, and in-app push.
- Get most notifications via one channel, but somehow miss the most vital one.

All of this serendipitously coincided with me reading a recent piece from Nicholas Carr where he described these kinds of notifications as “little data”: all those fleeting, discrete bits of information that swarm around us like gnats on a humid summer evening.

That feels apt, as I find myself swiping at lots of little data gnats swarming in my email, message, and notification inboxes.

No wonder they call it “fly”ing 🥁
I recently got my copy of the Internet Phone Book. Look who’s hiding on the bottom inside spread of page 32:

The book is divided into a number of categories — such as “Small”, “Text”, and “Ecology” — and I am beyond flattered to be listed under the category “HTML”! You can dial my site at number 223.

As the authors note, the sites of the internet represented in this book are not described by adjectives like “attention”, “competition”, and “promotion”. Instead they’re better suited by adjectives like “home”, “love”, and “glow”. These sites don’t look to impose their will on you, soliciting that you share, like, and subscribe. They look to spark curiosity, mystery, and wonder, letting you decide for yourself how to respond to the feelings of this experience.

But why make a printed book listing sites on the internet? That’s crazy, right? Here’s the book’s co-author Kristoffer Tjalve in the introduction:

“With the Internet Phone Book, we bring the web, the medium we love dearly, and call it into a thousand-year old tradition [of print]”

I love that! I think the juxtaposition of websites in a printed phone book is exactly the kind of thing that makes you pause and reconsider the medium of the web in a new light. Isn’t that exactly what art is for?

Kristoffer continues:

“Elliot and I began working on diagram.website, a map with hundreds of links to the internet beyond platform walls. We envisioned this map like a night sky in a nature reserve—removed from the light pollution of cities—inviting a sense of awe for the vastness of the universe, or in our case, the internet. We wanted people to know that the poetic internet already existed, waiting for them…The result of that conversation is what you now hold in your hands.”

The web is a web because of its seemingly infinite number of interconnected sites, not because of its half-dozen social platforms. It’s called the web, not the mall. There’s an entire night sky out there to discover!
More in programming
The mystery

In the previous article, I briefly mentioned a slight difference between the ESP-Prog and the reproduced circuit, when it comes to EN: Focusing on EN, it looks like the voltage level goes back to 3.3V much faster on the ESP-Prog than on the breadboard circuit. The grid is horizontally spaced at 2ms, so … Continue reading Overanalyzing a minor quirk of Espressif’s reset circuit →
It has been a year since I set up my System76 Merkaat with Linux Mint. In July of 2024 I migrated from ChromeOS and the Merkaat has been my daily driver on the desktop. A year later I have nothing major to report, which is the point. Despite the occasional unplanned reinstallation I have been enjoying the stability of Linux and just using the PC. This stability finally enabled me to burn bridges with mainstream operating systems and fully embrace Linux and open systems. I'm ready to handle the worst and get back to work. Just a few years ago the frustration of troubleshooting a broken system would have made me seriously consider the switch to a proprietary solution. But a year of regular use, with an ordinary mix of quiet moments and glitches, gave me the confidence to stop worrying and learn to love Linux.
Explaining nil interface{} gotcha in Go

A footgun

In Go an empty interface is an interface without any methods, typed as interface{}. A zero value of interface{} is nil:

```go
var v interface{} // compiler sets this to nil, you could explicitly write = nil

if v == nil {
    fmt.Printf("v is nil\n")
} else {
    fmt.Printf("v is NOT nil\n")
}
```

Try online

This prints: v is nil.

However, this sometimes trips people up:

```go
type Foo struct {
}

var v interface{}
var nilFoo *Foo // implicitly initialized by compiler to nil

if nilFoo == nil {
    fmt.Printf("nilFoo is nil.")
} else {
    fmt.Printf("nilFoo is NOT nil.")
}

v = nilFoo

if v == nil {
    fmt.Printf("v is nil\n")
} else {
    fmt.Printf("v is NOT nil\n")
}
```

Try online

This prints:

nilFoo is nil.
v is NOT nil.

On the surface, this looks wrong: nilFoo is nil. We assigned a nil to v, but it doesn’t equal nil?

How do you check whether an interface{} holds a nil pointer of any type?

```go
func isNilPointer(i interface{}) bool {
    if i == nil {
        return false // interface itself is nil
    }
    v := reflect.ValueOf(i)
    return v.Kind() == reflect.Ptr && v.IsNil()
}

type Foo struct {
}

var pf *Foo
var v interface{} = pf

if isNilPointer(v) {
    fmt.Printf("v is nil pointer\n")
} else {
    fmt.Printf("v is NOT nil pointer\n")
}
```

Try online

Why

There’s a reason for this perplexing behavior.

nil is an abstract value. If you come from C/C++ or Java/C#, you might think that this is the equivalent of a NULL pointer or null reference. It isn’t. nil is a symbol that represents a zero value of pointers, channels, maps, slices.

Logically, interface{} combines type and value. You can think of it as a tuple (type, value). An uninitialized value of interface{} is a tuple without a type and value (no type, no value).

In Go an uninitialized value is the zero value, and since nil is an abstract value representing the zero value for several types, it makes sense to use it for the zero value of interface{}.

So: the zero value of interface{} is nil, which is (no type, no value). When we assigned nilFoo to v, the value is (*Foo, nil).

Are you surprised that (no type, no value) is not the same as (*Foo, nil)? To understand this gotcha, you have to understand two things.

One: nil is an abstract value that only has a meaning in context. Consider this:

```go
var ch chan (bool)
var m map[string]bool

if ch == m {
    fmt.Printf("ch is equal to m\n")
}
```

Try online

This snippet doesn’t even compile: Error: ./prog.go:8:11: invalid operation: ch == m (mismatched types chan bool and map[string]bool). Both ch and m are nil, but you can’t compare them because they are of different types. nil != nil because nil is an abstract concept, not an actual value.

Two: a nil value of interface{} is (no type, no value).

Once you understand the above, you’ll understand why nil doesn’t compare to (type, nil), e.g. (*Foo, nil) or (map[string]bool, nil) or (int, 0) or (string, "").

Bad design or inevitable consequence of previous decisions?

Many claim it’s a bad design. No-one describes what a better design would look like.

Let’s play-act a Go language designer. You’ve already designed concrete types, you came up with the notion of zero values and created nil to denote the zero value for pointers, channels, maps, slices. You’re now designing interface{} as a logical tuple of (type, value). The zero value is obviously (no type, no value).

You have to figure out how to represent the zero value.

A different symbol for interface{} zero value

Instead of using nil you could create a different symbol, e.g. zeroInterface.
You could then write:

```go
var v interface{}
var v2 interface{} = (*Foo)(nil) // a nil *Foo wrapped in an interface
var v3 interface{} = int(0)

if v == zeroInterface {
    // this is true
}
if v2 == nil {
    // this is true
}
if v3 == nil {
    // is it true or not?
}
```

Is this a better design? I don’t think so.

We don’t have zeroPointer, zeroMap, zeroChannel etc., so this breaks consistency. It sticks out like a sore zeroInterface. And v == nil is subtle: not all values wrapped in an interface{} have a zero value of nil. What should happen if you compare to (int, 0), given that 0 is the zero value of int?

Damn the consistency, let’s do what the user expects

You could ditch the strict logic of nil values and special-case if v == nil for interface{} to do what people superficially expect to happen.

You then have to answer the question: what happens when you do if (int, 0) == nil?

The biggest issue is that you’ve lost the ability to distinguish between (no type, no value) and (type, nil). They both compare to nil, so how would you test for (no type, no value) but not (type, nil)? It doesn’t seem like a better design either.

Your proposal

Now that you understand the problem and have seen two ideas for how to fix it, it’s your turn to design a better solution.

I tried, and the above two are the only ideas I had. We are boxed in by existing notions of zero values and using nil to represent them. We could explore designs that re-think those assumptions, but would that be Go anymore?

It’s easy to complain that something is a bad design. It’s much harder, often impossible, to design something better.
I signed up for Hinge. Holy shit with the boosts. How does someone who works on this wake up every morning and feel okay about themselves?

Similarly with the tip screens, Uber algorithm, all the zero sum bullshit using all the tricks of psychology to extract a little bit more from every interaction in society. Nudge. Nudge. NUDGE. Want to partake in normal society like buying a coffee, going on a date, getting a ride, paying a friend. Oh, there’s a middle man now. An evil ominous middleman using state of the art AI algorithms to extract just a little bit more from you.

But eventually the market will fix this, right? People will feel sick of being manipulated and move elsewhere? Ahhh, but they see that coming long before you do. They have dashboards. Quick Jeeves, tune the AI to make people feel less manipulated. Give them a little bit more for now, we have to think about maximizing lifetime customer value here. Oh the AI already did this on its own? Jeeves you’ve been replaced!

People perpetually on the edge. You want to opt out of this all you say? Good luck running a competitive business! Every metric is now a target. You better maximize engagement or you will lose engagement this is a red queen’s race we can’t afford to lose! Burn all the social capital, burn all your values, FEED IT ALL TO MOLOCH!

Someday, people will have to realize we live in a society. What will it take? A complete self cannibalization to the point you can’t eat your own mouth? It sure as hell isn’t going to be people opting out, that’s a collective action problem you can’t solve. Democracy, haha, you think the algorithms will let you vote to kill them? Your vote is as decoupled from action as the amount Uber pays the driver is decoupled from the fare that you pay. There’s no reform here, there’s only revolution.

Will it simply be a huge financial collapse? Or do we need World War 3? And even World War 3 is on a spectrum. Will mass starvation fix this? Or will the attitude of thinking it’s okay to manipulate others at scale persist even past that? He’s got his, and I’ve got mine… If you open a government S&P 500 account for everyone with $1,000 at birth that’ll pay their social security cause it like…goes up…wait who’s creating this value again?

It’s not okay. Advertising is not okay. Price discrimination is not okay. Using big data, machine learning, and psychology to manipulate others at scale is not okay. But you aren’t going to learn this lesson until you have fed a huge majority of your customers to Moloch.

Modern capitalism is wireheading. Release the hypnodrones.

How many cans of Pepsi did you want them to consume an hour again?
I've never seen so many developers curious about leaving the Mac and giving Linux a go. Something has really changed in the last few years. Maybe Linux just got better? Maybe powerful mini PCs made it easier? Maybe Apple just fumbled their relationship with developers one too many times? Maybe it's all of it. But whatever the reason, the vibe shift is noticeable.

This is why the future is so hard to predict! People have been joking about "The Year of Linux on the Desktop" since the late 90s. Just like self-driving cars were supposed to be a thing back in 2017. And now, in the year of our Lord 2025, it seems like we're getting both!

I also wouldn't underestimate the cultural influence of a few key people. PewDiePie sharing his journey into Arch and Hyprland with his 110 million followers is important. ThePrimeagen moving to Arch and Hyprland is important. Typecraft teaching beginners how to build an Arch and Hyprland setup from scratch is important (and who I just spoke to about Omarchy). Gabe Newell's Steam Deck being built on Arch and pushing Proton to over 20,000 compatible Linux games is important.

You'll notice a trend here, which is that Arch Linux, a notoriously "difficult" distribution, is at the center of much of this new engagement. Despite the fact that it's been around since 2003! There's nothing new about Arch, but there's something new about the circles of people it's engaging.

I've put Arch at the center of Omarchy too. Originally just because that was what Hyprland recommended. Then, after living with the wonders of 90,000+ packages on the community-driven AUR package repository, for its own sake. It's really good!

But while Arch (and Hyprland) are having a moment amongst a new crowd, it's also "just" Linux at its core. And Linux really is the star of the show. The perfect, free, and open alternative that was just sitting around waiting for developers to finally have had enough of the commercial offerings from Apple and Microsoft.

Now obviously there's a taste of "new vegan sees vegans everywhere" here. You start talking about Linux, and you'll hear from folks already in the community or those considering the move too. It's easy to confuse what you'd like to be true with what is actually true. And it's definitely true that Linux is still a niche operating system on the desktop. Even among developers. Apple and Microsoft sit on the lion's share of the market share. But the mind share? They've been losing that fast.

The window is open for a major shift to happen. First gradually, then suddenly. It feels like morning in Linux land!