More from Jim Nielsen’s Blog
Dismissing an idea because it doesn’t work in your head is doing a disservice to the idea. (Same for dismissing someone else’s idea because it doesn’t work in your head.) The only way to truly know if an idea works is to test it. The gap between an idea and reality is the work. You can’t dismiss something as “not working” without doing the work.

When collaborating with others, different ideas can be put forward which end up in competition with each other. We debate which is best, but verbal descriptions don’t do justice to ideas — so the idea that wins is the one whose champion is the most persuasive (or has the most institutional authority). You don’t want that. You want an environment where ideas can be evaluated based on their substance and not on the personal attributes of the person advocating them.

This is the value of prototypes. We can’t visualize or predict how our own ideas will play out, let alone other people’s. This is why it’s necessary to bring them to life, have them take concrete form. It’s the only way to do them justice. (Picture a cute puppy in your head. I’ve got one too. Now how do we determine who’s imagining the cuter puppy? We can’t. We have to produce a concrete manifestation for contrast and comparison.)

Prototypes are how we bridge the gap between idea and reality. They’re an iterative, evolutionary, exploratory form of birthing ideas that tests their substance. People will bow out to a good persuasive argument. They’ll bow out to their boss saying it should be one way or another. But it’s hard to bow out to a good idea you can see, taste, touch, smell, or use.
I refreshed the little thing that lets you navigate consistently between my inconsistent subdomains (video recording). Here’s the tl;dr on the update:

- I had to remove some features on each site to make this feel right. Takeaway: adding stuff is easy, removing stuff is hard.
- The element is a web component and not even under source control (🤫). I serve it directly from my CDN. If I want to make an update, I tweak the file on disk and re-deploy. Takeaway: cowboy codin’, yee-haw! Live free and die hard.
- So. Many. Iterations. All of which led to what? A small, iterative evolution. Takeaway: it’s ok for design explorations to culminate in updates that look more like an evolution than a mutation.

Want more info on the behind-the-scenes work? Read on!

Design Explorations

It might look like a simple iteration on what I previously had, but that doesn’t mean I didn’t explore the universe of possibilities first before coming back to the current iteration.

v0: Tabs!

A tab-like experience seemed the most natural, but how to represent it? I tried a few different ideas. On top. On bottom. Different visual styles, etc. And of course, gotta explore how that plays out on desktop too. Some I liked, some I didn’t.

As much as I wanted to play with going to the edges of the viewport, I realized that every browser is different and you won’t be able to get a consistent “bleed-like” visual experience across browsers. For example, if you try to make tabs that bleed to the edges, it looks nice in a frame in Figma, and even in some browsers. But it won’t look right in all browsers, like iOS Safari. So I couldn’t reliably leverage the idea of a bounded canvas as a design element — which, I should’ve known, has always been the case with the web.

v1: Bottom Tabs With a Site Theme

I really like this pattern on mobile devices, so I thought maybe I’d consider it for navigating between my sites. But how to theme across differently-styled sites? The favicon styles seemed like a good bet! And, of course, what to do on larger devices? Just stacking it felt like overkill, so I explored moving it to the edge. I actually prototyped this in code, but I didn’t like how it felt, so I scratched the idea and went other directions.

v2: The Unification

The more I explored what to do with this element, the more it started taking on additional responsibility. “What if I unified its position with site-specific navigation?” I thought. This led to design explorations where the disparate subdomains began to take on not just a unified navigational element, but a unified header. And I made small, stylistic explorations with the tabs themselves too. You can see how I toyed with the idea of a consistent header across all my sites (not an intended goal, but ya know, scope creep gets us all). As I began to explore more possibilities than I planned for, things started to get out of hand.

v3: Do More. MORE. MORE!!

Questions I began asking: Why aren’t these all under the same domain?! What if I had a single domain for feeds across all of them, e.g. feeds.jim-nielsen.com? What about icons instead of words?

Wait, wait, wait Jim. Consistent navigation across inconsistent sites. That’s the goal. Pare it back a little.

v4: Reining It Back In

To counter my exploratory ambitions, I told myself I needed to ship something without the need to modify the entire design style of all my sites. So how do I do that? That got me back to a simpler premise: consistent navigation across my inconsistent sites. Better — and implementable.
Technical Details

The implementation was pretty simple. I basically just forked my previous web component and changed some styles. That’s it.

The only thing I did differently was move the web component JS file from being part of my www.jim-nielsen.com git repository to a standalone file (not under git control) on my CDN. This felt like one of the exceptions to the rule of always keeping stuff under version control. It’s more of the classic FTP-style approach to web development. Granted, it’s riskier, but it’s also way more flexible. And I’m good with that trade-off for now. (Ask me again in a few months if I’ve done anything terrible and now have regrets.)

Each site implements the component like this (with a different subdomain attribute for each site):

```html
<script type="module" src="https://cdn.jim-nielsen.com/shared/jim-site-switcher.js"></script>
<jim-site-switcher subdomain="blog"></jim-site-switcher>
```

That’s really all there is to say. Thanks to Zach for prodding me to make this post.
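The post doesn’t show the component’s source, but for the curious, here is a minimal sketch of how such an element could work. The subdomain list and the rendered markup are my assumptions for illustration, not the actual jim-site-switcher.js:

```typescript
// Hypothetical reconstruction of a site-switcher web component.
// The `subdomain` attribute marks which site is currently active.
class JimSiteSwitcher extends HTMLElement {
  connectedCallback() {
    const current = this.getAttribute("subdomain") ?? "www";
    // Assumed set of subdomains; the real list may differ.
    const sites = ["www", "blog", "notes"];
    this.innerHTML = sites
      .map((site) => {
        const href = `https://${site}.jim-nielsen.com/`;
        // Render the active site as plain text, the others as links.
        return site === current
          ? `<span aria-current="page">${site}</span>`
          : `<a href="${href}">${site}</a>`;
      })
      .join(" ");
  }
}
customElements.define("jim-site-switcher", JimSiteSwitcher);
```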
Jason Fried writes in his post “Knives and battleships”:

> Specific tools and familiar ingredients combined in different ratios, different molds, for different purposes. Like a baker working from the same tight set of pantry ingredients to make a hundred distinct recipes. You wouldn't turn to them and say "enough with the butter, flour, sugar, baking powder, and eggs already!" Getting the same few things right in different ways is a career's worth of work. Mastery comes from a lifetime of putting together the basics in different combinations.

I think of Beethoven’s 5th and its famous “short-short-short-long” motif. The entire symphony is essentially the same core idea repeated and developed relentlessly! The same four notes (da-da-da-DAH!) moving between instruments, changing keys, etc. Beethoven took something basic — a four-note motif — and extracted an enormous set of variations. Its genius is in illustrating how much can be explored and expressed within constraints (rather than piling on “more and more” novel stuff).

Back to Jason’s point: the simplest building blocks in any form — music, code, paint, cooking — implemented with restraint can be combined in an almost infinite set of pleasing ways. As Devine noted (and I constantly link back to): we haven’t even begun to scratch the surface of what we can do with less.
Conrad Irwin has an article on the Zed blog, “Why LLMs Can't Really Build Software”. He says it boils down to this:

> the distinguishing factor of effective engineers is their ability to build and maintain clear mental models

We do this by:

1. Building a mental model of what you want to do
2. Building a mental model of what the code does
3. Reducing the difference between the two

It’s kind of an interesting observation about how we (as humans) problem solve vs. how we use LLMs to problem solve:

- With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution.
- With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself.

One adds information — complexity, you might even say — to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the “solution” comes through installing more and more things — adding more and more context, i.e. more and more packages.

LLM: Problem? Add more context.
NPM: Problem? There’s a package for that.

Contrast that with a solution that comes through simplification. You don’t add more context. You simplify your mental model so you need less to solve a problem — if you solve it at all; perhaps you eliminate the problem entirely! Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem you had in the first place (and with it the need to add any additional context, complexity, or dependency).

As I’m typing this, I’m thinking of that image of the evolution of SpaceX’s Raptor engine, where each version grew simpler. That stands in contrast to my experience working with LLMs, which often want more and more context from me before arriving at a generated solution.

I know, I know. There’s probably a false equivalence here. This entire post started as a note and I just kept going. This post itself needs further thought and simplification. But that’ll have to come in a subsequent post, otherwise this never gets published lol.
Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what’s happening “under the hood”:

> A framework’s marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can’t achieve the beautiful range of results you see in the framework’s sample site gallery.

This is a great callout. Tools will say, “You don’t have to worry about the details.” But the reality is, you end up worrying about the details — at least to some degree. Why? Because what you want to build is full of personalization. That’s how you differentiate yourself, which means you’re going to need a tool that’s expressive enough to help you.

So the question becomes: how hard is it to understand the details that are being intentionally hidden away? A lot of the time those details are not exposed directly. Instead they’re exposed through configuration. But configuration doesn’t really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

> Configuration can lead to as many problems as it solves

Nailed it. He continues:

> Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework’s transformation than to write a program that directly implements that transformation itself.

I’m not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website’s architecture simple and decoupled from your hosting provider (something I’ve been a long-time proponent of):

> These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it’s forever. Ten years from now you’ll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears.

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it’s a good read from Jan and lays out his vision for why he’s building Web Origami: a tool that encourages you to understand (and customize) how you transform content into a website. He just launched version 0.4.0, which has some stuff I’m excited to try out (I’ll have to write about all that later).
More in programming
I always had a diffuse idea of why people spend so much time and money on amateur radio. Once I got my license and started to amass radios myself, it became much clearer.
What does it mean when someone writes that a programming language is “strongly typed”? I’ve known for many years that “strongly typed” is a poorly-defined term. Recently I was prompted on Lobsters to explain why it’s hard to understand what someone means when they use the phrase. I came up with more than five meanings!

how strong?

The various meanings of “strongly typed” are not clearly yes-or-no. Some developers like to argue that these kinds of integrity checks must be completely perfect or else they are entirely worthless. Charitably (it took me a while to think of a polite way to phrase this), that betrays a lack of engineering maturity. Software engineers, like any engineers, have to create working systems from imperfect materials. To do so, we must understand what guarantees we can rely on, where our mistakes can be caught early, where we need to establish processes to catch mistakes, how we can control the consequences of our mistakes, and how to remediate when something breaks because of a mistake that wasn’t caught.

strong how?

So, what are the ways that a programming language can be strongly or weakly typed? In what ways are real programming languages “mid”?

- Statically typed as opposed to dynamically typed? Many languages have a mixture of the two, such as run time polymorphism in OO languages (e.g. Java), or gradual type systems for dynamic languages (e.g. TypeScript).

- Sound static type system? It’s common for static type systems to be deliberately unsound, such as covariant subtyping in arrays or functions (Java, again). Gradual type systems might have gaping holes for usability reasons (TypeScript, again; see the sketch after this list). And some type systems might be unsound due to bugs. (There are a few of these in Rust.) Unsoundness isn’t a disaster, if a programmer won’t cause it without being aware of the risk. For example: in Lean you can write “sorry” as a kind of “to do” annotation that deliberately breaks soundness; and Idris 2 has type-in-type so it accepts Girard’s paradox.

- Type safe at run time? Most languages have facilities for deliberately bypassing type safety, with an “unsafe” library module or “unsafe” language features, or things that are harder to spot. It can be more or less difficult to break type safety in ways that the programmer or language designer did not intend. JavaScript and Lua are very safe, treating type safety failures as security vulnerabilities. Java and Rust have controlled unsafety. In C everything is unsafe.

- Fewer weird implicit coercions? There isn’t a total order here: for instance, C has implicit bool/int coercions, Rust does not; Rust has implicit deref, C does not. There’s a huge range in how much coercions are a convenience or a source of bugs. For example, the PHP and JavaScript == operators are made entirely of WAT, but at least you can use === instead.

- How fancy is the type system? To what degree can you model properties of your program as types? Is it convenient to parse, not validate? Is the Curry-Howard correspondence something you can put into practice? Or is it only capable of describing the physical layout of data?

There are probably other meanings, e.g. I have seen “strongly typed” used to mean that runtime representations are abstract (you can’t see the underlying bytes); or in the past it sometimes meant a language with a heavy type annotation burden (as a mischaracterization of static type checking).
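To make the soundness point concrete, here is a minimal sketch (mine, not the post’s) of a gradual-typing hole in TypeScript: `any` lets a value cross the static/dynamic boundary unchecked, so the compiler accepts code whose types are wrong at run time.

```typescript
// A gradual-typing soundness hole: `any` opts out of checking,
// so a string sneaks into a slot the compiler believes is a number.
function double(n: number): number {
  return n * 2;
}

const data: any = "not a number"; // `any` disables static checking here
const n: number = data;           // compiles fine: any is assignable to number
console.log(double(n));           // runs, prints NaN: the static types lied
```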
how to type

So, when you write (with your keyboard) the phrase “strongly typed”, delete it, and come up with a more precise description of what you really mean. The desiderata above are partly overlapping, sometimes partly orthogonal. Some of them you might care about, some of them not. But please try to communicate where you draw the line and how fuzzy your line is.
(Last week's newsletter took too long and I'm way behind on Logic for Programmers revisions, so a short one this time.[1])

In classical logic, two operators F/G are duals if F(x) = !G(!x). Three examples:

1. x || y is the same as !(!x && !y).
2. <>P ("P is possibly true") is the same as ![]!P ("not P isn't definitely true").
3. some x in set: P(x) is the same as !(all x in set: !P(x)).

(1) is just a version of De Morgan's Law, which we regularly use to simplify boolean expressions. (2) is important in modal logic but has niche applications in software engineering, mostly in how it powers various formal methods.[2]

The really interesting one is (3), the "quantifier duals". We use lots of software tools to either find a value satisfying P or check that all values satisfy P. And by duality, any tool that does one can do the other, by seeing if it fails to find/check !P. Some examples in the wild:

- Z3 is used to solve mathematical constraints, like "find x, where f(x) >= 0". If I want to prove a property like "f is always positive", I ask Z3 to solve "find x, where !(f(x) >= 0)" and see if that is unsatisfiable. This use case powers a LOT of theorem provers and formal verification tooling.
- Property testing checks that all inputs to a code block satisfy a property. I've used it to generate complex inputs with certain properties by checking that all inputs don't satisfy the property and reading out the test failure.
- Model checkers check that all behaviors of a specification satisfy a property, so we can find a behavior that reaches a goal state G by checking that all states are !G. Here's TLA+ solving a puzzle this way.[3]
- Planners find behaviors that reach a goal state, so we can check if all behaviors satisfy a property P by asking the planner to reach the goal state !P.
- The problem "find the shortest traveling salesman route" can be broken into some route: distance(route) = n and all route: !(distance(route) < n). Then a route finder can find the first, and then convert the second into a some and fail to find it, proving n is optimal.

Even cooler to me is when a tool does both finding and checking, but gives them different "meanings". In SQL, some x: P(x) is true if we can query for P(x) and get a nonempty response, while all x: P(x) is true if all records satisfy the P(x) constraint. Most SQL databases allow for complex queries but not complex constraints! You got UNIQUE, NOT NULL, REFERENCES, which are fixed predicates, and CHECK, which is one-record only.[4] Oh, and you got database triggers, which can run arbitrary queries and throw exceptions. So if you really need to enforce a complex constraint P(x, y, z), you put in a database trigger that queries some x, y, z: !P(x, y, z) and throws an exception if it finds any results. That all works because of quantifier duality! See here for an example of this in practice.
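As a tiny runnable illustration of the quantifier duality (3), here's a sketch (my example, not the newsletter's) using JavaScript's built-in Array.prototype.some and every, written as TypeScript:

```typescript
// Quantifier duality on finite sets:
//   some x: P(x)  ===  !(all x: !P(x))
//   all x: P(x)   ===  !(some x: !P(x))
const xs = [2, 4, 6, 7];
const isEven = (n: number) => n % 2 === 0;

console.log(xs.some(isEven) === !xs.every((n) => !isEven(n)));  // true
console.log(xs.every(isEven) === !xs.some((n) => !isEven(n)));  // true

// The same trick described above: a "finder" can act as a "checker".
// To check "all inputs satisfy P", search for a counterexample of P:
const counterexample = xs.find((n) => !isEven(n));
console.log(counterexample); // 7, so "all of xs are even" is false
```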
Duals more broadly

"Dual" doesn't have a strict meaning in math; it's more of a vibe thing where all of the "duals" are kinda similar in meaning but don't strictly follow all of the same rules. Usually things X and Y are duals if there is some transform F where X = F(Y) and Y = F(X), but not always. Maybe the category theorists have a formal definition that covers all of the different uses. Usually duals switch properties of things, too: an example showing some x: P(x) becomes a counterexample of all x: !P(x).

Under this definition, I think the dual of a list l could be reverse(l). The first element of l becomes the last element of reverse(l), the last becomes the first, etc.

A more interesting case: the dual of a K -> set(V) map is the V -> set(K) map. I.e. the dual of lived_in_city = {alice: {paris}, bob: {detroit}, charlie: {detroit, paris}} is city_lived_in_by = {paris: {alice, charlie}, detroit: {bob, charlie}}. This preserves the property that x in map[y] <=> y in dual[x]. (A code sketch of this inversion follows the footnotes.)

[1] And after writing this I just realized this is a partial retread of a newsletter I wrote a couple of months ago. But only a partial retread!

[2] Specifically, "linear temporal logics" are modal logics, so eventually P ("P is true in at least one state of each behavior") is the same as saying !always !P ("not P isn't true in all states of all behaviors"). This is the basis of liveness checking.

[3] I don't know for sure, but my best guess is that Antithesis does something similar when their fuzzer beats videogames. They're doing fuzzing, not model checking, but with the same purpose: checking that complex state spaces don't have bugs. Making the bug "we can't reach the end screen" can make a fuzzer output a complete end-to-end run of the game. Obvs a lot more complicated than that, but that's the general idea at least.

[4] For CHECK to constrain multiple records you would need to use a subquery. Core SQL does not support subqueries in CHECK; it is an optional feature outside of core SQL (F671), which Postgres does not support.
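As promised, a minimal sketch of that K -> set(V) inversion in TypeScript (the function name dualMap and the concrete types are mine, not the newsletter's):

```typescript
// Invert a K -> Set<V> map into a V -> Set<K> map, preserving
//   v in map.get(k)  <=>  k in dualMap(map).get(v)
function dualMap<K, V>(map: Map<K, Set<V>>): Map<V, Set<K>> {
  const out = new Map<V, Set<K>>();
  for (const [k, vs] of map) {
    for (const v of vs) {
      if (!out.has(v)) out.set(v, new Set<K>());
      out.get(v)!.add(k);
    }
  }
  return out;
}

const livedInCity = new Map<string, Set<string>>([
  ["alice", new Set(["paris"])],
  ["bob", new Set(["detroit"])],
  ["charlie", new Set(["detroit", "paris"])],
]);

console.log(dualMap(livedInCity));
// Map { 'paris' => Set {'alice','charlie'}, 'detroit' => Set {'bob','charlie'} }
```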
Omarchy 2.0 was released on Linux's 34th birthday as a gift to perhaps the greatest open-source project the world has ever known. Not only does Linux run 95% of all servers on the web and billions of devices as an embedded OS, it also turns out to be an incredible desktop environment! It's crazy that it took me more than thirty years to realize this, but while I spent time in Apple's walled garden, the free software alternative simply grew better, stronger, and faster. The Linux of 2025 is not the Linux of the 90s or the 00s or even the 10s. It's shockingly more polished, capable, and beautiful.

It's been an absolute honor to celebrate Linux with the making of Omarchy, the new Linux distribution that I've spent the last few months building on top of Arch and Hyprland. What began as a post-install script has turned into a full-blown ISO, a dedicated package repository, and a flourishing community of thousands of enthusiasts all collaborating on making it better. It's been improving rapidly, with over twenty releases since the premiere in late June, but this version 2.0 update is the biggest one yet.

If you've been curious about giving Linux a try, you're not afraid of an operating system that asks you to level up and learn a little, and you want to see what a totally different computing experience can look and feel like, I invite you to give it a go. Here's a full tour of Omarchy 2.0.
In 2020, Apple released the M1 with a custom GPU. We got to work reverse-engineering the hardware and porting Linux. Today, you can run Linux on a range of M1 and M2 Macs, with almost all hardware working: wireless, audio, and full graphics acceleration.

Our story begins in December 2020, when Hector Martin kicked off Asahi Linux. I was working at Collabora on Panfrost, the open source Mesa3D driver for Arm Mali GPUs. Hector put out a public call for guidance from upstream open source maintainers, and I bit. I just intended to give some quick pointers. Instead, I bought myself a Christmas present and got to work. In between my university coursework and Collabora work, I poked at the shader instruction set. One thing led to another. Within a few weeks, I drew a triangle. In 3D graphics, once you can draw a triangle, you can do anything.

Pretty soon, I started work on a shader compiler. After my final exams that semester, I took a few days off from Collabora to bring up an OpenGL driver capable of spinning gears with my new compiler. Over the next year, I kept reverse-engineering and improving the driver until it could run 3D games on macOS. Meanwhile, Asahi Lina wrote a kernel driver for the Apple GPU. My userspace OpenGL driver ran on macOS, leaving her kernel driver as the missing piece for an open source graphics stack. In December 2022, we shipped graphics acceleration in Asahi Linux.

In January 2023, I started my final semester in my Computer Science program at the University of Toronto. For years I had juggled my courses with my part-time job and my hobby driver. I faced the same question as my peers: what will I do after graduation?

Maybe Panfrost? I started reverse-engineering the Mali Midgard GPU back in 2017, when I was still in high school. That led to an internship at Collabora in 2019 once I graduated, turning into my job throughout four years of university. During that time, Panfrost grew from a kid’s pet project based on black-box reverse-engineering to a professional driver engineered by a team with Arm’s backing and hardware documentation. I did what I set out to do, and the project succeeded beyond my dreams. It was time to move on.

What did I want to do next? Finish what I started with the M1. Ship a great driver.

- Bring full, conformant OpenGL drivers to the M1. Apple’s drivers are not conformant, but we should strive for the industry standard.
- Bring full, conformant Vulkan to Apple platforms, disproving the myth that Vulkan isn’t suitable for Apple hardware.
- Bring Proton gaming to Asahi Linux. Thanks to Valve’s work for the Steam Deck, Windows games can run better on Linux than even on Windows. Why not reap those benefits on the M1?

Panfrost was my challenge until we “won”. My next challenge? Gaming on Linux on M1.

Once I finished my coursework, I started full-time on gaming on Linux. Within a month, we shipped OpenGL 3.1 on Asahi Linux. A few weeks later, we passed official conformance for OpenGL ES 3.1. That put us at feature parity with Panfrost. I wanted to go further.

OpenGL (ES) 3.2 requires geometry shaders, a legacy feature not supported by either Arm or Apple hardware. The proprietary OpenGL drivers emulate geometry shaders with compute, but there was no open source prior art to borrow. Even though multiple Mesa drivers need geometry/tessellation emulation, nobody had done the work to get there. My early progress on OpenGL was fast thanks to the mature common code in Mesa. It was time to pay it forward.
Over the rest of the year, I implemented geometry/tessellation shader emulation. And also the rest of the owl. In January 2024, I passed conformance for the full OpenGL 4.6 specification, finishing up OpenGL.

Vulkan wasn’t too bad, either. I polished the OpenGL driver for a few months, but once I started typing a Vulkan driver, I passed 1.3 conformance in a few weeks. What remained was wiring up the geometry/tessellation emulation to my shiny new Vulkan driver, since those are required for Direct3D. Et voilà, Proton games.

Along the way, Karol Herbst passed OpenCL 3.0 conformance on the M1, running my compiler atop his “rusticl” frontend. Meanwhile, when the Vulkan 1.4 specification was published, we were ready and shipped a conformant implementation on the same day. After that, I implemented sparse texture support, unlocking Direct3D 12 via Proton.

…Now what?

Ship a great driver? Check.
Conformant OpenGL 4.6, OpenGL ES 3.2, and OpenCL 3.0? Check.
Conformant Vulkan 1.4? Check.
Proton gaming? Check.

That’s a wrap. We’ve succeeded beyond my dreams. The challenges I chased, I have tackled. The drivers are fully upstream in Mesa. Performance isn’t too bad. With the Vulkan-on-Apple myth busted, conformant Vulkan is now coming to macOS via LunarG’s KosmicKrisp project building on my work.

Satisfied, I am now stepping away from the Apple ecosystem. My friends in the Asahi Linux orbit will carry the torch from here. As for me? Onto the next challenge!