More from Jim Nielsen’s Blog
Some lessons I’ve learned from experience.

1. Install Stuff Indiscriminately From npm

Become totally dependent on others — that’s why they call them “dependencies” after all! Lean into it. Once your dependencies break — and they will, time breaks all things — then you can spend lots of time and energy (which was your goal from the beginning) ripping out those dependencies and replacing them with new dependencies that will break later. Why rip them out? Because you can’t fix them. You don’t even know how they work; that’s why you introduced them in the first place! Repeat ad nauseam (that is, until you decide you don’t want to make websites that require lots of your time and energy — but that’s not your goal if you’re reading this article).

2. Pick a Framework Before You Know You Need One

Once you hitch your wagon to a framework (a dependency, see above), then any updates to your site via the framework require that you first understand what changed in the framework. More of your time and energy expended. Mission accomplished!

3. Always, Always Require a Compilation Step

Put a critical dependency between working on your website and using it in the browser. You know, some mechanism that is required to function before you can even see your website — like a compilation step or build process. The bigger and more complex, the better. This is a great way to spend lots of time and energy working on your website. (Well, technically it’s not really working on your website. It’s working on the thing that spits out your website. So you’ll excuse me for recommending something that requires your time and energy that isn’t your website — since that’s not the stated goal — but trust me, this apparent diversion will directly affect the overall amount of time and energy you spend making a website. So, ultimately, it will still help you reach our stated goal.) Requiring that the code you write be transpiled, compiled, parsed, and evaluated before it can be used in your website is a great way to spend extra time and energy making a website (as opposed to, say, writing code as it will be run, which would save you time and energy and is not our goal here).

More?

Do you have more advice on building a website that will require a lot of your time and energy? Share your recommendations with others, in case they’re looking for such advice.
Here’s Jony Ive in his Stripe interview:

What we make stands testament to who we are. What we make describes our values. It describes our preoccupations. It describes, beautifully succinctly, our preoccupations.

I’d never really noticed the connection between these two words: occupation and preoccupation. What comes before occupation? Pre-occupation. What comes before what you do for a living? What you think about. What you’re preoccupied with. What you think about will drive you towards what you work on.

So when you’re asking yourself, “What comes next? What should I work on?” another way of asking that question is, “What occupies my thinking right now?” And if what you’re occupied with doesn’t align with what you’re preoccupied with, perhaps it’s time for a change.
Here’s Jony Ive talking to Patrick Collison about measurement and numbers:

People generally want to talk about product attributes that you can measure easily with a number…schedule, costs, speed, weight, anything where you can generally agree that six is a bigger number than two.

He says he used to get mad at how often people around him focused on the numbers of the work over other attributes of the work. But after giving it more thought, he now has a more generous interpretation of why we do this: because we want to relate to each other, understand each other, and be inclusive of one another. There are many things we can’t agree on, but it’s likely we can agree that six is bigger than two. And so in this capacity, numbers become a tool for communicating with each other, albeit a kind of least common denominator — e.g. “I don’t agree with you at all, but I can’t argue that 134 is bigger than 87.”

This is conducive to a culture where we spend all our time talking about attributes we can easily measure (because then we can easily communicate and work together) and results in a belief that the only things that matter are those which can be measured. People will give lip service to that not being the case, e.g. “We know there are things that can’t be measured that are important.” But the reality ends up being: only that which can be assigned a number gets managed, and that which gets managed is imbued with importance because it is allotted our time, attention, and care.

This reminds me of the story of the judgement of King Solomon, an archetypal story found in cultures around the world. Here’s the story as summarized on Wikipedia:

Solomon ruled between two women who both claimed to be the mother of a child. Solomon ordered the baby be cut in half, with each woman to receive one half. The first woman accepted the compromise as fair, but the second begged Solomon to give the baby to her rival, preferring the baby to live, even without her. Solomon ordered the baby given to the second woman, as her love was selfless, as opposed to the first woman’s selfish disregard for the baby’s actual well-being.

In an attempt to resolve the friction between two individuals, an appeal was made to numbers as an arbiter. We can’t agree on who the mother is, so let’s make it a numbers problem. Reduce the baby to a number and we can agree! But that doesn’t work very well, does it?

I think there is a level of existence where measurement and numbers are a sound guide, where two and two make four and two halves make a whole. But, as humans, there is another level of existence where mathematical propositions don’t translate. A baby is not a quantity. A baby is an entity. Take a whole baby and divide it up by a sword and you do not have two halves of a baby. I am not a number. I’m an individual. Indivisible.

What does this all have to do with software? Software is for us as humans, as individuals, and because of that I believe there is an aspect of its nature where metrics can’t take you. In fact, not only will numbers not guide you, they may actually misguide you. I think Robin Rendle articulated this well in his piece “Trust the vibes”:

[numbers] are not representative of human experience or human behavior and can’t tell you anything about beauty or harmony or how to be funny or what to do next and then how to do it.

Wisdom is knowing when to use numbers and when to use something else.
Exploring diagram.website, I came across The Computer is a Feeling by Tim Hwang and Omar Rizwan:

the modern internet exerts a tyranny over our imagination. The internet and its commercial power has sculpted the computer-device. It’s become the terrain of flat, uniform, common platforms and protocols, not eccentric, local, idiosyncratic ones.

Before computers were connected together, they were primarily personal. Once connected, they became primarily social. The purpose of the computer shifted to become social over personal.

The triumph of the internet has also impoverished our sense of computers as a tool for private exploration rather than public expression. The pre-network computer has no utility except as a kind of personal notebook, the post-network computer demotes this to a secondary purpose.

Smartphones are indisputably the personal computer. And yet, while being so intimately personal, they’re also the largest distribution of behavior-modification devices the world has ever seen. We all willingly carry around in our pockets a device whose content is largely designed to modify our behavior and extract our time and money.

Making “computer” mean computer-feelings and not computer-devices shifts the boundaries of what is captured by the word. It removes a great many things – smartphones, language models, “social” “media” – from the domain of the computational. It also welcomes a great many things – notebooks, papercraft, diary, kitchen – back into the domain of the computational.

I love the feeling of a personal computer, one whose purpose primarily resides in the domain of the individual and secondarily supports the social. It’s part of what I love about some of the ideas embedded in local-first, which start from the principle of owning and prioritizing what you do on your computer first and foremost, and then secondarily syncing that to other computers for the use of others.
After publishing my Analysis of Links From The White House’s “Wire” Website, Tina Nguyen, political correspondent at The Verge, reached out with some questions. Her questions made me realize that the numbers in my analysis weren’t quite correct (I wasn’t de-duplicating links across days, so I fixed that problem). More pointedly, she asked about the most popular domain the White House was linking to: YouTube. Specifically, were the links to YouTube 1) independent content creators, 2) the White House itself, or 3) a mix? A great question. I didn’t know the answer but wanted to find out.

A little JavaScript code in my spreadsheet and boom, I had all the YouTube links in one place. I couldn’t really discern from the links themselves what I was looking at. A number of them were to the /live/ subpath, meaning I was looking at links to live streaming events. But most of the others were YouTube’s standard /watch?v=:id, which leaves the content and channel behind the URL opaque. The only real way to know was to click through to each one. I did a random sampling and found that most of the ones I clicked on went to The White House’s own YouTube channel. I told Tina as much, sent her the data I had, and she reported on it in an article at The Verge.

Tina’s question did get me wondering: precisely how many of those links are to the White House’s own YouTube channel vs. other content creators? Once again, writing scripts that process data, talk to APIs, and put it all into 2-dimensional tables in a spreadsheet was super handy. I looked at all the YouTube links, extracted the video ID, then queried the YouTube API for information about the video, like what channel it belongs to (a sketch of that lookup appears at the end of this post). Once I had the script working as expected for a single cell, it was easy to do the spreadsheet thing where you just “drag down” to autocomplete all the other cells with video IDs.

The result? From May 8th to July 6th there were 78 links to YouTube from wh.gov/wire, which breaks down as follows:

- 73 links to videos on the White House’s own YouTube channel
- 2 links to videos on the channel “Department of Defense”
- 1 link to a video on the channel “Pod Force One with Miranda Devine”
- 1 link to a video on the channel “Breitbart News”
- 1 link to a video that has since been taken down “due to a copyright claim by Sony Music Publishing” (so I’m not sure whose channel that was)
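For the curious, here’s roughly what that per-video lookup looks like outside a spreadsheet. This is a hedged sketch in Python (the post describes doing it with JavaScript inside the spreadsheet); the API key is a placeholder you’d supply yourself, and channel_for_video is a name invented for illustration:

    import requests

    API_KEY = "YOUR_YOUTUBE_DATA_API_KEY"  # placeholder: create your own key

    def channel_for_video(video_id: str) -> str:
        """Ask the YouTube Data API v3 which channel a video belongs to."""
        resp = requests.get(
            "https://www.googleapis.com/youtube/v3/videos",
            params={"part": "snippet", "id": video_id, "key": API_KEY},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json().get("items", [])
        if not items:
            return "(video unavailable or removed)"
        return items[0]["snippet"]["channelTitle"]

Run something like that over each extracted video ID and tally the channel titles, and you get the breakdown above.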
More in programming
We don’t want to bury the lede: we have raised a $100M Series B, led by a new strategic partner in USIT with participation from all existing Oxide investors. To put that number in perspective: over the nearly six year lifetime of the company, we have raised $89M; our $100M Series B more than doubles our total capital raised to date — and positions us to make Oxide the generational company that we have always aspired it to be.

If this aspiration seems heady now, it seemed absolutely outlandish when we were first raising venture capital in 2019. Our thesis was that cloud computing was the future of all computing; that running on-premises would remain (or become!) strategically important for many; that the entire stack — hardware and software — needed to be rethought from first principles to serve this market; and that a large, durable, public company could be built by whoever pulled it off. This scope wasn’t immediately clear to all potential investors, some of whom seemed to latch on to one aspect or another without understanding the whole. Their objections were revealing: “We know you can build this,” began more than one venture capitalist (at which we bit our tongue; were we not properly explaining what we intended to build?!), “but we don’t think that there is a market.” Entrepreneurs must become accustomed to rejection, but this flavor was particularly frustrating because it was exactly backwards: we felt that there was in fact substantial technical risk in the enormity of the task we put before ourselves — but we also knew that if we could build it (a huge if!) there was a huge market, desperate for cloud computing on-premises.

Fortunately, in Eclipse Ventures we found investors who saw what we saw: that the most important products come when we co-design hardware and software together, and that the on-premises market was sick of being told that they either don’t exist or that they don’t deserve modernity. These bold investors — like the customers we sought to serve — had been waiting for this company to come along; we raised seed capital, and started building. And build it we did, making good on our initial technical vision:

- We did our own board designs, allowing for essential system foundations like a true hardware root-of-trust and end-to-end power observability.
- We did our own microcontroller operating system, and used it to replace the traditional BMC.
- We did our own platform enablement software, eliminating the traditional UEFI BIOS and its accompanying flotilla of vulnerabilities.
- We did our own host hypervisor, assuring an integrated and seamless user experience — and eliminating the need for a third-party hypervisor and its concomitant rapacious software licensing.
- We did our own switch — and our own switch runtime — eliminating entire universes of integration complexity and operational nightmares.
- We did our own integrated storage service, allowing the rack-scale system to have reliable, available, durable, elastic instance storage without necessitating a dependency on a third party.
- We did our own control plane, a sophisticated distributed system building on the foundation of our hardware and software components to deliver the API-driven services that modernity demands: elastic compute, virtual networking, and virtual storage.

While these technological components are each very important (and each is in service to specific customer problems when deploying infrastructure on-premises), the objective is the product, not its parts.
The journey to a product was long, but we ticked off the milestones. We got the boards brought up. We got the switch transiting packets. We got the control plane working. We got the rack manufactured. We passed FCC compliance. And finally, two years ago, we shipped our first system! Shortly thereafter, more milestones of the variety you can only get after shipping: our first update of the software in the field; our first update-delivered performance improvements; our first customer-requested features added as part of an update.

Later that year, we hit general commercial availability, and things started accelerating. We had more customers — and our first multi-rack customer. We had customers go on the record about why they had selected Oxide — and customers describing the wins that they had seen deploying Oxide. Customers started landing faster now: enterprise sales cycles are infamously long, but we were finding that we were going from first conversations to a delivered product surprisingly quickly. The quickening pace always seemed to be due in some way to our transparency: new customers were listeners to our podcast, or they had read our RFDs, or they had perused our documentation, or they had looked at the source code itself.

With growing customer enthusiasm, we were increasingly getting questions about what it would look like to buy a large number of Oxide racks. Could we manufacture them? Could we support them? Could we make them easy to operate together? Into this excitement, a new potential investor, USIT, got to know us. They asked terrific questions, and we found a shared disposition towards building lasting value and doing it the right way. We learned more about them, too, and especially USIT’s founder, Thomas Tull. The more we each learned about the other, the more there was to like. And importantly, USIT had the vision for us that we had for ourselves: that there was a big, important market here — and that it was uniquely served by Oxide.

We are elated to announce this new, exciting phase of the company. It’s not necessarily in our nature to celebrate fundraising, but this is a big milestone, because it will allow us to address our customers' most pressing questions around scale (manufacturing scale, system scale, operations scale) and roadmap scope. We have always believed in our mission, but this raise gives us a new sense of confidence when we say it: we’re going to kick butt, have fun, not cheat (of course!), love our customers — and change computing forever.
I'm way too discombobulated from getting next month's release of Logic for Programmers ready, so I'm pulling an idea from the slush pile. Basically I wanted to come up with a mental model of arrays as a concept that explained APL-style multidimensional arrays and tables, but also why there weren't multitables.

So, arrays. In all languages they are basically the same: they map a sequence of numbers (I'll use 1..N)[1] to homogeneous values (values of a single type). This is in contrast to the other two foundational types, associative arrays (which map an arbitrary type to homogeneous values) and structs (which map a fixed set of keys to heterogeneous values). Arrays appear in PLs earlier than the other two, possibly because they have the simplest implementation and the most obvious application to scientific computing. The OG FORTRAN had arrays.

I'm interested in two structural extensions to arrays. The first, found in languages like nushell and frameworks like Pandas, is the table. Tables have string keys like a struct and indexes like an array. Each row is a struct, so you can get "all values in this column" or "all values for this row". They're heavily used in databases and data science. The other extension is the N-dimensional array, mostly seen in APLs like Dyalog and J. Think of this like arrays-of-arrays(-of-arrays), except all arrays at the same depth have the same length. So [[1,2,3],[4]] is not a 2D array, but [[1,2,3],[4,5,6]] is. This means that N-arrays can be queried on any axis:

       ]x =: i. 3 3
    0 1 2
    3 4 5
    6 7 8
       0 { x    NB. first row
    0 1 2
       0 {"1 x  NB. first column
    0 3 6

So, I've had some ideas on a conceptual model of arrays that explains all of these variations and possibly predicts new variations. I wrote up my notes and did the bare minimum of editing and polishing. Somehow it ended up being 2000 words.

1-dimensional arrays

A one-dimensional array is a function over 1..N for some N. To be clear, these are math functions, not programming functions. Programming functions take values of a type and perform computations on them. Math functions take values of a fixed set and return values of another set. So the array [a, b, c, d] can be represented by the function (1 -> a ++ 2 -> b ++ 3 -> c ++ 4 -> d). Let's write the set of all four-element character arrays as 1..4 -> char. 1..4 is the function's domain. The set of all character arrays is the empty array + the functions with domain 1..1 + the functions with domain 1..2 + ... Let's call this set Array[Char]. Our compilers can enforce that a type belongs to Array[Char], but some operations care about the more specific type, like matrix multiplication. This is either checked with the runtime type or, in exotic enough languages, with static dependent types. (This is actually how TLA+ does things: the basic collection types are functions and sets, and a function with domain 1..N is a sequence.)

2-dimensional arrays

Now take the 3x4 matrix

       i. 3 4
    0 1  2  3
    4 5  6  7
    8 9 10 11

There are two equally valid ways to represent the array function:

1. A function that takes a row and a column and returns the value at that index, so it would look like f(r: 1..3, c: 1..4) -> Int.
2. A function that takes a row and returns that row as an array, aka another function: f(r: 1..3) -> g(c: 1..4) -> Int.[2]

Man, (2) looks a lot like currying! In Haskell, functions can only have one parameter. If you write (+) 6 10, (+) 6 first returns a new function f y = y + 6, and then applies f 10 to get 16.
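(Here's that curried-array idea rendered as a minimal Python sketch, using plain closures and 1-based indexing to match the prose; none of this is from the original post.)

    # A 3x4 matrix as a curried function: row index -> (column index -> value).
    data = [[0, 1, 2, 3],
            [4, 5, 6, 7],
            [8, 9, 10, 11]]

    def matrix(r):       # 1..3 -> (1..4 -> Int)
        def row(c):      # 1..4 -> Int
            return data[r - 1][c - 1]
        return row

    assert matrix(2)(3) == 6   # row 2, column 3
    first_row = matrix(1)      # partial application gives back a 1-D array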
So (+) has the type signature Int -> Int -> Int: it's a function that takes an Int and returns a function of type Int -> Int.[3] Similarly, our 2D array can be represented as an array function that returns array functions: it has type 1..3 -> 1..4 -> Int, meaning it takes a row index and returns 1..4 -> Int, aka a single array. (This differs from conventional arrays-of-arrays because it forces all of the subarrays to have the same domain, aka the same length. If we wanted to permit ragged arrays, we would instead have the type 1..3 -> Array[Int].)

Why is this useful? A couple of reasons. First of all, we can apply function transformations to arrays, like "combinators". For example, we can flip any function of type a -> b -> c into a function of type b -> a -> c. So given a function that takes rows and returns columns, we can produce one that takes columns and returns rows. That's just a matrix transposition! Second, we can extend this to any number of dimensions: a three-dimensional array is one with type 1..M -> 1..N -> 1..O -> V. We can still use function transformations to rearrange the array along any ordering of axes. Speaking of dimensions:

What are dimensions, anyway

Okay, so now imagine we have a Row × Col grid of pixels, where each pixel is a struct of type Pixel(R: int, G: int, B: int). So the array is

    Row -> Col -> Pixel

But we can also represent the Pixel struct with a function: Pixel(R: 0, G: 0, B: 255) is the function where f(R) = 0, f(G) = 0, f(B) = 255, making it a function of type {R, G, B} -> Int. So the array is actually the function

    Row -> Col -> {R, G, B} -> Int

And then we can rearrange the parameters of the function like this:

    {R, G, B} -> Row -> Col -> Int

Even though the set {R, G, B} is not of form 1..N, this clearly has a real meaning: f[R] is the function mapping each coordinate to that coordinate's red value. What about Row -> {R, G, B} -> Col -> Int? That's, for each row, the 3 × Col array mapping each color to that row's intensities.

Really any finite set can be a "dimension". Recording the monitor over a span of time? Frame -> Row -> Col -> Color -> Int. Recording a bunch of computers over some time? Computer -> Frame -> Row …. This is pretty common in constraint satisfaction! Like if you're trying to assign conference talks to talk slots, your array might be type (Day, Time, Room) -> Talk, where Day/Time/Room are enumerations.

An implementation constraint is that most programming languages only allow integer indexes, so we have to replace Rooms and Colors with numerical enumerations over the set. As long as the set is finite, this is always possible, and for struct-functions, we can always choose the indexing on the lexicographic ordering of the keys. But we lose type safety.

Why tables are different

One more example: Day -> Hour -> Airport(name: str, flights: int, revenue: USD). Can we turn the struct into a dimension like before? In this case, no. We were able to make Color an axis because we could turn Pixel into a Color -> Int function, and we could only do that because all of the fields of the struct had the same type. This time, the fields are different types, so we can't convert {name, flights, revenue} into an axis.[4] One thing we can do is convert it to three separate functions:

    airport: Day -> Hour -> Str
    flights: Day -> Hour -> Int
    revenue: Day -> Hour -> USD

But we want to keep all of the data in one place.
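(To make those two shapes concrete before going on, here's a minimal Python sketch of the same airport data both ways; the numbers are invented.)

    # Array-of-structs: one array whose elements are heterogeneous records.
    rows = [
        {"airport": "SLC", "flights": 120, "revenue": 40_000},
        {"airport": "JFK", "flights": 310, "revenue": 95_000},
    ]

    # Struct-of-arrays: one homogeneous array per field, kept in parallel.
    columns = {
        "airport": ["SLC", "JFK"],
        "flights": [120, 310],
        "revenue": [40_000, 95_000],
    }

    # Same data, two layouts:
    assert rows[1]["flights"] == columns["flights"][1] == 310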
That's where tables come in: an array-of-structs is isomorphic to a struct-of-arrays:

    AirportColumns(
        airport: Day -> Hour -> Str,
        flights: Day -> Hour -> Int,
        revenue: Day -> Hour -> USD,
    )

The table is a sort of both representations simultaneously. If this was a pandas dataframe, df["airport"] would get the airport column, while df.loc[day1] would get the first day's data. I don't think many table implementations support more than one axis dimension, but there's no reason they couldn't. These are also possible transforms:

    Hour -> NamesAreHard(
        airport: Day -> Str,
        flights: Day -> Int,
        revenue: Day -> USD,
    )

    Day -> Whatever(
        airport: Hour -> Str,
        flights: Hour -> Int,
        revenue: Hour -> USD,
    )

In my mental model, the heterogeneous struct acts as a "block" in the array. We can't remove it; we can only push an index into the fields or pull a shared column out. But there's no way to convert a heterogeneous table into an array.

Actually there is a terrible way

Most languages have unions or sum types that let us say "this is a string OR integer". So we can make our airport data Day -> Hour -> AirportKey -> Int | Str | USD. Heck, might as well just say it's Day -> Hour -> AirportKey -> Any. But would anybody really be mad enough to use that in practice? Oh wait, J does exactly that. J has an opaque datatype called a "box". A "table" is a function Dim1 -> Dim2 -> Box. You can see some examples of what that looks like here.

Misc Thoughts and Questions

- The heterogeneity barrier seems like it explains why we don't see multiple axes of table columns, while we do see multiple axes of array dimensions. But is that actually why? Is there a system out there that does have multiple columnar axes?
- The array x = [[a, b, a], [b, b, b]] has type 1..2 -> 1..3 -> {a, b}. Can we rearrange it to 1..2 -> {a, b} -> 1..3? No. But we can rearrange it to 1..2 -> {a, b} -> PowerSet(1..3), which maps rows and characters to the columns with that character: [(a -> {1, 3} ++ b -> {2}), (a -> {} ++ b -> {1, 2, 3})]. We can also transform Row -> PowerSet(Col) into Row -> Col -> Bool, aka a boolean matrix. This makes sense to me, as both forms are means of representing directed graphs.
- Are other function combinators useful for thinking about arrays?
- Does this model cover pivot tables? Can we extend it to relational data with multiple tables?

Systems Distributed Talk (will be) Online

The premiere will be August 6 at 12 CST, here! I'll be there to answer questions / mock my own performance / generally make a fool of myself.

[1] Sacrilege! But it turns out in this context, it's easier to use 1-indexing than 0-indexing. In the years since I wrote that article I've settled on "each indexing choice matches different kinds of mathematical work", so mathematicians and computer scientists are best served by being able to choose their index. But software engineers need consistency, and 0-indexing is overall a net better consistency pick.

[2] This is right-associative: a -> b -> c means a -> (b -> c), not (a -> b) -> c. (1..3 -> 1..4) -> Int would be the associative array that maps length-3 arrays to integers.

[3] Technically it has type Num a => a -> a -> a, since (+) works on floats too.

[4] Notice that if each Airport had a unique name, we could pull it out into AirportName -> Airport(flights, revenue), but we still are stuck with two different values.
Every 6 months or so, I decide to leave my cave and check out what the cool kids are doing with AI. Apparently the latest trend is to use fancy command line tools to write code using LLMs. This is a very nice change, since it suddenly makes AI compatible with my allergy to getting out of the terminal.

The most popular of these tools seems to be Claude Code. It promises to be able to build in total autonomy, being able to search code, write code, run tests, lint, and commit the changes. While this sounds great on paper, I’m not keen on getting locked into vendor tools from an unprofitable company. At some point, they will either need to raise their prices, enshittify their product, or most likely do both. So I went looking for what the free and open source alternatives are.

Picking a model

There’s a large number of open source large language models on the market, with new ones getting released all the time. However, they are not all ready to be used locally for coding tasks, so I had to try a bunch of them before settling on one.

deepseek-r1:8b

Deepseek is the most popular open source model right now. It was created by the eponymous Chinese company. It made the news by beating numerous benchmarks while being trained on a budget that is probably lower than the compensation of some OpenAI workers. The 8b variant only weighs 5.2 GB and runs decently on limited hardware, like my three-year-old Mac. This model is famous for forgetting about world events from 1989, but it also seems to have a few issues when faced with concrete coding tasks. It is a reasoning model, meaning it “thinks” before acting, which should lead to improved accuracy. In practice, it regularly gets stuck indefinitely searching for where it should start and jumping from one problem to the other in a loop. This can happen even on simple problems, and made it unusable for me.

mistral:7b

Mistral is the French alternative to American and Chinese models. I have already talked about their 7b model on this blog. It is worth noting that they have kept updating their models, and it should now be much more accurate than two years ago. Mistral is not a reasoning model, so it will jump straight to answering. This is very good if you’re working with tasks where speed and low compute use are a priority. Sadly, the accuracy doesn’t seem good enough for coding. Even on simple tasks, it will hallucinate functions or randomly delete parts of the code I didn’t want to touch.

qwen3:8b

Another model from China, qwen3 was created by the folks at Alibaba. It also claims impressive benchmark results, and can work as both a reasoning and a non-thinking model. It was made with modern AI tooling in mind, supporting MCPs and a framework for agentic development. This model actually seems to work as expected, providing somewhat accurate code output while not hanging in the reasoning part. Since it runs decently on my local setup, I decided to stick with that model for now.

Setting up a local API with Ollama

Ollama is now the default way to download and run local LLMs. It can be simply installed by downloading it from their website. Once installed, it works like Docker for models, giving us access to commands like pull, run, or rm. Ollama will expose an API on localhost which can be used by other programs. For example, you can use it from your Python programs through ollama-python; a minimal sketch follows.
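Here’s roughly what talking to that local API looks like, a sketch using the ollama-python client (it assumes the model has already been pulled; this snippet isn’t from the original post):

    # pip install ollama — assumes `ollama pull qwen3:8b` has been run
    import ollama

    response = ollama.chat(
        model="qwen3:8b",
        messages=[{"role": "user", "content": "Explain list comprehensions in one sentence."}],
    )
    print(response["message"]["content"])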
Pair programming with aider

The next piece of software I installed is aider. I assume it’s pronounced like the French word, but I could not confirm that. Aider describes itself as a “pair programming” application. Its main job is to pass context to the model, let it write the output to files, run linters, and commit the changes.

Getting started

It can be installed using the official Python package or via Homebrew if you use a Mac. Once it is installed, just navigate to your code repository and launch it:

    export OLLAMA_API_BASE=http://127.0.0.1:11434
    aider --model ollama_chat/qwen3:8b

The CLI should automatically create some configuration files and add them to the repo’s .gitignore.

Usage

Aider isn’t meant to be left alone in complete autonomy. You’ll have to guide the AI through the process of making changes to your repository. To start, use the /add command to add files you want to focus on. Those files will be passed entirely to the model’s context, and the model will be able to write to them. You can then ask questions using the /ask command. If you want to generate code, a good strategy can be to start by requesting a plan of action. When you want it to actually write to the files, you can prompt it using the /code command. This is also the default mode. There’s no absolute guarantee that it will follow a plan if you agreed on one previously, but it is still a good idea to have one. The /architect command seems to automatically ask for a plan, accept it, and write the code. The specificity of this command is that it lets you use different models to plan and write the changes.

Refactoring

I tried coding with aider in a few situations to see how it performs in practice. First, I tried making it do a simple refactoring on Itako, which is a project of average complexity. When pointed to the exact part of the code where the issues happened, and told explicitly what to do, the model managed to change the target struct according to the instructions. It did unexpectedly change a function that was outside the scope of what I asked, but this was easy to spot. On paper, this looks like a success. In practice, the time spent crafting a prompt, waiting for the AI to run, and fixing the small issue that came up immensely exceeded the 10 minutes it would have taken me to edit the file myself. I don’t think coding that way would lead me to a massive performance improvement for now.

Greenfield project

For a second scenario, I wanted to see how it would perform on a brand-new project. I quickly set up a Python virtual environment, and asked aider to work with me on building a simple project. We would be opening a file containing Japanese text, parsing it with fugashi, and counting the words. To my surprise, this was a disaster. All I got was a bunch of hallucination-riddled Python that wouldn’t run under any circumstances. It may be that the lack of context actually made it harder for the model to generate code.

Troubleshooting

Finally, I went back to Itako, and decided to check how it would perform on common troubleshooting tasks. I introduced a few bugs to my code and gathered some error messages. I then proceeded to simply give aider the files mentioned by the error message and just use /ask to have it explain the errors to me, without requiring it to implement the code. This part did work very well. If I compare it with Googling unknown error messages, I think this can cut the time spent on the issue by half. This is not just because Google is getting worse every day; the model having access to the actual code does give it a massive advantage.
I do think this setup is something I can use instead of the occasional frustration of scrolling through StackOverflow threads when something unexpected breaks.

What about the Qwen CLI?

With everyone jumping on the trend of CLI tools for LLMs, the Qwen team released its own Qwen Code. It can be installed using npm, and connects to a local model if configured like this:

    export OPENAI_API_KEY="ollama"
    export OPENAI_BASE_URL="http://localhost:11434/v1/"
    export OPENAI_MODEL="qwen3:8b"

Compared to aider, it aims at being fully autonomous. For example, it will search your repository using grep. However, I didn’t manage to get it to successfully write any code. The tool seems optimized for larger, online models with context sizes up to 1M tokens. Our local qwen3 only has a 40k-token context window, which can get overwhelmed very quickly when browsing entire code repositories. Even when I didn’t run out of context, the tool mysteriously failed when trying to write files. It insists it can only write to absolute paths, which the model doesn’t seem to agree with providing. I did not investigate the issue further.
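(If you hit similarly mysterious failures, one sanity check worth doing is confirming that the OpenAI-compatible endpoint itself responds. A quick sketch with the official openai Python client pointed at Ollama; this isn’t from the original post:)

    # pip install openai — Ollama serves an OpenAI-compatible API under /v1
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1/", api_key="ollama")
    reply = client.chat.completions.create(
        model="qwen3:8b",
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(reply.choices[0].message.content)

If that works, the problem lies in the tool’s file-writing layer rather than the model server.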
Ideals are supposed to be unattainable for the great many. If everyone could be the smartest, strongest, prettiest, or best, there would be no need for ideals — we'd all just be perfect. But we're not, so ideals exist to show us the peak of humanity and to point our ambition and appreciation toward it.

This is what I always hated about the 90s. It was a decade that made it cool to be a loser. It was the decade of MTV's Beavis and Butt-Head. It was the age of grunge. I'm generationally obliged to like Nirvana, but what a perfectly depressive, suicidal soundtrack to loser culture.

Naomi Wolf's The Beauty Myth was published in 1990. It took a critical theory-like lens to beauty ideals, and found it all so awfully oppressive. Because, actually, seeing beautiful, slim people in advertising or media is bad. Because we don't all look like that! And who's even to say what "beauty" is, anyway? It's all just socially constructed!

The final stage of that dead-end argument appeared as an ad here in Copenhagen thirty years later, during the 2020 insanity: I passed it every day biking the boys to school for weeks. Next to other slim, fit Danes also riding their bikes. None of whom resembled the grotesque display of obesity towering over them on their commute from Calvin Klein.

While this campaign was laughably out of place in Copenhagen, it's possible that it brought recognition and representation in some parts of America. But a celebration of ideals it was not. That's the problem with the whole "representation" narrative. It proposes we're all better off if all we see is a mirror of ourselves, however obese, lazy, ignorant, or incompetent, because at least it won't be "unrealistic".

Screw that. The last thing we need is a patronizing message that however little you try, you're perfect just the way you are. No, the beauty of ideals is that they ask more of us. Ask us to pursue knowledge, fitness, and competence by taking inspiration from the best human specimens.

Thankfully, no amount of post-modern deconstruction or academic theory babble seems capable of suppressing the intrinsic human yearning for excellence forever. The ideals are finally starting to emerge again.