More from the singularity is nearer
Hold my hand, grow my skin

Erica Western Geiger Counter

Do you have any addictions? You may not register them as such, perhaps because they don’t lead to anything you consider harmful consequences. But you have them. In some ways, all your behavior is compulsive. What would the alternative be?

The point is, if behavior is something we can predict (see this video), free will comes from the “veil of computability”: things look random until you find the pattern.

I was at a bar last night and this girl told me you can’t predict humans, and the exact example she used was that it’s not like y = mx + b. Oh, if only she knew. The dreams of my childhood have come true; studying machine learning has shown me how I work. I tried to explain that instead of 2 parameters it’s 100 trillion parameters, and it’s the slightly different y = relu(w@x) + b a bunch of times; you have to put some nonlinearities in there because linear systems can only approximate a small class of functions. But this explanation was not heard at a bar (a small sketch of what I meant is below). She was so confident she was right, and like I don’t even know where to start. Reader of this blog, do you know?

AI is coming and we are so unbelievably unprepared. What is this garbage and this garbage. It’s nerd shit and political propaganda. The amount of power over nature that the Silicon Valley death cult is stumbling into is horrifying, and these high priests don’t have a basic grasp of people. No humanities education (perhaps the programs were gutted on purpose). Are we ready for the hypnodrones? How the fuck is targeted advertising legal and culturally okay? This will not stop until they take our free will from us.

There’s a fire that burns today

Better

Nukes don’t end humanity. Current path AI doesn’t end humanity. It just ends all the machines and hands the world over to the street people. Now I see how the dark ages happened.

If all the humans died today, all the machines would shortly follow. If all the machines died today, humanity would keep on going. Pay attention to this milestone. To date, machines are not robust, and evolution may be efficient at robust search. If it is, we get dark ages. If it’s not and we find a shortcut, God only knows.
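Here is the small sketch mentioned above: y = relu(w@x) + b applied a few times in numpy, with random weights purely to show the shape of the computation.

    # The "y = relu(w@x) + b a bunch of times" from above, spelled out. Weights are
    # random here, just to show the shape of the computation; a real model has on
    # the order of 100 trillion learned parameters instead of a few thousand.
    import numpy as np

    def relu(x):
        return np.maximum(x, 0)

    rng = np.random.default_rng(0)
    sizes = [16, 64, 64, 1]  # input -> two hidden layers -> scalar output
    params = [(rng.normal(size=(m, n)), rng.normal(size=m)) for n, m in zip(sizes, sizes[1:])]

    x = rng.normal(size=16)
    for w, b in params[:-1]:
        x = relu(w @ x) + b  # the nonlinearity is what lifts this beyond y = mx + b
    w, b = params[-1]
    y = w @ x + b  # final layer left linear
    print(y)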
This is not going to be a cakewalk like self driving cars. Most of comma’s competition is now out of business, taking billions and billions of dollars with it. Re: Tesla and FSD, we always expected Tesla to have the lead, but it’s not a winner take all market; it will look more like iOS vs Android. comma has been around for 10 years, is profitable, and is now growing rapidly. In self driving, most of the competition wasn’t even playing the right game.

This isn’t how it is for ML frameworks. tinygrad’s competition is playing the right game, is open source, and is run by some quite smart people. But this is my second startup, so hopefully taking a bit more risk is appropriate.

For comma to win, all it would take is people in 2016 being wrong about LIDAR, mapping, end to end, and hand coding, which hopefully we all agree now that they were. For tinygrad to win, it requires something much deeper to be wrong about software development in general.

As it stands now, tinygrad is 14556 lines. Line count is not a perfect proxy for complexity, but when you have differences of multiple orders of magnitude, it might mean something. I asked ChatGPT to estimate the lines of code in PyTorch, JAX, and MLIR.

JAX = 400k
MLIR = 950k
PyTorch = 3300k

They range from one to two orders of magnitude larger than tinygrad. And this isn’t even including all the libraries and drivers the other frameworks rely on: CUDA, cuBLAS, Triton, nccl, LLVM, etc. tinygrad includes every single piece of code needed to drive an AMD RDNA3 GPU except for LLVM, and we plan to remove LLVM in a year or two as well.

But so what? What does line count matter? One hypothesis is that tinygrad is only smaller because it’s not speed or feature competitive, and that if and when it becomes competitive, it will also be that many lines. But I just don’t think that’s true. tinygrad is already feature competitive, and for speed, I think the bitter lesson also applies to software.

When you look at the machine learning ecosystem, you realize it’s just the same problems over and over again. The problem of multi machine, multi GPU, multi SM, multi ALU, cross machine memory scheduling, DRAM scheduling, SRAM scheduling, register scheduling: it’s all the same underlying problem at different scales. And yet, in all the current ecosystems, there are completely different codebases and libraries at each scale. I don’t think this stands. I suspect there is a simple formulation of the problem underlying all of the scheduling. Of course, this problem will be in NP and hard to optimize, but I’m betting the bitter lesson wins here.

The goal of the tinygrad project is to abstract away everything except the absolute core problem in the cleanest way possible. This is why we need to replace everything. A model for the hardware is simple compared to a model for CUDA. If we succeed, tinygrad will not only be the fastest NN framework, but it will be under 25k lines all in, GPT-5 scale training job to MMIO on the PCIe bus!

Here are the steps to get there:

Expose the underlying search problem spanning several orders of magnitude. Due to the execution of neural networks not being data dependent, this problem is very amenable to search.

Make sure your formulation is simple and complete. Fully capture all dimensions of the search space. The optimization goal is simple: run faster.

Apply the state of the art in search. Burn compute. Use LLMs to guide. Use SAT solvers. Reinforcement learning. It doesn’t matter; there’s no way to cheat this goal. Just see if it runs faster.
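To make that last step concrete, here is a toy version of the loop (this is not tinygrad code; the kernel and the knob are stand-ins): enumerate candidate schedules for one kernel, here tile sizes for a blocked matmul in numpy, time each on the real machine, and keep the fastest. Beam search, RL, SAT solvers, or an LLM only change how candidates get proposed; the goal never changes.

    # Toy version of "just see if it runs faster": search over tile sizes for one
    # blocked matmul and keep whichever schedule benchmarks fastest on this machine.
    # Not tinygrad code; the knob (tile size) is a stand-in for a real search space.
    import time
    import numpy as np

    def blocked_matmul(A, B, tile):
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(0, n, tile):
            for j in range(0, n, tile):
                for k in range(0, n, tile):
                    C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
        return C

    def benchmark(fn, runs=3):
        start = time.perf_counter()
        for _ in range(runs):
            fn()
        return (time.perf_counter() - start) / runs

    A, B = np.random.rand(512, 512), np.random.rand(512, 512)
    timings = {tile: benchmark(lambda: blocked_matmul(A, B, tile)) for tile in (32, 64, 128, 256)}
    best = min(timings, key=timings.get)
    print(f"fastest schedule: tile={best} ({timings[best] * 1000:.1f} ms/run)")

The real thing has to do this across machines, GPUs, SMs, and registers at once, but the objective function is the same.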
If this works, not only do we win with tinygrad, but hopefully people begin to rethink software in general. Of course, it’s a big if; this isn’t like comma, where it was hard to lose. But if it wins…

The main thing to watch is development speed. Our bet has to be that tinygrad’s development speed is outpacing the others. We have the AMD contract to train LLaMA 405B as fast as NVIDIA due in a year; let’s see if we succeed.
“For example, if one believes that affirmative action is good for black people, does it make sense to demand affirmative action in hostile or dogmatic terms? Obviously it would be more productive to take a diplomatic and conciliatory approach that would make at least verbal and symbolic concessions to white people who think that affirmative action discriminates against them. But leftist activists do not take such an approach because it would not satisfy their emotional needs.” – Unabomber Manifesto

To date, the Trump administration has been an absolute tragedy. It has been the acting out of emotions. There are no adults in the room. I’m not saying there would have been adults in the room with the Kamala regime either, but I had some hopes for positive change with the Trump tech-bro alliance, and now they are gone. At least truths are being laid bare versus heads being buried in the sands of joy, but I think there was a much better way.

For example, I don’t support America funding the war in Ukraine. But the way Zelensky was treated is just dumb. See the Unabomber quote above; between this and the Munich speech, Mr. JD Vance, I hope your emotional needs are being met (at the expense of the good will of our allies).

It’s the economy, stupid

Regardless of anyone’s long-term objectives in the US, be they decoupling from China, bringing manufacturing to the US, or bringing lifestyle improvements to US citizens, I think it’s unquestionable that uncertainty about the future was needlessly increased. And unless the uncertainty was the goal, I can’t figure out why things were done the way they were. And if the uncertainty was the goal…uhhh…is our government captured by Russian or Chinese agents? Because that’s who benefits.

I don’t trust the news very much. I have no idea if the guy in the El Salvador prison had a fair trial, if the students being deported are criminals, or even if they are being deported at all. It’s really hard and time consuming to get to the truth about any of these things. However, when markets crash, that is obviously real. With the news, there’s usually no way to trade on it being real or fake; there’s nobody to take the other side. But with big public markets, there’s very deep liquidity if you think they are priced wrong.

In addition to the 10% the market is down, the dollar is also down 10%. Considering the market is priced in dollars, it’s closer to 20% down. And even worse, on top of all of this, prices are going up due to the tariffs. Was crashing the economy the goal? A side-effect of a greater plan? Because given how this was executed, a 3-week old LLM could have told you shit was gonna crash. And it’s not going to bring manufacturing back. I have done manufacturing in America for years, and anyone with any experience could have told you that this wouldn’t work; manufacturing requires long term investment, and long term investment requires stability. What was the real goal here?

Elon, you need to reconcile with your daughter

Andrew Callaghan did a good piece on Elon’s radicalization. I get it, we have all been there. For me it was Gamergate (which still has a terrible Wikipedia page that doesn’t explain what it was). But this doesn’t have to be you forever. You are the closest thing to an adult in any room in America. When you compare America to China, it’s really more like comparing Elon to China. ULA is a little joke compared to CNSA. And look into what percent of US car exports are Teslas. The man is singlehandedly beating the rest of the US combined.
If you want any hope of standing against China, your political coalition better include him. Elon has been pretty politically quiet lately. I’m sure he knew exactly what would happen with the tariffs, but he couldn’t stop them. I got fooled too, thought it could be different this time. But it’s no different from 2017. (btw, we are finally beating climate change thanks to cheap solar panels from China)

I know the idea of PR is against a lot of what you believe in, but you need to put your head down and put together a large scale PR campaign, distance yourself from this train wreck, denounce stupid fake right wing conspiracy theories, reconcile with your daughter (from a reader of sci-fi and The Culture, is the trans thing that hard to understand?), resolve your stupid beef with OpenAI (we are all disappointed, but you don’t have a great track record for open source either), and start building a new political party:

Pro large scale legal immigration, not a single illegal border crosser.
Pro choice (within reason), and also pro gun (within reason).
Inclusive and diverse, with an unwavering focus on merit.
Anti crime, with an understanding that victimless crime is not crime.
Expose higher education and the medical system to the free market (watch how fast prices fall).
Free market and trade, but not an unregulated market. Markets require regulation to be free.

It’s probably the only shot we have against China. The current Republicans and Democrats are just far too stupid; the Chinese are watching this tariff drama and laughing their asses off. Their plans are measured in centuries.

America, do you want to be a protectionist backwater? If so, and all the thymos is gone, then there’s no place for me there. If this is really the way things are going, the only thing for anyone to do is leave. We’ll see how it shapes up in the next few years. But if the racists or the other racists are still running the show, we really are just cooked. Enjoy your handouts to black people and your handouts to white people in a poverty stricken shithole.
You know about Critical Race Theory, right? It says that if there’s an imbalance in, say, income between races, it must be due to discrimination. This is what wokism seems to be, and it’s moronic and false.

The right wing has invented something equally stupid. Introducing Critical Trade Theory, stolen from this tweet. If there’s an imbalance in trade between countries, it must be due to unfair practices. (not due to the obvious, like one country is 10x richer than the other)

There’s really only one way the trade deficits will go away, and that’s if trade goes to zero (or maybe if all these countries become richer than America). Same thing with the race deficits: no amount of “leg up” bullshit will change them.

Why are all the politicians in America anti-growth anti-reality idiots who want to drive us into the poorhouse? The way this tariff shit is being done is another stupid form of anti-merit benefits to chosen groups of people, with a whole lot of grift to go along with it. Makes me just not want to play.
More in programming
Every 6 months or so, I decide to leave my cave and check out what the cool kids are doing with AI. Apparently the latest trend is to use fancy command line tools to write code using LLMs. This is a very nice change, since it suddenly makes AI compatible with my allergy to getting out of the terminal.

The most popular of these tools seems to be Claude Code. It promises to be able to build in total autonomy, being able to search code, write code, run tests, lint, and commit the changes. While this sounds great on paper, I’m not keen on getting locked into vendor tools from an unprofitable company. At some point, they will either need to raise their prices, enshittify their product, or most likely do both. So I went looking for the free and open source alternatives.

Picking a model

There’s a large number of open source large language models on the market, with new ones getting released all the time. However, they are not all ready to be used locally for coding tasks, so I had to try a bunch of them before settling on one.

deepseek-r1:8b

Deepseek is the most popular open source model right now. It was created by the eponymous Chinese company. It made the news by beating numerous benchmarks while being trained on a budget that is probably lower than the compensation of some OpenAI workers. The 8b variant only weighs 5.2 GB and runs decently on limited hardware, like my three-year-old Mac.

This model is famous for forgetting about world events from 1989, but it also seems to have a few issues when faced with concrete coding tasks. It is a reasoning model, meaning it “thinks” before acting, which should lead to improved accuracy. In practice, it regularly gets stuck indefinitely searching for where it should start, jumping from one problem to the other in a loop. This can happen even on simple problems, and made it unusable for me.

mistral:7b

Mistral is the French alternative to American and Chinese models. I have already talked about their 7b model on this blog. It is worth noting that they have kept updating their models, and it should now be much more accurate than two years ago. Mistral is not a reasoning model, so it will jump straight to answering. This is very good if you’re working with tasks where speed and low compute use are a priority. Sadly, the accuracy doesn’t seem good enough for coding. Even on simple tasks, it will hallucinate functions or randomly delete parts of the code I didn’t want to touch.

qwen3:8b

Another model from China, qwen3 was created by the folks at Alibaba. It also claims impressive benchmark results, and can work as both a reasoning and a non-thinking model. It was made with modern AI tooling in mind, supporting MCPs and a framework for agentic development. This model actually seems to work as expected, providing somewhat accurate code output while not hanging in the reasoning part. Since it runs decently on my local setup, I decided to stick with that model for now.

Setting up a local API with Ollama

Ollama is now the default way to download and run local LLMs. It can be installed simply by downloading it from their website. Once installed, it works like Docker for models, giving us access to commands like pull, run, or rm. Ollama will expose an API on localhost which can be used by other programs. For example, you can use it from your Python programs through ollama-python.
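As a quick illustration, here is a minimal sketch of talking to that API through ollama-python (assuming the server is running locally and qwen3:8b has already been pulled):

    # Minimal sketch: query the local Ollama server from Python via the ollama package.
    # Assumes `ollama pull qwen3:8b` was run and the server is listening on localhost.
    import ollama

    response = ollama.chat(
        model="qwen3:8b",
        messages=[{"role": "user", "content": "Explain what a context window is in one sentence."}],
    )
    print(response["message"]["content"])  # the model's reply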
Pair programming with aider

The next piece of software I installed is aider. I assume it’s pronounced like the French word, but I could not confirm that. Aider describes itself as a “pair programming” application. Its main job is to pass context to the model, let it write the output to files, run linters, and commit the changes.

Getting started

It can be installed using the official Python package, or via Homebrew if you use a Mac. Once it is installed, just navigate to your code repository and launch it:

    export OLLAMA_API_BASE=http://127.0.0.1:11434
    aider --model ollama_chat/qwen3:8b

The CLI should automatically create some configuration files and add them to the repo’s .gitignore.

Usage

Aider isn’t meant to be left alone in complete autonomy. You’ll have to guide the AI through the process of making changes to your repository. To start, use the /add command to add files you want to focus on. Those files will be passed entirely to the model’s context, and the model will be able to write to them. You can then ask questions using the /ask command.

If you want to generate code, a good strategy can be to start by requesting a plan of action. When you want it to actually write to the files, you can prompt it using the /code command, which is also the default mode. There’s no absolute guarantee that it will follow a plan if you agreed on one previously, but it is still a good idea to have one. The /architect command seems to automatically ask for a plan, accept it, and write the code. The specificity of this command is that it lets you use different models to plan and write the changes.

Refactoring

I tried coding with aider in a few situations to see how it performs in practice. First, I tried making it do a simple refactoring on Itako, which is a project of average complexity. When pointed to the exact part of the code where the issue happened, and told explicitly what to do, the model managed to change the target struct according to the instructions. It did unexpectedly change a function that was outside the scope of what I asked, but this was easy to spot.

On paper, this looks like a success. In practice, the time spent crafting a prompt, waiting for the AI to run, and fixing the small issue that came up immensely exceeds the 10 minutes it would have taken me to edit the file myself. I don’t think coding that way would lead me to a massive performance improvement for now.

Greenfield project

For a second scenario, I wanted to see how it would perform on a brand-new project. I quickly set up a Python virtual environment, and asked aider to work with me on building a simple project. We would be opening a file containing Japanese text, parsing it with fugashi, and counting the words. To my surprise, this was a disaster. All I got was a bunch of hallucination-riddled Python that wouldn’t run under any circumstances. It may be that the lack of context actually made it harder for the model to generate code.

Troubleshooting

Finally, I went back to Itako, and decided to check how it would perform on common troubleshooting tasks. I introduced a few bugs into my code and gathered some error messages. I then proceeded to simply give aider the files mentioned by the error message and use /ask to have it explain the errors to me, without requiring it to implement the code. This part did work very well. If I compare it with Googling unknown error messages, I think this can cut the time spent on the issue by half. This is not just because Google is getting worse every day: the model having access to the actual code gives it a massive advantage.
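For reference, this is roughly the script I was hoping to get out of the greenfield experiment, as a minimal sketch using fugashi (the input file name is just an example):

    # Minimal sketch of the greenfield task: read a file of Japanese text, tokenize it
    # with fugashi, and count word frequencies. The file name is only an example.
    from collections import Counter
    from fugashi import Tagger

    tagger = Tagger()

    with open("input.txt", encoding="utf-8") as f:
        text = f.read()

    counts = Counter(word.surface for word in tagger(text))

    for surface, count in counts.most_common(10):
        print(surface, count)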
I do think this setup is something I can use instead of the occasional frustration of scrolling through StackOverflow threads when something unexpected breaks.

What about the Qwen CLI?

With everyone jumping on the trend of CLI tools for LLMs, the Qwen team released its own Qwen Code. It can be installed using npm, and connects to a local model if configured like this:

    export OPENAI_API_KEY="ollama"
    export OPENAI_BASE_URL="http://localhost:11434/v1/"
    export OPENAI_MODEL="qwen3:8b"

Compared to aider, it aims at being fully autonomous. For example, it will search your repository using grep. However, I didn’t manage to get it to successfully write any code. The tool seems optimized for larger, online models, with context sizes of up to 1M tokens. Our local qwen3 only has a 40k-token context window, which can get overwhelmed very quickly when browsing entire code repositories. Even when I didn’t run out of context, the tool mysteriously failed when trying to write files. It insists it can only write to absolute paths, which the model doesn’t seem willing to provide. I did not investigate the issue further.
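As an aside, Ollama’s context window is configurable per model, so one thing I haven’t tried would be building a longer-context variant with a Modelfile (the name and the num_ctx value below are just examples, and the model and your RAM still have to support them):

    # Modelfile (hypothetical): a longer-context variant of qwen3:8b
    FROM qwen3:8b
    PARAMETER num_ctx 65536

It would then be built and selected like this:

    ollama create qwen3-longctx -f Modelfile
    export OPENAI_MODEL="qwen3-longctx"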
Ideals are supposed to be unattainable for the great many. If everyone could be the smartest, strongest, prettiest, or best, there would be no need for ideals — we'd all just be perfect. But we're not, so ideals exist to show us the peak of humanity and to point our ambition and appreciation toward it.

This is what I always hated about the 90s. It was a decade that made it cool to be a loser. It was the decade of MTV's Beavis and Butt-Head. It was the age of grunge. I'm generationally obliged to like Nirvana, but what a perfectly depressive, suicidal soundtrack to loser culture.

Naomi Wolf's The Beauty Myth was published in 1990. It took a critical theory-like lens to beauty ideals, and found it all so awfully oppressive. Because, actually, seeing beautiful, slim people in advertising or media is bad. Because we don't all look like that! And who's even to say what "beauty" is, anyway? It's all just socially constructed!

The final stage of that dead-end argument appeared as an ad here in Copenhagen thirty years later, during the 2020 insanity. I passed it every day biking the boys to school for weeks, next to other slim, fit Danes also riding their bikes, none of whom resembled the grotesque display of obesity from Calvin Klein towering over them on their commute.

While this campaign was laughably out of place in Copenhagen, it's possible that it brought recognition and representation in some parts of America. But a celebration of ideals it was not. That's the problem with the whole "representation" narrative. It proposes we're all better off if all we see is a mirror of ourselves, however obese, lazy, ignorant, or incompetent, because at least it won't be "unrealistic".

Screw that. The last thing we need is a patronizing message that however little you try, you're perfect just the way you are. No, the beauty of ideals is that they ask more of us. Ask us to pursue knowledge, fitness, and competence by taking inspiration from the best human specimens.

Thankfully, no amount of post-modern deconstruction or academic theory babble seems capable of suppressing the intrinsic human yearning for excellence forever. The ideals are finally starting to emerge again.
Some lessons I’ve learned from experience.

1. Install Stuff Indiscriminately From npm

Become totally dependent on others, that’s why they call them “dependencies” after all! Lean in to it. Once your dependencies break — and they will, time breaks all things — then you can spend lots of time and energy (which was your goal from the beginning) ripping out those dependencies and replacing them with new dependencies that will break later. Why rip them out? Because you can’t fix them. You don’t even know how they work, that’s why you introduced them in the first place! Repeat ad nauseam (that is, until you decide you don’t want to make websites that require lots of your time and energy, but that’s not your goal if you’re reading this article).

2. Pick a Framework Before You Know You Need One

Once you hitch your wagon to a framework (a dependency, see above), then any updates to your site via the framework require that you first understand what changed in the framework. More of your time and energy expended, mission accomplished!

3. Always, Always Require a Compilation Step

Put a critical dependency between working on your website and using it in the browser. You know, some mechanism that is required to function before you can even see your website — like a compilation step or build process. The bigger and more complex, the better. This is a great way to spend lots of time and energy working on your website. (Well, technically it’s not really working on your website. It’s working on the thing that spits out your website. So you’ll excuse me for recommending something that requires your time and energy that isn’t your website — since that’s not the stated goal — but trust me, this apparent diversion will directly affect the overall amount of time and energy you spend making a website. So, ultimately, it will still help you reach our stated goal.)

Requiring that the code you write be transpiled, compiled, parsed, and evaluated before it can be used in your website is a great way to spend extra time and energy making a website (as opposed to, say, writing code as it will be run, which would save you time and energy and is not our goal here).

More?

Do you have more advice on building a website that will require a lot of your time and energy? Share your recommendations with others, in case they’re looking for such advice.
Am I a good programmer? The short answer is: I don’t know what that means. I have been programming for 52 years now, having started in a public high school class in 1973, which is pretty rare because few high schools offered such an opportunity back then. I
The world is waking to the fact that talk therapy is neither the only nor the best way to cure a garden-variety petite depression. Something many people will encounter at some point in their lives. Studies have shown that exercise, for example, is a more effective treatment than talk therapy (and pharmaceuticals!) when dealing with such episodes. But I'm just as interested in the role building competence can have in warding off the demons. And partly because of this meme:

I've talked about it before, but I keep coming back to the fact that it's exactly backwards. That signing up for an educational quest into Linux, history, or motorcycle repair actually is an incredibly effective alternative to therapy! At least for men who'd prefer to feel useful over being listened to, which, in my experience, is most of them. This is why I find it so misguided when people who undertake those quests sell their journey short with self-effacing jibes about how much of an unattractive nerd it makes them to care about their hobby.

Mihaly Csikszentmihalyi detailed back in 1990 how peak human happiness arrives exactly in these moments of flow, when your competence is stretched by a difficult-but-doable challenge. Don't tell me those endorphins don't also help counter the darkness.

But it's just as much the fact that these pursuits of competence usually offer a great opportunity for community that seals the deal. I've found time and again that people are starved for the kind of topic-based connections that, say, learning about Linux offers in spades. You're not just learning, you're learning with others. That is a time-tested antidote to depression: forming and cultivating meaningful human connections.

Yes, doing so over the internet isn't as powerful as doing it in person, but it's still powerful. It still offers community, involvement, and plenty of invitation to carry a meaningful burden.

Open source nails this trifecta of motivations to a T. There are endless paths of discovery and mastery available. There are tons of fellow travelers with whom to connect and collaborate. And you'll find an unlimited number of meaningful burdens in maintainerships open for the taking.

So next time you see that meme, you should cheer that the talk therapy table is empty. Leave it available for the severe, pathological cases that exercise and the pursuit of competence can't cure. Most people just don't need therapy; they need purpose, they need competence, they need exercise, and they need community.