You can just change things! That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome? Well, yes! We've made a micro fork of Chromium for Omarchy (our new 37signals Linux distribution). Just to add one feature needed for live theming. And now it's released as a package anyone can install on any flavor of Arch using the AUR (Arch User Repository). We got it all done in just four days. From idea, to solicitation, to successful patch, to release, to incorporation. And now it'll be part of the next release of Omarchy. There are no speed limits in open source. Nobody to ask for permission. You have the code, so you can make the change. All you need is skill and will (and maybe, if you need someone else to do it for you, a $5,000 incentive 😄).
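For anyone who wants to try it, installing a package from the AUR is a one-liner with an AUR helper. A minimal sketch, assuming the yay helper and a hypothetical package name (check the AUR for the actual Omarchy Chromium package):

```sh
# Sketch: install an AUR package with the yay helper.
# "omarchy-chromium" is a placeholder name; search the AUR for the real package.
yay -Ss omarchy            # search the AUR for Omarchy-related packages
yay -S omarchy-chromium    # build and install the patched Chromium
```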
2 days ago


More from David Heinemeier Hansson

What do you do with a chance?

One day, I got a chance. It just seemed to show up. It acted like it knew me, as if it wanted something. This is how Kobi Yamada's book What do you do with a chance? starts. I've been reading that beautiful book to the boys at bedtime since it came out in 2018.

It continues: It fluttered around me. It brushed up against me. It circled me as if it wanted me to grab it. What a mesmerizing mental image of a chance, fluttering about.

What do you do with a chance? is a great book exactly because it's not just for the boys, but for me too. A poetic reminder of what being open to chance looks like, and what to do when it shows up.

Right now, Omarchy feels like that chance. Like Linux fluttered into my hands and said "let's take a trip to the moon". I joked on X that perhaps it was the new creatine routine I picked up from Pieter Levels, but I only started that a few days ago, so I really do think it's actually Linux!

This exhilaration of The Chance reminds me of the 1986 cult classic Highlander. There's a fantastic montage in the middle where Sean Connery is teaching Christopher Lambert to fight for the prize of immortality, and in it, he talks about The Quickening. Feeling the stag, feeling the opportunity. That's the feeling I have when I wake up in the morning at the moment: The Quickening. There's something so exciting here, so energizing, that I simply must get to the keyboard and chase wherever it flutters to.

5 days ago 7 votes
All-in on Omarchy at 37signals

We're going all-in on Omarchy at 37signals. Over the next three years, as the regular churn of hardware invites it, we're switching everyone on our Ops and Ruby programming teams to our own Arch-derived Linux distribution (and of course sharing all the improvements we make along the way with everyone else on Omarchy!).

It's funny how nobody bats an eye when the company mandate is to use Macs or Windows, but when the prescription is Linux, it's suddenly surprising. It really shouldn't be. Your ability to control your own destiny with Linux is far superior to what you'll get from a closed-source, commercial operating system. Of course it is! The code is literally all there! True, you might face more challenges, and there won't be a vendor to call (unless you hop into the Enterprise Linux camp, which doesn't appeal to me either). But I've never given a damn about that. I started using Ruby to build Basecamp when we could barely fill a room in America with professional Ruby programmers. This is what we do here!

This also means giving up on MacBooks and choosing Framework laptops as the new standard-issue equipment. Along with desktop machines from both Framework and Beelink. PC hardware has gotten incredibly good over the last few years, as AMD in particular has managed to extract many of the same processor improvements from TSMC as Apple did so well with the M series. At least in terms of performance.

Again, true, there is still a gap on the efficiency front. Nobody can currently beat Apple on performance-per-watt (but the gap is fast closing!). So battery life on Linux using Framework is currently a bit less. I get about 6 hours of mixed use from my Framework 13, so whenever I suspect that might be a problem, I bring a small 20,000 mAh Anker battery in the bag, and now I have double the capacity. A small concession on a rare occasion, but nothing like the performance AND battery deficit we willingly endured for decades on the Mac before Apple switched to their own chips. Because we wanted to run OSX. It was worth sacrificing a few other concerns for. Just like Linux is today.

On the flip side, we'll get a massive boost in productivity from being able to run our Ruby on Rails test suites locally much faster. For our HEY application, even the fastest Mac, an M4 Max, is almost twice as slow as a Framework Desktop machine running Linux, which can do Docker natively.

It's an exciting new adventure for us. Omarchy is already far-and-away my favorite computing environment. Right up there in joy and wonder with the old Amiga days or early OSX. It's been a blast learning that so many other early adopters have found the same feeling. Very reminiscent of the excitement in the early Ruby days. Knowing you'd found something super special that wasn't yet widely distributed (but poised to be).

I spoke about all of this with Kimberly on a bonus episode of The REWORK Podcast. Give it a listen if you're curious about the why, the how, and the inevitable objections.
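To make the local test-suite point concrete, here is a minimal sketch of the kind of setup described above: data stores in Docker, Ruby and the app running natively. The service names and commands are generic Rails conventions, not HEY's actual configuration:

```sh
# Sketch: run the data stores in containers (native on Linux), tests on the host.
# Assumes a docker-compose.yml defining mysql, redis, and elasticsearch services.
docker compose up -d mysql redis elasticsearch   # no VM layer on Linux
bin/rails db:prepare                             # create and load the schema
bin/rails test                                   # run the suite on bare metal
```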

a week ago 9 votes
It's beginning to feel like the 80s in America again

Have I told you how much I've come to dislike the 90s? The depressive music, the ironic distance to everything, the deconstructive narratives, the moral relativism, and the total cultural takeover of postmodern ideology. Oh, I did that just last week? Well, allow me another go. But rather than railing against the 90s, let me tell you about the 80s.

They were amazing. America was firing on all cylinders, Reagan had brought the morning back, and the Soviet Union provided a clear black-and-white adversarial image. But it was the popular culture of the era that still fills me with hiraeth. It was the time of earnest storytelling. When Rocky could just train real hard to avenge the death of his friend by punching Dolph Lundgren for 10 minutes straight in a montage of blood, sweat, and tears. After which even the Russians couldn't help themselves but cheer for him. Not a shred of irony. Just pure "if you work real hard, you can do the right thing" energy.

Or what about Weird Science from 1985? Two nerds bring Barbie to life, and she teaches them to talk to real girls. It was goofy, it was kitsch, but it was also earnest and honest. Nerdy teenage boys have a hard time talking to girls! But they can learn how, and if they do, it'll all work out. That movie was actually the earliest memory I have of wanting to move to America. I don't remember exactly when I saw it, but I remember at the end thinking, "I have to go there." Such was the magnetic power of that American 80s earnest optimism!

Or what about the music? Do you have any idea what the 1986 music video for Sabrina's BOYS could do for a young Danish boy who'd only just discovered the appeal of girls? Wonders, is what. Wonders. And again, it depicted this goofy but earnest energy. Boys and girls like each other! They have fun with the chase. The genders are not doomed to opposing trenches on the Eastern Front for all eternity.

So back to today. It feels like we're finally emerging from this constant 90s Seattle drizzle to sunny 80s LA vibes in America. The constant pessimism, the cancellation militias, and the walking-on-eggshells atmosphere have given way to something far brighter, bolder, and, yes, better. An optimism, a levity, a confidence.

First, America has Sweeney fever. The Sydney Sweeney Has Great Jeans campaign has been dominating the discourse for weeks, but rather than back down with a groveling apology, American Eagle has doubled down with a statement and now the Las Vegas Sphere! And Sweeney herself has also kept her foot on the gas. Now there's a new Baskin-Robbins campaign that's equal parts Weird Science (two nerds!) and "Boys Boys Boys" music video. No self-referential winks to how this might achksually be ProBLeMAtIC. Just a confident IT GIRL grabbing the attention of a nation.

The male role model too has been going through a rehabilitation lately. I absolutely loved F1: The Movie. It's classic Jerry Bruckheimer Top Gun pathos: Real men doing dangerous stuff for the chase, the glory, and the next generation, with a love story that involves a competent yet feminine female partner. Again, it's an entirely sincere story with great morals. The generational gap is real, but we can learn from each other. Young hotshots have speed but lack experience. Old-timers can be grumpy but their wisdom was hard-won. Even the diversity in that movie fails to feel forced! It's a woman leading the engineering in a male-dominated world, and she can't quite hack it at first. Her designs are too by-the-book.
But then Brad Pitt sells the gamble of a dirty-air fighter design, she steps up her game, and wins the crown by her wits and talent. Tell me that isn't a wholesome story!

David Foster Wallace nailed all of this in his critique of postmodernism. He called out the irony, cynicism, and irreverence that had fully permeated the culture from the 90s forward. And frankly, he was bored with it! It's an amazing interview to watch today. Wallace had the diagnosis nailed back in 1997. But it's taken us until these mid-2020s to fully embrace its conclusions.

We need earnest values and virtues. We need sincere stories that are not afraid of grand narratives. That don't constantly have to deconstruct "but what is good really?", and dare embrace a solid defense of "some ways of being really are better." We also need to have fun! We need to throw away these shit-tinted glasses that see everything in the world as a problematic example of some injustice or oppression. We need a bit of gratitude for technology and progress!

That's what the Sweeney campaign is doing. That's what Brad Pitt is racing for in F1: The Movie. That's what I'm here for! Because as much as I love the croissants of the Old World, I find myself craving that uniquely American brand of optimism, enthusiasm, and determination more whenever I've been back in Europe for too long.

Give me some Weird Science! Give me some Sabrina at the pool! Give me some American 80s vibes!

a week ago 10 votes
The Framework Desktop is a beast

I've been running the Framework Desktop for a few months here in Copenhagen now. It's an incredible machine. It's completely quiet, even under heavy, stress-all-cores load. It's tiny too, at just 4.5L of volume, especially compared to my old beautiful but bulky North tower running the 7950X — yet it's faster! And finally, it's simply funky, quirky, and fun!

In some ways, the Framework Desktop is a curious machine. Desktop PCs are already very user-repairable! So why is Framework even bringing their talents to this domain? In the laptop realm, they're basically alone with that concept, but in the desktop space, it's rather crowded already. Yet it somehow still makes sense. Partly because Framework has gone with the AMD Ryzen AI Max 395+, which is technically a laptop CPU. You can find it in the ASUS ROG Flow Z13 and the HP ZBook Ultra. Which means it'll fit in a tiny footprint, and Framework apparently just wanted to see what they could do in that form factor.

They clearly had fun with it. Look at mine: there are 21 little tiles on the front that you can get in a bunch of different colors or with logos from Framework. Or you can 3D print your own! It's a welcome change in aesthetic from the brushed aluminum or gamer-focused RGBs approach that most of the competition is taking.

But let's cut to the benchmarks. That's really why you'd buy a machine like the Framework Desktop. There are significantly cheaper mini PCs available from Beelink and others, but so far, Framework has the only AMD 395+ unit on sale that's completely silent (the GMKtec very much is not, nor is the Flow Z13). And for me, that's just a dealbreaker. I can't listen to roaring fans anymore.

The key benchmark for me is our HEY test suite. That's the only type of multi-core workload I really sit around waiting on these days, and the Framework Desktop absolutely crushes it. It's almost twice as fast as the Beelink SER8 and still a solid third faster than the Beelink SER9 too. Of course, it's also a lot more expensive, but you're clearly getting some multi-core bang for your buck here! The difference is even more dramatic against the Macs. It's a solid 40% faster than the M4 Max and 50% faster than the M4 Pro!

Now some will say "that's just because Docker is faster on Linux," and they're not entirely wrong. Docker runs natively on Linux, so for this test, where the MySQL/Redis/ElasticSearch data stores run in Docker while Ruby and the app code run natively, that's part of the answer. Last I checked, it was about 25% of the difference. But so what? Docker is an integral part of the workflow for tons of developers. We use it to be able to run different versions of MySQL, Redis, and ElasticSearch for different applications on the same machine at the same time. You can't really do that without Docker. So this is what real-world benchmarks reveal.

It's not just about having a Docker advantage, though. The AMD 395+ is also incredibly potent in raw CPU performance. Those 16 Zen 5 cores run at 5.1GHz, and in Geekbench 6 multi-core they basically match the M4 Max! And they're a good chunk faster than the M4 Pro (as well as other AMDs and Intel's 14900K!). No wonder it's crazy quick with a full-core stress test like running 30,000 assertions for our HEY test suite.

To be fair, the M4s are faster in single-core performance. Apple holds the crown there. It's about 20%. And you'll see that in benchmarks like Speedometer, which mostly measures JavaScript single-core performance.
The Framework Desktop puts out 670 on Speedometer 2.1 vs 744 for the M4 Pro. On Speedometer 3.1, it's an even bigger difference: 35 vs 50. But I've found that all these computers feel fast enough in single-core performance these days. I can't actually feel the difference browsing on a machine that does 670 vs 744 on Speedometer 2.1. Hell, I can barely feel the difference between the SER8, which does 506, and the M4 Pro! The only time I actually feel like I'm waiting on anything is in multi-core workloads like the HEY test suite, and here the AMD 395+ is very near the fastest you can get for a consumer desktop machine today at any price.

It gets even better when you bring price into the equation, though. The Framework Desktop with 64GB RAM + 2TB NVMe is $1,876. To get a Mac Studio with similar specs — M4 Max, 64GB RAM, 2TB NVMe — you'll literally spend nearly twice as much at $3,299! If you go for 128GB RAM, you'll spend $2,276 on the Framework, but $4,099 on the Mac. And it'll still be way slower for development work using Docker! The Framework Desktop is simply a great deal.

Speaking of 64GB vs 128GB, I've been running the 64GB version, and I almost never get anywhere close to the limits. I think the highest I've seen in regular use is about 20GB of RAM in action. Linux is really efficient. Especially when you're using a window manager like Hyprland, as we do in Omarchy.

The only reason you really want to go for the full 128GB RAM is to run local LLM models. The AMD 395+ uses unified memory, like Apple, so nearly all of it is addressable by the GPU. That means you can run monster models, like the new 120b gpt-oss from OpenAI. Framework has a video showing them pushing out 40 tokens/second doing just that. That seems about in range of the numbers I've seen from the M4 Max, which also seem to be in the 40-50 tokens/second range, but I'll defer to folks who benchmark local LLMs for the exact details on that.

I tried running the new gpt-oss-20b on my 64GB machine, though, and I wasn't exactly blown away by the accuracy. In fact, I'd say it was pretty bad. I mean, it's exceptionally cool that it's doable, but very far off the frontier models we have access to as SaaS. So personally, this isn't yet something I actually use all that much in day-to-day development. I want the best models running at full speed, and right now that means SaaS.

So if you just want the best small computer that runs Linux superbly well out of the box, you should buy the Framework Desktop. It's completely quiet, fantastically fast, and super fun to look at. But I think it's also fair to mention that you can get something like a Beelink SER9 for half the price! Yes, it's also only 2/3 the performance in multi-core, but it's just as fast in single-core. Most developers could totally get away with the SER9, and barely notice what they were missing. But there are just as many people for whom the extra $1,000 is worth the price to run the test suite 40 seconds quicker! You know who you are.

Oh, before I close, I also need to mention that this thing is a gaming powerhouse. It basically punches about as hard as an RTX 4060! With an iGPU! That's kinda crazy. Totally new territory on the PC side for integrated graphics. ETA Prime has a video showing the same chip in the GMKtec running premier games at 1440p High settings at great frame rates. You can run most games under Linux these days too (thanks Valve and Steam Deck!), but if you need to dual boot with Windows, the dual NVMe slots in the Framework Desktop come in very handy.
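On the local-LLM point above: the post doesn't say which runtime was used, but one common way to try gpt-oss on a Linux box like this is Ollama. A minimal sketch, assuming the model is published under the gpt-oss:20b tag:

```sh
# Sketch: trying gpt-oss-20b locally via Ollama (runtime and model tag are
# assumptions, not what the article used).
ollama pull gpt-oss:20b     # the 20b model fits easily in 64GB of unified memory
ollama run gpt-oss:20b "Summarize the trade-offs of running LLMs locally."
```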
Framework did good with this one. AMD really blew it out of the water with the 395+. We're spoiled to have such incredible hardware available for Linux at such appealing discounts over similar stuff from Cupertino. What a great time to love open source software and tinker-friendly hardware!

a week ago 14 votes

More in programming

The future of large files in Git is Git

If Git had a nemesis, it'd be large files. Large files bloat Git's storage, slow down git clone, and wreak havoc on Git forges. In 2015, GitHub released Git LFS—a Git extension that hacked around problems with large files. But Git LFS added new complications and storage costs. Meanwhile, the Git project has been quietly working on large files. And while LFS ain't dead yet, the latest Git release shows the path towards a future where LFS is, finally, obsolete.

What you can do today: replace Git LFS with Git partial clone

Git LFS works by storing large files outside your repo. When you clone a project via LFS, you get the repo's history and small files, but skip large files. Instead, Git LFS downloads only the large files you need for your working copy. In 2017, the Git project introduced partial clones that provide the same benefits as Git LFS:

Partial clone allows us to avoid downloading [large binary assets] in advance during clone and fetch operations and thereby reduce download times and disk usage. – Partial Clone Design Notes, git-scm.com

Git's partial clone and LFS both make for:

Small checkouts – On clone, you get the latest copy of big files instead of every copy.
Fast clones – Because you avoid downloading large files, each clone is fast.
Quick setup – Unlike shallow clones, you get the entire history of the project—you can get to work right away.

What is a partial clone?

A Git partial clone is a clone with a --filter. For example, to avoid downloading files bigger than 100KB, you'd use:

git clone --filter=blob:limit=100k <repo>

Later, Git will lazily download any files over 100KB you need for your checkout. By default, if I git clone a repo with many revisions of a noisome 25 MB PNG file, then cloning is slow and the checkout is obnoxiously large:

$ time git clone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (153/153), 1.19 GiB
real 3m49.052s

Almost four minutes to check out a single 25MB file!

$ du --max-depth=0 --human-readable noise-over-git/.
1.3G noise-over-git/.
$ ^ 🤬

And 50 revisions of that single 25MB file eat 1.3GB of space. But a partial clone side-steps these problems:

$ git config --global alias.pclone 'clone --filter=blob:limit=100k'
$ time git pclone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (1/1), 24.03 MiB
real 0m6.132s

$ du --max-depth=0 --human-readable noise-over-git/.
49M noise-over-git/
$ ^ 😻 (the same size as a git lfs checkout)

My filter made cloning 97% faster (3m 49s → 6s), and it reduced my checkout size by 96% (1.3GB → 49M)! But there are still some caveats here. If you run a command that needs data you filtered out, Git will need to make a trip to the server to get it. So, commands like git diff, git blame, and git checkout will require a trip to your Git host to run. But, for large files, this is the same behavior as Git LFS. Plus, I can't remember the last time I ran git blame on a PNG 🙃.

Why go to the trouble? What's wrong with Git LFS?

Git LFS foists Git's problems with large files onto users. And the problems are significant:

🖕 High vendor lock-in – When GitHub wrote Git LFS, the other large file systems—Git Fat, Git Annex, and Git Media—were agnostic about the server-side. But GitHub locked users to their proprietary server implementation and charged folks to use it.1
💸 Costly – GitHub won because it let users host repositories for free. But Git LFS started as a paid product. Nowadays, there's a free tier, but you're dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage. In contrast, storing 50GB on Amazon's S3 standard storage is $13/year.
😰 Hard to undo – Once you've moved to Git LFS, it's impossible to undo the move without rewriting history.
🌀 Ongoing set-up costs – All your collaborators need to install Git LFS. Without Git LFS installed, your collaborators will get confusing, metadata-filled text files instead of the large files they expect.

The future: Git large object promisors

Large files create problems for Git forges, too. GitHub and GitLab put limits on file size2 because big files cost more money to host. Git LFS keeps server-side costs low by offloading large files to CDNs. But the Git project has a new solution. Earlier this year, Git merged a new feature: large object promisors. Large object promisors aim to provide the same server-side benefits as LFS, minus the hassle to users.

This effort aims to especially improve things on the server side, and especially for large blobs that are already compressed in a binary format. This effort aims to provide an alternative to Git LFS – Large Object Promisors, git-scm.com

What is a large object promisor?

Large object promisors are special Git remotes that only house large files. In the bright, shiny future, large object promisors will work like this:

You push a large file to your Git host.
In the background, your Git host offloads that large file to a large object promisor.
When you clone, the Git host tells your Git client about the promisor.
Your client will clone from the Git host, and automagically nab large files from the promisor remote.

But we're still a ways off from that bright, shiny future. Git large object promisors are still a work in progress. Pieces of large object promisors merged to Git in March of 2025. But there's more to do and open questions yet to answer. And so, for today, you're stuck with Git LFS for giant files. But once large object promisors see broad adoption, maybe GitHub will let you push files bigger than 100MB.

The future of large files in Git is Git. The Git project is thinking hard about large files, so you don't have to. Today, we're stuck with Git LFS. But soon, the only obstacle for large files in Git will be your half-remembered, ominous hunch that it's a bad idea to stow your MP3 library in Git.

Edited by Refactoring English

1. Later, other Git forges made their own LFS servers. Today, you can push to multiple Git forges or use an LFS transfer agent, but all this makes setup harder for contributors. You're pretty much locked-in unless you put in extra effort to get unlocked.↩︎
2. File size limits: 100MB for GitHub, 100MB for GitLab.com↩︎

16 hours ago 5 votes
Just a Little More Context Bro, I Promise, and It’ll Fix Everything

Conrad Irwin has an article on the Zed blog, "Why LLMs Can't Really Build Software". He says it boils down to this: the distinguishing factor of effective engineers is their ability to build and maintain clear mental models. We do this by:

Building a mental model of what you want to do
Building a mental model of what the code does
Reducing the difference between the two

It's kind of an interesting observation about how we (as humans) problem solve vs. how we use LLMs to problem solve: With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution. With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself. One adds information — complexity you might even say — to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the "solution" comes through installing more and more things — adding more and more context, i.e. more and more packages. LLM: Problem? Add more context. NPM: Problem? There's a package for that.

Contrast that with a solution that comes through simplification. You don't add more context. You simplify your mental model so you need less to solve a problem — if you solve it at all, perhaps you eliminate the problem entirely! Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem you had in the first place; thus eliminating the need to solve any problem at all, or to add any additional context or complexity (or dependency).

As I'm typing this, I'm thinking of that image of the evolution of the Raptor engine, where it evolved in simplicity. That stands in contrast to my experience working with LLMs, which often want more and more context from me to get to a generative solution.

I know, I know. There's probably a false equivalence here. This entire post started as a note and I just kept going. This post itself needs further thought and simplification. But that'll have to come in a subsequent post, otherwise this never gets published lol.

17 hours ago 3 votes
How to Leverage the CPU’s Micro-Op Cache for Faster Loops

Measuring, analyzing, and optimizing loops using Linux perf, Top-Down Microarchitectural Analysis, and the CPU’s micro-op cache

yesterday 4 votes
Choosing Tools To Make Websites

Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what's happening "under the hood":

A framework's marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can't achieve the beautiful range of results you see in the framework's sample site gallery.

This is a great callout. Tools will say, "You don't have to worry about the details." But the reality is, you end up worrying about the details — at least to some degree. Why? Because what you want to build is full of personalization. That's how you differentiate yourself, which means you're going to need a tool that's expressive enough to help you. So the question becomes: how hard is it to understand the details that are being intentionally hidden away?

A lot of the time those details are not exposed directly. Instead they're exposed through configuration. But configuration doesn't really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

Configuration can lead to as many problems as it solves

Nailed it. He continues:

Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework's transformation than to write a program that directly implements that transformation itself.

I'm not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website's architecture simple and decoupled from your hosting provider (something I've been a long-time proponent of):

These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it's forever. Ten years from now you'll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears.

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it's a good read from Jan and lays out his vision for why he's building Web Origami: a tool that encourages you to understand (and customize) how you transform content to a website. He just launched version 0.4.0, which has some stuff I'm excited to try out further (I'll have to write about all that later).

3 days ago 4 votes