Conrad Irwin has an article on the Zed blog, “Why LLMs Can’t Really Build Software”. He says it boils down to this: the distinguishing factor of effective engineers is their ability to build and maintain clear mental models. We do this by:

- Building a mental model of what you want to do
- Building a mental model of what the code does
- Reducing the difference between the two

It’s kind of an interesting observation about how we (as humans) problem solve vs. how we use LLMs to problem solve:

- With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution.
- With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself.

One adds information — complexity you might even say — to solve a problem. The other eliminates it. You know what that sort of makes me think of? NPM-driven development. Solving problems with LLMs is like solving front-end problems with NPM: the “solution” comes through installing...
17 hours ago


More from Jim Nielsen’s Blog

Choosing Tools To Make Websites

Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what’s happening “under the hood”:

A framework’s marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can’t achieve the beautiful range of results you see in the framework’s sample site gallery.

This is a great callout. Tools will say, “You don’t have to worry about the details.” But the reality is, you end up worrying about the details — at least to some degree. Why? Because what you want to build is full of personalization. That’s how you differentiate yourself, which means you’re going to need a tool that’s expressive enough to help you.

So the question becomes: how hard is it to understand the details that are being intentionally hidden away? A lot of the time those details are not exposed directly. Instead they’re exposed through configuration. But configuration doesn’t really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

Configuration can lead to as many problems as it solves

Nailed it. He continues:

Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework’s transformation than to write a program that directly implements that transformation itself.

I’m not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website’s architecture simple and decoupled from your hosting provider (something I’ve been a long-time proponent of):

These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it’s forever. Ten years from now you’ll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears.

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it’s a good read from Jan and lays out his vision for why he’s building Web Origami: a tool that encourages you to understand (and customize) how you transform content to a website. He just launched version 0.4.0, which has some exciting stuff I want to try out (I’ll have to write about all that later). Email · Mastodon · Bluesky

3 days ago 4 votes
Sit On Your Ass Web Development

I’ve been listening to Poor Charlie’s Almanack, which is a compilation of talks by Charlie Munger, legendary vice-chairman at Berkshire Hathaway. One thing Charlie talks about is what he calls “sit on your ass investing”, which is the opposite of day trading. Rather than being in the market every day (chasing trends, reacting to fluctuations, and trying to time transactions) Charlie advocates spending most of your time “sitting on your ass”. That doesn’t mean you’re doing nothing. It means that instead of constantly trading, you’re spending your time in research and preparation for trading. Eventually, a top-tier opportunity will come along and your preparation will make you capable of recognizing it and betting big. That’s when you trade. After that, you’re back to “sitting on your ass”. Trust your research. Trust your choices. Don’t tinker. Don’t micromanage. Don’t panic. Just let the compounding effects of a good choice work in your favor.

Day Trading, Day Developing

As a day trader your job is to trade daily (it’s right there in the job title). If you’re not trading every day — following trends, reacting to fluctuations, timing trades — then what are you even doing? Not your job, apparently.

I think it’s easy to view “development” this way. You’re a developer. Your job is to develop programs — to write code. If you’re not doing that every single day, then what are you even doing? From this perspective, it becomes easy to think that writing endless code for ever-changing software paradigms is just how one develops websites.

But it doesn’t have to be that way. Granted, there’s cold-blooded and warm-blooded software. Sometimes you can’t avoid that. But I also think there’s a valuable lesson in Charlie’s insight. You don’t have to chase “the market” of every new framework or API, writing endless glue code for features that already exist or that will soon exist in browsers. Instead, you can make a few select, large bets on the web platform and then “sit on your ass” until the payoff comes later!

An Example: Polyfills

I think polyfills are a great example of an approach to “sit on your ass” web development. Your job as a developer is to know enough to make a bet on a particular polyfill that aligns with the future of the web platform. Once implemented, all you have to do is sit on your ass while other really smart people who are building browsers do their part to ship the polyfilled feature in the platform. Once shipped, you “sell” your investment by stripping out the polyfill and reap the reward of having your application get lighter and faster with zero additional effort. (A sketch of the pattern follows below.)

A big part of the payoff is in the waiting — in the “sitting on your ass”. You make a smart bet, then you sit patiently while others run around endlessly writing and rewriting more code (meanwhile the only thing left for you will be to delete code). Charlie’s business partner Warren Buffett once said that it’s “better to buy a wonderful company at a fair price, than a fair company at a wonderful price”. Similarly, I’d say it’s better to build on a polyfill aligned with the future of the platform than to build on a framework re-inventing a feature of the platform.
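As a concrete illustration (my sketch, not from the original post): feature-detect the platform API and load a polyfill only where it’s missing. The Popover API check is a real, documented detection; the module path is hypothetical.

// A minimal sketch of the polyfill bet (in an ES module, so top-level await works).
// '/vendor/popover-polyfill.js' is a hypothetical path, a stand-in for whichever
// polyfill you bet on.
if (!HTMLElement.prototype.hasOwnProperty('popover')) {
  // Only browsers that haven't shipped the Popover API pay the download cost.
  await import('/vendor/popover-polyfill.js');
}
// Once support is universal, "selling" the bet is just deleting these lines
// (and the polyfill file): the app gets lighter with zero new code.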
Get Out Of Your Own Way

Want to do “Day Trading Development”?

- Jump tools and frameworks constantly — “The next one will solve all our problems!”
- Build complex, custom solutions that duplicate work the web platform is already moving towards solving.
- Commit code that churns with time, rather than compounds with it.

Want to do “Sit on Your Ass Development”?

- Do the minimum necessary to bridge the gap until browsers catch up.
- Build on forward-facing standards, then sit back and leverage the compounding effects of browser makers and standards bodies that iteratively improve year over year (none of whom you have to pay).
- As Alex Russell recommends, spend as little time as possible in your own code and instead focus on gluing together “the big C++/Rust subsystems” of the browser.

In short: spend less time gluing together tools and frameworks on top of the browser, and more time bridging tools and APIs inside of the browser. Then get out of your own way and go sit on your ass. You might find yourself more productive than ever! Email · Mastodon · Bluesky

6 days ago 9 votes
Writing: Blog Posts and Songs

I was listening to a podcast interview with Jackson Browne (American singer/songwriter, political activist, and inductee into the Rock and Roll Hall of Fame) and the interviewer asks him how he approaches writing songs with social commentaries and critiques — something along the lines of: “How do you get from the New York Times headline on a social subject to the emotional heart of a song that matters to each individual?”

Browne discusses how if you’re too subtle, people won’t know what you’re talking about. And if you’re too direct, you run the risk of making people feel like they’re being scolded. Here’s what he says about his songwriting:

I want this to sound like you and I were drinking in a bar and we’re just talking about what’s going on in the world. Not as if you’re at some elevated place and lecturing people about something they should know about but don’t but [you think] they should care. You have to get to people where [they are, where] they do care and where they do know.

I think that’s a great insight for anyone looking to have a connecting, effective voice. I know for me, it’s really easy to slide into a lecturing voice — you “should” do this and you “shouldn’t” do that. But I like Browne’s framing of trying to have an informal, conversational tone that meets people where they are. Like you’re discussing an issue in the bar, rather than listening to a sermon.

Chris Coyier is the canonical example of this that comes to mind. I still think of this post from CSS-Tricks where Chris talks about how to have submit buttons that go to different URLs:

When you submit that form, it’s going to go to the URL /submit. Say you need another submit button that submits to a different URL. It doesn’t matter why. There is always a reason for things. The web is a big place and all that.

He doesn’t conjure up some universally-applicable, justified rationale for why he’s sharing this method. Nor is there any pontificating on why this is “good” or “bad”. Instead, like most of Chris’ stuff, I read it as a humble acknowledgement of the practicalities at hand — “Hey, the world is a big place. People have to do crafty things to make their stuff work. And if you’re in that situation, here’s something that might help what ails ya.” (If you’re curious how that multi-URL submit trick works, there’s a sketch after this post.)

I want to work on developing that kind of a voice because I love reading voices like that. Email · Mastodon · Bluesky
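A note for the curious: one standard way to pull off the multi-URL submit trick is HTML’s formaction attribute, which overrides the form’s action for a single submit button. A minimal sketch (mine, with placeholder URLs):

<form action="/submit" method="post">
  <input type="text" name="title">
  <!-- Submits to /submit, the form's action -->
  <button type="submit">Save</button>
  <!-- formaction overrides the action for this button only -->
  <button type="submit" formaction="/publish">Publish</button>
</form>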

a week ago 14 votes
A Few Things About the Anchor Element’s href You Might Not Have Known

I’ve written previously about reloading a document using only HTML, but that got me thinking: what are all the values you can put in an anchor tag’s href attribute? Well, I looked around. I found some things I already knew about, e.g.:

- Link protocols like mailto:, tel:, sms:, and javascript:, which deal with specific ways of handling links.
- Protocol-relative links, e.g. href="//example.com"
- Text fragments for linking to specific pieces of text on a page, e.g. href="#:~:text=foo"

But I also found some things I didn’t know about (or only vaguely knew about), so I wrote them down in an attempt to remember them.

href="#"

Scrolls to the top of a document. I knew that. But I’m writing because #top will also scroll to the top if there isn’t another element with id="top" in the document. I didn’t know that. (Spec: “If decodedFragment is an ASCII case-insensitive match for the string top, then return the top of the document.”)

href=""

Reloads the current page, preserving the search string but removing the hash string (if present).

URL                  href="" resolves to
/path/               /path/
/path/#foo           /path/
/path/?id=foo        /path/?id=foo
/path/?id=foo#bar    /path/?id=foo

href="."

Reloads the current page, removing both the search and hash strings (if present).

Note: If you’re using href="." as a link to the current page, ensure your URLs have a trailing slash or you may get surprising navigation behavior. The path is interpreted as a file, so "." resolves to the parent directory of the current location.

URL                  href="." resolves to
/path                /
/path#foo            /
/path?id=foo         /
/path/               /path/
/path/#foo           /path/
/path/?id=foo        /path/
/path/index.html     /path/

href="?"

Reloads the current page, removing both the search and hash strings (if present). However, it preserves the ? character.

Note: Unlike href=".", trailing slashes don’t matter. The search parameters will be removed, but the path will be preserved as-is.

URL                  href="?" resolves to
/path                /path?
/path#foo            /path?
/path?id=foo         /path?
/path?id=foo#bar     /path?
/index.html          /index.html?

href="data:"

You can make links that navigate to data URLs. The super-readable version of this would be:

<a href="data:text/plain,hello world">
  View plain text data URL
</a>

But you probably want data: URLs to be encoded so you don’t get unexpected behavior, e.g.:

<a href="data:text/plain,hello%20world">
  View plain text data URL
</a>

Go ahead and try it (FYI: may not work in your user agent). Here’s a plain-text file and an HTML file.

href="video.mp4#t=10,20"

Media fragments allow linking to specific parts of a media file, like audio or video. For example, video.mp4#t=10,20 links to a video. It starts playback at 10 seconds and stops it at 20 seconds. (Support is limited at the time of this writing.)

See For Yourself

I tested a lot of this stuff in the browser and via JS. I think I got all these right. Thanks to JavaScript’s URL constructor (and the ability to pass a base URL), I could programmatically explore how a lot of these hrefs would resolve. Here’s a snippet of the test code I wrote.
You can copy/paste this in your console and they should all pass 🤞

const assertions = [
  // Preserves search string but strips hash
  // x -> { search: '?...', hash: '' }
  { href: '', location: '/path', resolves_to: '/path' },
  { href: '', location: '/path/', resolves_to: '/path/' },
  { href: '', location: '/path/#foo', resolves_to: '/path/' },
  { href: '', location: '/path/?id=foo', resolves_to: '/path/?id=foo' },
  { href: '', location: '/path/?id=foo#bar', resolves_to: '/path/?id=foo' },

  // Strips search and hash strings
  // x -> { search: '', hash: '' }
  { href: '.', location: '/path', resolves_to: '/' },
  { href: '.', location: '/path#foo', resolves_to: '/' },
  { href: '.', location: '/path?id=foo', resolves_to: '/' },
  { href: '.', location: '/path/', resolves_to: '/path/' },
  { href: '.', location: '/path/#foo', resolves_to: '/path/' },
  { href: '.', location: '/path/?id=foo', resolves_to: '/path/' },
  { href: '.', location: '/path/index.html', resolves_to: '/path/' },

  // Strips search parameters and hash string,
  // but preserves the search delimiter (`?`)
  // x -> { search: '?', hash: '' }
  { href: '?', location: '/path', resolves_to: '/path?' },
  { href: '?', location: '/path#foo', resolves_to: '/path?' },
  { href: '?', location: '/path?id=foo', resolves_to: '/path?' },
  { href: '?', location: '/path/', resolves_to: '/path/?' },
  { href: '?', location: '/path/?id=foo#bar', resolves_to: '/path/?' },
  { href: '?', location: '/index.html#foo', resolves_to: '/index.html?' },
];

const assertions_evaluated = assertions.map(({ href, location, resolves_to }) => {
  const domain = 'https://example.com';
  const expected = new URL(href, domain + location).toString();
  const received = new URL(domain + resolves_to).toString();
  return {
    href,
    location,
    expected: expected.replace(domain, ''),
    received: received.replace(domain, ''),
    passed: expected === received,
  };
});

console.table(assertions_evaluated);

Email · Mastodon · Bluesky

a week ago 18 votes

More in programming

The future of large files in Git is Git

If Git had a nemesis, it’d be large files. Large files bloat Git’s storage, slow down git clone, and wreak havoc on Git forges. In 2015, GitHub released Git LFS—a Git extension that hacked around problems with large files. But Git LFS added new complications and storage costs. Meanwhile, the Git project has been quietly working on large files. And while LFS ain’t dead yet, the latest Git release shows the path towards a future where LFS is, finally, obsolete.

What you can do today: replace Git LFS with Git partial clone

Git LFS works by storing large files outside your repo. When you clone a project via LFS, you get the repo’s history and small files, but skip large files. Instead, Git LFS downloads only the large files you need for your working copy.

In 2017, the Git project introduced partial clones that provide the same benefits as Git LFS:

Partial clone allows us to avoid downloading [large binary assets] in advance during clone and fetch operations and thereby reduce download times and disk usage. – Partial Clone Design Notes, git-scm.com

Git’s partial clone and LFS both make for:

- Small checkouts – On clone, you get the latest copy of big files instead of every copy.
- Fast clones – Because you avoid downloading large files, each clone is fast.
- Quick setup – Unlike shallow clones, you get the entire history of the project—you can get to work right away.

What is a partial clone?

A Git partial clone is a clone with a --filter. For example, to avoid downloading files bigger than 100KB, you’d use:

git clone --filter=blob:limit=100k <repo>

Later, Git will lazily download any files over 100KB you need for your checkout.

By default, if I git clone a repo with many revisions of a noisome 25MB PNG file, then cloning is slow and the checkout is obnoxiously large:

$ time git clone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (153/153), 1.19 GiB

real    3m49.052s

Almost four minutes to check out a single 25MB file!

$ du --max-depth=0 --human-readable noise-over-git/.
1.3G    noise-over-git/.
  ^ 🤬

And 50 revisions of that single 25MB file eat 1.3GB of space. But a partial clone side-steps these problems:

$ git config --global alias.pclone 'clone --filter=blob:limit=100k'
$ time git pclone https://github.com/thcipriani/noise-over-git
Cloning into '/tmp/noise-over-git'...
...
Receiving objects: 100% (1/1), 24.03 MiB

real    0m6.132s

$ du --max-depth=0 --human-readable noise-over-git/.
49M     noise-over-git/
  ^ 😻 (the same size as a git lfs checkout)

My filter made cloning 97% faster (3m 49s → 6s), and it reduced my checkout size by 96% (1.3GB → 49MB)!

But there are still some caveats here. If you run a command that needs data you filtered out, Git will need to make a trip to the server to get it. So, commands like git diff, git blame, and git checkout will require a trip to your Git host to run. But, for large files, this is the same behavior as Git LFS. Plus, I can’t remember the last time I ran git blame on a PNG 🙃.
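If you’re curious what a partial clone actually set up, you can inspect it with stock Git commands. A minimal sketch (assuming the blob:limit=100k filter from the alias above):

$ git config remote.origin.promisor
true
$ git config remote.origin.partialclonefilter
blob:limit=100k

# Objects the filter skipped are "promised" rather than downloaded.
# List them without fetching: missing objects print with a leading "?".
$ git rev-list --objects --all --missing=print | grep '^?'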
Why go to the trouble? What’s wrong with Git LFS?

Git LFS foists Git’s problems with large files onto users. And the problems are significant:

🖕 High vendor lock-in – When GitHub wrote Git LFS, the other large file systems—Git Fat, Git Annex, and Git Media—were agnostic about the server side. But GitHub locked users to their proprietary server implementation and charged folks to use it.1

💸 Costly – GitHub won because it let users host repositories for free. But Git LFS started as a paid product. Nowadays, there’s a free tier, but you’re dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage. In contrast, storing 50GB on Amazon’s S3 standard storage is $13/year.

😰 Hard to undo – Once you’ve moved to Git LFS, it’s impossible to undo the move without rewriting history.

🌀 Ongoing set-up costs – All your collaborators need to install Git LFS. Without Git LFS installed, your collaborators will get confusing, metadata-filled text files instead of the large files they expect.

The future: Git large object promisors

Large files create problems for Git forges, too. GitHub and GitLab put limits on file size2 because big files cost more money to host. Git LFS keeps server-side costs low by offloading large files to CDNs. But the Git project has a new solution.

Earlier this year, Git merged a new feature: large object promisors. Large object promisors aim to provide the same server-side benefits as LFS, minus the hassle to users.

This effort aims to especially improve things on the server side, and especially for large blobs that are already compressed in a binary format. This effort aims to provide an alternative to Git LFS – Large Object Promisors, git-scm.com

What is a large object promisor?

Large object promisors are special Git remotes that only house large files. In the bright, shiny future, large object promisors will work like this:

1. You push a large file to your Git host.
2. In the background, your Git host offloads that large file to a large object promisor.
3. When you clone, the Git host tells your Git client about the promisor.
4. Your client will clone from the Git host, and automagically nab large files from the promisor remote.

But we’re still a ways off from that bright, shiny future. Git large object promisors are still a work in progress. Pieces of large object promisors merged to Git in March of 2025. But there’s more to do and open questions yet to answer. And so, for today, you’re stuck with Git LFS for giant files. But once large object promisors see broad adoption, maybe GitHub will let you push files bigger than 100MB.

The future of large files in Git is Git. The Git project is thinking hard about large files, so you don’t have to. Today, we’re stuck with Git LFS. But soon, the only obstacle for large files in Git will be your half-remembered, ominous hunch that it’s a bad idea to stow your MP3 library in Git.

Edited by Refactoring English

1. Later, other Git forges made their own LFS servers. Today, you can push to multiple Git forges or use an LFS transfer agent, but all this makes setup harder for contributors. You’re pretty much locked in unless you put in extra effort to get unlocked. ↩︎
2. File size limits: 100MB for GitHub, 100MB for GitLab.com ↩︎

16 hours ago 5 votes
How to Leverage the CPU’s Micro-Op Cache for Faster Loops

Measuring, analyzing, and optimizing loops using Linux perf, Top-Down Microarchitectural Analysis, and the CPU’s micro-op cache

yesterday 4 votes
Omarchy micro-forks Chromium

You can just change things! That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome? Well, yes! We've made a micro fork of Chromium for Omarchy (our new 37signals Linux distribution). Just to add one feature needed for live theming. And now it's released as a package anyone can install on any flavor of Arch using the AUR (Arch User Repository). We got it all done in just four days. From idea, to solicitation, to successful patch, to release, to incorporation. And now it'll be part of the next release of Omarchy. There are no speed limits in open source. Nobody to ask for permission. You have the code, so you can make the change. All you need is skill and will (and maybe, if you need someone else to do it for you, a $5,000 incentive 😄).

2 days ago 4 votes