More from Krzysztof Kowalczyk blog
I strongly believe that a fast iteration cycle is important for productivity. In programming, a fast iteration cycle means: how quickly can I go from finishing writing the code to testing it? That's why I don't like languages that compile slowly: slow compilation puts a hard limit on your iteration cycle.

That's a general principle. How does it translate into action? I'm glad you asked.

Simulated errors

I'm writing a web app and some code paths might fail, e.g. the server returns an error response. When that happens I want to display an error message to the user. I want to test that, but the server doesn't typically return errors. I could modify the server to return an error and restart it, but in a compiled language like Go that's a whole thing.

Instead I can force the error condition in the code. Because web dev typically offers hot reload of code, I can modify the code to pretend the request failed, save it, reload the app, and I'm testing the error handling.

To make it less ad hoc, another strategy is to have debug flags on the window object, e.g.:

```javascript
window.debug.simulateError = false;
```

Then the code becomes:

```javascript
try {
  if (window.debug.simulateError) {
    throw new Error("fetch failed");
  }
  let rsp = await fetch(...);
} catch (e) {
  // show the error message to the user
}
```

That way I can toggle window.debug.simulateError in the dev tools console, without changing the code. The downside is that I have to repeat this code for every fetch().
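The fetch() API makes fabricating such failures easy: the standard Response constructor builds a real response object with any status, and an unreachable server is just a rejected promise carrying a TypeError. A small sketch (the statuses, bodies, and function names here are my choices, not from the post; Node 18+ has a global Response, so this also runs outside the browser):

```javascript
// Fabricated fetch() outcomes for testing error handling.

function fakeServerError() {
  // Response.ok is false for any status outside 200-299
  return new Response("simulated server error", { status: 500 });
}

function fakeNotFound() {
  return new Response("not found", { status: 404 });
}

function fakeNetworkFailure() {
  // real fetch() rejects with a TypeError when it can't reach the server
  return Promise.reject(new TypeError("simulated network failure"));
}
```

Code awaiting these sees exactly what it would see from a real failing fetch(), so the error-handling path under test is the production one.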
A more principled approach is:

```javascript
async function myFetch(uri, opts) {
  if (window.debug.simulateError) {
    throw new Error("fetch failed");
  }
  return await fetch(uri, opts);
}
```

To go even further, we could simulate different kinds of network errors:

- Response.ok is false
- response is a 404 or 500
- failed to reach the server

We can change simulateError from a bool to a number and have:

```javascript
async function myFetch(uri, opts) {
  let se = window.debug.simulateError;
  if (se === 1) {
    // simulate Response.ok is false
    return ...;
  }
  if (se === 2) {
    // simulate 404 response
    return;
  }
  if (se === 3) {
    // simulate network offline; real fetch() rejects with a TypeError
    throw new TypeError("failed to fetch");
  }
  return await fetch(uri, opts);
}
```

Start by showing the dialog

Let's say I'm working on a new dialog, e.g. a rename dialog. To get to that dialog I have to perform some UI action, e.g. open the context menu and click the Rename note menu item. Not a big deal, but I'm still working on the dialog, so it's a bit annoying to repeat that UI action every time I move a button to the right to see how it'll look. In Svelte we do:

```svelte
let showingRenameDialog = $state(false);

function showRenameDialog() {
  showingRenameDialog = true;
}

{#if showingRenameDialog}
  <RenameDialog ...></RenameDialog>
{/if}
```

To speed up the dev cycle:

```svelte
let showingRenameDialog = $state(true); // show while we're working on it
```

While I'm still designing and writing the code for the dialog, show it by default. That way app reloads caused by code changes won't require you to manually redo UI actions to trigger the dialog.

Ad-hoc test code

Let's say you're writing non-trivial server code to process a request. Say the request is a POST whose body contains a zip file with images, which the server needs to unzip, resize, and save to the file system. You want to test this as you implement the logic, but the iteration cycle is slow:

- you write the code
- you recompile and restart the server
- you go through the UI actions to send the request

Most of the code (unpacking the zip file, resizing images) can be tested without making the request.
So you isolate the function:

```go
func resizeImagesInZip(zipData []byte) error {
    // the code you're working on
}
```

Now you have to trigger it easily. The simplest way is ad-hoc test code:

```go
func main() {
    if true {
        zipData := loadTestZipFileMust()
        resizeImagesInZip(zipData)
        return
    }
}
```

While you're working on the code, the server just runs the test code. When you're done, you switch it off:

```go
func main() {
    if false {
        // leave the code so that you can re-enable it
        // easily in the future
        zipData := loadTestZipFileMust()
        resizeImagesInZip(zipData)
        return
    }
}
```

Those are a few tactical tips for tightening the dev cycle. You can come up with more such ideas by asking yourself: how can I speed up my iteration cycle?
Stage Manager in macOS is not a secret, but I've only learned about it recently. It's off by default, so you have to enable it in System Settings.

It's hard to describe in words, so you'll have to try it. And experiment a bit, because I didn't get it in the first hour. It's a certain way to manage windows for less clutter.

Imagine you have a browser, an editor, and a terminal, i.e. 3 windows. It might be annoying to have them all shown on screen. When Stage Manager is enabled, thumbnails of the windows sit on the left edge of the screen and you can switch to a window by clicking its thumbnail. By default Stage Manager shows each window by itself on screen.

You can group windows by dragging a thumbnail onto the screen. For example, when I'm developing I'm using a text editor and a terminal, so I group them so they are both visible on screen at the same time, but not the other apps.

So far I'm enjoying using Stage Manager.
Today I figured out how to set up Zed to debug, at the same time, my Go server and Svelte web app.

My dev setup for working on my web app is:

- the go server is run with the -run-dev arg
- the go server provides backend APIs and proxies requests it doesn't handle to a vite dev server, which serves the JavaScript etc. files from my Svelte code
- the go server in -run-dev mode automatically launches vite dev
- the go server runs on port 9339

It's possible to set up Zed to debug both the server and the frontend JavaScript code. In retrospect it's simple, but it took me a moment to figure out. I needed to create the following .zed/debug.json:

```json
// Project-local debug tasks
//
// For more documentation on how to configure debug tasks,
// see: https://zed.dev/docs/debugger
[
  {
    "adapter": "Delve",
    "label": "run go server",
    "request": "launch",
    "mode": "debug",
    "program": ".",
    "cwd": "${ZED_WORKTREE_ROOT}",
    "args": ["-run-dev", "-no-open"],
    "buildFlags": [],
    "env": {}
  },
  {
    "adapter": "JavaScript",
    "label": "Debug in Chrome",
    "type": "chrome",
    "request": "launch",
    "url": "http://localhost:9339/",
    "webRoot": "$ZED_WORKTREE_ROOT/src",
    "console": "integratedTerminal",
    "skipFiles": ["<node_internals>/**"]
  }
]
```

It's mostly self-explanatory. The first entry tells Zed to build the go program with go build . and run the resulting executable under the debugger with the -run-dev -no-open args. The second entry tells it to launch Chrome in debug mode with http://localhost:9339/ and that the files seen by Chrome come from the src/ directory, i.e. if the browser loads /foo.js, the source file is src/foo.js. This is necessary to be able to set breakpoints in Zed and have them propagate to Chrome.

This eliminates the need for a terminal, so I can edit and debug with just Zed and Chrome. This is a great setup. I'm impressed with Zed.
When working on big JavaScript web apps, you can split the bundle into multiple chunks and import selected chunks lazily, only when needed. That makes the main bundle smaller and faster to load and parse.

How do you lazy import a module?

```javascript
let hljs = (await import("highlight.js")).default;
```

is the equivalent of:

```javascript
import hljs from "highlight.js";
```

Similarly:

```javascript
let libZip = await import("@zip.js/zip.js");
let blobReader = new libZip.BlobReader(blob);
```

is the equivalent of:

```javascript
import { BlobReader } from "@zip.js/zip.js";
```

It's simple if we call it from an async function, but sometimes we want to lazy load from a non-async function, so things might get more complicated:

```javascript
let isLazyImporting = false;
let hljs;
let markdownIt;
let markdownItAnchor;

async function lazyImports() {
  if (isLazyImporting) return;
  isLazyImporting = true;
  // Promise.all resolves to an array of module objects
  let mods = await Promise.all([
    import("highlight.js"),
    import("markdown-it"),
    import("markdown-it-anchor"),
  ]);
  hljs = mods[0].default;
  markdownIt = mods[1].default;
  markdownItAnchor = mods[2].default;
}
```

We can run it from a non-async function:

```javascript
function doit() {
  lazyImports().then(() => {
    if (hljs) {
      // use hljs to do something
    }
  });
}
```

I've included protection against kicking off the lazy import more than once. That means on the second and n-th call we might not yet have the module loaded, so hljs may still be undefined.
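A variation that avoids that undefined gap (my sketch, not from the post) is to cache the import promise itself instead of a boolean flag. Every caller gets the same promise and awaits the loaded module, so no caller can observe a half-initialized state. The module name below (node:path) is just a stand-in so the sketch runs anywhere:

```javascript
// Cache the promise, not a "started" flag: repeat calls return the
// same in-flight (or settled) promise for a given module name.
const importCache = new Map();

function lazyImport(name) {
  let p = importCache.get(name);
  if (!p) {
    p = import(name);
    importCache.set(name, p);
  }
  return p;
}

// from a non-async function, chain .then():
function doit() {
  lazyImport("node:path").then((mod) => {
    // mod is the module namespace object; a default export
    // would be mod.default
    console.log(mod.join("a", "b"));
  });
}
```

Dynamic import() already deduplicates loads per module, so the Map mostly buys you a uniform call site and a place to hook logging or error handling.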
Svelte 5 just added a way to use async functions in components. This is from Rich Harris' talk.

The simplest component:

```svelte
<script>
  async function multiply(x, y) {
    let uri = `/multiply?x=${x}&y=${y}`;
    let rsp = await fetch(uri);
    let resp = await rsp.text();
    return parseInt(resp);
  }

  let n = $state(2);
</script>

<div>{n} * 2 = {await multiply(n, 2)}</div>
```

Previously you couldn't do {await multiply(n, 2)} because Svelte didn't understand promises. Now you can.

Aborting outdated requests

Imagine getting search results from a server based on what you type in an input box. If you type foo, we first send a request for f, then for fo, then for foo, at which point we no longer care about the results for f and fo. Svelte 5 can handle aborting outdated requests:

```svelte
<script>
  import { getAbortSignal } from "svelte";

  let search = $state("");
  const API = "https://dummyjson.com/product/search";
  const response = $derived(
    await fetch(`${API}?q=${search}`, { signal: getAbortSignal() })
  );
</script>

<input bind:value={search}>

<svelte:boundary>
  <ul>
    {#each (await response.json()).products as product}
      <li>{product.title}</li>
    {/each}
  </ul>
</svelte:boundary>
```
More in programming
Last year I wrote a pair of articles about ratelimiting:

- GCRA: leaky buckets without the buckets
- exponential rate limiting

Recently, Chris “cks” Siebenmann has been working on ratelimiting HTTP bots that are hammering his blog. His articles prompted me to write some clarifications, plus a few practical anecdotes about ratelimiting email.

mea culpa

The main reason I wrote the GCRA article was to explain GCRA better, without the standard obfuscatory terminology, and to compare GCRA with a non-stupid version of the leaky bucket algorithm. It wasn’t written with my old exponential ratelimiting in mind, so I didn’t match up the vocabulary. In the exponential ratelimiting article I tried to explain how the different terms correspond to the same ideas, but I botched it by trying to be too abstract. So let’s try again.

parameters

It’s simplest to configure these ratelimiters (leaky bucket, GCRA, exponential) with two parameters:

- limit
- period

The maximum permitted average rate is calculated by dividing one by the other:

    rate = limit / period

The period is the time over which client behaviour is averaged, which is also how long it takes for the ratelimiter to forget past behaviour. In my GCRA article I called it the window. Linear ratelimiters (leaky bucket and GCRA) are 100% forgetful after one period; the exponential ratelimiter is 67% forgetful.

The limit does double duty: as well as setting the maximum average rate (measured in requests per period), it sets the maximum size (measured in requests) of a fast burst of requests following a sufficiently long quiet gap.

how bursty

You can increase or decrease the burst limit – while keeping the average rate limit the same – by increasing or decreasing both the limit and the period. For example, I might set limit = 600 requests per period = 1 hour. If I want to allow the same average rate, but with a smaller burst size, I might set limit = 10 requests per period = 1 minute.
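To make limit, period, and burst concrete, here is a minimal GCRA-style sketch (my code, not from the articles): the client's state is a single timestamp, its earliest permitted time, and each accepted request pushes it forward by period / limit.

```javascript
// Minimal GCRA sketch, in the lenient "leaky" style: a rejected
// request leaves the client's state untouched. Times are in seconds
// and the clock is passed in, to keep the sketch deterministic.
function makeGcra({ limit, period }) {
  const perUnit = period / limit; // window used up per unit of cost
  let earliest = -Infinity;       // client's earliest permitted time

  return function request(now, cost = 1) {
    // the stored time may lag `now` by at most one period,
    // which is what allows a burst of `limit` requests
    const t = Math.max(earliest, now - period);
    const next = t + cost * perUnit;
    if (next > now) return false; // over limit; state unchanged
    earliest = next;
    return true;
  };
}
```

With limit = 6 per period = 60 seconds, a quiet client can burst 6 requests at once and is then held to one request every 10 seconds; doubling both parameters keeps the same average rate but doubles the burst, matching the "how bursty" trade-off above.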
anecdote

When I was looking after email servers, I set ratelimits for departmental mail servers to catch outgoing spam in case of compromised mail accounts or web servers. I sized these limits at a small multiple of the normal traffic, so that legitimate mail was not delayed but spam could be stopped fairly quickly.

A typical setting was 200/hour, which is enough for a medium-sized department. (As a rule of thumb, expect people to send about 10 messages per day.)

An hourly limit is effective at catching problems quickly during typical working hours, but it can let out a lot of spam over a weekend. So I would also set a second backstop limit like 1000/day, based on average daily traffic instead of peak hourly traffic. It’s a lower average rate that doesn’t forget bad behaviour so quickly, both of which help with weekend spam.

variable cost

Requests are not always fixed-cost. For example, you might want to count the request size in bytes when ratelimiting bandwidth.

The exponential algorithm calculates the instantaneous rate as

    r_inst = cost / interval

where cost is the size of the request and interval is the time since the previous request.

I’ve edited my GCRA algorithm to make the cost of requests more clear. In GCRA a request uses up some part of the client’s window, a nominal time measured in seconds. To convert a request’s size into time spent:

    spend = cost / rate

So the client’s earliest permitted time should be updated like:

    time += cost * period / limit

(In constrained implementations the period / limit factor can be precomputed.)

how lenient

When a client has used up its burst limit and is persistently making requests faster than its rate limit, the way the ratelimiter accepts or rejects requests is affected by the way it updates its memory of the client. For exim’s ratelimit feature I provided two modes, called “strict” and “leaky”. There is a third possibility: an intermediate mode which I will call “forgiving”.

The “leaky” mode is most lenient.
An over-limit client will have occasional requests accepted at the maximum permitted rate; the rest of its requests will be rejected. When a request is accepted, all of the client’s state is updated; when a request is rejected, the client’s state is left unchanged. The lenient leaky mode works for both GCRA and exponential ratelimiting.

In “forgiving” mode, all of a client’s requests are rejected while it is over the ratelimit. As soon as it slows down below the ratelimit, its requests will start being accepted. When a request is accepted, all of the client’s state is updated; when a request is rejected, the client’s time is updated but (in the exponential ratelimiter) not its measured rate. The forgiving mode works for both GCRA and exponential ratelimiting.

In “strict” mode, all of a client’s requests are rejected while it is over the ratelimit, and requests continue to be rejected after the client has slowed down, for a length of time that depends on how fast it previously was. When a request is accepted or rejected, both the client’s time and its measured rate are updated. The strict mode only works for exponential ratelimiting.

I only realised yesterday, from the discussion with cks, how a “forgiving” mode can be useful for the exponential ratelimiter, and how it corresponds to the less-lenient mode of linear leaky bucket and GCRA ratelimiters. (I didn’t describe the less-lenient mode in my GCRA article.)

anecdote

One of the hardest things about ratelimiting email was coming up with a policy that didn’t cause undue strife and unnecessary work.

When other mail servers (like the departmental server in the anecdote above) were sending mail through my relays, it made sense to use “leaky” ratelimiter mode with SMTP 450 temporary rejects. When there was a flood of mail, messages would be delayed and retried automatically. When the department’s queue size alerts went off, the department admin could take a look and respond as appropriate. That policy worked fairly well.
However, when the sender was an end-user sending via my MUA message submission servers, they were usually not running software that could gracefully handle an SMTP 450 temporary rejection.

The most difficult cases were the various college and department alumni offices. Many of them would send out annual newsletters, using some dire combination of Microsoft Excel / Word / Outlook mailmerge, operated by someone with limited ability to repair a software failure. In that situation, SMTP 450 errors broke their mailshots, causing enormous problems for the alumni office and their local IT support. (Not nice to realise I caused such trouble!)

The solution was to configure the ratelimiter in “strict” mode and “freeze” or quarantine over-limit bulk mail from MUAs. The “strict” mode ensured that everything after the initial burst of a spam run was frozen. When the alert was triggered I inspected a sample of the frozen messages. If they were legitimate newsletters, I could thaw them for delivery and reset the user’s ratelimit. In almost all cases the user would not be disturbed. If it turned out the user’s account was compromised and used to send spam, then I could get their IT support to help sort it out, and delete the frozen junk from the quarantine.

That policy worked OK: I was the only one who had to deal with my own false positives, and they were tolerably infrequent.
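The three modes can be sketched in a few lines of code. This is my illustration, not code from the articles: it assumes an exim-style exponential smoothing formula, rate = (1 - a) * r_inst + a * rate with a = e^(-interval/period), where r_inst = cost / interval is the instantaneous rate. Rates are in requests per period and the clock is passed in.

```javascript
// Exponential ratelimiter sketch with "leaky", "forgiving" and
// "strict" modes; the modes differ only in how state is updated
// when a request is rejected.
function makeExpLimiter({ limit, period, mode }) {
  let rate = 0;         // smoothed rate, in requests per period
  let last = -Infinity; // time of the previous state update

  return function request(now, cost = 1) {
    const interval = now - last;
    const a = Math.exp(-interval / period);
    const rInst = (cost * period) / interval; // instantaneous rate
    const newRate = (1 - a) * rInst + a * rate;
    const ok = newRate <= limit;
    if (ok || mode === "strict") {
      // strict: update time and rate even on rejection, so a client
      // that slows down stays penalised for a while
      rate = newRate;
      last = now;
    } else if (mode === "forgiving") {
      // forgiving: update the time but not the measured rate, so the
      // client is accepted as soon as it slows below the limit
      last = now;
    }
    // leaky: a rejected request leaves the state untouched, so an
    // over-limit client still gets occasional requests accepted
    return ok;
  };
}
```

Flooding each mode with one request per second against limit = 2 per period = 60 shows the difference: leaky lets occasional requests through at the permitted rate, while forgiving and strict reject everything after the initial burst; after a quiet spell, forgiving recovers immediately and strict only gradually.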
Linus Torvalds, Creator of Git and Linux, on reducing cognitive load
You heard there was money in tech. You never cared about technology. You are an entryist piece of shit. But you won’t leave willingly. Give it all away to everyone for free. Then you’ll have no reason to be here.
Understanding how the architecture of a remote build system for Bazel helps implement verifiable action execution and end-to-end builds
Debates, at their finest, are about exploring topics together in search of truth. That probably sounds hopelessly idealistic to anyone who's ever perused a comment section on the internet, but ideals are there to remind us of what's possible, to inspire us to reach higher — even if reality falls short.

I've been reaching for those debating ideals for thirty years on the internet. I've argued with tens of thousands of people, first on Usenet, then in blog comments, then Twitter, now X, and also LinkedIn — as well as a million other places that have come and gone. It's mostly been about technology, but occasionally about society and morality too.

There have been plenty of heated moments during those three decades. It doesn't take much for a debate between strangers on the internet to escalate into something far lower than a "search for truth", and I've often felt willing to settle for just a cordial tone! But for the majority of that time, I never felt like things might escalate beyond the keyboards and into the real world.

That was until we had our big blow-up at 37signals back in 2021. I suddenly got to see a different darkness from the most vile corners of the internet. I heard from those who seem to prowl for a mob-sanctioned opportunity to threaten and intimidate those they disagree with. It fundamentally changed me.

But I used the experience as a mirror to reflect on the ways my own engagement with the arguments occasionally felt too sharp, too personal. And I've since tried to refocus far more of my efforts on the positive and the productive. I'm by no means perfect, and the internet often tempts the worst in us, but I resist better now than I did then.

What I cannot come to terms with, though, is the modern equation of words with violence. The growing sense of permission that if the disagreement runs deep enough, then violence is a justified answer to settle it.
That sounds so obvious that we shouldn't need to state it in a civil society, but clearly it is not. Not even in technology. Not even in programming. There are plenty of factions here who've taken to justifying their violent fantasies by referring to their ideological opponents as "nazis", "fascists", or "racists", and then following that up with a call to "punch a nazi" or worse.

When you hear something like that often enough, it's easy to grow glib about it. That it's just a saying. That they don't mean it. But I'm afraid many of them really do.

Which brings us to Charlie Kirk. And the technologists who named drinks at their bar after his mortal wound just hours after his death, to name but one of the many morbid celebrations of the famous conservative debater's death. It's sickening. Deeply, profoundly sickening.

And my first instinct was exactly what such people would delight in happening: to watch the rest of us recoil, then retract, and perhaps even eject. To leave the internet for a while or forever. But I can't do that. We shouldn't do that.

Instead, we should double down on the opposite. Continue to show up with our ideals held high while we debate strangers in that noble search for the truth. Where we share our excitement, our enthusiasm, and our love of technology, country, and humanity.

I think that's what Charlie Kirk did so well. He continued to show up for the debate. Even on hostile territory. Not because he thought he was ever going to convince everyone, but because he knew he'd always reach some with a good argument, a good insight, or at least a different perspective. You could agree or not. Counter or be quiet. But the earnest exploration of topics in a live exchange with another human is as fundamental to our civilization as Socrates himself.

Don't give up, don't give in. Keep debating.