I strongly believe that a fast iteration cycle is important for productivity. In programming, fast iteration means: how quickly can I go from finishing writing the code to testing it? That’s why I don’t like languages that compile slowly: slow compilation puts a hard limit on your iteration cycle. That’s a general principle. How does it translate to actions? I’m glad you asked.

Simulated errors

I’m writing a web app and some code paths might fail, e.g. the server returns an error response. When that happens I want to display an error message to the user. I want to test that, but the server doesn’t typically return errors. I could modify the server to return an error and restart it, but in a compiled language like Go that’s a whole thing. Instead, I can force the error condition in the code. Because web dev typically offers hot reload of code, I can modify the code to pretend the request failed, save it, reload the app, and I’m testing the error handling. To make it less ad-hoc another strategy...
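For illustration, here is a minimal sketch of that idea in JavaScript (the simulateFailure flag and the apiGet wrapper are hypothetical names, not from the post): with hot reload, flipping one constant is enough to exercise the error path.

// Hypothetical flag: set to true while testing the error UI, save, hot reload.
const simulateFailure = true;

async function apiGet(url) {
  if (simulateFailure) {
    // Pretend the server returned an error response.
    return { ok: false, status: 500, error: "simulated server error" };
  }
  let rsp = await fetch(url);
  if (!rsp.ok) {
    return { ok: false, status: rsp.status, error: await rsp.text() };
  }
  return { ok: true, status: rsp.status, data: await rsp.json() };
}

Callers only see the { ok, ... } result, so the error-handling UI can be tested without touching or restarting the server.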
a month ago



More from Krzysztof Kowalczyk blog

Stage Manager in macOS

Stage Manager in macOS is not a secret, but I’ve only learned about it recently. It’s off by default, so you have to enable it in System Settings. It’s hard to describe in words, so you’ll have to try it. And experiment a bit, because I didn’t get it in the first hour. It’s a certain way to manage windows for less clutter. Imagine you have a browser, an editor and a terminal, i.e. 3 windows. It might be annoying to have them all shown on screen. When Stage Manager is enabled, thumbnails of your windows sit on the left edge of the screen and you can switch to a window by clicking its thumbnail. By default Stage Manager shows each window by itself on screen. You can group windows by dragging a thumbnail onto the screen. For example, when I’m developing I use a text editor and a terminal, so I group them so they are both visible on screen at the same time, but not the other apps. So far I’m enjoying Stage Manager.

a month ago 24 votes
Zed debug setup for go server / Svelte web app

Today I figured out how to set up Zed to debug, at the same time, my go server and Svelte web app.

My dev setup for working on my web app is:

- the go server is run with the -run-dev arg
- the go server provides backend APIs and proxies requests it doesn’t handle to a vite dev server, which serves the JavaScript etc. files from my Svelte code
- the go server in -run-dev mode automatically launches vite dev
- the go server runs on port 9339

It’s possible to set up Zed to debug both the server and the frontend JavaScript code. In retrospect it’s simple, but it took me a moment to figure out. I needed to create the following .zed/debug.json:

// Project-local debug tasks
//
// For more documentation on how to configure debug tasks,
// see: https://zed.dev/docs/debugger
[
  {
    "adapter": "Delve",
    "label": "run go server",
    "request": "launch",
    "mode": "debug",
    "program": ".",
    "cwd": "${ZED_WORKTREE_ROOT}",
    "args": ["-run-dev", "-no-open"],
    "buildFlags": [],
    "env": {}
  },
  {
    "adapter": "JavaScript",
    "label": "Debug in Chrome",
    "type": "chrome",
    "request": "launch",
    "url": "http://localhost:9339/",
    "webRoot": "$ZED_WORKTREE_ROOT/src",
    "console": "integratedTerminal",
    "skipFiles": ["<node_internals>/**"]
  }
]

It’s mostly self-explanatory. The first entry tells Zed to build the go program with go build . and run the resulting executable under the debugger with the -run-dev -no-open args. The second entry tells Zed to launch Chrome in debug mode with http://localhost:9339/ and that the files seen by Chrome come from the src/ directory, i.e. if the browser loads /foo.js the source file is src/foo.js. This is necessary to be able to set breakpoints in Zed and have them propagate to Chrome.

This eliminates the need for a terminal, so I can edit and debug with just Zed and Chrome. This is a great setup. I’m impressed with Zed.

a month ago 32 votes
lazy import of JavaScript modules

When working on big JavaScript web apps, you can split the bundle into multiple chunks and import selected chunks lazily, only when needed. That makes the main bundle smaller and faster to load and parse.

How do you lazy import a module?

let hljs = (await import("highlight.js")).default;

is the equivalent of:

import hljs from "highlight.js";

Similarly:

let libZip = await import("@zip.js/zip.js");
let blobReader = new libZip.BlobReader(blob);

is equivalent to:

import { BlobReader } from "@zip.js/zip.js";

It’s simple if we call it from an async function, but sometimes we want to lazy load from a non-async function, so things might get more complicated:

let isLazyImporting = false;
let hljs;
let markdownIt;
let markdownItAnchor;

async function lazyImports() {
  if (isLazyImporting) return;
  isLazyImporting = true;
  let modules = await Promise.all([
    import("highlight.js"),
    import("markdown-it"),
    import("markdown-it-anchor"),
  ]);
  hljs = modules[0].default;
  markdownIt = modules[1].default;
  markdownItAnchor = modules[2].default;
}

We can run it from a non-async function:

function doit() {
  lazyImports().then(() => {
    if (hljs) {
      // use hljs to do something
    }
  });
}

I’ve included protection against kicking off the lazy import more than once. That means on the second and n-th call we might not yet have the modules loaded, so hljs will still be undefined.
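One common way around that limitation (a sketch of mine, not code from the post) is to cache the import promise, so every caller awaits the same in-flight load and gets the module once it’s ready:

let hljsPromise;

function loadHljs() {
  // The first call starts the import; later calls reuse the same promise.
  if (!hljsPromise) {
    hljsPromise = import("highlight.js").then((m) => m.default);
  }
  return hljsPromise;
}

function doit() {
  loadHljs().then((hljs) => {
    // hljs is loaded here, even on the very first call
  });
}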

a month ago 23 votes
Using await in Svelte 5 components

Svelte 5 just added a way to use async functions in components. This is from Rich Harris’s talk.

The simplest component:

<script>
  async function multiply(x, y) {
    let uri = `/multiply?x=${x}&y=${y}`;
    let rsp = await fetch(uri);
    let resp = await rsp.text();
    return parseInt(resp);
  }

  let n = $state(2);
</script>

<div>{n} * 2 = {await multiply(n, 2)}</div>

Previously you couldn’t do {await multiply(n, 2)} because Svelte didn’t understand promises. Now you can.

Aborting outdated requests

Imagine getting search results from a server based on what you type in an input box. If you type foo, we first send a request for f, then for fo, then for foo, at which point we no longer care about the results for f and fo. Svelte 5 can handle aborting outdated requests:

<script>
  import { getAbortSignal } from "svelte";

  let search = $state("");
  const API = "https://dummyjson.com/product/search";

  const response = $derived(
    await fetch(`${API}?q=${search}`, { signal: getAbortSignal() })
  );
</script>

<input bind:value={search}>

<svelte:boundary>
  <ul>
    {#each (await response.json()).products as product}
      <li>{product.title}</li>
    {/each}
  </ul>
</svelte:boundary>

a month ago 25 votes

More in programming

Trusting builds with Bazel remote execution

Understanding how the architecture of a remote build system for Bazel helps implement verifiable action execution and end-to-end builds

13 hours ago 4 votes
How to Not Write "Garbage Code" (by Linus Torvalds)

Linus Torvalds, Creator of Git and Linux, on reducing cognitive load

5 hours ago 2 votes
Words are not violence

Debates, at their finest, are about exploring topics together in search of truth. That probably sounds hopelessly idealistic to anyone who's ever perused a comment section on the internet, but ideals are there to remind us of what's possible, to inspire us to reach higher — even if reality falls short.

I've been reaching for those debating ideals for thirty years on the internet. I've argued with tens of thousands of people, first on Usenet, then in blog comments, then Twitter, now X, and also LinkedIn — as well as a million other places that have come and gone. It's mostly been about technology, but occasionally about society and morality too.

There have been plenty of heated moments during those three decades. It doesn't take much for a debate between strangers on the internet to escalate into something far lower than a "search for truth", and I've often felt willing to settle for just a cordial tone! But for the majority of that time, I never felt like things might escalate beyond the keyboards and into the real world.

That was until we had our big blow-up at 37signals back in 2021. I suddenly got to see a different darkness from the most vile corners of the internet. I heard from those who seem to prowl for a mob-sanctioned opportunity to threaten and intimidate those they disagree with. It fundamentally changed me. But I used the experience as a mirror to reflect on the ways my own engagement with the arguments occasionally felt too sharp, too personal. And I've since tried to refocus way more of my efforts on the positive and the productive. I'm by no means perfect, and the internet often tempts the worst in us, but I resist better now than I did then.

What I cannot come to terms with, though, is the modern equation of words with violence. The growing sense of permission that if the disagreement runs deep enough, then violence is a justified answer to settle it. That this is wrong sounds so obvious that we shouldn't need to state it in a civil society, but clearly it is not. Not even in technology. Not even in programming.

There are plenty of factions here who've taken to justifying their violent fantasies by referring to their ideological opponents as "nazis", "fascists", or "racists", and then following that up with a call to "punch a nazi" or worse. When you hear something like that often enough, it's easy to grow glib about it. That it's just a saying. They don't mean it. But I'm afraid many of them really do.

Which brings us to Charlie Kirk. And the technologists who named drinks at their bar after his mortal wound just hours after his death, to name but one of the many morbid celebrations of the famous conservative debater's death. It's sickening. Deeply, profoundly sickening.

My first instinct was exactly what such people would delight in: to watch the rest of us recoil, then retract, and perhaps even eject. To leave the internet for a while or forever. But I can't do that. We shouldn't do that. Instead, we should double down on the opposite. Continue to show up with our ideals held high while we debate strangers in that noble search for the truth. Where we share our excitement, our enthusiasm, and our love of technology, country, and humanity.

I think that's what Charlie Kirk did so well. He continued to show up for the debate, even on hostile territory. Not because he thought he was ever going to convince everyone, but because he knew he'd always reach some with a good argument, a good insight, or at least a different perspective. You could agree or not. Counter or be quiet. But the earnest exploration of topics in a live exchange with another human is as fundamental to our civilization as Socrates himself. Don't give up, don't give in. Keep debating.

2 days ago 3 votes
AI Coding

In my old age I’ve mostly given up trying to convince anyone of anything. Most people do not care to find the truth, they care about what pumps their bags. Some people go as far as to believe that perception is reality and that truth is a construction. I hope there’s a special place in hell for those people. It’s why the world wasted $10B+ on self-driving car companies that obviously made no sense. There’s a much bigger market for truths that pump bags than for truths that don’t. So here’s your new truth that there’s no market for.

Do you believe a compiler can code? If so, then go right on believing that AI can code. But if you don’t, then AI is no better than a compiler, and arguably, in its current form, worse.

The best model of a programming AI is a compiler. You give it a prompt, which is “the code”, and it outputs a compiled version of that code. Sometimes you’ll use it interactively, giving updates to the prompt after it has returned code, but you find that, like most IDEs, this doesn’t work all that well and you are often better off adjusting the original prompt and “recompiling”.

While noobs and managers are excited that the input language to this compiler is English, English is a poor language choice for many reasons:

- It’s not precise in specifying things. The only reason it works for many common programming workflows is because they are common. The minute you try to do new things, you need to be as verbose as the underlying language.
- AI workflows are, in practice, highly non-deterministic. While different versions of a compiler might give different outputs, they all promise to obey the spec of the language, and if they don’t, there’s a bug in the compiler. English has no similar spec.
- Prompts are highly non-local: changes made in one part of the prompt can affect the entire output.

tl;dr: you think AI coding is good because compilers, languages, and libraries are bad.

This isn’t to say “AI” technology won’t lead to some extremely good tools. But I argue this comes from increased amounts of search and optimization and patterns to crib from, not from any magic “the AI is doing the coding”. You are still doing the coding, you are just using a different programming language.

That anyone uses LLMs to code is a testament to just how bad tooling and languages are. And that LLMs can replace developers at companies is a testament to how bad those companies’ codebases and hiring bars are.

AI will eventually replace programming jobs in the same way compilers replaced programming jobs. In the same way spreadsheets replaced accounting jobs. But the sooner we start thinking about it as a tool in a workflow and a compiler—through a lens where tons of careful thought has been put in—the better.

I can’t believe anyone bought those vibe coding crap things for billions. Many people in self driving accused me of just being upset that I didn’t get the billions, and I’m sure it’s the same thoughts this time. Is your way of thinking so fucking broken that you can’t believe anyone cares more about the actual truth than make-believe dollars?

From this study, AI makes you feel 20% more productive but in reality makes you 19% slower. How many more billions are we going to waste on this? Or we could, you know, do the hard work and build better programming languages, compilers, and libraries. But that can’t be hyped up for billions.

2 days ago 3 votes
Performant Full-Disk Encryption on a Raspberry Pi, but Foiled by Twisty UARTs

In my post yesterday (“ARM is great, ARM is terrible (and so is RISC-V)”), I described my desire to find ARM hardware with AES instructions to support full-disk encryption, and the poor state of the OS ecosystem around the newer ARM boards. I was anticipating buying either a newer ARM SBC or an x86 mini …

2 days ago 4 votes