More from Jim Nielsen’s Blog
Read more about RSS Club.

I’ve been reading Apple in China by Patrick McGee. There’s this part in there where he’s talking about a guy who worked for Apple and was known for being ruthless, stopping at nothing to negotiate the best deal for Apple. He was so aggressive yet convincing that suppliers often found themselves faced with regret, wondering how they got talked into a deal that in hindsight was not in their best interest.[1]

One particular Apple executive sourced in the book noted how there are companies who don’t employ questionable tactics to gain an edge, but most of them don’t exist anymore. To paraphrase: “I worked with two kinds of suppliers at Apple: 1) complete assholes, and 2) those who are no longer in business.”

Taking advantage of people is normalized in business on account of it being existential, i.e. “If we don’t act like assholes — or have someone on our team who will on our behalf[1] — we will not survive!” In other words: All’s fair in self-defense.

But what’s the point of survival if you become an asshole in the process? What else is there in life if not what you become in the process?

It’s almost comedically twisted how easy it is for us to become the very thing we abhor if it means our survival. (Note to self: before you start anything, ask “What will this help me become, and is that who I want to be?”)

It’s interesting how we can smile at stories like that and think, “Gosh they’re tenacious, glad they’re on my side!” Not stopping to think for a moment what it would feel like to be on the other side of that equation.
Dan Abramov in “Static as a Server”:

Static is a server that runs ahead of time.

“Static” and “dynamic” don’t have to be binaries that describe an entire application architecture. As Dan describes in his post, whether “static” or “dynamic”, it’s all just computers doing stuff. Computer A requests something (an HTML document, a PDF, some JSON, who knows) from computer B. That request happens via a URL and the response can be computed “ahead of time” or “at request time”. In this paradigm:

- “Static” is a server responding ahead of time to anticipated requests with identical responses.
- “Dynamic” is a server responding at request time to anticipated requests with varying responses.

But these definitions aren’t binaries; rather, they represent two ends of a spectrum. Ultimately, however you define “static” or “dynamic”, what you’re dealing with is a response generated by a server — i.e. a computer — so the question is really a matter of when you want to respond and with what.

Answering the question of when previously had a really big impact on what kind of architecture you inherited. But I think we’re realizing we need more nimble architectures that can flex and grow in response to changing when a request/response cycle happens and what you respond with.

Perhaps a poor analogy, but imagine you’re preparing holiday cards for your friends and family:

- “Static” is the same card sent to everyone
- “Dynamic” is a hand-written card to each individual

But between these two are infinite possibilities, such as:

- A hand-written card that’s photocopied and sent to everyone
- A printed template with the same hand-written note to everyone
- A printed template with a different hand-written note for just some people
- etc.

Are those examples “static” or “dynamic”? [Cue endless debate].

The beauty is that in probing the space between binaries — between what “static” means and what “dynamic” means — I think we develop a firmer grasp of what we mean by those words as well as what we’re trying to accomplish with our code.

I love tools that help you think of the request/response cycle across your entire application as an endlessly-changing set of computations that happen either “ahead of time”, “just in time”, or somewhere in-between.
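To make “static is a server that runs ahead of time” concrete, here’s a small Go sketch of my own (not from Dan’s post): the same render step either runs once at build time and writes a file, or runs per request. The file name, port, and “build” flag are made up for illustration.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

// render computes a response body. The interesting question is not whether
// the result is "static" or "dynamic" but when this function runs.
func render() string {
	return fmt.Sprintf("<p>Generated at %s</p>", time.Now().Format(time.RFC3339))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "build" {
		// "Static": respond ahead of time -- run render once, write the
		// anticipated response to disk, let any file server answer later requests.
		if err := os.WriteFile("index.html", []byte(render()), 0o644); err != nil {
			log.Fatal(err)
		}
		return
	}
	// "Dynamic": respond at request time -- run render once per request.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, render())
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```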
Dan Abramov on his blog (emphasis mine):

The division between the frontend and the backend is physical. We can’t escape from the fact that we’re writing client/server applications. Some logic is naturally more suited to either side. But one side should not dominate the other. And we shouldn’t have to change the approach whenever we need to move the boundary. What we need are the tools that let us compose across the stack.

What are these tools that allow us to easily change the computation of an application happening between two computers? I think Dan is arguing that RSC is one of these tools. I tend to think of Remix (v1) as one of these tools. Let me try and articulate why by looking at the difference between how we thought of websites in a “JAMstack” architecture vs. how tools (like Remix) are changing that perspective.

- JAMstack: a website is a collection of static documents which are created by a static site generator and put on a CDN. If you want dynamism, you “opt-out” of a static document for some host-specific solution whose architecture is starkly different from the rest of your site.
- Remix: a website is a collection of URLs that follow a request/response cycle handled by a server. Dynamism is “built-in” to the architecture and handled on a URL-by-URL basis. You choose how dynamic you want any particular response to be: from a static document on a CDN for everyone, to a custom response on a request-by-request basis for each user.

As your needs grow beyond the basic “static files on disk”, a JAMstack architecture often ends up looking like a microservices architecture where you have disparate pieces that work together to create the final whole: your static site generator here, your lambda functions there, your redirect engine over yonder, each with its own requirements and lifecycles once deployed.

Remix, in contrast, looks more like a monolith: your origin server handles the request/response lifecycle of all URLs at the time and in the manner of your choosing. Instead of a build tool that generates static documents along with a number of distinct “escape hatches” to handle varying dynamic needs, your entire stack is “just a server” (that can be hosted anywhere you host a server) and you decide how and when to respond to each request — beforehand (at build), or just in time (upon request). No architectural escape hatches necessary.

You no longer have to choose upfront whether your site as a whole is “static” or “dynamic”, but rather how much dynamism (if any) is present on a URL-by-URL basis. It’s a sliding scale — a continuum of dynamism — from “completely static, the same for everyone” to “no one line of markup is the same from one request to another”, all of it modeled under the same architecture. And, crucially, that URL-by-URL decision can change as needs change. As Dan Abramov noted in a tweet:

[your] build doesn’t have to be modeled as server. but modeling it as a server (which runs once early) lets you later move stuff around.

Instead of opting into a single architecture up front with escape hatches for every need that breaks the mold, you’re opting in to the request/response cycle of the web’s natural grain, and deciding how to respond on a case-by-case basis. The web is not a collection of static documents. It’s a collection of URLs — of requests and responses — and tools that align themselves to this grain make composing sites with granular levels of dynamism so much easier.
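As a rough illustration of that URL-by-URL sliding scale (my own sketch, not Remix code; the routes and port are hypothetical), one server can answer one URL with the same precomputed bytes for everyone and another URL with a per-request computation:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()

	// Rendered once, ahead of time (here: at process start), then served
	// identically to everyone -- the "static" end of the spectrum.
	about := []byte("<h1>About</h1><p>Same bytes for every visitor.</p>")
	mux.HandleFunc("/about", func(w http.ResponseWriter, r *http.Request) {
		w.Write(about)
	})

	// Computed per request, different for every caller -- the "dynamic" end.
	mux.HandleFunc("/dashboard", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "<p>Hello %s, it is %s</p>",
			r.URL.Query().Get("user"), time.Now().Format(time.Kitchen))
	})

	// Both routes live under the same architecture; moving a URL along the
	// spectrum is a local change to its handler, not a new hosting model.
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```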
Radek Sienkiewicz in a funny-because-it’s-true piece titled “Why do AI company logos look like buttholes?”:

We made a circular shape [logo] with some angles because it looked nice, then wrote flowery language to justify why our…design is actually profound.

As someone who has grown up through the tumult of the design profession in technology, that really resonates. I’ve worked on lots of projects where I got tired of continually justifying design decisions with language dressed in corporate rationality.

This is part of the allure of code. To most people, code either works or it doesn’t. However bad it might be, you can always justify it with “Yeah, but it’s working.” But visual design is subjective forever. And that’s a difficult space to work in, where you need to forever justify your choices. In that kind of environment, decisions are often made by whoever can come up with the best language to justify their choices, or whoever has the most senior job title. Personally, I found it very exhausting.

As Radek points out, this homogenization justified through seemingly-profound language reveals something deeper about tech as an industry: folks are afraid to stand out too much. Despite claims of innovation and disruption, there's tremendous pressure to look legitimate by conforming to established visual language.

In contrast to this stands the work of individual creators whose work I have always loved — whether it’s individual blogs, videos, websites, you name it. The individual (and I’ll throw small teams in there too) has a sense of taste that isn’t diluted by the structure and processes of a larger organization. No single person suggests making a logo that resembles an anus, but when everyone's feedback gets incorporated, that's what often emerges. In other words, no individual would ever recommend what you get through corporate hierarchies.

That’s why I love the work of small teams and individuals. There’s still soul. You can still sense the individuals — their personalities, their values — oozing through the work. Reminds me of Jony Ive’s description of when he first encountered a Mac:

I was shocked that I had a sense for the people who made it. They could’ve been in the room. You really had a sense of what was on their minds, and their values, and their joy and exuberance in making something that they knew was helpful.

This is precisely why I love the websites of individuals: their visual language is as varied as the humans behind them — I mean, just look at the websites of these individuals and small teams. You immediately get a sense for the people behind them. I love it!
I quite enjoyed this talk. Some of the technical details went over my head (I don’t know what “split 16-bit mask into two 8-bit LUTs” means) but I could still follow the underlying point.

First off, Andreas has a great story at the beginning about how he has a friend with a browser bookmarklet that replaces every occurrence of the word “dependency” with the word “liability”. Can you imagine npm working that way? Inside package.json:

```json
{
  "liabilities": {
    "react": "^19.0.0",
    "typescript": "^5.0.0"
  },
  "devLiabilities": {...}
}
```

But I digress, back to Andreas. He points out that the context of your problems and the context of someone else’s problems do not overlap as often as we might think.

It’s so unlikely that someone else tried to solve exactly our same problem with exactly our same constraints that [their solution or abstraction] will be the most economical or the best choice for us. It might be ok, but it won’t be the best thing.

So while we immediately jump to tools built by others, the reality is that their tools were built for their problems and therefore won’t overlap with our problems as much or as often as we’re led to believe.

In Andreas’ example, rather than using a third-party library to parse JSON and turn it into something, he writes his own bespoke parser for the problem at hand. His parser ignores a whole swath of abstractions a more generalized parser solves for, and guess what? His is an order of magnitude faster!

Solving problems in the wrong domain and then gluing things together is always much, much worse [in terms of performance] than solving for what you actually need to solve.

It’s fun watching him step through the performance gains as he goes from a generalized solution to one more tailored to his own specific context. What really resonates in his step-by-step process is how, as problems present themselves, you see how much easier it is to deal with performance issues for stuff you wrote vs. stuff others wrote. Not only that, but you can debug way faster! (Just think of the last time you tried to debug a file 1) you wrote, vs. 2) one you vendored, vs. 3) one you installed deep down in node_modules somewhere.)

Andreas goes from 41MB/s throughput to 1.25GB/s throughput without changing the behavior of the program. He merely removed a bunch of generalized abstractions he wasn’t using and didn’t need. Surprise, surprise: not doing unnecessary things is faster!

You should always consider the unique context of your situation and weigh trade-offs. A “generic” solution means a solution “not tuned for your use case”.
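To illustrate the spirit of that argument (my own toy sketch in Go, not Andreas’ parser): if you control the input’s shape, you can extract just what you need and skip everything a general-purpose JSON parser has to handle. The format assumed below (one flat object per line, a plain integer “size”, no string escapes) is invented for the example.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// sumSizes pulls the integer "size" field out of each record without a
// general-purpose JSON parser. It only works because we control the input:
// one flat object per line, "size" holds a plain integer, no escapes.
func sumSizes(input string) (int64, error) {
	var total int64
	for _, line := range strings.Split(input, "\n") {
		i := strings.Index(line, `"size":`)
		if i < 0 {
			continue
		}
		rest := line[i+len(`"size":`):]
		end := 0
		for end < len(rest) && rest[end] >= '0' && rest[end] <= '9' {
			end++
		}
		n, err := strconv.ParseInt(rest[:end], 10, 64)
		if err != nil {
			return 0, err
		}
		total += n
	}
	return total, nil
}

func main() {
	input := `{"name":"a.txt","size":120}
{"name":"b.txt","size":7}`
	total, err := sumSizes(input)
	fmt.Println(total, err) // 127 <nil>
}
```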
More in programming
The use of std::string should be banned in C++ code bases. I’m sure this statement sounds like heresy and you want to burn me at the stake. But is it really controversial?

Java, C#, Go, JavaScript, Python, Ruby, PHP: they all have immutable strings that are basically 2 machine words: a pointer to the string data and the size of the string. If they have an equivalent of std::string, it’s something like StringBuilder.

C++ should also use immutable strings in 97% of situations. The problem is gravity: the existing code, the culture. They all pull you strongly towards std::string, and going against the current is the hardest thing there is.

There isn’t a standard type for that. You can use the newish std::span<char> but there really should be a std::str (or some such). I did that in SumatraPDF, where I mostly pass char*, but I don’t expect many other C++ code bases to switch away from std::string.
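For a concrete sense of the “2 machine words” layout he’s describing, here’s a tiny illustration in Go, one of the languages he lists (my own example, not from the post):

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	s := "hello, world"

	// A Go string value is just a header: a pointer to the bytes plus a
	// length, i.e. 2 machine words (16 bytes on a 64-bit CPU).
	fmt.Println(unsafe.Sizeof(s)) // 16

	// Slicing reuses the same bytes; only a new header is created.
	fmt.Println(s[:5]) // "hello"

	// The bytes themselves are immutable:
	// s[0] = 'H' // compile error: cannot assign to s[0]
}
```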
This article was originally commissioned by Luca Rossi (paywalled) for refactoring.fm on February 11th, 2025. Luca edited a version of it that emphasized the importance of building “10x engineering teams”. It was later picked up by IEEE Spectrum (!!!), who scrapped most of the teams content and published a different, shorter piece on March […]
The Go team wrote the golang.org/x/sys/windows package to call functions in a Windows DLL. Their way is inefficient and this article describes a better way.

The sys/windows way

To call a function in a DLL, let’s say kernel32.dll, we must:

- load the dll into memory with LoadLibrary
- get the address of a function in the dll
- call the function at that address

Here’s how it looks when you use the sys/windows library:

```go
var (
	libole32         *windows.LazyDLL
	coCreateInstance *windows.LazyProc
)

func init() {
	libole32 = windows.NewLazySystemDLL("ole32.dll")
	coCreateInstance = libole32.NewProc("CoCreateInstance")
}

func CoCreateInstance(rclsid *GUID, pUnkOuter *IUnknown, dwClsContext uint32, riid *GUID, ppv *unsafe.Pointer) HRESULT {
	ret, _, _ := syscall.SyscallN(
		coCreateInstance.Addr(),
		uintptr(unsafe.Pointer(rclsid)),
		uintptr(unsafe.Pointer(pUnkOuter)),
		uintptr(dwClsContext),
		uintptr(unsafe.Pointer(riid)),
		uintptr(unsafe.Pointer(ppv)),
	)
	return HRESULT(ret)
}
```

The problem

The problem is that this is memory inefficient. For every function, all we need is:

- the name of the function, to get its address in a dll. That is a string, so it’s 8 bytes (address of the string) + 8 bytes (size of the string) + the content of the string
- the address of the function, which is 8 bytes on a 64-bit CPU

Unfortunately, in sys/windows each function requires this:

```go
type LazyProc struct {
	Name string
	mu   sync.Mutex
	l    *LazyDLL
	proc *Proc
}

type Proc struct {
	Dll  *DLL
	Name string
	addr uintptr
}

// sync.Mutex
type Mutex struct {
	_  noCopy
	mu isync.Mutex
}

// isync.Mutex
type Mutex struct {
	state int32
	sema  uint32
}
```

Let’s eyeball the size of all those structures:

- LazyProc: 16 + sizeof(Mutex) + 8 + 8 = 32 + sizeof(Mutex)
- Proc: 8 + 16 + 8 = 32
- Mutex: 8

Total: 32 + 32 + 8 = 72, and that’s not counting possible memory padding for allocations. Windows has a lot of functions so this adds up. Additionally, at startup we call NewProc for every function, even if they are not used by the program. This increases startup time.

The better way

What we ultimately need is a uintptr for the address of the function. It’ll be lazily looked up. Let’s say we use 8 functions from ole32.dll. We can use a single array of uintptr values for storing function pointers:

```go
var oleFuncPtrs [8]uintptr
var oleFuncNames = []string{"CoCreateInstance", "CoGetClassObject", ... }

const kCoCreateInstance = 0
const kCoGetClassObject = 1
// etc.

const kFuncMissing = 1

func funcAddrInDLL(dll *windows.LazyDLL, funcPtrs []uintptr, funcIdx int, funcNames []string) uintptr {
	addr := funcPtrs[funcIdx]
	if addr == kFuncMissing {
		// we already tried to look it up and didn't find it
		// this can happen because an older version of Windows might not implement this function
		return 0
	}
	if addr != 0 {
		return addr
	}
	// look up the function by name in the dll
	name := funcNames[funcIdx]
	// ...
	return addr
}
```

In real life this would need multi-threading protection with e.g. a mutex.

Saving on strings

The following is not efficient:

```go
var oleFuncNames = []string{"CoCreateInstance", "CoGetClassObject", ... }
```

In addition to the text of the string, Go needs 16 bytes: 8 for a pointer to the string and 8 for the size of the string. We can be more efficient by storing all the names as a single string:

```go
var oleFuncNames = `
CoCreateInstance
CoGetClassObject
`
```

Only when we’re looking up the function by name do we need to construct a temporary string that is a slice of oleFuncNames.
We need to know the offset and size inside oleFuncNames, which we can cleverly encode as a single number:

```go
// Auto-generated shell procedure identifier: cache index | str start | str past-end.
const (
	_PROC_SHCreateItemFromIDList      _PROC_SHELL = 0 | (9 << 16) | (31 << 32)
	_PROC_SHCreateItemFromParsingName _PROC_SHELL = 1 | (32 << 16) | (59 << 32)
	// ...
)
```

We pack the info into a single number:

- bits 0-15: index of the function in the array of function pointers
- bits 16-31: start of the function name in the multi-name string
- bits 32-47: end of the function name in the multi-name string

This technique requires code generation. It would be too difficult to write those numbers manually.

References

This technique is used in the https://github.com/rodrigocfd/windigo Win32 bindings Go library. See e.g. https://github.com/rodrigocfd/windigo/blob/master/internal/dll/dll_gdi.go
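Here’s a minimal sketch of how decoding such a packed identifier and slicing the multi-name string might look. This is my own illustration with invented names and hand-computed offsets, not windigo’s generated code, which differs in detail:

```go
package dll

import (
	"sync"

	"golang.org/x/sys/windows"
)

// procID packs three values, mirroring the layout above:
// bits 0-15: cache index, bits 16-31: name start, bits 32-47: name end.
type procID uint64

// All function names stored as one string; the packed offsets point into it.
const shellNames = "\nSHCreateItemFromIDList\nSHCreateItemFromParsingName\n"

// Offsets below are hand-computed for shellNames; in practice a code
// generator would emit them.
const (
	procSHCreateItemFromIDList      procID = 0 | (1 << 16) | (23 << 32)
	procSHCreateItemFromParsingName procID = 1 | (24 << 16) | (51 << 32)
)

var (
	shellMu    sync.Mutex
	shellDLL   windows.Handle
	shellProcs [2]uintptr // lazily filled function addresses
)

// procAddr decodes id, then looks up and caches the function's address.
func procAddr(id procID) (uintptr, error) {
	idx := int(id & 0xffff)
	start := int((id >> 16) & 0xffff)
	end := int((id >> 32) & 0xffff)

	shellMu.Lock()
	defer shellMu.Unlock()

	if addr := shellProcs[idx]; addr != 0 {
		return addr, nil
	}
	if shellDLL == 0 {
		h, err := windows.LoadLibrary("shell32.dll")
		if err != nil {
			return 0, err
		}
		shellDLL = h
	}
	name := shellNames[start:end] // temporary string header; the bytes are not copied
	addr, err := windows.GetProcAddress(shellDLL, name)
	if err != nil {
		return 0, err
	}
	shellProcs[idx] = addr
	return addr, nil
}
```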
How a wild side-quest became the source of many of the articles you’ve read—and have come to expect—in this publication
Watch now | Privilege levels, syscall conventions, and how assembly code talks to the Linux kernel