In his book “The Order of Time”, Carlo Rovelli notes how we often ask ourselves questions about the fundamental nature of reality, such as “What is real?” and “What exists?” But those are bad questions, he says. Why?

The adjective “real” is ambiguous; it has a thousand meanings. The verb “to exist” has even more. To the question “Does a puppet whose nose grows when he lies exist?” it is possible to reply: “Of course he exists! It’s Pinocchio!”; or: “No, it doesn’t, he’s only part of a fantasy dreamed up by Collodi.” Both answers are correct, because they are using different meanings of the verb “to exist.”

He notes how Pinocchio “exists” and is “real” as a literary character, but not so far as any official Italian registry office is concerned. To ask oneself in general “what exists” or “what is real” means only to ask how you would like to use a verb and an adjective. It’s a grammatical question, not a question about nature. The point he goes on to make is that our language has...


More from Jim Nielsen’s Blog

The Continuum From Static to Dynamic

Dan Abramov in “Static as a Server”:

Static is a server that runs ahead of time.

“Static” and “dynamic” don’t have to be binaries that describe an entire application architecture. As Dan describes in his post, “static” or “dynamic”, it’s all just computers doing stuff. Computer A requests something (an HTML document, a PDF, some JSON, who knows) from computer B. That request happens via a URL and the response can be computed “ahead of time” or “at request time”. In this paradigm:

“Static” is a server responding ahead of time to anticipated requests with identical responses.
“Dynamic” is a server responding at request time to anticipated requests with varying responses.

But these definitions aren’t binaries; rather, they represent two ends of a spectrum. Ultimately, however you define “static” or “dynamic”, what you’re dealing with is a response generated by a server — i.e. a computer — so the question is really a matter of when you want to respond and with what.

Answering the question of when previously had a really big impact on what kind of architecture you inherited. But I think we’re realizing we need more nimble architectures that can flex and grow in response to changing when a request/response cycle happens and what you respond with.

Perhaps a poor analogy, but imagine you’re preparing holiday cards for your friends and family:

“Static” is the same card sent to everyone
“Dynamic” is a hand-written card to each individual

But between these two are infinite possibilities, such as:

A hand-written card that’s photocopied and sent to everyone
A printed template with the same hand-written note to everyone
A printed template with a different hand-written note for just some people
etc.

Are those examples “static” or “dynamic”? [Cue endless debate]. The beauty is that in proving the space between binaries — between what “static” means and what “dynamic” means — I think we develop a firmer grasp of what we mean by those words, as well as what we’re trying to accomplish with our code.

I love tools that help you think of the request/response cycle across your entire application as an endlessly-changing set of computations that happen either “ahead of time”, “just in time”, or somewhere in-between.
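To make the two ends of that spectrum concrete, here is a minimal sketch (my example, not Dan's) where one URL's response is computed once, ahead of time, and another's is computed at request time. It assumes only Node's built-in node:http module:

```ts
import { createServer } from "node:http";

// Computed "ahead of time": runs once, before any request arrives.
// A build step that writes HTML files to a CDN is this same idea, moved earlier.
const aboutPage = `<p>About this site (generated ${new Date().toISOString()})</p>`;

createServer((req, res) => {
  res.setHeader("Content-Type", "text/html");
  if (req.url === "/about") {
    // "Static": an identical, pre-computed response for every request.
    res.end(aboutPage);
  } else if (req.url === "/now") {
    // "Dynamic": computed at request time, varying per request.
    res.end(`<p>It is now ${new Date().toISOString()}</p>`);
  } else {
    res.statusCode = 404;
    res.end("Not found");
  }
}).listen(3000);
```

Everything in between (caching, periodic regeneration, partial pre-rendering) is just moving work between those two moments.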

The Web as URLs, Not Documents

Dan Abramov on his blog (emphasis mine):

The division between the frontend and the backend is physical. We can’t escape from the fact that we’re writing client/server applications. Some logic is naturally more suited to either side. But one side should not dominate the other. And we shouldn’t have to change the approach whenever we need to move the boundary. What we need are the tools that let us compose across the stack.

What are these tools that allow us to easily change the computation of an application happening between two computers? I think Dan is arguing that RSC is one of these tools. I tend to think of Remix (v1) as one of these tools. Let me try and articulate why by looking at the difference between how we thought of websites in a “JAMstack” architecture vs. how tools (like Remix) are changing that perspective.

JAMstack: a website is a collection of static documents which are created by a static site generator and put on a CDN. If you want dynamism, you “opt out” of a static document for some host-specific solution whose architecture is starkly different from the rest of your site.

Remix: a website is a collection of URLs that follow a request/response cycle handled by a server. Dynamism is “built in” to the architecture and handled on a URL-by-URL basis. You choose how dynamic you want any particular response to be: from a static document on a CDN for everyone, to a custom response on a request-by-request basis for each user.

As your needs grow beyond the basic “static files on disk”, a JAMstack architecture often ends up looking like a microservices architecture where you have disparate pieces that work together to create the final whole: your static site generator here, your lambda functions there, your redirect engine over yonder, each with its own requirements and lifecycles once deployed. Remix, in contrast, looks more like a monolith: your origin server handles the request/response lifecycle of all URLs at the time and in the manner of your choosing.

Instead of a build tool that generates static documents along with a number of distinct “escape hatches” to handle varying dynamic needs, your entire stack is “just a server” (that can be hosted anywhere you host a server) and you decide how and when to respond to each request — beforehand (at build), or just in time (upon request). No architectural escape hatches necessary.

You no longer have to choose upfront whether your site as a whole is “static” or “dynamic”, but rather how much dynamism (if any) is present on a URL-by-URL basis. It’s a sliding scale — a continuum of dynamism — from “completely static, the same for everyone” to “not one line of markup is the same from one request to another”, all of it modeled under the same architecture. And, crucially, that URL-by-URL decision can change as needs change. As Dan Abramov noted in a tweet:

[your] build doesn’t have to be modeled as server. but modeling it as a server (which runs once early) lets you later move stuff around.

Instead of opting into a single architecture up front with escape hatches for every need that breaks the mold, you’re opting in to the web’s natural grain, the request/response cycle, and deciding how to respond on a case-by-case basis.

The web is not a collection of static documents. It’s a collection of URLs — of requests and responses — and tools that align themselves to this grain make composing sites with granular levels of dynamism so much easier.
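A hypothetical sketch of that URL-by-URL model (not Remix's actual API; the render* helpers are stand-in stubs): every route is a plain request-to-response function, and each one picks its own spot on the continuum via standard Cache-Control headers.

```ts
// Stand-in render functions; in a real app these would build real markup.
const renderHome = () => "<h1>Home</h1>";
const renderPosts = () => "<h1>Posts</h1>";
const renderAccount = (req: Request) =>
  `<h1>Account for ${req.headers.get("cookie") ?? "guest"}</h1>`;

// Each URL decides how dynamic its own response is.
export const routes: Record<string, (req: Request) => Response> = {
  // Fully static: same for everyone, cacheable forever at a CDN.
  "/": () =>
    new Response(renderHome(), {
      headers: { "Cache-Control": "public, max-age=31536000, immutable" },
    }),
  // In between: cached at the edge, regenerated in the background.
  "/posts": () =>
    new Response(renderPosts(), {
      headers: { "Cache-Control": "public, s-maxage=60, stale-while-revalidate=600" },
    }),
  // Fully dynamic: computed per request, never shared.
  "/account": (req) =>
    new Response(renderAccount(req), {
      headers: { "Cache-Control": "private, no-store" },
    }),
};
```

Moving a URL from "static" to "dynamic" is then an edit to one entry, not a change of hosting architecture.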

Some Miscellaneous Thoughts on Visual Design Prodded By The Sameness of AI Company Logos

Radek Sienkiewicz in a funny-because-it’s-true piece titled “Why do AI company logos look like buttholes?”:

We made a circular shape [logo] with some angles because it looked nice, then wrote flowery language to justify why our…design is actually profound.

As someone who has grown up through the tumult of the design profession in technology, that really resonates. I’ve worked on lots of projects where I got tired of continually justifying design decisions with language dressed in corporate rationality.

This is part of the allure of code. To most people, code either works or it doesn’t. However bad it might be, you can always justify it with “Yeah, but it’s working.” But visual design is subjective forever. And that’s a difficult space to work in, where you need to forever justify your choices. In that kind of environment, decisions are often made by whoever can come up with the best language to justify their choices, or whoever has the most senior job title. Personally, I found it very exhausting.

As Radek points out, this homogenization justified through seemingly-profound language reveals something deeper about tech as an industry: folks are afraid to stand out too much. Despite claims of innovation and disruption, there's tremendous pressure to look legitimate by conforming to established visual language.

In contrast to this stands the work of individual creators whose work I have always loved — whether it’s individual blogs, videos, or websites, you name it. The individual (and I’ll throw small teams in there too) has a sense of taste that doesn’t dilute through the structure and processes of a larger organization. No single person suggests making a logo that resembles an anus, but when everyone's feedback gets incorporated, that's what often emerges. In other words, no individual would ever recommend what you get through corporate hierarchies.

That’s why I love the work of small teams and individuals. There’s still soul. You can still sense the individuals — their personalities, their values — oozing through the work. Reminds me of Jony Ive’s description of when he first encountered a Mac:

I was shocked that I had a sense for the people who made it. They could’ve been in the room. You really had a sense of what was on their minds, and their values, and their joy and exuberance in making something that they knew was helpful.

This is precisely why I love the websites of individuals: their visual language is as varied as the humans behind them. I mean, just look at the websites of these individuals and small teams. You immediately get a sense for the people behind them. I love it!

Notes from Andreas Fredriksson’s “Context is Everything”

I quite enjoyed this talk. Some of the technical details went over my head (I don’t know what “split 16-bit mask into two 8-bit LTUs” means) but I could still follow the underlying point.

First off, Andreas has a great story at the beginning about how he has a friend with a browser bookmarklet that replaces every occurrence of the word “dependency” with the word “liability”. Can you imagine npm working that way? Inside package.json:

```json
{
  "liabilities": {
    "react": "^19.0.0",
    "typescript": "^5.0.0"
  },
  "devLiabilities": {...}
}
```

But I digress, back to Andreas. He points out that the context of your problems and the context of someone else’s problems do not overlap as often as we might think.

It’s so unlikely that someone else tried to solve exactly our same problem with exactly our same constraints that [their solution or abstraction] will be the most economical or the best choice for us. It might be ok, but it won’t be the best thing.

So while we immediately jump to tools built by others, the reality is that their tools were built for their problems and therefore won’t overlap with our problems as much or as often as we’re led to believe.

In Andreas’ example, rather than using a third-party library to parse JSON and turn it into something, he writes his own bespoke parser for the problem at hand. His parser ignores a whole swath of abstractions a more generalized parser solves for, and guess what? His is an order of magnitude faster!

Solving problems in the wrong domain and then gluing things together is always much, much worse [in terms of performance] than solving for what you actually need to solve.

It’s fun watching him step through the performance gains as he goes from a generalized solution to one more tailored to his own specific context. What really resonates in his step-by-step process is how, as problems present themselves, you see how much easier it is to deal with performance issues for stuff you wrote vs. stuff others wrote. Not only that, but you can debug way faster! (Just think of the last time you tried to debug a file 1) you wrote, vs. 2) one you vendored, vs. 3) one you installed deep down in node_modules somewhere.)

Andreas goes from 41 MB/s throughput to 1.25 GB/s throughput without changing the behavior of the program. He merely removed a bunch of generalized abstractions he wasn’t using and didn’t need. Surprise, surprise: not doing unnecessary things is faster!

You should always consider the unique context of your situation and weigh trade-offs. A “generic” solution means a solution “not tuned for your use case”.
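As a small illustration of the principle (my example, not the parser from Andreas' talk): if all you ever need from a trusted payload is one numeric field, a bespoke scanner can skip everything a general JSON parser must handle.

```ts
// Hypothetical bespoke extractor: assumes the input always contains
// `"count":<digits>` exactly once, an assumption a general parser cannot make.
function extractCount(payload: string): number {
  const key = '"count":';
  let i = payload.indexOf(key) + key.length;
  let n = 0;
  while (i < payload.length && payload[i] >= "0" && payload[i] <= "9") {
    n = n * 10 + (payload.charCodeAt(i) - 48); // 48 is the char code of "0"
    i++;
  }
  return n;
}

const payload = '{"items":[],"count":42}';
console.log(JSON.parse(payload).count); // general: parses the whole document
console.log(extractCount(payload));     // bespoke: touches only the bytes it needs
```

The general solution allocates objects for every key and value; the bespoke one does a single scan. That is the trade Andreas describes: give up generality you were not using, and get the performance back.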

Is It JavaScript?

OH: It’s just JavaScript, right? I know JavaScript. — My coworker who will inevitably spend the rest of the day debugging an electron issue — @jonkuperman.com on BlueSky

“It’s Just JavaScript!” is probably a phrase you’ve heard before. I’ve used it myself a number of times. It gets thrown around a lot, often to imply that a particular project is approachable because it can be achieved writing the same, ubiquitous, standardized scripting language we all know and love: JavaScript. Take what you learned moving pixels around in a browser and apply that same language to running a server and querying a database. You can do both with the same language. It’s Just JavaScript!

But wait, what is JavaScript? Is any code in a .js file “Just JavaScript”? Let’s play a little game I shall call: “Is It JavaScript?”

Browser JavaScript

```js
let el = document.querySelector("#root");
window.location = "https://jim-nielsen.com";
```

That’s DOM stuff, i.e. browser APIs. Is it JavaScript? “If it runs in the browser, it’s JavaScript” seems like a pretty good rule of thumb. But can you say “It’s Just JavaScript” if it only runs in the browser? What about the inverse: code that won’t run in the browser but will run elsewhere?

Server JavaScript

```js
const fs = require('fs');
const content = fs.readFileSync('./data.txt', 'utf8');
```

That will run in Node — or something with Node compatibility, like Deno — but not in the browser. Is it “Just JavaScript”?

Environment Variables

It’s very possible you’ve seen this in a .js file:

```js
const apiUrl = process.env.API_URL;
```

But that’s following a Node convention, which means that particular .js file probably won’t work as expected in a browser but will on a server. Is it “Just JavaScript” if it executes but will only work as expected with special knowledge of runtime conventions?

JSX

What about this file, MyComponent.js?

```js
function MyComponent() {
  const handleClick = () => {/* do stuff */}
  return (
    <Button onClick={handleClick}>Click me</Button>
  )
}
```

That won’t run in a browser. It requires a compilation step to turn it into React.createElement(...) (or maybe even something else) which will run in a browser. Or wait, that can also run on the server. So it can run on a server or in the browser, but now requires a compilation step. Is it “Just JavaScript”?

Pragmas

What about this little nugget?

```js
/** @jsx h */
import { h } from "preact";
const HelloWorld = () => <div>Hello</div>;
```

These are magic comments which affect the interpretation and compilation of JavaScript code (Tom MacWright has an excellent article on the subject). If code has magic comments that direct how it is compiled and subsequently executed, is it “Just JavaScript”?

TypeScript

What about:

```ts
const name: string = "Hello world";
```

You see it everywhere and it seems almost synonymous with JavaScript. Would you consider it “Just JavaScript”?

Imports

It’s very possible you’ve come across a .js file that looks like this at the top:

```js
import icon from './icon.svg';
import data from './data.json';
import styles from './styles.css';
import foo from '~/foo.js';
import foo from 'bar:foo';
```

But a lot of that syntax is non-standard (I’ve written about this topic previously in more detail) and requires some kind of compilation — is this “Just JavaScript”?

Vanilla

Here’s a .js file:

```js
var foo = 'bar';
```

I can run it here (in the browser). I can run it there (on the server). I can run it anywhere. It requires no compiler, no magic syntax, no bundler, no transpiler, no runtime-specific syntax. It’ll run the same everywhere. That seems like it is, in fact, Just JavaScript.

As Always, Context Is Everything

A lot of JavaScript you see every day is non-standard. Even though it might be rather ubiquitous — such as seeing process.env.* — lots of JS code requires you to be “in the know” to understand how it’s actually working, because it’s not following any part of the ECMAScript standard. There are a few vital pieces of context you need in order to understand a .js file, such as:

Which runtime will this execute in? The browser? Something server-side like Node, Deno, or Bun? Or perhaps something else like Cloudflare Workers?
What tools are required to compile this code before it can be executed in the runtime? (vite, esbuild, webpack, rollup, typescript, etc.)
What frameworks are implicit in the code? e.g. are there non-standard globals like Deno.* or special keyword exports like export function getServerSideProps(){...}?

When somebody says “It’s Just JavaScript”, what would be more clear is to say “It’s Just JavaScript for…”, e.g.

It’s just JavaScript for the browser
It’s just JavaScript for Node
It’s just JavaScript for Next.js

So what would you call JavaScript that can run in any of the above contexts? Well, I suppose you would call that “Just JavaScript”.
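For what it's worth, one way to keep a file runnable in several of those contexts (a sketch of mine, not from the post) is to feature-detect the runtime through globalThis instead of assuming any particular global exists:

```ts
// Feature checks via globalThis avoid ReferenceErrors in runtimes
// where a given global (window, process, Deno) doesn't exist.
const g = globalThis as Record<string, any>;

const isBrowser = typeof g.window !== "undefined" && typeof g.document !== "undefined";
const isNode = typeof g.process !== "undefined" && !!g.process.versions?.node;
const isDeno = typeof g.Deno !== "undefined";

console.log({ isBrowser, isNode, isDeno });
```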


More in programming

What is the competitive advantage of authors in the age of LLMs?

Over the past 19 months, I’ve written Crafting Engineering Strategy, a book on creating engineering strategy. I’ve also been working increasingly with large language models at work. Unsurprisingly, the intersection of those two ideas is a topic that I’ve been thinking about a lot. What, I’ve wondered, is the role of the author, particularly the long-form author, in a world where an increasingly large percentage of writing is intermediated by large language models?

One framing I’ve heard somewhat frequently is the view that LLMs are first and foremost a great pillaging of authors’ work. It’s true. They are that. At some point there was a script to let you check which books had been loaded into Meta’s LLaMa, and every book I’d written at that point was included, none of them with my consent. However, I long ago made my peace with plagiarism online, and this strikes me as not particularly different, albeit conducted by larger players. The folks using this writing are going to keep using it beyond the constraints I’d prefer it to be used in, and I’m uninterested in investing my scarce mental energy chasing through digital or legal mazes.

Instead, I’ve been thinking about how this transition might go right for authors. My favorite idea that I’ve come up with is the idea of written content as “datapacks” for thinking. Buy someone’s book / “datapack”, then upload it into your LLM, and you can immediately operate almost as if you knew the book’s content.

Let’s start with an example. Imagine you want help onboarding as an executive, and you’ve bought a copy of The Engineering Executive’s Primer. You could create a project in Anthropic’s Claude and upload the LLM-optimized book into your project. Here is what your Claude project might look like. Once you have it set up, you can ask it to help you create your onboarding plan. This guidance makes sense, largely pulled from Your first 90 days as CTO. As always, you can iterate on your initial prompt (including more details you want to include in the plan) along with follow-ups to improve the formatting and so on.

One interesting thing here is that I don’t currently have a datapack for The Engineering Executive’s Primer! To solve that, I built one from all my blog posts marked with the “executive” tag. I did that using this script that packages Hugo blog posts, which I generated using this prompt with Claude 3.7 Sonnet. The output of that script gets passed into repomix via:

```
repomix --include "`./scripts/tags.py content executive | paste -d, -s -`"
```

The mess with paste is to turn the multiline output from tags.py into a comma-separated list that repomix knows how to use.

This is a really neat pattern, and starts to get at where I see the long-term advantage of writers in the current environment: if you’re a writer and have access to your raw content, you can create a problem-specific datapack to discuss the problem. You can also give that datapack to someone else, or use it to answer their questions.

For example, someone asked me a very detailed followup question about a recent blog post. It was a very long question, and I was on a weekend trip. I already had a Claude project set up with the contents of Crafting Engineering Strategy, so I just passed the question verbatim into that project, and sent the answer back to the person who asked it. (I did have to ask Claude to revise the answer once to focus more on what I thought the most important part of the answer was.) This, for what it’s worth, wasn’t a perfect answer, but it’s pretty good.
If the question asker had the right datapack, they could have gotten it themselves, without needing me to decide to answer it. However, this post is less worried about the reader than it is about the author. What is our competitive advantage as authors in a future where people are not reading our work? Well, maybe they’re still buying our work in the form of datapacks and such, but it certainly seems likely that book sales, like blog traffic, will be impacted negatively. In trade, it’s now possible for machines to understand the thinking we’ve recorded in words over time.

There’s a running joke in my executive learning circle that I’ve written a blog post on every topic that comes up, and that’s kind of true. That means that I am on the cusp of the opportunity to uniquely scale myself by connecting “intelligence on demand for a few cents” with the written details of my thinking built over the past two decades of being a writer who operates. The tools that exist today are not quite there yet, although a combination of selling datapacks like the one for Crafting Engineering Strategy and tools like Claude’s projects are a good start. There are many ways the exact details might come together, but I’m optimistic that writing will become more powerful rather than less in this new world, even if the particular formats change. (For what it’s worth, I don’t think human readers are going away either.)

If you’re interested in the fully fleshed out version of this idea, starting here you can read the full AI Companion to Crafting Engineering Strategy. The datapack will be available via O’Reilly in the next few months. If you’re an existing O’Reilly author who’s skeptical of this idea, don’t worry: I worked with them to sign a custom contract, and this usage (as best I understood it, although I am not a lawyer and am not providing legal advice) is outside the scope of the default contract I signed with my prior book, and presumably most others’ contracts as well.
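The author's real pipeline is the Hugo script plus repomix shown above. As a purely hypothetical sketch of the same "datapack" shape, the following concatenates every Markdown post carrying a given tag into one uploadable file (the file layout and front-matter format are my assumptions):

```ts
import { readFileSync, readdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical: assumes posts are .md files whose `---`-delimited front
// matter contains a line like `tags: ["executive", "strategy"]`.
function buildDatapack(postsDir: string, tag: string): string {
  const chunks: string[] = [];
  for (const name of readdirSync(postsDir)) {
    if (!name.endsWith(".md")) continue;
    const text = readFileSync(join(postsDir, name), "utf8");
    const frontMatter = text.split("---")[1] ?? "";
    if (frontMatter.includes(`"${tag}"`)) {
      chunks.push(`=== ${name} ===\n\n${text}`);
    }
  }
  return chunks.join("\n\n");
}

// One big text file, ready to upload into an LLM project.
writeFileSync("executive-datapack.txt", buildDatapack("content/posts", "executive"));
```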

TypeScript Conditional Types for Type Safety (Without Assertions)

Using conditional types to achieve type safety without having to use 'as'
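The linked post has the full treatment; as a hedged sketch of the general technique, a conditional type can make a function's return type follow its argument type, so callers never reach for as:

```ts
// The return type is computed from the input type by a conditional type.
type Wrapped<T extends string | number> = T extends string ? string[] : number[];

function wrap<T extends string | number>(value: T): Wrapped<T> {
  // One internal assertion; callers get precise types with no `as`.
  return (typeof value === "string" ? value.split(",") : [value]) as Wrapped<T>;
}

const a = wrap("x,y"); // inferred as string[], no assertion at the call site
const b = wrap(42);    // inferred as number[]
```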

Solving LinkedIn Queens with SMT

No newsletter next week: I’ll be speaking at Systems Distributed. My talk isn't close to done yet, which is why this newsletter is both late and short.

Solving LinkedIn Queens in SMT

The article Modern SAT solvers: fast, neat and underused claims that SAT solvers[1] are "criminally underused by the industry". A while back on the newsletter I asked "why": how come they're so powerful and yet nobody uses them? Many experts responded saying the reason is that encoding problems as SAT kinda sucks, and they'd rather use tools that compile to SAT. I was reminded of this when I read Ryan Berger's post on solving “LinkedIn Queens” as a SAT problem.

A quick overview of Queens. You’re presented with an NxN grid divided into N regions, and have to place N queens so that there is exactly one queen in each row, column, and region. While queens can be on the same diagonal, they cannot be adjacently diagonal. (Important note: LinkedIn “Queens” is a variation on the puzzle game Star Battle, which is the same except the number of stars you place in each row/column/region varies per puzzle, and is usually two. This is also why “queens” don’t capture like chess queens.)

Ryan solved this by writing Queens as a SAT problem, expressing properties like "there is exactly one queen in row 3" as a large number of boolean clauses. Go read his post, it's pretty cool. What leapt out to me was that he used CVC5, an SMT solver.[2] SMT solvers are "higher-level" than SAT, capable of handling more data types than just boolean variables. It's a lot easier to solve the problem at the SMT level than at the SAT level. To show this, I whipped up a short demo of solving the same problem in Z3 (via the Python API). Full code here, which you can compare to Ryan's SAT solution here. I didn't do a whole lot of cleanup on it (again, time crunch!), but a short explanation follows below.

The code

```python
from z3 import *  # type: ignore
from itertools import combinations, chain, product

solver = Solver()
size = 9  # N
```

Initial setup and modules. size is the number of rows/columns/regions in the board, which I'll call N below.

```python
# queens[n] = col of queen on row n
# by construction, not on same row
queens = IntVector('q', size)
```

SAT represents the queen positions via N² booleans: q_00 means that a queen is on row 0 and column 0, !q_05 means a queen isn't on row 0 col 5, etc. In SMT we can instead encode it as N integers: q_0 = 5 means that the queen on row 0 is positioned at column 5. This immediately enforces one class of constraints for us: we don't need any constraints saying "exactly one queen per row", because that's embedded in the definition of queens! (Incidentally, using 0-based indexing for the board was a mistake on my part, it makes correctly encoding the regions later really painful.)

To actually make the variables [q_0, q_1, …], we use the Z3 affordance IntVector(str, n) for making n variables at once.

```python
solver.add([And(0 <= i, i < size) for i in queens])

# not on same column
solver.add(Distinct(queens))
```

First we constrain all the integers to [0, N), then use the incredibly handy Distinct constraint to force all the integers to have different values. This guarantees at most one queen per column, which by the pigeonhole principle means there is exactly one queen per column.

```python
# not diagonally adjacent
for i in range(size-1):
    q1, q2 = queens[i], queens[i+1]
    solver.add(Abs(q1 - q2) != 1)
```

One of the rules is that queens can't be adjacent. We already know that they can't be horizontally or vertically adjacent via other constraints, which leaves the diagonals. We only need to add constraints that, for each queen, there is no queen in the lower-left or lower-right corner, aka q_3 != q_2 ± 1. We don't need to check the top corners because if q_1 is in the upper-left corner of q_2, then q_2 is in the lower-right corner of q_1!

That covers everything except the "one queen per region" constraint. But the regions are the tricky part, which we should expect because we vary the difficulty of Queens games by varying the regions.

```python
regions = {
    "purple": [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (0, 7), (0, 8),
               (1, 0), (2, 0), (3, 0), (4, 0), (5, 0), (6, 0), (7, 0), (8, 0),
               (1, 1), (8, 1)],
    "red": [(1, 2), (2, 2), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1), (6, 2),
            (7, 1), (7, 2), (8, 2), (8, 3),],
    # you get the picture
}
# Some checking code left out, see below
```

The region has to be manually coded in, which is a huge pain. (In the link, some validation code follows. Since it breaks up explaining the model, I put it in the next section.)

```python
for r in regions.values():
    solver.add(Or(
        *[queens[row] == col for (row, col) in r]
    ))
```

Finally we have the region constraint. The easiest way I found to say "there is exactly one queen in each region" is to say "there is a queen in region 1 and a queen in region 2 and a queen in region 3", etc. Then to say "there is a queen in region purple" I wrote "q_0 = 0 OR q_0 = 1 OR … OR q_1 = 0", etc.

Why iterate over every position in the region instead of doing something like (0, q[0]) in r? I tried that but it's not an expression that Z3 supports.

```python
if solver.check() == sat:
    m = solver.model()
    print([(l, m[l]) for l in queens])
```

Finally, we solve and print the positions. Running this gives me:

```
[(q__0, 0), (q__1, 5), (q__2, 8), (q__3, 2), (q__4, 7), (q__5, 4), (q__6, 1), (q__7, 3), (q__8, 6)]
```

Which is the correct solution to the queens puzzle. I didn't benchmark the solution times, but I imagine it's considerably slower than a raw SAT solver. Glucose is really, really fast. But even so, solving the problem with SMT was a lot easier than solving it with SAT. That satisfies me as an explanation for why people prefer it to SAT.

Sanity checks

One bit I glossed over earlier was the sanity checking code. I knew for sure that I was going to make a mistake encoding the region, and the solver wasn't going to provide useful information about what I did wrong. In cases like these, I like adding small tests and checks to catch mistakes early, because the solver certainly isn't going to catch them!

```python
all_squares = set(product(range(size), repeat=2))

def test_i_set_up_problem_right():
    assert all_squares == set(chain.from_iterable(regions.values()))
    for r1, r2 in combinations(regions.values(), 2):
        assert not set(r1) & set(r2), set(r1) & set(r2)
```

The first check was a quick test that I didn't leave any squares out, or accidentally put the same square in both regions. Converting the values into sets makes both checks a lot easier. Honestly I don't know why I didn't just use sets from the start, sets are great.

```python
def render_regions():
    colormap = ["purple", "red", "brown", "white", "green",
                "yellow", "orange", "blue", "pink"]
    board = [[0 for _ in range(size)] for _ in range(size)]
    for (row, col) in all_squares:
        for color, region in regions.items():
            if (row, col) in region:
                board[row][col] = colormap.index(color) + 1
    for row in board:
        print("".join(map(str, row)))

render_regions()
```

The second check is something that prints out the regions. It produces something like this:

```
111111111
112333999
122439999
124437799
124666779
124467799
122467899
122555889
112258899
```

I can compare this to the picture of the board to make sure I got it right. I guess a more advanced solution would be to print emoji squares like 🟥 instead.

Neither check is quality code but it's throwaway and it gets the job done so eh.

[1] "Boolean SATisfiability Solver", aka a solver that can find assignments that make complex boolean expressions true. I write a bit more about them here.
[2] "Satisfiability Modulo Theories"

Why Go iterators are ugly, clever and elegant

Go 1.23 adds iterators. An iterator is a way to provide values that can be used in for x := range iter loops. People are happy that iterators were added to the language. Not everyone is happy about HOW they were implemented. This person opined that they demonstrate “typical Go fashion of quite ugly syntax”.

The ugly

Are Go iterators ugly? Here’s the boilerplate of an iterator:

```go
func IterNumbers(n int) func(func(int) bool) {
	return func(yield func(int) bool) {
		// ... the code
	}
}
```

Ok, that is kind of ugly. I can’t imagine typing it from memory.

The competition

We do not live in a vacuum. How do other languages implement iterators?

C++

I recently implemented a DirIter class with an iterator in C++, for SumatraPDF. I did it so that I can write code like for (DirEntry* e : DirIter("c:\\")) { ... } to read the list of files in directory c:\. Implementing it was no fun. I had to implement a class with the following methods:

begin()
end()
DirEntry* operator*()
operator==()
operator!=()
operator++()
operator++(int)

Oh my, that’s a lot of methods to implement. A bigger problem is that the logic is complicated. This is an example of a pull iterator, where the caller “pulls” the next value out of the iterator. The caller needs at least two operations from an iterator:

give me the next value
do you have more values?

In C++ it’s more complicated than that because “Overcomplication” is C++’s middle name.

A function that reads a list of entries in a directory is relatively simple. The difficulty of implementing a pull iterator comes from the need to track the current state of iteration to be able to provide the “give me next value” operation. A simple directory traversal turned into complicated tracking of what I have read so far, whether the process finished, and reading the next directory entry.

C#

C# also has pull iterators, but it removed the incidental complexity present in C++. It reduced the interface to just 2 essential methods:

T Next(), which returns the next element
bool HasMore(), which tells if there are more values to read

Here’s an iterator that returns integers from 1 to n:

```csharp
class NumberIterator {
    private int _current;
    private int _end;

    public NumberIterator(int n) {
        _current = 0;
        _end = n;
    }

    public bool HasMore() {
        return _current < _end;
    }

    public int Next() {
        if (!HasMore()) {
            throw new InvalidOperationException("No more elements.");
        }
        return ++_current;
    }
}
```

Much better, but it still doesn’t solve the big problem: the logic is split across many calls to Next(), so the code needs to track the state.

C# push iterator with yield

Later C# improved this by adding a way to implement a push iterator. An iterator is just a function that “pushes” values to the caller using a yield statement. A push iterator is much simpler:

```csharp
static IEnumerable<int> GetNumbers(int n) {
    for (int i = 1; i <= n; i++) {
        yield return i;
    }
}
```

Clever and elegant

Here’s a Go version:

```go
// Usable with Go 1.23 range-over-func:
//   for n := range GetNumbers(5) { fmt.Println(n) }
func GetNumbers(n int) func(func(int) bool) {
	return func(yield func(int) bool) {
		for i := 1; i <= n; i++ {
			if !yield(i) {
				return
			}
		}
	}
}
```

The clever and elegant part is that Go designers figured out how to implement push iterators in a way very similar to C#’s yield without adding a new keyword. The hard part, the logic of the iterator, is equally simple as with yield.

The yield statement in C# is kind of magic. What actually happens is that the compiler rewrites the code inside-out and turns linear logic into a state machine. Go designers figured out how to implement it using just a function.

It is true that there remains essential complexity: an iterator is a function that returns a function that takes a function as an argument. That is a mind bend, but it can be analyzed.

Instead of a yield statement pushing values to the loop driver, we have a function. This function is synthesized by the compiler and provided to the iterator function. The argument to that function is the value we’re pushing to the loop. It returns a bool to indicate early exit; this is needed to implement early break out of the for loop.

An iterator function returns an iterator object. In Go’s case, the iterator object is a new function. This creates a closure. If a function is an iterator object, then the local variables of the function are the state of the iterator.

I don’t know why Go designers chose this design over yield. I assume the implementation is simpler, so maybe that was the reason. Or maybe they didn’t want to add a new keyword and potentially break existing code.
