More from Computer Things
I know I said we'd be back to normal newsletters this week and in fact had 80% of one already written. Then I unearthed something that was better left buried. Blog post here, Patreon notes here (Mostly an explanation of how I found this horror in the first place). Next week I'll send what was supposed to be this week's piece. (PS: April Cools in three weeks!)
No newsletter next week, I'm teaching a TLA+ workshop. Speaking of which: I spend a lot of time thinking about formal methods (and TLA+ specifically) because it's the source of almost all my revenue. But I don't share most of the details because 90% of my readers don't use FM and never will. I think it's more interesting to talk about ideas from FM that would be useful to people outside that field. For example, the idea of "property strength" translates to the idea that some tests are stronger than others.

Another possible export is how FM approaches nondeterminism. A nondeterministic algorithm is one that, from the same starting conditions, has multiple possible outputs. This is nondeterministic:

```
# Pseudocode
def f() {
  return rand()+1;
}
```

When specifying systems, I may not encounter nondeterminism more often than I do in real systems, but I am definitely more aware of its presence. Modeling nondeterminism is a core part of formal specification. I mentally categorize nondeterminism into five buckets. Caveat: this is specifically about nondeterminism from the perspective of system modeling, not computer science as a whole. If I tried to include stuff on NFAs and amb operations this would be twice as long.1

1. True Randomness

Programs that literally make calls to a random function and then use the results. This is the simplest type of nondeterminism and one of the most ubiquitous.

Most of the time, though, random isn't truly nondeterministic: computer randomness is usually pseudorandom, meaning we seed a deterministic algorithm that behaves "randomly-enough" for some use. You could "lift" a nondeterministic random function into a deterministic one by adding a fixed seed to the starting state.

```
# Python
from random import random, seed

def f(x):
    seed(x)
    return random()

>>> f(3)
0.23796462709189137
>>> f(3)
0.23796462709189137
```

Often we don't do this, because the point of randomness is to provide nondeterminism! We deliberately abstract out the starting state of the seed from our program, because it's easier to think about it as locally nondeterministic. (There's also "true" randomness, like using thermal noise as an entropy source, which I think is mainly used for cryptography and for seeding PRNGs.)

Most formal specification languages don't deal with randomness (though some deal with probability more broadly). Instead, we treat it as a nondeterministic choice:

```
# software
if rand > 0.001 then return a else crash

# specification
either return a or crash
```

This is because we're looking at worst-case scenarios, so it doesn't matter if crash happens 50% of the time or 0.0001% of the time, it's still possible.

2. Concurrency

```
# Pseudocode
global x = 1, y = 0;

def thread1() {
  x++; x++; x++;
}

def thread2() {
  y := x;
}
```

If thread1() and thread2() run sequentially, then (assuming the sequence is fixed) the final value of y is deterministic. If the two functions are started and run simultaneously, then depending on when thread2 executes, y can be 1, 2, 3, or 4. Both functions are locally sequential, but running them concurrently leads to global nondeterminism.
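To make that concrete, here's a minimal Python sketch of the same two threads (my illustration, not from the original example); run it a few times and the tally of observed y values changes:

```python
# Python: observing the race between thread1 and thread2 from the pseudocode.
# The spread of results depends on the scheduler and timing, so any single
# run may only show a subset of the possible values of y.
import threading
from collections import Counter

def run_once():
    state = {"x": 1, "y": 0}

    def thread1():
        for _ in range(3):
            state["x"] += 1

    def thread2():
        state["y"] = state["x"]

    t1 = threading.Thread(target=thread1)
    t2 = threading.Thread(target=thread2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return state["y"]

# y can be 1, 2, 3, or 4 depending on when thread2 reads x.
print(Counter(run_once() for _ in range(1_000)))
```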
Concurrency is arguably the most dramatic source of nondeterminism. Small amounts of concurrency lead to huge explosions in the state space. We have words for the specific kinds of nondeterminism caused by concurrency, like "race condition" and "dirty write". Often we think about it as a separate topic from nondeterminism. To some extent it "overshadows" the other kinds: I have a much easier time teaching students about concurrency in models than nondeterminism in models.

Many formal specification languages have special syntax/machinery for the concurrent aspects of a system, and generic syntax for other kinds of nondeterminism. In P that's choose. Others don't special-case concurrency, instead representing it as nondeterministic choices by a global coordinator. This is more flexible but also more inconvenient, as you have to implement process-local sequencing code yourself.

3. User Input

One of the most famous and influential programming books is The C Programming Language by Kernighan and Ritchie. The first example of a nondeterministic program appears on page 14. For the newsletter readers who get text-only emails,2 here's the program:

```
#include <stdio.h>

/* copy input to output; 1st version */
main() {
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}
```

Yup, that's nondeterministic. Because the user can enter any string, any call of main() could have any output, meaning the number of possible outcomes is infinite.

Okay, that seems a little cheap, and I think it's because we tend to think of determinism in terms of how the user experiences the program. Yes, main() has an infinite number of possible user inputs, but for each input the user will experience only one possible output.

It starts to feel more nondeterministic when modeling a long-running system that's reacting to user input, for example a server that runs a script whenever the user uploads a file. This can be modeled with nondeterminism and concurrency: we have one execution that's the system, and one nondeterministic execution that represents the effects of our user. (One intrusive thought I sometimes have: any "yes/no" dialogue actually has three outcomes: yes, no, or the user getting up and walking away without picking a choice, permanently stalling the execution.)

4. External Forces

The more general version of "user input": anything where either 1) some part of the execution outcome depends on retrieving external information, or 2) the external world can change some state outside of your system. I call the distinction between the internal and external components of the system the world and the machine.

Simple examples: code that at some point reads an external temperature sensor. Unrelated code running on a system which quits programs if it gets too hot. API requests to a third-party vendor. Code processing files, where users can delete files before the script gets to them.

Like with PRNGs, some of these cases don't have to be nondeterministic; we can argue that "the temperature" should be a virtual input into the function. Like with PRNGs, we treat it as nondeterministic because it's useful to think in that way. Also, what if the temperature changes between starting a function and reading the sensor?

External forces are also a source of nondeterminism as uncertainty. Measurements in the real world often come with errors, so repeating a measurement twice can give two different answers. Sometimes operations fail for no discernible reason, or for a non-programmatic reason (like something physically blocking the sensor). All of these situations can be modeled in the same way as user input: a concurrent execution making nondeterministic choices.
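As a small illustration of that last point (my own sketch, not from the newsletter), you can treat an external reading as a nondeterministic input and check the code against every value it could plausibly take, including a failed read:

```python
# Python sketch: an external temperature reading treated as a nondeterministic
# input. fan_speed is a hypothetical stand-in for real control logic.
def fan_speed(reading):
    if reading is None:      # sensor blocked, or the read failed outright
        return "max"         # fail safe
    if reading > 80:
        return "max"
    if reading > 50:
        return "medium"
    return "low"

# Instead of asking "what will the sensor say?", enumerate what it *could* say
# and check that the property we care about holds for every choice.
for reading in [None, -40, 0, 50, 51, 80, 81, 500]:
    assert fan_speed(reading) in {"low", "medium", "max"}
```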
5. Abstraction

This is where nondeterminism in system models and in "real software" differ the most. I said earlier that pseudorandomness is arguably deterministic, but we abstract it into nondeterminism. More generally, nondeterminism hides implementation details of deterministic processes.

In one consulting project, we had a machine that received a message, parsed a lot of data from the message, went into a complicated workflow, and then entered one of three states. The final state was totally deterministic on the content of the message, but the actual process of determining that final state took tons and tons of code. None of that mattered at the scope we were modeling, so we abstracted it all away: "on receiving a message, nondeterministically enter state A, B, or C."

Doing this makes the system easier to model. It also makes the model more sensitive to possible errors. What if the workflow is bugged and sends us to the wrong state? That's already covered by the nondeterministic choice! Nondeterministic abstraction gives us the potential to pick the worst-case scenario for our system, so we can prove it's robust even under those conditions.
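Here's a small Python sketch of that move (mine, not from the consulting project): the complicated workflow collapses into a choice over three states, and checking the model means checking every choice, worst case included:

```python
# Python sketch: abstracting "a big deterministic workflow picks the state"
# into "nondeterministically enter A, B, or C". handle_state is a hypothetical
# stand-in for whatever the system does after the workflow finishes.
POSSIBLE_STATES = ["A", "B", "C"]

def handle_state(state):
    return {"A": "ack", "B": "retry", "C": "alert"}[state]

# The model doesn't care *which* state the workflow picks, or why; it only
# checks that the rest of the system copes with every possibility.
for state in POSSIBLE_STATES:
    response = handle_state(state)
    assert response is not None   # the system has an answer for every state
```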
I know I beat the "nondeterminism as abstraction" drum a whole lot, but that's because it's the insight from formal methods I personally value the most: nondeterminism is a powerful tool for simplifying how we reason about things. You can see the same approach in how I model users and external forces: complex realities black-boxed and simplified into nondeterministic forces on the system.

Anyway, I hope this collection of ideas from formal methods is useful to my broader readership. Lemme know if it somehow helps you out!

1. I realized after writing this that I already wrote an essay about nondeterminism in formal specification just under a year ago. I hope this one covers enough new ground to be interesting! ↩

2. There is a surprising number of you. ↩

I occasionally receive emails asking me to look at the writer's new language/library/tool. Sometimes it's in an area I know well, like formal methods. Other times, I'm a complete stranger to the field. Regardless, I'm generally happy to check it out.

When starting out, this is the biggest question I'm looking to answer: What does this technology make easy that's normally hard? What justifies me learning and migrating to a new thing, as opposed to fighting through my problems with the tools I already know? The new thing has to have some sort of value proposition, which could be something like "better performance" or "more secure". The most universal value, and the most direct to show, is "takes less time and mental effort to do something". I can't accurately judge two benchmarks, but I can see two demos or code samples and compare which one feels easier to me.

Examples

Functional programming

What drew me originally to functional programming was higher-order functions.

```
# Without HOFs
out = []
for x in input {
  if test(x) {
    out.append(x)
  }
}

# With HOFs
filter(test, input)
```

We can also compare the easiness of various tasks between examples within the same paradigm. If I know FP via Clojure, what could be appealing about Haskell or F#? For one, null safety is a lot easier when I've got option types.

Array Programming

Array programming languages like APL or J make certain classes of computation easier. For example, finding all of the indices where two arrays match. Here it is in Python:

```
x = [1, 4, 2, 3, 4, 1, 0, 0, 0, 4]
y = [2, 3, 1, 1, 2, 3, 2, 0, 2, 4]

>>> [i for i, (a, b) in enumerate(zip(x, y)) if a == b]
[7, 9]
```

And here it is in J:

```
x =: 1 4 2 3 4 1 0 0 0 4
y =: 2 3 1 1 2 3 2 0 2 4
I. x = y
7 9
```

Not every tool is meant for every programmer, because you might not have any of the problems a tool makes easier. What comes up more often for you: filtering a list, or finding all the indices where two lists match? Statistically speaking, functional programming is more useful to you than array programming. But I have this problem often enough to justify learning array programming.

LLMs

I think a lot of the appeal of LLMs is that they make a lot of specialist tasks easy for nonspecialists. One thing I recently did was convert some rst list tables to csv tables. Normally I'd have to write some tricky parsing and serialization code to automatically convert between the two. With LLMs, it's just

```
Convert the following rst list-table into a csv-table: [table]
```

"Easy" can trump "correct" as a value. The LLM might get some translations wrong, but it's so convenient I'd rather manually review all the translations for errors than write a specialized script that is correct 100% of the time.

Let's not take this too far

A college friend once claimed that he cracked the secret of human behavior: humans do whatever makes them happiest. "What about the martyr who dies for their beliefs?" "Well, in their last second of life they get REALLY happy."

We can do the same here, fitting every value proposition into the frame of "easy". CUDA makes it easier to do matrix multiplication. Rust makes it easier to write low-level code without memory bugs. TLA+ makes it easier to find errors in your design. Monads make it easier to sequence computations in a lazy environment. Making everything about "easy" obscures other reasons for adopting new things.

That whole "simple vs easy" thing

Sometimes people think that "simple" is better than "easy", because "simple" is objective and "easy" is subjective.
This comes from the famous talk Simple Made Easy. I'm not sure I agree that simple is better or more objective: the speaker claims that polymorphism and typeclasses are "simpler" than conditionals, and I doubt everybody would agree with that. The problem is that "simple" is used to mean both "not complicated" and "not complex". And everybody agrees that "complicated" and "complex" are different, even if they can't agree on what the difference is. This idea should probably be expanded into its own newsletter.

It's also a lot harder to pitch a technology on being "simpler". Simplicity by itself doesn't make a tool better equipped to solve problems. Simplicity can unlock other benefits, like compositionality or tractability, that provide the actual value. And often that value is in the form of "makes some tasks easier".
I'm making a more focused effort to juggle this year. Mostly boxes, but also classic balls too.1 I've gotten to the point where I can almost consistently do a five-ball cascade, which I thought was the cutoff to being a "good juggler". "Thought" because I now know a "good juggler" is one who can do the five-ball cascade with outside throws. I know this because I can't do the outside five-ball cascade... yet. But it's something I can see myself eventually mastering, unlike the slightly more difficult trick of the five-ball mess, which is impossible for mere mortals like me.

In theory there is a spectrum of trick difficulties and skill levels. I could place myself on the axis like this:

In practice, there are three tiers:

- Toddlers
- Good jugglers who practice hard
- Genetic freaks and actual wizards

And the graph always, always looks like this:

This is the juggler's curse, and it's a three-parter:

- The threshold between you and "good" is the next trick you cannot do.
- Everything below that level is trivial. Once you've gotten a trick down, you can never go back to not knowing it, to appreciating how difficult it was to learn in the first place.2
- Everything above that level is just "impossible". You don't have the knowledge needed to recognize the different tiers.3

So as you get better, the stuff that was impossible becomes differentiable, and you can see that some of it is possible. And everything you learned becomes trivial. So you're never a good juggler until you learn "just one more hard trick". The more you know, the more you know you don't know and the less you know you know.

This is supposed to be a software newsletter

A monad is a monoid in the category of endofunctors, what's the problem? (src)

I think this applies to any difficult topic? Most fields don't have the same stark spectral lines as juggling, but there are still tiers of difficulty to techniques, which get compressed the further in either direction they are from your current level.

Like, I'm not good at formal methods. I've written two books on it, but I've never mastered a dependently-typed language or a theorem prover. Those are equally hard. And I'm not good at modeling concurrent systems, because I don't understand the formal definition of bisimulation and haven't implemented a Raft. Those are also equally hard, in fact exactly as hard as mastering a theorem prover. At the same time, the skills I've already developed are easy: properly using refinement is exactly as easy as writing a wrapped counter. Then I get surprised when I try to explain strong fairness to someone and they just don't get how □◇(ENABLED 〈A〉ᵥ) is obviously different from ◇□(ENABLED 〈A〉ᵥ). Juggler's curse!

Now I don't know if this is actually how everybody experiences expertise or if it's just my particular personality— I was a juggler long before I was a software developer. Then again, I'd argue that lots of people talk about one consequence of the juggler's curse: imposter syndrome. If you constantly think what you know is "trivial" and what you don't know is "impossible", then yeah, you'd start feeling like an imposter at work real quick.

I wonder if part of the cause is that a lot of the skills you have to learn are invisible. One of my favorite blog posts ever is In Defense of Blub Studies, which argues that software expertise comes through understanding "boring" topics like "what all of the error messages mean" and "how to use a debugger well". Blub is a critical part of expertise and takes a lot of hard work to learn, but it feels like trivia.
So looking back on a skill I mastered, I might think it was "easy" because I'm not including all of the blub that I had to learn, too.

The takeaway, of course, is that the outside five-ball cascade is objectively the cutoff between good jugglers and toddlers.

1. Rant time: I love cigar box juggling. It's fun, it's creative, it's totally unlike any other kind of juggling. And it's so niche I straight up cannot find anybody in Chicago to practice with. I once went to a juggling convention and was the only person with a cigar box set there. ↩

2. This particular part of the juggler's curse is also called the curse of knowledge or "expert blindness". ↩

3. This isn't Dunning-Kruger, because DK says that people think they are better than they actually are, and also may not actually be real. ↩
More in programming
At first, it sounds obvious: if we want to save the planet, we should do less. Fewer people, less consumption, smaller footprints. I believed this too—so much so that I once thought having kids was irresponsible. But the more I looked into it, the less sense it made.
A deep dive into Testing Library's .toBeVisible() and .toBeInTheDocument() matchers, exploring their differences, use cases, and best practices
A week ago, somebody added malicious code to the tj-actions/changed-files GitHub Action. If you used the compromised action, it would leak secrets to your build log. Those build logs are public for public repositories, so anybody could see your secrets. Scary!

Mutable vs immutable references

This attack was possible because it’s common practice to refer to tags in a GitHub Actions workflow, for example:

```
jobs:
  changed_files:
    ...
    steps:
      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v2
        ...
```

At a glance, this looks like an immutable reference to an already-released “version 2” of this action, but actually this is a mutable Git tag. If somebody changes the v2 tag in the tj-actions/changed-files repo to point to a different commit, this action will run different code the next time it runs.

If you specify a Git commit ID instead (e.g. a5b3abf), that’s an immutable reference that will run the same code every time.

Tags vs commit IDs is a tradeoff between convenience and security. Specifying an exact commit ID means the code won’t change unexpectedly, but tags are easier to read and compare.

Do I have any mutable references?

I wasn’t worried about this particular attack because I don’t use tj-actions, but I was curious about what other GitHub Actions I’m using. I ran a short shell script in the folder where I have local clones of all my repos:

```
find . -path '*/.github/workflows/*' -type f -name '*.yml' -print0 \
  | xargs -0 grep --no-filename "uses:" \
  | sed 's/\- uses:/uses:/g' \
  | tr '"' ' ' \
  | awk '{print $2}' \
  | sed 's/\r//g' \
  | sort \
  | uniq --count \
  | sort --numeric-sort
```

This prints a tally of all the actions I’m using. Here’s a snippet of the output:

```
      1 hashicorp/setup-terraform@v3
      2 dtolnay/rust-toolchain@v1
      2 taiki-e/create-gh-release-action@v1
      2 taiki-e/upload-rust-binary-action@v1
      4 actions/setup-python@v4
      6 actions/cache@v4
      9 ruby/setup-ruby@v1
     31 actions/setup-python@v5
     58 actions/checkout@v4
```

I went through the entire list and thought about how much I trust each action and its author.

Is it from a large organisation like actions or ruby? They’re not perfect, but they’re likely to have good security procedures in place to protect against malicious changes.

Is it from an individual developer or small organisation? Here I tend to be more wary, especially if I don’t know the author personally. That’s not to say that individuals can’t have good security, but there’s more variance in the security setup of random developers on the Internet than among big organisations.

Do I need to use somebody else’s action, or could I write my own script to replace it? This is what I generally prefer, especially if I’m only using a small subset of the functionality offered by the action. It’s a bit more work upfront, but then I know exactly what it’s doing and there’s less churn and risk from upstream changes.

I feel pretty good about my list. Most of my actions are from large organisations, and the rest are a few actions specific to my Rust command-line tools, which are non-critical toys where the impact of a compromised GitHub repo would be relatively slight.

How this script works

This is a classic use of Unix pipelines, where I’m chaining together a bunch of built-in text processing tools. Let’s step through how it works.

```
find . -path '*/.github/workflows/*' -type f -name '*.yml' -print0
```

This looks for any files ending in .yml in a folder like .github/workflows/.
It prints a list of filenames, like:

```
./alexwlchan.net/.github/workflows/build_site.yml
```

The -print0 flag means the filenames are separated with a null byte (\0) between them, which makes it possible to split the filenames in the next step. By default it uses a newline, but a null byte is a bit safer, in case you have filenames which include newline characters.

I only use .yml as a file extension, but if you sometimes use .yaml, you can replace -name '*.yml' with \( -name '*.yml' -o -name '*.yaml' \). If there are folders you want to skip, you can exclude them with extra -path rules, like -not -path './cpython/*'.

```
xargs -0 grep --no-filename "uses:"
```

This uses xargs to go through the filenames one-by-one. The -0 flag tells it to split on the null byte, and then it runs grep to look for lines that include "uses:" – this is how you use an action in your workflow file. The --no-filename option means this just prints the matching line, and not the name of the file it comes from.

Not all of my files are formatted or indented consistently, so the output is quite messy:

```
- uses: actions/checkout@v4
```

```
sed 's/\- uses:/uses:/g'
```

Whether a line starts with a dash depends on whether uses: is the first key in the YAML dictionary. This sed command replaces "- uses:" with "uses:" to start tidying up the data:

```
uses: actions/checkout@v4
```

sed is a pretty powerful tool for making changes to text, but I only know a couple of simple commands, like this pattern for replacing text: sed 's/old/new/g'.

```
tr '"' ' '
```

Some of the lines wrap the action name in quotes; this strips the quotes out, so every line looks like:

```
uses: actions/checkout@v4
```

I could have used sed to make this substitution as well. I reached for tr because I've been using it for longer, and the syntax is simpler for doing single character substitutions: tr '<oldchar>' '<newchar>'.

```
awk '{print $2}'
```

This prints the second word on each line, which is the action reference itself:

```
actions/checkout@v4
```

awk is another powerful text utility that I’ve never learnt properly – I only know how to print the nth word in a string. It has a lot of pattern-matching features I’ve never tried.

```
sed 's/\r//g'
```

Some of the lines end with a carriage return (\r), and those were included in the awk output. This command gets rid of them, which makes the data more consistent for the final step.

```
sort | uniq --count | sort --numeric-sort
```

This sorts the lines, counts the repeated entries, and sorts again by that count to produce the final tally:

```
      6 actions/cache@v4
```

This step-by-step approach is how I build Unix text pipelines: I can write a step at a time, and gradually refine and tweak the output until I get the result I want. There are lots of ways to do it, and because this is a script I’ll use once and then discard, I don’t have to worry too much about doing it in the “purest” way – as long as it gets the right result, that’s good enough.

If you use GitHub Actions, you might want to use this script to check your own actions, and see what you’re using. But more than that, I recommend becoming familiar with the Unix text processing tools and pipelines – even in the age of AI, they’re still a powerful and flexible way to cobble together one-off scripts for processing data.
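If you'd rather prototype the same check in Python, here's a rough equivalent of the pipeline (my sketch, not the author's script); it assumes each repo is cloned one level below the folder you run it from:

```python
# Tally the GitHub Actions referenced by workflow files in local clones.
# Assumes a layout like ./<repo>/.github/workflows/*.yml, matching the
# find command above for repos cloned directly under the current folder.
import re
from collections import Counter
from pathlib import Path

uses = Counter()
for path in Path(".").glob("*/.github/workflows/*.yml"):
    for line in path.read_text().splitlines():
        # Match `uses: owner/action@ref`, with or without quotes.
        match = re.search(r'uses:\s*"?([^\s"]+)', line)
        if match:
            uses[match.group(1)] += 1

# Print the tally, least-used first, like the sorted pipeline output.
for action, count in sorted(uses.items(), key=lambda kv: kv[1]):
    print(f"{count:>7} {action}")
```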
JavaScript went against the grain in only using floating point numbers initially, and now we pay the price
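For a quick illustration of what that price is (my example, shown in Python, whose floats are the same IEEE-754 doubles that JavaScript numbers are built on):

```python
# IEEE-754 doubles represent every integer exactly only up to 2**53; above
# that, adjacent integers start to collapse together. An all-floats number
# type inherits exactly this limit.
big = float(2**53)
print(big == big + 1)      # True: 2**53 + 1 rounds back down to 2**53
print(0.1 + 0.2 == 0.3)    # False: classic binary-fraction rounding
```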
Immortality always sounded like a curse to me. But especially now, having passed the halfway point of the average wealthy male life expectancy. Another scoop of life as big as the one I've already been served seems more than enough, thank you very much.

Does that strike you as morbid? It's funny, people seem to have no problem understanding satiation when it comes to the individual parts of life. Enough delicious cake, no more rides on the rollercoaster, the end of a great party. But not life itself. Why?

The eventual end strikes me as beautiful relief. Framing the idea that you can see enough, do enough, be enough. And have enjoyed the bulk of it, without wanting it to go on forever.

Have you seen Highlander? It got panned on its initial release in the 80s. Even Sean Connery couldn't save it with the critics at the time. But I love it. It's one of my all-time favorite movies. It's got a silly story about a worldwide tournament of immortal Highlanders who live forever, lest they get their heads chopped off, and then the last man standing wins... more life? Yeah, it doesn't actually make a lot of sense. But it nails the sadness of forever. The loneliness, the repetition, the inevitable cynicism with humanity. Who wants to live forever, indeed.

It's the same theme in Björk's wonderfully melancholic song I've Seen It All. It's a great big world, but eventually every unseen element will appear as but a variation on an existing theme. Even surprise itself will succumb to familiarity.

Even before the last day, you can look forward to finality, too. I love racing, but I'm also drawn to the day when the reflexes finally start to fade, and I'll hang up the helmet. One day I will write the last line of Ruby code, too. Sell the last subscription. Write the last tweet. How merciful.

It gets harder with people you love, of course. Harder to imagine the last day with them. But I didn't know my great-great-grandfather, and can easily picture him passing with the satisfaction of seeing his lineage carry on without him.

One way to think of this is to hold life with a loose grip. Like a pair of drumsticks. I don't play, but I'm told that the music flows better when you avoid strangling them in a death grip. And then you enjoy keeping the beat until the song ends. Amor fati. Amor mori.