More from wingolog
Hey all, the video of my FOSDEM talk on Whippet is up; slides here, if that's your thing. I ended the talk with some puzzling results around generational collection, which prompted yesterday's post. I don't have a firm answer yet. Or rather, perhaps for the splay benchmark it is to be expected that a generational GC is not great; but there are other benchmarks that also show suboptimal throughput in generational configurations. Surely it is some tuning issue; I'll be looking into it. Happy hacking!
Usually in this space I like to share interesting things that I find out; you might call it a research-epistle-publish loop. Today, though, I come not with answers, but with questions, or rather one question, but with fractal surface area: what is the value proposition of generational garbage collection?

The conventional wisdom is encapsulated in a 2004 Blackburn, Cheng, and McKinley paper, "Myths and Realities: The Performance Impact of Garbage Collection", which compares whole-heap mark-sweep and copying collectors to their generational counterparts, using the Jikes RVM as a test harness. (It also examines a generational reference-counting collector, which is an interesting predecessor to the 2022 LXR work by Zhao, Blackburn, and McKinley.)

The paper finds that generational collectors spend less time than their whole-heap counterparts for a given task. This is mainly due to less time spent collecting, because generational collectors avoid tracing/copying work for older objects that mostly stay in the same place in the live object graph. The paper also notes an improvement in mutator time under generational GC, but only for the generational mark-sweep collector, which it attributes to the locality and allocation-speed benefits of bump-pointer allocation in the nursery. For copying collectors, however, generational GC tends to slow down the mutator, probably because of the write barrier; but in the end, lower collector times still led to lower total times. So, I expected generational collectors to always exhibit lower wall-clock times than whole-heap collectors.

In Whippet, I have a garbage collector with an abstract API that specializes at compile-time to the mutator's object and root-set representation and to the collector's allocator, write barrier, and other interfaces. I embed it in Whiffle, a simple Scheme-to-C compiler that can run some small multi-threaded benchmarks, for example the classic Gabriel benchmarks. We can then test those benchmarks against different collectors, mutator (thread) counts, and heap sizes. I expect that the generational parallel copying collector takes less time than the whole-heap parallel copying collector.

So, I ran some benchmarks. Take the splay-tree benchmark, derived from Octane's splay.js. I have a port to Scheme, and the results are... not good!

In this graph, the "pcc" series is the whole-heap copying collector, and "generational-pcc" is the generational counterpart, with a nursery sized such that after each collection, its size is 2 MB times the number of active mutator threads in the last collection. So, for this test with eight threads, on my 8-core Ryzen 7 7840U laptop, the nursery is 16 MB including the copy reserve, which happens to be the same size as the L3 on this CPU. New objects are kept in the nursery one cycle before being promoted to the old generation.

There are also results for the "mmc" and "generational-mmc" collectors, which use an Immix-derived algorithm that allows for bump-pointer allocation but doesn't require a copy reserve. There, the generational collectors use a sticky mark-bit algorithm, which has very different performance characteristics: promotion is in-place, and the nursery is as large as the available heap size.

The salient point is that at all heap sizes, and for these two very different configurations (mmc and pcc), generational collection takes more time than whole-heap collection. It's not just the splay benchmark either; I see the same thing for the very different nboyer benchmark.
What is the deal? I am honestly quite perplexed by this state of affairs. I wish I had a narrative to tie this together, but in lieu of that, voici some propositions and observations.

Sometimes people say that the reason generational collection is good is because you get bump-pointer allocation, which has better locality and allocation speed. This is misattribution: it's bump-pointer allocators that have these benefits. You can have them in whole-heap copying collectors, or you can have them in whole-heap mark-compact or Immix collectors that bump-pointer allocate into the holes. Or, true, you can have them in generational collectors with a copying nursery but a freelist-based mark-sweep allocator for the old generation. But you can also have generational collectors without bump-pointer allocation, for example free-list sticky-mark-bit collectors. To simplify this panorama to "generational collectors have good allocators" is incorrect.

It's true, generational GC does lower median pause times. But because a major collection is usually slightly more work under generational GC than in a whole-heap system, because of e.g. the need to reset remembered sets, the maximum pauses are just as big and even a little bigger. I am not even sure that it is meaningful to compare median pause times between generational and non-generational collectors, given that the former perform possibly orders of magnitude more collections than the latter. Doing fewer whole-heap traces is good, though, and in the ideal case the less frequent major traces under generational collectors allow time for concurrent tracing, which is the true mitigation for long pause times.

Could it be that the test harness I am using is in some way unrepresentative? I don't have more than one test harness for Whippet yet. I will start work on a second Whippet embedder within the next few weeks, so perhaps we will have an answer there. Still, there is ample time spent in GC pauses in these benchmarks, so surely as a GC workload Whiffle has some utility.

One reason that Whiffle might be unrepresentative is that it is an ahead-of-time compiler, whereas nursery addresses are assigned at run-time. Whippet exposes the necessary information to allow a just-in-time compiler to specialize write barriers, for example the inline check that the field being mutated is not in the nursery, but an AOT compiler can't encode this as an immediate. It seems a small detail, though. Also, Whiffle doesn't do much compiler-side work to elide write barriers. Could the cost of write barriers be over-represented in Whiffle, relative to a production language run-time? Relatedly, Whiffle is just a baseline compiler. It does some partial evaluation but no CFG-level optimization, no contification, no nice closure conversion, no specialization, and so on: is it unrepresentative because it is not an optimizing compiler?

How big should the nursery be? I have no idea. As a thought experiment, consider the case of a 1-kilobyte nursery. It is probably too small to allow the time for objects to die young, so the survival rate at each minor collection would be high. Above a certain survival rate, generational GC is probably a lose, because your program violates the weak generational hypothesis: it introduces a needless copy for all survivors, and a synchronization for each minor GC. On the other hand, a 1 GB nursery is probably not great either. It is plenty large enough to allow objects to die young, but the number of survivor objects in a space that large is such that pause times would not be very low, which is one of the things you would like in generational GC. Also, you lose out on locality: a significant fraction of the objects you traverse are probably out of cache and might even incur TLB misses.

So there is probably a happy medium somewhere. My instinct is that for a copying nursery, you want to make it about as big as L3 cache, which on my 8-core laptop is 16 megabytes. Systems are different sizes, though; in Whippet my current heuristic is to reserve 2 MB of nursery per core that was active in the previous cycle, so if only 4 threads are allocating, you would have an 8 MB nursery. Is this good? I don't know.
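To make that heuristic concrete, here is a minimal sketch in C. The names and structure are hypothetical, not Whippet's actual API; the only facts it encodes are the ones stated above (2 MB per previously-active mutator, copy reserve included in the total).

```c
#include <stdio.h>
#include <stddef.h>

/* Sketch of the nursery-sizing heuristic described above: reserve 2 MB
 * of nursery per mutator thread that was active in the previous
 * collection cycle.  For a semi-space (copying) nursery this total
 * includes the copy reserve.  Names are hypothetical, not Whippet's. */
#define NURSERY_BYTES_PER_ACTIVE_MUTATOR ((size_t)2 * 1024 * 1024)

static size_t gc_compute_nursery_size(size_t active_mutators_last_cycle) {
  if (active_mutators_last_cycle == 0)
    active_mutators_last_cycle = 1;   /* always keep some nursery around */
  return active_mutators_last_cycle * NURSERY_BYTES_PER_ACTIVE_MUTATOR;
}

int main(void) {
  for (size_t threads = 1; threads <= 8; threads *= 2)
    printf("%zu active mutator(s) -> %zu MB nursery (including copy reserve)\n",
           threads, gc_compute_nursery_size(threads) >> 20);
  return 0;
}
```

With 8 active mutators this yields 16 MB, which happens to match the L3 size mentioned above; whether that coincidence is actually a good policy is exactly the open question.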
I don't have a very large set of benchmarks that run on Whiffle, and they might not be representative. I mean, they are microbenchmarks.

One question I had was about heap sizes. If a benchmark's maximum heap size fits in L3, which is the case for some of them, then generational GC is probably a wash, because whole-heap collection stays in cache. When I am looking at benchmarks that evaluate generational GC, I make sure to choose those that exceed L3 size by a good factor, for example the 8-mutator splay benchmark, in which minimum heap size peaks at 300 MB, or the 8-mutator nboyer-5, which peaks at 1.6 GB. But then, should nursery size scale with total heap size? I don't know!

Incidentally, the way that I scale these benchmarks to multiple mutators is a bit odd: they are serial benchmarks, and I just run some number of threads at a time, and scale the heap size accordingly, assuming that the minimum size when there are 4 threads is four times the minimum size when there is just one thread. However, multithreaded programs are unreliable, in the sense that there is no heap size below which they always fail and above which they always succeed.

A generational collector partitions objects into old and new sets, and a minor collection starts by visiting all old-to-new edges, called the "remembered set". As the program runs, mutations to old objects might introduce new old-to-new edges. To maintain the remembered set in a generational collector, the mutator invokes write barriers: little bits of code that run when you mutate a field in an object. This is overhead relative to non-generational configurations, where the mutator doesn't have to invoke collector code when it sets fields.

So, could it be that Whippet's write barriers or remembered set are somehow so inefficient that my tests are unrepresentative of the state of the art? I used to use card-marking barriers, but I started to suspect they caused too much overhead during minor GC and introduced too much cache contention. I switched to precise field-logging barriers some months back for Whippet's Immix-derived space, and we use the same kind of barrier in the generational copying (pcc) collector. I think this is state of the art. I need to see if I can find a configuration that allows me to measure the overhead of these barriers, independently of other components of a generational collector.
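For readers who haven't seen one, here is a rough sketch of what a precise field-logging barrier can look like. This is not Whippet's implementation or API; the types, helper names, and the flat "logged" flags are invented for illustration, and a real collector would keep that metadata in side tables.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* A sketch of a precise field-logging write barrier.  Each field of an
 * old object carries a "logged" flag; the first time such a field is
 * written between collections, its address is appended to a
 * remembered-set buffer so a minor GC can trace the old-to-new edge.
 * All names are hypothetical, not Whippet's API. */

#define NFIELDS 4
struct gc_object {
  uint8_t logged[NFIELDS];                    /* side metadata in a real collector */
  struct gc_object *fields[NFIELDS];
};

#define LOG_CAPACITY 4096
static struct gc_object **edge_log[LOG_CAPACITY];
static size_t edge_log_count;

/* Toy heap layout: everything at or above this address is "old". */
static uintptr_t old_generation_start;

static bool gc_object_is_old(struct gc_object *obj) {
  return (uintptr_t)obj >= old_generation_start;
}

/* The barrier the mutator runs for every field write. */
static void gc_write_field(struct gc_object *obj, size_t i, struct gc_object *val) {
  obj->fields[i] = val;                       /* the store itself */
  if (!gc_object_is_old(obj))                 /* writes to nursery objects need no logging */
    return;
  if (!obj->logged[i]) {                      /* log each field at most once per cycle */
    obj->logged[i] = 1;
    if (edge_log_count < LOG_CAPACITY)
      edge_log[edge_log_count++] = &obj->fields[i];
  }
}

int main(void) {
  static struct gc_object a, b;
  old_generation_start = 0;                   /* toy setup: treat every object as old */
  gc_write_field(&a, 0, &b);                  /* first write to the field: logged */
  gc_write_field(&a, 0, &b);                  /* repeat write: already logged, cheap path */
  printf("remembered-set entries: %zu\n", edge_log_count);  /* prints 1 */
  return 0;
}
```

The overhead in question is the check-and-log on every old-object store, plus whatever it costs the minor GC to walk and reset the log; that is the cost I want to isolate and measure.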
A few months ago, my only generational collector used the sticky mark-bit algorithm, which is an unconventional configuration: its nursery is not contiguous, it is non-moving, and it can be as large as the heap. This is part of the reason that I implemented generational support for the parallel copying collector: to have a different and more conventional collector to compare against. But generational collection loses on some of these benchmarks in both places!

On one benchmark, which repeatedly constructs some trees and then verifies them, I was seeing terrible results for generational GC, which I realized were because of cooperative safepoints: generational GC collects more often, so it requires that all threads reach safepoints more often, and the non-allocating verification phase wasn't emitting any safepoints. I had to change the compiler to emit safepoints at regular intervals (in my case, on function entry), and it sped up the generational collector by a significant amount.

This is one instance of a general observation, which is that any work in a GC pause that doesn't depend on survivor size is more expensive with a generational collector, which runs more collections. Synchronization can be a cost. I had one bug in which tracing ephemerons did work proportional to the size of the whole heap, instead of the nursery; I had to specifically add generational support for the way Whippet deals with ephemerons during a collection to reduce this cost.
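For context, here is a minimal sketch of the kind of cooperative safepoint poll described above: the collector sets a flag, and every mutator checks it on function entry and parks itself if a pause is pending. The names and the simple flag-based mechanism are illustrative only, not Whippet's or Whiffle's actual implementation.

```c
#include <stdio.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Set by the collector when it wants all mutator threads to stop. */
static atomic_bool gc_safepoint_requested;

/* In a real collector this would block on a barrier until the
 * collection finishes; stubbed out for the sketch. */
static void gc_park_current_thread(void) {
  printf("thread parked at a safepoint\n");
  atomic_store_explicit(&gc_safepoint_requested, false, memory_order_release);
}

/* The poll the compiler emits on every function entry.  When no
 * collection is pending, the cost is one load and a predictable branch. */
static inline void gc_safepoint_poll(void) {
  if (atomic_load_explicit(&gc_safepoint_requested, memory_order_acquire))
    gc_park_current_thread();
}

/* A non-allocating function, like the tree-verification phase described
 * above: without the poll it would never reach a safepoint. */
static int verify(int depth) {
  gc_safepoint_poll();
  return depth <= 0 ? 0 : verify(depth - 1);
}

int main(void) {
  atomic_store(&gc_safepoint_requested, true);  /* pretend the collector asked for a pause */
  verify(3);
  return 0;
}
```

The point of the anecdote is that this per-function-entry cost is paid by every mutator, and it is paid proportionally more often when the collector runs more often, as a generational collector does.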
Looking deeper at the data, I have partial answers for the splay benchmark, and they are annoying :) Splay doesn't actually allocate all that much garbage. At a 2.5x heap, the stock parallel MMC collector (in-place, sticky mark-bit) collects... one time. That's all. Same for the generational MMC collector, because the first collection is always major. So at 2.5x we would expect the generational collector to be slightly slower. The benchmark is simply not very good; or perhaps the most generous interpretation is that it represents tasks that allocate 40 MB or so of long-lived data and not much garbage on top.

Also at 2.5x heap, the whole-heap copying collector runs 9 times, and the generational copying collector does 293 minor collections and... 9 major collections. We are not reducing the number of major GCs. It means either the nursery is too small, so objects aren't dying young when they could, or the benchmark itself doesn't conform to the weak generational hypothesis.

At a 1.5x heap, the copying collector doesn't have enough space to run. For MMC, the non-generational variant collects 7 times, and generational MMC times out. Timing out indicates a bug, I think. Annoying! I tend to think that if I get results and there were fewer than, like, 5 major collections for a whole-heap collector, that indicates that the benchmark is probably inapplicable at that heap size, and I should somehow surface these anomalies in my analysis scripts.

Doing a similar exercise for nboyer at 2.5x heap with 8 threads (4 GB for 1.6 GB of live data), I see that pcc did 20 major collections, whereas generational pcc lowered that to 8 major collections and 3471 minor collections. Could it be that there are still too many fixed costs associated with synchronizing for global stop-the-world minor collections? I am going to have to add some fine-grained tracing to find out.

I just don't know! I want to believe that generational collection is an out-and-out win, but I haven't yet been able to prove it. I do have some homework to do. I need to find a way to test the overhead of my write barrier, probably by using the MMC collector and making it only do major collections. I need to fix generational-mmc for splay at a 1.5x heap. And I need to do some fine-grained performance analysis for minor collections in large heaps.

Enough for today. Feedback / reactions very welcome. Thanks for reading and happy hacking!
Good evening, fey readers. Tonight, a note on human rights and human wrongs.

I am in my mid-forties, and so I have seen some garbage governments in my time; one of the worst was Trump's election in 2016. My heart ached in so many ways, but most of all for immigrants in the US. It has always been expedient for a politician to blame problems on those outside the polity, and in recent years it has been open season on immigration: there is always a pundit ready to make immigrants out to be the source of a society's ills, always a publisher ready to distribute their views, always a news channel ready to invite the pundit to discuss these Honest Questions, and never will they actually talk to an immigrant. It gives me a visceral sense of revulsion, as much now as in 2016.

What to do? All of what was happening in my country was at a distance. And there is this funny thing that you don't realize inside the US, in which the weight of the US's cultural influence is such that the concerns of the US are pushed out into the consciousness of the rest of the world and made out to have a kind of singular, sublime importance, outweighing any concern that people in South Africa or Kosovo or Thailand might be having, even to the point of outweighing what is happening in your own country of residence. I remember the horror of Europeans upon hearing of Trump's detention camps (and it is right to be horrified!), but the same people do not know what is happening at and within their own borders, the network of prisons and flophouses and misery created by Europe's own xenophobic project. The cultural weight of the US is such that it can blind the rest of the world into ignorance of the local, and thus to inaction, there at the place where the rest of us actually have power to act.

I could not help immigrants in the US, practically speaking. So I started to help immigrants in France. I joined the local chapter of the Ligue des droits de l'Homme, an organization with a human-rights focus but whose main activity was a weekly legal and administrative advice clinic. I had gone through the immigration pipeline and could help others.

It has been interesting work. One thing that you learn quickly is that not everything that the government does is legal. Sometimes this observation takes the form of an administrative decision that didn't respect the details of a law. Sometimes it's something related to the hierarchy of norms, for example that a law's intent was not reflected in the way it was translated to the operational norms used by, say, asylum processing agents. Sometimes it's that there was no actual higher norm, but that the norms are made by the people who show up, and if it's only the cops that show up, things get tilted copwards.

A human-rights approach is essentially liberal, and I don't know if it is enough in these, the end days of a liberal rule of law. It is a tool. But from what I see, it is enough for me right now: there is enough to do, and I can make enough meaningful progress for people that the gaping hole in my soul opened by the Trumpocalypse has started to close. I found my balm in this kind of work, but there are as many paths as people, and surely yours will be different.

So, friends, here we are in 2025: new liver, same eagles. These are trying times. Care for yourself and for your loved ones. For some of you, that will be as much as you can do; we all have different situations.
But for the rest of us, and especially those who are not really victims of marginalization, we can do things, and it will help people, and what's more, it will make you feel better. Find some comrades, and reach your capacity gradually; you're no use if you burn out. I don't know what is on the other side of this, but in the meantime let's not make it easy for the bastards.
I just found a funny failure mode in the Whippet garbage collector and thought readers might be amused.

Say you have a semi-space nursery and a semi-space old generation. Both are block-structured. You are allocating live data, say, a long linked list. Allocation fills the nursery, which triggers a minor GC, which decides to keep everything in the nursery another round, because that's policy: Whippet gives new objects another cycle in which to potentially become unreachable.

This causes a funny situation! Consider that the first minor GC doesn't actually free anything. But, like, nothing: it's impossible to allocate anything in the nursery after collection, so you run another minor GC, which promotes everything, and you're back to the initial situation; wash, rinse, repeat. Copying generational GC is strictly a pessimization in this case, with the additional insult that it doesn't preserve object allocation order.

Consider also that because copying collectors with block-structured heaps are unreliable, any one of your minor GCs might require more blocks after GC than before. Unlike in the case of a major GC, in which this essentially indicates out-of-memory (either because of a mutator bug or because the user didn't give the program enough heap), for minor GC this is just what we expect when allocating a long linked list.

Therefore we either need to allow a minor GC to allocate fresh blocks (very annoying, and we have to give them back at some point to prevent the nursery from growing over time), or we need to maintain some kind of margin, corresponding to the maximum amount of fragmentation. Or, or, we allow evacuation to fail in a minor GC, in which case we fall back to promotion.

Anyway, I am annoyed and amused and I thought others might share in one or the other of these feelings. Good day and happy hacking!
More in programming
As well as changing the way I organise my writing, last year I made some cosmetic improvements to this site.

I design everything on this site myself, and I write the CSS by hand; I don't use any third-party styles or frameworks. I don't have any design training, and I don't do design professionally, so I use this site as a place to learn and practice my design skills. It's a continual work-in-progress, but I'd like to think it's getting better over time.

I design this site for readers. I write long, text-heavy posts with the occasional illustration or diagram, so I want something that will be comfortable to read and look good on a wide variety of browsers and devices. I get a lot of that "for free" by using semantic HTML and the default styles; most of my CSS is just cosmetic.

Let's go through some of the changes.

Cleaning up the link styles

This is what links used to look like: every page has a tint colour, and then I was deriving different shades to style different links: a darker shade for visited links, a lighter shade for visited links in dark mode, and a background that appears on hover. I'm generating these new colours programmatically, and I was so proud of getting that code working that I didn't stop to think whether it was a good idea.

In hindsight, I see several issues. The tint colour is meant to give the page a consistent visual appearance, but the different shades diluted that effect. I don't think their meaning was especially obvious. How many readers ever worked it out? And the hover styles are actively unhelpful: just as you hover over a link you're interested in, I'm making it harder to read! (At least in light mode; in dark mode, the hover style is barely legible.)

One thing I noticed is that for certain tint colours, the "visited" colour I generated was barely distinguishable from the text colour. So I decided to lean into that in the new link styles: visited links are now the same colour as regular text.

This new set of styles feels more coherent. I'm only using one shade of the tint colour, and I think the meaning is a bit clearer: only new-to-you links get the pop of colour to stand out from the rest of the text. I'm happy to rely on underlines for the links you've already visited. And when you hover, the thick underline means you can see where you are, but the link text remains readable.

Swapping out the font

I swapped out the font, replacing Georgia with Charter. The difference is subtle, so I'd be surprised if anyone noticed. I've always used web-safe fonts for this site: the fonts that are built into web browsers and don't need to be downloaded first. I've played with custom fonts from time to time, but there's no font I like enough to justify the hassle of loading a custom font.

I still like Georgia, but I felt it was showing its age: it was designed in 1993 to look good on low-resolution screens, but looks a little chunky on modern displays. I think Charter looks nicer on high-resolution screens, but if you don't have it installed then I fall back to Georgia.

Making all the roundrects consistent

I use a lot of rounded rectangles for components on this site, including article cards, blockquotes, and code blocks. For a long time they had similar but not identical styles, because I designed them all at different times. There were weird inconsistencies. For example, why does one roundrect have a 2px border, but another one 3px? These are small details that nobody will ever notice directly, but they undermine the sense of visual together-ness.
I've done a complete overhaul of these styles, to make everything look more consistent. I'm leaning heavily on CSS variables, a relatively new CSS feature that I've really come to like. Variables make it much easier to use consistent values in different rules.

I also tweaked the appearance: I've removed another two shades of the tint colour. (Yes, those shades were different from the ones used in links.) Colour draws your attention, so I'm trying to use it more carefully. A link says "click here". A heading says "start here". What does a blockquote or code snippet say? It's just part of the text, so it shouldn't be grabbing your attention. I think the neutral background also makes the syntax highlighting easier to read, because the tint colour isn't clashing with the code colours. I could probably consolidate the shades of grey I'm using, but that's a task for another day.

I also removed the left indent on blockquotes and code blocks: I think it looks nicer to have a flush left edge for everything, and it means you can read more text on mobile screens. (That's where I really felt the issues with the old design.)

What's next?

By tidying up the design and reducing the number of unique elements, I've got a bit of room to add something new. For a while now I've wanted a place at the bottom of posts for common actions, or links to related and follow-up posts. As I do more and more long-form, reflective writing, I want to be able to say "if you liked this, you should read this too". I want something that catches your eye, but doesn't distract from the article you're already reading. Louie Mantia has a version of this that I quite like.

I've held off designing this because the existing pages felt too busy, but now I feel like I have space to add it; there aren't as many clashing colours and components to compete for your attention. I'm still sketching out designs. My current idea is my rounded-rectangle blocks, but with a coloured border instead of a subtle grey; when I did a prototype, though, I felt like it was missing something. I need to try a few more ideas. Watch this space!
One of the biggest mistakes that new startup founders make is trying to get away from the customer-facing roles too early. Whether it's customer support or sales, it's an incredible advantage to have the founders doing that work directly, and for much longer than they find comfortable.

The absolute worst thing you can do is hire a salesperson or a customer service agent too early. You'll miss all the golden nuggets that customers throw at you for free when they're rejecting your pitch or complaining about the product. Seeing these reasons paraphrased or summarized destroys all the nutrients in their insights. You want that whole-grain feedback straight from the customers' mouths!

When we launched Basecamp in 2004, Jason was doing all the customer service himself. And he kept doing it like that for three years!! By the time we hired our first customer service agent, Jason was doing 150 emails/day. The business was doing millions of dollars in ARR. And Basecamp got infinitely better, both as a market proposition and as a product, because Jason could funnel all that feedback into decisions and positioning.

For a long time after that, we did "Everyone on Support": frequently rotating programmers, designers, and founders through a day of answering emails directly to customers. The dividends of doing this were almost as high as having Jason run it all in the early years. We fixed an incredible number of minor niggles and annoying bugs because programmers found it easier to solve the problem than to apologize for why it was there.

It's not easy doing this! Customers often offer their valuable insights wrapped in rude language, unreasonable demands, and bad suggestions. That's why many founders quit the business of dealing with them at the first opportunity. That's why few companies ever do "Everyone on Support". That's why there's such eagerness to reduce support to an AI-only interaction.

But quitting dealing with customers early, not just in support but also in sales, is an incredible handicap for any startup. You don't have to do everything that every customer demands of you, but you should certainly listen to them. And you can't listen well if the sound is being muffled by early layers of indirection.
Humanity's Last Exam by Center for AI Safety (CAIS) and Scale AI
Most of our cultural virtues, celebrated heroes, and catchy slogans align with the idea of "never give up". That's a good default! Most people are inclined to give up too easily, as soon as the going gets hard. But it's also worth remembering that sometimes you really should fold, admit defeat, and accept that your plan didn't work out.

But how do you distinguish between a bad plan and insufficient effort? It's not easy. Plenty of plans look foolish at first glance, especially to people without skin in the game. That's the essence of a disruptive startup: the idea ought to look a bit daft at first glance, or it probably doesn't carry the counter-intuitive kernel needed to really pop.

Yet it's also obviously true that not every daft idea holds the potential to be a disruptive startup. That's why even the best venture capital investors in the world are wrong far more often than they're right. Not because they aren't smart, but because nobody is smart enough to predict (the disruption of) the future consistently. The best they can do is make long bets, and then hope enough of them pay off to fund the ones that don't.

So far, so logical, so conventional. A million words have been written by a million VCs about how their shrewd eyes let them see those hidden disruptive kernels before anyone else could. Good for them. What I'm more interested in knowing is how and when you pivot from a promising bet to folding your hand. When do you accept that no amount of additional effort is going to get that turkey to soar?

I'm asking because I don't have any great heuristics here, and I'd really like to know! Because the ability to fold your hand, and live to play your remaining chips another day, isn't just about startups. It's also about individual projects. It's about work methods. Hell, it's even about politics and societies at large.

I'll give you just one small example. In 2017, Rails 5.1 shipped with new tooling for doing end-to-end system tests, using a headless browser to validate the functionality, as a user would in their own browser. Since then, we've spent an enormous amount of time and effort trying to make this approach work. Far too much time, if you ask me now. This year, we finalized our decision to fold, and to give up on using these types of system tests at the scale we had previously thought made sense. In fact, just last week, we deleted 5,000 lines of code from the Basecamp code base by dropping literally all the system tests that we had carried so diligently for all these years.

I really like this example, because it draws parallels to investing and entrepreneurship so well. The problem with our approach to system tests wasn't that it didn't work at all. If that had been the case, bailing on the approach would have been a no-brainer long ago. The trouble was that it sorta-kinda did work! Some of the time. With great effort. But ultimately it wasn't worth the squeeze.

I've seen this trap snap on startups time and again. The idea finds some traction. Enough for the founders to muddle through for years and years. Stuck with an idea that sorta-kinda does work, but not well enough to be worth a decade of their life. That's a tragic trap.

The only antidote I've found to this on the development side is time boxing. Programmers are just as liable as anyone to believe a flawed design can work if given just a bit more time. And then a bit more. And then just double what we've already spent. The time box provides a hard stop. In Shape Up, it's six weeks. Do or die. Ship or don't. That works.
But what's the right amount of time to give a startup or a methodology or a societal policy? There's obviously no universal answer, but I'd argue that whatever the answer, it's "less than you think, less than you want". Having the grit to stick with the effort when the going gets hard is a key trait of successful people. But having the humility to give up on good bets turned bad might be just as important.
No newsletter next week, I'm teaching a TLA+ workshop. Speaking of which: I spend a lot of time thinking about formal methods (and TLA+ specifically) because it's the source of almost all my revenue. But I don't share most of the details, because 90% of my readers don't use FM and never will. I think it's more interesting to talk about ideas from FM that would be useful to people outside that field. For example, the idea of "property strength" translates to the idea that some tests are stronger than others.

Another possible export is how FM approaches nondeterminism. A nondeterministic algorithm is one that, from the same starting conditions, has multiple possible outputs. This is nondeterministic:

```
# Pseudocode
def f() {
  return rand() + 1;
}
```

When specifying systems, I may not encounter nondeterminism more often than in real systems, but I am definitely more aware of its presence. Modeling nondeterminism is a core part of formal specification. I mentally categorize nondeterminism into five buckets. Caveat: this is specifically about nondeterminism from the perspective of system modeling, not computer science as a whole. If I tried to include stuff on NFAs and amb operations this would be twice as long.1

1. True Randomness

Programs that literally make calls to a random function and then use the results. This is the simplest type of nondeterminism and one of the most ubiquitous.

Most of the time, random isn't truly nondeterministic. Most of the time computer randomness is actually pseudorandom, meaning we seed a deterministic algorithm that behaves "randomly enough" for some use. You could "lift" a nondeterministic random function into a deterministic one by adding a fixed seed to the starting state.

```python
# Python
from random import random, seed

def f(x):
    seed(x)
    return random()

>>> f(3)
0.23796462709189137
>>> f(3)
0.23796462709189137
```

Often we don't do this, because the point of randomness is to provide nondeterminism! We deliberately abstract out the starting state of the seed from our program, because it's easier to think about it as locally nondeterministic. (There's also "true" randomness, like using thermal noise as an entropy source, which I think is mainly used for cryptography and for seeding PRNGs.)

Most formal specification languages don't deal with randomness (though some deal with probability more broadly). Instead, we treat it as a nondeterministic choice:

```
# software
if rand > 0.001 then return a else crash

# specification
either return a or crash
```

This is because we're looking at worst-case scenarios, so it doesn't matter if crash happens 50% of the time or 0.0001% of the time; it's still possible.

2. Concurrency

```
# Pseudocode
global x = 1, y = 0;

def thread1() { x++; x++; x++; }

def thread2() { y := x; }
```

If thread1() and thread2() run sequentially, then (assuming the sequence is fixed) the final value of y is deterministic. If the two functions are started and run simultaneously, then depending on when thread2 executes, y can be 1, 2, 3, or 4. Both functions are locally sequential, but running them concurrently leads to global nondeterminism.

Concurrency is arguably the most dramatic source of nondeterminism. Small amounts of concurrency lead to huge explosions in the state space. We have words for the specific kinds of nondeterminism caused by concurrency, like "race condition" and "dirty write". Often we think about it as a separate topic from nondeterminism. To some extent it "overshadows" the other kinds: I have a much easier time teaching students about concurrency in models than nondeterminism in models.

Many formal specification languages have special syntax/machinery for the concurrent aspects of a system, and generic syntax for other kinds of nondeterminism. In P that's choose. Others don't special-case concurrency, instead representing it as nondeterministic choices by a global coordinator. This is more flexible but also more inconvenient, as you have to implement process-local sequencing code yourself.
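To make that state-space point concrete, here's a small C sketch of my own (not from the newsletter) that brute-forces the interleavings of the example above: thread2's single read can land before, between, or after thread1's three increments, so the final y can be any of 1 through 4.

```c
#include <stdio.h>

/* Brute-force enumeration of the interleavings of the thread1/thread2
 * example: thread1 increments x three times, thread2 copies x into y
 * once.  The only scheduling freedom is where thread2's read lands
 * relative to the three increments, so we just try each position. */
int main(void) {
  for (int reads_after = 0; reads_after <= 3; reads_after++) {
    int x = 1, y = 0;
    for (int i = 0; i < 3; i++) {
      if (i == reads_after)
        y = x;            /* thread2 runs here */
      x++;                /* one of thread1's increments */
    }
    if (reads_after == 3)
      y = x;              /* thread2 runs after all three increments */
    printf("thread2 reads after %d increment(s): y = %d\n", reads_after, y);
  }
  return 0;
}
```

Two tiny threads already give four outcomes; add a few more operations or threads and the number of interleavings explodes, which is exactly why model checkers enumerate them for you.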
3. User Input

One of the most famous and influential programming books is The C Programming Language by Kernighan and Ritchie. The first example of a nondeterministic program appears on page 14. For the newsletter readers who get text-only emails,2 here's the program:

```c
#include <stdio.h>

/* copy input to output; 1st version */
main()
{
    int c;

    c = getchar();
    while (c != EOF) {
        putchar(c);
        c = getchar();
    }
}
```

Yup, that's nondeterministic. Because the user can enter any string, any call of main() could have any output, meaning the number of possible outcomes is infinite.

Okay, that seems a little cheap, and I think it's because we tend to think of determinism in terms of how the user experiences the program. Yes, main() has an infinite number of user inputs, but for each input the user will experience only one possible output. It starts to feel more nondeterministic when modeling a long-standing system that's reacting to user input, for example a server that runs a script whenever the user uploads a file. This can be modeled with nondeterminism and concurrency: we have one execution that's the system, and one nondeterministic execution that represents the effects of our user.

(One intrusive thought I sometimes have: any "yes/no" dialogue actually has three outcomes: yes, no, or the user getting up and walking away without picking a choice, permanently stalling the execution.)

4. External forces

The more general version of "user input": anything where either 1) some part of the execution outcome depends on retrieving external information, or 2) the external world can change some state outside of your system. I call the distinction between internal and external components of the system the world and the machine.

Simple examples: code that at some point reads an external temperature sensor. Unrelated code running on a system which quits programs if it gets too hot. API requests to a third-party vendor. Code that processes files while users can delete those files before the script gets to them.

Like with PRNGs, some of these cases don't have to be nondeterministic; we can argue that "the temperature" should be a virtual input into the function. Like with PRNGs, we treat it as nondeterministic because it's useful to think in that way. Also, what if the temperature changes between starting a function and reading it?

External forces are also a source of nondeterminism as uncertainty. Measurements in the real world often come with errors, so repeating a measurement twice can give two different answers. Sometimes operations fail for no discernible reason, or for a non-programmatic reason (like something physically blocking the sensor). All of these situations can be modeled in the same way as user input: a concurrent execution making nondeterministic choices.

5. Abstraction

This is where nondeterminism in system models and in "real software" differ the most. I said earlier that pseudorandomness is arguably deterministic, but we abstract it into nondeterminism. More generally, nondeterminism hides implementation details of deterministic processes.

In one consulting project, we had a machine that received a message, parsed a lot of data from the message, went into a complicated workflow, and then entered one of three states. The final state was totally deterministic on the content of the message, but the actual process of determining that final state took tons and tons of code. None of that mattered at the scope we were modeling, so we abstracted it all away: "on receiving message, nondeterministically enter state A, B, or C."
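Here's a tiny C sketch (mine, not from the newsletter, and not any particular model checker's API) of what that abstraction buys you: instead of simulating the complicated workflow, you check your invariant against every state the nondeterministic choice could produce.

```c
#include <stdio.h>
#include <stdbool.h>

/* "On receiving a message, nondeterministically enter state A, B, or C."
 * A model checker explores every branch of a nondeterministic choice;
 * this sketch fakes that by looping over all possible outcomes and
 * checking an invariant in each one. */
enum state { STATE_A, STATE_B, STATE_C };

/* Hypothetical invariant we want to hold no matter which state the
 * workflow lands in; trivially true here, a real one would encode a
 * property like "we never end up holding the lock". */
static bool invariant_holds(enum state s) {
  return s == STATE_A || s == STATE_B || s == STATE_C;
}

int main(void) {
  const enum state choices[] = { STATE_A, STATE_B, STATE_C };
  for (int i = 0; i < 3; i++) {
    if (!invariant_holds(choices[i])) {
      printf("counterexample: state %d violates the invariant\n", (int)choices[i]);
      return 1;
    }
  }
  printf("invariant holds for every nondeterministic outcome\n");
  return 0;
}
```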
Doing this makes the system easier to model. It also makes the model more sensitive to possible errors. What if the workflow is bugged and sends us to the wrong state? That's already covered by the nondeterministic choice! Nondeterministic abstraction gives us the potential to pick the worst-case scenario for our system, so we can prove it's robust even under those conditions.

I know I beat the "nondeterminism as abstraction" drum a whole lot, but that's because it's the insight from formal methods I personally value the most: that nondeterminism is a powerful tool to simplify reasoning about things. You can see the same approach in how I model users and external forces: complex realities black-boxed and simplified into nondeterministic forces on the system.

Anyway, I hope this collection of ideas I got from formal methods is useful to my broader readership. Lemme know if it somehow helps you out!

1. I realized after writing this that I already wrote an essay about nondeterminism in formal specification just under a year ago. I hope this one covers enough new ground to be interesting! ↩
2. There is a surprising number of you. ↩