More from Tony Finch's blog
In hot weather I like to drink my coffee as an iced latte. To make it, I have a very large Bialetti Moka Express. Recently, when I got it going again after a winter of disuse, it took me a couple of attempts to get the technique right, so here are some notes as a reminder to my future self next year.

It's worth noting that I'm not fussy about my coffee: I usually brew pre-ground coffee from the supermarket, with cream (in hot coffee in winter) or milk and ice.

basic principle

When I was getting the hang of my moka pot, I learned from YouTube coffee geeks such as James Hoffmann that the main aim is for the water to be pushed through the coffee smoothly and gently. Better to err on the side of too little flow than too much. I have not had much success trying to make fine temperature adjustments while the coffee is brewing, because the big moka pot has a lot of thermal inertia: it takes a long time for any change in gas level to have any effect on the coffee flow.

routine

- fill the kettle and turn it on
- put the moka pot's basket in a mug to keep it stable
- fill it with coffee (mine needs about 4 Aeropress scoops)
- tamp it down firmly [1]
- when the kettle has boiled, fill the base of the pot to just below the pressure valve (which is also just below the filter screen in the basket)
- insert the coffee basket, making sure there are no stray grounds around the edge where the seal will mate
- screw on the upper chamber firmly
- put it on a small gas ring turned up to the max [2]
- leave the lid open and wait for the coffee to emerge
- immediately turn the gas down to the minimum [3]
- the coffee should now come out in a steady thin stream without spluttering or stalling
- when the upper chamber is filled near the mouths of the central spout, it'll start fizzing or spluttering [4]
- turn off the gas and pour the coffee into a carafe

notes

[1] If I don't tamp the grounds, the pot tends to splutter. I guess tamping gives the puck better integrity to resist channelling, and to keep the water under even pressure. Might be an effect of the relatively coarse supermarket grind?

[2] It takes a long time to get the pot back up to boiling point, and I'm not sure that heating it up slower helps. The main risk, I think, is overshooting the ideal steady brewing state too much, but see [3].

[3] With my moka pot on my hob, the lowest gas flow on the smallest ring is just enough to keep the coffee flowing without stalling. The flow when the coffee first emerges is relatively fast, and it slows to the steady state several seconds after I turn the heat down, so I think the overshoot isn't too bad.

[4] This routine turns almost all of the water into coffee, which Hoffmann suggests is a good result, and a sign that the pressure and temperature aren't getting too high.
TIL (or this week-ish I learned) why big-sigma and big-pi turn up in the notation of dependent type theory.

I've long been aware of the zoo of more obscure Greek letters that turn up in papers about type system features of functional programming languages: μ, Λ, Π, Σ. Their meaning is usually clear from context, but the reason for the choice of notation is usually not explained. I recently stumbled on an explanation for Π (dependent functions) and Σ (dependent pairs), which turns out to be nicer than I expected, and closely related to everyday algebraic data types.

sizes of types

The easiest way to understand algebraic data types is by counting the inhabitants of a type. For example:

- the unit type () has one inhabitant, (), and the number 1 is why it's called the unit type;
- the bool type has two inhabitants, false and true.

I have even seen these types called 1 and 2 (cruelly, without explanation) in occasional papers.

product types

Or pairs, or (more generally) tuples or records. Usually written,

    (A, B)

The pair contains an A and a B, so the number of possible values is the number of possible A values multiplied by the number of possible B values. So it is spelled in type theory (and in Standard ML) like,

    A * B

sum types

Or disjoint unions, or variant records. Declared in Haskell like,

    data Either a b = Left a | Right b

Or in Rust like,

    enum Either<A, B> {
        Left(A),
        Right(B),
    }

A value of the type is either an A or a B, so the number of possible values is the number of A values plus the number of B values. So it is spelled in type theory like,

    A + B

dependent pairs

In a dependent pair, the type of the second element depends on the value of the first. The classic example is a slice, roughly,

    struct IntSlice {
        len: usize,
        elem: &[i64; len],
    }

(This might look a bit circular, but the idea is that an array [i64; N] must be told how big it is – its size is an explicit part of its type – but an IntSlice knows its own size. The traditional dependent "vector" type is a sized linked list, more like my array type than my slice type.)

The classic way to write a dependent pair in type theory is like,

    Σ len: usize . Array(Int, len)

The big sigma binds a variable that has a type annotation, with a scope covering the expression after the dot – similar syntax to a typed lambda expression.

We can expand a simple example like this into a many-armed sum type: either an array of length zero, or an array of length 1, or an array of length 2, … but in a sigma type the discriminant is user-defined instead of hidden. The number of possible values of the type comes from adding up all the alternatives, a summation just like the big sigma summation we were taught in school:

    ∑ (a ∈ A) B(a)

When the second element doesn't depend on the first element, we can count the inhabitants like,

    ∑ (a ∈ A) B = A * B

and the sigma type simplifies to a product type.

telescopes

An aside from the main topic of these notes: I also recently encountered the name "telescope" for a multi-part dependent tuple or record. The name comes from de Bruijn's AUTOMATH, one of the first computerized proof assistants. (I first encountered de Bruijn as the inventor of numbered lambda bindings.)

dependent functions

The return type of a dependent function can vary according to the argument it is passed.
For example, to construct an array we might write something like,

    fn repeat_zero(len: usize) -> [i64; len] {
        [0; len]
    }

The classic way to write the type of repeat_zero() is very similar to the IntSlice dependent pair, but with a big pi instead of a big sigma:

    Π len: usize . Array(Int, len)

Mmm, pie.

To count the number of possible (pure, total) functions A ➞ B, we can think of each function as a big lookup table with A entries, each containing a B. That is, a big tuple (B, B, … B); that is, B * B * … * B; that is, B^A. Functions are exponential types. We can count a dependent function, where the number of possible Bs depends on which A we are passed, like,

    ∏ (a ∈ A) B(a)

danger

I have avoided the terms "dependent sum" and "dependent product", because they seem perfectly designed to cause confusion over whether I am talking about variants, records, or functions. It kind of makes me want to avoid algebraic data type jargon, except that there isn't a good alternative for "sum type". Hmf.
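As a concrete postscript to these notes: Rust's const generics can approximate the Π type above when the length is known at compile time. This sketch is mine, not part of the original explanation, and it captures much less than a real dependent type, because LEN must be a compile-time constant rather than a run-time value.

    // A compile-time approximation of Π len: usize . Array(i64, len):
    // the return type varies with the parameter LEN.
    fn repeat_zero<const LEN: usize>() -> [i64; LEN] {
        [0; LEN]
    }

    fn main() {
        let a: [i64; 3] = repeat_zero::<3>();
        // A slice is a run-time Σ in spirit: the length travels
        // alongside the pointer instead of appearing in the type.
        let s: &[i64] = &a;
        assert_eq!(s.len(), 3);
    }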
About half a year ago I encountered a paper bombastically titled "the ultimate conditional syntax". It has the attractive goal of unifying pattern matching with boolean if tests, and its solution is in some ways very nice. But it seems over-complicated to me, especially for something that's a basic workhorse of programming. I couldn't immediately see how to cut it down to manageable proportions, but recently I had an idea. I'll outline it under the "penultimate conditionals" heading below, after reviewing the UCS and explaining my motivation.

- what the UCS?
- whence UCS
- out of scope
- penultimate conditionals
- dangling syntax
- examples
- antepenultimate breath

what the UCS?

The ultimate conditional syntax does several things which are somewhat intertwined and support each other.

An "expression is pattern" operator allows you to do pattern matching inside boolean expressions. Like "match", but unlike most other expressions, "is" binds variables whose scope is the rest of the boolean expression that might be evaluated when the "is" is true, and the consequent "then" clause.

You can "split" tests to avoid repeating parts that are the same in successive branches. For example,

    if num < 0 then -1
    else if num > 0 then +1
    else 0

can be written

    if num < 0 then -1
           > 0 then +1
       else 0

The example shows a split before an operator, where the left-hand operand is the same and the rest of the expression varies. You can split after the operator when the operator is the same, which is common for "is" pattern-match clauses.

Indentation-based syntax (an offside rule) reduces the amount of punctuation that splits would otherwise need. An explicit version of the example above is

    if { x {
        { < { 0 then -1 } };
        { > { 0 then +1 } };
        else 0
    } }

(This example is written in the paper on one line. I've split it over several lines for narrow screens, which exposes what I think is a mistake in the nesting.)

You can also intersperse let bindings between splits. I doubt the value of this feature, since "is" can also bind values, but interspersed let does have its uses. The paper has an example using let to avoid rightward drift:

    if  let tp1_n = normalize(tp1)
        tp1_n is Bot then Bot
        let tp2_n = normalize(tp2)
        tp2_n is Bot then Bot
        let m = merge(tp1_n, tp2_n)
        m is Some(tp) then tp
        m is None then glb(tp1_n, tp2_n)

It's probably better to use early return to avoid rightward drift. The desugaring uses let bindings when lowering the UCS to simpler constructions.

whence UCS

Pattern matching in the tradition of functional programming languages supports nested patterns that are compiled in a way that eliminates redundant tests. For example, this checks that e1 is Some(_) once, not twice as written:

    if e1 is Some(Left(lv))  then e2
             Some(Right(rv)) then e3
             None            then e4

Being cheeky, I'd say UCS introduces more causes of redundant checks, then goes to great effort to eliminate redundant checks again. Splits reduce redundant code at the source level; the bulk of the paper is about eliminating redundant checks in the lowering from source to core language. I think the primary cause of this extra complexity is treating the is operator as a two-way test rather than a multi-way match. Splits are introduced as a more general (more complicated) way to build multi-way conditions out of two-way tests.

There's a secondary cause: the tradition of expression-oriented functional languages doesn't like early returns.
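To make the early-return style concrete, here is a minimal sketch in today's Rust (my example, not the paper's): the ? operator and let-else each peel off one failure case before the main work.

    // Guards with early returns: `?` propagates None, and let-else
    // returns early when a pattern does not match.
    fn first_char_upper(s: Option<&str>) -> Option<char> {
        let s = s?; // early return if None
        let Some(c) = s.chars().next() else {
            return None; // early return if the string is empty
        };
        Some(c.to_ascii_uppercase())
    }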
A nice pattern in imperative code is to write a function as a series of preliminary calculations and guards, with early returns that set things up for the main work of the function. Rust's ? operator and let-else statement support this pattern directly. UCS addresses the same pattern by wedging calculate-check sequences into if statements, as in the normalize example above.

out of scope

I suspect UCS's indentation-based syntax will make programmers more likely to make mistakes, and make it harder for compilers to produce nice error messages. (YAML has put me off syntax that doesn't have enough redundancy to support good error recovery.)

So I wondered if there's a way to have something like an "is pattern" operator in a Rust-like language, without an offside rule, and without the excess of punctuation in the UCS desugaring. But I couldn't work out how to make the scope of variable bindings in patterns cover all the code that might need to use them. The scope needs to extend into the consequent then clause, but also into any follow-up tests – and those tests can branch, so the scope might need to reach into multiple then clauses.

The problem was the way I was still thinking of the then and else clauses as part of the outer if. That implied the expression has to be closed off before the then, which troublesomely closes off the scope of any is-bound variables. The solution – part of it, at least – is actually in the paper, where then and else are nested inside the conditional expression.

penultimate conditionals

There are two ingredients.

First, the then and else clauses become operators that cause early return from a conditional expression. They can be lowered to a vaguely Rust-like syntax with the following desugaring rules. The 'if label denotes the closest-enclosing if; you can't use then or else inside the expr of a then or else unless there's another intervening if.

    then expr  ⟼  && break 'if expr
    else expr  ⟼  || break 'if expr        (in an expression)
    else expr  ⟼  || _ && break 'if expr   (in a pattern)

There are two desugarings for else, depending on whether it appears in an expression or a pattern. If you prefer a less wordy syntax, you might spell then as => (like match in Rust) and else as || =>. (For symmetry we might allow && => for then as well.)

Second, an is operator for multi-way pattern matching that binds variables whose scope covers the consequent part of the expression. The basic form is like the UCS,

    scrutinee is pattern

which matches the scrutinee against the pattern, returning a boolean result. For example,

    foo is None

Guarded patterns are like,

    scrutinee is pattern && consequent

where the scope of the variables bound by the pattern covers the consequent. The consequent might be a simple boolean guard, for example,

    foo is Some(n) && n < 0

or inside an if expression it might end with a then clause,

    if foo is Some(n) && n < 0 => -1
    // ...

Simple multi-way patterns are like,

    scrutinee is { pattern || pattern || … }

If there is a consequent, then the patterns must all bind the same set of variables (if any) with the same types. More typically, a multi-way match will have consequent clauses, like

    scrutinee is {
        pattern && consequent
     || pattern && consequent
     || => otherwise
    }

When a consequent is false, we go on to try the other alternatives of the match, like we would when the first operand of boolean || is false.

To help with layout, you can include a redundant || before the first alternative.
For example,

    if foo is {
       || Some(n) && n < 0 => -1
       || Some(n) && n > 0 => +1
       || Some(n) => 0
       || None => 0
    }

Alternatively,

    if foo is {
          Some(n) && ( n < 0 => -1
                    || n > 0 => +1
                    || => 0 )
       || None => 0
    }

(They should compile the same way.)

The evaluation model is like the familiar short-circuiting && and ||, and the syntax is supposed to reinforce that intuition. The UCS paper spends a lot of time discussing backtracking and how to eliminate it, but penultimate conditionals evaluate straightforwardly from left to right.

The paper briefly mentions as patterns, like

    Some(Pair(x, y) as p)

which in Rust would be written

    Some(p @ Pair(x, y))

The is operator doesn't need a separate syntax for this feature:

    Some(p is Pair(x, y))

For large examples, the penultimate conditional syntax is about as noisy as Rust's match, but it scales down nicely to smaller matches. However, there are differences in how consequents and alternatives are punctuated which need a bit more discussion.

dangling syntax

The precedence and associativity of the is operator is tricky: it has two kinds of dangling-else problem.

The first kind occurs with a surrounding boolean expression. For example, when b = false, what is the value of this?

    b is true || false

It could bracket to the left, yielding false:

    (b is true) || false

Or to the right, yielding true:

    b is { true || false }

This could be disambiguated by using different spellings for boolean or and pattern alternatives. But that doesn't help with the second kind, which occurs with an inner match.

    foo is Some(_) && bar is Some(_) || None

Does that check foo is Some(_), with an always-true look at bar?

    ( foo is Some(_) ) && bar is { Some(_) || None }

Or does it check bar is Some(_) and waste time with foo?

    foo is { Some(_) && ( bar is Some(_) ) || None }

I have chosen to resolve the ambiguity by requiring curly braces {} around groups of alternative patterns. This allows me to use the same spelling || for all kinds of alternation. (Compare Rust, which uses || for boolean expressions, | in a pattern, and , between the arms of a match.)

Curlies around multi-way matches can be nested, so the example in the previous section can also be written,

    if foo is {
       || Some(n) && n < 0 => -1
       || Some(n) && n > 0 => +1
       || { Some(0) || None } => 0
    }

The is operator binds tighter than && on its left, but looser than && on its right (so that a chain of && is gathered into a consequent), and tighter than || on its right, so that outer || alternatives don't need extra brackets.

examples

I'm going to finish these notes by going through the ultimate conditional syntax paper and translating most of its examples into the penultimate syntax, to give it some exercise.

Here we use is to name a value n, as a replacement for the |> abs pipe operator, and we use range patterns instead of split relational operators:

    if foo(args) is {
       || 0 => "null"
       || n && abs(n) is {
          || 101.. => "large"
          || ..10  => "small"
          || => "medium"
       }
    }

In both the previous example and the next one, we have some extra brackets where UCS relies purely on an offside rule.

    if x is {
       || Right(None) => defaultValue
       || Right(Some(cached)) => f(cached)
       || Left(input) && compute(input) is {
          || None => defaultValue
          || Some(result) => f(result)
       }
    }

This one is almost identical to the UCS version, apart from the spellings of and, then, and else:
    if name.startsWith("_")
       && name.tailOption is Some(namePostfix)
       && namePostfix.toIntOption is Some(index)
       && 0 <= index && index < arity
       && => Right([index, name])
    || => Left("invalid identifier: " + name)

Here are some nested multi-way matches with overlapping patterns and bound values:

    if e is {
       // ...
       || Lit(value) && Map.find_opt(value) is Some(result) => Some(result)
       // ...
       || { Lit(value)
         || Add(Lit(0), value)
         || Add(value, Lit(0)) } => { print_int(value); Some(value) }
       // ...
    }

The next few examples show UCS splits without the is operator. In my syntax I need to press a few more buttons, but I think that's OK.

    if x == 0 => "zero"
    || x == 1 => "unit"
    || => "?"

    if x == 0 => "null"
    || x > 0 => "positive"
    || => "negative"

    if predicate(0, 1) => "A"
    || predicate(2, 3) => "B"
    || => "C"

The first two can be written with is instead, but it's not briefer:

    if x is {
       || 0 => "zero"
       || 1 => "unit"
       || => "?"
    }

    if x is {
       || 0 => "null"
       || 1.. => "positive"
       || => "negative"
    }

There's little need for a split-anything feature when we have multi-way matches.

    if foo(u, v, w) is {
       || Some(x) && x is {
          || Left(_) => "left-defined"
          || Right(_) => "right-defined"
       }
       || None => "undefined"
    }

A more complete function:

    fn zip_with(f, xs, ys) {
        if [xs, ys] is {
           || [x :: xs, y :: ys] && zip_with(f, xs, ys) is Some(tail)
              => Some(f(x, y) :: tail)
           || [Nil, Nil] => Some(Nil)
           || => None
        }
    }

Another fragment of the expression evaluator:

    if e is {
       // ...
       || Var(name) && Map.find_opt(env, name) is {
          || Some(Right(value)) => Some(value)
          || Some(Left(thunk)) => Some(thunk())
       }
       || App(lhs, rhs) => // ...
       // ...
    }

This expression is used in the paper to show how a UCS split is desugared:

    if Pair(x, y) is {
       || Pair(Some(xv), Some(yv)) => xv + yv
       || Pair(Some(xv), None) => xv
       || Pair(None, Some(yv)) => yv
       || Pair(None, None) => 0
    }

The desugaring in the paper introduces a lot of redundant tests. I would desugar straightforwardly, then rely on later optimizations to eliminate other redundancies, such as the construction and immediate destruction of the pair:

    if Pair(x, y) is Pair(xx, yy)
       && xx is {
          || Some(xv) && yy is {
             || Some(yv) => xv + yv
             || None => xv
          }
          || None && yy is {
             || Some(yv) => yv
             || None => 0
          }
       }

Skipping ahead to the "non-trivial example" in the paper's fig. 11:

    if e is {
       || Var(x) && context.get(x) is {
          || Some(IntVal(v)) => Left(v)
          || Some(BoolVal(v)) => Right(v)
       }
       || Lit(IntVal(v)) => Left(v)
       || Lit(BoolVal(v)) => Right(v)
       // ...
    }

The next example in the paper compares C#'s relational patterns. Rust's range patterns do a similar job, with the caveat that Rust's ranges don't have a syntax for exclusive lower bounds.

    fn classify(value) {
        if value is {
           || .. -4.0 => "too low"
           || 10.0 .. => "too high"
           || NaN => "unknown"
           || => "acceptable"
        }
    }

I tend to think relational patterns are a better syntax than ranges. With relational patterns I can rewrite an earlier example like,

    if foo is {
       || Some(< 0) => -1
       || Some(> 0) => +1
       || { Some(0) || None } => 0
    }

I think with the UCS I would have to name the Some(_) value to be able to compare it, which suggests that relational patterns can be better than UCS split relational operators. Prefix-unary relational operators are also a nice way to write single-ended ranges in expressions. We could simply write both ends to get a complete range, like

    >= lo < hi

or like

    if value is > -4.0 < 10.0 => "acceptable"
    || => "far out"

Near the start I quoted a normalize example that illustrates a left-aligned UCS expression.
The penultimate version drifts right, like the Scala version:

    if normalize(tp1) is {
       || Bot => Bot
       || tp1_n && normalize(tp2) is {
          || Bot => Bot
          || tp2_n && merge(tp1_n, tp2_n) is {
             || Some(tp) => tp
             || None => glb(tp1_n, tp2_n)
          }
       }
    }

But a more Rusty style shows the benefits of early returns (especially the terse ? operator) and monadic combinators:

    let tp1 = normalize(tp1)?;
    let tp2 = normalize(tp2)?;
    merge(tp1, tp2)
        .unwrap_or_else(|| glb(tp1, tp2))

antepenultimate breath

When I started writing these notes, my penultimate conditional syntax was little more than a sketch of an idea. Having gone through the previous section's exercise, I think it has turned out better than I thought it might. The extra nesting from multi-way match braces doesn't seem to be unbearably heavyweight. However, none of the examples have bulky then or else blocks, which are where the extra nesting is more likely to be annoying. But then, as I said before, it's comparable to a Rust match:

    match scrutinee {
        pattern => {
            consequent
        }
    }

    if scrutinee is {
       || pattern => {
          consequent
       }
    }

The || lines down the left margin are noisy, but hard to get rid of in the context of a curly-brace language. I can't reduce them to | like OCaml, because what would I use for bitwise OR? I don't want presence or absence of flow control to depend on types or context. I kind of like Prolog / Erlang's "," for && and ";" for ||, but that's well outside what's legible to mainstream programmers. So, dunno.

Anyway, I think I've successfully found a syntax that does most of what UCS does, but in a much simpler fashion.
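As a postscript, a quick sanity check that the then/else desugaring above lands on something real: here is the earlier sign-of-an-Option example hand-lowered to stable Rust, using a labeled block where the desugaring rules use the 'if label. This is my own sketch in today's Rust, not the hypothetical syntax.

    // `if foo is { || Some(n) && n < 0 => -1
    //            || Some(n) && n > 0 => +1
    //            || { Some(0) || None } => 0 }`
    // hand-lowered: each `then`/`=>` becomes a `break` that carries
    // a value out of the labeled block.
    fn sign(foo: Option<i32>) -> i32 {
        'if_: {
            if let Some(n) = foo {
                if n < 0 { break 'if_ -1; }
                if n > 0 { break 'if_ 1; }
            }
            0 // reached for Some(0) and for None
        }
    }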
Recently, Alex Kladov wrote on the TigerBeetle blog about swarm testing data structures. It's a neat post about randomized testing with Zig. I wrote a comment with an idea that was new to Alex (@matklad), so I'm reposting a longer version here.

- differential testing
- problems
- grow / shrink
- random elements
- element-wise testing
- test loop
- data structure size
- invariants
- performance
- conclusion

differential testing

A common approach to testing data structures is to write a second reference implementation that has the same API but is simpler and/or more obviously correct, though it uses more memory, or is slower, or less concurrent, or otherwise not up to production quality. Then, run the production implementation and the reference implementation on the same sequence of operations, and verify that they produce the same results. Any difference is either a bug in the production implementation (probably), or a bug in the reference implementation (unlucky), or a bug in the tests (unfortunate). This is a straightforward differential testing pattern.

problems

There are a couple of difficulties with this kind of basic differential testing.

grow / shrink

The TigerBeetle article talks about adjusting the probabilities of different operations on the data structure to try to explore more edge cases. To motivate the idea, the article talks about adjusting the probabilities of adding or deleting items: if adding and deleting have equal probability, then the test finds it hard to grow the data structure to interesting sizes that might expose bugs. Unfortunately, if the probability of add is greater than del, then the data structure tends to grow without bound. If the probability of del is greater than add, then it tries to shrink from nothing: worse than equal probabilities! They could preload the data structure to test how it behaves when it shrinks, but a fixed set of probabilities per run is not good at testing both growth and shrinkage on the same data structure in the same test run.

One way to improve this kind of test is to adjust the probability of add and del dynamically: make add more likely when the data structure is small, and del more likely when it is big. And maybe make add more likely in the first half of a test run and del more likely in the second half.

random elements

The TigerBeetle article glosses over the question of where the tests get fresh elements to add to the data structure. And its example is chosen so it doesn't have to think about which elements get deleted. In my experience writing data structures for non-garbage-collected languages, I had to be more deliberate about how to create and destroy elements. That led to a style of test that's more "element-centric", as Alex described it.

element-wise testing

Change the emphasis so that instead of testing that two implementations match, test that one implementation obeys the expected behaviour. No need to make a drop-in replacement reference implementation!

What I typically do is pre-allocate an array of elements, with slots that I can set to keep track of how each element relates to the data structure under test. The most important property is whether the element has been added or deleted, but there might be others related to the ordering of elements, or values associated with keys, and so on.

test loop

Each time round the loop, choose an element from the array at random, and an action such as add / del / get / … Then, if it makes sense, perform the operation on the data structure with the element.
For example, you might skip an add action if the element is already in the data structure, unless you can try to add it and expect an error.

data structure size

This strategy tends to grow the data structure until about 50% of the pre-allocated elements are inserted, then it makes a random walk around this 50% point. Random walks can diverge widely from their central point, both in theory and in practice, so this kind of testing is reasonably effective at both growing and (to a lesser extent) shrinking the data structure.

invariants

I usually check some preconditions before an action, to verify that the data structure matches the expected properties of the chosen element. This can help to detect earlier that an action on one element has corrupted another element. After performing the action and updating the element's properties, I check the updated properties as a postcondition, to make sure the action had the expected effects.

performance

John Regehr's great tutorial, "How to fuzz an ADT implementation", recommends writing a checkRep() function that thoroughly verifies a data structure's internal consistency. A checkRep() function is a solid gold testing tool, but it is O(n) at least, and typically very slow. If you call checkRep() frequently during testing, your tests slow down dramatically as your data structure gets larger. I like my per-element invariants to be local, and ideally O(1), or O(log n) at worst, so they don't slow down the tests too much.

conclusion

Recently I've used this pattern to exhibit concurrency bugs in an API that's hard to make thread-safe. Writing the tests has required some cunning to work out what invariants I can usefully maintain and test; what variety of actions I can use to stress those invariants; and what mix of elements and actions I need so that my tests know which properties of each element should be upheld and which can change.

I'm testing multiple implementations of the same API, trying to demonstrate which is safest. Differential testing can tell me that implementations diverge, but not which is correct, whereas testing properties and invariants more directly tells me whether an implementation does what I expect. (Or gives me a useless answer when my tests are weak.)

Which is to say that this kind of testing is a fun creative challenge. I find it a lot more rewarding than example-based testing.
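As a concrete postscript, here is a minimal, self-contained sketch of the element-wise pattern (my illustration, not from the TigerBeetle post). It uses Rust's std BTreeSet as a stand-in for the structure under test, 100 pre-allocated elements, and a tiny xorshift PRNG so it needs no external crates; it also biases add versus del dynamically, as suggested in the grow / shrink section.

    use std::collections::BTreeSet;

    // What the tests believe about each pre-allocated element.
    struct Tracked {
        key: u64,
        inserted: bool, // what we expect `set.contains(&key)` to say
    }

    // Tiny xorshift64 PRNG to avoid external dependencies.
    fn next(state: &mut u64) -> u64 {
        *state ^= *state << 13;
        *state ^= *state >> 7;
        *state ^= *state << 17;
        *state
    }

    fn main() {
        let mut rng: u64 = 0x243F6A8885A308D3;
        let mut elems: Vec<Tracked> =
            (0..100u64).map(|key| Tracked { key, inserted: false }).collect();
        let mut set = BTreeSet::new();

        for _ in 0..10_000 {
            // Choose a random element, and bias the action: add is more
            // likely when the set is small, del when it is large.
            let i = (next(&mut rng) % elems.len() as u64) as usize;
            let add = next(&mut rng) % 100 >= set.len() as u64;
            let e = &mut elems[i];

            // Precondition: the structure agrees with the element's record.
            assert_eq!(set.contains(&e.key), e.inserted);
            if add {
                // Inserting must report whether the key was new.
                assert_eq!(set.insert(e.key), !e.inserted);
                e.inserted = true;
            } else {
                // Removing must report whether the key was present.
                assert_eq!(set.remove(&e.key), e.inserted);
                e.inserted = false;
            }
            // Postcondition: the action had the expected effect.
            assert_eq!(set.contains(&e.key), e.inserted);
        }
    }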
More in programming
In the past few years, social media use has gained a bad reputation. More or less everyone is now aware that TikTok is ruining your attention span and Twitter is radicalizing you into extreme ideologies. But despite its enormous popularity amongst technology enthusiasts, there's not a lot of attention given to Discord.

I personally have been using Discord so much for so long that the majority of my social circle is made of people I met through the platform. I even spent two years of my life helping run the infrastructure behind the most popular bot available on Discord. In this article, I will try to give my perspective on Discord, why I think it is harmful, and what we can do about it.

A tale of two book clubs

To explain my point of view about Discord, I will compare the experience of joining a real-life book club with that of joining one that communicates exclusively through Discord. This example is about books, but the same issues would apply if it was a community talking about investing, knitting, or collecting stamps. As Marshall McLuhan showed last century, examining media should be done independently of their content.

In the first scenario, we have Bob. Bob enjoys reading books, which is generally a solitary hobby. To break this solitude, Bob decides to join a book club. This book club meets twice a month in a library, where the members talk about a new book each time.

In the second scenario, we have Alice. Alice also likes books. Alice also wants to meet fellow book lovers. Being a nerd, Alice decides to join a Discord server. This server does not have fixed meeting times. Most users simply use the text channels to talk about what they are reading at any time of day.

Crumbs of Belongingness

In Bob's book club, a session typically lasts an hour. First, the librarian takes some time to welcome everyone and introduce newcomers. After that, each club member talks about the book they were expected to read. They can talk about what they liked and disliked, how the book made them feel, and the chapters they found particularly noteworthy. Once each member has had the time to talk about the book, they vote on the book they are going to read next. After the session is concluded, some members move to the nearest coffeehouse to keep talking. During this session of one hour, Bob spent around one hour socializing. The need for belongingness that drove Bob to join this book club is fully met.

On Alice's side, the server is running 24/7. When she opens the app, even if there are sometimes more than 4000 members of her virtual book club online, most of the time nobody is talking. If she were to spend an entire hour staring at the server, she might witness a dozen or so messages. Those messages may be part of small conversations in which Alice can take part. Sadly, most of the time they will be simple uploads of memes, conversations about books she hasn't read, or messages that do not convey enough meaning to start a conversation. In one hour of constant Discord use, Alice's need for socializing has not been met.

The shop is closed

Even if Bob's library is open every day, the book club is only open for a total of two hours a month. That is enough for Bob. Since the book club fulfills his need, he doesn't want it to be open for longer. He has not even entertained the thought of joining a second book club, because too many meetings would be overwhelming.

For Alice, Discord is always available. No matter if she is at home or away, it is always somewhere in her phone or taskbar.
At any moment of the day, she might notice a red circle above the icon. It tells her there are unread messages on Discord. When she notices it, she instinctively stops her current task and opens the app to spend a few minutes checking her messages. Most of the time those messages do not lead to a meaningful conversation. Reading a few messages isn't enough to meet her need for socialization. So, after having scrolled through the messages, she goes back to waiting for the next notification.

Each time she interrupts her current task to check Discord, getting back into the flow can take several minutes, or not happen at all. This can easily happen dozens of times a day and cost Alice hundreds of hours each month.

Book hopping

When Bob gets home, the club only requires him to read the next book. He may also choose to read two books at the same time, one for the book club and one from his personal backlog. But if he were to keep his efforts to a strict minimum, he would still have things to talk about at the next session.

Alice wants to be able to talk with other users about the books they are reading. So she starts reading the books that are trending and get mentioned often. The issue is that Discord's conversations are instantaneous, and instantaneity compresses time. A book isn't going to stay popular and relevant for two whole weeks; if it manages to be the thing people talk about for two whole days, that's already great. Alice might try to purchase and read two to three books a week to keep up with the server's rhythm. Even if books are not terribly expensive, this can turn a $20/month hobby into a $200/month hobby. In addition, if reading a book takes Alice on average 10 hours, reading 3 books a week would be like adding a part-time job to her schedule. All this while being constantly interrupted by the need to check if new conversations have been posted to the server.

Quitting Discord

If you are in Alice's situation, the solution is quite simple: use Discord less, ideally not at all. On my side, I've left every server that is not relevant to my current work. I blocked discord.com from the DNS of my coding computer (using NextDNS) and uninstalled the app from my phone. This makes the platform only usable as a direct-messaging app, exclusively from my gaming device, which I cannot carry with me.

I think many people realize the addictive nature of Discord, yet keep using the application all the time. One common objection to quitting the platform is that there's a need for an alternative: maybe we should go back to forums, or IRC, or use Matrix, etc. I don't think any alternative internet chat platform can solve the problem. The real problem is that we want to be able to talk to people without leaving home, at any time, without any inconvenience. But what we should do is exactly that: leave home and join a real book club, one that is not open 24/7, and one where the members take the time to listen to each other.

In the software community, we have also been convinced that every one of our projects needs to be on Discord. Every game needs a server, open-source projects offer support on Discord, and a bunch of AI startups even use it as their main user interface. I even made a server for Dice'n Goblins. I don't think it's really that useful. I'm not even sure it's that convenient. Popular games are not popular because they have big servers; they have big servers because they are popular. Successful open-source projects often don't even have a server.