More from Tony Finch's blog
TIL (or this week-ish I learned) why big-sigma and big-pi turn up in the notation of dependent type theory.

I’ve long been aware of the zoo of more obscure Greek letters that turn up in papers about type system features of functional programming languages: μ, Λ, Π, Σ. Their meaning is usually clear from context but the reason for the choice of notation is usually not explained. I recently stumbled on an explanation for Π (dependent functions) and Σ (dependent pairs) which turns out to be nicer than I expected, and closely related to everyday algebraic data types.

sizes of types

The easiest way to understand algebraic data types is by counting the inhabitants of a type. For example: the unit type () has one inhabitant, (), and the number 1 is why it’s called the unit type; the bool type has two inhabitants, false and true. I have even seen these types called 1 and 2 (cruelly, without explanation) in occasional papers.

product types

Or pairs or (more generally) tuples or records. Usually written,

    (A, B)

The pair contains an A and a B, so the number of possible values is the number of possible A values multiplied by the number of possible B values. So it is spelled in type theory (and in Standard ML) like,

    A * B

sum types

Or disjoint union, or variant record. Declared in Haskell like,

    data Either a b = Left a | Right b

Or in Rust like,

    enum Either<A, B> {
        Left(A),
        Right(B),
    }

A value of the type is either an A or a B, so the number of possible values is the number of A values plus the number of B values. So it is spelled in type theory like,

    A + B

dependent pairs

In a dependent pair, the type of the second element depends on the value of the first. The classic example is a slice, roughly,

    struct IntSlice {
        len: usize,
        elem: &[i64; len],
    }

(This might look a bit circular, but the idea is that an array [i64; N] must be told how big it is – its size is an explicit part of its type – but an IntSlice knows its own size. The traditional dependent “vector” type is a sized linked list, more like my array type than my slice type.)

The classic way to write a dependent pair in type theory is like,

    Σ len: usize . Array(Int, len)

The big sigma binds a variable that has a type annotation, with a scope covering the expression after the dot – similar syntax to a typed lambda expression. We can expand a simple example like this into a many-armed sum type – either an array of length zero, or an array of length 1, or an array of length 2, … – but in a sigma type the discriminant is user-defined instead of hidden. (There’s a concrete rendering of this expansion at the end of these notes.)

The number of possible values of the type comes from adding up all the alternatives, a summation just like the big sigma summation we were taught in school:

    ∑ (a ∈ A) B(a)

When the second element doesn’t depend on the first element, we can count the inhabitants like,

    ∑ (a ∈ A) B = A * B

And the sigma type simplifies to a product type.

telescopes

An aside from the main topic of these notes: I also recently encountered the name “telescope” for a multi-part dependent tuple or record. The name “telescope” comes from de Bruijn’s AUTOMATH, one of the first computerized proof assistants. (I first encountered de Bruijn as the inventor of numbered lambda bindings.)

dependent functions

The return type of a dependent function can vary according to the argument it is passed.
For example, to construct an array we might write something like,

    fn repeat_zero(len: usize) -> [i64; len] {
        [0; len]
    }

The classic way to write the type of repeat_zero() is very similar to the IntSlice dependent pair, but with a big pi instead of a big sigma:

    Π len: usize . Array(Int, len)

Mmm, pie.

To count the number of possible (pure, total) functions A ➞ B, we can think of each function as a big lookup table with A entries each containing a B. That is, a big tuple (B, B, … B), that is, B * B * … * B, that is, B^A. Functions are exponential types. We can count a dependent function, where the number of possible Bs depends on which A we are passed, with a product just like the big pi product notation from school:

    ∏ (a ∈ A) B(a)

danger

I have avoided the terms “dependent sum” and “dependent product”, because they seem perfectly designed to cause confusion over whether I am talking about variants, records, or functions. It kind of makes me want to avoid algebraic data type jargon, except that there isn’t a good alternative for “sum type”. Hmf.
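As promised, here is the many-armed sum expansion in concrete form: a hypothetical Rust sketch, where the enum name and the cutoff at length 2 are mine, purely for illustration.

    // Σ len: 0..=2 . [i64; len], expanded into an ordinary sum type.
    // The enum tag plays the role of the discriminant, which a sigma
    // type instead exposes as the user-visible `len` field.
    enum SmallSlice {
        Len0([i64; 0]),
        Len1([i64; 1]),
        Len2([i64; 2]),
    }

Counting its inhabitants arm by arm, 1 + 2^64 + 2^128, is exactly the school-style summation ∑ (len ∈ {0, 1, 2}) |[i64; len]|.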
About half a year ago I encountered a paper bombastically titled “the ultimate conditional syntax”. It has the attractive goal of unifying pattern matching with boolean if tests, and its solution is in some ways very nice. But it seems over-complicated to me, especially for something that’s a basic work-horse of programming. I couldn’t immediately see how to cut it down to manageable proportions, but recently I had an idea. I’ll outline it under the “penultimate conditionals” heading below, after reviewing the UCS and explaining my motivation.

- what the UCS?
- whence UCS
- out of scope
- penultimate conditionals
- dangling syntax
- examples
- antepenultimate breath

what the UCS?

The ultimate conditional syntax does several things which are somewhat intertwined and support each other.

An “expression is pattern” operator allows you to do pattern matching inside boolean expressions. Like “match” but unlike most other expressions, “is” binds variables whose scope is the rest of the boolean expression that might be evaluated when the “is” is true, and the consequent “then” clause.

You can “split” tests to avoid repeating parts that are the same in successive branches. For example,

    if num < 0 then -1
    else if num > 0 then +1
    else 0

can be written

    if num < 0 then -1
           > 0 then +1
           else 0

The example shows a split before an operator, where the left hand operand is the same and the rest of the expression varies. You can split after the operator when the operator is the same, which is common for “is” pattern match clauses.

Indentation-based syntax (an offside rule) reduces the amount of punctuation that splits would otherwise need. An explicit version of the example above is

    if { x {
        { < { 0 then −1 } };
        { > { 0 then +1 } };
        else 0
    } }

(This example is written in the paper on one line. I’ve split it for narrow screens, which exposes what I think is a mistake in the nesting.)

You can also intersperse let bindings between splits. I doubt the value of this feature, since “is” can also bind values, but interspersed let does have its uses. The paper has an example using let to avoid rightward drift:

    if let tp1_n = normalize(tp1)
       tp1_n is Bot then Bot
       let tp2_n = normalize(tp2)
       tp2_n is Bot then Bot
       let m = merge(tp1_n, tp2_n)
       m is Some(tp) then tp
       m is None then glb(tp1_n, tp2_n)

It’s probably better to use early return to avoid rightward drift. The desugaring uses let bindings when lowering the UCS to simpler constructions.

whence UCS

Pattern matching in the tradition of functional programming languages supports nested patterns that are compiled in a way that eliminates redundant tests. For example, this example checks that e1 is Some(_) once, not twice as written:

    if e1 is Some(Left(lv)) then e2
             Some(Right(rv)) then e3
       None then e4

Being cheeky, I’d say UCS introduces more causes of redundant checks, then goes to great effort to eliminate redundant checks again. Splits reduce redundant code at the source level; the bulk of the paper is about eliminating redundant checks in the lowering from source to core language.

I think the primary cause of this extra complexity is treating the is operator as a two-way test rather than a multi-way match. Splits are introduced as a more general (more complicated) way to build multi-way conditions out of two-way tests. There’s a secondary cause: the tradition of expression-oriented functional languages doesn’t like early returns.
A nice pattern in imperative code is to write a function as a series of preliminary calculations and guards with early returns that set things up for the main work of the function. Rust’s ? operator and let-else statement support this pattern directly. UCS addresses the same pattern by wedging calculate-check sequences into if statements, as in the normalize example above.

out of scope

I suspect UCS’s indentation-based syntax will make programmers more likely to make mistakes, and make it harder for compilers to produce nice error messages. (YAML has put me off syntax that doesn’t have enough redundancy to support good error recovery.)

So I wondered if there’s a way to have something like an “is pattern” operator in a Rust-like language, without an offside rule, and without the excess of punctuation in the UCS desugaring. But I couldn’t work out how to make the scope of variable bindings in patterns cover all the code that might need to use them. The scope needs to extend into the consequent then clause, but also into any follow-up tests – and those tests can branch so the scope might need to reach into multiple then clauses.

The problem was the way I was still thinking of the then and else clauses as part of the outer if. That implied the expression has to be closed off before the then, which troublesomely closes off the scope of any is-bound variables. The solution – part of it, at least – is actually in the paper, where then and else are nested inside the conditional expression.

penultimate conditionals

There are two ingredients.

First, the then and else clauses become operators that cause early return from a conditional expression. They can be lowered to a vaguely Rust syntax with the following desugaring rules. The 'if label denotes the closest-enclosing if; you can’t use then or else inside the expr of a then or else unless there’s another intervening if. (A hand-desugared Rust sketch appears at the end of these notes.)

    then expr  ⟼  && break 'if expr
    else expr  ⟼  || break 'if expr
    else expr  ⟼  || _ && break 'if expr

There are two desugarings for else depending on whether it appears in an expression or a pattern. If you prefer a less wordy syntax, you might spell then as => (like match in Rust) and else as || =>. (For symmetry we might allow && => for then as well.)

Second, an is operator for multi-way pattern-matching that binds variables whose scope covers the consequent part of the expression. The basic form is like the UCS,

    scrutinee is pattern

which matches the scrutinee against the pattern returning a boolean result. For example,

    foo is None

Guarded patterns are like,

    scrutinee is pattern && consequent

where the scope of the variables bound by the pattern covers the consequent. The consequent might be a simple boolean guard, for example,

    foo is Some(n) && n < 0

or inside an if expression it might end with a then clause,

    if foo is Some(n) && n < 0 => -1
    // ...

Simple multi-way patterns are like,

    scrutinee is { pattern || pattern || … }

If there is a consequent then the patterns must all bind the same set of variables (if any) with the same types. More typically, a multi-way match will have consequent clauses, like

    scrutinee is {
        pattern && consequent
        || pattern && consequent
        || => otherwise
    }

When a consequent is false, we go on to try other alternatives of the match, like we would when the first operand of boolean || is false. To help with layout, you can include a redundant || before the first alternative.
For example,

    if foo is {
        || Some(n) && n < 0 => -1
        || Some(n) && n > 0 => +1
        || Some(n) => 0
        || None => 0
    }

Alternatively,

    if foo is {
        Some(n) && (
            n < 0 => -1
            || n > 0 => +1
            || => 0
        )
        || None => 0
    }

(They should compile the same way.)

The evaluation model is like familiar shortcutting && and || and the syntax is supposed to reinforce that intuition. The UCS paper spends a lot of time discussing backtracking and how to eliminate it, but penultimate conditionals evaluate straightforwardly from left to right.

The paper briefly mentions as patterns, like

    Some(Pair(x, y) as p)

which in Rust would be written

    Some(p @ Pair(x, y))

The is operator doesn’t need a separate syntax for this feature:

    Some(p is Pair(x, y))

For large examples, the penultimate conditional syntax is about as noisy as Rust’s match, but it scales down nicely to smaller matches. However, there are differences in how consequences and alternatives are punctuated which need a bit more discussion.

dangling syntax

The precedence and associativity of the is operator is tricky: it has two kinds of dangling-else problem.

The first kind occurs with a surrounding boolean expression. For example, when b = false, what is the value of this?

    b is true || false

It could bracket to the left, yielding false:

    (b is true) || false

Or to the right, yielding true:

    b is { true || false }

This could be disambiguated by using different spellings for boolean or and pattern alternatives. But that doesn’t help for the second kind, which occurs with an inner match.

    foo is Some(_) && bar is Some(_) || None

Does that check foo is Some(_) with an always-true look at bar,

    ( foo is Some(_) ) && bar is { Some(_) || None }

Or does it check bar is Some(_) and waste time with foo?

    foo is { Some(_) && ( bar is Some(_) ) || None }

I have chosen to resolve the ambiguity by requiring curly braces {} around groups of alternative patterns. This allows me to use the same spelling || for all kinds of alternation. (Compare Rust, which uses || for boolean expressions, | in a pattern, and , between the arms of a match.) Curlies around multi-way matches can be nested, so the example in the previous section can also be written,

    if foo is {
        || Some(n) && n < 0 => -1
        || Some(n) && n > 0 => +1
        || { Some(0) || None } => 0
    }

The is operator binds tighter than && on its left, but looser than && on its right (so that a chain of && is gathered into a consequent) and tighter than || on its right so that outer || alternatives don’t need extra brackets.

examples

I’m going to finish these notes by going through the ultimate conditional syntax paper to translate most of its examples into the penultimate syntax, to give it some exercise.

Here we use is to name a value n, as a replacement for the |> abs pipe operator, and we use range patterns instead of split relational operators:

    if foo(args) is {
        || 0 => "null"
        || n && abs(n) is {
            || 101.. => "large"
            || ..10 => "small"
            || => "medium"
        }
    }

In both the previous example and the next one, we have some extra brackets where UCS relies purely on an offside rule.

    if x is {
        || Right(None) => defaultValue
        || Right(Some(cached)) => f(cached)
        || Left(input) && compute(input) is {
            || None => defaultValue
            || Some(result) => f(result)
        }
    }

This one is almost identical to UCS apart from the spellings of and, then, else.
    if name.startsWith("_")
        && name.tailOption is Some(namePostfix)
        && namePostfix.toIntOption is Some(index)
        && 0 <= index && index < arity
        && => Right([index, name])
    || => Left("invalid identifier: " + name)

Here are some nested multi-way matches with overlapping patterns and bound values:

    if e is {
        // ...
        || Lit(value) && Map.find_opt(value) is Some(result) => Some(result)
        // ...
        || { Lit(value) || Add(Lit(0), value) || Add(value, Lit(0)) } => {
            print_int(value);
            Some(value)
        }
        // ...
    }

The next few examples show UCS splits without the is operator. In my syntax I need to press a few more buttons but I think that’s OK.

    if x == 0 => "zero"
    || x == 1 => "unit"
    || => "?"

    if x == 0 => "null"
    || x > 0 => "positive"
    || => "negative"

    if predicate(0, 1) => "A"
    || predicate(2, 3) => "B"
    || => "C"

The first two can be written with is instead, but it’s not briefer:

    if x is {
        || 0 => "zero"
        || 1 => "unit"
        || => "?"
    }

    if x is {
        || 0 => "null"
        || 1.. => "positive"
        || => "negative"
    }

There’s little need for a split-anything feature when we have multi-way matches.

    if foo(u, v, w) is {
        || Some(x) && x is {
            || Left(_) => "left-defined"
            || Right(_) => "right-defined"
        }
        || None => "undefined"
    }

A more complete function:

    fn zip_with(f, xs, ys) {
        if [xs, ys] is {
            || [x :: xs, y :: ys] && zip_with(f, xs, ys) is Some(tail)
                => Some(f(x, y) :: tail)
            || [Nil, Nil] => Some(Nil)
            || => None
        }
    }

Another fragment of the expression evaluator:

    if e is {
        // ...
        || Var(name) && Map.find_opt(env, name) is {
            || Some(Right(value)) => Some(value)
            || Some(Left(thunk)) => Some(thunk())
        }
        || App(lhs, rhs) => // ...
        // ...
    }

This expression is used in the paper to show how a UCS split is desugared:

    if Pair(x, y) is {
        || Pair(Some(xv), Some(yv)) => xv + yv
        || Pair(Some(xv), None) => xv
        || Pair(None, Some(yv)) => yv
        || Pair(None, None) => 0
    }

The desugaring in the paper introduces a lot of redundant tests. I would desugar straightforwardly, then rely on later optimizations to eliminate other redundancies such as the construction and immediate destruction of the pair:

    if Pair(x, y) is Pair(xx, yy) && xx is {
        || Some(xv) && yy is {
            || Some(yv) => xv + yv
            || None => xv
        }
        || None && yy is {
            || Some(yv) => yv
            || None => 0
        }
    }

Skipping ahead to the “non-trivial example” in the paper’s fig. 11:

    if e is {
        || Var(x) && context.get(x) is {
            || Some(IntVal(v)) => Left(v)
            || Some(BoolVal(v)) => Right(v)
        }
        || Lit(IntVal(v)) => Left(v)
        || Lit(BoolVal(v)) => Right(v)
        // ...
    }

The next example in the paper compares C# relational patterns. Rust’s range patterns do a similar job, with the caveat that Rust’s ranges don’t have a syntax for exclusive lower bounds.

    fn classify(value) {
        if value is {
            || .. -4.0 => "too low"
            || 10.0 .. => "too high"
            || NaN => "unknown"
            || => "acceptable"
        }
    }

I tend to think relational patterns are better syntax than ranges. With relational patterns I can rewrite an earlier example like,

    if foo is {
        || Some(< 0) => -1
        || Some(> 0) => +1
        || { Some(0) || None } => 0
    }

I think with the UCS I would have to name the Some(_) value to be able to compare it, which suggests that relational patterns can be better than UCS split relational operators. Prefix-unary relational operators are also a nice way to write single-ended ranges in expressions. We could simply write both ends to get a complete range, like

    >= lo < hi

or like

    if value is > -4.0 < 10.0 => "acceptable"
    || => "far out"

Near the start I quoted a normalize example that illustrates a left-aligned UCS expression.
The penultimate version drifts right like the Scala version:

    if normalize(tp1) is {
        || Bot => Bot
        || tp1_n && normalize(tp2) is {
            || Bot => Bot
            || tp2_n && merge(tp1_n, tp2_n) is {
                || Some(tp) => tp
                || None => glb(tp1_n, tp2_n)
            }
        }
    }

But a more Rusty style shows the benefits of early returns (especially the terse ? operator) and monadic combinators.

    let tp1 = normalize(tp1)?;
    let tp2 = normalize(tp2)?;
    merge(tp1, tp2)
        .unwrap_or_else(|| glb(tp1, tp2))

antepenultimate breath

When I started writing these notes, my penultimate conditional syntax was little more than a sketch of an idea. Having gone through the previous section’s exercise, I think it has turned out better than I thought it might. The extra nesting from multi-way match braces doesn’t seem to be unbearably heavyweight. However, none of the examples have bulky then or else blocks, which are where the extra nesting is more likely to be annoying. But then, as I said before, it’s comparable to a Rust match:

    match scrutinee {
        pattern => {
            consequent
        }
    }

    if scrutinee is {
        || pattern => {
            consequent
        }
    }

The || lines down the left margin are noisy, but hard to get rid of in the context of a curly-brace language. I can’t reduce them to | like OCaml because what would I use for bitwise OR? I don’t want presence or absence of flow control to depend on types or context. I kind of like the Prolog / Erlang punctuation, , for && and ; for ||, but that’s well outside what’s legible to mainstream programmers. So, dunno.

Anyway, I think I’ve successfully found a syntax that does most of what UCS does, but in a much simpler fashion.
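As promised, here is a hand-desugared sketch of the then / else lowering in today’s Rust, which has labeled blocks with break-with-value. It is only an illustration: real Rust won’t accept 'if as a label (if is a keyword), so the label name 'cond is mine, and I’ve used if let for the pattern test.

    fn main() {
        let foo: Option<i32> = Some(-5);
        // Desugaring, by hand, of:
        //   if foo is { || Some(n) && n < 0 => -1
        //               || Some(n) && n > 0 => +1
        //               || => 0 }
        let result = 'cond: {
            if let Some(n) = foo {
                if n < 0 { break 'cond -1 } // then ⟼ && break 'cond expr
                if n > 0 { break 'cond 1 }
            }
            0 // trailing else arm: || break 'cond 0
        };
        assert_eq!(result, -1);
    }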
Recently, Alex Kladov wrote on the TigerBeetle blog about swarm testing data structures. It’s a neat post about randomized testing with Zig. I wrote a comment with an idea that was new to Alex @matklad, so I’m reposting a longer version here.

- differential testing
- problems
- grow / shrink
- random elements
- element-wise testing
- test loop
- data structure size
- invariants
- performance
- conclusion

differential testing

A common approach to testing data structures is to write a second reference implementation that has the same API but is simpler and/or more obviously correct, though it uses more memory or is slower or less concurrent or otherwise not up to production quality. Then, run the production implementation and the reference implementation on the same sequence of operations, and verify that they produce the same results. Any difference is either a bug in the production implementation (probably) or a bug in the reference implementation (unlucky) or a bug in the tests (unfortunate). This is a straightforward differential testing pattern.

problems

There are a couple of difficulties with this kind of basic differential testing.

grow / shrink

The TigerBeetle article talks about adjusting the probabilities of different operations on the data structure to try to explore more edge cases. To motivate the idea, the article talks about adjusting the probabilities of adding or deleting items: if adding and deleting have equal probability, then the test finds it hard to grow the data structure to interesting sizes that might expose bugs. Unfortunately, if the probability of add is greater than del, then the data structure tends to grow without bound. If the probability of del is greater than add, then it tries to shrink from nothing: worse than equal probabilities! They could preload the data structure to test how it behaves when it shrinks, but a fixed set of probabilities per run is not good at testing both growth and shrinkage in the same test run on the same data structure.

One way to improve this kind of test is to adjust the probability of add and del dynamically: make add more likely when the data structure is small, and del more likely when it is big. And maybe make add more likely in the first half of a test run and del more likely in the second half.

random elements

The TigerBeetle article glosses over the question of where the tests get fresh elements to add to the data structure. And its example is chosen so it doesn’t have to think about which elements get deleted. In my experience writing data structures for non-garbage-collected languages, I had to be more deliberate about how to create and destroy elements. That led to a style of test that’s more element-centric, as Alex described it.

element-wise testing

Change the emphasis so that instead of testing that two implementations match, test that one implementation obeys the expected behaviour. No need to make a drop-in replacement reference implementation!

What I typically do is pre-allocate an array of elements, with slots that I can set to keep track of how each element relates to the data structure under test. The most important property is whether the element has been added or deleted, but there might be others related to ordering of elements, or values associated with keys, and so on.

test loop

Each time round the loop, choose at random an element from the array, and an action such as add / del / get / … Then, if it makes sense, perform the operation on the data structure with the element.
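Here is a minimal sketch of such a loop for a set-like structure. Everything in it is a placeholder of mine: SetUnderTest and its add / del / get methods, the single tracked per-element property, and the use of the rand crate.

    use rand::Rng;

    struct TrackedElem {
        key: u64,
        inserted: bool, // what we believe about the structure under test
    }

    // SetUnderTest is a stand-in for the real data structure.
    fn test_loop(set: &mut SetUnderTest, elems: &mut [TrackedElem], iters: usize) {
        let mut rng = rand::thread_rng();
        for _ in 0..iters {
            // Choose a random element and a random action.
            let e = &mut elems[rng.gen_range(0..elems.len())];
            match rng.gen_range(0..3) {
                0 => {
                    // add: a duplicate add is expected to report failure.
                    assert_eq!(set.add(e.key), !e.inserted);
                    e.inserted = true;
                }
                1 => {
                    // del: deleting an absent element is expected to fail too.
                    assert_eq!(set.del(e.key), e.inserted);
                    e.inserted = false;
                }
                _ => {
                    // get: membership must agree with our tracked property.
                    assert_eq!(set.get(e.key), e.inserted);
                }
            }
        }
    }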
For example, you might skip an add action if the element is already in the data structure – unless, as in the sketch above, you can try to add it anyway and expect an error.

data structure size

This strategy tends to grow the data structure until about 50% of the pre-allocated elements are inserted, then it makes a random walk around this 50% point. Random walks can diverge widely from their central point both in theory and in practice, so this kind of testing is reasonably effective at both growing and (to a lesser extent) shrinking the data structure.

invariants

I usually check some preconditions before an action, to verify that the data structure matches the expected properties of the chosen element. This can help to detect earlier that an action on one element has corrupted another element. After performing the action and updating the element’s properties, I check the updated properties as a postcondition, to make sure the action had the expected effects.

performance

John Regehr’s great tutorial, how to fuzz an ADT implementation, recommends writing a checkRep() function that thoroughly verifies a data structure’s internal consistency. A checkRep() function is a solid gold testing tool, but it is O(n) at least and typically very slow. If you call checkRep() frequently during testing, your tests slow down dramatically as your data structure gets larger. I like my per-element invariants to be local and ideally O(1) or O(log n) at worst, so they don’t slow down the tests too much. (There’s a sketch contrasting the two kinds of check at the end of these notes.)

conclusion

Recently I’ve used this pattern to exhibit concurrency bugs in an API that’s hard to make thread-safe. Writing the tests has required some cunning to work out what invariants I can usefully maintain and test; what variety of actions I can use to stress those invariants; and what mix of elements + actions I need so that my tests know which properties of each element should be upheld and which can change.

I’m testing multiple implementations of the same API, trying to demonstrate which is safest. Differential testing can tell me that implementations diverge, but not which is correct, whereas testing properties and invariants more directly tells me whether an implementation does what I expect. (Or gives me a useless answer when my tests are weak.) Which is to say that this kind of testing is a fun creative challenge. I find it a lot more rewarding than example-based testing.
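The promised sketch of that contrast, reusing the hypothetical SetUnderTest and TrackedElem from the earlier test loop; the bodies are stand-ins, not a real implementation.

    // O(n) at least: walks the whole structure, checking every element
    // and whatever internal consistency the structure needs. Solid gold,
    // but too slow to call on every iteration of the test loop.
    fn check_rep(set: &SetUnderTest, elems: &[TrackedElem]) {
        for e in elems {
            assert_eq!(set.get(e.key), e.inserted);
        }
        // ... plus structure-wide checks: counts, links, ordering, ...
    }

    // O(1) or O(log n): check only the properties local to one element,
    // as a precondition before an action and a postcondition after it.
    fn check_elem(set: &SetUnderTest, e: &TrackedElem) {
        assert_eq!(set.get(e.key), e.inserted);
    }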
I have added syntax highlighting to my blog using tree-sitter. Here are some notes about what I learned, with some complaining.

- static site generator
- markdown ingestion
- highlighting
- incompatible?!
- highlight names
- class names
- styling code
- results
- future work
- frontmatter
- templates
- feed style
- highlight quality

static site generator

I moved my blog to my own web site a few years ago. It is produced using a scruffy Rust program that converts a bunch of Markdown files to HTML using pulldown-cmark, and produces complete pages from Handlebars templates.

Why did I write another static site generator? Well, partly as an exercise when learning Rust. Partly, since I wrote my own page templates, I’m not going to benefit from a library of existing templates. On the contrary, it’s harder to create new templates that work with a general-purpose SSG than to write my own simpler site-specific SSG. It’s miserable to write programs in template languages. My SSG can keep the logic in the templates to a minimum, and do all the fiddly stuff in Rust. (Which is not very fiddly, because my site doesn’t have complicated navigation – compared to the multilevel menus on www.dns.cam.ac.uk for instance.)

markdown ingestion

There are a few things to do to each Markdown file:

- split off and deserialize the YAML frontmatter
- find the <cut> or <toc> marker that indicates the end of the teaser / where the table of contents should be inserted
- augment headings with self-linking anchors (which are also used by the ToC)

Before this work I was using regexes to do all these jobs, because that allowed me to treat pulldown-cmark as a black box: Markdown in, HTML out. But for syntax highlighting I had to be able to find fenced code blocks. It was time to put some code into the pipeline between pulldown-cmark’s parser and renderer. And if I’m using a proper parser I can get rid of a few regexes: after some hacking, now only the YAML frontmatter is handled with a regex.

Sub-heading linkification and ToC construction are fiddly and more complicated than they were before. But they are also less buggy: markup in headings actually works now! Compared to the ToC, it’s fairly simple to detect code blocks and pass them through a highlighter. You can look at my Markdown munger here. (I am not very happy with the way it uses state, but it works.)

highlighting

As well as the tree-sitter-highlight documentation I used femark as an example implementation. I encountered a few problems.

incompatible?!

I could not get the latest tree-sitter-highlight to work as described in its documentation. I thought the current tree-sitter crates were incompatible with each other! For a while I downgraded to an earlier version, but eventually I solved the problem. Where the docs say,

    let javascript_language = tree_sitter_javascript::language();

They should say:

    let javascript_language = tree_sitter::Language::new(
        tree_sitter_javascript::LANGUAGE
    );

highlight names

I was offended that tree-sitter-highlight seems to expect me to hardcode a list of highlight names, without explaining where they come from or what they mean. I was doubly offended that there’s an array of STANDARD_CAPTURE_NAMES but it isn’t exported, and doesn’t match the list in the docs. You mean I have to copy and paste it? Which one?!

There’s some discussion of highlight names in the tree-sitter manual’s “syntax highlighting” chapter, but that is aimed at people who are writing a tree-sitter grammar, not people who are using one.
Eventually I worked out that tree_sitter_javascript::HIGHLIGHT_QUERY in the tree-sitter-highlight example corresponds to the contents of a highlights.scm file. Each @name in highlights.scm is a highlight name that I might be interested in. In principle I guess different tree-sitter grammars should use similar highlight names in their highlights.scm files? (Only to a limited extent, it turns out.)

I decided the obviously correct list of highlight names is the list of every name defined in the HIGHLIGHT_QUERY. The query is just a string so I can throw a regex at it and build an array of the matches. This should make the highlighter produce <span> wrappers for as many tokens as possible in my code, which might be more than necessary, but I don’t have to style them all.

class names

The tree-sitter-highlight crate comes with a lightly-documented HtmlRenderer, which does much of the job fairly straightforwardly. The fun part is the attribute_callback. When the HtmlRenderer is wrapping a token, it emits the start of a <span then expects the callback to append whatever HTML attributes it thinks might be appropriate. Uh, I guess I want a class="..." here?

Well, the highlight names work a little bit like class names: they have dot-separated parts which tree-sitter-highlight can match more or less specifically. (However I am telling it to match all of them.) So I decided to turn each dot-separated highlight name into a space-separated class attribute. (There’s a small sketch of this mapping at the end of these notes.) The nice thing about this is that my Rust code doesn’t need to know anything about a language’s tree-sitter grammar or its highlight query. The grammar’s highlight names become CSS class names automatically.

styling code

Now I can write some simple CSS to add some colours to my code. I can make type names green,

    code span.hilite.type {
        color: #aca;
    }

If I decide builtin types should be cyan like keywords I can write,

    code span.hilite.type.builtin,
    code span.hilite.keyword {
        color: #9cc;
    }

results

You can look at my tree-sitter-highlight wrapper here. Getting it to work required a bit more creativity than I would have preferred, but it turned out OK. I can add support for a new language by adding a crate to Cargo.toml and a couple of lines to hilite.rs – and maybe some CSS if I have not yet covered its highlight names. (Like I just did to highlight the CSS above!)

future work

While writing this blog post I found myself complaining about things that I really ought to fix instead.

frontmatter

I might simplify the per-page source format knob so that I can use pulldown-cmark’s support for YAML frontmatter instead of a separate regex pass. This change will be easier if I can treat the html pages as Markdown without mangling them too much (is Markdown even supposed to be idempotent?). More tricky are a couple of special case pages whose source is Handlebars instead of Markdown.

templates

I’m not entirely happy with Handlebars. It’s a more powerful language than I need – I chose Handlebars instead of Mustache because Handlebars works neatly with serde. But it has a dynamic type system that makes the templates more error-prone than I would like. Perhaps I can find a more static Rust template system that takes advantage of the close coupling between my templates and the data structure that describes the web site. However, I like my templates to be primarily HTML with a sprinkling of insertions, not something weird that’s neither HTML nor Rust.

feed style

There’s no CSS in my Atom feed, so code blocks there will remain unstyled.
I don’t know if feed readers accept <style> tags or if it has to be inline styles. (That would make a mess of my neat setup!)

highlight quality

I’m not entirely satisfied with the level of detail and consistency provided by the tree-sitter language grammars and highlight queries. For instance, in the CSS above the class names and property names have the same colour because the CSS highlights.scm gives them the same highlight name. The C grammar is good at identifying variables, but the Rust grammar is not. Oh well, I guess it’s good enough for now. At least it doesn’t involve Javascript.
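The promised sketch of the highlight-name-to-class mapping. The function name is mine, and the exact attribute_callback signature varies between tree-sitter-highlight versions, so this shows only the string transformation:

    // Turn a dot-separated highlight name like "type.builtin" into
    // class="hilite type builtin", so CSS can match it more or less
    // specifically.
    fn class_attribute(highlight_name: &str) -> String {
        format!(r#"class="hilite {}""#, highlight_name.replace('.', " "))
    }

    fn main() {
        assert_eq!(
            class_attribute("type.builtin"),
            r#"class="hilite type builtin""#
        );
    }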
Last year I wrote about inlining just the fast path of Lemire’s algorithm for nearly-divisionless unbiased bounded random numbers. The idea was to reduce code bloat by eliminating lots of copies of the random number generator in the rarely-executed slow paths. However a simple split prevented the compiler from being able to optimize cases like pcg32_rand(1 << n), so a lot of the blog post was toying around with ways to mitigate this problem.

On Monday while procrastinating a different blog post, I realised that it’s possible to do better: there’s a more general optimization which gives us the 1 << n special case for free.

nearly divisionless

Lemire’s algorithm has about 4 neat tricks:

1. use multiplication instead of division to reduce the output of a random number generator modulo some limit
2. eliminate the bias in (1) by (counterintuitively) looking at the lower digits
3. fun modular arithmetic to calculate the reject threshold for (2)
4. arrange the reject tests to avoid the slow division in (3) in most cases

The nearly-divisionless logic in (4) leads to two copies of the random number generator, in the fast path and the slow path. Generally speaking, compilers don’t try to deduplicate code that was written by the programmer, so they can’t simplify the nearly-divisionless algorithm very much when the limit is constant.

constantly divisionless

Two points occurred to me:

- when the limit is constant, the reject threshold (3) can be calculated at compile time
- when the division is free, there’s no need to avoid it using (4)

These observations suggested that when the limit is constant, the function for random numbers less than a limit should be written:

    static inline uint32_t
    pcg32_rand_const(pcg32_t *rng, uint32_t limit) {
        uint32_t reject = -limit % limit;
        uint64_t sample;
        do sample = (uint64_t)pcg32_random(rng) * (uint64_t)limit;
        while ((uint32_t)(sample) < reject);
        return ((uint32_t)(sample >> 32));
    }

This has only one call to pcg32_random(), saving space as I wanted, and the compiler is able to eliminate the loop automatically when the limit is a power of two. The loop is smaller than a call to an out-of-line slow path function, so it’s better all round than the code I wrote last year.

algorithm selection

As before it’s possible to automatically choose the constantly-divisionless or nearly-divisionless algorithms depending on whether the limit is a compile-time constant or run-time variable, using arcane C tricks or GNU C __builtin_constant_p().

I have been idly wondering how to do something similar in other languages. Rust isn’t very keen on automatic specialization, but it has a reasonable alternative. The thing to avoid is passing a runtime variable to the constantly-divisionless algorithm, because then it becomes never-divisionless. Rust has a much richer notion of compile-time constants than C, so it’s possible to write a method like the following, which can’t be misused:

    pub fn upto<const LIMIT: u32>(&mut self) -> u32 {
        let reject = LIMIT.wrapping_neg().wrapping_rem(LIMIT);
        loop {
            let (lo, hi) = self.get_u32().embiggening_mul(LIMIT);
            if lo < reject {
                continue;
            } else {
                return hi;
            }
        }
    }

    assert!(rng.upto::<42>() < 42);

(embiggening_mul is my stable replacement for the unstable widening_mul API.)

This is a nugatory optimization, but there are more interesting cases where it makes sense to choose a different implementation for constant or variable arguments – that is, the constant case isn’t simply a constant-folded or partially-evaluated version of the variable case.
Regular expressions might be lex-style or pcre-style, for example. It’s a curious question of language design whether it should be possible to write a library that provides a uniform API that automatically chooses constant or variable implementations, or whether the user of the library must make the choice explicit. Maybe I should learn some Zig to see how its comptime works.
More in programming
Thinking about moving to Japan? You’re not alone—Japan is a popular destination for those hoping to move abroad. What’s more, Japan actually needs more international developers. But how easy is it to immigrate to and work in Japan? Scores of videos on social media warn that living in Japan is quite different from holidaying here, and graphic descriptions of exploitative companies also create doubt.

The truth is that Japan is not the easiest country to immigrate to, nor is it the hardest. Some Japanese tech companies and developer roles offer great work-life balance and good compensation; others do not. Based on other developers’ experiences, you’ll thrive here if you:

- Are an experienced developer
- Value safety, good food, and convenience over a high salary
- Are willing to invest time and effort into learning Japanese over the long term

Read on to discover if Japan is a good fit for you, and the best ways to get a visa and begin your life here.

What is it like working as a developer in Japan?

TokyoDev conducts an annual survey of international developers living in Japan. Many of the questions in TokyoDev’s 2024 survey specifically addressed respondents’ work environments.

Compensation

When TokyoDev asked about “workplace difficulties” in the 2024 survey, 45% of respondents said that “compensation” was their number one problem at work. Overall, compensation for developers in Japan is far lower than the US developer median salary of 120,000 USD (currently 17.5 million yen), but higher than the Indian developer median salary of 640,000 rupees (currently around 1.1 million yen).

Yet evaluating compensation for international developers in Japan, specifically, is trickier than you might expect. It’s hard to define an expected salary range because international developers tend to work in different companies and roles than the average Japanese developer. According to a 2024 survey conducted by the Japanese Ministry of Health, Labor and Welfare, the average annual salary of software engineers in Japan was 5.69 million yen. In a survey conducted that same year by TokyoDev, though, English-speaking international software developers in Japan enjoyed a median salary of 8.5 million yen. Of those international developers who responded, only 71% of them worked at a company headquartered in Japan, and almost 80% of them used English always or frequently, with 79% belonging to an engineering team with many other non-Japanese members.

Wages, then, are heavily influenced by a range of factors, but particularly by whether you’re working for a Japanese or international company. In general, 75% of the international developers surveyed made 6 million yen or more. The real question is, is that enough for you to be comfortable in Japan? The answer is likely to be yes, if you don’t have overseas financial obligations or dependents. If you do, you’ll want to look carefully at rent, grocery, and education prices in your area of choice to guesstimate the expense of your Japanese lifestyle.

Work-life balance

Japan has a tradition of long hours and overtime. The Financial Times reports that the Japanese government has taken many measures to reduce the phenomenon of death from overwork (過労死, karoushi), from capping overtime to 100 hours a month, to setting up a national hotline for employees to report abusive companies. The results seem mixed. The Financial Times article adds that in 2024, employees reported working illegal overtime at 44.5% of the 26,000 organizations investigated.
On the other hand, average working hours for men fell to below 45 hours per week, and for women to below 35, which is similar to average working hours in the US. Still, 72% of the developers surveyed by TokyoDev worked fewer than 40 hours a week. In addition, 70% of TokyoDev respondents cited work-life balance as their top workplace perk. The number of respondents happy with their working conditions came in just below that, at 69%. There was some correlation between hours worked and the type of employer, though. Employees at international subsidiaries were slightly more likely to enjoy shorter work weeks than those at Japanese companies.

Remote work

Remote work is still relatively new in Japan. Although more offices adopted the practice during Covid, many of them are now backtracking and requiring employees to return to the office, often with a hybrid schedule. While only 9% of TokyoDev respondents weren’t allowed any remote work, 76% of those required to work in-office were employed by Japan-headquartered companies. By contrast, of the 16% who worked fully remotely, only 57% worked for a Japanese company.

Those with the option to work remotely really enjoy it. When asked what their most important workplace benefit was, 49% of respondents answered “remote work,” outstripping every other answer by far.

Job security

A major plus of working in Japan is job security—which, given the waves of layoffs at American tech companies, may now seem extra appealing. It’s overwhelmingly difficult to fire or lay off an employee with a permanent contract (正社員, seishain) in Japan, due to labor laws designed to protect the individual. This may be why 54% of TokyoDev survey respondents named “job security” as their most important workplace perk.

Not every company will adhere to labor protection laws, and sometimes businesses pressure employees to “voluntarily” resign. Nonetheless, employees have significant legal recourse when companies attempt to fire them, change their contracts, or alter the current workplace conditions (sometimes, even if those conditions were never stated in writing).

Developer stories

TokyoDev regularly interviews developers working at our client companies, for information on both their specific positions and their general work environment. Our interviewees work with a variety of technology in many different roles, and at companies ranging from fintech enterprises like PayPay to game companies like Wizcorp.

Why do developers choose Japan?

In 2024 TokyoDev also asked developers, “What’s your favorite thing about Japan?” The results were:

- Safety: 21%
- Food: 13%
- Convenience: 11%
- Culture: 8%
- Peacefulness: 7%
- Nature: 5%

Interestingly, there was a strong correlation between the amount of time someone had lived in Japan and their answer. Those who had been in Japan three years or less more frequently chose “food” or “culture.” Those who’d lived in Japan for four or more years were significantly more likely to answer “safety” or “peacefulness.”

Safety

It’s true that Japan enjoys a lower crime rate than many developed nations. The Security Journal UK ranked it ninth in a list of the world’s twenty safest countries. In 2024, World Population Review selected Tokyo as the safest city in the world. The homicide rate in 2023 was only 0.23 per 100,000 people, and has been steadily declining since the nineties. There are a few women-specific concerns, such as sexual violence.
Nonetheless, the subjective experience of many women in the TokyoDev audience is that Japan feels safe; for example, they experience no trepidation walking around late at night.

Of course, crime statistics don’t take into account natural disasters, of which Japan has more than its fair share. Thanks to being located on the Ring of Fire, Japan regularly copes with earthquakes and volcanic activity, and its location in the Pacific means that it is also affected by typhoons and tsunamis. To compensate, Japan also takes natural disaster countermeasures extremely seriously. It’s certainly the only country I’ve been to that posts large-scale evacuation maps on the side of the street, stores emergency supply stockpiles in public parks, and often requires schoolchildren to keep earthquake safety headgear at their desks.

Food

Food is another major draw. Many respondents simply wrote that “food” or “fresh, affordable food” was their favorite thing about Japan, but a few listed specific dishes. Favorite Japanese foods of the TokyoDev audience include:

- Yakiniku (self-grilled meat)
- Ramen
- Peaches
- Sushi
- Hiroshima-style okonomiyaki (savory pancake)
- Curry rice
- Onigiri (rice balls)

Of those, sushi was mentioned most often. One respondent also answered the question with “drinking,” if you think that should count!

Personal experiences

Our contributors have also shared their personal experiences of moving to and working in Japan. We’ve got articles from Filipino, Indonesian, Australian, Vietnamese, and Mongolian developers, as well as others sharing what it’s like to work as a female software developer in Japan, or to live in Japan with a disability.

Why shouldn’t you live in Japan?

Safety, food, convenience, and culture are the most commonly-cited upsides of living in Japan. The downsides are the necessity of learning the language and some strict, yet often-unspoken, cultural expectations.

Language

Fluency in Japanese is not strictly necessary to live or work in Japan. Access to government services for you and your family, such as Japanese public school, is possible even if you speak little Japanese. (That doesn’t mean that most city hall clerks speak English; usually they’ll either locate a translator, or work with you via a translation app.)

Nonetheless, TokyoDev’s 2024 survey showed that language ability was highly correlated with social success in Japan. In particular, 56% of all respondents were happy or very happy with their adjustment to Japanese culture. Breaking down that number, though, 76% of those with fluent or native Japanese ability reported being happy with their cultural adjustment, while only 34% of those with little or no Japanese ability were similarly happy. The same held true for social life satisfaction: 59% of those with fluent or native Japanese ability were happy or very happy with their social life, compared to 42% of those who don’t speak much Japanese.

While English study is compulsory in Japan and starts in elementary school, as of 2025, only 28% of Japanese people speak English, and most of them can’t converse with high fluency. Living and working in Japan is possible without Japanese, but it’s hard to integrate, make friends, and participate in cultural activities if you can’t communicate with the locals.

Cultural expectations

As mentioned above, fluency in Japanese is closely allied to fluency in Japanese culture. At the same time, one does not necessarily imply the other.
It’s possible to be fluent in Japanese, but still not grasp many of the unspoken rules your Japanese friends, neighbors, and coworkers operate by. Japan’s culture is both high-context and specifically averse to confrontation and outspokenness; if you get it “wrong,” people aren’t likely to tell you so. Japanese culture also values conformity: as the saying goes, “the nail that sticks up gets hammered down.” While there are hints of things changing, with many Japanese companies saying support for greater diversity is necessary, minorities or those who are different may experience pressure to fit in.

Introspection is required: are you the kind of person who’s adept at “reading the room,” a highly-valued quality in Japan? Conversely, are you self-confident enough to not sweat the small stuff? Either of these personality types may do well in Japan, but if social acceptance is very important to you, and you’re also uncomfortable with feeling occasionally awkward or uncertain, then you may struggle more to adjust.

I want to go! How can I get there?

If you’ve decided to immigrate to Japan, there are a number of ways to acquire a work visa. The simplest way is to get hired by a company operating in Japan. Alternatively, you can start your own business in Japan, come over on a Working Holiday, or even—if you’re very determined—arrive first as an English teacher. Let’s begin with the most straightforward route: getting hired as a developer.

Getting a developer job in Japan

As mentioned before, Japan needs more international developers. Some types of developers, though, will find it easier to get a job in Japan. In particular, companies in Japan are looking for the following:

- Senior developers. Companies are particularly interested in those with management experience and soft skills such as communication and leadership.
- Backend developers. This is one of the most widely-available roles for those who don’t speak Japanese.
- Developers who know Python. Python is one of Japan’s top in-demand languages.
- AI and Machine Learning Specialists. Japan is leaning hard on AI to help cope with demographic changes.
- Those who already know, or are willing to learn, Japanese.

Combining those criteria, an experienced developer who speaks Japanese should have little difficulty finding a job! If you’re none of these things, you don’t need to give up—you just need to be patient, flexible, and willing to think outside the box. As Mercari Senior Technical Recruiter Clement Chidiac told me, “I know a bunch of people that managed to land a job because they’ve tried harder, going to meetups, reaching out to people, networking, that kind of thing.” Edmund Ho, Principal Consultant at Talisman Corporation, agreed that overseas candidates hoping to work in Japan for the first time face a tough road. He believes candidates should maintain a realistic, but optimistic, view of the process. “Keep a longer mindset,” he suggested. “Maybe you don’t get an offer the first year, but you do the second year.”

“Stepping-stone” jobs

Candidates from overseas do face a severe disadvantage: many companies, even those founded by non-Japanese people, are only open to developers who already live in Japan. Although getting a work visa for an overseas employee is cheaper and easier in Japan than in many countries, it still presents a barrier some organizations are reluctant to overcome. By contrast, once you’re already on the ground, more companies will be interested in your skills.
This is why some developers settle on a “stepping-stone” position—in other words, a job that may not be all you hoped for, but that is willing to sponsor your visa and bring you into the country. Here’s where some important clarification on Japanese work visas is required.

Work visas

The most common visa for developers is the Engineer/Specialist in Humanities/International Services visa, a broad-category visa for foreign workers in those fields. To qualify, a developer must have a college degree, or have ten years of work experience, or have passed an approved IT exam.

Another relatively common visa for high-level developers is the Highly-Skilled Professional (HSP) visa. To acquire it, applicants must score at least 70 points on an assessment scale that addresses age, education level, Japanese level, income, and more. The HSP visa has many advantages, but there is one important difference between it and the more standard Engineer visa. The Engineer/Specialist in Humanities/International Services visa is not tied to a specific company. It grants you the legal right to work within those fields for a specific period of time in Japan. The Highly-Skilled Foreign Professional visa, on the other hand, is tied to a specific employer. If you want to change jobs, you’ll need to update your residency status with immigration.

Some unscrupulous companies will try to claim that because they sponsored your Engineer/Specialist in Humanities/International Services visa, you are obligated to remain with their company or risk being deported. This is not the case. If you do leave your job without another one lined up, you have three months to find another before you may be at risk of deportation.

In addition, the fields of work covered by the Engineer/Specialist in Humanities/International Services visa are incredibly broad, and include everything from sales to product development to language instruction. As TokyoDev specifically confirmed with immigration, you can even come to Japan as an English instructor, then later work as a developer, without needing to alter your visa. Those with the HSP visa will need to go to immigration and alter their residency status each time they change roles. However, if you have the points and qualifications for an HSP visa, that means you’re also eligible for Permanent Residency within one to three years. Once you’ve obtained Permanent Residency, you’re free to pursue whatever sort of employment you like.

International or Japanese company?

As you begin your job hunt, you’ll hopefully receive responses from several sorts of companies: Japanese companies that primarily hire Japanese people, Japanese companies with designated multinational developer teams, companies that were founded in Japan but nonetheless hire international developers for a variety of positions, and international subsidiaries. There are advantages and disadvantages to working with mostly-Japanese or mostly-international companies.

Japanese companies

The more Japanese a company is—both in philosophy and personnel—the more you’ll need Japanese language skills to thrive there. It’s true that a number of well-established Japanese tech companies are now creating developer teams designed to be multinational from the outset: typically, these are very English-language friendly. Some organizations, such as Money Forward, have even adopted English as the official company language.
However, this often results in an institutional language barrier between development teams and the rest of the company, which is usually staffed by Japanese speakers. Developers are still encouraged to learn Japanese, particularly as they climb the promotional ladder, to help facilitate interdepartmental communication. Some companies, such as DeepX and Beatrust, either offer language classes themselves or provide a stipend for language learning.

In addition to the language, you’ll also need to become “fluent” in Japanese business norms, which can be much more rigid and hierarchical than American or European company cultures. For example, at introductory drinking parties (themselves a potential surprise for many!), it is customary for new employees or women employees to go around with a bottle of beer and pour glasses for their managers and the company’s senior management. As mentioned in the cultural expectations section, most Japanese people won’t correct you even if you’re doing it all wrong, which leaves foreigners to discover their gaffes via trial-and-error.

The advantage here is that you’ll be pressured, hopefully in a good way, to adapt swiftly to the Japanese language and business culture. There’s a sink-or-swim element to this approach, but if you’re serious about settling in Japan, then this “downside” could benefit you in the long run.

Finally, there is the above-mentioned issue of compensation. On average, international companies pay more than Japanese ones; the median salary difference is around three million yen per year. Specific roles may be paid at higher rates, though, and most Japanese companies do offer bonuses. Many Japanese companies also offer other perks, such as housing stipends, spouse and child allowances, etc. If you receive an offer, it’s worth examining the whole compensation package before you make a decision.

International companies

The advantages of working either for an international company, or for a Japanese company that already employs many non-Japanese people, are straightforward: you can usually communicate in English, you already understand most of the business norms, and such companies typically pay developers more.

You do run the risk of getting stuck in a rut, though. As mentioned earlier, TokyoDev found in its own survey that the correlation between Japanese language skills and social life satisfaction is high. You can of course study Japanese in your free time—and many do—but the more your work environment and social life revolve around English, the more difficult acquiring Japanese becomes.

Want a job? Start here!

If you’re ready to begin your job hunt, you can start with the TokyoDev job board. TokyoDev only works with companies we feel good about sending applicants to, and the job board includes positions that don’t require Japanese and that accept candidates from abroad.

Other alternatives

These visas don’t lead directly to working as a software developer in Japan, but can still help you get your foot in the door.

DIY options

If you prefer to be your own boss, there are several visas that allow you to set up a business in Japan. The Business Manager visa is typically good for one year, although repeated applicants may get longer terms. Applicants should have five million yen in a bank account when they apply, and there are some complicated requirements for getting and keeping the visa, such as maintaining an office, paying yourself a minimum salary, following proper accounting procedures, etc.
The Startup visa is another option if the Business Manager visa appeals to you but you don’t yet have the funds or connections to make it happen. You’ll be granted the equivalent of a Business Manager visa for up to one year so that you can launch your business in Japan. Working Holiday visa This is the path our own founder Paul McMahon took to get his first developer job in Japan. If you meet various qualifications, and you belong to a country that has a Working Holiday visa agreement with Japan, you can come to Japan for a period of one year and do work that is “incidental” to your holiday. In practice, this means you can work almost any job except for those that are considered “immoral” (bars, clubs, gambling, etc.). The Working Holiday visa is a great opportunity for those who have the option. It allows you to experience living and working in Japan without any long-term commitments, and it also permits you to job-hunt freely without time or other visa constraints. J-Find visa The J-Find visa is a one-year visa intended to let graduates of top universities job-hunt or prepare to found a start-up in Japan. To qualify, applicants should have a degree from a university ranked in the top 100 by at least two world university rankings (or have completed a graduate course at such a university), have graduated within five years of the application date, and have at least 200,000 yen for initial living expenses. TokyoDev contributor Oguzhan Karagözoglu received a J-Find visa, though he did run into some difficulties, particularly given immigration’s unfamiliarity with this relatively new type of visa. Digital Nomad visa This is another new visa category that allows foreigners from specific countries, who must make at least 10 million yen a year, to work remotely from Japan for six months. Given that the application process alone can take months, the visa isn’t extendable or renewable, and you’re not granted residency, it’s questionable whether the pay-off is worth the effort. Still, if you have the option to work remotely and want to test out living in Japan before committing long-term, this is one way to do that. TokyoDev contributor Christian Mack was not only one of the first to acquire the Digital Nomad visa, but has since opened a consultancy to help others through the process. Conclusion If your takeaway from this article is, “Japan, here I come!” then there are more TokyoDev articles that can help you on your way. For example, if you want to bring your pets with you, you should know that you need to start preparing the import paperwork up to seven months in advance. If you’re ready now to start applying for jobs, check out the TokyoDev job board. You’ll also want to look at how to write a resume for a job in Japan, and our industry insider advice on passing the resume screening process. These tips for interviewing at Japanese tech companies would be useful, and when you’re ready for it, see this guide to salary negotiations. Once you’ve landed that job, we’ve got articles on everything from bringing your family with you to getting your first bank account and apartment. In addition, the TokyoDev Discord hosts regular discussions on all these topics and more. It’s a great chance to make developer friends in Japan before you ever set foot in the country. Once you are here, you can join some of Japan’s top tech meetups, including many organized by TokyoDev itself. We look forward to seeing you soon!
We go over the "Wake up, Remix!" article by the Remix team, talk about their decisions moving forward, and speculate on what's coming next.
Systems Distributed I'll be speaking at Systems Distributed next month! The talk is brand new and will aim to showcase some of the formal methods mental models that would be useful in mainstream software development. It has added some extra stress on my schedule, though, so expect the next two monthly releases of Logic for Programmers to be mostly minor changes. What does "Undecidable" mean, anyway Last week I read Against Curry-Howard Mysticism, which is a solid article I recommend reading. But this newsletter is actually about one comment: I like to see posts like this because I often feel like I can’t tell the difference between BS and a point I’m missing. Can we get one for questions like “Isn’t XYZ (Undecidable|NP-Complete|PSPACE-Complete)?” I've already written one of these for NP-complete, so let's do one for "undecidable". Step one is to pull a technical definition from the book Automata and Computability: A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (pg 220) Step two is to translate the technical computer science definition into more conventional programmer terms. Warning, because this is a newsletter and not a blog post, I might be a little sloppy with terms. Machines and Decision Problems In automata theory, all inputs to a "program" are strings of characters, and all outputs are "true" or "false". A program "accepts" a string if it outputs "true", and "rejects" if it outputs "false". You can think of this as automata studying all pure functions of type f :: string -> boolean. Problems solvable by finding such an f are called "decision problems". This covers more than you'd think, because we can bootstrap more powerful functions from these. First, as anyone who's programmed in bash knows, strings can represent any other data. Second, we can fake non-boolean outputs by instead checking if a certain computation gives a certain result. For example, I can reframe the function add(x, y) = x + y as a decision problem like this: IS_SUM(str) { x, y, z = split(str, "#") return x + y == z } Then because IS_SUM("2#3#5") returns true, we know 2 + 3 == 5, while IS_SUM("2#3#6") is false. Since we can bootstrap parameters out of strings, I'll just say it's IS_SUM(x, y, z) going forward. A big part of automata theory is studying different models of computation with different strengths. One of the weakest is called "DFA". I won't go into any details about what DFA actually can do, but the important thing is that it can't solve IS_SUM. That is, if you give me a DFA that takes inputs of form x#y#z, I can always find an input where the DFA returns true when x + y != z, or an input which returns false when x + y == z. It's really important to keep this model of "solve" in mind: a program solves a problem if it correctly returns true on all true inputs and correctly returns false on all false inputs. (total) Turing Machines A Turing Machine (TM) is a particular type of computation model. It's important for two reasons: By the Church-Turing thesis, a Turing Machine is the "upper bound" of how powerful (physically realizable) computational models can get. This means that if an actual real-world programming language can solve a particular decision problem, so can a TM. 
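To make the reframing concrete, here is IS_SUM as a short runnable sketch. Python is my stand-in for the post's pseudocode, not anything the newsletter mandates; the #-separated string encoding is taken straight from the example above.

def is_sum(s: str) -> bool:
    # Accept strings of the form "x#y#z" exactly when x + y == z.
    x, y, z = (int(part) for part in s.split("#"))
    return x + y == z

assert is_sum("2#3#5")      # accepts: 2 + 3 == 5
assert not is_sum("2#3#6")  # rejects: 2 + 3 != 6

The function is total (it halts on every well-formed input) and boolean-valued, which is all a decision problem asks for.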
Conversely, if the TM can't solve it, neither can the programming language.1 It's possible to write a Turing machine that takes a textual representation of another Turing machine as input, and then simulates that Turing machine as part of its computations. Property (1) means that we can move between different computational models of equal strength, proving things about one to learn things about another. That's why I'm able to write IS_SUM in pseudocode instead of writing it in terms of the TM computational model (and why I was able to use split for convenience). Property (2) does several interesting things. First of all, it makes it possible to compose Turing machines. Here's how I can roughly ask if a given number is the sum of two primes, with "just" addition and boolean functions: IS_SUM_TWO_PRIMES(z): x := 1 y := 1 loop { if x > z {return false} if IS_PRIME(x) { if IS_PRIME(y) { if IS_SUM(x, y, z) { return true; } } } y := y + 1 if y > x { x := x + 1 y := 0 } } Notice that without the if x > z {return false}, the program would loop forever on z=2. A TM that always halts for all inputs is called total. Property (2) also makes "Turing machines" a possible input to functions, meaning that we can now make decision problems about the behavior of Turing machines. For example, "does the TM M either accept or reject x within ten steps?"2 IS_DONE_IN_TEN_STEPS(M, x) { for (i = 0; i < 10; i++) { `simulate M(x) for one step` if(`M accepted or rejected`) { return true } } return false } Decidability and Undecidability Now we have all of the pieces to understand our original definition: A property P of strings is said to be decidable if ... there is a total Turing machine that accepts input strings that have property P and rejects those that do not. (220) Let IS_P be the decision problem "Does the input satisfy P?" Then IS_P is decidable if it can be solved by a Turing machine, ie, I can provide some IS_P(x) machine that always accepts if x has property P, and always rejects if x doesn't have property P. If I can't do that, then IS_P is undecidable. IS_SUM(x, y, z) and IS_DONE_IN_TEN_STEPS(M, x) are decidable properties. Is IS_SUM_TWO_PRIMES(z) decidable? Some analysis shows that our corresponding program will either find a solution, or have x>z and return false. So yes, it is decidable. Notice there's an asymmetry here. To prove some property is decidable, I just need to find one program that correctly solves it. To prove some property is undecidable, I need to show that any possible program, no matter what it is, doesn't solve it. So with that asymmetry in mind, are there any undecidable problems? Yes, quite a lot. Recall that Turing machines can accept encodings of other TMs as input, meaning we can write a TM that checks properties of Turing machines. And, by Rice's Theorem, almost every nontrivial semantic3 property of Turing machines is undecidable. The conventional way to prove this is to first find a single undecidable property H, and then use that to bootstrap undecidability of other properties. The canonical and most famous example of an undecidable problem is the Halting problem: "does machine M halt on input i?" It's pretty easy to prove undecidable, and easy to use it to bootstrap other undecidability properties. But again, any nontrivial property is undecidable. Checking a TM is total is undecidable. Checking a TM accepts any inputs is undecidable. Checking a TM solves IS_SUM is undecidable. Etc etc etc.
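Since the pseudocode above is nearly executable already, here is a runnable Python version of the composition. The naive trial-division IS_PRIME is my own filler (the post treats IS_PRIME as a given); the enumeration order and the x > z bound follow the original.

def is_prime(n: int) -> bool:
    # Naive trial division; just enough to make the sketch self-contained.
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def is_sum(x: int, y: int, z: int) -> bool:
    return x + y == z

def is_sum_two_primes(z: int) -> bool:
    # Enumerate pairs (x, y) with y <= x, exactly as in the pseudocode.
    x, y = 1, 1
    while True:
        if x > z:
            return False  # the bound that makes this machine total
        if is_prime(x) and is_prime(y) and is_sum(x, y, z):
            return True
        y += 1
        if y > x:
            x += 1
            y = 0

assert is_sum_two_primes(10)     # 10 = 5 + 5 = 7 + 3
assert not is_sum_two_primes(2)  # halts only thanks to the x > z bound

Dropping the x > z check turns this from a total machine into one that runs forever on inputs like z=2, which is exactly the distinction the definition of "decidable" cares about.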
What this doesn't mean in practice I often see the halting problem misconstrued as "it's impossible to tell if a program will halt before running it." This is wrong. The halting problem says that we cannot create an algorithm that, when applied to an arbitrary program, tells us whether the program will halt or not. It is absolutely possible to tell if many programs will halt or not. It's possible to find entire subcategories of programs that are guaranteed to halt. It's possible to say "a program constructed following constraints XYZ is guaranteed to halt." The actual consequence of undecidability is more subtle. If we want to know if a program has property P, undecidability tells us that we will have to spend time and mental effort to determine if it has P, and that we may not be successful. This is subtle because we're so used to living in a world where everything's undecidable that we don't really consider what the counterfactual would be like. In such a world there might be no need for Rust, because "does this C program guarantee memory-safety" is a decidable property. The entire field of formal verification could be unnecessary, as we could just check properties of arbitrary programs directly. We could automatically check if a change in a program preserves all existing behavior. Lots of famous math problems could be solved overnight. (This to me is a strong "intuitive" argument for why the halting problem is undecidable: a halt detector can be trivially repurposed as a program optimizer / theorem-prover / bcrypt cracker / chess engine. It's too powerful, so we should expect it to be impossible.) But because we don't live in that world, all of those things are hard problems that take effort and ingenuity to solve, and even then we often fail. To be pedantic, a TM can't do things like "scrape a webpage" or "render a bitmap", but we're only talking about computational decision problems here. ↩ One notation I've adopted in Logic for Programmers is marking abstract sections of pseudocode with backticks. It's really handy! ↩ Nontrivial meaning "at least one TM has this property and at least one TM doesn't have this property". Semantic meaning "related to whether the TM accepts, rejects, or runs forever on a class of inputs". IS_DONE_IN_TEN_STEPS is not a semantic property, as it doesn't tell us anything about inputs that take longer than ten steps. ↩
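As a sketch of the "too powerful" intuition in the aside above: given a hypothetical halts() oracle, settling an open conjecture would reduce to one oracle call. Everything here is an assumption for illustration: halts() cannot actually be implemented, and Goldbach's conjecture is my pick of "famous math problem", not the post's.

def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def search_for_goldbach_counterexample() -> None:
    # Halts if and only if some even n >= 4 is NOT a sum of two primes.
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return  # found a counterexample, so halt
        n += 2

# goldbach_holds = not halts(search_for_goldbach_counterexample)
# ...where halts() is the impossible ingredient: the search halting
# means a counterexample exists, so the oracle's answer settles the
# conjecture without running the search forever.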
I came across this post from the tech collective crftd. about how software is in a process of “continuous disintegration”: One of the uncomfortable truths we sometimes have to break to people is that software isn't just never “done”. Worse even, it rots… The practices of continuous integration act as enablers for us to keep adding value and keeping development maintainable, but they cannot stop the inevitable: The system will eventually fail in unexpected ways, as is the nature of complex systems: That all resonates with me — software is rarely “done”, it generally has a shelf life and starts rotting the moment you ship it — but what really made me pause was this line: The practices of continuous integration act as enablers for us I read “enabler” there in the negative context of the word, like in addiction when the word “enabler” refers to someone who exploits others by encouraging a pattern of self-destructive behavior. Is CI/CD an enabler? I’d only ever thought of moving towards CI/CD as a net positive thing. Is it possible that, like everything, CI/CD has its tradeoffs and isn’t always the Best Thing Ever™️? What are the trade-offs of CI/CD? The thought occurred to me that CI stands for “continuous investment” because that’s what it requires to keep it working — a continuous investment in both the infrastructure that delivers the software and the software itself. Everybody complains nowadays about how software requires a subscription. Why is that? Could it be, perhaps, because of CI/CD? If you want continuous updates to your software, you’re going to have to pay for it continuously. We’ve made delivering software continuously easy, which means we’ve made creating software that’s “done” hard — be careful what you make easy. In some sense — at least on the web — I think you could argue that we don’t know how to make software that’s done (e.g. software that ships on a CD). We’re inundated with tools and practices and norms that enable the opposite of that. And, perhaps, we’re trading something away there? When something comes along and enables new capabilities, it often severs others.