About half a year ago I encountered a paper bombastically titled “the ultimate conditional syntax”. It has the attractive goal of unifying pattern match with boolean if tests, and its solution is in some ways very nice. But it seems over-complicated to me, especially for something that’s a basic work-horse of programming. I couldn’t immediately see how to cut it down to manageable proportions, but recently I had an idea. I’ll outline it under the “penultimate conditionals” heading below, after reviewing the UCS and explaining my motivation.

what the UCS?
whence UCS
out of scope
penultimate conditionals
dangling syntax
examples
antepenultimate breath

what the UCS?

The ultimate conditional syntax does several things which are somewhat intertwined and support each other. An “expression is pattern” operator allows you to do pattern matching inside boolean expressions. Like “match” but unlike most other expressions, “is” binds variables whose scope is the rest of the boolean expression...
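To see what that scoping rule means in practice, here is a rough analogue of “is” using Rust’s let-chains (a hedged sketch, not the paper’s syntax; it assumes a recent Rust toolchain, since let-chains only recently became stable): the variable bound by the pattern stays in scope for the rest of the condition and for the success branch.

    // A sketch of "is"-style matching inside a boolean condition,
    // approximated with a Rust let-chain. In UCS-like notation this
    // might read: if e is Some(x) and x > 0 then ...
    fn describe(e: Option<i32>) -> &'static str {
        // `x` is bound by the pattern and stays in scope for the rest
        // of the condition (`x > 0`) and for the success branch.
        if let Some(x) = e && x > 0 {
            "positive"
        } else {
            "zero, negative, or absent"
        }
    }

    fn main() {
        assert_eq!(describe(Some(3)), "positive");
        assert_eq!(describe(None), "zero, negative, or absent");
    }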


More from Tony Finch's blog

clamp / median / range

Here are a few tangentially-related ideas vaguely near the theme of comparison operators.

comparison style
clamp style
clamp is median
clamp in range
range style
style clash?

comparison style

Some languages such as BCPL, Icon, Python have chained comparison operators, like

    if min <= x <= max: ...

In languages without chained comparison, I like to write comparisons as if they were chained, like,

    if min <= x && x <= max {
        // ...
    }

A rule of thumb is to prefer less than (or equal) operators and avoid greater than. In a sequence of comparisons, order values from (expected) least to greatest.

clamp style

The clamp() function ensures a value is between some min and max,

    def clamp(min, x, max):
        if x < min: return min
        if max < x: return max
        return x

I like to order its arguments matching the expected order of the values, following my rule of thumb for comparisons. (I used that flavour of clamp() in my article about GCRA.) But I seem to be unusual in this preference, based on a few examples I have seen recently.

clamp is median

Last month, Fabian Giesen pointed out a way to resolve this difference of opinion: a function that returns the median of three values is equivalent to a clamp() function that doesn’t care about the order of its arguments. This version is written so that it returns NaN if any of its arguments is NaN. (When an argument is NaN, both of its comparisons will be false.)

    fn med3(a: f64, b: f64, c: f64) -> f64 {
        match (a <= b, b <= c, c <= a) {
            (false, false, false) => f64::NAN,
            (false, false, true)  => b, // a > b > c
            (false, true,  false) => a, // c > a > b
            (false, true,  true)  => c, // b <= c <= a
            (true,  false, false) => c, // b > c > a
            (true,  false, true)  => a, // c <= a <= b
            (true,  true,  false) => b, // a <= b <= c
            (true,  true,  true)  => b, // a == b == c
        }
    }

When two of its arguments are constant, med3() should compile to the same code as a simple clamp(); but med3()’s misuse-resistance comes at a small cost when the arguments are not known at compile time.

clamp in range

If your language has proper range types, there is a nicer way to make clamp() resistant to misuse:

    fn clamp(x: f64, r: RangeInclusive<f64>) -> f64 {
        let (&min, &max) = (r.start(), r.end());
        if x < min { return min }
        if max < x { return max }
        return x;
    }

    let x = clamp(x, MIN..=MAX);

range style

For a long time I have been fond of the idea of a simple counting for loop that matches the syntax of chained comparisons, like

    for min <= x <= max: ...

By itself this is silly: too cute and too ad-hoc. I’m also dissatisfied with the range or slice syntax in basically every programming language I’ve seen. I thought it might be nice if the cute comparison and iteration syntaxes were aspects of a more generally useful range syntax, but I couldn’t make it work. Until recently, when I realised I could make use of prefix or mixfix syntax, instead of confining myself to infix. So now my fantasy pet range syntax looks like

    >= min < max   // half-open
    >= min <= max  // inclusive

And you might use it in a pattern match

    if x is >= min < max {
        // ...
    }

Or as an iterator

    for x in >= min < max {
        // ...
    }

Or to take a slice

    xs[>= min < max]

style clash?

It’s kind of ironic that these range examples don’t follow the left-to-right, lesser-to-greater rule of thumb that this post started off with. (x is not lexically between min and max!) But that rule of thumb is really intended for languages such as C that don’t have ranges. Careful stylistic conventions can help to avoid mistakes in nontrivial conditional expressions. It’s much better if language and library features reduce the need for nontrivial conditions and catch mistakes automatically.
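As a quick sanity check of the med3/clamp equivalence above, here is a hedged sketch (my example, not from the post) that uses the med3() defined above as a clamp with constant bounds:

    // Uses the med3() defined above. With the bounds fixed,
    // med3(0.0, x, 1.0) behaves exactly like clamp(0.0, x, 1.0).
    fn main() {
        let clamp01 = |x: f64| med3(0.0, x, 1.0); // bounds least-to-greatest
        assert_eq!(clamp01(-0.5), 0.0);  // below the range: min wins
        assert_eq!(clamp01(0.25), 0.25); // inside the range: x wins
        assert_eq!(clamp01(1.5), 1.0);   // above the range: max wins
        // and med3 genuinely doesn't care about argument order:
        assert_eq!(med3(1.0, -0.5, 0.0), 0.0);
    }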

Golang and Let's Encrypt: a free software story

Here’s a story from nearly 10 years ago.

the bug

I think it was my friend Richard Kettlewell who told me about a bug he encountered with Let’s Encrypt in its early days in autumn 2015: it was failing to validate mail domains correctly.

the context

At the time I had been responsible for Cambridge University’s email anti-spam system for about 10 years, and in 2014 I had been given responsibility for Cambridge University’s DNS. So I knew how Let’s Encrypt should validate mail domains.

Let’s Encrypt was about one year old. Unusually, the code that runs their operations, Boulder, is free software and open to external contributors.

Boulder is written in Golang, and I had not previously written any code in Golang. But it has a reputation for being easy to get to grips with. So, in principle, the bug was straightforward for me to fix. How difficult would it be as a Golang newbie? And what would Let’s Encrypt’s contribution process be like?

the hack

I cloned the Boulder repository and had a look around the code. As is pretty typical, there are a couple of stages to fixing a bug in an unfamiliar codebase:

work out where the problem is
try to understand if the obvious fix could be better

In this case, I remember discovering a relatively substantial TODO item that intersected with the bug. I can’t remember the details, but I think there were wider issues with DNS lookups in Boulder. I decided it made sense to fix the immediate problem without getting involved in things that would require discussion with Let’s Encrypt staff.

I faffed around with the code and pushed something that looked like it might work. A fun thing about this hack is that I never got a working Boulder test setup on my workstation (or even Golang, I think!) – I just relied on the Let’s Encrypt cloud test setup. The feedback time was very slow, but it was tolerable for a simple one-off change.

the fix

My pull request was small, +48-14. After a couple of rounds of review and within a few days, it was merged and put into production! A pleasing result.

the upshot

I thought Golang (at least as it was used in the Boulder codebase) was as easy to get to grips with as promised. I did not touch it again until several years later, because there was no need to, but it seemed fine.

I was very impressed by the Let’s Encrypt continuous integration and automated testing setup, and by their low-friction workflow for external contributors. It was one of my fastest drive-by patches to get into worldwide production.

My fix was always going to be temporary, and all trace of it was overwritten years ago. It’s good when “temporary” turns out to be true!

the point

I was reminded of this story in the pub this evening, and I thought it was worth writing down. It demonstrated to me that Let’s Encrypt really were doing all the good stuff they said they were doing. So thank you to Let’s Encrypt for providing an exemplary service and for giving me a happy little anecdote.

performance of random floats

A couple of years ago I wrote about random floating point numbers. In that article I was mainly concerned about how neat the code is, and I didn’t pay attention to its performance. Recently, a comment from Oliver Hunt and a blog post from Alisa Sireneva prompted me to wonder if I made an unwarranted assumption. So I wrote a little benchmark, which you can find in pcg-dxsm.git.

As a brief recap, there are two basic ways to convert a random integer to a floating point number between 0.0 and 1.0:

Use bit fiddling to construct an integer whose format matches a float between 1.0 and 2.0; this is the same span as the result but with a simpler exponent. Bitcast the integer to a float and subtract 1.0 to get the result.

Shift the integer down to the same range as the mantissa, convert to float, then multiply by a scaling factor that reduces it to the desired range. This produces one more bit of randomness than the bithacking conversion.

(There are other less basic ways.)

My benchmark has 2 x 2 x 2 tests:

bithacking vs multiplying
32 bit vs 64 bit
sequential integers vs random integers

Each operation is isolated from the benchmark loop by putting it in a separate translation unit (to prevent the compiler from inlining) and there is a fence instruction (ISB SY on ARM, MFENCE on AMD) in the loop to stop the CPU from overlapping successive iterations.

I ran the benchmark on my Apple M1 Pro and my AMD Ryzen 7950X. In the table below, the leftmost column is the number of random bits. The top half measures sequential numbers, the bottom half is random numbers. The times are nanoseconds per operation, which includes the overheads of the benchmark loop and function call.

    bits   arm     amd
    23     12.15   11.22
    24     13.37   11.21
    52     12.11   11.02
    53     13.38   11.20
    23     14.75   12.62
    24     15.85   12.81
    52     16.78   14.23
    53     18.02   14.41

The times vary a little from run to run but the difference in speed of the various loops is reasonably consistent. I think my conclusion is that the bithacking conversion is about 1ns faster than the multiply conversion on my ARM box. There’s a subnanosecond difference on my AMD box which might indicate that the conversion takes different amounts of time depending on the value? Dunno.
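For concreteness, here is a hedged sketch of the two 32-bit conversions as described above (the function names are mine, and this is not necessarily the exact code in the benchmark, which lives in pcg-dxsm.git):

    // Bithacking: OR 23 random bits into the mantissa of 1.0f32
    // (bit pattern 0x3F80_0000), giving a float in [1.0, 2.0),
    // then subtract 1.0. This is the 23-bit row of the table.
    fn bithack32(x: u32) -> f32 {
        f32::from_bits(0x3F80_0000 | (x >> 9)) - 1.0
    }

    // Multiplying: keep the top 24 bits, convert to float, and scale
    // by 2^-24. One more bit of randomness: the 24-bit row.
    fn multiply32(x: u32) -> f32 {
        (x >> 8) as f32 * (1.0 / (1u32 << 24) as f32)
    }

    fn main() {
        assert_eq!(bithack32(0), 0.0);
        assert_eq!(multiply32(0), 0.0);
        // both stay strictly below 1.0 even for the largest input
        assert!(bithack32(u32::MAX) < 1.0);
        assert!(multiply32(u32::MAX) < 1.0);
    }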

moka pot notes

In hot weather I like to drink my coffee in an iced latte. To make it, I have a very large Bialetti Moka Express. Recently when I got it going again after a winter of disuse, it took me a couple of attempts to get the technique right, so here are some notes as a reminder to my future self next year.

It’s worth noting that I’m not fussy about my coffee: I usually drink pre-ground beans from the supermarket, with cream (in winter hot coffee) or milk and ice.

basic principle

When I was getting the hang of my moka pot, I learned from YouTube coffee geeks such as James Hoffmann that the main aim is for the water to be pushed through the coffee smoothly and gently. Better to err on the side of too little flow than too much.

I have not had much success trying to make fine temperature adjustments while the coffee is brewing, because the big moka pot has a lot of thermal inertia: it takes a long time for any change in gas level to have any effect on the coffee flow.

routine

fill the kettle and turn it on
put the moka pot’s basket in a mug to keep it stable
fill it with coffee (mine needs about 4 Aeropress scoops)
tamp it down firmly [1]
when the kettle has boiled, fill the base of the pot to just below the pressure valve (which is also just below the filter screen in the basket)
insert the coffee basket, making sure there are no stray grounds around the edge where the seal will mate
screw on the upper chamber firmly
put it on a small gas ring turned up to the max [2]
leave the lid open and wait for the coffee to emerge
immediately turn the gas down to the minimum [3]
the coffee should now come out in a steady thin stream without spluttering or stalling
when the upper chamber is filled near the mouths of the central spout, it’ll start fizzing or spluttering [4]
turn off the gas and pour the coffee into a carafe

notes

[1] If I don’t tamp the grounds, the pot tends to splutter. I guess tamping gives the puck better integrity to resist channelling, and to keep the water under even pressure. Might be an effect of the relatively coarse supermarket grind?

[2] It takes a long time to get the pot back up to boiling point and I’m not sure that heating it up slower helps. The main risk, I think, is overshooting the ideal steady brewing state too much, but:

[3] With my moka pot on my hob the lowest gas flow on the smallest rings is just enough to keep the coffee flowing without stalling. The flow when the coffee first emerges is relatively fast, and it slows to the steady state several seconds after I turn the heat down, so I think the overshoot isn’t too bad.

[4] This routine turns almost all of the water into coffee, which Hoffmann suggests is a good result, and a sign that the pressure and temperature aren’t getting too high.

the algebra of dependent types

TIL (or this week-ish I learned) why big-sigma and big-pi turn up in the notation of dependent type theory. I’ve long been aware of the zoo of more obscure Greek letters that turn up in papers about type system features of functional programming languages, μ, Λ, Π, Σ. Their meaning is usually clear from context but the reason for the choice of notation is usually not explained. I recently stumbled on an explanation for Π (dependent functions) and Σ (dependent pairs) which turns out to be nicer than I expected, and closely related to every-day algebraic data types.

sizes of types

The easiest way to understand algebraic data types is by counting the inhabitants of a type. For example:

the unit type () has one inhabitant, (), and the number 1 is why it’s called the unit type;
the bool type has two inhabitants, false and true.

I have even seen these types called 1 and 2 (cruelly, without explanation) in occasional papers.

product types

Or pairs or (more generally) tuples or records. Usually written,

    (A, B)

The pair contains an A and a B, so the number of possible values is the number of possible A values multiplied by the number of possible B values. So it is spelled in type theory (and in Standard ML) like,

    A * B

sum types

Or disjoint union, or variant record. Declared in Haskell like,

    data Either a b = Left a | Right b

Or in Rust like,

    enum Either<A, B> {
        Left(A),
        Right(B),
    }

A value of the type is either an A or a B, so the number of possible values is the number of A values plus the number of B values. So it is spelled in type theory like,

    A + B

dependent pairs

In a dependent pair, the type of the second element depends on the value of the first. The classic example is a slice, roughly,

    struct IntSlice {
        len: usize,
        elem: &[i64; len],
    }

(This might look a bit circular, but the idea is that an array [i64; N] must be told how big it is – its size is an explicit part of its type – but an IntSlice knows its own size. The traditional dependent “vector” type is a sized linked list, more like my array type than my slice type.)

The classic way to write a dependent pair in type theory is like,

    Σ len: usize . Array(Int, len)

The big sigma binds a variable that has a type annotation, with a scope covering the expression after the dot – similar syntax to a typed lambda expression.

We can expand a simple example like this into a many-armed sum type: either an array of length zero, or an array of length 1, or an array of length 2, … but in a sigma type the discriminant is user-defined instead of hidden. The number of possible values of the type comes from adding up all the alternatives, a summation just like the big sigma summation we were taught in school.

    ∑_{a ∈ A} B(a)

When the second element doesn’t depend on the first element, we can count the inhabitants like,

    ∑_{a ∈ A} B = A * B

And the sigma type simplifies to a product type.

telescopes

An aside from the main topic of these notes: I also recently encountered the name “telescope” for a multi-part dependent tuple or record. The name “telescope” comes from de Bruijn’s AUTOMATH, one of the first computerized proof assistants. (I first encountered de Bruijn as the inventor of numbered lambda bindings.)

dependent functions

The return type of a dependent function can vary according to the argument it is passed. For example, to construct an array we might write something like,

    fn repeat_zero(len: usize) -> [i64; len] {
        [0; len]
    }

The classic way to write the type of repeat_zero() is very similar to the IntSlice dependent pair, but with a big pi instead of a big sigma:

    Π len: usize . Array(Int, len)

Mmm, pie.

To count the number of possible (pure, total) functions A ➞ B, we can think of each function as a big lookup table with A entries each containing a B. That is, a big tuple (B, B, … B), that is, B * B * … * B, that is, B^A. Functions are exponential types. We can count a dependent function, where the number of possible Bs depends on which A we are passed,

    ∏_{a ∈ A} B(a)

danger

I have avoided the terms “dependent sum” and “dependent product”, because they seem perfectly designed to cause confusion over whether I am talking about variants, records, or functions. It kind of makes me want to avoid algebraic data type jargon, except that there isn’t a good alternative for “sum type”. Hmf.
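The pseudo-Rust above can’t actually be compiled, so as a supplement, here is a hedged sketch of the same two ideas in Lean 4, a language with real dependent types (the definitions and names are mine, not from the post):

    -- A length-indexed vector: the "sized linked list" mentioned above.
    inductive Vec (α : Type) : Nat → Type where
      | nil  : Vec α 0
      | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

    -- A dependent pair (Σ type): the first component fixes the type of the second.
    def IntSlice : Type := (n : Nat) × Vec Int n

    -- A dependent function (Π type): the return type mentions the argument.
    def repeatZero : (n : Nat) → Vec Int n
      | 0     => Vec.nil
      | n + 1 => Vec.cons 0 (repeatZero n)

    -- Choosing len = 3 determines the type of the second component.
    def slice3 : IntSlice := ⟨3, repeatZero 3⟩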


More in programming

Measurement and Numbers

Here’s Jony Ive talking to Patrick Collison about measurement and numbers:

People generally want to talk about product attributes that you can measure easily with a number…schedule, costs, speed, weight, anything where you can generally agree that six is a bigger number than two

He says he used to get mad at how often people around him focused on the numbers of the work over other attributes of the work. But after giving it more thought, he now has a more generous interpretation of why we do this: because we want to relate to each other, understand each other, and be inclusive of one another. There are many things we can’t agree on, but it’s likely we can agree that six is bigger than two. And so in this capacity, numbers become a tool for communicating with each other, albeit a kind of least common denominator — e.g. “I don’t agree with you at all, but I can’t argue that 134 is bigger than 87.”

This is conducive to a culture where we spend all our time talking about attributes we can easily measure (because then we can easily communicate and work together) and results in a belief that the only things that matter are those which can be measured. People will give lip service to that not being the case, e.g. “We know there are things that can’t be measured that are important.” But the reality ends up being: only that which can be assigned a number gets managed, and that which gets managed is imbued with importance because it is allotted our time, attention, and care.

This reminds me of the story of the judgement of King Solomon, an archetypal story found in cultures around the world. Here’s the story as summarized on Wikipedia:

Solomon ruled between two women who both claimed to be the mother of a child. Solomon ordered the baby be cut in half, with each woman to receive one half. The first woman accepted the compromise as fair, but the second begged Solomon to give the baby to her rival, preferring the baby to live, even without her. Solomon ordered the baby given to the second woman, as her love was selfless, as opposed to the first woman’s selfish disregard for the baby’s actual well-being.

In an attempt to resolve the friction between two individuals, an appeal was made to numbers as an arbiter. We can’t agree on who the mother is, so let’s make it a numbers problem. Reduce the baby to a number and we can agree! But that doesn’t work very well, does it?

I think there is a level of existence where measurement and numbers are a sound guide, where two and two make four and two halves make a whole. But, as humans, there is another level of existence where mathematical propositions don’t translate. A baby is not a quantity. A baby is an entity. Take a whole baby and divide it up by a sword and you do not have two halves of a baby. I am not a number. I’m an individual. Indivisible.

What does this all have to do with software? Software is for us as humans, as individuals, and because of that I believe there is an aspect of its nature where metrics can’t take you. In fact, not only will numbers not guide you, they may actually misguide you. I think Robin Rendle articulated this well in his piece “Trust the vibes”:

[numbers] are not representative of human experience or human behavior and can’t tell you anything about beauty or harmony or how to be funny or what to do next and then how to do it.

Wisdom is knowing when to use numbers and when to use something else.

The 6 Hours of Lex

When I drive the 24 Hours of Le Mans, I spend a total of about 6-9 hours in the car, divided into stints of roughly two hours at a time. It's intense. But talking with Lex Fridman in Austin on his podcast? Over six hours straight! We only interrupted the session for five minutes total to take three bathroom breaks. All that endurance training has clearly paid off! But the magic of a good conversation, like the magic of driving at Le Mans, is that time flies by. Those six hours felt more like sixty minutes. This is what flow does: it compresses the moment. Besides, we had plenty to talk about. Lex prepares like no other podcast I've ever been on. Pages and pages of notes. Deep questions, endless attention for tangents. We covered the beauty of Ruby for half an hour alone! But also the future of AI, small teams, why we left the cloud, Elon Musk, fatherhood, money and happiness, and a million other topics (which Lex mercifully timestamps, so listeners without six hours to spare can hop around). It was a privilege to appear. If you're interested, the conversation is on YouTube, on Spotify, on X, and as a regular podcast.

Computers Are a Feeling

Exploring diagram.website, I came across The Computer is a Feeling by Tim Hwang and Omar Rizwan:

the modern internet exerts a tyranny over our imagination. The internet and its commercial power has sculpted the computer-device. It’s become the terrain of flat, uniform, common platforms and protocols, not eccentric, local, idiosyncratic ones.

Before computers were connected together, they were primarily personal. Once connected, they became primarily social. The purpose of the computer shifted to become social over personal.

The triumph of the internet has also impoverished our sense of computers as a tool for private exploration rather than public expression. The pre-network computer has no utility except as a kind of personal notebook, the post-network computer demotes this to a secondary purpose.

Smartphones are indisputably the personal computer. And yet, while being so intimately personal, they’re also the largest distribution of behavior-modification devices the world has ever seen. We all willingly carry around in our pockets a device whose content is largely designed to modify our behavior and extract our time and money.

Making “computer” mean computer-feelings and not computer-devices shifts the boundaries of what is captured by the word. It removes a great many things – smartphones, language models, “social” “media” – from the domain of the computational. It also welcomes a great many things – notebooks, papercraft, diary, kitchen – back into the domain of the computational.

I love the feeling of a personal computer, one whose purpose primarily resides in the domain of the individual and secondarily supports the social. It’s part of what I love about some of the ideas embedded in local-first, which start from the principle of owning and prioritizing what you do on your computer first and foremost, and then secondarily syncing that to other computers for the use of others.
