More from Tony Finch's blog
Last year I wrote about inlining just the fast path of Lemire’s algorithm for nearly-divisionless unbiased bounded random numbers. The idea was to reduce code bloat by eliminating lots of copies of the random number generator in the rarely-executed slow paths. However, a simple split prevented the compiler from being able to optimize cases like pcg32_rand(1 << n), so a lot of the blog post was toying around with ways to mitigate this problem.

On Monday while procrastinating a different blog post, I realised that it’s possible to do better: there’s a more general optimization which gives us the 1 << n special case for free.

nearly divisionless

Lemire’s algorithm has about 4 neat tricks:

1. use multiplication instead of division to reduce the output of a random number generator modulo some limit
2. eliminate the bias in (1) by (counterintuitively) looking at the lower digits
3. fun modular arithmetic to calculate the reject threshold for (2)
4. arrange the reject tests to avoid the slow division in (3) in most cases

The nearly-divisionless logic in (4) leads to two copies of the random number generator, in the fast path and the slow path. Generally speaking, compilers don’t try to deduplicate code that was written by the programmer, so they can’t simplify the nearly-divisionless algorithm very much when the limit is constant.

constantly divisionless

Two points occurred to me:

- when the limit is constant, the reject threshold (3) can be calculated at compile time
- when the division is free, there’s no need to avoid it using (4)

These observations suggested that when the limit is constant, the function for random numbers less than a limit should be written:

    static inline uint32_t
    pcg32_rand_const(pcg32_t *rng, uint32_t limit) {
        uint32_t reject = -limit % limit;
        uint64_t sample;
        do sample = (uint64_t)pcg32_random(rng) * (uint64_t)limit;
        while ((uint32_t)(sample) < reject);
        return ((uint32_t)(sample >> 32));
    }

This has only one call to pcg32_random(), saving space as I wanted, and the compiler is able to eliminate the loop automatically when the limit is a power of two. The loop is smaller than a call to an out-of-line slow path function, so it’s better all round than the code I wrote last year.

algorithm selection

As before, it’s possible to automatically choose the constantly-divisionless or nearly-divisionless algorithm depending on whether the limit is a compile-time constant or a run-time variable, using arcane C tricks or GNU C’s __builtin_constant_p() (there’s a sketch of this at the end of the post).

I have been idly wondering how to do something similar in other languages. Rust isn’t very keen on automatic specialization, but it has a reasonable alternative. The thing to avoid is passing a runtime variable to the constantly-divisionless algorithm, because then it becomes never-divisionless. Rust has a much richer notion of compile-time constants than C, so it’s possible to write a method like the following, which can’t be misused:

    pub fn upto<const LIMIT: u32>(&mut self) -> u32 {
        let reject = LIMIT.wrapping_neg().wrapping_rem(LIMIT);
        loop {
            let (lo, hi) = self.get_u32().embiggening_mul(LIMIT);
            if lo < reject {
                continue;
            } else {
                return hi;
            }
        }
    }

    assert!(rng.upto::<42>() < 42);

(embiggening_mul is my stable replacement for the unstable widening_mul API.)

This is a nugatory optimization, but there are more interesting cases where it makes sense to choose a different implementation for constant or variable arguments – that is, the constant case isn’t simply a constant-folded or partially-evaluated version of the variable case.
Regular expressions might be lex-style or pcre-style, for example. It’s a curious question of language design whether it should be possible to write a library that provides a uniform API that automatically chooses constant or variable implementations, or whether the user of the library must make the choice explicit. Maybe I should learn some Zig to see how its comptime works.
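For completeness, here is roughly what the C-side selection mentioned under “algorithm selection” might look like: a minimal sketch assuming GNU C, where pcg32_rand_var() is a placeholder name of mine for the nearly-divisionless variable-limit function, not a name from my actual code.

    /* choose the constantly-divisionless algorithm when the limit
       folds to a compile-time constant, and fall back to the
       nearly-divisionless pcg32_rand_var() (placeholder name) */
    #define pcg32_rand(rng, limit)            \
        (__builtin_constant_p(limit)          \
            ? pcg32_rand_const(rng, limit)    \
            : pcg32_rand_var(rng, limit))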
One of the neat things about the PCG random number generator by Melissa O’Neill is its use of instruction-level parallelism: the PCG state update can run in parallel with its output permutation. However, PCG only has a limited amount of ILP, about 3 instructions. Its overall speed is limited by the rate at which a CPU can run a sequence where the output of one multiply-add feeds into the next multiply-add.

… Or is it? With some linear algebra and some AVX512, I can generate random numbers from a single instance of pcg32 at 200 Gbit/s on a single core. This is the same sequence of random numbers generated in the same order as normal pcg32, but more than 4x faster. You can look at the benchmark in my pcg-dxsm repository.

skip ahead

One of the slightly weird features that PCG gets from its underlying linear congruential generator is “seekability”: you can skip ahead k steps in the stream of random numbers in log(k) time. The PCG paper (in section 4.3.1) cites Forrest Brown’s paper, random numbers with arbitrary strides, which explains that the skip-ahead feature is useful for reproducibility of Monte Carlo simulations. But what caught my eye is the skip-ahead formula. Rephrased in programmer style,

    state[n+k] = state[n] * pow(MUL, k)
               + inc * (pow(MUL, k) - 1) / (MUL - 1)

the insight

The skip-ahead formula says that we can calculate a future state using a couple of multiplications. The skip-ahead multipliers depend only on the LCG multiplier, not on the variable state, nor on the configurable increment. That means that for a fixed skip ahead, we can precalculate the multipliers before compile time.

The skip-ahead formula allows us to unroll the PCG data dependency chain. Normally, four iterations of the PCG state update look like,

    state0 = rng->state
    state1 = state0 * MUL + rng->inc
    state2 = state1 * MUL + rng->inc
    state3 = state2 * MUL + rng->inc
    state4 = state3 * MUL + rng->inc
    rng->state = state4

With the skip-ahead multipliers it looks like,

    state0 = rng->state
    state1 = state0 * MULs1 + rng->inc * MULi1
    state2 = state0 * MULs2 + rng->inc * MULi2
    state3 = state0 * MULs3 + rng->inc * MULi3
    state4 = state0 * MULs4 + rng->inc * MULi4
    rng->state = state4

These state calculations can be done in parallel using NEON or AVX vector instructions. The disadvantage is that calculating future states in parallel requires more multiplications than doing so in series, but that’s OK because modern CPUs have lots of ALUs.

multipliers

The skip-ahead formula is useful for jumping ahead long distances, because (as Forrest Brown explained) you can do the exponentiation in log(k) time using repeated squaring. (The same technique is used for modexp in RSA.) But I’m only interested in the first few skip-ahead multipliers.

I’ll define the linear congruential generator as:

    lcg(s, inc) = s * MUL + inc

Which is used in PCG’s normal state update like:

    rng->state = lcg(rng->state, rng->inc)

To precalculate the first few skip-ahead multipliers, we iterate the LCG starting from one (for the state multipliers) and zero (for the increment multipliers), like this:

    MULs0 = 1
    MULs1 = lcg(MULs0, 0)
    MULs2 = lcg(MULs1, 0)

    MULi0 = 0
    MULi1 = lcg(MULi0, 1)
    MULi2 = lcg(MULi1, 1)

My benchmark code’s commentary includes a proof by induction, which I wrote to convince myself that these multipliers are correct.

trying it out

To explore how well this skip-ahead idea works, I have written a couple of variants of my pcg32_bytes() function, which simply iterates pcg32 and writes the results to a byte array.
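To make the idea concrete before the results, here is a minimal scalar sketch of one 4-way unrolled step, assuming the usual pcg32 LCG constant. It is an illustration, not the benchmark code: pcg32_output() here is the classic XSH-RR permutation (my benchmark uses the DXSM variant), and real code would use precalculated multiplier constants rather than recomputing them in the loop.

    #include <stdint.h>

    #define MUL 6364136223846793005ULL /* the usual pcg32 LCG multiplier */

    typedef struct { uint64_t state, inc; } pcg32_t;

    /* classic XSH-RR output permutation, for illustration only */
    static uint32_t pcg32_output(uint64_t state) {
        uint32_t xorshifted = (uint32_t)(((state >> 18) ^ state) >> 27);
        uint32_t rot = (uint32_t)(state >> 59);
        return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
    }

    /* one 4-way unrolled step: each future state depends only on
       state0, so the four calculations are mutually independent */
    static void pcg32_four(pcg32_t *rng, uint32_t out[4]) {
        uint64_t s0 = rng->state, inc = rng->inc;
        uint64_t muls = 1, muli = 0;
        for (int k = 0; k < 4; k++) {
            /* in real code MULs1..MULs4 and MULi1..MULi4 are
               precalculated constants */
            muls = muls * MUL;     /* MULs(k+1) = lcg(MULs(k), 0) */
            muli = muli * MUL + 1; /* MULi(k+1) = lcg(MULi(k), 1) */
            out[k] = pcg32_output(s0 * muls + inc * muli);
        }
        rng->state = s0 * muls + inc * muli; /* = state4 */
    }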
The variants have an adjustable amount of parallelism. One variant is written as scalar code in a loop that has been unrolled by hand a few times. I wanted to see if standard C gets a decent speedup, perhaps from autovectorization. The other variant uses the GNU C portable vector extensions to calculate pcg32 in an explicitly parallel manner. The benchmark also ensures the output from every variant matches the baseline pcg32_bytes().

results

The output from the benchmark harness lists:

- the function variant: either the baseline version, or uN for a scalar loop unrolled N times, or xN for vector code with N lanes
- its speed in bytes per nanosecond (aka gigabytes per second)
- its performance relative to the baseline

There are small differences in style between the baseline and u1 functions, but their performance ought to be basically the same.

Apple clang 16, MacBook Pro M1 Pro. This compiler is eager and fairly effective at autovectorizing. ARM NEON isn’t big enough to get a speedup from 8 lanes of parallelism.

    __  3.66 bytes/ns x 1.00
    u1  3.90 bytes/ns x 1.07
    u2  6.40 bytes/ns x 1.75
    u3  7.66 bytes/ns x 2.09
    u4  8.52 bytes/ns x 2.33
    x2  7.59 bytes/ns x 2.08
    x4 10.49 bytes/ns x 2.87
    x8 10.40 bytes/ns x 2.84

The following results were from my AMD Ryzen 9 7950X running Debian 12 “bookworm”, comparing gcc vs clang, and AVX2 vs AVX512. gcc is less keen to autovectorize so it doesn’t do very well with the unrolled loops. (Dunno why u1 is so much slower than the baseline.)

gcc 12.2 -march=x86-64-v3

    __  5.57 bytes/ns x 1.00
    u1  5.13 bytes/ns x 0.92
    u2  5.03 bytes/ns x 0.90
    u3  7.01 bytes/ns x 1.26
    u4  6.83 bytes/ns x 1.23
    x2  3.96 bytes/ns x 0.71
    x4  8.00 bytes/ns x 1.44
    x8 12.35 bytes/ns x 2.22

clang 16.0 -march=x86-64-v3

    __  4.89 bytes/ns x 1.00
    u1  4.08 bytes/ns x 0.83
    u2  8.76 bytes/ns x 1.79
    u3 10.43 bytes/ns x 2.13
    u4 10.81 bytes/ns x 2.21
    x2  6.67 bytes/ns x 1.36
    x4 12.67 bytes/ns x 2.59
    x8 15.27 bytes/ns x 3.12

gcc 12.2 -march=x86-64-v4

    __  5.53 bytes/ns x 1.00
    u1  5.53 bytes/ns x 1.00
    u2  5.55 bytes/ns x 1.00
    u3  6.99 bytes/ns x 1.26
    u4  6.79 bytes/ns x 1.23
    x2  4.75 bytes/ns x 0.86
    x4 17.14 bytes/ns x 3.10
    x8 20.90 bytes/ns x 3.78

clang 16.0 -march=x86-64-v4

    __  5.53 bytes/ns x 1.00
    u1  4.25 bytes/ns x 0.77
    u2  7.94 bytes/ns x 1.44
    u3  9.31 bytes/ns x 1.68
    u4 15.33 bytes/ns x 2.77
    x2  9.07 bytes/ns x 1.64
    x4 21.74 bytes/ns x 3.93
    x8 26.34 bytes/ns x 4.76

That last result is pcg32 generating random numbers at 200 Gbit/s.
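As a footnote, here is the flavour of the explicitly parallel code, sketched with the GNU C portable vector extensions mentioned above. The multiplier vectors are assumed to have been precalculated as in the multipliers section; the function name is a placeholder of mine, and the real code is in the pcg-dxsm repository.

    #include <stdint.h>

    typedef uint64_t u64x4 __attribute__((vector_size(32)));

    /* four lanes of state[k] = state0 * MULs[k] + inc * MULi[k],
       all computed from state0 in parallel */
    static u64x4 pcg32_states_x4(uint64_t state0, uint64_t inc,
                                 u64x4 muls, u64x4 muli) {
        u64x4 s0 = { state0, state0, state0, state0 };
        u64x4 in = { inc, inc, inc, inc };
        return s0 * muls + in * muli;
    }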
D’oh, I lost track of a bug report that should have been fixed in nsnotifyd-2.2. Thus, hot on the heels of the previous release, here’s nsnotifyd-2.3. Sorry for causing extra work to my uncountably many users!

The nsnotifyd daemon monitors a set of DNS zones and runs a command when any of them change. It listens for DNS NOTIFY messages so it can respond to changes promptly. It also uses each zone’s SOA refresh and retry parameters to poll for updates if nsnotifyd does not receive NOTIFY messages more frequently. It comes with a client program nsnotify for sending notify messages.

This nsnotifyd-2.3 release includes some bug fixes:

- When nsnotifyd received a SIGINT or SIGTERM while running the command, it failed to handle the signal correctly. Now it exits promptly. Many thanks to Athanasius for reporting the bug!
- Miscellaneous minor code cleanup and compiler warning suppression.

Thanks also to Dan Langille, who sent me a lovely appreciation:

    Now that I think of it, nsnotifyd is in my favorite group of software. That group is software I forget I’m running, because they just run and do the work. For years. I haven’t touched, modified, or configured nsnotifyd and it just keeps doing the job.
I have made a new release of nsnotifyd, a tiny DNS server that just listens for NOTIFY messages and runs a script when one of your zones changes. This nsnotifyd-2.2 release includes a new feature: nsnotify can now send NOTIFY messages from a specific source address. Thanks to Adam Augustine for the suggestion.

I like receiving messages that say things like,

    Thanks for making this useful tool available for free.
More in programming
I wrote Bitsnap, a tool in Interlisp for capturing screenshots on the Medley environment. It can capture, and optionally save to a file, the full screen, a window with or without title bar and borders, or an arbitrary area. This project helped me learn the internals of Medley, such as extending the background menu, and produced a tool I wanted. For example, with Bitsnap I can capture some areas, like specific windows, without manually framing them; or the full screen of Medley excluding the title bar and borders of the operating system that hosts Medley, Linux in my case.

Medley can natively capture various portions of the screen. These facilities produce 1-bit images as instances of BITMAP, an image data structure Medley uses for everything from bit patterns, to icons, to actual images. Some Lisp functions manipulate bitmaps. Bitsnap glues together these facilities and packages them in an interactive interface, accessible as a submenu of the background menu, as well as a programmatic interface, the Interlisp function SNAP. To provide feedback after a capture, Bitsnap displays the just-captured area in a window, as shown here along with the Bitsnap menu.

[Figure: A bitmap captured with the Bitsnap screenshot tool and its menu on Medley Interlisp.]

The tool works by copying to a new bitmap the system bitmap that holds the designated area of the screen. This is straightforward, as there are Interlisp functions for accessing the source bitmaps. These functions return a BITMAP and capture:

- SCREENBITMAP: the full screen
- WINDOW.BITMAP: a window including the title bar and border
- BITMAPCOPY: the interior of a window with no title bar and border
- SNAPW: an arbitrary area

The slightly more involved part is bringing captured bitmaps out of Medley in a format today's systems and tools understand. Some Interlisp functions can save a BITMAP to disk in text and binary encodings, none of which are modern standards. The only Medley tool to export to a modern — or less ancient — format less bound to Lisp is the xerox-to-xbm module, which converts a BITMAP to the Unix XBM (X BitMap) format. However, xerox-to-xbm can't process large bitmaps.

To work around the issue I wrote the function BMTOPBM, which saves a BITMAP to a file in a slightly more modern and popular format, PBM (Portable BitMap). I can't think of anything simpler and, indeed, it took me just half a dozen minutes to write the function. Linux and other modern operating systems can natively display PBM files, and Netpbm converts PBM to PNG and other widely used standards. For example, this Netpbm pipeline converts to PNG:

    $ pbmtopnm screenshot.pbm | pnmtopng > screenshot.png

BMTOPBM can handle bitmaps of any size but its simple algorithm is inefficient. However, on my PC the function takes about 5 seconds to save a 1920x1080 bitmap, which is the worst case as this is the maximum screen size Medley allows. Good enough for the time being.

Bitsnap does pretty much all I want and doesn't need major new features. Still, I may optimize BMTOPBM or save directly to PNG.
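As an aside, PBM really is simple enough to emit by hand. Here is a minimal plain-PBM writer sketched in C, purely to illustrate the format BMTOPBM targets; it is not the Interlisp code, and the function name and layout are mine.

    #include <stdio.h>

    /* plain PBM ("P1"): a text header with width and height, then one
       ASCII digit per pixel, 1 for black and 0 for white (the spec
       suggests keeping lines under 70 characters, ignored here for
       brevity) */
    static void write_pbm(FILE *f, int width, int height,
                          const unsigned char *pixels /* one 0/1 per byte */)
    {
        fprintf(f, "P1\n%d %d\n", width, height);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++)
                fputc(pixels[y * width + x] ? '1' : '0', f);
            fputc('\n', f);
        }
    }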
I developed seasonal allergies relatively late in life. From my late twenties onward, I spent many miserable days in the throes of sneezing, headache, and runny eyes. I tried everything the doctors recommended for relief. About a million different types of medicine, several bouts of allergy vaccinations, and endless testing.

But never once did an allergy doctor ask the basic question: What kind of air are you breathing? Turns out that's everything when you're allergic to pollen, grass, and dust mites! The air. That's what's carrying all this particulate matter, so if your idea of proper ventilation is merely to open a window, you're inviting in your nasal assailants. No wonder my symptoms kept escalating.

For me, the answer was simply to stop breathing air full of everything I'm allergic to while working, sleeping, and generally just being inside. And the way to do that was to clean the air of all those allergens with air purifiers running HEPA-grade filters. That's it. That was the answer!

After learning this, I outfitted everywhere we live with these machines of purifying wonder: one in the home office, one in the living area, one in the bedroom. All monitored for efficiency using Awair air sensors, aiming to have the PM2.5 measure read a fat zero whenever possible. In America, I've used the Alen BreatheSmart series. They're great. And in Europe, I've used the Philips ones. Also good.

It's been over a decade like this now. It's exceptionally rare that I have one of those bad allergy days now. It can still happen, of course — if I spend an entire day outside, breathing in allergens in vast quantities. But as with almost everything, the dose makes the poison. Breathing in some allergens, some of the time, is entirely different from breathing all of them, all of the time.

I think about this often when I see a doctor for something. Here was this entire profession of allergy specialists, and I saw at least a handful of them while I was trying to find a medical solution. None of them even thought about dealing with the environment. The cause of the allergy. Their entire field of view was restricted to dealing with mitigation rather than prevention.

Not every problem, medical or otherwise, has a simple solution. But many problems do, and you have to be careful not to be so smart that you can't see it.