This write-up is meant to present the rationale and technical details behind a tiny project I wrote the other day, WTH, or WindowTitleHack, which is meant to force a constant window name for apps that keep changing it (I'm looking specifically at Firefox and Krita, but there are probably many others).

Why tho? I've been streaming on Twitch from Linux (X11) with a bare-bones OBS Studio setup for a while now, and while most of the experience has been relatively smooth, one particularly striking frustration has been dealing with window detection. If you don't want to capture the whole desktop for privacy reasons, or simply to have control over the scene layout depending on the currently focused app, you need to rely on the Window Capture (XComposite) source. This works mostly fine, and it is actually able to track windows even when their title bar is renamed. But obviously, upon restart it can't find them again because both the window title and the window ID changed, meaning you have to redo...


More from A small freedom area

Fixing the iterative damping interpolation in video games

As I'm exploring the fantastic world of indie game development lately, I end up watching a large number of video tutorials on the subject. Even though the quality of the content is pretty variable, I'm very grateful to the creators for it. That being said, I couldn't help noticing this particular bit time and time again:

a = lerp(a, B, delta * RATE)

Behind this apparently banal call hides a terrible curse, forever perpetuated by innocent souls on the Internet. In this article we will study what it's trying to achieve, how it works, why it's wrong, and then we'll come up with a good solution to the initial problem. The usual warning: I don't have a mathematics or academic background, so the article is addressed at other neanderthals like myself, who managed to understand that pressing keys on a keyboard makes pixels turn on and off.

What is it?

Let's start from the beginning. We're in a game engine main loop callback called at a regular interval (roughly), passing down the time difference from the last call. In Godot engine, it looks like this:

func _physics_process(delta: float):
    ...

If the game is configured to refresh at 60 FPS, we can expect this function to be called around 60 times per second with delta = 1/60 = 0.01666....

As a game developer, we want some smooth animations for all kinds of transformations. For example, we may want the speed of the player to go down to zero as they release the moving key. We could do that linearly, but to make the stop less brutal and robotic we want to slow down the speed progressively.

Linear (top) versus smooth/exponential (bottom) animation

Virtually every tutorial will suggest updating some random variable with something like that:

velocity = lerp(velocity, 0, delta * RATE)

At 60 FPS, a decay RATE defined to 3.5, and an initial velocity of 100, the velocity will go down to 0 following this curve:

Example curve of a decaying variable

Note: velocity is just a variable name example, it can be found in many other contexts.

If you're familiar with lerp() ("linear interpolation") you may be wondering why this is making a curve. Indeed, this lerp() function, also known as mix(), is a simple linear function defined as lerp(a,b,x) = x*(b-a) + a, or its alternative stable form lerp(a,b,x) = (1-x)*a + x*b. For more information, see a previous article about this particular function. But here we are re-using the previous value, so this essentially means nesting lerp() function calls, which expands into a power formula, forming a curve composed of a chain of small straight segments.

Why is it wrong?

The main issue is that the formula depends heavily on the refresh rate. If the game is supposed to work at 30, 60, or 144 FPS, then it means the physics engine is going to behave differently. Here is an illustration of the kind of instability we can expect:

Comparison of the curves at different frame rates with the problematic formula

Note that the inaccuracy when compared to an ideal curve is not the issue here. The problem is that the game mechanics are different depending on the hardware, the system, and the wind direction observed on a small island of Japan. Imagine being able to jump further if we replace our 60Hz monitor with a 144Hz one: that would be some nasty pay-to-win incentive.
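To make the instability concrete, here is a small self-contained C sketch (the helper lerpf() and the constants are mine, chosen to mirror the velocity example above, not code from the article): it runs the exact same one-second decay at 30, 60 and 144 FPS and prints a different final velocity for each frame rate.

#include <stdio.h>

/* Plain linear interpolation, same as the lerp() used in the tutorials. */
static float lerpf(float a, float b, float x)
{
    return a + (b - a) * x;
}

int main(void)
{
    const float rate = 3.5f;            /* the decay RATE from the example above */
    const int fps_list[] = {30, 60, 144};

    for (int i = 0; i < 3; i++) {
        const int fps = fps_list[i];
        const float delta = 1.f / fps;
        float velocity = 100.f;

        /* Simulate exactly one second of game time with the problematic formula. */
        for (int frame = 0; frame < fps; frame++)
            velocity = lerpf(velocity, 0.f, delta * rate);

        printf("%3d FPS: velocity after 1s = %f\n", fps, velocity);
    }
    return 0;
}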
We may be able to get away with this by forcing a constant refresh rate for the game and considering this a non-issue (I'm not convinced this is achievable on all engines and platforms), but then we meet another problem: the device may not be able to hold this requirement at all times because of potential lags (for reasons that may be outside our control). That's right, so far we assumed delta=1/FPS but that's merely a target, it could fluctuate, causing mild to dramatic situations gameplay-wise.

One last issue with that formula is the situation of a huge delay spike, causing an overshooting of the target. For example if we have RATE=3 and we end up with a frame that takes 500ms for whatever random reason, we're going to interpolate with a value of 1.5, which is way above 1. This is easily fixed by clamping the 3rd argument of lerp to a maximum of 1, but we have to keep that issue in mind.

To summarize, the formula is:

- not frame rate agnostic ❌
- non deterministic ❌
- vulnerable to overshooting ❌

If you're not interested in the gory details on the how, you can now jump straight to the conclusion for a better alternative.

Study

We're going to switch to a more mathematical notation from now on. It's only going to be linear algebra, nothing particularly fancy, but we're going to make a mess of 1-letter symbols, bear with me. Let's name the exhaustive list of inputs of our problem:

- initial value: a_0=\Alpha (from where we start, only used once)
- target value: \Beta (where we are going, constant value)
- time delta: \Delta_n (time difference from last call)
- the rate of change: R (arbitrary scaling user constant)
- original sequence: a_{n+1} = \texttt{lerp}(a_n, \Beta, R\Delta_n) (the code in the main loop callback)
- frame rate: F (the target frame rate, for example 60 FPS)
- time: t (animation time elapsed)

What we are looking for is a new sequence formula u_n (u standing for purfect) that doesn't have the 3 previously mentioned pitfalls.

The first thing we can do is to transform this recursive sequence into the expected ideal contiguous time based function. The original sequence was designed for a given rate R and FPS F: this means that while \Delta_n changes in practice every frame, the ideal function we are looking for is constant: \Delta=1/F. So instead of starting from a_{n+1} = \texttt{lerp}(a_n, \Beta, R\Delta_n), we will look for u_n starting from u_{n+1} = \texttt{lerp}(u_n, \Beta, R\Delta) with u_0=a_0=\Alpha. Since I'm lazy and incompetent, we are just going to ask WolframAlpha for help finding the solution to the recursive sequence. But to feed its input we need to simplify the terms a bit:

u_{n+1} = P u_n + Q

...with P=(1-R\Delta) and Q=\Beta R\Delta. We do that so we have a familiar ax+b linear form. According to WolframAlpha this is equivalent to:

u_n = (\Alpha - Q/(1-P)) P^n + Q/(1-P) = (\Alpha-\Beta)(1-R\Delta)^n + \Beta

This is great because we now have the formula according to n, our frame number. We can also express that discrete sequence as a contiguous function according to the time t (with n = t/\Delta = tF):

f(t) = (\Alpha-\Beta)(1-R\Delta)^{tF} + \Beta

Expanding our temporary P and Q placeholders with their values and unrolling, we get:

f(t) = (\Alpha-\Beta)(1-R/F)^{tF} + \Beta = \texttt{lerp}(\Alpha, \Beta, 1-(1-R/F)^{tF})

This function perfectly matches the initial lerp() sequence in the hypothetical situation where the frame rate is honored. Basically, it's what the sequence a_{n+1} was meant to emulate at a given frame rate F. Note: we swapped the first 2 terms of lerp() at the last step because it makes more sense semantically to go from \Alpha to \Beta.

Let's again summarize what we have and what we want: we're in the game main loop and we want our running value to stick to that f(t) function.
We have:

- v=f(t): the value previously computed (t is the running duration so far, but we don't have it); in the original sequence this is known as a_n
- \Delta_n: the delta time for the current frame

We are looking for a function \Eta(v,\Delta_n) which defines the position of a new point on the curve, only knowing v and \Delta_n. It's a "time agnostic" version of f(t). Basically, it is defined as \Eta(v,\Delta_n)=f(t+\Delta_n), but since we don't have t it's not very helpful. That being said, while we don't have t, we do have f(t) (the previous value v). Looking at the curve, we know the y-value of the previous point, and we know the difference between the new point and the previous point on the x-axis:

Previous and current point in time

If we want t (the total time elapsed at the previous point), we need the inverse function f^{-1}. Indeed, t = f^{-1}(f(t)): taking the inverse of a function gives back the input. We know f so we can inverse it, relying on WolframAlpha again (what a blessing this website is):

f^{-1}(v) = \ln((v-\Beta)/(\Alpha-\Beta)) / (F\ln(1-R/F))

Note: \ln stands for natural logarithm, sometimes also called \log. Careful though, on Desmos for example \log is in base 10, not base e (while its \exp is in base e for some reason).

This complex formula may feel a bit intimidating but we can now find \Eta using only its two parameters:

\Eta(v,\Delta_n) = f(f^{-1}(v)+\Delta_n) = (\Alpha-\Beta)(1-R/F)^{F(f^{-1}(v)+\Delta_n)} + \Beta

It's pretty messy but we can simplify that down to something much simpler. An interesting property that is going to be helpful here is m^n = e^{n \ln m}. For my fellow programmers getting tense here: pow(m, n) == exp(n * log(m)). Similarly, e^{\ln x} = x (exp(log(x)) == x), which gives us an idea where this is going:

\Eta(v,\Delta_n) = \Beta + (v-\Beta)e^{F\Delta_n\ln(1-R/F)} = \texttt{lerp}(v, \Beta, 1-e^{F\Delta_n\ln(1-R/F)})

Again we swapped the first 2 arguments of lerp at the last step at the cost of an additional subtraction: this is more readable because \Beta is our destination point. Rewriting this in sequence notation, we get:

u_{n+1} = \texttt{lerp}(u_n, \Beta, 1-e^{F\ln(1-R/F)\Delta_n})

We still have this annoying F\ln(1-R/F) bit in the formula, but we can take it out and precompute it because it is constant: it's our rate conversion formula. We're going to make one extra adjustment: that constant is negative, which is not exactly intuitive to work with as a user (in case it is defined arbitrarily and not through the conversion formula), so we make a sign swap for convenience:

R' = -F\ln(1-R/F)
u_{n+1} = \texttt{lerp}(u_n, \Beta, 1-e^{-R'\Delta_n})

The conversion formula is optional, it's only needed to port previously broken code to the new formula. One interesting thing here is that R' is fairly close to R when R is small. For example, a rate factor R=5 at 60 FPS gives us R' \approx 5.22. This means that if the rate factors weren't closely tuned, it is probably acceptable to go with R'=R and not bother with any conversion. Still, having that formula can be useful to update all the decay constants and check that everything still works as expected.

Also, notice how if the delta gets very large, -R'\Delta_n goes toward -\infty, e^{-R'\Delta_n} toward 0, 1-e^{-R'\Delta_n} toward 1, and so the interpolation is going to reach our final target \Beta without overshooting. This means the formula doesn't need any extra care with regard to the 3rd issue we pointed out earlier. Here are the previous curves again, but now with the new formula and an adjusted rate:

Comparison of the curves at different frame rates with the new formula

Conclusion

So there we have it, the perfect formula: frame rate agnostic ✅, deterministic ✅ and resilient to overshooting ✅.
If you've quickly skimmed through the maths, here is what you need to know:

a = lerp(a, B, delta * RATE)

Should be changed to:

a = lerp(a, B, 1.0 - exp(-delta * RATE2))

With the precomputed RATE2 = -FPS * log(1 - RATE/FPS) (where log is the natural logarithm), or simply using RATE2 = RATE as a rough equivalent. Also, any existing overshooting clamping can safely be dropped. Now please adjust your game to make the world a better and safer place for everyone ♥
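For reference, here is a minimal C sketch of that conclusion; the names damp() and convert_rate() are mine (not from the article), and expf()/logf() come from <math.h>. The conversion helper assumes RATE < FPS, as the formula requires.

#include <math.h>
#include <stdio.h>

/* Frame-rate independent damping of a toward b.
 * Algebraically equivalent to lerp(a, b, 1 - exp(-rate * dt)). */
static float damp(float a, float b, float rate, float dt)
{
    return b + (a - b) * expf(-rate * dt);
}

/* Optional: convert a RATE that was tuned with the old formula at a given FPS. */
static float convert_rate(float rate, float fps)
{
    return -fps * logf(1.f - rate / fps);
}

int main(void)
{
    const float rate2 = convert_rate(3.5f, 60.f);
    float velocity = 100.f;

    /* One simulated frame at 60 FPS; the same call works for any delta. */
    velocity = damp(velocity, 0.f, rate2, 1.f / 60.f);
    printf("rate2=%f velocity=%f\n", rate2, velocity);
    return 0;
}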

Improving color quantization heuristics

In 2015, I wrote an article about how the palette color quantization was improved in FFmpeg in order to make nice animated GIF files. For some reason, to this day this is one of my most popular articles. As time passed, my experience with colors grew and I ended up being quite ashamed and frustrated with the state of these filters. A lot of the code was naive (when not terribly wrong), despite the apparent good results.

One of the major changes I wanted to make was to evaluate the color distances using a perceptually uniform colorspace, instead of using a naive Euclidean distance of RGB triplets. As usual it felt like a week-end long project; after all, all I have to do is change the distance function to work in a different space, right? Well, if you're following my blog you might have noticed I've had numerous adventures that stacked up on each other:

- I had to work out the colorspace with integer arithmetic first
- ...which forced me to look into integer division more deeply
- ...which confronted me with all sorts of undefined behaviours in the process

And when I finally reached the point where I could make the switch to OkLab (the perceptual colorspace), a few experiments showed that the flavour of the core algorithm I was using might contain some fundamental flaws, or at least was not implementing optimal heuristics. So here we go again: quickly enough I find myself starting a new research study in the pursuit of understanding how to put pixels on the screen. This write-up is the story of yet another self-inflicted struggle.

Palette quantization

But what is palette quantization? It essentially refers to the process of reducing the number of available colors of an image down to a smaller subset. In sRGB, an image can have up to 16.7 million colors. In practice though it's generally much less, to the surprise of no one. Still, it's not rare to have a few hundred thousand different colors in a single picture. Our goal is to reduce that to something like 256 colors that represent them best, and use these colors to create a new picture.

Why, you may ask? There are multiple reasons, here are some:

- Improve size compression (this is a lossy operation of course, and using dithering on top might actually defeat the original purpose)
- Some codecs might not support anything else than limited palettes (GIF or subtitle codecs are examples)
- Various artistic purposes

Following is an example of a picture quantized at different levels: original (26125 colors), quantized to 8bpp (256 colors), quantized to 2bpp (4 colors).

This color quantization process can be roughly summarized as a 4-step process:

1. Sample the input image: we build a histogram of all the colors in the picture (basically a simple statistical analysis)
2. Design a colormap: we build the palette through various means using the histograms
3. Create a pixel mapping which associates a color (one that can be found in the input image) with another (one that can be found in the newly created palette)
4. Image quantizing: we use the color mapping to build our new image. This step may also involve some dithering.

The study here will focus on step 2 (which itself relies on step 1).

Colormap design algorithms

A palette is simply a set of colors. It can be represented in various ways, for example in 2D and 3D. To generate such a palette, all sorts of algorithms exist.
They are usually classified into 2 large categories:

- Dividing/splitting algorithms (such as Median-Cut and its various flavors)
- Clustering algorithms (such as K-means, maximin distance, (E)LBG or pairwise clustering)

The former are faster but non-optimal, while the latter are slower but better. The problem is NP-complete, meaning it's possible to find the optimal solution but it can be extremely costly. On the other hand, it's possible to find "local optimums" at minimal cost. Since I'm working within FFmpeg, speed has always been a priority. This was the reason that motivated me to initially implement Median-Cut over a more expensive algorithm.

The rough picture of the algorithm is relatively easy to grasp. Assuming we want a palette of K colors:

1. A set S of all the colors in the input picture is constructed, along with a respective set W of the weight of each color (how often it appears)
2. Since the colors are expressed as RGB triplets, they can be encapsulated in one big cuboid, or box
3. The box is cut in two along one of the axes (R, G or B) at the median (hence the name of the algorithm)
4. If we don't have a total of K boxes yet, pick one of them and go back to the previous step
5. All the colors in each of the K boxes are then averaged to form the color palette entries

Here is what the process looks like visually: Median-Cut algorithm targeting 16 boxes.

You may have spotted in this video that the colors are not expressed in RGB but in Lab: this is because instead of representing the colors in a traditional RGB colorspace, we are instead using the OkLab colorspace, which has the property of being perceptually uniform. It doesn't really change the Median-Cut algorithm, but it definitely has an impact on the resulting palette.

One striking limitation of this algorithm is that we are working exclusively with cuboids: the cuts are limited to an axis, we are not cutting along an arbitrary plane or a more complex shape. Think of it like working with voxels instead of more free-form geometries. The main benefit is that the algorithm is pretty simple to implement.

Now the description provided earlier conveniently avoided describing two important aspects happening in steps 3 and 4:

- How do we choose the next box to split?
- How do we choose along which axis of the box we make the cut?

I pondered that for quite a long time.

An overview of the possible heuristics

In bulk, some of the heuristics I started thinking of:

- should we take the box that has the longest axis across all boxes?
- should we take the box that has the largest volume?
- should we take the box that has the biggest Mean Squared Error when compared to its average color?
- should we take the box that has the axis with the biggest MSE?
- assuming we choose to go with the MSE, should it be normalized across all boxes?
- should we even account for the weight of each color or consider them equal?
- what about the axis? Is it better to pick the longest or the one with the higher MSE?

I tried to formalize these questions mathematically to the best of my limited abilities. So let's start by saying that all the colors c of a given box are stored in an N×M 2D-array following the matrix notation:

L₁ L₂ L₃ … Lₘ
a₁ a₂ a₃ … aₘ
b₁ b₂ b₃ … bₘ

N is the number of components (3 in our case, whether it's RGB or Lab), and M the number of colors in that box. You can visualize this as a list of column vectors as well, where c_{i,j} is the color component at row i and column j.
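Before looking at the full tree of combinations, here is a small C sketch of the basic measurement most of these heuristics build on: the weighted sum of squared error of one channel of a box, around its weighted mean. The struct and the function name are illustrative (mine), not the actual FFmpeg code.

#include <stddef.h>

/* Hypothetical representation of one color entry of a box. */
struct color_entry {
    float comp[3];  /* L,a,b (or R,G,B) components */
    float weight;   /* how many times this color appears in the picture */
};

/* Weighted sum of squared error of one channel, around its weighted mean. */
static float axis_error2(const struct color_entry *colors, size_t n, int channel)
{
    float wsum = 0.f, mean = 0.f, err2 = 0.f;

    for (size_t i = 0; i < n; i++) {
        wsum += colors[i].weight;
        mean += colors[i].weight * colors[i].comp[channel];
    }
    if (wsum <= 0.f)
        return 0.f;
    mean /= wsum;

    for (size_t i = 0; i < n; i++) {
        const float d = colors[i].comp[channel] - mean;
        err2 += colors[i].weight * d * d;
    }
    return err2;
}

The various heuristics then differ in how they combine this per-channel quantity (max, sum or product across channels), whether the weights are honored, and whether the result is normalized per box.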
With that in mind we can sketch the following diagram representing the tree of heuristic possibilities to implement. Mathematicians are going to kill me for doodling random notes all over this perfectly understandable symbols gibberish, but I believe this is required for the human beings reading this article.

In summary, we end up with a total of 24 combinations to try out:

- 2 axis selection heuristics:
  - cut the axis with the maximum error squared
  - cut the axis with the maximum length
- 3 operators:
  - maximum measurement out of all the channels
  - product of the measurements of all the channels
  - sum of the measurements of all the channels
- 4 measurements:
  - error squared, honoring weights
  - error squared, not honoring weights
  - error squared, honoring weights, normalized
  - length of the axis

If we start to intuitively think about which ones are likely going to perform the best, we quickly realize that we haven't actually formalized what we are trying to achieve. Such a rookie mistake. Clarifying this will help us get a better feeling about the likely outcome.

I chose to target an output that minimizes the MSE against the reference image, in a perceptual way. Said differently, trying to make the perceptual distance between an input and output color pixel as small as possible. This is an arbitrary and debatable target, but it's relatively simple and objective to evaluate if we have faith in the selected perceptual model. Another appropriate metric could have been to find the ideal palette through another algorithm, and compare against that instead. Doing that unfortunately implied that I would trust that other algorithm, its implementation, and that I have enough computing power.

So to summarize, we want to minimize the MSE between the input and output, evaluated in the OkLab colorspace. This can be expressed with the following formula, where:

- P is a partition (which we constrain to a box in our implementation)
- C the set of colors in the partition P
- w the weight of a color
- c a single color
- µ the average color of the set C

Special thanks to criver for helping me a ton on the math area, this last formula is from them. Looking at the formula, we can see how similar it is to certain branches of the heuristics tree, so we can start getting an intuition about the result of the experiment.

Experiment language

Short deviation from the main topic (feel free to skip to the next section): working in C within FFmpeg quickly became a hurdle more than anything. Aside from the lack of flexibility, the implicit casts deceitfully destroying the precision, and the undefined behaviours, all kinds of C quirks got in the way several times, which made me question my sanity. This one typically severely messed me up while trying to average the colors:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const int32_t x = -30;
    const uint32_t y = 10;
    const uint32_t a = 30;
    const int32_t b = -10;
    printf("%d×%u=%d\n", x, y, x * y);
    printf("%u×%d=%d\n", a, b, a * b);
    printf("%d/%u=%d\n", x, y, x / y);
    printf("%u/%d=%d\n", a, b, a / b);
    return 0;
}

% cc -Wall -Wextra -fsanitize=undefined test.c -o test && ./test
-30×10=-300
30×-10=-300
-30/10=429496726
30/-10=0

Anyway, I know this is obvious but if you aren't already doing that, I suggest you build your experiments in another language, Python or whatever, and rewrite them in C later once you've figured out your expected output. Re-implementing what I needed in Python didn't take me long. It was, and still is, obviously much slower at runtime, but that's fine.
There is a lot of room for speed improvement, typically by relying on numpy (which I didn't bother with).

Experiment results

I created a research repository for the occasion. The code to reproduce and the results can be found in the color quantization README. In short, based on the results, we can conclude that:

- Overall, the box that has the axis with the largest non-normalized weighted sum of squared error is the best candidate in the box selection algorithm
- Overall, cutting the axis with the largest weighted sum of squared error is the best axis cut selection algorithm

To my surprise, normalizing the weights per box is not a good idea. I initially observed that by trial and error, which was actually one of the main motivators for this research. I initially thought normalizing each box was necessary in order to compare them against each other (such that they are compared on common ground). My loose explanation of the phenomenon was that not normalizing causes a bias towards boxes with many colors, but that's actually exactly what we want. I believe it can also be explained by our evaluation function: we want to minimize the error across the whole set of colors, so small partitions (in color counts) must not be made stronger. At least not in the context of the target we chose.

It's also interesting to see how the max() seems to perform better than the sum() of the variance of each component most of the time. Admittedly my picture sample set is not that big, which may imply that more experiments are required to confirm that tendency.

In retrospect, this might have been quickly predictable to someone with a mathematical background. But since I don't have that, nor do I trust my abstract thinking much, I'm kind of forced to try things out often. This is likely one of the many instances where I spent way too much energy on something obvious from the beginning, but I have the hope it will actually provide some useful information for other lost souls out there.

Known limitations

There are two main limitations I want to discuss before closing this article. The first one is related to minimizing the MSE even more.

K-means refinement

We know the Median-Cut actually provides a rough estimate of the optimal palette. One thing we could do is use it as a first step before refinement, for example by running a few K-means iterations as post-processing (how much refinement/how many iterations could be a user control). The general idea of K-means is to progressively move each color individually to a more appropriate box, that is a box for which the color distance to the average color of that box is smaller. I started implementing that in a very naive way, so it's extremely slow, but that's something to investigate further because it definitely improves the results.

Most of the academic literature seems to suggest the use of K-means clustering, but all of them require some startup step. Some come up with various heuristics, some use PCA, but I've yet to see one that relies on Median-Cut as a first pass; maybe that's not such a good idea, but who knows.

Bias toward perceived lightness

Another, more annoying, problem for which I have no solution is that human perception is much more sensitive to lightness changes than to hue changes. If you look at the first demo with the parrot, you may have observed that the boxes are kind of thin. This is because the a and b components (respectively how green/red and blue/yellow the color is) have a much smaller amplitude compared to L (the perceived lightness).
Here is a side by side comparison of the spread of colors between a stretched and a normalized view. You may rightfully question whether this is a problem or not. In practice, this means that when K is low (let's say smaller than 8 or even 16), cuts along L will almost always be preferred, causing the picture to be heavily desaturated. This is because it tries to preserve the most significant attribute in human perception: the lightness. That particular picture is actually a pathological study case (shown quantized to 4, 8, 12 and 16 colors): we can see the hue timidly appearing around K=16 (specifically, it starts being more strongly noticeable starting with the cut at K=13).

Conclusion

For now, I'm mostly done with this "week-end long project" into which I actually poured 2 or 3 months of lifetime. The FFmpeg patchset will likely be upstreamed soon so everyone should hopefully be able to benefit from it in the next release. It will also come with additional dithering methods, whose implementation was actually a relaxing distraction from all this hardship. There are still many ways of improving this work, but it's the end of the line for me, so I'll trust the Internet with it.

Porting OkLab colorspace to integer arithmetic

For reasons I'll explain in a future write-up, I needed to make use of a perceptually uniform colorspace in some computer vision code. OkLab from Björn Ottosson was a great candidate given how simple the implementation is. But there is a plot twist: I needed the code to be deterministic for the tests to be portable across a large variety of architectures, systems and configurations.

Several solutions were offered to me, including reworking the test framework to support a difference mechanism with a threshold, but having done that in another project I can confidently say that it's not trivial (when not downright impossible in certain cases). Another approach would have been to hardcode the libc math functions, but even then I wasn't confident the determinism of the floating point arithmetic would be guaranteed in all cases. So I ended up choosing to port the code to integer arithmetic. I'm sure many people would disagree with that approach, but:

- code determinism is guaranteed
- not all FPUs are that efficient, typically on embedded
- it can now be used in the kernel; while this is far-fetched for OkLab (though maybe someone needs some color management in v4l2 or something), sRGB transforms might have their use cases
- it's a learning experience which can be re-used in other circumstances
- working on the integer arithmetic versions unlocked various optimizations for the normal case

Note: I'm following Björn Ottosson's will to have OkLab code in the public domain as well as under MIT license, so this "dual licensing" applies to all the code presented in this article.

Warning: The integer arithmetic in this write-up can only work if your language behaves the same as C99 (or more recent) with regard to integer division. See this previous article on integer division for more information.

Quick summary of uniform colorspaces

For those unfamiliar with color management, one of the main benefits of a uniform colorspace like OkLab is that the Euclidean distance between two colors is directly correlated with the human perception of these colors. More concretely, if we want to evaluate the distance between the RGB triplets (R₀,G₀,B₀) and (R₁,G₁,B₁), one may naively compute the Euclidean distance √((R₀-R₁)²+(G₀-G₁)²+(B₀-B₁)²). Unfortunately, even if the RGB is gamma expanded into linear values, the computed distance will actually be pretty far from reflecting how the human eye perceives this difference. It typically isn't going to be consistent when applied to another pair of colors.

With OkLab (and many other uniform colorspaces), the colors are also identified with 3D coordinates, but instead of (R,G,B) we call them (L,a,b) (which is an entirely different 3D space). In that space √((L₀-L₁)²+(a₀-a₁)²+(b₀-b₁)²) (called ΔE, or Delta-E) is expected to be aligned with human perception of color differences. Of course, this is just one model, and it doesn't take into account many parameters. For instance, the perception of a color depends a lot on the surrounding colors. Still, these models are much better than working with RGB triplets, which don't make much sense visually speaking.

Reference code / diagram

In this study case, we will be focusing on the transform that goes from sRGB to OkLab, and back again into sRGB. Only the first part is interesting if we want the color distance, but sometimes we also want to alter a color uniformly and thus we need the 2nd part as well to reconstruct an sRGB color from it.
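Since the color distance is the main motivation for the first half of that pipeline, here is a trivial float sketch of the ΔE described above. The struct mirrors the Lab struct from the reference code below; the function name oklab_dist() is mine.

#include <math.h>
#include <stdio.h>

struct Lab { float L, a, b; };

/* Perceptual distance (ΔE) between two OkLab colors. */
static float oklab_dist(struct Lab c0, struct Lab c1)
{
    const float dL = c0.L - c1.L;
    const float da = c0.a - c1.a;
    const float db = c0.b - c1.b;
    return sqrtf(dL*dL + da*da + db*db);
}

int main(void)
{
    const struct Lab red   = {0.62796f,  0.22486f,  0.12585f};  /* illustrative values */
    const struct Lab green = {0.86644f, -0.23389f,  0.17950f};
    printf("ΔE = %f\n", oklab_dist(red, green));
    return 0;
}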
We are only considering the sRGB input and output for simplicity, which means we will be inlining the sRGB color transfer in the pipeline. If you're not familiar with gamma compression, there are many resources about it on the Internet which you may want to look into first. Here is a diagram of the complete pipeline, and the corresponding code (the 4 circles in the diagram) we will be porting:

struct Lab { float L, a, b; };

uint8_t linear_f32_to_srgb_u8(float x)
{
    if (x <= 0.0) {
        return 0;
    } else if (x >= 1.0) {
        return 0xff;
    } else {
        const float v = x < 0.0031308f ? x*12.92f : 1.055f*powf(x, 1.f/2.4f) - 0.055f;
        return lrintf(v * 255.f);
    }
}

float srgb_u8_to_linear_f32(uint8_t x)
{
    const float v = x / 255.f;
    return v < 0.04045f ? v/12.92f : powf((v+0.055f)/1.055f, 2.4f);
}

struct Lab srgb_u8_to_oklab_f32(uint32_t srgb)
{
    const float r = srgb_u8_to_linear_f32(srgb >> 16 & 0xff);
    const float g = srgb_u8_to_linear_f32(srgb >>  8 & 0xff);
    const float b = srgb_u8_to_linear_f32(srgb       & 0xff);

    const float l = 0.4122214708f * r + 0.5363325363f * g + 0.0514459929f * b;
    const float m = 0.2119034982f * r + 0.6806995451f * g + 0.1073969566f * b;
    const float s = 0.0883024619f * r + 0.2817188376f * g + 0.6299787005f * b;

    const float l_ = cbrtf(l);
    const float m_ = cbrtf(m);
    const float s_ = cbrtf(s);

    const struct Lab ret = {
        .L = 0.2104542553f * l_ + 0.7936177850f * m_ - 0.0040720468f * s_,
        .a = 1.9779984951f * l_ - 2.4285922050f * m_ + 0.4505937099f * s_,
        .b = 0.0259040371f * l_ + 0.7827717662f * m_ - 0.8086757660f * s_,
    };
    return ret;
}

uint32_t oklab_f32_to_srgb_u8(struct Lab c)
{
    const float l_ = c.L + 0.3963377774f * c.a + 0.2158037573f * c.b;
    const float m_ = c.L - 0.1055613458f * c.a - 0.0638541728f * c.b;
    const float s_ = c.L - 0.0894841775f * c.a - 1.2914855480f * c.b;

    const float l = l_*l_*l_;
    const float m = m_*m_*m_;
    const float s = s_*s_*s_;

    const uint8_t r = linear_f32_to_srgb_u8(+4.0767416621f * l - 3.3077115913f * m + 0.2309699292f * s);
    const uint8_t g = linear_f32_to_srgb_u8(-1.2684380046f * l + 2.6097574011f * m - 0.3413193965f * s);
    const uint8_t b = linear_f32_to_srgb_u8(-0.0041960863f * l - 0.7034186147f * m + 1.7076147010f * s);

    return r<<16 | g<<8 | b;
}

sRGB to Linear

The first step is converting the sRGB color to linear values. That sRGB transfer function can be intimidating, but it's pretty much a simple power function. The input is 8-bit ([0x00;0xff] for each of the 3 channels), which means we can use a simple 256-value lookup table containing the precomputed resulting linear values. Note that we can already do that with the reference code, with a table remapping the 8-bit index into a float value.

For our integer version we need to pick an arbitrary precision for the linear representation. 8-bit is not going to be enough precision, so we're going to pick the next power of two to be space efficient: 16-bit. We will be using the constant K=(1<<16)-1=0xffff to refer to this scale. Alternatively we could rely on a fixed point mapping (an integer for the integer part and another integer for the fractional part), but in our case pretty much everything is normalized so the integer part doesn't really matter.

/**
 * Table mapping formula:
 *   f(x) = x < 0.04045 ? x/12.92 : ((x+0.055)/1.055)^2.4  (sRGB EOTF)
 * Where x is the normalized index in the table and f(x) the value in the table.
 * f(x) is remapped to [0;K] and rounded.
*/ static const uint16_t srgb2linear[256] = { 0x0000, 0x0014, 0x0028, 0x003c, 0x0050, 0x0063, 0x0077, 0x008b, 0x009f, 0x00b3, 0x00c7, 0x00db, 0x00f1, 0x0108, 0x0120, 0x0139, 0x0154, 0x016f, 0x018c, 0x01ab, 0x01ca, 0x01eb, 0x020e, 0x0232, 0x0257, 0x027d, 0x02a5, 0x02ce, 0x02f9, 0x0325, 0x0353, 0x0382, 0x03b3, 0x03e5, 0x0418, 0x044d, 0x0484, 0x04bc, 0x04f6, 0x0532, 0x056f, 0x05ad, 0x05ed, 0x062f, 0x0673, 0x06b8, 0x06fe, 0x0747, 0x0791, 0x07dd, 0x082a, 0x087a, 0x08ca, 0x091d, 0x0972, 0x09c8, 0x0a20, 0x0a79, 0x0ad5, 0x0b32, 0x0b91, 0x0bf2, 0x0c55, 0x0cba, 0x0d20, 0x0d88, 0x0df2, 0x0e5e, 0x0ecc, 0x0f3c, 0x0fae, 0x1021, 0x1097, 0x110e, 0x1188, 0x1203, 0x1280, 0x1300, 0x1381, 0x1404, 0x1489, 0x1510, 0x159a, 0x1625, 0x16b2, 0x1741, 0x17d3, 0x1866, 0x18fb, 0x1993, 0x1a2c, 0x1ac8, 0x1b66, 0x1c06, 0x1ca7, 0x1d4c, 0x1df2, 0x1e9a, 0x1f44, 0x1ff1, 0x20a0, 0x2150, 0x2204, 0x22b9, 0x2370, 0x242a, 0x24e5, 0x25a3, 0x2664, 0x2726, 0x27eb, 0x28b1, 0x297b, 0x2a46, 0x2b14, 0x2be3, 0x2cb6, 0x2d8a, 0x2e61, 0x2f3a, 0x3015, 0x30f2, 0x31d2, 0x32b4, 0x3399, 0x3480, 0x3569, 0x3655, 0x3742, 0x3833, 0x3925, 0x3a1a, 0x3b12, 0x3c0b, 0x3d07, 0x3e06, 0x3f07, 0x400a, 0x4110, 0x4218, 0x4323, 0x4430, 0x453f, 0x4651, 0x4765, 0x487c, 0x4995, 0x4ab1, 0x4bcf, 0x4cf0, 0x4e13, 0x4f39, 0x5061, 0x518c, 0x52b9, 0x53e9, 0x551b, 0x5650, 0x5787, 0x58c1, 0x59fe, 0x5b3d, 0x5c7e, 0x5dc2, 0x5f09, 0x6052, 0x619e, 0x62ed, 0x643e, 0x6591, 0x66e8, 0x6840, 0x699c, 0x6afa, 0x6c5b, 0x6dbe, 0x6f24, 0x708d, 0x71f8, 0x7366, 0x74d7, 0x764a, 0x77c0, 0x7939, 0x7ab4, 0x7c32, 0x7db3, 0x7f37, 0x80bd, 0x8246, 0x83d1, 0x855f, 0x86f0, 0x8884, 0x8a1b, 0x8bb4, 0x8d50, 0x8eef, 0x9090, 0x9235, 0x93dc, 0x9586, 0x9732, 0x98e2, 0x9a94, 0x9c49, 0x9e01, 0x9fbb, 0xa179, 0xa339, 0xa4fc, 0xa6c2, 0xa88b, 0xaa56, 0xac25, 0xadf6, 0xafca, 0xb1a1, 0xb37b, 0xb557, 0xb737, 0xb919, 0xbaff, 0xbce7, 0xbed2, 0xc0c0, 0xc2b1, 0xc4a5, 0xc69c, 0xc895, 0xca92, 0xcc91, 0xce94, 0xd099, 0xd2a1, 0xd4ad, 0xd6bb, 0xd8cc, 0xdae0, 0xdcf7, 0xdf11, 0xe12e, 0xe34e, 0xe571, 0xe797, 0xe9c0, 0xebec, 0xee1b, 0xf04d, 0xf282, 0xf4ba, 0xf6f5, 0xf933, 0xfb74, 0xfdb8, 0xffff, }; int32_t srgb_u8_to_linear_int(uint8_t x) { return (int32_t)srgb2linear[x]; } You may have noticed that we are returning the value in a i32: this is to ease arithmetic operations (preserving the 16-bit unsigned precision would have overflow warping implications when working with the value). Linear to OkLab The OkLab is expressed in a virtually continuous space (floats). If we feed all 16.7 millions sRGB colors to the OkLab transform we get the following ranges in output: min Lab: 0.000000 -0.233887 -0.311528 max Lab: 1.000000 0.276216 0.198570 We observe that L is always positive and neatly within [0;1] while a and b are in a more restricted and signed range. Multiple choices are offered to us with regard to the integer representation we pick. Since we chose 16-bit for the input linear value, it makes sense to preserve that precision for Lab. For the L component, this fits neatly ([0;1] in the ref maps to [0;0xffff] in the integer version), but for the a and b component, not so much. We could pick a signed 16-bit, but that would imply a 15-bit precision for the arithmetic and 1-bit for the sign, which is going to be troublesome: we want to preserve the same precision for L, a and b since the whole point of this operation is to have a uniform space. Instead, I decided to go with 16-bits of precision, with one extra bit for the sign (which will be used for a and b), and thus storing Lab in 3 signed i32. 
Alternatively, we could decide to have a 15-bit precision with an extra bit for the sign by using 3 i16. This should work mostly fine but having the values fit exactly the boundaries of the storage can be problematic in various situations, typically anything that involves boundary checks and overflows. Picking a larger storage simplifies a bunch of things.

Looking at srgb_u8_to_oklab_f32 we quickly see that for most of the function it's simple arithmetic, but we have a cube root (cbrt()), so let's study that first.

Cube root

All the cbrt inputs are driven by this:

const float l = 0.4122214708f * r + 0.5363325363f * g + 0.0514459929f * b;
const float m = 0.2119034982f * r + 0.6806995451f * g + 0.1073969566f * b;
const float s = 0.0883024619f * r + 0.2817188376f * g + 0.6299787005f * b;

This might not be obvious at first glance but here l, m and s are all in the [0;1] range (the sum of the coefficients of each row is 1), so we will only need to deal with this range in our cbrt implementation. This greatly simplifies the problem!

Now, what does it look like? This function is simply the inverse of f(x)=x³, which is a more convenient function to work with. And I have some great news: not long ago, I wrote an article on how to inverse a function, so that's exactly what we are going to do here: inverse f(x)=x³. What we first need though is a good approximation of the curve. A straight line is probably fine but we could try to use some symbolic regression in order to get some sort of rough polynomial approximation. PySR can do that in a few lines of code:

import numpy as np
from pysr import PySRRegressor

# 25 points of ³√x within [0;1]
x = np.linspace(0, 1, 25).reshape(-1, 1)
y = x ** (1/3)

model = PySRRegressor(model_selection="accuracy", binary_operators=["+", "-", "*"], niterations=200)
r = model.fit(x, y, variable_names=["x"])
print(r)

The output is not deterministic for some reason (which is quite annoying) and the expressions provided usually follow a wonky form. Still, in my run it seemed to take a liking to the following polynomial: u₀ = x³ - 2.19893x² + 2.01593x + 0.219407 (reformatted in a sane polynomial form thanks to WolframAlpha). Note that increasing the number of data points is not really a good idea because we quickly start being confronted with Runge's phenomenon. No need to overthink it, 25 points is just fine.

Now we can make a few Newton iterations. For that, we need the derivative of f(uₙ)=uₙ³-x, that is f'(uₙ)=3uₙ², and thus the iteration expression can be obtained easily:

uₙ₊₁ = uₙ - f(uₙ)/f'(uₙ) = uₙ - (uₙ³-x)/(3uₙ²) = (2uₙ³+x)/(3uₙ²)

If you don't understand what the hell is going on here, check the article referred to earlier, we're simply following the recipe. Now I had a look into how most libc compute cbrt, and despite sometimes referring to Newton iterations, they were actually using Halley iterations. So we're going to do the same (not lying, just the Halley part). To get the Halley iteration instead of Newton, we need the first but also the second derivative of f(uₙ)=uₙ³-x (f'(uₙ)=3uₙ² and f"(uₙ)=6uₙ), from which we deduce a relatively simple expression:

uₙ₊₁ = uₙ - 2f(uₙ)f'(uₙ)/(2f'(uₙ)²-f(uₙ)f"(uₙ)) = uₙ(2x+uₙ³)/(x+2uₙ³)

We have everything we need to approximate a cube root of a real between 0 and 1.
In Python a complete implementation would be as simple as this snippet:

b, c, d = -2.19893, 2.01593, 0.219407

def cbrt01(x):
    # We only support [0;1]
    if x <= 0: return 0
    if x >= 1: return 1
    # Initial approximation
    u = x**3 + b*x**2 + c*x + d
    # 2 Halley iterations
    u = u * (2*x+u**3) / (x+2*u**3)
    u = u * (2*x+u**3) / (x+2*u**3)
    return u

But now we need to scale the floating values up into 16-bit integers. First of all, in the integer version our x is actually in K scale, which means we want to express u according to X=x·K. Similarly, we want to use B=b·K, C=c·K and D=d·K instead of b, c and d because we have no way of expressing the former as integers otherwise. Finally, we're not actually going to compute u₀ but u₀·K because we're preserving the scale through the function. We have:

u₀·K = K·(x³ + bx² + cx + d)
     = K·((X/K)³ + b(X/K)² + c(X/K) + d)
     = X³/K² + BX²/K² + CX/K + D
     = ((X³ + BX²)/K + CX)/K + D
  U₀ = X(X(X + B)/K + C)/K + D

With this we have a relatively cheap expression where the K divisions would still preserve enough precision even if evaluated as integer divisions. We can do the same for the Halley iteration. I'll spare you the algebra: the expression u(2x+u³) / (x+2u³) becomes (U(2X+U³/K²)) / (X+2U³/K²).

Looking at this expression you may start to worry about overflows, and that would be fair since even K² is getting dangerously close to the sun (it's actually already larger than INT32_MAX). For this reason we're going to cheat and simply use 64-bit arithmetic in this function. I believe we could reduce the risk of overflow, but I don't think there is a way to remain in 32-bit without nasty compromises anyway. This is also why in the code below you'll notice the constants are suffixed with LL (to force long-long/64-bit arithmetic).

Beware that overflows are a terrible predicament to get into, as they will lead to undefined behaviour. Do not underestimate this risk. You might not detect them early enough, and missing them may mislead you when interpreting the results. For this reason, I strongly suggest always building with -fsanitize=undefined during test and development. I don't do that often, but for this kind of research, I also highly recommend first writing tests that cover all possible integer inputs (when applicable) so that overflows are detected as soon as possible.

Before we write the integer version of our function, we need to address rounding. In the case of the initial approximation I don't think we need to bother, but for our Halley iteration we're going to need as much precision as we can get. Since we know U is positive (remember we're evaluating cbrt(x) where x is in [0;1]), we can use the (a+b/2)/b rounding formula. Our function finally just looks like:

#define K2 ((int64_t)K*K)

int32_t cbrt01_int(int32_t x)
{
    int64_t u;

    /* Approximation curve is for the [0;1] range */
    if (x <= 0) return 0;
    if (x >= K) return K;

    /* Initial approximation: x³ - 2.19893x² + 2.01593x + 0.219407 */
    u = x*(x*(x - 144107LL) / K + 132114LL) / K + 14379LL;

    /* Refine with 2 Halley iterations. */
    for (int i = 0; i < 2; i++) {
        const int64_t u3 = u*u*u;
        const int64_t den = x + (2*u3 + K2/2) / K2;
        u = (u * (2*x + (u3 + K2/2) / K2) + den/2) / den;
    }
    return u;
}

Cute, isn't it? If we test the accuracy of this function by calling it for all the possible values, we actually get extremely good results.
Here is a test code: int main(void) { float max_diff = 0; float total_diff = 0; for (int i = 0; i <= K; i++) { const float ref = cbrtf(i / (float)K); const float out = cbrt01_int(i) / (float)K; const float d = fabs(ref - out); if (d > max_diff) max_diff = d; total_diff += d; } printf("max_diff=%f total_diff=%f avg_diff=%f\n", max_diff, total_diff, total_diff / (K + 1)); return 0; } Output: max_diff=0.030831 total_diff=0.816078 avg_diff=0.000012 If we want to trade precision for speed, we could adjust the function to use Newton iterations, and maybe remove the rounding. Back to the core Going back to our sRGB-to-OkLab function, everything should look straightforward to implement now. There is one thing though, while lms computation (at the beginning of the function) is exclusively working with positive values, the output Lab value expression is signed. For this reason we will need a more involved rounded division, so referring again to my last article we will use: static int64_t div_round64(int64_t a, int64_t b) { return (a^b)<0 ? (a-b/2)/b : (a+b/2)/b; } And thus, we have: struct LabInt { int32_t L, a, b; }; struct LabInt srgb_u8_to_oklab_int(uint32_t srgb) { const int32_t r = (int32_t)srgb2linear[srgb >> 16 & 0xff]; const int32_t g = (int32_t)srgb2linear[srgb >> 8 & 0xff]; const int32_t b = (int32_t)srgb2linear[srgb & 0xff]; // Note: lms can actually be slightly over K due to rounded coefficients const int32_t l = (27015LL*r + 35149LL*g + 3372LL*b + K/2) / K; const int32_t m = (13887LL*r + 44610LL*g + 7038LL*b + K/2) / K; const int32_t s = (5787LL*r + 18462LL*g + 41286LL*b + K/2) / K; const int32_t l_ = cbrt01_int(l); const int32_t m_ = cbrt01_int(m); const int32_t s_ = cbrt01_int(s); const struct LabInt ret = { .L = div_round64( 13792LL*l_ + 52010LL*m_ - 267LL*s_, K), .a = div_round64(129628LL*l_ - 159158LL*m_ + 29530LL*s_, K), .b = div_round64( 1698LL*l_ + 51299LL*m_ - 52997LL*s_, K), }; return ret; } The note in this code is here to remind us that we have to saturate lms to a maximum of K (corresponding to 1.0 with floats), which is what we're doing in cbrt01_int(). At this point we can already work within the OkLab space but we're only half-way through the pain. Fortunately, things are going to be easier from now on. OkLab to sRGB Our OkLab-to-sRGB function relies on the Linear-to-sRGB function (at the end), so we're going to deal with it first. Linear to sRGB Contrary to sRGB-to-Linear it's going to be tricky to rely on a table because it would be way too large to hold all possible values (since it would require K entries). I initially considered computing powf(x, 1.f/2.4f) with integer arithmetic somehow, but this is much more involved than how we managed to implement cbrt. So instead I thought about approximating the curve with a bunch of points (stored in a table), and then approximate any intermediate value with a linear interpolation, that is as if the point were joined through small segments. We gave 256 16-bit entries to srgb2linear, so if we were to give as much storage to linear2srgb we could have a table of 512 8-bit entries (our output is 8-bit). Here it is: /** * Table mapping formula: * f(x) = x < 0.0031308 ? x*12.92 : (1.055)*x^(1/2.4)-0.055 (sRGB OETF) * Where x is the normalized index in the table and f(x) the value in the table. * f(x) is remapped to [0;0xff] and rounded. * * Since a 16-bit table is too large, we reduce its precision to 9-bit. 
*/ static const uint8_t linear2srgb[P + 1] = { 0x00, 0x06, 0x0d, 0x12, 0x16, 0x19, 0x1c, 0x1f, 0x22, 0x24, 0x26, 0x28, 0x2a, 0x2c, 0x2e, 0x30, 0x32, 0x33, 0x35, 0x36, 0x38, 0x39, 0x3b, 0x3c, 0x3d, 0x3e, 0x40, 0x41, 0x42, 0x43, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x5b, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, 0x5f, 0x60, 0x61, 0x62, 0x62, 0x63, 0x64, 0x65, 0x65, 0x66, 0x67, 0x67, 0x68, 0x69, 0x6a, 0x6a, 0x6b, 0x6c, 0x6c, 0x6d, 0x6e, 0x6e, 0x6f, 0x6f, 0x70, 0x71, 0x71, 0x72, 0x73, 0x73, 0x74, 0x74, 0x75, 0x76, 0x76, 0x77, 0x77, 0x78, 0x79, 0x79, 0x7a, 0x7a, 0x7b, 0x7b, 0x7c, 0x7d, 0x7d, 0x7e, 0x7e, 0x7f, 0x7f, 0x80, 0x80, 0x81, 0x81, 0x82, 0x82, 0x83, 0x84, 0x84, 0x85, 0x85, 0x86, 0x86, 0x87, 0x87, 0x88, 0x88, 0x89, 0x89, 0x8a, 0x8a, 0x8b, 0x8b, 0x8c, 0x8c, 0x8c, 0x8d, 0x8d, 0x8e, 0x8e, 0x8f, 0x8f, 0x90, 0x90, 0x91, 0x91, 0x92, 0x92, 0x93, 0x93, 0x93, 0x94, 0x94, 0x95, 0x95, 0x96, 0x96, 0x97, 0x97, 0x97, 0x98, 0x98, 0x99, 0x99, 0x9a, 0x9a, 0x9a, 0x9b, 0x9b, 0x9c, 0x9c, 0x9c, 0x9d, 0x9d, 0x9e, 0x9e, 0x9f, 0x9f, 0x9f, 0xa0, 0xa0, 0xa1, 0xa1, 0xa1, 0xa2, 0xa2, 0xa3, 0xa3, 0xa3, 0xa4, 0xa4, 0xa5, 0xa5, 0xa5, 0xa6, 0xa6, 0xa6, 0xa7, 0xa7, 0xa8, 0xa8, 0xa8, 0xa9, 0xa9, 0xa9, 0xaa, 0xaa, 0xab, 0xab, 0xab, 0xac, 0xac, 0xac, 0xad, 0xad, 0xae, 0xae, 0xae, 0xaf, 0xaf, 0xaf, 0xb0, 0xb0, 0xb0, 0xb1, 0xb1, 0xb1, 0xb2, 0xb2, 0xb3, 0xb3, 0xb3, 0xb4, 0xb4, 0xb4, 0xb5, 0xb5, 0xb5, 0xb6, 0xb6, 0xb6, 0xb7, 0xb7, 0xb7, 0xb8, 0xb8, 0xb8, 0xb9, 0xb9, 0xb9, 0xba, 0xba, 0xba, 0xbb, 0xbb, 0xbb, 0xbc, 0xbc, 0xbc, 0xbd, 0xbd, 0xbd, 0xbe, 0xbe, 0xbe, 0xbf, 0xbf, 0xbf, 0xc0, 0xc0, 0xc0, 0xc1, 0xc1, 0xc1, 0xc1, 0xc2, 0xc2, 0xc2, 0xc3, 0xc3, 0xc3, 0xc4, 0xc4, 0xc4, 0xc5, 0xc5, 0xc5, 0xc6, 0xc6, 0xc6, 0xc6, 0xc7, 0xc7, 0xc7, 0xc8, 0xc8, 0xc8, 0xc9, 0xc9, 0xc9, 0xc9, 0xca, 0xca, 0xca, 0xcb, 0xcb, 0xcb, 0xcc, 0xcc, 0xcc, 0xcc, 0xcd, 0xcd, 0xcd, 0xce, 0xce, 0xce, 0xce, 0xcf, 0xcf, 0xcf, 0xd0, 0xd0, 0xd0, 0xd0, 0xd1, 0xd1, 0xd1, 0xd2, 0xd2, 0xd2, 0xd2, 0xd3, 0xd3, 0xd3, 0xd4, 0xd4, 0xd4, 0xd4, 0xd5, 0xd5, 0xd5, 0xd6, 0xd6, 0xd6, 0xd6, 0xd7, 0xd7, 0xd7, 0xd7, 0xd8, 0xd8, 0xd8, 0xd9, 0xd9, 0xd9, 0xd9, 0xda, 0xda, 0xda, 0xda, 0xdb, 0xdb, 0xdb, 0xdc, 0xdc, 0xdc, 0xdc, 0xdd, 0xdd, 0xdd, 0xdd, 0xde, 0xde, 0xde, 0xde, 0xdf, 0xdf, 0xdf, 0xe0, 0xe0, 0xe0, 0xe0, 0xe1, 0xe1, 0xe1, 0xe1, 0xe2, 0xe2, 0xe2, 0xe2, 0xe3, 0xe3, 0xe3, 0xe3, 0xe4, 0xe4, 0xe4, 0xe4, 0xe5, 0xe5, 0xe5, 0xe5, 0xe6, 0xe6, 0xe6, 0xe6, 0xe7, 0xe7, 0xe7, 0xe7, 0xe8, 0xe8, 0xe8, 0xe8, 0xe9, 0xe9, 0xe9, 0xe9, 0xea, 0xea, 0xea, 0xea, 0xeb, 0xeb, 0xeb, 0xeb, 0xec, 0xec, 0xec, 0xec, 0xed, 0xed, 0xed, 0xed, 0xee, 0xee, 0xee, 0xee, 0xef, 0xef, 0xef, 0xef, 0xef, 0xf0, 0xf0, 0xf0, 0xf0, 0xf1, 0xf1, 0xf1, 0xf1, 0xf2, 0xf2, 0xf2, 0xf2, 0xf3, 0xf3, 0xf3, 0xf3, 0xf3, 0xf4, 0xf4, 0xf4, 0xf4, 0xf5, 0xf5, 0xf5, 0xf5, 0xf6, 0xf6, 0xf6, 0xf6, 0xf6, 0xf7, 0xf7, 0xf7, 0xf7, 0xf8, 0xf8, 0xf8, 0xf8, 0xf9, 0xf9, 0xf9, 0xf9, 0xf9, 0xfa, 0xfa, 0xfa, 0xfa, 0xfb, 0xfb, 0xfb, 0xfb, 0xfb, 0xfc, 0xfc, 0xfc, 0xfc, 0xfd, 0xfd, 0xfd, 0xfd, 0xfd, 0xfe, 0xfe, 0xfe, 0xfe, 0xff, 0xff, 0xff, }; Again we're going to start with the floating point version as it's easier to reason with. We have a precision P of 9-bits: P = (1<<9)-1 = 511 = 0x1ff. But for the sake of understanding the math, the following diagram will assume a P of 3 so that we can clearly see the segment divisions: The input of our table is an integer index which needs to be calculated according to our input x. 
But as stated earlier, we won't need one but two indices in order to interpolate a point between 2 discrete values from our table. We will refer to these indices as iₚ and iₙ, which can be computed like this (⌊a⌋ means floor(a)):

i  = x·P
iₚ = ⌊i⌋
iₙ = iₚ + 1

In order to get an approximation of y according to i, we simply need a linear remapping: the ratio of i between iₚ and iₙ is the same ratio as y between yₚ and yₙ. So yet again we're going to rely on the most useful maths formulas: remap(iₚ,iₙ,yₚ,yₙ,i) = mix(yₚ,yₙ,linear(iₚ,iₙ,i)). The ratio r we're computing as an input to the y-mix can be simplified a bit:

r = linear(iₚ,iₙ,i) = (i-iₚ) / (iₙ-iₚ) = i-iₚ = x·P - ⌊x·P⌋ = fract(x·P)

So in the end our formula is simply: y = mix(yₚ,yₙ,fract(x·P)). Translated into C we can write it like this:

uint8_t linear_f32_to_srgb_u8_fast(float x)
{
    if (x <= 0.f) {
        return 0;
    } else if (x >= 1.f) {
        return 0xff;
    } else {
        const float i = x * P;
        const int32_t idx = (int32_t)floorf(i);
        const float y0 = linear2srgb[idx];
        const float y1 = linear2srgb[idx + 1];
        const float r = i - idx;
        return lrintf(mix(y0, y1, r));
    }
}

Note: in case you are concerned about idx+1 overflowing, floorf((1.0-FLT_EPSILON)*P) is P-1, so this is safe.

Linear to sRGB, integer version

In the integer version, our function input x is within [0;K], so we need to make a few adjustments. The first issue we have is that with integer arithmetic our i and idx are the same. We have X=x·K as input, so i = idx = X·P/K because we are using an integer division, which in this case is equivalent to the floor() expression in the float version. So while it's a simple and fast way to get yₚ and yₙ, we have an issue figuring out the ratio r. One tool we have is the modulo operator: the integer division is destructive of the fractional part, but fortunately the modulo (the rest of the division) gives this information back. It can also be obtained for free most of the time because CPU division instructions tend to provide that modulo as well without extra computation. If we write m = (X·P) % K, we have the fractional part of the division expressed in the K scale, which means we can derive our ratio r from it: r = m / K. Slipping the K division into our mix() expression, we end up with the following code:

uint8_t linear_int_to_srgb_u8(int32_t x)
{
    if (x <= 0) {
        return 0;
    } else if (x >= K) {
        return 0xff;
    } else {
        const int32_t xP = x * P;
        const int32_t i = xP / K;
        const int32_t m = xP % K;
        const int32_t y0 = linear2srgb[i];
        const int32_t y1 = linear2srgb[i + 1];
        return (m * (y1 - y0) + K/2) / K + y0;
    }
}

Testing this function for all the possible inputs of x, the biggest inaccuracy is an off-by-one, which concerns 6280 of the 65536 possible values (less than 10%): 2886 "off by -1" and 3394 "off by +1". It matches exactly the inaccuracy of the float version of this function, so I think we can be pretty happy with it. Given how good this approach is, we could also consider applying the same strategy to cbrt, so this is left as an exercise to the reader.

Back to the core

We're finally in our last function.
Using everything we've learned so far, it can be trivially converted to integer arithmetic:

uint32_t oklab_int_to_srgb_u8(struct LabInt c)
{
    const int64_t l_ = c.L + div_round64(25974LL * c.a, K) + div_round64( 14143LL * c.b, K);
    const int64_t m_ = c.L + div_round64(-6918LL * c.a, K) + div_round64( -4185LL * c.b, K);
    const int64_t s_ = c.L + div_round64(-5864LL * c.a, K) + div_round64(-84638LL * c.b, K);

    const int32_t l = l_*l_*l_ / K2;
    const int32_t m = m_*m_*m_ / K2;
    const int32_t s = s_*s_*s_ / K2;

    const uint8_t r = linear_int_to_srgb_u8((267169LL * l - 216771LL * m +  15137LL * s + K/2) / K);
    const uint8_t g = linear_int_to_srgb_u8((-83127LL * l + 171030LL * m -  22368LL * s + K/2) / K);
    const uint8_t b = linear_int_to_srgb_u8((  -275LL * l -  46099LL * m + 111909LL * s + K/2) / K);

    return r<<16 | g<<8 | b;
}

Important things to notice:

- we're storing l_, m_ and s_ in 64-bit values so that the following cubing does not overflow
- we're using div_round64 for parts of the expressions of l_, m_ and s_ because they involve signed sub-expressions
- we're using a naive integer division in r, g and b because the value is expected to be positive

Evaluation

We're finally there. In the end the complete code is less than 200 lines of code, and even less for the optimized float one (assuming we don't implement our own cbrt). The complete code, test functions and benchmark tools can be found on GitHub.

Accuracy

Comparing the integer version to the reference float gives us the following results:

- sRGB to OkLab: max_diff=0.000883 total_diff=0.051189
- OkLab to sRGB: max_diff_r=2 max_diff_g=1 max_diff_b=1

I find these results pretty decent for an integer version, but you're free to disagree and improve them.

Speed

The benchmarks are also interesting: on my main workstation (Intel® Core™ i7-12700, glibc 2.36, GCC 12.2.0), the integer arithmetic is slightly slower than the optimized float version:

Command            | Mean [s]      | Min [s] | Max [s] | Relative
Reference          | 1.425 ± 0.008 | 1.414   | 1.439   | 1.59 ± 0.01
Fast float         | 0.897 ± 0.005 | 0.888   | 0.902   | 1.00
Integer arithmetic | 0.937 ± 0.006 | 0.926   | 0.947   | 1.04 ± 0.01

Observations:

- The FPU is definitely fast in modern CPUs
- Both the integer and optimized float versions are destroying the reference code (note that this is only because of the transfer function optimizations, as we have no change in the OkLab functions themselves in the optimized float version)

On the other hand, on one of my random ARM boards (NanoPI NEO 2 with a Cortex-A53, glibc 2.35, GCC 12.1.0), I get different results:

Command            | Mean [s]       | Min [s] | Max [s] | Relative
Reference          | 27.678 ± 0.009 | 27.673  | 27.703  | 2.04 ± 0.00
Fast float         | 15.769 ± 0.001 | 15.767  | 15.772  | 1.16 ± 0.00
Integer arithmetic | 13.551 ± 0.001 | 13.550  | 13.553  | 1.00

Not that much faster proportionally speaking, but the integer version is still significantly faster overall on such a low-end device.

Conclusion

This took me ages to complete, way longer than I expected, but I'm pretty happy with the end result and with everything I learned in the process. Also, you may have noticed how much I referred to previous work; this has been particularly satisfying from my point of view (re-using previous toolboxes means they were actually useful). This write-up won't be an exception to the rule: in a later article, I will make use of OkLab for another project I've been working on for a while now. See you soon!

over a year ago 51 votes
GCC undefined behaviors are getting wild

Happy with my recent breakthrough in understanding C integer divisions after weeks of struggle, I was minding my own business having fun writing integer arithmetic code. Life was good, when suddenly… zsh: segmentation fault (core dumped).

That code wasn't messing with memory much, so it was more likely to be a side effect of an arithmetic overflow or something. Using -fsanitize=undefined quickly identified the issue, which confirmed the presence of an integer overflow. The fix was easy, but something felt off. I was under the impression my code was robust enough against that kind of honest mistake. Turns out, the protecting condition I had in place should indeed have been enough, so I tried to extract a minimal reproducible case:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

uint8_t tab[0x1ff + 1];

uint8_t f(int32_t x)
{
    if (x < 0)
        return 0;
    int32_t i = x * 0x1ff / 0xffff;
    if (i >= 0 && i < sizeof(tab)) {
        printf("tab[%d] looks safe because %d is between [0;%d[\n", i, i, (int)sizeof(tab));
        return tab[i];
    }
    return 0;
}

int main(int ac, char **av)
{
    return f(atoi(av[1]));
}

The overflow can happen on x * 0x1ff. Since an integer overflow is undefined, GCC makes the assumption that it cannot happen, ever. In practice in this case it does, but the i >= 0 && i < sizeof(tab) condition should be enough to take care of it, whatever crazy value it becomes, right? Well, I have bad news:

% cc -Wall -O2 overflow.c -o overflow && ./overflow 50000000
tab[62183] looks safe because 62183 is between [0;512[
zsh: segmentation fault (core dumped)  ./overflow 50000000

Note: this is GCC 12.2.0 on x86-64.

We have i=62183 as the result of the overflow, and nevertheless the execution violates the gate condition, spouts a nonsense lie, goes straight into dereferencing tab, and dies miserably. Let's study what GCC is doing here. Firing up Ghidra, we observe the following decompiled code:

uint8_t f(int x)
{
    int tmp;
    if (-1 < x) {
        tmp = x * 0x1ff;
        if (tmp < 0x1fffe00) {
            printf("tab[%d] looks safe because %d is between [0;%d[\n",
                   (ulong)(uint)tmp / 0xffff, (ulong)(uint)tmp / 0xffff, 0x200);
            return tab[(int)((uint)tmp / 0xffff)];
        }
    }
    return '\0';
}

When I said GCC makes the assumption that it cannot happen, this is what I meant: tmp is not supposed to overflow, so part of the condition I had in place was simply removed. More specifically, since x cannot be less than 0, and since GCC assumes a multiplication cannot overflow into a random value (that could be negative) because it is undefined behaviour, it then decides to drop the "redundant" i >= 0 condition because "it cannot happen".
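For what it's worth, a robust way to write this particular function (a sketch of mine; the post doesn't show the author's actual fix) is to catch the overflow itself instead of trying to test the poisoned result afterwards, for example with the checked-arithmetic builtin available in GCC and Clang:

#include <stdint.h>

extern uint8_t tab[0x1ff + 1];

uint8_t f_checked(int32_t x)
{
    if (x < 0)
        return 0;
    int32_t prod;
    /* __builtin_mul_overflow() computes x * 0x1ff and reports whether it
     * overflowed int32_t, without ever invoking undefined behaviour. */
    if (__builtin_mul_overflow(x, 0x1ff, &prod))
        return 0;
    const int32_t i = prod / 0xffff;
    if (i >= 0 && i < (int32_t)sizeof(tab))
        return tab[i];
    return 0;
}

Doing the multiplication in a wider type (e.g. (int64_t)x * 0x1ff) would work just as well here; the point is that the check has to happen before the overflow, not after.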
I reported that exact issue to GCC to make sure it wasn't a bug, and it was indeed confirmed to me that the undefined behaviour of an integer overflow is not limited in scope to whatever insane value it could take: it is apparently perfectly acceptable to mess up the code flow entirely. While I understand how attractive it can be from an optimization point of view, the paranoid developer in me is straight up terrified by the perspective of a single integer overflow removing security protections and causing such havoc. I've worked for several years on a project where integer overflows were (and probably still are) legion. Identifying and fixing all of them is likely a lifetime mission for several opinionated individuals. I'm expecting this article to make the Rust crew go on a crusade again, and I think I might be with them this time.

Edit: it was made clear to me while reading Predrag's blog that the key to my misunderstanding boils down to this: "Undefined behavior is not the same as implementation-defined behavior". While I was indeed talking about undefined behaviour, subconsciously I was thinking that the behaviour of an overflow on a multiplication would be "implementation-defined behaviour". This is not the case, it is indeed undefined behaviour, and yes, the compiler is free to do whatever it wants because that is compliant with the specifications. It's my mistake of course, but in my defense, despite the arrogant comments I read, this confusion happens a lot. I believe it happens because it violates the principle of least astonishment. To illustrate this, I'll point to this interesting old OpenBSD developer blog post, which is concerned about the result of the multiplication rather than about the invalidation of any guarantee regarding what happens to the execution flow (before and after). This is not uncommon and, in my opinion, perfectly understandable.

over a year ago 47 votes

More in programming

What can agents actually do?

There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they’re hard to act on. Sort of like saying that your company could be much better if you merely adopted software. That’s certainly true, but it’s not a particularly helpful claim.

This post is an attempt to concisely summarize how AI agents work, apply that summary to a handful of real-world use cases for AI, and make the case that the potential of AI agents is equivalent to the potential of this generation of AI. By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company.

How do agents work?

At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic’s /v1/message with a prompt: How should I adopt LLMs in my company? That prompt is used to fill the LLM’s context window, which conditions the model to generate certain kinds of responses.

This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result.

Prompt engineering, or context engineering as it’s being called now, is deciding what to put into the context window to best generate the responses you’re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.

However, composing the perfect context window is very time intensive, benefiting from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to do a good job of providing relevant context. For example, if you prompt, Who is going to become the next mayor of New York City?, then you are in no position to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you’re asking the question to begin with!

This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don’t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window.

This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response.

However, it’s important to clarify how “tool usage” actually works. An LLM does not actually call a tool. (You can skim OpenAI’s function calling documentation if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive:

1. The program designer that calls the LLM API must also define a set of tools that the LLM is allowed to suggest using.
2. Every API call to the LLM includes that defined set of tools as options that the LLM is allowed to recommend.
3. The response from the API call with defined functions is either:
   - Generated text, as any other call to an LLM might provide
   - A recommendation to call a specific tool with a specific set of parameters, e.g. an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response:
     [{ "type": "function_call", "name": "get_weather", "arguments": "{\"location\":\"Paris, France\"}" }]
4. The program that calls the LLM API then decides whether and how to honor that requested tool use. The program might decide to reject the requested tool because it’s been used too frequently recently (e.g. rate limiting), it might check if the associated user has permission to use the tool (e.g. maybe it’s a premium only tool), it might check if the parameters match the user’s role-based permissions as well (e.g. the user can check weather, but only admin users are allowed to check weather in France).
5. If the program does decide to call the tool, it invokes the tool, then calls the LLM API with the output of the tool appended to the prior call’s context window.

The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools’ output to the LLM to generate more context.

What’s magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context.

This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let’s think about three different scenarios:

Flow control via rules has concrete rules about how tools can be used. Some examples:
- it might only allow a given tool to be used once in a given workflow (or a usage limit of a tool for each user, etc)
- it might require that a human-in-the-loop approves parameters over a certain value (e.g. refunds more than $100 require human approval)
- it might run a generated Python program and return the output to analyze a dataset (or provide error messages if it fails)
- it might apply a permission system to tool use, restricting who can use which tools and which parameters a given user is able to use (e.g. you can only retrieve your own personal data)
- a tool to escalate to a human representative can only be called after five back and forths with the LLM agent

Flow control via statistics can use statistics to identify and act on abnormal behavior:
- if the size of a refund is higher than 99% of other refunds for the order size, you might want to escalate to a human
- if a user has used a tool more than 99% of other users, then you might want to reject usage for the rest of the day
- it might escalate to a human representative if tool parameters are more similar to prior parameters that required escalation to a human agent

LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue. As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.
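To make the flow-control point concrete, here is a deliberately tiny sketch of the kind of rules-based gate the calling program can apply before honoring a suggested tool call. This is my own illustration, not code from the post: the tool name, the $100 threshold, and the usage limit are all made up.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct tool_call {
    const char *name;   /* tool the LLM suggested */
    double amount;      /* e.g. refund amount, 0 if not applicable */
    int uses_today;     /* how many times this user already used it */
};

/* Rules-based gate: the calling program, not the LLM, makes the decision. */
static bool allow_tool_call(const struct tool_call *c)
{
    if (strcmp(c->name, "refund_purchase") == 0 && c->amount > 100.0)
        return false;   /* large refunds require a human in the loop */
    if (c->uses_today >= 5)
        return false;   /* per-user daily usage limit */
    return true;
}

int main(void)
{
    const struct tool_call suggested = { "refund_purchase", 250.0, 1 };
    puts(allow_tool_call(&suggested) ? "invoke tool" : "escalate to a human");
    return 0;
}

The important part is that this check lives in ordinary software, outside the LLM, so it cannot be talked out of its rules.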
At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include:
- Building general context to add to the context window, sometimes thought of as maintaining memory
- Initiating a workflow based on an incoming ticket in a ticket tracker, customer support system, etc
- Periodically initiating workflows at a certain time, such as hourly review of incoming tickets

Alright, we’ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:
1. Use an LLM to evaluate a context window and get a result
2. Use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response
3. Manage flow control for tool usage via rules or statistical analysis
4. Agents are software programs, and can do anything other software programs do

Armed with these four capabilities, we’ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.

Use Case 1: Customer Support Agent

One of the first scenarios people often talk about for deploying AI agents is customer support, so let’s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let’s set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact. Our approach might be:
- Allow tickets (or support chats) to flow into an AI agent
- Provide a variety of tools to the agent to support:
  - Retrieving information about the user: recent customer support tickets, account history, account state, and so on
  - Escalating to the next tier of customer support
  - Refunding a purchase (almost certainly implemented as “refund purchase” referencing a specific purchase by the user, rather than “refund amount” to prevent scenarios where the agent can be fooled into refunding too much)
  - Closing the user account on request
- Include customer support guidelines in the context window, describe customer problems, map those problems to specific tools that should be used to solve the problems
- Flow control rules that ensure all calls escalate to a human if not resolved within a certain time period, number of back-and-forth exchanges, if they run into an error in the agent, and so on. These rules should be both rules-based and statistics-based, ensuring that gaps in your rules are neither exploitable nor create a terrible customer experience
- Review agent-customer interactions for quality control, making improvements to the support guidelines provided to AI agents. Initially you would want to review every interaction, then move to interactions that lead to unusual outcomes (e.g. escalations to human) and some degree of random sampling
- Review hourly, then daily, and then weekly metrics of agent performance
- Based on your learnings from the metric reviews, you should set baselines for alerts which require more immediate response. For example, if a new topic comes up frequently, it probably means a serious regression in your product or process, and it requires immediate review rather than periodical review.
Note that even when you’ve moved “Customer Support to AI agents”, you still have:
- a tier of human agents dealing with the most complex calls
- humans reviewing the periodic performance statistics
- humans performing quality control on AI agent-customer interactions

You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn’t how they work, it’s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.

Applied with care, the above series of actions will work successfully. However, it’s important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.

Use Case 2: Triaging incoming bug reports

When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it’s potentially quite severe, then you want on-call engineers immediately investigating; if it’s certainly not severe, then you want to triage it in a less urgent process of some sort. It’s interesting to think about how an AI agent might support this triaging workflow. The process might work as follows:
- Pipe all created incidents and all created tickets to this agent for review.
- Expose these tools to the agent:
  - Open an incident
  - Retrieve current incidents
  - Retrieve recently created tickets
  - Retrieve production metrics
  - Retrieve deployment logs
  - Retrieve feature flag change logs
  - Toggle known-safe feature flags
  - Propose merging an incident with another for human approval
  - Propose merging a ticket with another ticket for human approval
- Redundant LLM providers for critical workflows. If the LLM provider’s API is unavailable, retry three times over ten seconds, then resort to using a second model provider (e.g. Anthropic first, if unavailable try OpenAI), and then finally create an incident that the triaging mechanism is unavailable. For critical workflows, we can’t simply assume the APIs will be available, because in practice all major providers seem to have monthly availability issues.
- Merge duplicates. When a ticket comes in, first check ongoing incidents and recently created tickets for potential duplicates. If there is a probable duplicate, suggest merging the ticket or incident with the existing issue and exit the workflow.
- Assess impact. If production statistics are severely impacted, or if there is a new kind of error in production, then this is likely an issue that merits quick human review. If it’s high priority, open an incident. If it’s low priority, create a ticket.
- Propose cause. Now that the incident has been sized, switch to analyzing the potential causes of the incident. Look at the code commits in recent deploys and suggest potential issues that might have caused the current error. In some cases this will be obvious (e.g. spiking errors with a traceback of a line of code that changed recently), and in other cases it will only be proximity in time.
- Apply known-safe feature flags. Establish an allow list of known-safe feature flags that the system is allowed to activate itself. For example, if there are expensive features that are safe to disable, it could be allowed to disable them, e.g. restricting paginating through deeper search results when under load might be a reasonable tradeoff between stability and user experience.
- Defer to humans. At this point, rely on humans to drive incident, or ticket, remediation to completion.
- Draft initial incident report. If an incident was opened, the agent should draft an initial incident report including the timeline, related changes, and the human activities taken over the course of the incident. This report should then be finalized by the human involved in the incident.
- Run incident review. Your existing incident review process should take the incident review and determine how to modify your systems, including the triaging agent, to increase reliability over time.
- Safeguard to reenable feature flags. Since we now have an agent disabling feature flags, we also need to add a periodic check (agent-driven or otherwise) to reenable the “known safe” feature flags if there isn’t an ongoing incident, to avoid accidentally disabling them for long periods of time.

This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes:
- The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it’s about actively learning what to change based on the incident. You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low-risk changes (e.g. a copy change) being automatically added to a ticket for human approval. Doing this for more complex, or riskier, changes is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of “just add agents and things get easy.” Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and is able to represent many customer-specific features solely through that DSL.
- Expanding the list of known-safe feature flags to make incidents remediable. To do this widely will require enforcing very specific requirements for how software is developed. Even doing this narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed.
- Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn’t driving a reduction in MTTR, then something is rotten in the details of the implementation.

Even a very effective agent doesn’t relieve the responsibility of careful system design.
Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they’ll only amplify your problems even more widely.

Do AI Agents Represent the Entirety of this Generation of AI?

If you accept my definition that AI agents are any combination of LLMs and software, then I think it’s true that there’s not much this generation of AI can express that doesn’t fit this definition. I’d readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.

Closing thoughts

LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work. Software isn’t magic, it’s very logical, but what it can accomplish is magical. The same goes for agents and LLMs. The more we can accelerate that learning curve, the better for our industry.

21 hours ago 4 votes
Buying a house in Karuizawa, Japan

After 18 months of living in Karuizawa, a resort town about an hour away from Tokyo via the Shinkansen, I have bought a house here. This article describes my experience of purchasing a house, and contains tips that are useful both if you’re considering buying in Karuizawa specifically, and in Japan more generally. In this article, I’ll cover:
- My personal journey to Karuizawa
- Why you might choose to live in Karuizawa
- My process of buying a house
- Tips for potential homebuyers in Japan

From Tokyo to Karuizawa: my own journey

In April 2020 I was living in central Tokyo. Three days after my one-year-old son entered daycare, a state of emergency was declared because of COVID-19. The daycare was closed, so my wife and I took turns looking after him while working remotely from our apartment. The small park next to our apartment was also closed due to the state of emergency, and so my time with my son included such fun activities as walking up and down the stairs, and looking at ants in the parking lot. After a week or two, we’d had enough.

We moved in with my in-laws, who were living in a house in a small commuter town in Saitama. Our lives improved dramatically. The nearby parks were not closed. The grandparents could help look after my son. I could enjoy excellent cycling in the nearby mountains. We kept our apartment in Tokyo, as we didn’t know how long we’d stay, but let my brother-in-law use it. It was more spacious than his own, and gave him a better place to work remotely from.

Leaving Tokyo for good

In June of 2020 the state of emergency was lifted, yet we decided not to move back to Tokyo. Before having a kid, Tokyo had been a wonderful place to live. There was always something new to try, and opportunities to make new professional connections abounded. But as a father, my lifestyle had changed dramatically, even before COVID. No longer was I attending multiple events per week, but at most one per month. I basically stopped drinking alcohol, as maximizing the quality of what little sleep I could get was too important. With a stroller, getting around by train was no longer easy. My life had become much more routine: excitement came from watching my son grow, not my own activities.

Though we’d enjoyed living in an area that was suburban bordering on rural, we didn’t want to live with my in-laws indefinitely. Still, seeing how useful it was to have them around with a small child, we decided to look for a house to rent nearby. We found a relatively modern one in the next town over, about a 10-minute drive from my in-laws. The rent was about half of what our Tokyo apartment’s had been, yet this house was twice as big, and even had a small garden. Living there was a wonderful change of pace. When my second child was born a year later, having the in-laws nearby became even more helpful. But as the COVID restrictions began rolling back, we started to think about where we wanted to settle down permanently.

Choosing Karuizawa

There were some definite downsides to the town we were living in. Practically all our neighbours were retirees. There were few restaurants that were worth going to. The equipment at the playgrounds was falling apart, and there was no sign it would be replaced. The summers were brutally hot, making it unpleasant to be outside. There wasn’t anything for kids to do inside. We’d visited Karuizawa before, and it seemed like it had the access to nature we appreciated, while also offering more options for dining out and other cultural activities.
The cooler climate was also a big motivating factor. So in June 2022, we started looking at places. Most of the rental options were incredibly expensive seasonal properties, and so buying a house seemed like the most realistic option. But the market moved extremely quickly, and any property that seemed like a good deal was snatched up before we even had a chance to look at it. After about half a year of searching, we gave up.

We then considered buying a house in Gotenba, a town at the base of Mount Fuji. We’d visited there in the past, and though it didn’t seem quite as nice an option as Karuizawa, we were able to find a decent house that was cheap enough that buying it was a relatively small risk. We placed a lowball offer on the property, and were rejected. Frustrated, I took a look at the rental options for Karuizawa again. This time we saw two houses for rent that were good candidates! We heard from the real estate agent that one of the properties already had someone coming to view it the next afternoon, and so we went the next morning. Visiting the house, it seemed like exactly what we were looking for. We immediately put in a rental application, and moved in the next month.

Why Karuizawa?

I’ve already covered some of the reasons why we chose Karuizawa, but I’ll go into more detail about them, along with other unexpected good points.

Mild summers

According to historical data, the average temperature in Karuizawa in August is 20.8°C. While daytime temperatures get hotter than that, you’re unlikely to risk heat stroke by being active outside, something that isn’t true for much of Japan. That being said, climate change is inescapable, and summers in Karuizawa are getting hotter than historical data would suggest. This is an issue because while it cools down at night, many of the houses are built only with the cold winters in mind. For instance, despite the house I rented being built in 2019, it only had a single AC unit. This meant that many days in July and August were unbearably hot inside, and I had trouble sleeping at night as it wouldn’t cool down until well past midnight.

Cold but cheerful winters

While summers are cooler, this also means winters are cold. The average temperature in January is -3.3°C. However, much like Tokyo, winters here tend to be quite sunny, and there have only been a few heavy snowfalls per year while I’ve lived here. That’s enough that you can enjoy the snow without having to worry about shovelling it daily.

Proximity to Tokyo

The Shinkansen makes it quicker for me to get into central Tokyo than when I was living in suburban Saitama. It’s about a 70-minute ride from Karuizawa station to Tokyo station. While this is not something I’d want to do daily, I now typically go into Tokyo only a couple of times per month, which is reasonable.

Greenery and nature

As a resort community, Karuizawa places a lot of emphasis on greenery. Throughout the town, there are plenty of areas where residential and commercial spaces mingle with forest. Many houses have nice gardens. Shade exists everywhere. Wilder nature is close by, too. While you can do day hikes from Tokyo, they involve an hour plus on the train, followed by a climb where you’re usually surrounded by other people. In Karuizawa, I can walk 15 minutes to a trailhead where I’ll see at most a couple of groups per hour.

Cheaper than Tokyo, but not significantly so

Money spent on housing goes a lot further in Karuizawa than in Tokyo.
For the same price as a used Tokyo apartment, you can get a nice used house on a relatively large plot of land in Karuizawa. However, it’s hard to find truly affordable housing. For starters, Karuizawa’s plots tend to be large by Japanese standards, so even if the per-unit price of the land is cheaper, the overall cost isn’t so cheap. For instance, a centrally located 100 tsubo (330 m²) plot in Nakakaruizawa, which is on the small side of what you can find, will currently sell for at least 30 million yen. It may be possible to get a deal on a more remote piece of land, but true steals are hard to come by.

Dining out is another area where it’s easy to spend lots of money. Prices are on par with Tokyo, and there are few cheap options, with Sukiya, McDonald’s, and a couple of ramen shops being practically the only sub-1,000 yen choices. Groceries, at least, are cheaper. While the Tsuruya grocery store does sell a lot of high-end items, the basics like milk, eggs, vegetables, and chicken are significantly cheaper than in Tokyo or even suburban Saitama.

All this means that many residents of Karuizawa are wealthy compared to Japan’s overall population. Sometimes that makes them take for granted that others might struggle financially. For instance, when my son graduated from public daycare, the PTA organized a graduation party. We paid 16,000 yen for myself, my wife, and two children to attend. Since that’s equivalent to two eight-hour days at minimum wage, I think it was too expensive. I noticed that some of the children didn’t attend, while others came with just one parent, perhaps because of the prohibitive cost.

Liquid housing market

While we were living in suburban Saitama, we considered buying a house, and even looked at a couple of places. It was very affordable, with even mansions (in the traditional English usage of the word) being cheaper than Tokyo mansions (in the Japanese sense). But the reason they were so cheap was because there wasn’t a secondary market for them. So even if we could get a good deal, we were worried that should we ever want to sell it again, it might not even be possible to give it away.

Karuizawa’s real estate market moves fast. This sucks as a buyer but is great for sellers. Furthermore, because it’s perceived as a place to have a second home by wealthy Tokyoites, houses that are by our standards very nice are nowhere near the top of the market. This reduces the risk of buying a property and not being able to sell it later.

Great public facilities

If you have young children, Karuizawa offers many free or cheap public facilities. This includes nice playgrounds, children’s halls stocked with toys and activities, a beautiful library with a decent selection of English picture books, and a recreation centre with a gym, training room, pool, indoor and outdoor skating rinks, and a curling rink (built for the 1998 Winter Olympics).

A welcoming and international community

Some areas of Japan have a reputation for being insular and not friendly toward newcomers. Perhaps because much of Karuizawa’s population has moved here from elsewhere, the people I’ve met have been quite welcoming. I’m also always happy to meet new residents, and pass on the helpful tips I’ve picked up. In addition, I’ve discovered that many residents have some sort of international background. While non-Japanese residents are very much the minority, a large percentage of the Japanese parents I’ve spoken with have either studied or worked abroad.
There is also an International Association of Karuizawa, whose members include many long-term non-Japanese residents. Their Facebook group makes it easy to ask for advice in English.

You can get around without a car

Most days I don’t use a car, and get around by walking or bicycling. The lack of heavy snow makes it possible to do this year round (though I’m one of the few people who still uses a bicycle when it is -10°C out). Mostly I’m able to get around without a car because I live in a relatively central area, so it’s not feasible for every resident. It can be quite helpful, though, as traffic becomes absolutely horrible during the peak seasons. For instance, the house I was renting was a kilometer away from the one I bought. It was less than a 5-minute drive normally, but during Golden Week it took 30 minutes. That being said, it’s practically required to use a car sometimes. For instance, there’s a great pediatrician, but they’re only accessible by car. Similarly, I use our car to take my kids to the pool, pick up things from the home centre, and so on.

My process for buying a house

After a year of renting in Karuizawa, we decided we wanted to continue living there indefinitely. Continuing to rent the same house wasn’t an option: the owners were temporarily abroad and so we had a fixed-term contract. While we could have looked for another place to rent, we figured we’d be settling down, and so wanted to buy a house.

Starting the search

On September 1st, 2024, we started our search and immediately found our first candidate: a recently-built house near Asama Fureai Park. That park is the nicest one for young children in Karuizawa, as it has a good playground but is never too crowded. The property was listed with Royal Resort, which has the most listings for Karuizawa. Rather than contacting Royal Resort directly, though, we asked Masaru Takeyama of Resort Innovation to do so on our behalf. In Japan, there is a buyer’s agent and a seller’s agent. Often the seller’s agent wants to act as the buyer’s agent as well, as it earns them double the commission, despite the obvious conflicts of interest. Of the agents we’d contacted previously, Masaru seemed the most trustworthy. I may have also been a bit biased, as he speaks English, whereas the other agents only spoke Japanese. He set up a viewing for us, but we discovered it wasn’t quite our ideal house. While the house was well built, the layout wasn’t our preference. It also was on a road with many large, noisy trucks, so it was kind of unpleasant to be outside.

Making the first offer

Over the next three months, we’d look at new listings online practically every day. I set up automated alerts for when the websites of different real estate agents were updated. We considered dozens of properties, and actually visited four plots and three used houses. None of them were a good match. On December 3rd, 2024, I found a great-looking property. It was being advertised as 5.1 km from Nakakaruizawa station, which would have made it inconvenient, but it was also listed as 中部小学校近隣戸建 (Chubu Elementary Neighborhood Detached House). As the elementary school isn’t that far from the station, I assumed there was an error. The “house” was more of an old cabin, but it was priced cheaply for just the land itself. I sent Masaru an email asking him to give us the address. Meanwhile, my wife and I had gotten pretty good at identifying the locations of properties based on clues in the listing.
The startup of a fellow Karuizawa international resident has a good tool for searching lots. My wife managed to identify this one, and so the following morning we looked at it. It wasn’t perfect, but it was a great location and a good deal, so before Masaru had even responded to us with the address, we sent another email saying we’d like to buy it.

Masaru responded later that morning that he had a concern: the access to the house was a private road divided into two parcels, one owned by a real estate company, and the other by a pair of individuals. We’d need to negotiate with those parties about things like usage rights, waterworks, and so on. While he wasn’t worried about dealing with the real estate company, as it should just be a question of compensating them, the individuals owning the other parcel might theoretically be unwilling to negotiate. Based on his analysis, the following day (December 5th) we submitted a non-binding purchase application, with the condition that the road issue be resolved.

That evening Masaru told us that he had discovered that the seller was a woman in her nineties. Because things like dementia are common among people that old, he added another condition, which was that a judicial scrivener testify to her mental capacity to enter into a contract. He warned that if we didn’t do that, and later the seller was to be found to be mentally unfit, the sale could be reversed, even if we had already proceeded with the construction of a new house. Given the cost of building a new home, this seemed like a prudent measure to take.

A few days later we heard that the seller’s son was discussing the sale with his mother, and that they’d be having a family meeting the following weekend to decide whether to make the sale. To try to encourage them to sell to us, my wife wrote a letter (in Japanese) to them, and attached some photos of our family in Karuizawa.

On December 16th, we received word that there was actually another potential buyer for the property who’d made an identical offer. In Japan, once someone submits a purchase application, it is normally handled on a first-come, first-served basis. However I noted that the seller didn’t mark the property as “under negotiation” on their website. I suspect we were the first to make an offer, as we did it literally the day after it was published. Apparently the seller’s son wanted to do business with us, but the daughter preferred the other family. We were not told why the daughter wanted to go with the other potential buyers, but it may have been because of our request to have the seller’s mental capacity tested. Dropping that requirement seemed too risky, though.

On December 18th, we were informed that the seller had chosen the other buyer. This was quite a disappointing turn of events. We’d initially thought it was a sure thing because we were so fast to submit the offer and were giving them what they’d asked for. We’d even talked with a couple of home builders already. Furthermore, as winter had arrived, it seemed like the real estate market had also frozen. Whereas we’d previously seen new properties appear almost daily, now we spotted one or two a week at most.

Making a successful offer

Over the holidays, we looked at a couple more plots. On January 6th, we identified a used house in Nakakaruizawa that seemed to meet our criteria. It was by no means perfect, but close enough to be a candidate, so on January 10th we visited it. The house was pretty much as expected. Our one concern was the heating situation.
It had previously had central heating that was based on a kerosene boiler, but that had been replaced with central air conditioning. While visiting the house, we turned it on, but it didn’t seem to make much difference. After some back and forth, we got the seller to run the AC constantly for a couple of days, and then on the morning of January 14th, we visited the house again. It was a grey and cold day, with little sunlight—basically the worst conditions possible for keeping the house warm, which I suppose was a good test. It was about 1°C outside and 10°C inside. We heard that the house had previously been rented out, and that the prior tenants had needed to use extra kerosene heaters to stay warm. This wasn’t ideal, but we also saw that with some renovations, we could improve the insulation.

We considered making a lowball offer, but Masaru thought it was unlikely that it would be accepted. Instead, on January 17th, we made an offer at the asking price, conditional on it passing a home inspection and us getting an estimate for renovation costs. We heard that, once again, there was another potential buyer. They were “preparing” to submit an offer, but hadn’t done so yet. This time, the seller followed the standard process of considering our offer first, and accepting it in a non-binding manner. We learned that the seller itself was also a real estate company, whose main business seemed to be developing hotels and golf courses. The sale of this property was a relatively minor deal for them, and their main concern was that everything went smoothly.

On January 23rd, we had a housing inspection carried out. The inspector found several issues, but they were all cosmetic or otherwise minor, so we said we’d proceed with the contract.

Signing the contract

The seller’s agent didn’t appear to be in a hurry to finalize the contract. It took a while for them to present us with the paperwork, and after that there were some minor changes to it, so it wasn’t until March 19th that we finally signed. One reason for the delay was that we’d suggested we sign the contract electronically, something the seller had never done before. Signing a contract electronically was not only simpler from our perspective, but also avoided the need to affix a revenue stamp on the contract, saving some money. But even though Masaru was experienced with electronic contracts, it seemed to take the seller some time to confirm that everything was okay. When we signed the contract, we agreed to immediately make a 10 percent down payment, with the outstanding amount to be transferred by May 16th. We’d already gotten pre-approval on a housing loan, but needed the extra time for the final approval.

Obtaining the loan

On January 22nd, we visited Hachijuni Bank, which is headquartered in Nagano and has a branch in Karuizawa, and started the loan pre-approval process. This involved submitting a fair amount of paperwork, especially as I was the director of a company as opposed to a regular employee. The whole process took about two weeks. In parallel, we started the pre-approval process for PayPay Bank. The application involved minimal paperwork, and we were able to quickly get pre-approved by them too. To receive the actual approval for the loan, though, we needed to have already signed the purchase contract, so we couldn’t begin until March 19th. This time, the situation was reversed, and PayPay Bank required much more documentation than Hachijuni Bank. On April 9th, we met again with Hachijuni Bank.
There was a problem with our application: my middle names hadn’t been properly entered into the contract, and so we’d have to go through the approval process again. The representative said he’d make sure they reviewed it quickly because it was his fault. That evening, we got the result of our application to PayPay Bank: rejected! No reason was given, but as they offer quite low rates, I’m guessing they only accept the lowest-risk loans. Maybe the reason was the house itself, maybe it was that I was the director of a small private company, maybe it was something else. While the Hachijuni Bank representative seemed positive about our application, I was worried. On April 15th we got the result from Hachijuni Bank: approved! Two days later we signed the paperwork with the bank and scheduled the handover for April 30th.

Toilet troubles

When the previous tenants had moved out of the house, the seller had decided to drain the water. Karuizawa temperatures are cold enough that if the house isn’t being heated, the pipes can freeze and burst. This meant that when we had the housing inspection performed, the inspector couldn’t confirm anything about the plumbing. Given that by mid-April temperatures were no longer below zero, we asked that the seller put the water back in. They agreed, and on April 23rd we heard that there was a problem with the toilets—some sort of leak. Likely some of the rubber sealing had dried out while the water had been removed. The seller offered to replace the toilets or pay us the cost of replacing them with equivalent ones. My understanding was that because they hadn’t declared any issues with the toilets in the original purchase agreement, they were obligated to do so. My wife and I had already talked about upgrading the toilets, but I wasn’t convinced that it’d be worth it. With the seller offering to pay two-thirds of the cost of the more premium toilets we wanted, though, it seemed like a stroke of luck.

Handover

On April 30th, the handover went smoothly. A representative from the seller’s company met with myself, the real estate agents, and a judicial scrivener. Technically I didn’t need to be there for it, but as the representative had come from Hiroshima I thought I should be present just in case anything happened. Nothing went awry, however, and they gave me the keys and some omiyage.

Renovations

We had already decided to do some renovations before moving in. With the toilets not working, that became an immediate necessity. One of the challenges of buying a used house is deciding how extensively to renovate. We settled on doing the minimum: replacing the toilets, putting up a wall to split the kids’ bedroom in two (it already had two separate doors and was designed to be split), switching the lights in the bedroom from fluorescent tubes to LEDs, and adding inner windows (内窓, uchi mado) to improve the insulation. We’d kicked off the process shortly after completing the home inspection, and had already had contractors visit to give estimates in advance. Still, we weren’t able to finalize any plans until we’d decided the exact handover date. After some negotiating, we got an agreement to wrap up the renovations by the beginning of June, and set a move-in date of June 5th.

Moving in

We managed to move in as scheduled, almost half a year after we first saw the house, and 10 months after we began our initial search. We weren’t exactly in a rush with the whole process, but it did take a lot of our time.
Tips for potential homebuyers in Japan

Here’s what I’d recommend if you’re looking to buy a house in Japan.

Don’t rely on real estate agents to find properties for you

My impression of real estate agents in Karuizawa is that they’re quite passive. I think this is a combination of the market being hot, and the fact that many inquiries they get are from people who dream about living here but will never make it a reality. I’d suggest being proactive about the search yourself. We found all of the best properties because we discovered the listings ourselves, not because an agent sent them to us.

Find an agent you can trust

The main value I got out of using a real estate agent was not because he introduced specific properties, but because he could point out potential issues with them. Checking the ownership of the road leading up to the property, or whether a contract could be reversed if the signer was years later deemed to be mentally incompetent, weren’t issues that had ever even occurred to me. Every property I considered purchasing had some unapparent problems. While there are some legal obligations placed upon the buyer’s agent, my understanding is that those don’t extend to every possible situation. What’s more, real estate seems to attract unscrupulous people, and some agents aren’t averse to behaving unethically or even illegally if it helps them close more deals. Because of this, I think the most important quality in an agent is trustworthiness. That can be hard to evaluate, but go with your gut.

You don’t need your company to be profitable three years in a row

I’d previously heard that it’s challenging to get a loan as the director of a company, and that the last three years of a business needed to be profitable to receive approval. That made me apprehensive, as TokyoDev was only incorporated in 2022, and 2024 was an unprofitable year for us due to it being the first year we had a decrease in the number of hires. However, when Hachijuni Bank reviewed TokyoDev’s finances, they said it was a great business. I was the sole employee of the business and it was only unprofitable because I had set my director’s compensation too aggressively. We’d had previous profitable years, and the company didn’t have any debts. In other words, one bad year for your company isn’t going to tank your chances of getting a loan.

Go through the final approval process with multiple banks

Some will advise you not to go through the final approval process of multiple banks at the same time. Since there’s a central database that allows banks to see that you’re doing this, it apparently increases your risk of rejection. If you’re only applying at a few banks, though, the risk remains fairly low. I guess PayPay Bank theoretically could have rejected us for this reason, but I doubt it. If we had initially applied only with them, however, and had waited to be rejected before applying with Hachijuni Bank, we would have risked not having the final approval go through before the payment deadline.

Get a housing inspection

If you’re buying a used house of any significant value, I highly recommend getting an inspection. We used Sakura Home Inspection, the biggest inspection company in Japan. While the base price for the inspection was only 66,000 yen, we ended up paying about 240,000 yen due to additional options, the size of the house, and transportation fees.
While that’s not exactly cheap, it was less than one percent of the total purchase price, and seemed worth it to mitigate the risk of serious issues that might appear after the purchase.

Rent locally before buying

Our plan of first renting, then buying worked out quite well for us. Not only did it give me confidence that Karuizawa was somewhere I wanted to live for the long haul, it was much easier visiting potential properties when I lived a 10-minute bicycle ride away from them, instead of a 3-hour drive.

If you’re considering buying a house in Karuizawa…

Please get in touch with me! I’m happy to answer any questions you might have. In my previous article on coworking spaces in Karuizawa, I invited people to get in touch, and already have made several new connections through it. I’d love to continue to grow my community, and help transform Karuizawa from a resort town to one that’s focused on its full-time residents.

3 hours ago 2 votes
Do You Even Personalize, Bro?

There’s a video on YouTube from “Technology Connections” — who I’ve never heard of or watched until now — called Algorithms are breaking how we think. I learned of this video from Gedeon Maheux of The Iconfactory fame. Speaking in the context of why they made Tapestry, he said the ideas in this video would be their manifesto. So I gave it a watch.

Generally speaking, the video asks: Does anyone care to have a self-directed experience online, or with a computer more generally?

I'm not sure how infrequently we’re actually deciding for ourselves these days [how we decide what we want to see, watch, and do on the internet]

Ironically we spend more time than ever on computing devices, but less time than ever curating our own experiences with them. Which — again ironically — is the inverse of many things in our lives. Generally speaking, the more time we spend with something, the more we invest in making it our own — customizing it to our own idiosyncrasies. But how much time do you spend curating, customizing, and personalizing your digital experience? (If you’re reading this in an RSS reader, high five!) I’m not talking about “I liked that post, or saved that video, so the algorithm is personalizing things for me”. Do you know what to get yourself more of? Do you know where to find it? Do you even ask yourself these questions?

“That sounds like too much work” you might say. And you’re right, it is work. As the guy in the video says:

I'm one of those weirdos who think the most rewarding things in life take effort

Me too.

Email · Mastodon · Bluesky

13 hours ago 2 votes
My first year since coming back to Linux

It has been a year since I set up my System76 Merkaat with Linux Mint. In July of 2024 I migrated from ChromeOS and the Merkaat has been my daily driver on the desktop. A year later I have nothing major to report, which is the point. Despite the occasional unplanned reinstallation I have been enjoying the stability of Linux and just using the PC.

This stability finally enabled me to burn bridges with mainstream operating systems and fully embrace Linux and open systems. I'm ready to handle the worst and get back to work. Just a few years ago the frustration of troubleshooting a broken system would have made me seriously consider the switch to a proprietary solution. But a year of regular use, with an ordinary mix of quiet moments and glitches, gave me the confidence to stop worrying and learn to love Linux.

yesterday 4 votes
Overanalyzing a minor quirk of Espressif’s reset circuit

The mystery

In the previous article, I briefly mentioned a slight difference between the ESP-Prog and the reproduced circuit, when it comes to EN: Focusing on EN, it looks like the voltage level goes back to 3.3V much faster on the ESP-Prog than on the breadboard circuit. The grid is horizontally spaced at 2ms, so … Continue reading Overanalyzing a minor quirk of Espressif’s reset circuit →

The post Overanalyzing a minor quirk of Espressif’s reset circuit appeared first on Quentin Santos.

yesterday 3 votes