As I'm exploring the fantastic world of indie game development lately, I end up watching a large number of video tutorials on the subject. Even though the quality of the content is pretty variable, I'm very grateful to the creators for it. That being said, I couldn't help noticing this particular bit time and time again:

    a = lerp(a, B, delta * RATE)

Behind this apparently banal call hides a terrible curse, forever perpetuated by innocent souls on the Internet. In this article we will study what it's trying to achieve, how it works, why it's wrong, and then we'll come up with a good solution to the initial problem.

The usual warning: I don't have a mathematics or academic background, so the article is addressed at other neanderthals like myself, who managed to understand that pressing keys on a keyboard makes pixels turn on and off.

What is it?

Let's start from the beginning. We're in a game engine main loop callback called at a (roughly) regular interval, passing down the time difference from the last call. In the Godot engine, it looks like this:

    func _physics_process(delta: float):
        ...

If the game is configured to refresh at 60 FPS, we can expect this function to be called around 60 times per second with delta = 1/60 = 0.01666...

As a game developer, we want smooth animations for all kinds of transformations. For example, we may want the speed of the player to go down to zero as they release the moving key. We could do that linearly, but to make the stop less brutal and robotic we want to slow down the speed progressively.

[Figure: linear (top) versus smooth/exponential (bottom) animation]

Virtually every tutorial will suggest updating some variable with something like that:

    velocity = lerp(velocity, 0, delta * RATE)

At 60 FPS, with a decay RATE set to 3.5 and an initial velocity of 100, the velocity will go down to 0 following this curve:

[Figure: example curve of a decaying variable]

Note: velocity is just an example of a variable name; the same pattern can be found in many other contexts.

If you're familiar with lerp() ("linear interpolation") you may be wondering why this is making a curve. Indeed, this lerp() function, also known as mix(), is a simple linear function defined as lerp(a,b,x) = x*(b-a) + a, or its alternative stable form lerp(a,b,x) = (1-x)*a + x*b. For more information, see a previous article about this particular function. But here we are re-using the previous value, so this essentially means nesting lerp() function calls, which expands into a power formula, forming a curve composed of a chain of small straight segments.

Why is it wrong?

The main issue is that the formula depends heavily on the refresh rate. If the game is supposed to work at 30, 60, or 144 FPS, then the physics engine is going to behave differently at each of those rates. Here is an illustration of the kind of instability we can expect:

[Figure: comparison of the curves at different frame rates with the problematic formula]

Note that the inaccuracy when compared to an ideal curve is not the issue here. The problem is that the game mechanics differ depending on the hardware, the system, and the wind direction observed on a small island of Japan. Imagine being able to jump further after replacing a 60Hz monitor with a 144Hz one; that would be some nasty pay-to-win incentive.
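To make the instability concrete, here is a quick C sketch of mine (not from any tutorial) that runs the same decay for one simulated second at different frame rates. The results disagree even though the same wall-clock time has elapsed:

    #include <stdio.h>

    static float lerp(float a, float b, float x) { return a + x * (b - a); }

    /* Run the problematic update for 1 simulated second at a given FPS */
    static float decay_for_one_second(float value, int fps, float rate)
    {
        const float delta = 1.f / fps;
        for (int i = 0; i < fps; i++)
            value = lerp(value, 0.f, rate * delta);
        return value;
    }

    int main(void)
    {
        /* Same RATE, same initial value, same elapsed time... */
        printf("after 1s at  30 FPS: %f\n", decay_for_one_second(100.f, 30, 3.5f));  /* ~2.42 */
        printf("after 1s at  60 FPS: %f\n", decay_for_one_second(100.f, 60, 3.5f));  /* ~2.72 */
        printf("after 1s at 144 FPS: %f\n", decay_for_one_second(100.f, 144, 3.5f)); /* ~2.89 */
        return 0;
    }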
We may be able to get away with this by forcing a constant refresh rate for the game and considering this a non-issue (I'm not convinced this is achievable on all engines and platforms), but then we meet another problem: the device may not be able to hold this requirement at all times because of potential lags (for reasons that may be outside our control). That's right: so far we assumed delta=1/FPS, but that's merely a target; it can fluctuate, causing mild to dramatic situations gameplay-wise.

One last issue with that formula is the situation of a huge delay spike, causing an overshoot of the target. For example, if we have RATE=3 and we end up with a frame that takes 500ms for whatever random reason, we're going to interpolate with a value of 1.5, which is way above 1. This is easily fixed by clamping the 3rd argument of lerp to 1, but we have to keep that issue in mind.

To summarize, the formula is:

- not frame rate agnostic ❌
- non-deterministic ❌
- vulnerable to overshooting ❌

If you're not interested in the gory details of the how, you can now jump straight to the conclusion for a better alternative.

Study

We're going to switch to a more mathematical notation from now on. It's only going to be linear algebra, nothing particularly fancy, but we're going to make a mess of 1-letter symbols, so bear with me. Let's name the exhaustive list of inputs of our problem:

- initial value: a_0 = A (from where we start, only used once)
- target value: B (where we are going, constant value)
- time delta: Δ_n (time difference from last call)
- the rate of change: R (arbitrary scaling user constant)
- original sequence: a_{n+1} = lerp(a_n, B, R·Δ_n) (the code in the main loop callback)
- frame rate: F (the target frame rate, for example 60 FPS)
- time: t (animation time elapsed)

What we are looking for is a new sequence formula u_n (u standing for purfect) that doesn't have the 3 previously mentioned pitfalls.

The first thing we can do is transform this recursive sequence into the ideal continuous, time-based function it was meant to emulate. The original sequence was designed for a given rate R and FPS F: this means that while Δ_n changes in practice every frame, the ideal function we are looking for assumes a constant Δ = 1/F. So instead of starting from a_{n+1} = lerp(a_n, B, R·Δ_n), we will look for u_n starting from u_{n+1} = lerp(u_n, B, R·Δ) with u_0 = a_0 = A.

Since I'm lazy and incompetent, we are just going to ask WolframAlpha for help finding the solution to the recursive sequence. But to feed its input we need to simplify the terms a bit:

    u_{n+1} = lerp(u_n, B, R·Δ)
            = u_n + R·Δ·(B - u_n)
            = (1 - R·Δ)·u_n + B·R·Δ
            = P·u_n + Q

...with P = (1 - R·Δ) and Q = B·R·Δ. We do that so we have a familiar ax+b linear form. According to WolframAlpha this is equivalent to:

    u_n = (A - B)·P^n + B

(Q vanishes because the fixed point Q/(1-P) simplifies to B.) This is great because we now have the formula according to n, our frame number. We can also express that discrete sequence as a continuous function of the time t, using n = t·F:

    f(t) = (A - B)·P^(t·F) + B

Expanding our temporary P and Q placeholders with their values and unrolling, we get:

    f(t) = (A - B)·(1 - R/F)^(t·F) + B
         = lerp(A, B, 1 - (1 - R/F)^(t·F))

This function perfectly matches the initial lerp() sequence in the hypothetical situation where the frame rate is honored. Basically, it's what the sequence a_{n+1} was meant to emulate at a given frame rate F.

Note: we swapped the first 2 terms of lerp() at the last step because it makes more sense semantically to go from A to B.

Let's again summarize what we have and what we want: we're in the game main loop and we want our running value to stick to that f(t) function.
We have:

- v = f(t): the value previously computed (t is the running duration so far, but we don't have it); in the original sequence this is known as a_n
- Δ_n: the delta time for the current frame

We are looking for a function H(v, Δ_n) which defines the position of a new point on the curve, knowing only v and Δ_n. It's a "time agnostic" version of f(t). Basically, it is defined as H(v, Δ_n) = f(t + Δ_n), but since we don't have t it's not very helpful. That being said, while we don't have t, we do have f(t) (the previous value v). Looking at the curve, we know the y-value of the previous point, and we know the difference between the new point and the previous point on the x-axis:

[Figure: previous and current point in time]

If we want t (the total time elapsed at the previous point), we need the inverse function f⁻¹. Indeed, t = f⁻¹(f(t)): taking the inverse of a function gives back the input. We know f, so we can invert it, relying on WolframAlpha again (what a blessing this website is):

    t = f⁻¹(v) = ln((v - B)/(A - B)) / (F·ln(1 - R/F))

Note: ln stands for the natural logarithm, sometimes also written log. Careful though: on Desmos for example log is in base 10, not base e (while its exp is in base e for some reason).

This complex formula may feel a bit intimidating, but we can now express H using only its two parameters:

    H(v, Δ_n) = f(f⁻¹(v) + Δ_n)

It's pretty messy, but we can simplify that down to something much simpler. An interesting property that is going to be helpful here is m^n = e^(n·ln m). For my fellow programmers getting tense here: pow(m, n) == exp(n * log(m)). Similarly, e^(ln x) = x (exp(log(x)) == x), which gives us an idea where this is going. Simplifying, we end up with:

    H(v, Δ_n) = (v - B)·e^(F·ln(1 - R/F)·Δ_n) + B
              = lerp(v, B, 1 - e^(F·ln(1 - R/F)·Δ_n))

Again we swapped the first 2 arguments of lerp at the last step at the cost of an additional subtraction: this is more readable because B is our destination point.

Rewriting this in a sequence notation, we get:

    a_{n+1} = lerp(a_n, B, 1 - e^(F·ln(1 - R/F)·Δ_n))

We still have this annoying F·ln(1 - R/F) bit in the formula, but we can take it out and precompute it because it is constant: it's our rate conversion formula:

    R' = F·ln(1 - R/F)

We're going to make one extra adjustment: R' is negative, which is not exactly intuitive to work with as a user (in case it is defined arbitrarily and not through the conversion formula), so we do a sign swap for convenience:

    R' = -F·ln(1 - R/F)
    a_{n+1} = lerp(a_n, B, 1 - e^(-R'·Δ_n))

The conversion formula is optional; it's only needed to port previously broken code to the new formula. One interesting thing here is that R' is fairly close to R when R is small. For example, a rate factor R=5 at 60 FPS gives us R' ≈ 5.22. This means that if the rate factors weren't closely tuned, it is probably acceptable to go with R'=R and not bother with any conversion. Still, having that formula can be useful to update all the decay constants and check that everything still works as expected.

Also, notice how when the delta gets very large, -R'·Δ_n goes toward -∞, e^(-R'·Δ_n) toward 0, and 1 - e^(-R'·Δ_n) toward 1, so the interpolation reaches our final target B without overshooting. This means the formula doesn't need any extra care with regard to the 3rd issue we pointed out earlier.

Looking at the previous curves but now with the new formula and an adjusted rate:

[Figure: comparison of the curves at different frame rates with the new formula]

Conclusion

So there we have it, the perfect formula: frame rate agnostic ✅, deterministic ✅, and resilient to overshooting ✅.
If you've quickly skimmed through the maths, here is what you need to know:

    a = lerp(a, B, delta * RATE)

should be changed to:

    a = lerp(a, B, 1.0 - exp(-delta * RATE2))

with the precomputed RATE2 = -FPS * log(1 - RATE/FPS) (where log is the natural logarithm), or simply RATE2 = RATE as a rough equivalent. Also, any existing overshoot clamping can safely be dropped.

Now please adjust your game to make the world a better and safer place for everyone ♥
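Wrapped up as a tiny self-contained C program (function names and constants are mine; engines typically ship their own lerp), the fix plus the optional rate conversion looks like this:

    #include <math.h>
    #include <stdio.h>

    static float lerp(float a, float b, float x) { return a + x * (b - a); }

    /* Optional: convert a rate tuned with the old formula at a given FPS.
     * convert_rate(5, 60) ≈ 5.22, matching the example from the article. */
    static float convert_rate(float rate, float fps)
    {
        return -fps * logf(1.f - rate / fps);
    }

    /* Frame rate independent decay; huge deltas converge to b, no overshoot */
    static float damp(float a, float b, float rate2, float delta)
    {
        return lerp(a, b, 1.f - expf(-rate2 * delta));
    }

    int main(void)
    {
        const float rate2 = convert_rate(3.5f, 60.f);
        float v = 100.f;
        v = damp(v, 0.f, rate2, 1.f / 60.f); /* a regular frame */
        v = damp(v, 0.f, rate2, 0.5f);       /* a 500ms spike: still no overshoot */
        printf("%f\n", v);
        return 0;
    }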
This write-up is meant to present the rationale and technical details behind a tiny project I wrote the other day, WTH, or WindowTitleHack, which is meant to force a constant window name for apps that keep changing it (I'm looking specifically at Firefox and Krita, but there are probably many others).

Why tho?

I've been streaming on Twitch from Linux (X11) with a barebones OBS Studio setup for a while now, and while most of the experience has been relatively smooth, one particularly striking frustration has been dealing with window detection. If you don't want to capture the whole desktop for privacy reasons, or simply want control over the scene layout depending on the currently focused app, you need to rely on the Window Capture (XComposite) source. This works mostly fine, and it is actually able to track windows even when their title bar is renamed. But upon restart it can't find the window again, because both the window title and the window ID have changed, meaning you have to redo your setup by reselecting the window.

It would have been acceptable if that were the only issue I had, but one of the more advanced features I'm extensively using is the Advanced Scene Switcher (the built-in one, available through the Tools menu). This tool is a basic window title pattern matching system that allows automatic scene switches depending on the current window. Since it doesn't even seem to support regexes, it's troublesome to make it work reliably with apps constantly changing their title (and even if it did, there is no guarantee that the app would leave a recognizable, matchable pattern in its title).

Hacking Windows

One unreliable hack would be to spam xdotool commands to correct the window title. This could be a resource hog, and it would create quite a bunch of races. One slight improvement would be to use xprop -spy, but that wouldn't address the race conditions (since we would adjust the title after it has already been changed). So how do we deal with that properly?

Well, on X11 with the reference library (Xlib) there are actually various (a lot of, actually) ways of changing the title bar. It took me a while to identify which call(s) to target, but I ended up with the following call graph, where each function is exposed publicly:

[Figure: call graph of the Xlib functions that can change the window title]

From this we can easily see that we only need to hook the deepest function, XChangeProperty, and check if the property is XA_WM_NAME (or its "modern" sibling, _NET_WM_NAME).

How do we do that? With the help of the LD_PRELOAD environment variable and a dynamic library that implements a custom XChangeProperty. First, we grab the original function:

    #include <dlfcn.h>

    /* A type matching the prototype of the target function */
    typedef int (*XChangeProperty_func_type)(
        Display *display,
        Window w,
        Atom property,
        Atom type,
        int format,
        int mode,
        const unsigned char *data,
        int nelements
    );

    /* [...] */

    XChangeProperty_func_type XChangeProperty_orig = dlsym(RTLD_NEXT, "XChangeProperty");

We also need to craft a custom _NET_WM_NAME atom:

    _NET_WM_NAME = XInternAtom(display, "_NET_WM_NAME", 0);

With this we are now able to intercept all the WM_NAME events and override them with our own:

    if (property == XA_WM_NAME || property == _NET_WM_NAME) {
        data = (const unsigned char *)new_title;
        nelements = (int)strlen(new_title);
    }
    return XChangeProperty_orig(display, w, property, type, format, mode, data, nelements);

We wrap all of this into our own redefinition of XChangeProperty and… that's pretty much it.
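Assembled into one picture, the hooked function might look roughly like this. This is a simplified sketch of mine, not the project's exact code; in particular, the lazy initialization shown here is not thread safe, an issue the article comes back to below:

    #define _GNU_SOURCE /* for RTLD_NEXT */
    #include <stdlib.h>
    #include <string.h>
    #include <dlfcn.h>
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    typedef int (*XChangeProperty_func_type)(Display *, Window, Atom, Atom,
                                             int, int, const unsigned char *, int);

    static XChangeProperty_func_type XChangeProperty_orig;
    static Atom _NET_WM_NAME;
    static const char *new_title;

    int XChangeProperty(Display *display, Window w, Atom property, Atom type,
                        int format, int mode, const unsigned char *data,
                        int nelements)
    {
        /* Lazily resolve the original symbol and cache the settings */
        if (!XChangeProperty_orig) {
            XChangeProperty_orig = (XChangeProperty_func_type)dlsym(RTLD_NEXT, "XChangeProperty");
            _NET_WM_NAME = XInternAtom(display, "_NET_WM_NAME", 0);
            new_title = getenv("WTH_TITLE");
        }

        /* Override both the legacy and the "modern" title properties */
        if (new_title && (property == XA_WM_NAME || property == _NET_WM_NAME)) {
            data = (const unsigned char *)new_title;
            nelements = (int)strlen(new_title);
        }
        return XChangeProperty_orig(display, w, property, type, format, mode, data, nelements);
    }

Built as a shared object (something like cc -shared -fPIC wth.c -o libwth.so -lX11, flags may vary per system), this is what gets injected through LD_PRELOAD.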
Now, due to a long history of development, Xlib has been "deprecated" and superseded by libxcb. Both are widely used, but fortunately the APIs are more or less similar. The function to hook is xcb_change_property, and defining _NET_WM_NAME is slightly more cumbersome but not exactly challenging:

    const xcb_intern_atom_cookie_t cookie = xcb_intern_atom(conn, 0, strlen("_NET_WM_NAME"), "_NET_WM_NAME");
    xcb_intern_atom_reply_t *reply = xcb_intern_atom_reply(conn, cookie, NULL);
    if (reply)
        _NET_WM_NAME = reply->atom;
    free(reply);

Aside from that, the code is pretty much the same.

Configuration

To pass down the custom title to override, I've been relying on an environment variable, WTH_TITLE. From a user point of view, it looks like this:

    LD_PRELOAD="builddir/libwth.so" WTH_TITLE="Krita4ever" krita

We could probably improve usability by creating a wrapping tool (so that we could have something such as ./wth --title=Krita4ever krita). Unfortunately I wasn't yet able to make a self-referencing executable accepted by LD_PRELOAD, so for now the manual LD_PRELOAD and WTH_TITLE environment variables will do just fine.

Thread safety

To avoid a bunch of redundant function round trips, we need to globally cache a few things: the new title (to avoid fetching it from the environment all the time), the original functions (to save the dlsym call), and _NET_WM_NAME. Those are loaded lazily at the first function call, but we have no guarantee with regard to concurrent calls of the hooked function, so we must create our own lock. I initially thought about using pthread_once, but unfortunately its initialization callback mechanism doesn't allow any custom argument. Again, this is merely a slight annoyance since we can implement our own in a few lines of code:

    /* The "once" API is similar to pthread_once but allows a custom function argument */
    struct wth_once {
        pthread_mutex_t lock;
        int initialized;
    };

    #define WTH_ONCE_INITIALIZER {.lock=PTHREAD_MUTEX_INITIALIZER}

    typedef void (*init_func_type)(void *user_arg);

    void wth_init_once(struct wth_once *once, init_func_type init_func, void *user_arg)
    {
        pthread_mutex_lock(&once->lock);
        if (!once->initialized) {
            init_func(user_arg);
            once->initialized = 1;
        }
        pthread_mutex_unlock(&once->lock);
    }

Which we use like this:

    static struct wth_once once = WTH_ONCE_INITIALIZER;

    static void init_once(void *user_arg)
    {
        Display *display = user_arg;
        /* [...] */
    }

    /* [...] */

    wth_init_once(&once, init_once, display);

The End?

I've been delaying this project for weeks because it felt complex at first glance, but it actually took me just a few hours. Probably the same amount of time it took me to write this article. While the project is admittedly really small, it still feels like a nice accomplishment. I hope it's useful to other people.

Now, Wayland support is probably the most obvious improvement the project can receive, but I don't have such a setup locally to test with yet, so this is postponed for an undetermined amount of time.

The code is released under a permissive license (MIT); if you want to contribute you can open a pull request, but getting in touch with me first is appreciated to avoid unnecessary and overlapping efforts.
In 2015, I wrote an article about how palette color quantization was improved in FFmpeg in order to make nice animated GIF files. For some reason, to this day it remains one of my most popular articles.

As time passed, my experience with colors grew, and I ended up quite ashamed of and frustrated with the state of these filters. A lot of the code was naive (when not terribly wrong), despite the apparently good results.

One of the major changes I wanted to make was to evaluate the color distances using a perceptually uniform colorspace, instead of using a naive Euclidean distance over RGB triplets. As usual it felt like a weekend-long project; after all, all I have to do is change the distance function to work in a different space, right? Well, if you're following my blog you might have noticed I've had numerous adventures stacking up on each other:

- I had to work out the colorspace with integer arithmetic first
- ...which forced me to look into integer division more deeply
- ...which confronted me with all sorts of undefined behaviours in the process

And when I finally reached the point where I could make the switch to OkLab (the perceptual colorspace), a few experiments showed that the flavour of the core algorithm I was using might contain some fundamental flaws, or at least was not implementing optimal heuristics. So here we go again: quickly enough I found myself starting a new research study in the pursuit of understanding how to put pixels on the screen. This write-up is the story of yet another self-inflicted struggle.

Palette quantization

But what is palette quantization? It essentially refers to the process of reducing the number of available colors in an image down to a smaller subset. In sRGB, an image can have up to 16.7 million colors. In practice though it's generally much less, to the surprise of no one. Still, it's not rare to have a few hundred thousand different colors in a single picture. Our goal is to reduce that to something like 256 colors that represent them best, and use these colors to create a new picture.

Why, you may ask? There are multiple reasons; here are some:

- Improved compression (this is a lossy operation of course, and using dithering on top might actually defeat the original purpose)
- Some codecs might not support anything other than limited palettes (GIF or subtitle codecs are examples)
- Various artistic purposes

Following is an example of a picture quantized at different levels:

[Figure: original (26125 colors), quantized to 8bpp (256 colors), quantized to 2bpp (4 colors)]

This color quantization process can be roughly summarized as a 4-step process:

1. Sample the input image: we build a histogram of all the colors in the picture (basically a simple statistical analysis)
2. Design a colormap: we build the palette through various means using the histogram
3. Create a pixel mapping which associates a color (one that can be found in the input image) with another (one that can be found in the newly created palette)
4. Image quantizing: we use the color mapping to build our new image. This step may also involve some dithering.

The study here will focus on step 2 (which itself relies on step 1).

Colormap design algorithms

A palette is simply a set of colors. It can be represented in various ways, for example here in 2D and 3D:

[Figure: a palette represented in 2D and in 3D]

To generate such a palette, all sorts of algorithms exist.
They are usually classified into 2 large categories:

- Dividing/splitting algorithms (such as Median-Cut and its various flavors)
- Clustering algorithms (such as K-means, maximin distance, (E)LBG or pairwise clustering)

The former are faster but non-optimal, while the latter are slower but better. The problem is NP-complete, meaning it's possible to find the optimal solution but it can be extremely costly. On the other hand, it's possible to find "local optimums" at minimal cost.

Since I'm working within FFmpeg, speed has always been a priority. This is what motivated me to initially implement Median-Cut over a more expensive algorithm.

The rough picture of the algorithm is relatively easy to grasp. Assuming we want a palette of K colors:

1. A set S of all the colors in the input picture is constructed, along with a respective set W of the weight of each color (how much it appears)
2. Since the colors are expressed as RGB triplets, they can be encapsulated in one big cuboid, or box
3. The box is cut in two along one of the axes (R, G or B) at the median (hence the name of the algorithm)
4. If we don't have a total of K boxes yet, pick one of them and go back to the previous step
5. All the colors in each of the K boxes are then averaged to form the color palette entries

Here is how the process looks visually:

[Video: Median-Cut algorithm targeting 16 boxes]

You may have spotted in this video that the colors are not expressed in RGB but in Lab: instead of representing the colors in a traditional RGB colorspace, we are using the OkLab colorspace, which has the property of being perceptually uniform. It doesn't really change the Median-Cut algorithm, but it definitely has an impact on the resulting palette.

One striking limitation of this algorithm is that we are working exclusively with cuboids: the cuts are limited to an axis; we are not cutting along an arbitrary plane or a more complex shape. Think of it like working with voxels instead of more free-form geometries. The main benefit is that the algorithm is pretty simple to implement.

Now, the description provided earlier conveniently avoided describing two important aspects happening in steps 3 and 4:

- How do we choose the next box to split?
- How do we choose along which axis of the box we make the cut?

I pondered over that for quite a long time.
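For reference, here is what a single cut looks like with the simplest possible answers to those two questions (longest axis, unweighted median). This is a rough C sketch of mine, not FFmpeg's implementation; it assumes boxes of at least 2 colors, and the weight field is kept around for the weighted heuristics discussed next:

    #include <stdlib.h>

    struct color { float comp[3]; unsigned weight; };
    struct box { struct color *colors; int ncolors; };

    static int sort_axis; /* component index used by the comparator */

    static int cmp_color(const void *a, const void *b)
    {
        const struct color *ca = a, *cb = b;
        return (ca->comp[sort_axis] > cb->comp[sort_axis])
             - (ca->comp[sort_axis] < cb->comp[sort_axis]);
    }

    /* Split `in` along its longest axis at the median color; the two
     * halves returned through out0/out1 alias the input storage. */
    static void split_box(struct box *in, struct box *out0, struct box *out1)
    {
        /* pick the axis with the largest min..max extent */
        int axis = 0;
        float best_len = -1.f;
        for (int k = 0; k < 3; k++) {
            float lo = in->colors[0].comp[k], hi = lo;
            for (int i = 1; i < in->ncolors; i++) {
                const float v = in->colors[i].comp[k];
                if (v < lo) lo = v;
                if (v > hi) hi = v;
            }
            if (hi - lo > best_len) { best_len = hi - lo; axis = k; }
        }

        /* sort along that axis and cut at the median */
        sort_axis = axis;
        qsort(in->colors, in->ncolors, sizeof(*in->colors), cmp_color);
        const int mid = in->ncolors / 2;
        *out0 = (struct box){in->colors, mid};
        *out1 = (struct box){in->colors + mid, in->ncolors - mid};
    }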
With that in mind we can sketch the following diagram representing the tree of heuristic possibilities to implement:

[Figure: tree of heuristic possibilities]

Mathematicians are going to kill me for doodling random notes all over this perfectly understandable symbols gibberish, but I believe this is required for the human beings reading this article. In summary, we end up with a total of 24 combinations to try out:

- 2 axis selection heuristics:
  - cut the axis with the maximum squared error
  - cut the axis with the maximum length
- 3 operators:
  - maximum measurement out of all the channels
  - product of the measurements of all the channels
  - sum of the measurements of all the channels
- 4 measurements:
  - squared error, honoring weights
  - squared error, not honoring weights
  - squared error, honoring weights, normalized
  - length of the axis

If we start to intuitively think about which ones are likely going to perform best, we quickly realize that we haven't actually formalized what we are trying to achieve. Such a rookie mistake. Clarifying this will help us get a better feeling for the likely outcome.

I chose to target an output that minimizes the MSE against the reference image, in a perceptual way. Said differently, we are trying to make the perceptual distance between an input and output color pixel as small as possible. This is an arbitrary and debatable target, but it's relatively simple and objective to evaluate, if we have faith in the selected perceptual model. Another appropriate metric could have been to find the ideal palette through another algorithm and compare against that instead. Doing that unfortunately implied trusting that other algorithm and its implementation, and having enough computing power.

So to summarize, we want to minimize the MSE between the input and output, evaluated in the OkLab colorspace. This can be expressed with the following formula:

    E = Σ_P Σ_{c ∈ C} w_c · ‖c − µ‖²

Where:

- P is a partition (which we constrain to a box in our implementation)
- C the set of colors in the partition P
- w the weight of a color
- c a single color
- µ the average color of the set C

Special thanks to criver for helping me a ton in the math area; this last formula is from them. Looking at the formula, we can see how similar it is to certain branches of the heuristics tree, so we can start getting an intuition about the result of the experiment.

Experiment language

Short deviation from the main topic (feel free to skip to the next section): working in C within FFmpeg quickly became more of a hurdle than anything. Aside from the lack of flexibility, the implicit casts deceitfully destroying precision, and the undefined behaviours, all kinds of C quirks got in the way several times, which made me question my sanity. This one in particular severely messed me up while trying to average the colors:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const int32_t x = -30;
        const uint32_t y = 10;
        const uint32_t a = 30;
        const int32_t b = -10;
        printf("%d×%u=%d\n", x, y, x * y);
        printf("%u×%d=%d\n", a, b, a * b);
        printf("%d/%u=%d\n", x, y, x / y);
        printf("%u/%d=%d\n", a, b, a / b);
        return 0;
    }

    % cc -Wall -Wextra -fsanitize=undefined test.c -o test && ./test
    -30×10=-300
    30×-10=-300
    -30/10=429496726
    30/-10=0

Anyway, I know this is obvious, but if you aren't already doing so, I suggest you build your experiments in another language, Python or whatever, and rewrite them in C later once you've figured out your expected output.

Re-implementing what I needed in Python didn't take me long. It was, and still is, obviously much slower at runtime, but that's fine. There is a lot of room for speed improvement, typically by relying on numpy (which I didn't bother with).
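Before moving to the results, here is what the core measurement looks like in code: a sketch of mine (in C, to match the rest of this post's snippets) of the weighted squared error of one box, computed per channel. The three per-channel values are then combined by one of the operators above (max, product, or sum):

    #include <stddef.h>

    struct color { float comp[3]; unsigned weight; };

    /* Weighted squared error of each channel of a box against the box's
     * weighted average: err[k] = Σ_i w_i·(c_i[k] - µ[k])². Assumes the
     * total weight of the box is non-zero. */
    static void box_channel_errors(const struct color *colors, size_t n, double err[3])
    {
        double sum[3] = {0}, total_weight = 0;
        for (size_t i = 0; i < n; i++) {
            for (int k = 0; k < 3; k++)
                sum[k] += (double)colors[i].weight * colors[i].comp[k];
            total_weight += colors[i].weight;
        }
        const double mu[3] = {sum[0] / total_weight,
                              sum[1] / total_weight,
                              sum[2] / total_weight};
        for (int k = 0; k < 3; k++)
            err[k] = 0;
        for (size_t i = 0; i < n; i++)
            for (int k = 0; k < 3; k++) {
                const double d = colors[i].comp[k] - mu[k];
                err[k] += colors[i].weight * d * d;
            }
    }

Dropping the weights or dividing by the total weight gives the unweighted and normalized variants of the measurement, and the axis length variant simply replaces the error with the max-min extent per channel.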
Experiment results

I created a research repository for the occasion. The code to reproduce the experiments and the results can be found in the color quantization README.

In short, based on the results, we can conclude that:

- Overall, the box with the axis that has the largest non-normalized weighted sum of squared errors is the best candidate in the box selection algorithm
- Overall, cutting the axis with the largest weighted sum of squared errors is the best axis cut selection algorithm
- To my surprise, normalizing the weights per box is not a good idea

I initially observed the last point by trial and error, and it was actually one of the main motivators for this research. I initially thought normalizing each box was necessary in order to compare them against each other (such that they are compared on common ground). My loose explanation of the phenomenon was that not normalizing causes a bias towards boxes with many colors, but that's actually exactly what we want. I believe it can also be explained by our evaluation function: we want to minimize the error across the whole set of colors, so small partitions (in color counts) must not be given extra strength. At least not in the context of the target we chose.

It's also interesting to see how max() seems to perform better than sum() over the variance of each component most of the time. Admittedly my set of picture samples is not that big, which may imply that more experiments are required to confirm that tendency.

In retrospect, this might have been quickly predictable to someone with a mathematical background. But since I don't have that, nor do I trust my abstract thinking much, I'm kind of forced to try things out often. This is likely one of the many instances where I spent way too much energy on something obvious from the beginning, but I hope it will actually provide some useful information for other lost souls out there.

Known limitations

There are two main limitations I want to discuss before closing this article. The first one is related to minimizing the MSE even further.

K-means refinement

We know that Median-Cut only provides a rough estimate of the optimal palette. One thing we could do is use it as a first step before refinement, for example by running a few K-means iterations as post-processing (how much refinement, or how many iterations, could be a user control). The general idea of K-means is to progressively move each color individually to a more appropriate box, that is, a box for which the color's distance to the average color of that box is smaller. I started implementing that in a very naive way, so it's extremely slow, but it's something to investigate further because it definitely improves the results.

Most of the academic literature seems to suggest the use of K-means clustering, but every paper requires some startup step. Some come up with various heuristics, some use PCA, but I've yet to see one that relies on Median-Cut as a first pass; maybe that's not such a good idea, but who knows.
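To make the idea concrete, here is a naive single refinement pass, sketched by me in C (not the FFmpeg patch; k is assumed to be at most 256, and assign must hold the current color-to-entry assignment, for example derived from the Median-Cut boxes):

    #include <stddef.h>

    struct color { float comp[3]; unsigned weight; };

    enum { MAX_K = 256 };

    /* One K-means pass: reassign every color to its closest palette entry
     * (squared Euclidean distance in OkLab), then recompute each entry as
     * the weighted average of its members. Returns the number of
     * reassignments; 0 means the refinement converged. */
    static size_t kmeans_pass(const struct color *colors, size_t n,
                              float palette[][3], size_t k, size_t *assign)
    {
        double sum[MAX_K][3] = {0};
        double wsum[MAX_K] = {0};
        size_t changed = 0;

        for (size_t i = 0; i < n; i++) {
            /* find the closest palette entry */
            size_t best = 0;
            float best_dist = 0;
            for (size_t j = 0; j < k; j++) {
                float d = 0;
                for (int c = 0; c < 3; c++) {
                    const float diff = colors[i].comp[c] - palette[j][c];
                    d += diff * diff;
                }
                if (j == 0 || d < best_dist) { best_dist = d; best = j; }
            }
            if (assign[i] != best) { assign[i] = best; changed++; }

            for (int c = 0; c < 3; c++)
                sum[best][c] += (double)colors[i].weight * colors[i].comp[c];
            wsum[best] += colors[i].weight;
        }

        /* move each palette entry to the weighted average of its members */
        for (size_t j = 0; j < k; j++)
            if (wsum[j] > 0)
                for (int c = 0; c < 3; c++)
                    palette[j][c] = (float)(sum[j][c] / wsum[j]);

        return changed;
    }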
Bias toward perceived lightness

Another, more annoying, problem, for which I have no solution, relates to human perception being much more sensitive to changes in lightness than to changes in hue. If you look at the first demo with the parrot, you may have observed that the boxes are kind of thin. This is because the a and b components (respectively how green/red and how blue/yellow the color is) have a much smaller amplitude compared to L (the perceived lightness).

Here is a side-by-side comparison of the spread of colors between a stretched and a normalized view:

[Figure: spread of colors, stretched versus normalized view]

You may rightfully question whether this is a problem or not. In practice, it means that when K is low (let's say smaller than 8, or even 16), cuts along L will almost always be preferred, causing the picture to be heavily desaturated. This happens because the algorithm tries to preserve the most significant attribute in human perception: the lightness. That particular picture is actually a pathological study case:

[Figure: the same picture quantized to 4, 8, 12 and 16 colors]

We can see the hue timidly appearing around K=16 (specifically, it starts being strongly noticeable from the cut at K=13).

Conclusion

For now, I'm mostly done with this "weekend-long project" into which I actually poured 2 or 3 months of lifetime. The FFmpeg patchset will likely be upstreamed soon, so everyone should hopefully be able to benefit from it in the next release. It will also come with additional dithering methods, whose implementation was actually a relaxing distraction from all this hardship. There are still many ways to improve this work, but it's the end of the line for me, so I'll trust the Internet with it.
Happy with my recent breakthrough in understanding C integer divisions after weeks of struggle, I was minding my own business having fun writing integer arithmetic code. Life was good, when suddenly… zsh: segmentation fault (core dumped).

That code wasn't messing with memory much, so it was more likely a side effect of an arithmetic overflow or something. Using -fsanitize=undefined quickly identified the issue, confirming the presence of an integer overflow. The fix was easy, but something felt off: I was under the impression my code was robust enough against that kind of honest mistake. Turns out the protecting condition I had in place should indeed have been enough, so I tried to extract a minimal reproducible case:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    uint8_t tab[0x1ff + 1];

    uint8_t f(int32_t x)
    {
        if (x < 0)
            return 0;
        int32_t i = x * 0x1ff / 0xffff;
        if (i >= 0 && i < sizeof(tab)) {
            printf("tab[%d] looks safe because %d is between [0;%d[\n", i, i, (int)sizeof(tab));
            return tab[i];
        }
        return 0;
    }

    int main(int ac, char **av)
    {
        return f(atoi(av[1]));
    }

The overflow can happen on x * 0x1ff. Since an integer overflow is undefined, GCC makes the assumption that it cannot happen, ever. In practice in this case it does, but the i >= 0 && i < sizeof(tab) condition should be enough to take care of it, whatever crazy value it becomes, right? Well, I have bad news:

    % cc -Wall -O2 overflow.c -o overflow && ./overflow 50000000
    tab[62183] looks safe because 62183 is between [0;512[
    zsh: segmentation fault (core dumped)  ./overflow 50000000

Note: this is GCC 12.2.0 on x86-64.

We have i=62183 as the result of the overflow, and nevertheless the execution violates the gate condition, spouts a nonsensical lie, goes straight into dereferencing tab, and dies miserably.

Let's study what GCC is doing here. Firing up Ghidra, we observe the following decompiled code:

    uint8_t f(int x)
    {
        int tmp;
        if (-1 < x) {
            tmp = x * 0x1ff;
            if (tmp < 0x1fffe00) {
                printf("tab[%d] looks safe because %d is between [0;%d[\n",
                       (ulong)(uint)tmp / 0xffff, (ulong)(uint)tmp / 0xffff, 0x200);
                return tab[(int)((uint)tmp / 0xffff)];
            }
        }
        return '\0';
    }

When I said GCC makes the assumption that it cannot happen, this is what I meant: tmp is not supposed to overflow, so part of the condition I had in place was simply removed. More specifically, since x cannot be less than 0, and since GCC assumes a multiplication cannot overflow into a random value (which could be negative) because that would be undefined behaviour, it decides to drop the "redundant" i >= 0 condition because "it cannot happen".

I reported that exact issue to GCC to make sure it wasn't a bug, and it was indeed confirmed to me that the undefined behaviour of an integer overflow is not limited in scope to whatever insane value it could take: it is apparently perfectly acceptable to mess up the code flow entirely.

While I understand how attractive this can be from an optimization point of view, the paranoid developer in me is straight-up terrified by the prospect of a single integer overflow removing a security check and causing such havoc. I've worked several years on a project where integer overflows were (and probably still are) legion. Identifying and fixing all of them is likely the lifetime mission of several opinionated individuals.

I'm expecting this article to make the Rust crew go on a crusade again, and I think I might be with them this time.
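As for the fix (the easy part mentioned at the beginning), one robust option is to perform the multiplication in a wider type so the overflow, and therefore the undefined behaviour, cannot occur in the first place. A sketch of that variant, mine rather than necessarily the fix actually applied:

    #include <stdint.h>

    uint8_t tab[0x1ff + 1];

    uint8_t f_fixed(int32_t x)
    {
        if (x < 0)
            return 0;
        /* Widen before multiplying: any 32-bit value times 0x1ff fits in
         * 64 bits, so there is no UB left for the compiler to exploit and
         * the bounds check below keeps its intended meaning. */
        const int64_t i = (int64_t)x * 0x1ff / 0xffff;
        if (i >= 0 && i < (int64_t)sizeof(tab))
            return tab[i];
        return 0;
    }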
Edit: it was made clear to me while reading Predrag's blog that the key to my misunderstanding boils down to this: "Undefined behavior is not the same as implementation-defined behavior". While I was indeed talking about undefined behaviour, subconsciously I was thinking that the behaviour of an overflow on a multiplication would be "implementation-defined". This is not the case: it is indeed undefined behaviour, and yes, the compiler is free to do whatever it wants because that remains compliant with the specification. It's my mistake of course, but in my defense, despite the arrogant comments I read, this confusion happens a lot. I believe it happens because it violates the Principle of Least Astonishment. To illustrate this, I'll point to an interesting old blog post by an OpenBSD developer, which worries about the result of the multiplication rather than about the invalidation of any guarantee regarding the execution flow (before and after it). This is not uncommon and, in my opinion, perfectly understandable.
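To make the distinction concrete, here is a small contrast of mine (not from the referenced posts):

    #include <stdint.h>

    int32_t examples(int32_t n)
    {
        /* Implementation-defined: converting an out-of-range unsigned value
         * to a signed type yields some documented, implementation-specific
         * result (on GCC, modulo wraparound). The program stays predictable. */
        const int32_t a = (int32_t)4000000000u;

        /* Undefined: if n == INT32_MAX, this overflow allows the compiler to
         * assume the case never happens and to restructure the surrounding
         * code accordingly, as seen in the article above. */
        const int32_t b = n + 1;

        return a ^ b;
    }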
More in programming
To be a successful founder, you have to believe that what you're working on is going to work — despite knowing it probably won't! That sounds like an oxymoron, but it's really not. Believing that what you're building is going to work is an essential component of coming to work with the energy, fortitude, and determination it's going to require to even have a shot. Knowing it probably won't is accepting the odds of that shot.

It's simply the reality that most things in business don't work out. At least not in the long run. Most businesses fail. If not right away, then eventually. Yet the world economy is full of entrepreneurs who try anyway. Not because they don't know the odds, but because they've chosen to believe they're special.

The best way to balance these opposing points — the conviction that you'll make it work, the knowledge that it probably won't — is to do all your work in a manner that'll make you proud either way. If it doesn't work, you still made something you wouldn't be ashamed to put your name on. And if it does work, you'll beam with pride from making it on the basis of something solid.

The deep regret from trying and failing only truly hits when you look in the mirror and see Dostoevsky staring back at you with this punch to the gut: "Your worst sin is that you have destroyed and betrayed yourself for nothing." Oof.

Believe it's going to work. Build it in a way that makes you proud to sign it. Base your worth as a human on something greater than a business outcome.
I recently went into a deep dive on “UART” and will publish a much longer article on the topic. This is just a recap of the basics to help put things in context. Many tutorials focus on using UART over USB, which adds many layers of abstraction, hiding what it actually is. Here, I deliberately … Continue reading How to use “real” UART →
You know about Critical Race Theory, right? It says that if there's an imbalance in, say, income between races, it must be due to discrimination. This is what wokism seems to be, and it's moronic and false.

The right wing has invented something equally stupid. Introducing Critical Trade Theory, stolen from this tweet. If there's an imbalance in trade between countries, it must be due to unfair practices. (Not due to the obvious, like one country being 10x richer than the other.)

There's really only one way the trade deficits will go away, and that's if trade goes to zero (or maybe if all these countries become richer than America). Same thing with the race deficits: no amount of "leg up" bullshit will change them.

Why are all the politicians in America anti-growth, anti-reality idiots who want to drive us into the poorhouse? The way this tariff shit is being done is another stupid form of anti-merit benefits to chosen groups of people, with a whole lot of grift to go along with it. Makes me just not want to play.
One of the most memorable quotes in Arthur Miller's Death of a Salesman comes from Uncle Ben, who describes his path to becoming wealthy as, "When I was seventeen, I walked into the jungle, and when I was twenty-one I walked out. And by God I was rich." I wish I could describe the path to learning engineering strategy in similar terms, but by all accounts it's a much slower path. Two decades in, I am still learning more from each project I work on. This book has aimed to accelerate your learning path, but my experience is that there's still a great deal left to learn, despite what this book has hoped to accomplish.

This final chapter is focused on the remaining advice I have to give on how you can continue to improve at strategy long after reading this book's final page. Inescapably, this chapter has become advice on writing your own strategy for improving at strategy. You are already familiar with my general suggestions on creating strategy, so this chapter provides focused advice on creating your own plan to get better at strategy. It covers:

- Exploring strategy creation to find strategies you can learn from via public and private resources, and through creating learning communities
- How to diagnose the strategies you've found, to ensure you learn the right lessons from each one
- Policies that will help you find ways to perform and practice strategy within your organization, whether or not you have organizational authority
- Operational mechanisms to hold yourself accountable to developing a strategy practice
- My final benediction to you as a strategy practitioner who has finished reading this book

With that preamble, let's write this book's final strategy: your personal strategy for developing your strategy practice.

This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Exploring strategy creation

Ideally, we'd start our exploration of how to improve at engineering strategy by reading broadly from the many publicly available examples. Unfortunately, there simply aren't many easily available works to learn from others' experience. Nonetheless, resources do exist, and we'll discuss the three categories that I've found most useful:

- Public resources on engineering strategy, such as companies' engineering blogs
- Private and undocumented strategies available through your professional network
- Learning communities that you build together, including ongoing learning circles

Each of these is explored in its own section below.

Public resources

While there aren't as many public engineering strategy resources as I'd like, I've found that there are still a reasonable number available. This book collects a number of such resources in the appendix of engineering strategy resources. That appendix also includes some individuals' blog posts that are adjacent to this topic. You can go a long way by searching and prompting your way into these resources.

As you read them, it's important to recognize that public strategies are often misleading, as discussed previously in evaluating strategies. Everyone writing in public has an agenda, and that agenda often means that they'll omit important details to make themselves, or their company, come off well. Make sure you read between the lines rather than taking things too literally.
Private resources

Ironically, where public resources are hard to find, I've found it much easier to find privately held strategy resources. While private recollections are still prone to inaccuracies, the incentives to massage the truth are less pronounced. The most useful sources I've found are:

- Peers' stories – strategies are often oral histories, and they are shared freely among peers within and across companies. As you build out your professional network, you can usually get access to any company's engineering strategy on any topic by just asking. There are brief exceptions: even a close peer won't share a sensitive strategy before its existence becomes obvious externally, but they'll be glad to after it does. People tend to over-estimate how much information companies can keep private anyway: even reading recent job postings can usually expose a surprising amount about a company.
- Internal strategy archaeologists – while surprisingly few companies formally collect their strategies into a repository, the stories are informally collected by the tenured members of the organization. These folks are the company's strategy archaeologists, and you can learn a great deal by explicitly consulting them.
- Becoming a strategy archaeologist yourself – whether or not you're a tenured member of your company, you can learn a tremendous amount by starting to build your own strategy repository. As you start collecting them, you'll interest others in contributing their strategies as well. As discussed in Staff Engineer's section on the "write five, then synthesize" approach to strategy, over time you can foster a culture of documentation where one didn't exist before. Even better, building that culture doesn't require any explicit authority, just an ongoing show of excitement.

There are other sources as well, ranging from attending the hallway track at conferences to organizing dinners where stories are shared with a commitment to privacy.

Working in community

My final suggestion for seeing how others work on strategy is to form a learning circle. I formed a learning circle when I first moved into an executive role, and at this point have been running it for more than five years. What's surprised me the most is how much I've learned from it. There are a few reasons why ongoing learning circles are exceptional for sharing strategy:

- Bi-directional discussion allows much more learning and understanding than mono-directional communication like conference talks or documents.
- Groups allow you to learn from others' experiences and others' questions, rather than having to guide the entire learning yourself.
- Continuity allows you to see a strategy at inception, during the rollout, and after it's been in practice for some time.
- Trust is built slowly, and you only get the full details about a problem when you've already successfully held trust about smaller things. An ongoing group makes this sort of sharing feasible where a transient group does not.

Although putting one of these communities together requires commitment, they are the best mechanism I've found. As a final secret, many people get stuck on how they can get invited to an existing learning circle, but that's almost always the wrong question to be asking. If you want to join a learning circle, make one. That's how I got invited to mine.

Diagnosing your prior and current strategy work

Collecting strategies to learn from is a valuable part of learning. You also have to determine what lessons to learn from each strategy.
For example, you have to determine whether Calm's approach to resourcing Engineering-driven projects is something to copy or something to avoid.

What I've found effective is to apply the strategy rubric we developed in the "Is this strategy any good?" chapter to each of the strategies you've collected. Even by splitting a strategy into its various phases, you'll learn a lot. Applying the rubric to each phase will teach you more. Each time you do this to another strategy, you'll get a bit faster at applying the rubric, and you'll start to see interesting, recurring patterns.

As you dig into a strategy that you've split into phases and applied the evaluation rubric to, here are a handful of questions that I've found interesting to ask myself:

- How long did it take to determine that a strategy's initial phase could be improved? How high was the cost to fund that initial phase's discovery?
- Why did the strategy reach its final stage and get repealed or replaced? How long did it take to get there?
- If you had to pick only one, did this strategy fail in its approach to exploration, diagnosis, policy or operations?
- To what extent did the strategy outlive the tenure of its primary author? Did it get repealed quickly after their departure, did it endure, or was it perhaps replaced during their tenure?
- Would you generally repeat this strategy, or would you strive to avoid repeating it? If you did repeat it, what conditions seem necessary to make it a success?
- How might you apply this strategy to your current opportunities and challenges?

It's not necessary to work through all of these questions for every strategy you're learning from. I often try to pick the two that I think might be most interesting for a given strategy.

Policy for improving at strategy

At a high level, there are just a few key policies to consider for improving your strategic abilities. The first is implementing strategy, and the second is practicing implementing strategy. While those are indeed the starting points, there are a few more detailed options worth considering:

- If your company has existing strategies that are not working, debug one and work to fix it. If you lack the authority to work at the company scope, then decrease altitude until you find an altitude you can work at. Perhaps setting Engineering organizational strategies is beyond your circumstances, but strategy for your team is entirely accessible.
- If your company has no documented strategies, document one to make it debuggable. Again, if operating at a high altitude isn't attainable for some reason, operate at a lower altitude that is within reach.
- If your company's or team's strategies are effective but have low adoption, see if you can iterate on operational mechanisms to increase adoption. Many such mechanisms require no authority at all, such as low-noise nudges or the model-document-share approach.
- If existing strategies are effective and have high adoption, see if you can build excitement for a new strategy. Start by mining for which problems Staff-plus engineers and senior managers believe are important. Once you find one, you have a valuable strategy vein to start mining.
- If you don't feel comfortable sharing your work internally, then try writing proposals while only sharing them with a few trusted peers. You can even go further and only share proposals with trusted external peers, perhaps within a learning circle that you create or join.

Trying all of these at once would be overwhelming, so I recommend picking one in any given phase.
If you aren’t able to make traction, then try another until something works. It’s particularly important to recognize in your diagnosis where things are not working–perhaps you simply don’t have the sponsorship you need to enforce strategy so you need to switch towards suggesting strategies instead–and you’ll find something that works. What if you’re not allowed to do strategy? If you’re looking to find one, you’ll always unearth a reason why it’s not possible to do strategy in your current environment. If you’ve convinced yourself that there’s simply no policy that would allow you to do strategy in your current role, then the two most useful levers I’ve found are: Lower your altitude – there’s always a scale where you can perform strategy, even if it’s just your team or even just yourself. Only you can forbid yourself from developing personal strategies. Practice rather than perform – organizations can only absorb so much strategy development at a given time, so sometimes they won’t be open to you doing more strategy. In that case, you should focus on practicing strategy work rather than directly performing it. Only you can stop yourself from practice. Don’t believe the hype: you can always do strategy work. Operating your strategy improvement policies As the refrain goes, even the best policies don’t accomplish much if they aren’t paired with operational mechanisms to ensure the policies actually happen, and debug why they aren’t happening. Although it’s tempting to ignore operations when it comes to our personal habits, I think that would be a mistake: our personal habits have the most significant long-term impact on ourselves, and are the easiest habits to ignore since others generally won’t ask about them. The mechanisms I’d recommend: Explicitly track the strategies that you’ve implemented, refined, documented, or read. This should be in a document, spreadsheet or folder where you can explicitly see if you have or haven’t done the work. Review your tracked strategies every quarter: are you working on the expected number and in the expected way? If not, why not? Ideally, your review should be done in community with a peer or a learning circle. It’s too easy to deceive yourself, it’s much harder to trick someone else. If your periodic review ever discovers that you’re simply not doing the work you expected, sit down for an hour with someone that you trust–ideally someone equally or more experienced than you–and debug what’s going wrong. Commit to doing this before your next periodic review. Tracking your personal habits can feel a bit odd, but it’s something I highly recommend. I’ve been setting and tracking personal goals for some time now—for example, in my 2024 year in review—and have benefited greatly from it. Too busy for strategy Many companies convince themselves that they’re too much in a rush to make good decisions. I’ve certainly gotten stuck in this view at times myself, although at this point in my career I find it increasingly difficult to not recognize that I have a number of tools to create time for strategy, and an obligation to do strategy rather than inflict poor decisions on the organizations I work in. Here’s my advice for creating time: If you’re not tracking how often you’re creating strategies, then start there. If you’ve not worked on a single strategy in the past six months, then start with one. If implementing a strategy has been prohibitively time consuming, then focus on practicing a strategy instead. 
If you do try all those things and still aren't making progress, then accept your reality: you don't view doing strategy as particularly important. Spend some time thinking about why that is, and if you're comfortable with your answer, then maybe this is a practice you should come back to later.

Final words

At this point, you've read everything I have to offer on drafting engineering strategy. I hope this has refined your view on what strategy can be in your organization, and has given you the tools to draft a more thoughtful future for your corner of the software engineering industry. What I'd never ask is for you to wholly agree with my ideas here. They are my best thinking on this topic, but strategy is a topic where I'm certain Hegel's world view is the correct one: even the best ideas here are wrong in interesting ways, and will be surpassed by better ones.
From 1995 to 2019, I ran my own mail server. It began with a UUCP link, an expensive long-distance call for me then. Later, I ran a mail server in my apartment, then ran it as a VPS at various places. But running an email server got difficult. You can’t just run it on a … Continue reading Announcing the NNCPNET Email Network →