More from Krzysztof Kowalczyk blog
How AI beat me at code optimization game

When I started writing this article I did not expect AI to beat me at optimizing JavaScript code. But it did.

I'm really passionate about optimizing JavaScript. Some say it's a mental illness but I like my code to go balls to the wall fast. I feel the need. The need for speed.

Optimizing code often requires tedious refactoring. Can we delegate the tedious parts to AI? Can I just have ideas and get AI to be my programming slave? Let's find out.

Optimizing Unicode range lookup with AI

In my experiment I used Cursor with the Claude 3.5 Sonnet model. I assume it could be done with other tools / models.

I was browsing pdf.js code and saw this function:

const UnicodeRanges = [
  [0x0000, 0x007f], // 0 - Basic Latin
  ... omitted
  [0x0250, 0x02af, 0x1d00, 0x1d7f, 0x1d80, 0x1dbf], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  ... omitted
];

function getUnicodeRangeFor(value, lastPosition = -1) {
  // TODO: create a map range => position, sort the ranges and cache it.
  // Then we can make a binary search for finding a range for a given unicode.
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    for (let i = 0, ii = range.length; i < ii; i += 2) {
      if (value >= range[i] && value <= range[i + 1]) {
        return lastPosition;
      }
    }
  }
  for (let i = 0, ii = UnicodeRanges.length; i < ii; i++) {
    const range = UnicodeRanges[i];
    for (let j = 0, jj = range.length; j < jj; j += 2) {
      if (value >= range[j] && value <= range[j + 1]) {
        return i;
      }
    }
  }
  return -1;
}

This is a simple function: given a Unicode character it returns its Unicode range (a number 0 ... UnicodeRanges.length - 1). The number is cleverly encoded implicitly as the array index.

The code already has a TODO about a potential optimization: sort UnicodeRanges and do a binary search. There's a minor difficulty: some Unicode ranges have multiple intervals, so we have to change UnicodeRanges to explicitly include the range number as a 3rd element:

const UnicodeRanges = [
  [0x0000, 0x007f, 0], // 0 - Basic Latin
  [0x0250, 0x02af, 4], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  [0x1d00, 0x1d7f, 4], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  ... omitted
];

Then sort it and change the implementation of getUnicodeRangeFor().

Re-writing UnicodeRanges is simple but tedious and I don't feel like doing it. Can AI make such a change for us?

Changing representation

I selected the whole of UnicodeRanges, added it to chat with Ctrl + Shift + L and commanded:

Change representation of UnicodeRanges array where each entry is 3 elements: [start, end, rangeNo]; preserve comments; if rangeNo has multiple ranges, split into separate entries

Cursor responded with:

const UnicodeRanges = [
  [0x0000, 0x007f, 0], // 0 - Basic Latin
  [0x0080, 0x00ff, 1], // 1 - Latin-1 Supplement
  ... omitted
];

There was an issue: it only did part of the array. So I commanded:

don't be lazy, finish the whole UnicodeRanges array

It did, but the result was too long so I had to click the Continue button. It worked, the generated code seemed correct, so I applied it.

For binary search the array must be sorted, so I commanded:

Sort the array by first element

It did, the result looked correct, so I accepted the change.

Hmm, I can get used to having a programming slave.
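For comparison, the non-AI route would have been a small throwaway script. A minimal sketch of what that conversion could look like (variable names are mine, not from the post):

// One-off conversion: split multi-interval entries into
// [start, end, rangeNo] triples and sort by range start.
const converted = [];
UnicodeRanges.forEach((range, rangeNo) => {
  for (let i = 0; i < range.length; i += 2) {
    converted.push([range[i], range[i + 1], rangeNo]);
  }
});
converted.sort((a, b) => a[0] - b[0]);
console.log(JSON.stringify(converted)); // paste the output back into the source

Note that unlike the AI version this drops the per-range comments, which is part of what makes the manual route tedious.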
Updating the code to implement binary search

I copied getUnicodeRangeFor to chat and commanded:

change the implementation to use binary search over UnicodeRanges array

It gave me:

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    if (value >= range[0] && value <= range[1]) {
      return lastPosition;
    }
  }

  // Binary search
  let left = 0;
  let right = UnicodeRanges.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const range = UnicodeRanges[mid];
    if (value < range[0]) {
      right = mid - 1;
    } else if (value > range[1]) {
      left = mid + 1;
    } else {
      return range[2]; // Return the range index
    }
  }

  return -1; // Not found
}

It looks correct so I accepted it. We achieved the goal: we replaced linear search with binary search using cheap AI labor. But I want more. More speed.

Changing array of arrays to flat array

A more efficient representation of the ranges would be a flat array:

const UnicodeRanges = [
  0x0000, 0x007f, 0, // 0 - Basic Latin
  0x0080, 0x00ff, 1, // 1 - Latin-1 Supplement
];

Let's see if our AI slave can do it:

change the whole UnicodeRanges array to flat array, preserve comments, don't be lazy and do all of it

I pre-emptively commanded the AI to convert the whole array. It did it.

Now it's time to update getUnicodeRangeFor(). It's a simple change I could do myself: we just multiply the index by 3 to get the range definition. But I kind of like commanding the AI so I did it again:

update getUnicodeRangeFor for new flat array representation of UnicodeRanges

It gave me:

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }

  // Binary search
  let left = 0;
  let right = UnicodeRanges.length / 3 - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const index = mid * 3;
    const rangeStart = UnicodeRanges[index];
    const rangeEnd = UnicodeRanges[index + 1];
    if (value < rangeStart) {
      right = mid - 1;
    } else if (value > rangeEnd) {
      left = mid + 1;
    } else {
      return UnicodeRanges[index + 2]; // Return the range index
    }
  }

  return -1; // Not found
}

It does seem correct, but let's do one more tweak:

move binary search code to its own function

And it worked:

function binarySearchUnicodeRange(value) {
  let left = 0;
  let right = UnicodeRanges.length / 3 - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const index = mid * 3;
    const rangeStart = UnicodeRanges[index];
    const rangeEnd = UnicodeRanges[index + 1];
    if (value < rangeStart) {
      right = mid - 1;
    } else if (value > rangeEnd) {
      left = mid + 1;
    } else {
      return UnicodeRanges[index + 2]; // Return the range index
    }
  }
  return -1; // Not found
}

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }
  // Perform binary search
  return binarySearchUnicodeRange(value);
}

Inlining some of the binary search

It's good but we haven't yet reached the levels of insanity I've hinted at. UnicodeRanges is a fixed array. We could "unroll" the binary search into code. It would be very painful to do manually. I would have to write a custom code generator to ensure it's correct, but maybe AI can do it for us?
unroll first 3 levels of binarySearchUnicodeRange

To my surprise, it worked. AI understood what I wanted and gave me this:

function getUnicodeRangeFor(value, lastPosition = -1) {
  ... omitted
  const length = UnicodeRanges.length / 3;
  let mid = Math.floor(length / 2) * 3;
  if (value < UnicodeRanges[mid]) {
    mid = Math.floor(mid / 2);
    if (value < UnicodeRanges[mid]) {
      mid = Math.floor(mid / 2);
    } else if (value > UnicodeRanges[mid + 1]) {
      mid = mid + Math.floor(mid / 2);
    } else {
      return UnicodeRanges[mid + 2];
    }
  } else if (value > UnicodeRanges[mid + 1]) {
    mid = mid + Math.floor(mid / 2);
    if (value < UnicodeRanges[mid]) {
      mid = mid - Math.floor(mid / 4);
    } else if (value > UnicodeRanges[mid + 1]) {
      mid = mid + Math.floor(mid / 4);
    } else {
      return UnicodeRanges[mid + 2];
    }
  } else {
    return UnicodeRanges[mid + 2];
  }
  ... omitted
}

Can we do even better?

unroll the values of UnicodeRanges[mid] etc. and replace with constants

Holy cow, it worked:

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }

  // Unrolled binary search with constants
  if (value < 0x4E00) { // CJK Unified Ideographs start
    if (value < 0x0530) { // Armenian start
      if (value < 0x0250) { // IPA Extensions start
        if (value < 0x0100) { // Latin Extended-A start
          if (value < 0x0080) { // Latin-1 Supplement start
            return 0; // Basic Latin
          }
          return 1; // Latin-1 Supplement
        }
        if (value < 0x0180) {
          return 2; // Latin Extended-A
        }
        return 3; // Latin Extended-B
      }
      if (value < 0x0300) { // Combining Diacritical Marks start
        if (value < 0x02B0) { // Spacing Modifier Letters start
          return 4; // IPA Extensions
        }
        return 5; // Spacing Modifier Letters
      }
      if (value < 0x0370) { // Greek and Coptic start
        return 6; // Combining Diacritical Marks
      }
      if (value < 0x0400) { // Cyrillic start
        return 7; // Greek and Coptic
      }
      return 9; // Cyrillic
    }
    // Continue with binary search for remaining ranges
    let left = 10; // Index of Armenian in UnicodeRanges
    let right = 102; // Index of CJK Unified Ideographs in UnicodeRanges
    while (left <= right) {
      const mid = Math.floor((left + right) / 2) * 3;
      if (value < UnicodeRanges[mid]) {
        right = Math.floor(mid / 3) - 1;
      } else if (value > UnicodeRanges[mid + 1]) {
        left = Math.floor(mid / 3) + 1;
      } else {
        return UnicodeRanges[mid + 2];
      }
    }
  } else {
    // Binary search for ranges after CJK Unified Ideographs
    let left = 103; // Index after CJK Unified Ideographs in UnicodeRanges
    let right = UnicodeRanges.length / 3 - 1;
    while (left <= right) {
      const mid = Math.floor((left + right) / 2) * 3;
      if (value < UnicodeRanges[mid]) {
        right = Math.floor(mid / 3) - 1;
      } else if (value > UnicodeRanges[mid + 1]) {
        left = Math.floor(mid / 3) + 1;
      } else {
        return UnicodeRanges[mid + 2];
      }
    }
  }

  return -1; // Not found
}

Is it correct? I have no idea. It would be easy to write a test that uses UnicodeRanges as the source of truth and tests getUnicodeRangeFor() for every possible value.

Can I just ask AI to do it faster?

So far I micro-managed the AI coder. I was telling it exactly what kind of change I wanted and commanded the AI until it did what I wanted.

A good boss delegates. A good boss tells the worker the high level goal and leaves the implementation details to the worker. Let's start from scratch and command AI thusly:

could you make a faster implementation of getUnicodeRangeFor()?
You can change the representation of UnicodeRanges to achieve that goal

Holy cow again. The AI figured out the binary search:

const UnicodeRanges = [
  { start: 0x0000, end: 0x007f, index: 0 }, // Basic Latin
  { start: 0x0080, end: 0x00ff, index: 1 }, // Latin-1 Supplement
  { start: 0x0100, end: 0x017f, index: 2 }, // Latin Extended-A
  // ... (other ranges)
  { start: 0x1f000, end: 0x1f09f, index: 122 }, // Domino Tiles - Mahjong Tiles
].sort((a, b) => a.start - b.start);

function getUnicodeRangeFor(value, lastPosition = -1) {
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    if (value >= range.start && value <= range.end) {
      return range.index;
    }
  }

  let left = 0;
  let right = UnicodeRanges.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const range = UnicodeRanges[mid];
    if (value >= range.start && value <= range.end) {
      return range.index;
    }
    if (value < range.start) {
      right = mid - 1;
    } else {
      left = mid + 1;
    }
  }

  return -1;
}

The object {start, end, index} is less efficient than my flat array representation and it didn't unroll the binary search, but the core idea of binary search is there.

But just for kicks, maybe there's more:

can you make it even more efficient?

It did something slightly different, but really the same. I'm afraid no bonus this quarter.

Ok, one last try:

but could you go all out and use every possible implementation trick you know to optimize getUnicodeRangeFor and UnicodeRanges representation?

Fuck me, I have no idea if the generated code is correct, but it does seem it would be faster than the non-unrolled binary search I came up with earlier:

// Preprocess UnicodeRanges into a more efficient structure
const BUCKET_SIZE = 256;
const MAX_UNICODE = 0x10FFFF;
const buckets = new Uint16Array(Math.ceil((MAX_UNICODE + 1) / BUCKET_SIZE));
const ranges = [];

UnicodeRanges.forEach((range, index) => {
  for (let i = 0; i < range.length; i += 2) {
    const start = range[i];
    const end = range[i + 1];
    ranges.push({ start, end, index });
    const startBucket = start >>> 8;
    const endBucket = end >>> 8;
    for (let j = startBucket; j <= endBucket; j++) {
      buckets[j] |= (1 << (index & 15));
    }
  }
});

ranges.sort((a, b) => a.start - b.start);

// Create a compact representation of ranges
const compactRanges = new Uint32Array(ranges.length * 3);
ranges.forEach((range, i) => {
  compactRanges[i * 3] = range.start;
  compactRanges[i * 3 + 1] = range.end;
  compactRanges[i * 3 + 2] = range.index;
});

function getUnicodeRangeFor(value, lastPosition = -1) {
  if (value > MAX_UNICODE) return -1;

  // Check last position first
  if (lastPosition !== -1) {
    const start = compactRanges[lastPosition * 3];
    const end = compactRanges[lastPosition * 3 + 1];
    if (value >= start && value <= end) {
      return compactRanges[lastPosition * 3 + 2];
    }
  }

  // Quick check using buckets
  const bucket = buckets[value >>> 8];
  if (bucket === 0) return -1;

  // Binary search within the relevant ranges
  let left = 0;
  let right = compactRanges.length / 3 - 1;
  while (left <= right) {
    const mid = (left + right) >>> 1;
    const start = compactRanges[mid * 3];
    const end = compactRanges[mid * 3 + 1];
    if (value < start) {
      right = mid - 1;
    } else if (value > end) {
      left = mid + 1;
    } else {
      return compactRanges[mid * 3 + 2];
    }
  }

  return -1;
}

In retrospect, using a Uint32Array for the UnicodeRanges array was a no-brainer. It's more compact in memory and the values will probably be compared as integers, not floats, so it should be faster to scan than an array of floats.

The buckets are not necessary.
They seem to be for a quick "doesn't exist" check, which is not important. I believe most lookups are for values that exist.

I'm humbled that just asking for super duper optimization made AI produce something I didn't think of.

More optimization ideas

I can't help myself. These are ideas I didn't ask AI to implement.

UnicodeRanges is small. A linear search of a compact Uint32Array representation, where we just have (start, end) values for each range, could be faster than binary search due to cache lines. We could start the search in the middle of the array and scan half the data going forward or backwards.

We could also store ranges smaller than 0x10000 in a Uint16Array and larger ones in a Uint32Array, and do a linear search starting in the middle.

Since the range numbers are smaller than 256, we could encode the first 0xffff values in 64kB as a Uint8Array and the rest as a Uint32Array. That would probably be the fastest on average, because I believe most lookups are for Unicode chars smaller than 0xffff.

Finally, we could calculate the frequency of each range in a representative sample of PDF documents, check the ranges in order of that frequency, fully unrolled into code, without any tables.

Conclusions

AI is a promising way to do tedious code refactoring. If I didn't have the AI, I would have to write a program to e.g. convert UnicodeRanges to a flat representation. It's simple and therefore doable, but it certainly would take longer than the few minutes it took me to command the AI. The final unrolling of getUnicodeRangeFor() would probably never happen. It would require writing a sophisticated code generator, which would be a big project by itself.

AI can generate buggy code so it needs to be carefully reviewed. The unrolled binary search could not be verified by review; it would need a test. But hey, I could command my AI sidekick to write the test for me.

There was this idea of organizing programming teams into a master programmer and coding grunts. The job of the master programmer, the thinking went, was to generate high level ideas and have the coding grunts implement them. Turns out we can't organize people that way, but now we can use AI to be our coding grunt.

Prompt engineering is a thing. I wasted a bunch of time doing incremental improvements. I should have started by asking for super-duper optimization.

The productivity gain is real. The whole thing took me about an hour. For this particular task that's easily 2x compared to not using cheap AI labor. Imagine you're running a software business and instead of spending 2 months on a task, you only spend 1 month.

I'll be using more AI for coding in the future.
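As a coda: the exhaustive test mentioned above is easy to sketch. Something along these lines would do it, assuming the original nested-array table is kept around under a name like UnicodeRangesOriginal (my illustration, not code from the post):

// Exhaustive test: the original nested-array table is the source of
// truth; getUnicodeRangeFor() must agree for every code point.
function referenceRangeFor(value) {
  for (let i = 0; i < UnicodeRangesOriginal.length; i++) {
    const range = UnicodeRangesOriginal[i];
    for (let j = 0; j < range.length; j += 2) {
      if (value >= range[j] && value <= range[j + 1]) {
        return i;
      }
    }
  }
  return -1;
}

for (let value = 0; value <= 0x10FFFF; value++) {
  const expected = referenceRangeFor(value);
  const got = getUnicodeRangeFor(value);
  if (got !== expected) {
    throw new Error(`mismatch at 0x${value.toString(16)}: got ${got}, expected ${expected}`);
  }
}
console.log("getUnicodeRangeFor matches the reference for all code points");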
Porting a medium-sized Vue application to Svelte 5

The short version: porting from Vue to Svelte is pretty straightforward and Svelte 5 is a nice upgrade to Svelte 4.

Why port?

I'm working on Edna, a note taking application for developers. It started as a fork of Heynote. I've added a bunch of features, most notably managing multiple notes.

Heynote is written in Vue. Vue is similar enough to Svelte that I was able to add features without really knowing Vue, but Svelte is what I use for all my other projects. At some point I had invested enough effort (over 350 commits) into Edna that I decided to port it from Vue to Svelte. That way I can write future code faster (I know Svelte much better than Vue) and re-use code from my other Svelte projects.

Since Svelte 5 is about to be released, I decided to try it out.

There were 10 .vue components. It took me about 3 days to port everything.

Adding Svelte 5 to the build pipeline

I started by adding Svelte 5 and converting the simplest component. In the above commit I:

- installed Svelte 5 and its Vite plugin by adding them to package.json
- updated tailwind.config.cjs to also scan .svelte files
- added the Svelte plugin to vite.config.js to run the Svelte compiler on .svelte and .svelte.js files during build
- deleted Help.vue, which is not related to porting, I just wasn't using it anymore
- started converting the smallest component, AskFSPermissions.vue, as AskFSPermissions.svelte

In the next commit I:

- finished porting AskFSPermissions.vue
- tweaked tsconfig.json so that VSCode type-checks .svelte files
- replaced AskFSPermissions.vue with the Svelte 5 version

Here replacing was easy because the component was a stand-alone component. All I had to do was replace Vue's:

app = createApp(AskFSPermissions);
app.mount("#app");

with Svelte 5:

const args = {
  target: document.getElementById("app"),
};
appSvelte = mount(AskFSPermissions, args);

Overall porting strategy

The next part was harder. Edna's structure is: App.vue is the main component which shows / hides other components depending on state and desired operations.

My preferred way of porting would be to start with leaf components and port them to Svelte one by one. However, I haven't found an easy way of using .svelte components from .vue components. It's possible: a Svelte 5 component can be imported and mounted into an arbitrary html element and I could pass props down to it. If the project was bigger (say weeks of porting) I would have tried to make that work so that I'd have a working app at all times.

Given that I estimated I could port it quickly, I went with a different strategy: I created a mostly empty App.svelte and started porting components, starting with the simplest leaf components. I didn't have a working app, but I could see and test the components I had ported so far.

This strategy had its challenges. Namely: most of the state was not there, so I had to fake it for a while. For example, the first component I ported was TopNav.vue, which displays the name of the current note in the upper part of the screen. The problem was: I hadn't ported the logic to load the file yet. For a while I had to fake the state, i.e. I created a noteName variable with a dummy value. With each ported component I would port the parts of App.vue needed by the component.

Replacing third-party components

Most of the code in Edna is written by me (or comes from the original Heynote project) and doesn't use third-party Vue libraries. There are 2 exceptions: I wanted to show notification messages and have a context menu.
Showing notification messages isn't hard: for another project I wrote a Svelte component for that in a few hours. But since I didn't know Vue well, it would have taken me much longer, possibly days. For that reason I had opted to use a third-party toast notifications Vue library.

The same goes for the menu component. Even more so: implementing a menu component is complicated. At least a few days of effort.

When porting to Svelte I replaced the third-party vue-toastification library with my own code. At under 100 loc it was trivial to write. For the context menu I re-used the context menu I wrote for my notepad2 project. It's a more complicated component so it took longer to port.

Vue => Svelte 5 porting

Vue and Svelte have a very similar structure so porting is straightforward and mostly mechanical. The big picture (a minimal before/after example follows at the end of this post):

- <template> becomes the Svelte template. Remove <template> and replace Vue control flow directives with their Svelte equivalents. For example <div v-if="foo"> becomes {#if foo}<div>{/if}
- setup() can be done either at top-level, when the component is imported, or in $effect( () => { ... } ) when the component is mounted
- data() becomes variables. Some of them are regular JavaScript variables and some of them become reactive $state()
- props becomes $props()
- mounted() becomes $effect( () => { ... } )
- methods become regular JavaScript functions
- computed() becomes $derived.by( () => { ... } )
- ref() becomes $state()
- $emit('foo') becomes an onfoo callback prop. It could also be an event, but Svelte 5 recommends callback props over events
- @click becomes onclick
- v-model="foo" becomes bind:value={foo}
- {{ foo }} in an HTML template becomes { foo }
- ref="foo" becomes bind:this={foo}
- :disabled="!isEnabled" becomes disabled={!isEnabled}
- CSS was scoped so it didn't need any changes

Svelte 5

At the time of this writing Svelte 5 is a release candidate and the creators tell you not to use it in production. Guess what, I'm using it in production. It works and it's stable. I think the Svelte 5 devs operate from a mindset of "abundance of caution". All software has bugs, including Svelte 4. If Svelte 5 doesn't work, you'll know it.

Coming from Svelte 4, Svelte 5 is a nice upgrade. One small project is too early to have deep thoughts, but I like it so far. It's easy to learn the new ways of doing things. It's easy to convert Svelte 4 to Svelte 5, even without any tools. Things are even more compact and more convenient than in Svelte 4. {#snippet} adds functionality that I was missing in Svelte 4.
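To make the mapping above concrete, here is my own minimal before/after sketch of the same toy component (not a component from Edna; names are illustrative):

<!-- Counter.vue (Vue, Options API) -->
<script>
export default {
  props: ["name"],
  data() {
    return { count: 0 };
  },
  computed: {
    label() {
      return `${this.name}: ${this.count}`;
    },
  },
  methods: {
    increment() {
      this.count++;
    },
  },
};
</script>

<template>
  <button :disabled="!name" @click="increment">{{ label }}</button>
</template>

<!-- Counter.svelte (Svelte 5) -->
<script>
  let { name } = $props(); // props become $props()
  let count = $state(0); // data() fields become $state()
  let label = $derived(`${name}: ${count}`); // computed() becomes $derived (or $derived.by)
  function increment() { // methods become plain functions
    count++;
  }
</script>

<button disabled={!name} onclick={increment}>{label}</button>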
How to dynamically change font size in a Windows dialog

Windows' win32 API is old and crufty. Many things that are trivial to do in HTML are difficult in win32. One of those things is changing the size of the font used by your native desktop app.

I encountered this in SumatraPDF. A user asked for a way to increase the font size. I introduced a UIFontSize option, but implementing it was difficult and time consuming. One of the issues was changing the font size used in dialogs. This article describes how I did it. The method is based on https://stackoverflow.com/questions/14370238/can-i-dynamically-change-the-font-size-of-a-dialog-window-created-with-c-in-vi

How dialogs work

SumatraPDF defines a bunch of dialogs in SumatraPDF.rc. Here's a find dialog:

IDD_DIALOG_FIND DIALOGEX 0, 0, 247, 52
STYLE DS_SETFONT | DS_MODALFRAME | DS_FIXEDSYS | WS_POPUP | WS_CAPTION | WS_SYSMENU
CAPTION "Find"
FONT 8, "MS Shell Dlg", 400, 0, 0x1
BEGIN
    LTEXT         "&Find what:",IDC_STATIC,6,8,60,9
    EDITTEXT      IDC_FIND_EDIT,66,6,120,13,ES_AUTOHSCROLL
    CONTROL       "&Match case",IDC_MATCH_CASE,"Button",BS_AUTOCHECKBOX | WS_TABSTOP,6,24,180,9
    LTEXT         "Hint: Use the F3 key for finding again",IDC_FIND_NEXT_HINT,6,37,180,9,WS_DISABLED
    DEFPUSHBUTTON "Find",IDOK,191,6,50,14
    PUSHBUTTON    "Cancel",IDCANCEL,191,24,50,14
END

The .rc file is compiled by the resource compiler rc.exe and embedded in the resources section of a PE .exe file. The compiled version is a binary blob with a stable format. At runtime we can get that binary blob from the resources and pass it to the DialogBoxIndirectParam() function to create a dialog.

How to change the font size of a dialog at runtime

DIALOGEX tells us it's an extended dialog, which has a different binary layout than a non-extended DIALOG. As you can see, part of the dialog definition is a font definition:

FONT 8, "MS Shell Dlg", 400, 0, 0x1

To provide a FONT you also need to specify the DS_SETFONT or DS_FIXEDSYS flag. We're asking for the MS Shell Dlg font with a size of 8 points (12 pixels). 400 specifies standard weight (800 would be a bold font).

Unfortunately the binary blob is generated at compile time and we want to change the font size when the application runs. The simplest way to achieve that is to patch the binary blob in memory.

The code for changing dialog font size at runtime

You can find the full code at https://github.com/sumatrapdfreader/sumatrapdf/blob/b6aed9e7d257510ff82fee915506ce2e75481c64/src/SumatraDialogs.cpp#L20

It uses a small amount of SumatraPDF base code, so you'll need to lightly massage it to use it in your own code.

The layout of the binary blob is documented at http://msdn.microsoft.com/en-us/library/ms645398(v=VS.85).aspx

In C++ this is represented by the following struct:

#pragma pack(push, 1)
struct DLGTEMPLATEEX {
    WORD dlgVer;    // 0x0001
    WORD signature; // 0xFFFF
    DWORD helpID;
    DWORD exStyle;
    DWORD style;
    WORD cDlgItems;
    short x, y, cx, cy;
    /*
    sz_Or_Ord menu;
    sz_Or_Ord windowClass;
    WCHAR title[titleLen];
    WORD fontPointSize;
    WORD fontWeight;
    BYTE fontIsItalic;
    BYTE fontCharset;
    WCHAR typeface[stringLen];
    */
};
#pragma pack(pop)

#pragma pack(push, 1) tells the C++ compiler not to add padding between struct members. The part after x, y, cx, cy is commented out because sz_Or_Ord and WCHAR [] fields are variable length, which can't be represented in a C++ struct.

fontPointSize is the value we need to patch. But first we need to get a copy of the binary blob:
DLGTEMPLATE* DupTemplate(int dlgId) {
    HRSRC dialogRC = FindResourceW(nullptr, MAKEINTRESOURCE(dlgId), RT_DIALOG);
    CrashIf(!dialogRC);
    HGLOBAL dlgTemplate = LoadResource(nullptr, dialogRC);
    CrashIf(!dlgTemplate);
    void* orig = LockResource(dlgTemplate);
    size_t size = SizeofResource(nullptr, dialogRC);
    CrashIf(size == 0);
    DLGTEMPLATE* ret = (DLGTEMPLATE*)memdup(orig, size);
    UnlockResource(orig);
    return ret;
}

dlgId is from the .rc file (e.g. IDD_DIALOG_FIND for our find dialog). Most of it is win32 APIs; memdup() makes a copy of a memory block.

Here's the code to patch the font size:

static void SetDlgTemplateExFont(DLGTEMPLATE* tmp, int fontSize) {
    CrashIf(!IsDlgTemplateEx(tmp));
    DLGTEMPLATEEX* tpl = (DLGTEMPLATEEX*)tmp;
    CrashIf(!HasDlgTemplateExFont(tpl));
    u8* d = (u8*)tpl;
    d += sizeof(DLGTEMPLATEEX);
    // sz_Or_Ord menu
    d = SkipSzOrOrd(d);
    // sz_Or_Ord windowClass
    d = SkipSzOrOrd(d);
    // WCHAR[] title
    d = SkipSz(d);
    // WORD fontPointSize
    WORD* wd = (WORD*)d;
    fontSize = ToFontPointSize(fontSize);
    *wd = fontSize;
}

We start at the end of the fixed-size portion of the blob (d += sizeof(DLGTEMPLATEEX)). We then skip the variable-length fields menu, windowClass and title, and patch the font size in points.

SumatraPDF code operates in pixels, so it has to convert pixels to Windows points:

static int ToFontPointSize(int fontSize) {
    int res = (fontSize * 72) / 96;
    return res;
}

For example, a 16 pixel font becomes (16 * 72) / 96 = 12 points.

Here's how we skip past sz_Or_Ord fields:

/*
Type: sz_Or_Ord

A variable-length array of 16-bit elements that identifies a menu resource for the
dialog box. If the first element of this array is 0x0000, the dialog box has no menu
and the array has no other elements. If the first element is 0xFFFF, the array has one
additional element that specifies the ordinal value of a menu resource in an executable
file. If the first element has any other value, the system treats the array as a
null-terminated Unicode string that specifies the name of a menu resource in an
executable file.
*/
static u8* SkipSzOrOrd(u8* d) {
    WORD* pw = (WORD*)d;
    WORD w = *pw++;
    if (w == 0x0000) {
        // no menu
    } else if (w == 0xffff) {
        // menu id followed by another WORD item
        pw++;
    } else {
        // anything else: zero-terminated WCHAR*
        WCHAR* s = (WCHAR*)pw;
        while (*s) {
            s++;
        }
        s++;
        pw = (WORD*)s;
    }
    return (u8*)pw;
}

Strings are zero-terminated utf-16:

static u8* SkipSz(u8* d) {
    WCHAR* s = (WCHAR*)d;
    while (*s) {
        s++;
    }
    s++; // skip terminating zero
    return (u8*)s;
}

To make the code more robust, we check that the dialog is extended and has font information to patch:

static bool IsDlgTemplateEx(DLGTEMPLATE* tpl) {
    return tpl->style == MAKELONG(0x0001, 0xFFFF);
}

static bool HasDlgTemplateExFont(DLGTEMPLATEEX* tpl) {
    DWORD style = tpl->style & (DS_SETFONT | DS_FIXEDSYS);
    return style != 0;
}

Changing font name

It's also possible to change the font name, but it's slightly harder (which is why I didn't implement it). WCHAR typeface[] is an inline null-terminated string that is the name of the font. To change it we would also have to move the data that follows it.

The roads not taken

There are other ways to achieve this. A dialog is just a HWND. In the WM_INITDIALOG message we could iterate over all controls, change their font with the WM_SETFONT message, and then resize the controls and the window. That's much more work than our solution. We just patch the font size and let Windows do the font setting and resizing.

Another option would be to generate the binary blob representing dialogs at runtime.
It would require writing more code, but then we could define new dialogs in C++ code in a way that wouldn't be that much different from the .rc syntax. I want to explore that solution because it would also allow adding a simple layout system to simplify defining the dialogs. In .rc files everything must be absolutely positioned. The visual dialog editor helps a bit but is unreliable, and I need resizing logic anyway because after translating strings absolute positioning doesn't work.
Building wc in the browser

From time to time I like to run wc -l on my source code to see how much code I wrote. For those not in the know: wc -l shows the number of lines in files.

Actually, what I have to do is more like find -name "*.go" | xargs wc -l because wc isn't particularly good at handling directories. I just want to see the number of lines in all my source files, man. I don't want to google the syntax of find and xargs for the hundredth time.

After learning about the File System API I decided to write a tool that does just that as a web app. No need to install software. I did just that and you can use it yourself.

The rest of this article describes how I would have done it if I did it.

Building software quickly

It only took me 3 days, which is a testament to how productive the web platform can be. My weapons of choice are:

- Svelte for frontend
- Tailwind CSS for CSS
- JSDoc for static typing of JavaScript
- File System API to access files and directories on your computer
- vite for a bundler and dev server
- render to deploy

For a small project Svelte and Tailwind CSS are arguably overkill. I used them because I standardized on that toolset. Standardization allows me to re-use prior experience and sometimes even code.

Why those technologies? Svelte is React without the bloat. Try it and you'll love it. Tailwind CSS is CSS but more productive. You have to try it to believe it. JSDoc is a happy medium between no types at all and TypeScript. I have great internal resistance to switching to TypeScript. Maybe 5 years from now.

And none of that would be possible without browser APIs that allow access to files on your computer. Which Firefox doesn't implement because they are happy to lose market share to browsers that implement useful features. Clearly $3 million a year is not enough to buy yourself a CEO with an understanding of the obvious.

Implementation tidbits

Getting a list of files

To get a recursive listing of files in a directory use showDirectoryPicker to get a FileSystemDirectoryHandle. Call dirHandle.values() to get a list of directory entries. Recurse if an entry is a directory.

Not all browsers support that API. To detect if it works:

/**
 * @returns {boolean}
 */
export function isIFrame() {
  let isIFrame = false;
  try {
    // in iframe, those are different
    isIFrame = window.self !== window.top;
  } catch {
    // do nothing
  }
  return isIFrame;
}

/**
 * @returns {boolean}
 */
export function supportsFileSystem() {
  return "showDirectoryPicker" in window && !isIFrame();
}

Because people on Hacker News always complain about slow, bloated software, I took pains to make my code fast. One of those pains was using an array instead of an object to represent a file system entry.

Wait, now HN people will complain that I'm optimizing prematurely. Listen buddy, Steve Wozniak wrote assembly in hex and he liked it. In comparison, optimizing the memory layout of the most frequently used object in JavaScript is like drinking champagne on Jeff Bezos' yacht.

Here's a JavaScript trick for optimizing the memory layout of objects with a fixed number of fields: derive your class from an Array.

Deriving a class from an Array

A little known thing about JavaScript is that an Array is just an object; you can derive your class from it and add methods, getters and setters. You get the compact layout of an array and the convenience of accessors. Here's a sketch of how I implemented the FsEntry object:

// a directory tree.
// each element is either a file:
// [file, dirHandle, name, path, size, null]
// or directory:
// [[entries], dirHandle, name, path, size, null]
// extra null value is for the caller to stick additional data
// without the need to re-allocate the array
// if you need more than 1, use an object

// handle (file or dir), parentHandle (dir), size, path, dirEntries, meta
const handleIdx = 0;
const parentHandleIdx = 1;
const sizeIdx = 2;
const pathIdx = 3;
const dirEntriesIdx = 4;
const metaIdx = 5;

export class FsEntry extends Array {
  get size() {
    return this[sizeIdx];
  }
  // ... rest of the accessors
}

We have 6 slots in the array and we can access them as e.g. entry[sizeIdx]. We can hide this implementation detail by writing a getter, like the FsEntry size getter shown above.

Reading a directory recursively

Once you get a FileSystemDirectoryHandle by using window.showDirectoryPicker() you can read the content of the directory. Here's one way to implement a recursive read of a directory:

/**
 * @param {FileSystemDirectoryHandle} dirHandle
 * @param {Function} skipEntryFn
 * @param {string} dir
 * @returns {Promise<FsEntry>}
 */
export async function readDirRecur(
  dirHandle,
  skipEntryFn = dontSkip,
  dir = dirHandle.name
) {
  /** @type {FsEntry[]} */
  let entries = [];
  // @ts-ignore
  for await (const handle of dirHandle.values()) {
    if (skipEntryFn(handle, dir)) {
      continue;
    }
    const path = dir == "" ? handle.name : `${dir}/${handle.name}`;
    if (handle.kind === "file") {
      let e = await FsEntry.fromHandle(handle, dirHandle, path);
      entries.push(e);
    } else if (handle.kind === "directory") {
      let e = await readDirRecur(handle, skipEntryFn, path);
      e.path = path;
      entries.push(e);
    }
  }
  let res = new FsEntry(dirHandle, null, dir);
  res.dirEntries = entries;
  return res;
}

The function skipEntryFn is called for every entry and allows the caller to decide not to include a given entry. You can, for example, skip a directory like .git. It can also be used to show the progress of reading the directory to the user, as it all happens asynchronously.

Showing the files

I use tables and I'm not ashamed. It's still the best technology to display, well, a table of values where cells are sized to content and columns are aligned. Flexbox doesn't remember anything across rows so it can't align columns. Grid can lay things out properly but I haven't found a way to easily highlight the whole row when the mouse is over it. With CSS you can only target individual cells in a grid, not rows. With a table I just style <tr class="hover:bg-gray-100">. That's Tailwind speak for: on mouse hover, set the background color to light gray.

A folder can contain other folders, so we need recursive components to implement it. Svelte supports that with <svelte:self>.

I implemented it as a tree view where you can expand folders to see their content. It's one big table for everything, but I needed to indent each expanded folder to make it look like a tree. It was a bit tricky. I went with an indent property in my Folder component. It starts at 0 and goes +1 for each level of nesting. Then I style the first file name column as <td class="ind-{indent}">...</td> and use these CSS styles:

<style>
  :global(.ind-1) {
    padding-left: 0.5rem;
  }
  :global(.ind-2) {
    padding-left: 1rem;
  }
  /* ... up to .ind-17 */
</style>

It only goes up to .ind-17. Yes, if you have deeper nesting, it won't show correctly. I'll wait for a bug report before increasing it further.

Calculating line count

You can get the size of the file from the FileSystemFileEntry. For source code I want to see the number of lines.
It's quite trivial to calculate:

/**
 * @param {Blob} f
 * @returns {Promise<number>}
 */
export async function lineCount(f) {
  if (f.size === 0) {
    // empty files have no lines
    return 0;
  }
  let ab = await f.arrayBuffer();
  let a = new Uint8Array(ab);
  let nLines = 0;
  // if last character is not newline, we must add +1 to line count
  let toAdd = 0;
  for (let b of a) {
    // line endings are:
    // CR (13) LF (10) : windows
    // LF (10)         : unix
    // CR (13)         : mac
    // mac is very rare so we just count LF (10), which covers
    // windows and unix lines
    if (b === 10) {
      toAdd = 0;
      nLines++;
    } else {
      toAdd = 1;
    }
  }
  return nLines + toAdd;
}

It doesn't handle Mac files that use CR for newlines. It's ok to write buggy code as long as you document it.

I also skip known binary files (.png, .exe etc.) and known "not mine" directories like .git and node_modules. Small considerations like that matter.

Remembering opened directories

I typically use the tool many times on the same directories, and it's a pain to pick the same directory over and over again. A FileSystemDirectoryHandle can be stored in IndexedDB, so I implemented a history of opened directories using a persisted store backed by IndexedDB.

Asking for permissions

When it comes to accessing files and directories on disk you can't ask for forgiveness, you have to ask for permission. The user grants permissions in window.showDirectoryPicker() and the browser remembers them for a while, but they expire quite quickly. You need to re-check and re-ask for permission to a FileSystemFileHandle or FileSystemDirectoryHandle before each access:

export async function verifyHandlePermission(fileHandle, readWrite) {
  const options = {};
  if (readWrite) {
    options.mode = "readwrite";
  }
  // Check if permission was already granted. If so, return true.
  if ((await fileHandle.queryPermission(options)) === "granted") {
    return true;
  }
  // Request permission. If the user grants permission, return true.
  if ((await fileHandle.requestPermission(options)) === "granted") {
    return true;
  }
  // The user didn't grant permission, so return false.
  return false;
}

If permissions are still valid from before, it's a no-op. If not, the browser will show a dialog asking for permissions. If you ask for write permissions, Chrome will show 2 confirmation dialogs vs. 1 for read-only access. I start with read-only access and, if needed, ask again to get write (or delete) permissions.

Deleting files and directories

Deleting files has nothing to do with showing line counts, but it was easy to implement and it was useful, so I added it. You need to remember the FileSystemDirectoryHandle of the parent directory.

To delete a file: parentDirHandle.removeEntry("foo.txt")

To delete a directory: parentDirHandle.removeEntry("node_modules", {recursive: true})

Getting bit by a multi-threading bug

JavaScript doesn't have multiple threads, so you can't have all those nasty bugs? Right? Right?

Yes and no. Async is not multi-threading, but it does create non-obvious execution flows.

I had a bug: I noticed that some .txt files were showing a line count of 0 even though they clearly did have lines. I went bug hunting. I checked the lineCount function. Seems ok. I added console.log(), I stepped through the code. Time went by and my frustration level was reaching DEFCON 1. Thankfully, before I reached cocked pistol, I had an epiphany.

You see, JavaScript has async, where some code can interleave with other code. The browser can splice those async "threads" with UI code. No threads means there are no data races, i.e.
writing memory values that another thread is in the middle of reading. But we do have non-obvious execution flows. Here's how my code worked:

- get a list of files (async)
- show the files in the UI
- calculate line counts for all files (async)
- update the UI to show line counts after we get them all

Async is great for users: calculating line counts could take a long time as we need to read all those files. If this process wasn't async it would block the UI. Thanks to async there are enough checkpoints for the browser to process UI events in between processing files.

The issue was that the function calculating line counts was using an array I got from reading the directory. I passed the same array to the Folder component to show the files. And I sorted the array to show files in human friendly order. In JavaScript, sorting mutates the array, and the line counting function was part-way through processing that same array. As a result, if the series of events was unfortunate enough, I would skip some files in line counting: they would be re-sorted to a position that line counting thought it had already processed. Result: no lines for you!

A happy ending and an easy fix: Folder makes a copy of the array, so sorting doesn't affect the line counting process.

The future

No software is ever finished, but I arrived at a point where it does the majority of the job I wanted, so I shipped it.

There is a feature I would find useful: statistics for each extension. How many lines in .go files vs. .js files etc.? But I'm holding off implementing it until:

- I really, really want it
- I get feature requests from people who really, really want it

You can look at the source code. It's source visible but not open source.
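One detail the post mentions but doesn't show is persisting directory handles in IndexedDB. A minimal sketch of how that could be done, assuming the idb-keyval helper library (the key name and function names here are mine, not from the project):

import { get, set } from "idb-keyval";

// Save a picked directory handle to the history stored in IndexedDB.
// FileSystemDirectoryHandle is structured-cloneable in Chromium,
// so it can be stored directly.
export async function rememberDir(dirHandle) {
  const history = (await get("dir-history")) || [];
  history.push(dirHandle);
  await set("dir-history", history);
}

// Restore the most recently opened directory, re-checking permission
// (using verifyHandlePermission from above) because grants expire.
export async function reopenLastDir() {
  const history = (await get("dir-history")) || [];
  const dirHandle = history[history.length - 1];
  if (!dirHandle) {
    return null;
  }
  const ok = await verifyHandlePermission(dirHandle, false);
  return ok ? dirHandle : null;
}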
More in programming
I was chatting with a friend recently, and she mentioned an annoyance when reading fanfiction on her iPad. She downloads fic from AO3 as EPUB files, and reads it in the Kindle app – but the files don't have a cover image, and so the preview thumbnails aren't very readable. She's downloaded several hundred stories, and these thumbnails make it difficult to find things in the app's "collections" view.

This felt like a solvable problem. There are tools to add cover images to EPUB files, if you already have the image. The EPUB file embeds some key metadata, like the title and author. What if you had a tool that could extract that metadata, auto-generate an image, and use it as the cover?

So I built that. It's a small site where you upload EPUB files you've downloaded from AO3, the site generates a cover image based on the metadata, and it gives you an updated EPUB to download. The new covers show the title and author in large text on a coloured background, so they're much easier to browse in the Kindle app.

If you'd find this helpful, you can use it at alexwlchan.net/my-tools/add-cover-to-ao3-epubs/

Otherwise, I'm going to explain how it works, and what I learnt from building it. There are three steps to this process:

1. Open the existing EPUB to get the title and author
2. Generate an image based on that metadata
3. Modify the EPUB to insert the new cover image

Let's go through them in turn.

Open the existing EPUB

I've not worked with EPUB before, and I don't know much about it. My first instinct was to look for Python EPUB libraries on PyPI, but there was nothing appealing. The results were either very specific tools (convert EPUB to/from format X) or very unmaintained (the top result was last updated in April 2014). I decided to try writing my own code to manipulate EPUBs, rather than using somebody else's library.

I had a vague memory that EPUB files are zips, so I changed the extension from .epub to .zip and tried unzipping one – and it turns out that yes, it is a zip file, and the internal structure is fairly simple. I found a file called content.opf which contains metadata as XML, including the title and author I'm looking for:

<?xml version='1.0' encoding='utf-8'?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="uuid_id">
  <metadata xmlns:opf="http://www.idpf.org/2007/opf" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:calibre="http://calibre.kovidgoyal.net/2009/metadata">
    <dc:title>Operation Cameo</dc:title>
    <meta name="calibre:timestamp" content="2025-01-25T18:01:43.253715+00:00"/>
    <dc:language>en</dc:language>
    <dc:creator opf:file-as="alexwlchan" opf:role="aut">alexwlchan</dc:creator>
    <dc:identifier id="uuid_id" opf:scheme="uuid">13385d97-35a1-4e72-830b-9757916d38a7</dc:identifier>
    <meta name="calibre:title_sort" content="operation cameo"/>
    <dc:description><p>Some unusual orders arrive at Operation Mincemeat HQ.</p></dc:description>
    <dc:publisher>Archive of Our Own</dc:publisher>
    <dc:subject>Fanworks</dc:subject>
    <dc:subject>General Audiences</dc:subject>
    <dc:subject>Operation Mincemeat: A New Musical - SpitLip</dc:subject>
    <dc:subject>No Archive Warnings Apply</dc:subject>
    <dc:date>2023-12-14T00:00:00+00:00</dc:date>
  </metadata>
  …

That dc: prefix was instantly familiar from my time working at Wellcome Collection – this is Dublin Core, a standard set of metadata fields used to describe books and other objects. I'm unsurprised to see it in an EPUB; this is exactly how I'd expect it to be used.
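(As an aside: in the final, browser-based version of the tool described below, pulling these two fields out of content.opf can be done with the built-in DOMParser. A minimal sketch, with names of my choosing:)

// Parse content.opf and read the Dublin Core title/creator fields.
// opfXml is assumed to be the file's text, e.g. extracted with JSZip.
const DC_NS = "http://purl.org/dc/elements/1.1/";

function getTitleAndAuthor(opfXml) {
  const doc = new DOMParser().parseFromString(opfXml, "application/xml");
  const title = doc.getElementsByTagNameNS(DC_NS, "title")[0]?.textContent;
  const author = doc.getElementsByTagNameNS(DC_NS, "creator")[0]?.textContent;
  return { title, author };
}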
I found an article that explains the structure of an EPUB file, which told me that I can find the content.opf file by looking at the root-path element inside the mandatory META-INF/container.xml file which is in every EPUB. I wrote some code to find the content.opf file, then a few XPath expressions to extract the key fields, and I had the metadata I needed.

Generate a cover image

I sketched a simple cover design which shows the title and author. I wrote the first version of the drawing code in Pillow, because that's what I'm familiar with. It was fine, but the code was quite flimsy – it didn't wrap properly for long titles, and I couldn't get custom fonts to work.

Later I rewrote the app in JavaScript, so I had access to the HTML canvas element. This is another tool that I haven't worked with before, so a fun chance to learn something new. The API felt fairly familiar, similar to other APIs I've used to build HTML elements.

This time I did implement some line wrapping – there's a measureText() API for canvas, so you can see how much space text will take up before you draw it. I break the text into words, and keep adding words to a line until measureText tells me the line is going to overflow the page. I have lots of ideas for how I could improve the line wrapping, but it's good enough for now.

I was also able to get fonts working, so I picked Georgia to match the font used for titles on AO3.

I had several ideas for choosing the background colour. I'm trying to help my friend browse her collection of fic, and colour would be a useful way to distinguish things – so how do I use it? I realised I could get the fandom from the EPUB file, so I decided to use that. I use the fandom name as a seed to a random number generator, then I pick a random colour. This means that all the fics in the same fandom will get the same colour – for example, all the Star Wars stories are a shade of red, while Star Trek are a bluey-green. This was a bit harder than I expected, because it turns out that JavaScript doesn't have a built-in seeded random number generator – I ended up using some snippets from a Stack Overflow answer, where bryc has written several pseudorandom number generators in plain JavaScript.

I didn't realise until later, but I designed something similar to the placeholder book covers in the Apple Books app. I don't use Apple Books that often, so it wasn't a deliberate choice to mimic this style, but clearly it was somewhere in my subconscious. One difference is that Apple's app seems to be picking from a small selection of background colours, whereas my code can pick a much nicer variety of colours. Apple's choices will have been pre-approved by a designer to look good, but I think mine is more fun.

Add the cover image to the EPUB

My first attempt to add a cover image used pandoc:

pandoc input.epub --output output.epub --epub-cover-image cover.jpeg

This approach was no good: although it added the cover image, it destroyed the formatting in the rest of the EPUB. This made it easier to find the fic, but harder to read once you'd found it.

So I tried to do it myself, and it turned out to be quite easy! I unzipped another EPUB which already had a cover image. I found the cover image in OPS/images/cover.jpg, and then I looked for references to it in content.opf.
I found two elements that referred to cover images:

<?xml version="1.0" encoding="UTF-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="PrimaryID">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">
    <meta name="cover" content="cover-image"/>
    …
  </metadata>
  <manifest>
    <item id="cover-image" href="images/cover.jpg" media-type="image/jpeg" properties="cover-image"/>
    …
  </manifest>
</package>

This gave me the steps for adding a cover image to an EPUB file: add the image file to the zipped bundle, then add these two elements to the content.opf.

Where am I going to deploy this?

I wrote the initial prototype of this in Python, because that's the language I'm most familiar with. Python has all the libraries I need:

- The zipfile module can unpack and modify the EPUB/ZIP
- The xml.etree or lxml modules can manipulate XML
- The Pillow library can generate images

I built a small Flask web app: you upload the EPUB to my server, my server does some processing, and sends the EPUB back to you. But for such a simple app, do I need a server?

I tried rebuilding it as a static web page, doing all the processing in client-side JavaScript. That's simpler for me to host, and it doesn't involve a round-trip to my server. That has lots of other benefits – it's faster, less of a privacy risk, and doesn't require a persistent connection. I love static websites, so can they do this? Yes! I just had to find a different set of libraries:

- The JSZip library can unpack and modify the EPUB/ZIP, and is the only third-party code I'm using in the tool
- Browsers include DOMParser for manipulating XML
- I've already mentioned the HTML <canvas> element for rendering the image

This took a bit longer because I'm not as familiar with JavaScript, but I got it working. As a bonus, this makes the tool very portable. Everything is bundled into a single HTML file, so if you download that file, you have the whole tool. If my friend finds this tool useful, she can save the file and keep a local copy of it – she doesn't have to rely on my website to keep using it.

What should it look like?

My first design was very "engineer brain" – I just put the basic controls on the page. It was fine, but it wasn't good. That might be okay, because the only person I need to be able to use this app is my friend – but wouldn't it be nice if other people were able to use it?

If they're going to do that, they need to know what it is – most people aren't going to read a 2,500 word blog post to understand a tool they've never heard of. (Although if you have read this far, I appreciate you!) I started designing a proper page, including some explanations and descriptions of what the tool is doing. I got something that felt pretty good, including FAQs and acknowledgements, and I added a grey area for the part where you actually upload and download your EPUBs, to draw the user's eye and make it clear this is the important stuff.

But even with that design, something was missing. I realised I was telling you I'd create covers, but not showing you what they'd look like. Aha! I sat down and made up a bunch of amusing titles for fanfic and fanfic authors, so now you see a sample of the covers before you upload your first EPUB. This makes it clearer what the app will do, and was a fun way to wrap up the project.

What did I learn from this project?

Don't be scared of new file formats

My first instinct was to look for a third-party library that could handle the "complexity" of EPUB files.
In hindsight, I'm glad I didn't find one – it forced me to learn more about how EPUBs work, and I realised I could write my own code using built-in libraries. EPUB files are essentially ZIP files, and I only had basic needs, so I was able to write my own code.

Because I didn't rely on a library, now I know more about EPUBs, I have code that's simpler and easier for me to understand, and I don't have a dependency that may cause problems later.

There are definitely some file formats where I need existing libraries (I'm not going to write my own JPEG parser, for example) – but I should be more open to writing my own code, and not jumping to add a dependency.

Static websites can handle complex file manipulations

I love static websites and I've used them for a lot of tasks, but mostly read-only display of information – not anything more complex or interactive. But modern JavaScript is very capable, and you can do a lot of things with it. Static pages aren't just for static data.

One of the first things I made that got popular was find untagged Tumblr posts, which was built as a static website because that's all I knew how to build at the time. Somewhere in the intervening years, I forgot just how powerful static sites can be. I want to build more tools this way.

Async JavaScript calls require careful handling

The JSZip library I'm using has a lot of async functions, and this is my first time using async JavaScript. I got caught out several times, because I forgot to wait for async calls to finish properly.

For example, I'm using canvas.toBlob to render the image, which is an async function. I wasn't waiting for it to finish, and so the zip would be repackaged before the cover image was ready to add, and I got an EPUB with a missing image. Oops.

I think I'll always prefer the simplicity of synchronous code, but I'm sure I'll get better at async JavaScript with practice.

Final thoughts

I know my friend will find this helpful, and that feels great. Writing software that's designed for one person is my favourite software to write. It's not hyper-scale, it won't launch the next big startup, and it's usually not breaking new technical ground – but it is useful. I can see how I'm making somebody's life better, and isn't that what computers are for? If other people like it, that's a nice bonus, but I'm really thinking about that one person. Normally the one person I'm writing software for is me, so it's extra nice when I can do it for somebody else.

If you want to try this tool yourself, go to alexwlchan.net/my-tools/add-cover-to-ao3-epubs/

If you want to read the code, it's all on GitHub.
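As an appendix, here is my sketch of the measureText-based word wrapping the post describes, illustrative only; the real code is in the GitHub repo:

// Greedy word wrap: keep adding words to the current line until
// measureText() says the line would overflow maxWidth.
function wrapTitle(ctx, text, maxWidth) {
  const words = text.split(" ");
  const lines = [];
  let line = "";
  for (const word of words) {
    const candidate = line === "" ? word : line + " " + word;
    if (ctx.measureText(candidate).width > maxWidth && line !== "") {
      lines.push(line);
      line = word;
    } else {
      line = candidate;
    }
  }
  if (line !== "") {
    lines.push(line);
  }
  return lines;
}

// usage: ctx.font = "32px Georgia"; const lines = wrapTitle(ctx, title, 360);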
I've been doing Dry January this year. One thing I missed was something for apéro hour, a beverage to mark the start of the evening. Something complex and maybe bitter, not like a drink you'd have with lunch. I found some good options.

Ghia sodas are my favorite. Ghia is an NA apéritif based on grape juice but with enough bitterness (gentian) and sourness (yuzu) to be interesting. You can buy a bottle and mix it with soda yourself but I like the little cans with extra flavoring. The Ginger and the Sumac & Chili are both great.

Another thing I like are low-sugar fancy soda pops. Not diet drinks, they still have a little sugar, but typically 50 calories a can. De La Calle Tepache is my favorite. Fermented pineapple is delicious and they have some fun flavors. Culture Pop is also good.

A friend gave me the Zero book, a drinks cookbook from the fancy restaurant Alinea. This book is a little aspirational but the recipes are doable, it's just a lot of labor. Very fancy high end drink mixing, really beautiful flavor ideas. The only thing I made was their gin substitute (mostly junipers extracted in glycerin) and it was too sweet for me. Need to find the right use for it, a martini definitely ain't it.

An easier homemade drink is this Nonalcoholic Dirty Lemon Tonic. It's basically a lemonade heavily flavored with salted preserved lemons, then mixed with tonic. I love the complexity and freshness of this drink and enjoy it on its own merits.

Finally, non-alcoholic beer has gotten a lot better in the last few years thanks to manufacturing innovations. I've been enjoying NA Black Butte Porter, Stella Artois 0.0, Heineken 0.0. They basically all taste just like their alcoholic uncles, no compromise.

One thing to note about non-alcoholic substitutes is they are not cheap. They've become a big high end business. Expect to pay the same for an NA drink as one with alcohol even though they aren't taxed nearly as much.
The first time we had to evacuate Malibu this season was during the Franklin fire in early December. We went to bed with our bags packed, thinking they'd probably get it under control. But by 2am, the roaring blades of fire choppers shaking the house got us up.

As we sped down the canyon towards Pacific Coast Highway (PCH), the fire had reached the ridge across from ours, and flames were blazing large out the car windows. It felt like we had left the evacuation a little too late, but they eventually did get Franklin under control before it reached us.

Humans have a strange relationship with risk and disasters. We're so prone to wishful thinking and bad pattern matching. I remember people being shocked when the flames jumped the PCH during the Woolsey fire in 2018. IT HAD NEVER DONE THAT! So several friends of ours had to suddenly escape a nightmare scenario, driving through burning streets, in heavy smoke, with literally their lives on the line. Because the past had failed to predict the future.

I fell into that same trap for a moment with the dramatic proclamations of wind and fire weather in the days leading up to January 7. Warning after warning of "extremely dangerous, life-threatening wind" coming from the City of Malibu, and that overly-bureaucratic-but-still-ominous "Particularly Dangerous Situation" designation. Because, really, how much worse could it be? Turns out, a lot.

It was a little before noon on the 7th when we first saw the big plumes of smoke rise from the Palisades fire. And immediately the pattern matching ran astray. Oh, it's probably just like Franklin. It's not big yet, they'll get it out. They usually do. Well, they didn't.

By the late afternoon, we had once more packed our bags, and by then it was also clear that things actually were different this time. Different worse. Different enough that even Santa Monica didn't feel like it was assured to be safe. So we headed far North, to be sure that we wouldn't have to evacuate again. Turned out to be a good move.

Because by now, into the evening, few people in the connected world hadn't started to see the catastrophic images emerging from the Palisades and Eaton fires. Well over 10,000 houses would ultimately burn. Entire neighborhoods leveled. Pictures that could be mistaken for World War II. Utter and complete destruction.

By the night of the 7th, the fire reached our canyon, and it tore through the chaparral and brush that'd been building since the last big fire that area saw in 1993. Out of some 150 houses in our immediate vicinity, nearly a hundred burned to the ground. Including the first house we moved to in Malibu back in 2009. But thankfully not ours.

That's of course a huge relief. This was and is our Malibu Dream House. The site of that gorgeous home office I'm so fond to share views from. Our home.

But a house left standing in a disaster zone is still a disaster. The flames reached all the way up to the base of our construction, incinerated much of our landscaping, and devoured the power poles around it to dysfunction. We have burnt-out buildings every which way the eye looks. The national guard is still stationed at road blocks on the access roads. Utility workers are tearing down the entire power grid to rebuild it from scratch. It's going to be a long time before this is comfortably habitable again.

So we left. That in itself feels like defeat. There's an urge to stay put, and to help, in whatever helpless ways you can.
But with three school-age children who've already missed over a month's worth of learning from power outages, fire threats, actual fires, and now mudslide dangers, it was time to go.

None of this came as a surprise, mind you. After Woolsey in 2018, Malibu life always felt like living on borrowed time to us. We knew it, even accepted it. Beautiful enough to be worth the risk, we said.

But even if it wasn't a surprise, it's still a shock. The sheer devastation, especially in the Palisades, went far beyond our normal range of comprehension. Bounded, as it always is, by past experiences.

Thus, we find ourselves back in Copenhagen. A safe haven for calamities of all sorts. We lived here for three years during the pandemic, so it just made sense to use it for refuge once more. The kids' old international school accepted them right back in, and past friendships were quickly rebooted.

I don't know how long it's going to be this time. And that's an odd feeling to have, just as America has been turning a corner, and just as the optimism is back in so many areas. Of the twenty years I've spent in America, this feels like the most exciting time to be part of the exceptionalism that the US of A offers.

And of course we still are. I'll still be in the US all the time on both business, racing, and family trips. But it won't be exclusively so for a while, and it won't be from our Malibu Dream House. And that burns.
Thou shalt not suffer a flaky test to live, because it’s annoying, counterproductive, and dangerous: one day it might fail for real, and you won’t notice. Here’s what to do.
The ware for January 2025 is shown below. Thanks to brimdavis for contributing this ware! …back in the day when you would get wares that had "blue wires" in them…

One thing I wonder about this ware is… where are the ROMs? Perhaps I'll find out soon!

Happy year of the snake!