More from Krzysztof Kowalczyk blog
How AI beat me at the code optimization game. When I started writing this article I did not expect AI to beat me at optimizing JavaScript code. But it did. I’m really passionate about optimizing JavaScript. Some say it’s a mental illness but I like my code to go balls to the wall fast. I feel the need. The need for speed. Optimizing code often requires tedious refactoring. Can we delegate the tedious parts to AI? Can I just have ideas and get AI to be my programming slave? Let’s find out. Optimizing Unicode range lookup with AI In my experiment I used Cursor with the Claude 3.5 Sonnet model. I assume it could be done with other tools / models. I was browsing pdf.js code and saw this function: const UnicodeRanges = [ [0x0000, 0x007f], // 0 - Basic Latin ... omitted [0x0250, 0x02af, 0x1d00, 0x1d7f, 0x1d80, 0x1dbf], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement ... omitted ]; function getUnicodeRangeFor(value, lastPosition = -1) { // TODO: create a map range => position, sort the ranges and cache it. // Then we can make a binary search for finding a range for a given unicode. if (lastPosition !== -1) { const range = UnicodeRanges[lastPosition]; for (let i = 0, ii = range.length; i < ii; i += 2) { if (value >= range[i] && value <= range[i + 1]) { return lastPosition; } } } for (let i = 0, ii = UnicodeRanges.length; i < ii; i++) { const range = UnicodeRanges[i]; for (let j = 0, jj = range.length; j < jj; j += 2) { if (value >= range[j] && value <= range[j + 1]) { return i; } } } return -1; } This is a simple function: given a Unicode character it returns its Unicode range (a number 0 ... UnicodeRanges.length - 1). The number is cleverly encoded implicitly as the array index. The code already has a TODO about a potential optimization: sort UnicodeRanges and do a binary search. There’s a minor difficulty: some Unicode ranges have multiple intervals, so we have to change UnicodeRanges to explicitly include the range number as the 3rd element: const UnicodeRanges = [ [0x0000, 0x007f, 0], // 0 - Basic Latin [0x0250, 0x02af, 4], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement [0x1d00, 0x1d7f, 4], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement ... omitted ]; Then sort it and change the implementation of getUnicodeRangeFor(). Re-writing UnicodeRanges is simple but tedious and I don’t feel like doing it. Can AI do such a change for us? Changing representation I’ve selected the whole of UnicodeRanges, added it to chat with Ctrl + Shift + L and commanded: Change representation of UnicodeRanges array where each entry is 3 elements: [start, end, rangeNo]; preserve comments; if rangeNo has multiple ranges, split into separate entries Cursor responded with: const UnicodeRanges = [ [0x0000, 0x007f, 0], // 0 - Basic Latin [0x0080, 0x00ff, 1], // 1 - Latin-1 Supplement ... omitted ]; There was an issue: it only did part of the array. So I commanded: don’t be lazy, finish the whole UnicodeRanges array It did but the result was too long so I had to click the Continue button. It worked and the generated code seems to be correct so I applied it. For binary search, the array must be sorted, so I commanded: Sort the array by first element It did, the result looked correct so I accepted the change. Hmm, I can get used to having a programming slave.
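For comparison, here is roughly the throwaway conversion script one would write by hand instead: a minimal sketch that assumes the original nested UnicodeRanges shown above is in scope (buildSortedTriples is just an illustrative name, not something from pdf.js).

function buildSortedTriples(unicodeRanges) {
  const triples = [];
  for (let rangeNo = 0; rangeNo < unicodeRanges.length; rangeNo++) {
    const r = unicodeRanges[rangeNo];
    // split multi-interval ranges into separate [start, end, rangeNo] entries
    for (let i = 0; i < r.length; i += 2) {
      triples.push([r[i], r[i + 1], rangeNo]);
    }
  }
  // sort by start so the result can be binary searched
  triples.sort((a, b) => a[0] - b[0]);
  return triples;
}

Printing the result with JSON.stringify() and pasting it back into the source would produce the same sorted triples the AI generated, minus the preserved comments.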
Updating the code to implement binary search I copied getUnicodeRangeFor to chat and commanded: change the implementation to use binary search over UnicodeRanges array It gave me: function getUnicodeRangeFor(value, lastPosition = -1) { // Check the last position first if (lastPosition !== -1) { const range = UnicodeRanges[lastPosition]; if (value >= range[0] && value <= range[1]) { return lastPosition; } } // Binary search let left = 0; let right = UnicodeRanges.length - 1; while (left <= right) { const mid = Math.floor((left + right) / 2); const range = UnicodeRanges[mid]; if (value < range[0]) { right = mid - 1; } else if (value > range[1]) { left = mid + 1; } else { return range[2]; // Return the range index } } return -1; // Not found } It looks correct so I accepted it. We achieved the goal: replaced linear search with binary search using cheap AI labor. But I want more. More speed. Changing array of arrays to flat array A more efficient representation of the ranges would be a flat array: const UnicodeRanges = [ 0x0000, 0x007f, 0, // 0 - Basic Latin 0x0080, 0x00ff, 1, // 1 - Latin-1 Supplement ]; Let’s see if our AI slave can do it: change the whole UnicodeRanges array to flat array, preserve comments, don’t be lazy and do all of it I pre-emptively commanded the AI to convert the whole array. It did it. Now is the time to update getUnicodeRangeFor(). It’s a simple change I could do myself: now we just multiply index * 3 to get the range definition. But I kind of like commanding the AI so I did it again: update getUnicodeRangeFor for new flat array representations of UnicodeRanges It gave me: function getUnicodeRangeFor(value, lastPosition = -1) { // Check the last position first if (lastPosition !== -1) { const index = lastPosition * 3; if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) { return UnicodeRanges[index + 2]; } } // Binary search let left = 0; let right = UnicodeRanges.length / 3 - 1; while (left <= right) { const mid = Math.floor((left + right) / 2); const index = mid * 3; const rangeStart = UnicodeRanges[index]; const rangeEnd = UnicodeRanges[index + 1]; if (value < rangeStart) { right = mid - 1; } else if (value > rangeEnd) { left = mid + 1; } else { return UnicodeRanges[index + 2]; // Return the range index } } return -1; // Not found } It does seem correct, but let’s do one more tweak: move binary search code to its own function And it worked: function binarySearchUnicodeRange(value) { let left = 0; let right = UnicodeRanges.length / 3 - 1; while (left <= right) { const mid = Math.floor((left + right) / 2); const index = mid * 3; const rangeStart = UnicodeRanges[index]; const rangeEnd = UnicodeRanges[index + 1]; if (value < rangeStart) { right = mid - 1; } else if (value > rangeEnd) { left = mid + 1; } else { return UnicodeRanges[index + 2]; // Return the range index } } return -1; // Not found } function getUnicodeRangeFor(value, lastPosition = -1) { // Check the last position first if (lastPosition !== -1) { const index = lastPosition * 3; if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) { return UnicodeRanges[index + 2]; } } // Perform binary search return binarySearchUnicodeRange(value); } Inlining some of the binary search It’s good but we haven’t yet reached the levels of insanity I’ve hinted at. UnicodeRanges is a fixed array. We could “unroll” the binary search into code. It would be very painful to do manually. I would have to write a custom code generator to ensure it’s correct but maybe AI can do it for us?
unroll first 3 levels of binarySearchUnicodeRange To my surprise, it worked. AI understood what I wanted and gave me this: function getUnicodeRangeFor(value, lastPosition = -1) { ... omitted const length = UnicodeRanges.length / 3; let mid = Math.floor(length / 2) * 3; if (value < UnicodeRanges[mid]) { mid = Math.floor(mid / 2); if (value < UnicodeRanges[mid]) { mid = Math.floor(mid / 2); } else if (value > UnicodeRanges[mid + 1]) { mid = mid + Math.floor(mid / 2); } else { return UnicodeRanges[mid + 2]; } } else if (value > UnicodeRanges[mid + 1]) { mid = mid + Math.floor(mid / 2); if (value < UnicodeRanges[mid]) { mid = mid - Math.floor(mid / 4); } else if (value > UnicodeRanges[mid + 1]) { mid = mid + Math.floor(mid / 4); } else { return UnicodeRanges[mid + 2]; } } else { return UnicodeRanges[mid + 2]; } ... omitted } Can we do even better? unroll the values of UnicodeRanges[mid] etc. and replace with constants Holy cow, it worked: function getUnicodeRangeFor(value, lastPosition = -1) { // Check the last position first if (lastPosition !== -1) { const index = lastPosition * 3; if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) { return UnicodeRanges[index + 2]; } } // Unrolled binary search with constants if (value < 0x4E00) { // CJK Unified Ideographs start if (value < 0x0530) { // Armenian start if (value < 0x0250) { // IPA Extensions start if (value < 0x0100) { // Latin Extended-A start if (value < 0x0080) { // Latin-1 Supplement start return 0; // Basic Latin } return 1; // Latin-1 Supplement } if (value < 0x0180) { return 2; // Latin Extended-A } return 3; // Latin Extended-B } if (value < 0x0300) { // Combining Diacritical Marks start if (value < 0x02B0) { // Spacing Modifier Letters start return 4; // IPA Extensions } return 5; // Spacing Modifier Letters } if (value < 0x0370) { // Greek and Coptic start return 6; // Combining Diacritical Marks } if (value < 0x0400) { // Cyrillic start return 7; // Greek and Coptic } return 9; // Cyrillic } // Continue with binary search for remaining ranges let left = 10; // Index of Armenian in UnicodeRanges let right = 102; // Index of CJK Unified Ideographs in UnicodeRanges while (left <= right) { const mid = Math.floor((left + right) / 2) * 3; if (value < UnicodeRanges[mid]) { right = Math.floor(mid / 3) - 1; } else if (value > UnicodeRanges[mid + 1]) { left = Math.floor(mid / 3) + 1; } else { return UnicodeRanges[mid + 2]; } } } else { // Binary search for ranges after CJK Unified Ideographs let left = 103; // Index after CJK Unified Ideographs in UnicodeRanges let right = UnicodeRanges.length / 3 - 1; while (left <= right) { const mid = Math.floor((left + right) / 2) * 3; if (value < UnicodeRanges[mid]) { right = Math.floor(mid / 3) - 1; } else if (value > UnicodeRanges[mid + 1]) { left = Math.floor(mid / 3) + 1; } else { return UnicodeRanges[mid + 2]; } } } return -1; // Not found } Is it correct? I have no idea. It would be easy to write a test that uses UnicodeRanges as the source of truth and tests getUnicodeRangeFor() for every possible value. Can I just ask AI to do it faster? So far I micro-managed the AI coder. I was telling it exactly what kind of change I wanted and commanded the AI until it did what I wanted. A good boss delegates. A good boss tells the worker the high level goal and leaves the implementation details to the worker. Let’s start from scratch and command the AI thusly: could you make a faster implementation of getUnicodeRangeFor()?
You can change the representation of UnicodeRanges to achieve that goal Holy cow again. The AI figured out the binary search: const UnicodeRanges = [ { start: 0x0000, end: 0x007f, index: 0 }, // Basic Latin { start: 0x0080, end: 0x00ff, index: 1 }, // Latin-1 Supplement { start: 0x0100, end: 0x017f, index: 2 }, // Latin Extended-A // ... (other ranges) { start: 0x1f000, end: 0x1f09f, index: 122 }, // Domino Tiles - Mahjong Tiles ].sort((a, b) => a.start - b.start); function getUnicodeRangeFor(value, lastPosition = -1) { if (lastPosition !== -1) { const range = UnicodeRanges[lastPosition]; if (value >= range.start && value <= range.end) { return range.index; } } let left = 0; let right = UnicodeRanges.length - 1; while (left <= right) { const mid = Math.floor((left + right) / 2); const range = UnicodeRanges[mid]; if (value >= range.start && value <= range.end) { return range.index; } if (value < range.start) { right = mid - 1; } else { left = mid + 1; } } return -1; } The object {start, end, index} is less efficient than my flat array representation and it didn’t unroll the binary search but the core idea of binary search is there. But just for kicks, maybe there’s more: can you make it even more efficient? It did something slightly different, but really the same. I’m afraid no bonus this quarter. Ok, one last try: but could you go all out and use every possible implementation trick you know to optimize getUnicodeRangeFor and UnicodeRanges representation? Fuck me, I have no idea if the generated code is correct but it does seem it would be faster than the non-unrolled binary search I came up with earlier: // Preprocess UnicodeRanges into a more efficient structure const BUCKET_SIZE = 256; const MAX_UNICODE = 0x10FFFF; const buckets = new Uint16Array(Math.ceil((MAX_UNICODE + 1) / BUCKET_SIZE)); const ranges = []; UnicodeRanges.forEach((range, index) => { for (let i = 0; i < range.length; i += 2) { const start = range[i]; const end = range[i + 1]; ranges.push({ start, end, index }); const startBucket = start >>> 8; const endBucket = end >>> 8; for (let j = startBucket; j <= endBucket; j++) { buckets[j] |= (1 << (index & 15)); } } }); ranges.sort((a, b) => a.start - b.start); // Create a compact representation of ranges const compactRanges = new Uint32Array(ranges.length * 3); ranges.forEach((range, i) => { compactRanges[i * 3] = range.start; compactRanges[i * 3 + 1] = range.end; compactRanges[i * 3 + 2] = range.index; }); function getUnicodeRangeFor(value, lastPosition = -1) { if (value > MAX_UNICODE) return -1; // Check last position first if (lastPosition !== -1) { const start = compactRanges[lastPosition * 3]; const end = compactRanges[lastPosition * 3 + 1]; if (value >= start && value <= end) { return compactRanges[lastPosition * 3 + 2]; } } // Quick check using buckets const bucket = buckets[value >>> 8]; if (bucket === 0) return -1; // Binary search within the relevant ranges let left = 0; let right = compactRanges.length / 3 - 1; while (left <= right) { const mid = (left + right) >>> 1; const start = compactRanges[mid * 3]; const end = compactRanges[mid * 3 + 1]; if (value < start) { right = mid - 1; } else if (value > end) { left = mid + 1; } else { return compactRanges[mid * 3 + 2]; } } return -1; } In retrospect, using a Uint32Array for the UnicodeRanges array was a no-brainer. It’s more compact in memory, and the values will probably be compared as integers, not floats. It should be faster to scan than an array of floats. The buckets are not necessary.
They seem to be for a quick “doesn’t exist” check, which is not important. I believe most lookups are for values that exist. I’m humbled that just asking for super duper optimization made AI produce something I didn’t think of. More optimization ideas I can’t help myself. These are ideas I didn’t ask AI to implement. UnicodeRanges is small. A linear search of a compact Uint32Array representation where we just have (start, end) values for each range would be faster than binary search due to cache lines. We could start the search in the middle of the array and scan half the data going forward or backwards. We could also store ranges smaller than 0x10000 in a Uint16Array and larger ones in a Uint32Array. And do a linear search starting in the middle. Since the values are smaller than 256, we could encode the first 0xffff values in 64kB as a Uint8Array and the rest as a Uint32Array. That would probably be the fastest on average, because I believe most lookups are for Unicode chars smaller than 0xffff. Finally, we could calculate the frequency of each range in a representative sample of PDF documents, check the ranges based on that frequency, fully unrolled into code, without any tables. Conclusions AI is a promising way to do tedious code refactoring. If I didn’t have the AI, I would have to write a program to e.g. convert UnicodeRanges to a flat representation. It’s simple and therefore doable but certainly would take longer than the few minutes it took me to command the AI. The final unrolling of getUnicodeRangeFor() would probably never happen. It would require writing a sophisticated code generator which would be a big project by itself. AI can generate buggy code so it needs to be carefully reviewed. The unrolled binary search could not be verified by review, it would need a test. But hey, I could command my AI sidekick to write the test for me. There was this idea of organizing programming teams into a master programmer and coding grunts. The job of the master programmer, the thinking went, was to generate high level ideas and have coding grunts implement them. Turns out that we can’t organize people that way but now we can use AI to be our coding grunt. Prompt engineering is a thing. I wasted a bunch of time doing incremental improvements. I should have started by asking for super-duper optimization. The productivity gains are real. The whole thing took me about an hour. For this particular task that’s easily a 2x speedup compared to not using cheap AI labor. Imagine you’re running a software business and instead of spending 2 months on a task, you only spend 1 month. I’ll be using more AI for coding in the future.
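For reference, the exhaustive test mentioned above is short. A minimal sketch, assuming the original linear-search implementation is kept around under the hypothetical name getUnicodeRangeForLinear as the source of truth:

function testGetUnicodeRangeFor() {
  // compare the optimized version against the original for every code point
  for (let value = 0; value <= 0x10ffff; value++) {
    const expected = getUnicodeRangeForLinear(value);
    const got = getUnicodeRangeFor(value);
    if (expected !== got) {
      throw new Error("mismatch at 0x" + value.toString(16) + ": expected " + expected + ", got " + got);
    }
  }
  console.log("getUnicodeRangeFor matches the linear version for all code points");
}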
Notion-like table of contents in JavaScript Long web pages benefit from having a table of contents. Especially technical, reference documentation. As a reader you want a way to quickly navigate to a specific part of the documentation. This article describes how I implemented a table of contents for the documentation page of my Edna note taking application. It took only a few hours. Here’s the full code. A good toc A good table of contents is: always available unobtrusive A table of contents cannot always be visible. Space is always at a premium and should be used for the core functionality of a web page. For a documentation page the core is documentation text so space should be used to show documentation. But it should always be available in some compact form. I noticed that Notion implemented a toc in a nice way. Since great artists steal, I decided to implement similar behavior for Edna’s documentation. When hidden, we show a mini toc i.e. for each toc entry we have a gray rectangle. A black rectangle indicates the current position in the document: It’s small and unobtrusive. When you hover the mouse over that area we show the actual toc: Clicking on a title goes to that part of the page. Implementing table of contents My implementation can be added to any page. Grabbing toc elements I assume h1 to h6 elements mark table of contents entries. I use their text as the text of the entry. After the page loads I build the HTML for the toc. I grab all header elements: function getAllHeaders() { return Array.from(document.querySelectorAll("h1, h2, h3, h4, h5, h6")); } Each toc entry is represented by: class TocItem { text = ""; hLevel = 0; nesting = 0; element; } text is what we show to the user. hLevel is 1 … 6 for h1 … h6. nesting is like hLevel but sanitized. We use it to indent text in the toc, to show the tree structure of the content. element is the actual HTML element. We remember it so that we can scroll to that element with JavaScript. I create an array of TocItem for each header element on the page: function buildTocItems() { let allHdrs = getAllHeaders(); let res = []; for (let el of allHdrs) { /** @type {string} */ let text = el.innerText.trim(); text = removeHash(text); text = text.trim(); let hLevel = parseInt(el.tagName[1]); let h = new TocItem(); h.text = text; h.hLevel = hLevel; h.nesting = 0; h.element = el; res.push(h); } return res; } function removeHash(str) { return str.replace(/#$/, ""); } Generate toc HTML Toc wrapper Here’s the high-level structure: .toc-wrapper has 2 children: .toc-mini, always visible, shows an overview of the toc .toc-list hidden by default, shown on hover over .toc-wrapper The wrapper is always shown in the upper right corner using fixed position: .toc-wrapper { position: fixed; top: 1rem; right: 1rem; z-index: 50; } You can adjust top and right for your needs. When the toc is too long to be fully shown on screen, we must make it scrollable. But also the default scrollbars in Chrome are large so we make them smaller and less intrusive: .toc-wrapper { position: fixed; top: 1rem; right: 1rem; z-index: 50; } When the user hovers over .toc-wrapper, we switch display from .toc-mini to .toc-list: .toc-wrapper:hover > .toc-mini { display: none; } .toc-wrapper:hover > .toc-list { display: flex; } Generate mini toc We want to generate the following HTML: <div class="toc-mini"> <div class="toc-item-mini toc-light">▃</div> ... repeat for every TocItem </div> ▃ is a Unicode character that is a filled rectangle over the bottom 30% of the character. We use a very small font because it’s only a compact navigation helper.
.toc-light is a gray color. By removing this class we make it the default black, which marks the current position in the document. .toc-mini { display: flex; flex-direction: column; font-size: 6pt; cursor: pointer; } .toc-light { color: lightgray; } Generating HTML in vanilla JavaScript is not great, but it works for small things: function genTocMini(items) { let tmp = ""; let t = `<div class="toc-item-mini toc-light">▃</div>`; for (let i = 0; i < items.length; i++) { tmp += t; } return `<div class="toc-mini">` + tmp + `</div>`; } items is an array of TocItem we get from buildTocItems(). We mark the items with the toc-item-mini class so that we can query them all easily. Generate toc list The table of contents list is only slightly more complicated: <div class="toc-list"> <div title="{title}" class="toc-item toc-trunc {ind}" onclick=tocGoTo({n})>{text}</div> ... repeat for every TocItem </div> {ind} is the name of the indent class, like: .toc-ind-1 { padding-left: 4px; } tocGoTo(n) is a function that scrolls the page to show the n-th TocItem.element at the top. function genTocList(items) { let tmp = ""; let t = `<div title="{title}" class="toc-item toc-trunc {ind}" onclick=tocGoTo({n})>{text}</div>`; let n = 0; for (let h of items) { let s = t; s = s.replace("{n}", n); let ind = "toc-ind-" + h.nesting; s = s.replace("{ind}", ind); s = s.replace("{text}", h.text); s = s.replace("{title}", h.text); tmp += s; n++; } return `<div class="toc-list">` + tmp + `</div>`; } .toc-trunc is for limiting the width of the toc and gracefully truncating it: .toc-trunc { max-width: 32ch; min-width: 12ch; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; } Putting it all together Here’s the code that runs at page load, generates HTML and appends it to the page: function genToc() { tocItems = buildTocItems(); fixNesting(tocItems); const container = document.createElement("div"); container.className = "toc-wrapper"; let s = genTocMini(tocItems); let s2 = genTocList(tocItems); container.innerHTML = s + s2; document.body.appendChild(container); } Navigating Showing / hiding the toc list is done in CSS. When the user clicks a toc item, we need to show it at the top of the page: let tocItems = []; function tocGoTo(n) { let el = tocItems[n].element; let y = el.getBoundingClientRect().top + window.scrollY; let offY = 12; y -= offY; window.scrollTo({ top: y, }); } We remembered the HTML element in TocItem.element so all we need to do is scroll to it to show it at the top of the page. You can adjust offY e.g. if you show a navigation bar at the top that overlays the content, you want to make offY at least the height of the navigation bar. Updating toc mini to reflect current position When the user scrolls the page we want to reflect that in the toc mini by changing the color of the corresponding rectangle from gray to black. On the scroll event we calculate which visible TocItem.element is closest to the top of the window.
function updateClosestToc() { let closestIdx = -1; let closestDistance = Infinity; for (let i = 0; i < tocItems.length; i++) { let tocItem = tocItems[i]; const rect = tocItem.element.getBoundingClientRect(); const distanceFromTop = Math.abs(rect.top); if ( distanceFromTop < closestDistance && rect.bottom > 0 && rect.top < window.innerHeight ) { closestDistance = distanceFromTop; closestIdx = i; } } if (closestIdx >= 0) { console.log("Closest element:", closestIdx); let els = document.querySelectorAll(".toc-item-mini"); let cls = "toc-light"; for (let i = 0; i < els.length; i++) { let el = els[i]; if (i == closestIdx) { el.classList.remove(cls); } else { el.classList.add(cls); } } } } window.addEventListener("scroll", updateClosestToc); All together now After the page loads I run: genToc(); updateClosestToc(); I achieve that by including this in the HTML: <script src="/help.js" defer></script> </body> Possible improvements Software is never finished. Software can always be improved. I have 2 ideas for further improvements. Always visible when enough space Most of the time my browser window uses half of a 13 to 15 inch screen. I’m aggravated when websites don’t work well at that size. At that size there’s not enough space to always show the toc. But if the user chooses a wider browser window, it makes sense to use the available space and always show the table of contents. Keyboard navigation It would be nice to navigate the table of contents with the keyboard, in addition to the mouse. For example: t would show the table of contents Esc would dismiss it up / down arrows would navigate the toc tree Enter would navigate to the selected part and dismiss the toc
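A minimal sketch of that keyboard navigation, reusing tocItems and tocGoTo() from earlier; the selectedTocIdx variable, the setTocVisible() helper and the .toc-visible class are assumptions, not part of the current implementation:

let selectedTocIdx = 0;

function setTocVisible(visible) {
  // assumes CSS like .toc-visible > .toc-list { display: flex; }
  let wrapper = document.querySelector(".toc-wrapper");
  wrapper.classList.toggle("toc-visible", visible);
}

window.addEventListener("keydown", (ev) => {
  if (ev.key === "t") {
    setTocVisible(true);
  } else if (ev.key === "Escape") {
    setTocVisible(false);
  } else if (ev.key === "ArrowDown") {
    selectedTocIdx = Math.min(selectedTocIdx + 1, tocItems.length - 1);
  } else if (ev.key === "ArrowUp") {
    selectedTocIdx = Math.max(selectedTocIdx - 1, 0);
  } else if (ev.key === "Enter") {
    tocGoTo(selectedTocIdx);
    setTocVisible(false);
  } else {
    return;
  }
  ev.preventDefault();
});

A real version would also highlight the selected item and skip the handler when focus is in an input or textarea.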
Porting a medium-sized Vue application to Svelte 5 The short version: porting from Vue to Svelte is pretty straightforward and Svelte 5 is a nice upgrade to Svelte 4. Why port? I’m working on Edna, a note taking application for developers. It started as a fork of Heynote. I’ve added a bunch of features, most notably managing multiple notes. Heynote is written in Vue. Vue is similar enough to Svelte that I was able to add features without really knowing Vue but Svelte is what I use for all my other projects. At some point I invested enough effort (over 350 commits) into Edna that I decided to port from Vue to Svelte. That way I can write future code faster (I know Svelte much better than Vue) and re-use code from my other Svelte projects. Since Svelte 5 is about to be released, I decided to try it out. There were 10 .vue components. It took me about 3 days to port everything. Adding Svelte 5 to the build pipeline I started by adding Svelte 5 and converting the simplest component. In the above commit: I’ve installed Svelte 5 and its Vite plugin by adding them to package.json updated tailwind.config.cjs to also scan .svelte files added the Svelte plugin to vite.config.js to run the Svelte compiler on .svelte and .svelte.js files during build deleted Help.vue, which is not related to porting; I just wasn’t using it anymore started converting the smallest component, AskFSPermissions.vue, as AskFSPermissions.svelte In the next commit: I finished porting AskFSPermissions.vue I tweaked tsconfig.json so that VSCode type-checks .svelte files I replaced AskFSPermissions.vue with the Svelte 5 version Here replacing was easy because the component was a stand-alone component. All I had to do was to replace Vue’s: app = createApp(AskFSPermissions); app.mount("#app"); with Svelte 5: const args = { target: document.getElementById("app"), }; appSvelte = mount(AskFSPermissions, args); Overall porting strategy The next part was harder. Edna’s structure is: App.vue is the main component which shows / hides other components depending on state and desired operations. My preferred way of porting would be to start with leaf components and port them to Svelte one by one. However, I haven’t found an easy way of using .svelte components from .vue components. It’s possible: a Svelte 5 component can be imported and mounted into an arbitrary html element and I could pass props down to it. If the project was bigger (say weeks of porting) I would try to make it work so that I have a working app at all times. Given I estimated I could port it quickly, I went with a different strategy: I created a mostly empty App.svelte and started porting components, starting with the simplest leaf components. I didn’t have a working app but I could see and test the components I’d ported so far. This strategy had its challenges. Namely: most of the state is not there so I had to fake it for a while. For example the first component I ported was TopNav.vue, which displays the name of the current note in the upper part of the screen. The problem was: I hadn’t ported the logic to load the file yet. For a while I had to fake the state i.e. I created a noteName variable with a dummy value. With each ported component I would port the parts of App.vue needed by the component. Replacing third-party components Most of the code in Edna is written by me (or comes from the original Heynote project) and doesn’t use third-party Vue libraries. There are 2 exceptions: I wanted to show notification messages and have a context menu.
Showing notification messages isn’t hard: for another project I wrote a Svelte component for that in a few hours. But since I didn’t know Vue well, it would have taken me much longer, possibly days. For that reason I’d opted to use a third-party toast notifications Vue library. The same goes for the menu component. Even more so: implementing a menu component is complicated. At least a few days of effort. When porting to Svelte I replaced the third-party vue-toastification library with my own code. At under 100 loc it was trivial to write. For the context menu I re-used the context menu I wrote for my notepad2 project. It’s a more complicated component so it took longer to port it. Vue => Svelte 5 porting Vue and Svelte have a very similar structure so porting is straightforward and mostly mechanical. The big picture: <template> becomes the Svelte template. Remove <template> and replace Vue control flow directives with their Svelte equivalents. For example <div v-if="foo"> becomes {#if foo}<div>{/if} setup() can be done either at top-level, when the component is imported, or in $effect( () => { ... } ) when the component is mounted data() becomes variables. Some of them are regular JavaScript variables and some of them become reactive $state() props becomes $props() mounted() becomes $effect( () => { ... } ) methods() become regular JavaScript functions computed() becomes $derived.by( () => { ... } ) ref() becomes $state() $emit('foo') becomes an onfoo callback prop. It could also be an event but Svelte 5 recommends callback props over events @click becomes onclick v-model="foo" becomes bind:value={foo} {{ foo }} in HTML template becomes { foo } ref="foo" becomes bind:this={foo} :disabled="!isEnabled" becomes disabled={!isEnabled} CSS was scoped so it didn’t need any changes Svelte 5 At the time of this writing Svelte 5 is a Release Candidate and the creators tell you not to use it in production. Guess what, I’m using it in production. It works and it’s stable. I think the Svelte 5 devs operate from a mindset of “abundance of caution”. All software has bugs, including Svelte 4. If Svelte 5 doesn’t work, you’ll know it. Coming from Svelte 4, Svelte 5 is a nice upgrade. One small project is too early to have deep thoughts but I like it so far. It’s easy to learn new ways of doing things. It’s easy to convert Svelte 4 to Svelte 5, even without any tools. Things are even more compact and more convenient than in Svelte 4. {#snippet} adds functionality that I was missing from Svelte 4.
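To make that mapping concrete, here is a tiny made-up component (not from Edna) ported by mechanically applying the rules above; a sketch, so all names are illustrative:

<!-- Counter.vue (hypothetical) -->
<template>
  <div v-if="visible">
    <span>{{ label }}</span>
    <button :disabled="!enabled" @click="increment">{{ count }}</button>
  </div>
</template>
<script>
export default {
  props: ["label", "enabled"],
  data() {
    return { count: 0, visible: true };
  },
  methods: {
    increment() {
      this.count++;
      this.$emit("changed", this.count);
    },
  },
};
</script>

<!-- Counter.svelte (Svelte 5) -->
<script>
  let { label, enabled, onchanged } = $props(); // props + callback prop instead of $emit
  let count = $state(0);                        // data() becomes $state()
  let visible = $state(true);

  function increment() {                        // methods() become plain functions
    count++;
    onchanged?.(count);
  }
</script>
{#if visible}
  <div>
    <span>{label}</span>
    <button disabled={!enabled} onclick={increment}>{count}</button>
  </div>
{/if}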
How to dynamically change font size in a Windows dialog Windows’ win32 API is old and crufty. Many things that are trivial to do in HTML are difficult in win32. One of those things is changing the size of the font used by your native desktop app. I encountered this in SumatraPDF. A user asked for a way to increase the font size. I introduced a UIFontSize option but implementing that was difficult and time consuming. One of the issues was changing the font size used in dialogs. This article describes how I did it. The method is based on https://stackoverflow.com/questions/14370238/can-i-dynamically-change-the-font-size-of-a-dialog-window-created-with-c-in-vi How dialogs work SumatraPDF defines a bunch of dialogs in SumatraPDF.rc. Here’s a find dialog: IDD_DIALOG_FIND DIALOGEX 0, 0, 247, 52 STYLE DS_SETFONT | DS_MODALFRAME | DS_FIXEDSYS | WS_POPUP | WS_CAPTION | WS_SYSMENU CAPTION "Find" FONT 8, "MS Shell Dlg", 400, 0, 0x1 BEGIN LTEXT "&Find what:",IDC_STATIC,6,8,60,9 EDITTEXT IDC_FIND_EDIT,66,6,120,13,ES_AUTOHSCROLL CONTROL "&Match case",IDC_MATCH_CASE,"Button",BS_AUTOCHECKBOX | WS_TABSTOP,6,24,180,9 LTEXT "Hint: Use the F3 key for finding again",IDC_FIND_NEXT_HINT,6,37,180,9,WS_DISABLED DEFPUSHBUTTON "Find",IDOK,191,6,50,14 PUSHBUTTON "Cancel",IDCANCEL,191,24,50,14 END The .rc file is compiled by the resource compiler rc.exe and embedded in the resources section of a PE .exe file. The compiled version is a binary blob that has a stable format. At runtime we can get that binary blob from resources and pass it to the DialogBoxIndirectParam() function to create a dialog. How to change font size of a dialog at runtime DIALOGEX tells us it’s an extended dialog, which has a different binary layout than a non-extended DIALOG. As you can see, part of the dialog definition is a font definition: FONT 8, "MS Shell Dlg", 400, 0, 0x1 To provide a FONT you also need to specify the DS_SETFONT or DS_FIXEDSYS flag. We’re asking for the MS Shell Dlg font with a size of 8 points (12 pixels). 400 specifies standard weight (800 would be a bold font). Unfortunately the binary blob is generated at compilation time and we want to change the font size when the application runs. The simplest way to achieve that is to patch the binary blob in memory. The code for changing dialog font size at runtime You can find the full code at https://github.com/sumatrapdfreader/sumatrapdf/blob/b6aed9e7d257510ff82fee915506ce2e75481c64/src/SumatraDialogs.cpp#L20 It uses a small amount of SumatraPDF base code so you’ll need to lightly massage it to use it in your own code. The layout of the binary blob is documented at http://msdn.microsoft.com/en-us/library/ms645398(v=VS.85).aspx In C++ this is represented by the following struct: #pragma pack(push, 1) struct DLGTEMPLATEEX { WORD dlgVer; // 0x0001 WORD signature; // 0xFFFF DWORD helpID; DWORD exStyle; DWORD style; WORD cDlgItems; short x, y, cx, cy; /* sz_Or_Ord menu; sz_Or_Ord windowClass; WCHAR title[titleLen]; WORD fontPointSize; WORD fontWWeight; BYTE fontIsItalic; BYTE fontCharset; WCHAR typeface[stringLen]; */ }; #pragma pack(pop) #pragma pack(push, 1) tells the C++ compiler not to add padding between struct members. The part after x, y, cx, cy is commented out because sz_or_Ord and WCHAR [] are variable length, which can’t be represented in a C++ struct. fontPointSize is the value we need to patch. But first we need to get a copy of the binary blob.
DLGTEMPLATE* DupTemplate(int dlgId) { HRSRC dialogRC = FindResourceW(nullptr, MAKEINTRESOURCE(dlgId), RT_DIALOG); CrashIf(!dialogRC); HGLOBAL dlgTemplate = LoadResource(nullptr, dialogRC); CrashIf(!dlgTemplate); void* orig = LockResource(dlgTemplate); size_t size = SizeofResource(nullptr, dialogRC); CrashIf(size == 0); DLGTEMPLATE* ret = (DLGTEMPLATE*)memdup(orig, size); UnlockResource(orig); return ret; } dlgId comes from the .rc file (e.g. IDD_DIALOG_FIND for our find dialog). Most of it is win32 APIs; memdup() makes a copy of a memory block. Here’s the code to patch the font size: static void SetDlgTemplateExFont(DLGTEMPLATE* tmp, int fontSize) { CrashIf(!IsDlgTemplateEx(tmp)); DLGTEMPLATEEX* tpl = (DLGTEMPLATEEX*)tmp; CrashIf(!HasDlgTemplateExFont(tpl)); u8* d = (u8*)tpl; d += sizeof(DLGTEMPLATEEX); // sz_Or_Ord menu d = SkipSzOrOrd(d); // sz_Or_Ord windowClass; d = SkipSzOrOrd(d); // WCHAR[] title d = SkipSz(d); // WCHAR pointSize; WORD* wd = (WORD*)d; fontSize = ToFontPointSize(fontSize); *wd = fontSize; } We start at the end of the fixed-size portion of the blob (d += sizeof(DLGTEMPLATEEX)). We then skip the variable-length fields menu, windowClass and title and patch the font size in points. SumatraPDF code operates in pixels so it has to convert that to Windows points: static int ToFontPointSize(int fontSize) { int res = (fontSize * 72) / 96; return res; } Here’s how we skip past sz_or_Ord fields: /* Type: sz_Or_Ord A variable-length array of 16-bit elements that identifies a menu resource for the dialog box. If the first element of this array is 0x0000, the dialog box has no menu and the array has no other elements. If the first element is 0xFFFF, the array has one additional element that specifies the ordinal value of a menu resource in an executable file. If the first element has any other value, the system treats the array as a null-terminated Unicode string that specifies the name of a menu resource in an executable file. */ static u8* SkipSzOrOrd(u8* d) { WORD* pw = (WORD*)d; WORD w = *pw++; if (w == 0x0000) { // no menu } else if (w == 0xffff) { // menu id followed by another WORD item pw++; } else { // anything else: zero-terminated WCHAR* WCHAR* s = (WCHAR*)pw; while (*s) { s++; } s++; pw = (WORD*)s; } return (u8*)pw; } Strings are zero-terminated utf-16: static u8* SkipSz(u8* d) { WCHAR* s = (WCHAR*)d; while (*s) { s++; } s++; // skip terminating zero return (u8*)s; } To make the code more robust, we check that the dialog is extended and has font information to patch: static bool IsDlgTemplateEx(DLGTEMPLATE* tpl) { return tpl->style == MAKELONG(0x0001, 0xFFFF); } static bool HasDlgTemplateExFont(DLGTEMPLATEEX* tpl) { DWORD style = tpl->style & (DS_SETFONT | DS_FIXEDSYS); return style != 0; } Changing the font name It’s also possible to change the font name but it’s slightly harder (which is why I didn’t implement it). WCHAR typeface[] is an inline null-terminated string that is the name of the font. To change it we would also have to move the data that follows it. The roads not taken There are other ways to achieve that. A dialog is just an HWND. In the WM_INITDIALOG message we could iterate over all the controls, change their font with the WM_SETFONT message and then resize the controls and the window. That’s much more work than our solution. We just patch the font size and let Windows do the font setting and resizing. Another option would be to generate the binary blob representing dialogs at runtime.
It would require writing more code but then we could define new dialogs in C++ code that wouldn’t be that much different from the .rc syntax. I want to explore that solution because this would also allow adding a simple layout system to simplify defining the dialogs. In .rc files everything must be absolutely positioned. The visual dialog editor helps a bit but is unreliable and I need resizing logic anyway because after translating strings absolute positioning doesn’t work.
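Putting the pieces together, creating a dialog with a custom font size looks roughly like this; a sketch only, assuming the DupTemplate() and SetDlgTemplateExFont() helpers shown above, that memdup() allocates with malloc() so free() releases the copy, and a placeholder dialog procedure MyDlgProc:

INT_PTR ShowFindDialogWithFontSize(HINSTANCE hInst, HWND hwndParent, int fontSize) {
    // duplicate the compiled dialog resource, patch the font size, create the dialog
    DLGTEMPLATE* tpl = DupTemplate(IDD_DIALOG_FIND);
    SetDlgTemplateExFont(tpl, fontSize);
    INT_PTR res = DialogBoxIndirectParamW(hInst, tpl, hwndParent, MyDlgProc, 0);
    free(tpl);
    return res;
}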
Building wc in the browser From time to time I like to run wc -l on my source code to see how much code I wrote. For those not in the know: wc -l shows the number of lines in files. Actually, what I have to do is more like find -name "*.go" | xargs wc -l because wc isn’t particularly good at handling directories. I just want to see the number of lines in all my source files, man. I don’t want to google the syntax of find and xargs for a hundredth time. After learning about the File System API I decided to write a tool that does just that as a web app. No need to install software. I did just that and you can use it yourself. Here’s how it sees itself: The rest of this article describes how I would have done it if I did it. Building software quickly It only took me 3 days, which is a testament to how productive the web platform can be. My weapons of choice are: Svelte for frontend Tailwind CSS for CSS JSDoc for static typing of JavaScript File System API to access files and directories on your computer vite for a bundler and dev server render to deploy For a small project Svelte and Tailwind CSS are arguably an overkill. I used them because I standardized on that toolset. Standardization allows me to re-use prior experience and sometimes even code. Why those technologies? Svelte is React without the bloat. Try it and you’ll love it. Tailwind CSS is CSS but more productive. You have to try it to believe it. JSDoc is a happy medium between no types at all and TypeScript. I have great internal resistance to switching to TypeScript. Maybe 5 years from now. And none of that would be possible without browser APIs that allow access to files on your computer. Which Firefox doesn’t implement because they are happy to lose market share to browsers that implement useful features. Clearly $3 million a year is not enough to buy yourself a CEO with an understanding of the obvious. Implementation tidbits Getting the list of files To get a recursive listing of files in a directory use showDirectoryPicker to get a FileSystemDirectoryHandle. Call dirHandle.values() to get a list of directory entries. Recurse if an entry is a directory. Not all browsers support that API. To detect if it works: /** * @returns {boolean} */ export function isIFrame() { let isIFrame = false; try { // in iframe, those are different isIFrame = window.self !== window.top; } catch { // do nothing } return isIFrame; } /** * @returns {boolean} */ export function supportsFileSystem() { return "showDirectoryPicker" in window && !isIFrame(); } Because people on Hacker News always complain about slow, bloated software I took pains to make my code fast. One of those pains was using an array instead of an object to represent a file system entry. Wait, now HN people will complain that I’m optimizing prematurely. Listen buddy, Steve Wozniak wrote assembly in hex and he liked it. In comparison, optimizing the memory layout of the most frequently used object in JavaScript is like drinking champagne on Jeff Bezos’ yacht. Here’s a JavaScript trick for optimizing the memory layout of objects with a fixed number of fields: derive your class from an Array. Deriving a class from an Array A little known thing about JavaScript is that an Array is just an object and you can derive your class from it and add methods, getters and setters. You get the compact layout of an array and the convenience of accessors. Here’s a sketch of how I implemented the FsEntry object: // a directory tree.
each element is either a file: // [file, dirHandle, name, path, size, null] // or directory: // [[entries], dirHandle, name, path, size, null] // extra null value is for the caller to stick additional data // without the need to re-allocate the array // if you need more than 1, use an object // handle (file or dir), parentHandle (dir), size, path, dirEntries, meta const handleIdx = 0; const parentHandleIdx = 1; const sizeIdx = 2; const pathIdx = 3; const dirEntriesIdx = 4; const metaIdx = 5; export class FsEntry extends Array { get size() { return this[sizeIdx]; } // ... rest of the accessors } We have 6 slots in the array and we can access them as e.g. entry[sizeIdx]. We can hide this implementation detail by writing a getter like the FsEntry.size shown above. Reading a directory recursively Once you get a FileSystemDirectoryHandle by using window.showDirectoryPicker() you can read the content of the directory. Here’s one way to implement a recursive read of a directory: /** * @param {FileSystemDirectoryHandle} dirHandle * @param {Function} skipEntryFn * @param {string} dir * @returns {Promise<FsEntry>} */ export async function readDirRecur( dirHandle, skipEntryFn = dontSkip, dir = dirHandle.name ) { /** @type {FsEntry[]} */ let entries = []; // @ts-ignore for await (const handle of dirHandle.values()) { if (skipEntryFn(handle, dir)) { continue; } const path = dir == "" ? handle.name : `${dir}/${handle.name}`; if (handle.kind === "file") { let e = await FsEntry.fromHandle(handle, dirHandle, path); entries.push(e); } else if (handle.kind === "directory") { let e = await readDirRecur(handle, skipEntryFn, path); e.path = path; entries.push(e); } } let res = new FsEntry(dirHandle, null, dir); res.dirEntries = entries; return res; } The function skipEntryFn is called for every entry and allows the caller to decide not to include a given entry. You can, for example, skip a directory like .git. It can also be used to show progress of reading the directory to the user, as it happens asynchronously. Showing the files I use tables and I’m not ashamed. It’s still the best technology to display, well, a table of values where cells are sized to content and columns are aligned. Flexbox doesn’t remember anything across rows so it can’t align columns. Grid can lay out things properly but I haven’t found a way to easily highlight the whole row when the mouse is over it. With CSS you can only target individual cells in a grid, not rows. With a table I just style <tr class="hover:bg-gray-100">. That’s Tailwind speak for: on mouse hover set background color to light gray. A folder can contain other folders so we need a recursive component to implement it. Svelte supports that with <svelte:self>. I implemented it as a tree view where you can expand folders to see their content. It’s one big table for everything but I needed to indent each expanded folder to make it look like a tree. It was a bit tricky. I went with an indent property in my Folder component. It starts with 0 and goes +1 for each level of nesting. Then I style the first file name column as <td class="ind-{indent}">...</td> and use these CSS styles: <style> :global(.ind-1) { padding-left: 0.5rem; } :global(.ind-2) { padding-left: 1rem; } /* ... up to .ind-17 */ Except it only goes up to .ind-17. Yes, if you have deeper nesting, it won’t show correctly. I’ll wait for a bug report before increasing it further. Calculating line count You can get the size of the file from FileSystemFileEntry. For source code I want to see the number of lines.
It’s quite trivial to calculate: /** * @param {Blob} f * @returns {Promise<number>} */ export async function lineCount(f) { if (f.size === 0) { // empty files have no lines return 0; } let ab = await f.arrayBuffer(); let a = new Uint8Array(ab); let nLines = 0; // if last character is not newline, we must add +1 to line count let toAdd = 0; for (let b of a) { // line endings are: // CR (13) LF (10) : windows // LF (10) : unix // CR (13) : mac // mac is very rare so we just count 10 as they count // windows and unix lines if (b === 10) { toAdd = 0; nLines++; } else { toAdd = 1; } } return nLines + toAdd; } It doesn’t handle Mac files that use CR for newlines. It’s ok to write buggy code as long as you document it. I also skip known binary files (.png, .exe etc.) and known “not mine” directories like .git and node_modules. Small considerations like that matter. Remembering opened directories I typically use it many times on the same directories and it’s a pain to pick the same directory over and over again. A FileSystemDirectoryHandle can be stored in IndexedDB so I implemented a history of opened directories using a store persisted in IndexedDB. Asking for permissions When it comes to accessing files and directories on disk you can’t ask for forgiveness, you have to ask for permission. The user grants permissions in window.showDirectoryPicker() and the browser remembers them for a while, but they expire quite quickly. You need to re-check and re-ask for permission to a FileSystemFileHandle and FileSystemDirectoryHandle before each access: export async function verifyHandlePermission(fileHandle, readWrite) { const options = {}; if (readWrite) { options.mode = "readwrite"; } // Check if permission was already granted. If so, return true. if ((await fileHandle.queryPermission(options)) === "granted") { return true; } // Request permission. If the user grants permission, return true. if ((await fileHandle.requestPermission(options)) === "granted") { return true; } // The user didn't grant permission, so return false. return false; } If permissions are still valid from before, it’s a no-op. If not, the browser will show a dialog asking for permissions. If you ask for write permissions, Chrome will show 2 confirmation dialogs vs. 1 for read-only access. I start with read-only access and, if needed, ask again to get write (or delete) permission. Deleting files and directories Deleting files has nothing to do with showing line counts but it was easy to implement and useful, so I added it. You need to remember the FileSystemDirectoryHandle for the parent directory. To delete a file: parentDirHandle.removeEntry("foo.txt") To delete a directory: parentDirHandle.removeEntry("node_modules", {recursive: true}) Getting bit by a multi-threading bug JavaScript doesn’t have multiple threads so you can’t have all those nasty bugs? Right? Right? Yes and no. Async is not multi-threading but it does create non-obvious execution flows. I had a bug: I noticed that some .txt files were showing a line count of 0 even though they clearly did have lines. I went bug hunting. I checked the lineCount function. Seems ok. I added console.log(), I stepped through the code. Time went by and my frustration level was reaching DEFCON 1. Thankfully before I reached cocked pistol I had an epiphany. You see, JavaScript has async where some code can interleave with some other code. The browser can splice those async “threads” with UI code. No threads means there are no data races i.e.
writing memory values that another thread is in the middle of reading. But we do have non-obvious execution flows. Here’s how my code worked: get a list of files (async) show the files in UI calculate line counts for all files (async) update UI to show line counts after we get them all Async is great for users: calculating line counts could take a long time as we need to read all those files. If this process wasn’t async it would block the UI. Thanks to async there are enough checkpoints for the browser to process UI events in between processing files. The issue was that the function to calculate line counts was using an array I got from reading a directory. I passed the same array to the Folder component to show the files. And I sorted the array to show files in a human friendly order. In JavaScript sorting mutates an array and that array was partially processed by the line counting function. As a result, if the series of events was unfortunate enough, I would skip some files in line counting. They would be re-sorted to a position that line counting thought it had already counted. Result: no lines for you! A happy ending and an easy fix: Folder makes a copy of the array so sorting doesn’t affect the line counting process. The future No software is ever finished but I arrived at a point where it does the majority of the job I wanted so I shipped it. There is a feature I would find useful: statistics for each extension. How many lines in .go files vs. .js files etc.? But I’m holding off implementing it until: I really, really want it I get feature requests from people who really, really want it You can look at the source code. It’s source visible but not open source.
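The per-extension statistics would be a small addition. A sketch, assuming a flat array of { path, lineCount } objects collected during the scan (a hypothetical shape, not the app’s actual FsEntry tree):

function lineCountByExtension(files) {
  const stats = new Map();
  for (const { path, lineCount } of files) {
    const i = path.lastIndexOf(".");
    const ext = i >= 0 ? path.slice(i) : "(no extension)";
    stats.set(ext, (stats.get(ext) || 0) + lineCount);
  }
  // return [extension, totalLines] pairs, biggest first
  return [...stats.entries()].sort((a, b) => b[1] - a[1]);
}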
More in programming
The concept of Product-Market Fit (PMF) collapse has gained renewed attention with the rise of large language models (LLMs), as highlighted in a recent Reforge article. The article argues we’re witnessing unprecedented market disruption; in this post, I propose we’re experiencing an acceleration of a familiar pattern rather than a fundamentally new phenomenon. Adoption Curves […] The post The Exodus Curve appeared first on Marc Astbury.
In 1940, President Roosevelt tapped William S. Knudsen to run the government's production of military equipment. Knudsen had spent a pivotal decade at Ford during the mass-production revolution, and was president of General Motors, when he was drafted as a civilian into service as a three-star general. Not bad for a Dane, born just ten minutes by bike from where I'm writing this in Copenhagen! Knudsen's leadership raised the productive capacity of the US war machine by 100x in areas like plane production, where it went from producing 3,000 planes in 1939 to over 300,000 by 1945. He was quoted on his achievement: "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible". Knudsen wasn't an elected politician. He wasn't even a military man. But Roosevelt saw that this remarkable Dane had the skills needed to reform a puny war effort into one capable of winning the Second World War. Do you see where I'm going with this? Elon Musk is a modern day William S. Knudsen. Only even more accomplished in efficiency management, factory optimization, and first-order systems thinking. No, America isn't in a hot war with the Axis powers, but for the sake of the West, it damn well better be prepared for one in the future. Or better still, be so formidable that no other country or alliance would even think to start one. And this requires a strong, confident, and sound state with its affairs in order. If you look at the government budget alone, this is direly not so. The US was knocking on a two-trillion-dollar budget deficit in 2024! Adding to a towering debt that's now north of 36 trillion. A burden that's already consuming $881 billion in yearly interest payments. More than what's spent on the military or Medicare. Second only to Social Security on the list of line items. Clearly, this is not sustainable. This is the context of DOGE. The program, led by Musk, that's been deputized by Trump to turn the ship around. History doesn't repeat, but it rhymes, and Musk is dropping beats that Knudsen would have surely been tapping his foot to. And just like Knudsen in his time, it's hard to think of any other American entrepreneur more qualified to tackle exactly this two-trillion dollar problem. It is through The Musk Algorithm that SpaceX lowered the cost of sending a kilo of goods into lower orbit from the US by well over a magnitude. And now America's share of worldwide space transit has risen from less than 30% in 2010 to about 85%. Thanks to reusable rockets and chopstick-catching landing towers. Thanks to Musk. Or to take a more earthly example with Twitter. Before Musk took over, Twitter had revenues of $5 billion and earned $682 million. After the takeover, X has managed to earn $1.25 billion on $2.7 billion in revenue. Mostly thanks to the fact that Musk cut 80% of the staff out of the operation, and savaged the cloud costs of running the service. This is not what people expected at the time of the takeover! Not only did many commentators believe that Twitter was going to collapse from the drastic cuts in staff, they also thought that the financing for the deal would implode. Chiefly as a result of advertisers withdrawing from the platform under intense media pressure. But that just didn't happen. Today, the debt used to take over Twitter and turn it into X is trading at 97 cents on the dollar. The business is twice as profitable as it was before, and arguably as influential as ever.
All with just a fifth of the staff required to run it. Whatever you think of Musk and his personal tweets, it's impossible to deny what an insane achievement of efficiency this has been! These are just two examples of Musk's incredible ability to defy the odds and deliver the most unbelievable efficiency gains known to modern business records. And we haven't even talked about taking Tesla from producing 35,000 cars in 2014 to making 1.7 million in 2024. Or turning xAI into a major force in AI by assembling a 100,000 H100 cluster at "superhuman" pace. Who wouldn't want such a capacity involved in finding the waste, sloth, and squander in the US budget? Well, his political enemies, of course! And I get it. Musk's magic is balanced with mania and even a dash of madness. This is usually the case with truly extraordinary humans. The taller they stand, the longer the shadow. Expecting Musk to do what he does and then also be a "normal, chill dude" is delusional. But even so, I think it's completely fair to be put off by his tendency to fire tweets from the hip, opine on world affairs during all hours of the day, and offer his support to fringe characters in politics, business, and technology. I'd be surprised if even the most ardent Musk super fans don't wince a little every now and then at some of the antics. And yet, I don't have any trouble weighing those antics against the contributions he's made to mankind, and finding an easy and overwhelming balance in favor of his positive achievements. Musk is exactly the kind of formidable player you want on your team when you're down two trillion to nothing, needing a Hail Mary pass for the destiny of America, and eager to see the West win the future. He's a modern-day Knudsen on steroids (or Ketamine?). Let him cook.
Last week, James Truitt asked a question on Mastodon: James Truitt (he/him) @linguistory@code4lib.social Mastodon #digipres folks happen to have a handy repo of small invalid bags for testing purposes? I'm trying to automate our ingest process, and want to make sure I'm accounting for as many broken expectations as possible. Jan 31, 2025 at 07:49 PM The “bags” he’s referring to are BagIt bags. BagIt is an open format developed by the Library of Congress for packaging digital files. Bags include manifests and checksums that describe their contents, and they’re often used by libraries and archives to organise files before transferring them to permanent storage. Although I don’t use BagIt any more, I spent a lot of time working with it when I was a software developer at Wellcome Collection. We used BagIt as the packaging format for files saved to our cloud storage service, and we built a microservice very similar to what James is describing. The “bag verifier” would look for broken bags, and reject them before they were copied to long-term storage. I wrote a lot of bag verifier test cases to confirm that it would spot invalid or broken bags, and that it would give a useful error message when it did. All of the code for Wellcome’s storage service is shared on GitHub under an MIT license, including the bag verifier tests. They’re wrapped in a Scala test framework that might not be the easiest thing to read, so I’m going to describe the test cases in a more human-friendly way. Before diving into specific examples, it’s worth remembering: context is king. BagIt is described by RFC 8493, and you could create invalid bags by doing a line-by-line reading and deliberately ignoring every “MUST” or “SHOULD” but I wouldn’t recommend this approach. You’d get a long list of test cases, but you’d be overwhelmed by examples, and you might miss specific requirements for your system. The BagIt RFC is written for the most general case, but if you’re actually building a storage service, you’ll have more concrete requirements and context. It’s helpful to look at that context, and how it affects the data you want to store. Who’s creating the bags? How will they name files? Where are you going to store bags? How do bags fit into your wider systems? And so on. Understanding your context will allow you to skip verification steps that you don’t need, and to add verification steps that are important to you. I doubt any two systems implement the exact same set of checks, because every system has different context. Here are examples of potential validation issues drawn from the BagIt specification and my real-world experience. You won’t need to check for everything on this list, and this list isn’t exhaustive – but it should help you think about bag validation in your own context. The Bag Declaration bagit.txt This file declares that this is a BagIt bag, and the version of BagIt you’re using (RFC 8493 §2.1.1). It looks the same in almost every bag, for example: BagIt-Version: 1.0 Tag-File-Character-Encoding: UTF-8 This tightly prescribed format means it can only be invalid in a few ways: What if the bag doesn’t have a bag declaration? It’s a required element of every BagIt bag; it has to be there. What if the bag declaration is the wrong format? It should contain exactly two lines: a version number and a character encoding, in that order. What if the bag declaration has an unexpected version number? If you see a BagIt version that you’ve not seen before, the bag might have a different structure than what you expect.
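Those bag declaration checks are easy to express in code. A sketch in Node.js (the real Wellcome verifier was Scala; the names and the accepted version list here are illustrative, and a stricter encoding check than the RFC requires):

const fs = require("fs");
const path = require("path");

const KNOWN_VERSIONS = ["0.97", "1.0"]; // versions this hypothetical service accepts

function checkBagDeclaration(bagDir) {
  const errors = [];
  const declPath = path.join(bagDir, "bagit.txt");
  if (!fs.existsSync(declPath)) {
    return ["bag declaration bagit.txt is missing"];
  }
  const lines = fs.readFileSync(declPath, "utf8").trimEnd().split(/\r?\n/);
  if (lines.length !== 2) {
    errors.push(`bagit.txt should have exactly 2 lines, got ${lines.length}`);
  }
  const m = /^BagIt-Version: (\d+\.\d+)$/.exec(lines[0] || "");
  if (!m) {
    errors.push("first line is not a valid BagIt-Version declaration");
  } else if (!KNOWN_VERSIONS.includes(m[1])) {
    errors.push(`unexpected BagIt version ${m[1]}`);
  }
  if ((lines[1] || "") !== "Tag-File-Character-Encoding: UTF-8") {
    errors.push("second line is not the expected Tag-File-Character-Encoding declaration");
  }
  return errors;
}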
The Payload Files and Payload Manifest The payload files are the actual content you want to save and preserve. They get saved in the payload directory data/ (RFC 8493 §2.1.2), and there’s a payload manifest manifest-algorithm.txt that lists them, along with their checksums (RFC 8493 §2.1.3). Here’s an example of a payload manifest with MD5 checksums: 37d0b74d5300cf839f706f70590194c3 data/waterfall.jpg This tells us that the bag contains a single file data/waterfall.jpg, and it has the MD5 checksum 37d0…. These checksums can be used to verify that the files have transferred correctly, and haven’t been corrupted in the process. There are lots of ways a payload manifest could be invalid: What if the bag doesn’t have a payload manifest? Every BagIt bag must have at least one Payload Manifest file. What if the payload manifest is the wrong format? These files have a prescribed format – one file per line, with a checksum and file path. What if the payload manifest refers to a file that isn’t in the bag? Either one of the files in the bag has been deleted, or the manifest has an erroneous entry. What if the bag has a file that isn’t listed in the payload manifest? The manifest should be a complete listing of all the payload files in the bag. If the bag has a file which isn’t in the payload manifest, either that file isn’t meant to be there, or the manifest is missing an entry. Checking for unlisted files is how I spotted unwanted .DS_Store and Thumbs.db files. What if the checksum in the payload manifest doesn’t match the checksum of the file? Either the file has been corrupted, or the checksum is incorrect. What if there are payload files outside the data/ directory? All the payload files should be stored in data/. Anything outside that is an error. What if there are duplicate entries in the payload manifest? Every payload file must be listed exactly once in the manifest. This avoids ambiguity – suppose a file is listed twice, with two different checksums. Is the bag valid if one of those checksums is correct? Requiring unique entries avoids this sort of issue. What if the payload directory is empty? This is perfectly acceptable in the BagIt RFC, but it may not be what you want. If you know that you will always be sending bags that contain files, you should flag empty payload directories as an error. What if the payload manifest contains paths outside data/, or relative paths that try to escape the bag? (e.g. ../file.txt) Now we’re into “malicious bag” territory – a bag uploaded by somebody who’s trying to compromise your ingest pipeline. Any such bags should be treated with suspicion and rejected. If you’re concerned about malicious bags, you need a more thorough test suite to catch other shenanigans. We never went this far at Wellcome Collection, because we didn’t ingest bags from arbitrary sources. The bags only came from internal systems, and our verification was mainly about spotting bugs in those systems, not defending against malicious actors. A bag can contain multiple payload manifests – for example, it might contain both MD5 and SHA1 checksums. Every payload manifest must be valid for the overall bag to be valid.
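Here’s a sketch of those payload manifest checks in Python, again just for illustration. It assumes a single manifest-md5.txt; a real verifier would also handle other algorithms, multiple manifests, and percent-encoded paths:

import hashlib
import os

def check_payload_manifest(bag_root, manifest_name="manifest-md5.txt"):
    """Compare the payload manifest against the files that are actually in data/."""
    manifest_path = os.path.join(bag_root, manifest_name)

    # What if the bag doesn't have a payload manifest?
    if not os.path.exists(manifest_path):
        return ["bag has no payload manifest"]

    problems = []
    listed = {}   # path in the bag -> checksum from the manifest

    with open(manifest_path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            parts = line.split(maxsplit=1)
            # What if the payload manifest is the wrong format?
            if len(parts) != 2:
                problems.append(f"malformed manifest line: {line.strip()!r}")
                continue
            checksum, path = parts[0], parts[1].strip()
            # What if there are payload files outside data/, or paths that escape the bag?
            if not path.startswith("data/") or ".." in path.split("/"):
                problems.append(f"suspicious path in manifest: {path}")
            # What if there are duplicate entries in the payload manifest?
            if path in listed:
                problems.append(f"duplicate manifest entry: {path}")
            listed[path] = checksum

    # Every payload file that actually exists under data/, as a bag-relative path.
    actual = set()
    for dirpath, _, filenames in os.walk(os.path.join(bag_root, "data")):
        for name in filenames:
            full_path = os.path.join(dirpath, name)
            actual.add(os.path.relpath(full_path, bag_root).replace(os.sep, "/"))

    # What if the manifest refers to a file that isn't in the bag, or vice versa?
    for path in sorted(listed.keys() - actual):
        problems.append(f"listed in manifest but missing from bag: {path}")
    for path in sorted(actual - listed.keys()):
        problems.append(f"present in bag but not listed in manifest: {path}")

    # What if a checksum in the manifest doesn't match the file?
    for path in sorted(listed.keys() & actual):
        with open(os.path.join(bag_root, path), "rb") as f:
            if hashlib.md5(f.read()).hexdigest() != listed[path]:
                problems.append(f"checksum mismatch for {path}")

    return problems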
Payload filenames There are lots of gotchas around filenames and paths. It’s a complicated problem, and I definitely don’t understand all of it. It’s worth understanding the filename rules of any filesystem where you will be storing bags. For example, Azure Blob Storage has a number of rules around how you can name files, and Amazon S3 has different rules. We stored files in both at Wellcome Collection, and so the storage service had to enforce the superset of these rules. I’ve listed some edge cases of filenames you might want to consider, but it’s not a complete list. There are lots of ways that unexpected filenames could cause you issues, but whether you care depends on the source of your bags. If you control the bags and you know you’re not going to include any weird filenames, you can probably skip most of these. We only checked for one of these conditions at Wellcome Collection, because we had a pre-ingest step that normalised filenames. It converted filenames to ASCII, and saved a mapping between original and normalised filename in the bag. However, the normalisation was only designed for one filesystem, and produced filenames with trailing dots that were still disallowed in Azure Blob. What if a filename is too long? Some systems have a maximum path length, and an excessively deep directory structure or long filename could cause issues. What if a filename contains special characters? Spaces, emoji, or special characters (\, :, *, etc.) can cause problems for some tools. You should also think about characters that need to be URL-encoded. What if a filename has trailing spaces or dots? Some filesystems can’t support filenames ending in a dot or a space. What happens if your bag contains such a file, and you try to save it to the filesystem? This caused us issues at Wellcome Collection. We initially stored bags just in Amazon S3, which is happy to take filenames with a trailing dot – then we added backups to Azure Blob, which doesn’t. One of the bags we’d stored in Amazon S3 had a trailing dot in the filename, and caused us headaches when we tried to copy it to Azure. What if a filename contains a mix of path separators? The payload manifest uses a forward slash (/) as a path separator. If you have a filename with an alternative path separator, it might behave differently on different systems. For example, consider the payload file a\b\c. This would be a single file on macOS or Linux, but it would be nested inside two folders on Windows. What if the filenames are a mix of uppercase and lowercase characters? Some filesystems are case-sensitive, others aren’t. This can cause issues when you move bags between systems. For example, suppose a bag contains two different files Macrodata.txt and macrodata.txt. When you save that bag on a case-insensitive filesystem, only one file will be saved. What if the same filename appears twice with different Unicode normalisations? This is similar to filenames which only differ in upper/lowercase. They might be treated as two files on one filesystem, but collapsed into one file on another. The classic example is the word “café”: this can be encoded as caf\xc3\xa9 (UTF-8 encoded é) or cafe\xcc\x81 (e + combining acute accent). What if a filename contains a directory reference? A directory reference is /./ (current directory) or /../ (parent directory). It’s used on both Unix and Windows-like systems, and it’s another case of two filenames that look different but can resolve to the same path. For example: a/b, a/./b and a/subdir/../b all resolve to the same path under these rules. This can cause particular issues if you’re moving between local filesystems and cloud storage. Local filesystems treat filenames as hierarchical paths, where cloud storage like Amazon S3 often treats them as opaque strings. This can cause issues if you try to copy files from cloud storage to a local system – if you’re not careful, you could lose files in the process.
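Here’s a sketch of a few of these filename checks, run over the list of paths from the payload manifest. The 255-character limit is a placeholder; pick limits that match wherever you’re actually storing bags:

import unicodedata

def check_filenames(manifest_paths, max_length=255):
    """Flag manifest paths that tend to cause trouble when bags move between systems."""
    problems = []
    by_casefold = {}      # for catching Macrodata.txt vs macrodata.txt
    by_normalised = {}    # for catching café encoded as NFC vs NFD

    for path in manifest_paths:
        # What if a filename is too long?
        if len(path) > max_length:
            problems.append(f"path is longer than {max_length} characters: {path}")

        # What if a filename contains a mix of path separators?
        if "\\" in path:
            problems.append(f"path contains a backslash, which Windows treats as a separator: {path}")

        for segment in path.split("/"):
            # What if a filename contains a directory reference?
            if segment in (".", ".."):
                problems.append(f"path contains a directory reference: {path}")
            # What if a filename has trailing spaces or dots?
            if segment.endswith((".", " ")):
                problems.append(f"path segment ends with a dot or space: {path!r}")

        # What if the filenames are a mix of uppercase and lowercase characters?
        folded = path.casefold()
        if folded in by_casefold and by_casefold[folded] != path:
            problems.append(f"paths differ only by case: {path} / {by_casefold[folded]}")
        by_casefold[folded] = path

        # What if the same filename appears twice with different Unicode normalisations?
        normalised = unicodedata.normalize("NFC", path)
        if normalised in by_normalised and by_normalised[normalised] != path:
            problems.append(f"paths differ only by normalisation: {path!r} / {by_normalised[normalised]!r}")
        by_normalised[normalised] = path

    return problems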
The Tag Manifest tagmanifest-algorithm.txt Similar to the payload manifest, the tag manifest lists the tag files and their checksums. A “tag file” is the BagIt term for any metadata file that isn’t part of the payload (RFC 8493 §2.2.1). Unlike the payload manifest, the tag manifest is optional. A bag without a tag manifest can still be a valid bag. If the tag manifest is present, then many of the ways that a payload manifest can invalidate a bag – malformed contents, unreferenced files, or incorrect checksums – can also apply to tag manifests. There are some additional things to consider: What if a tag manifest lists payload files? The tag manifest lists tag files; the payload manifest lists payload files in the data/ directory. A tag manifest that lists files in the data/ directory is incorrect. What if the bag has a file that isn’t listed in either manifest? Every file in a bag (except the tag manifests) should be listed in either a payload or a tag manifest. A file that appears in neither could mean an unexpected file, or a missing manifest entry. Although the tag manifest is optional in the BagIt spec, at Wellcome Collection we made it a required file. Every bag had to have at least one tag manifest file, or our storage service would refuse to ingest it. The Bag Metadata bag-info.txt This is an optional metadata file that describes the bag and its contents (RFC 8493 §2.2.2). It’s a list of metadata elements, as simple label-value pairs, one per line. Here’s an example of a bag metadata file: Source-Organization: Lumon Industries Organization-Address: 100 Main Street, Kier, PE, 07043 Contact-Name: Harmony Cobel Unlike the manifest files, this is primarily intended for human readers. You can put arbitrary metadata in here, so you can add fields specific to your organisation. Although this file is more flexible, there are still ways it can be invalid: What if the bag metadata is the wrong format? It should have one metadata entry per line, with a label-value pair that’s separated by a colon. What if the Payload-Oxum is incorrect? The Payload-Oxum contains some concise statistics about the payload files: their total size in bytes, and how many there are. For example: Payload-Oxum: 517114.42 This tells us that the bag contains 42 payload files, and their total size is 517,114 bytes. If these stats don’t match the rest of the bag, something is wrong. What if non-repeatable metadata element names are repeated? The BagIt RFC defines a small number of reserved metadata element names which have a standard meaning. Although most metadata element names can be repeated, there are some which can’t, because they can only have one value. In particular: Bagging-Date, Bag-Size, Payload-Oxum and Bag-Group-Identifier. Although the bag metadata file is optional in a general BagIt bag, you may want to add your own rules based on how you use it. For example, at Wellcome Collection, we required all bags to have an External-Identifier value that matched a specific schema. This allowed us to link bags to records in other databases, and our bag verifier would reject bags that didn’t include it.
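Here’s a sketch of the bag metadata checks. It ignores the RFC’s rules for long values that wrap onto continuation lines, so treat it as illustrative rather than a complete parser:

import os

# Reserved elements that may appear at most once (RFC 8493 §2.2.2).
NON_REPEATABLE = {"Bagging-Date", "Bag-Size", "Payload-Oxum", "Bag-Group-Identifier"}

def check_bag_info(bag_root):
    """Check bag-info.txt, if present, against the actual payload."""
    path = os.path.join(bag_root, "bag-info.txt")
    if not os.path.exists(path):
        return []   # bag-info.txt is optional

    problems = []
    labels = []
    values = {}

    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            label, sep, value = line.partition(":")
            # What if the bag metadata is the wrong format?
            if not sep:
                problems.append(f"not a label: value pair: {line.strip()!r}")
                continue
            labels.append(label.strip())
            values[label.strip()] = value.strip()

    # What if non-repeatable metadata element names are repeated?
    for label in NON_REPEATABLE:
        if labels.count(label) > 1:
            problems.append(f"{label} appears {labels.count(label)} times; it may only appear once")

    # What if the Payload-Oxum is incorrect? (The format is "octet count . file count".)
    if "Payload-Oxum" in values:
        total_bytes = file_count = 0
        for dirpath, _, filenames in os.walk(os.path.join(bag_root, "data")):
            for name in filenames:
                total_bytes += os.path.getsize(os.path.join(dirpath, name))
                file_count += 1
        expected = f"{total_bytes}.{file_count}"
        if values["Payload-Oxum"] != expected:
            problems.append(f"Payload-Oxum is {values['Payload-Oxum']} but the payload is {expected}")

    return problems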
The Fetch File fetch.txt This is an optional element that allows you to reference files stored elsewhere (RFC 8493 §2.2.3). It tells the person reading the bag that a file hasn’t been included in this copy of the bag; they have to go and fetch it from somewhere else. The file is still recorded in the payload manifest (with a checksum you can verify), but you don’t have a complete bag until you’ve downloaded all the files. Here’s an example of a fetch.txt: https://topekastar.com/~daria/article.txt 1841 data/article.txt This tells us that data/article.txt isn’t included in this copy of the bag, but we can download it from https://topekastar.com/~daria/article.txt. (The number 1841 is the size of the file in bytes. It’s optional.) Using fetch.txt allows you to send a bag with “holes”, which saves disk space and network bandwidth, but at a cost – we’re now relying on the remote location to remain available. From a preservation standpoint, this is scary! If topekastar.com goes away, this bag will be broken. I know some people don’t use fetch.txt for precisely this reason. If you do use fetch.txt, here are some things to consider: What if the fetch file is the wrong format? There’s a prescribed format – one file per line, with a URL, optional file size, and file path. What if the fetch file lists a file which isn’t in the payload manifest? The fetch.txt should only tell us that a file is stored elsewhere, and shouldn’t be introducing otherwise unreferenced files. If a file appears in fetch.txt but not the payload manifest, then we can’t verify the remote file because we don’t have a checksum for it. There’s either an erroneous fetch file entry or a missing manifest entry. What if the fetch file points to a file at an unusable URL? The URL is only useful if the person who receives the bag can use it to download the file. If they can’t, the bag might technically be valid, but it’s functionally broken. For example, you might reject URLs that don’t start with http:// or https://. What if the fetch file points to a file with the wrong length? The fetch.txt can optionally specify the size of a file, so you know how much storage you need to download it. If you download the file, the actual size should match the stated size. What if the fetch file points to a file that’s already included in the bag? Now you have two ways to get this file: you can read it from the bag, or from the remote URL. If a file is listed in both fetch.txt and included in the bag, either that file isn’t meant to be in the bag, or the fetch file has an erroneous entry. We used fetch files at Wellcome Collection to implement versioning, and we added extra rules about what remote URLs were allowed. In particular, we didn’t allow fetching a file from just anywhere – you could fetch from our S3 buckets, but not the general Internet. The bag verifier would reject a fetch file entry that pointed elsewhere. (There’s a short sketch of these checks at the end of this post.) These examples illustrate just how many ways a BagIt bag can be invalid, from simple structural issues to complex edge cases. Remember: the key is to understand your specific needs and requirements. By considering your context – who creates your bags, where they’ll be stored, and how they fit into your wider systems – you can build a validation process to catch the issues that matter to you, while avoiding unnecessary complexity. I can give you my ideas, but only you can build your system.
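As promised above, here’s a sketch of the fetch.txt checks. It takes the parsed payload manifest and the set of files actually present in the bag; the allowed URL prefix is a stand-in for whatever rule you choose (for us it was “only our S3 buckets”). It doesn’t download anything, so checking the actual downloaded length is out of scope here:

def check_fetch_file(fetch_lines, manifest_paths, files_in_bag, allowed_prefix="https://"):
    """Check the entries of a fetch.txt.

    fetch_lines    - the lines of fetch.txt
    manifest_paths - paths listed in the payload manifest
    files_in_bag   - payload files actually present in this copy of the bag
    """
    problems = []
    for raw_line in fetch_lines:
        line = raw_line.strip()
        if not line:
            continue
        parts = line.split(maxsplit=2)
        # What if the fetch file is the wrong format?
        if len(parts) != 3:
            problems.append(f"malformed fetch.txt line: {line!r}")
            continue
        url, length, path = parts

        # What if the fetch file points to a file at an unusable URL?
        if not url.startswith(allowed_prefix):
            problems.append(f"fetch URL not allowed by our rules: {url}")

        # The length field is optional; "-" means unspecified.
        if length != "-" and not length.isdigit():
            problems.append(f"invalid length {length!r} for {path}")

        # What if the fetch file lists a file which isn't in the payload manifest?
        if path not in manifest_paths:
            problems.append(f"fetch.txt entry has no manifest entry (so no checksum): {path}")

        # What if the fetch file points to a file that's already included in the bag?
        if path in files_in_bag:
            problems.append(f"fetch.txt entry is also present in the bag: {path}")

    return problems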
We bought sixty-one servers for the launch of Basecamp 3 back in 2015. Dell R430s and R630s, packing thousands of cores and terabytes of RAM. Enough to fill all the app, job, cache, and database duties we needed. The entire outlay for this fleet was about half a million dollars, and it's only now, almost a decade later, that we're finally retiring the bulk of them for a full hardware refresh. What a bargain! That's over 3,500 days of service from this fleet, at a fully amortized cost of just $142/day. For everything needed to run Basecamp. A software service that has grossed hundreds of millions of dollars in that decade. We've of course had other expenses beyond hardware from operating Basecamp over the past decade. The ops team, the bandwidth, the power, and the cabinet rental across both our data centers. But none the less, owning our own iron has been a fantastically profitable proposition. Millions of dollars saved over renting in the cloud. And we aren't even done deriving value from this venerable fleet! The database servers, Dell R630s w/ Xeon E5-2699 CPUs and 768G of RAM, are getting handed down to some of our heritage apps. They will keep on trucking until they give up the ghost. When we did the public accounting for our cloud exit, it was based on five years of useful life from the hardware. But as this example shows, that's pretty conservative. Most servers can easily power your applications much longer than that. Owning your own servers has easily been one of our most effective cost advantages. Together with running a lean team. And managing our costs remains key to reaping the profitable fruit from the business. The dollar you keep at the end of the year is just as real whether you earn it or save it. So you just might want to run those cloud-exit numbers once more with a longer server lifetime value. It might just tip the equation, and motivate you to become a server owner rather than a renter.
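If you want to sanity-check the amortization math yourself, it’s one line of arithmetic, using the rounded figures quoted above:

hardware_cost = 500_000      # roughly half a million dollars for the 2015 fleet
days_in_service = 3_500      # over 3,500 days of service so far

print(f"${hardware_cost / days_in_service:.2f} per day")   # ~$142.86/day, in line with the $142/day figure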
At some point in a startup’s lifecycle, they decide that they need to be ready to go public in 18 months, and a flurry of IPO-readiness activity kicks off. This strategy focuses on a company working on IPO readiness, which has identified a gap in their internal controls for managing access to their users’ data. It’s a company that wants to meaningfully improve their security posture around user data access, but which has had a number of failed security initiatives over the years. Most of those initiatives have failed because they significantly degraded internal workflows for teams like customer support, such that the initial progress was reverted and subverted over time, to little long-term effect. This strategy represents the Chief Information Security Officer’s (CISO) attempt to acknowledge and overcome those historical challenges while meeting their IPO readiness obligations, and–most importantly–doing right by their users. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts. Reading this document To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore, then Diagnose and so on. Relative to the default structure, this document has been refactored in two ways to improve readability: first, Operation has been folded into Policy; second, Refine has been embedded in Diagnose. More detail on this structure in Making a readable Engineering Strategy document. Policy & Operations Our new policies, and the mechanisms to operate them are: Controls for accessing user data must be significantly stronger prior to our IPO. Senior leadership, legal, compliance and security have decided that we are not comfortable accepting the status quo of our user data access controls as a public company, and must meaningfully improve the quality of resource-level access controls as part of our pre-IPO readiness efforts. Our Security team is accountable for the exact mechanisms and approach to addressing this risk. We will continue to prioritize a hybrid solution to resource-access controls. This has been our approach thus far, and the fastest available option. Directly expose the log of our resource-level accesses to our users. We will build towards a user-accessible log of all company accesses of user data, and ensure we are comfortable explaining each and every access. In addition, it means that each rationale for access must be comprehensible and reasonable from a user perspective. This is important because it aligns our approach with our users’ perspectives. They will be able to evaluate how we access their data, and make decisions about continuing to use our product based on whether they agree with our use. Good security discussions don’t frame decisions as a compromise between security and usability. We will pursue multi-dimensional tradeoffs to simultaneously improve security and efficiency. Whenever we frame a discussion on trading off between security and utility, it’s a sign that we are having the wrong discussion, and that we should rethink our approach. We will prioritize mechanisms that can both automatically authorize and automatically document the rationale for accesses to customer data. 
The most obvious example of this is automatically granting access to a customer support agent for users who have an open support ticket assigned to that agent. (And removing that access when that ticket is reassigned or resolved.) Measure progress on percentage of customer data access requests justified by a user-comprehensible, automated rationale. This will anchor our approach on simultaneously improving the security of user data and the usability of our colleagues’ internal tools. If we only expand requirements for accessing customer data, we won’t view this as progress because it’s not automated (and consequently is likely to encourage workarounds as teams try to solve problems quickly). Similarly, if we only improve usability, charts won’t represent this as progress, because we won’t have increased the number of supported requests. As part of this effort, we will create a private channel where the security and compliance team has visibility into all manual rationales for user-data access, and will directly message the manager of any individual who relies on a manual justification for accessing user data. Expire unused roles to move towards principle of least privilege. Today we have a number of roles granted in our role-based access control (RBAC) system to users who do not use the granted permissions. To address that issue, we will automatically remove roles from colleagues after 90 days of not using the role’s permissions. Engineers in an active on-call rotation are the exception to this automated permission pruning. Weekly reviews until we see progress; monthly access reviews in perpetuity. Starting now, there will be a weekly sync between the security engineering team, teams working on customer data access initiatives, and the CISO. This meeting will focus on rapid iteration and problem solving. This is explicitly a forum for ongoing strategy testing, with CISO serving as the meeting’s sponsor, and their Principal Security Engineer serving as the meeting’s guide. It will continue until we have clarity on the path to 100% coverage of user-comprehensible, automated rationales for access to customer data. Separately, we are also starting a monthly review of sampled accesses to customer data to ensure the proper usage and function of the rationale-creation mechanisms we build. This meeting’s goal is to review access rationales for quality and appropriateness, both by reviewing sampled rationales in the short-term, and identifying more automated mechanisms for identifying high-risk accesses to review in the future. Exceptions must be granted in writing by CISO. While our overarching Engineering Strategy states that we follow an advisory architecture process as described in Facilitating Software Architecture, the customer data access policy is an exception and must be explicitly approved, with documentation, by the CISO. Start that process in the #ciso channel. Diagnose We have a strong baseline of role-based access controls (RBAC) and audit logging. However, we have limited mechanisms for ensuring assigned roles follow the principle of least privilege. This is particularly true in cases where individuals change teams or roles over the course of their tenure at the company: some individuals have collected numerous unused roles over five-plus years at the company. Similarly, our audit logs are durable and pervasive, but we have limited proactive mechanisms for identifying anomalous usage. 
Instead they are typically used to understand what occurred after an incident is identified by other mechanisms. For resource-level access controls, we rely on a hybrid approach between a 3rd-party platform for incoming user requests, and approval mechanisms within our own product. Providing a rationale for access across these two systems requires manual work, and those rationales are later manually reviewed for appropriateness in a batch fashion. There are two major ongoing problems with our current approach to resource-level access controls. First, the teams making requests view them as a burdensome obligation without much benefit to them or on behalf of the user. Second, because the rationale review steps are manual, there is no verifiable evidence of the quality of the review. We’ve found no evidence of misuse of user data. When colleagues do access user data, we have uniformly and consistently found that there is a clear, and reasonable rationale for that access. For example, a ticket in the user support system where the user has raised an issue. However, the quality of our documented rationales is consistently low because it depends on busy people manually copying over significant information many times a day. Because the rationales are of low quality, the verification of these rationales is somewhat arbitrary. From a literal compliance perspective, we do provide rationales and auditing of these rationales, but it’s unclear if the majority of these audits increase the security of our users’ data. Historically, we’ve made significant security investments that caused temporary spikes in our security posture. However, looking at those initiatives a year later, in many cases we see a pattern of increased scrutiny, followed by a gradual repeal or avoidance of the new mechanisms. We have found that most of them involved increased friction for essential work performed by other internal teams. In the natural order of performing work, those teams would subtly subvert the improvements because it interfered with their immediate goals (e.g. supporting customer requests). As such, we have high conviction from our track record that our historical approach can create optical wins internally. We have limited conviction that it can create long-term improvements outside of significant, unlikely internal changes (e.g. colleagues are markedly less busy a year from now than they are today). It seems likely we need a new approach to meaningfully shift our stance on these kinds of problems. Explore Our experience is that best practices around managing internal access to user data are widely available through our networks, and otherwise hard to find. The exact rationale for this is hard to determine, but it seems possible that it’s a topic that folks are generally uncomfortable discussing in public on account of potential future liability and compliance issues. In our exploration, we found two standardized dimensions (role-based access controls, audit logs), and one highly divergent dimension (resource-specific access controls): Role-based access controls (RBAC) are a highly standardized approach at this point. The core premise is that users are mapped to one or more roles, and each role is granted a certain set of permissions. For example, a role representing the customer support agent might be granted permission to deactivate an account, whereas a role representing the sales engineer might be able to configure a new account. Audit logs are similarly standardized. 
All access and mutation of resources should be tied in a durable log to the human who performed the action. These logs should be accumulated in a centralized, queryable solution. One of the core challenges is determining how to utilize these logs proactively to detect issues rather than reactively when an issue has already been flagged. Resource-level access controls are significantly less standardized than RBAC or audit logs. We found three distinct patterns adopted by companies, with little consistency across companies on which is adopted. Those three patterns for resource-level access control were: 3rd-party enrichment where access to resources is managed in a 3rd-party system such as Zendesk. This requires enriching objects within those systems with data and metadata from the product(s) where those objects live. It also requires implementing actions on the platform, such as archiving or configuration, allowing them to live entirely in that platform’s permission structure. The downside of this approach is tight coupling with the platform vendor, any limitations inherent to that platform, and the overhead of maintaining engineering teams familiar with both your internal technology stack and the platform vendor’s technology stack. 1st-party tool implementation where all activity, including creation and management of user issues, is managed within the core product itself. This pattern is most common in earlier stage companies or companies whose customer support leadership “grew up” within the organization without much exposure to the approach taken by peer companies. The advantage of this approach is that there is a single, tightly integrated and infinitely extensible platform for managing interactions. The downside is that you have to build and maintain all of that work internally rather than pushing it to a vendor that ought to be able to invest more heavily into their tooling. Hybrid solutions where a 3rd-party platform is used for most actions, and is further used to permit resource-level access within the 1st-party system. For example, you might be able to access a user’s data only while there is an open ticket created by that user, and assigned to you, in the 3rd-party platform. The advantage of this approach is that it allows supporting complex workflows that don’t fit within the platform’s limitations, and allows you to avoid complex coupling between your product and the vendor platform. Generally, our experience is that all companies implement RBAC, audit logs, and one of the resource-level access control mechanisms. Most companies pursue either 3rd-party enrichment with a sizable, long-standing team owning the platform implementation, or rely on a hybrid solution where they are able to avoid a long-standing dedicated team by lumping that work into existing teams.
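To sketch how the hybrid pattern can also produce the automated, user-comprehensible rationales described in the policy above, a resource-level access check might look roughly like this. The ticket_client and audit_log interfaces are hypothetical stand-ins for illustration, not a description of our actual systems:

def authorize_user_data_access(agent_id, user_id, ticket_client, audit_log):
    """Sketch of a hybrid resource-level access check.

    ticket_client is a hypothetical wrapper around the 3rd-party support platform;
    audit_log is whatever durable audit logging the product already has.
    """
    open_tickets = [
        ticket for ticket in ticket_client.open_tickets_for_user(user_id)
        if ticket.assignee_id == agent_id
    ]
    if not open_tickets:
        # No open ticket assigned to this agent: deny, and record the denied attempt too.
        audit_log.record(actor=agent_id, subject=user_id, action="read_user_data",
                         allowed=False, rationale="no open ticket assigned to this agent")
        return False

    # The rationale is generated automatically from the ticket, so it is both
    # machine-verifiable and comprehensible to the user whose data is accessed.
    rationale = f"open support ticket {open_tickets[0].id} assigned to agent {agent_id}"
    audit_log.record(actor=agent_id, subject=user_id, action="read_user_data",
                     allowed=True, rationale=rationale)
    return True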