The need to generate a globally unique identifier comes up often. The way described in RFC 4122 is popular, but it can be done better. I wrote the betterguid Go package that does it better. A unique id generated by this package:

- is a 20 character string, safe to include in urls (no need for escaping)
- consists of 8 bytes of timestamp (millisecond precision) and 9 bytes of random data
- sorts lexicographically
- won't collide with IDs generated by other clients, thanks to 72 bits of random data
- is monotonically increasing even within the same timestamp

You can read a longer description of the algorithm. My implementation is based on this JavaScript code. Related: a comparison of 7 Go libraries for unique id generation.
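For illustration, here is a minimal JavaScript sketch of the push-id scheme the package is modeled on (the post says the implementation is based on existing JavaScript code; this is not the package's actual source, and it omits the monotonic-increment handling for IDs created within the same millisecond):

```js
// 64-char alphabet whose ASCII order matches its logical order,
// so the resulting IDs sort lexicographically by creation time.
const ALPHABET =
  "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz";

function newId() {
  // 8 chars encode the millisecond timestamp, most significant digit first
  let now = Date.now();
  let ts = "";
  for (let i = 0; i < 8; i++) {
    ts = ALPHABET[now % 64] + ts;
    now = Math.floor(now / 64);
  }
  // 12 chars * 6 bits = 72 bits of randomness
  let rnd = "";
  for (let i = 0; i < 12; i++) {
    rnd += ALPHABET[Math.floor(Math.random() * 64)];
  }
  return ts + rnd; // 20 characters total
}
```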


More from Krzysztof Kowalczyk blog

Man vs. AI: optimizing JavaScript (Claude, Cursor)

How AI beat me at the code optimization game. When I started writing this article I did not expect AI to beat me at optimizing JavaScript code. But it did.

I'm really passionate about optimizing JavaScript. Some say it's a mental illness but I like my code to go balls to the wall fast. I feel the need. The need for speed.

Optimizing code often requires tedious refactoring. Can we delegate the tedious parts to AI? Can I just have ideas and get AI to be my programming slave? Let's find out.

Optimizing Unicode range lookup with AI

In my experiment I used Cursor with the Claude 3.5 Sonnet model. I assume it could be done with other tools / models. I was browsing pdf.js code and saw this function:

```js
const UnicodeRanges = [
  [0x0000, 0x007f], // 0 - Basic Latin
  // ... omitted
  [0x0250, 0x02af, 0x1d00, 0x1d7f, 0x1d80, 0x1dbf], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  // ... omitted
];

function getUnicodeRangeFor(value, lastPosition = -1) {
  // TODO: create a map range => position, sort the ranges and cache it.
  // Then we can make a binary search for finding a range for a given unicode.
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    for (let i = 0, ii = range.length; i < ii; i += 2) {
      if (value >= range[i] && value <= range[i + 1]) {
        return lastPosition;
      }
    }
  }
  for (let i = 0, ii = UnicodeRanges.length; i < ii; i++) {
    const range = UnicodeRanges[i];
    for (let j = 0, jj = range.length; j < jj; j += 2) {
      if (value >= range[j] && value <= range[j + 1]) {
        return i;
      }
    }
  }
  return -1;
}
```

This is a simple function: given a Unicode character it returns its Unicode range (a number 0 ... UnicodeRanges.length - 1). The number is cleverly encoded implicitly as the array index.

The code already has a TODO about a potential optimization: sort UnicodeRanges and do binary search. There's a minor difficulty: some Unicode ranges have multiple intervals, so we have to change UnicodeRanges to explicitly include the range number as a 3rd element:

```js
const UnicodeRanges = [
  [0x0000, 0x007f, 0], // 0 - Basic Latin
  [0x0250, 0x02af, 4], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  [0x1d00, 0x1d7f, 4], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  // ... omitted
];
```

Then sort it and change the implementation of getUnicodeRangeFor(). Re-writing UnicodeRanges is simple but tedious and I don't feel like doing it. Can AI do such a change for us?

Changing representation

I selected the whole of UnicodeRanges, added it to chat with Ctrl + Shift + L and commanded:

Change representation of UnicodeRanges array where each entry is 3 elements: [start, end, rangeNo]; preserve comments; if rangeNo has multiple ranges, split into separate entries

Cursor responded with:

```js
const UnicodeRanges = [
  [0x0000, 0x007f, 0], // 0 - Basic Latin
  [0x0080, 0x00ff, 1], // 1 - Latin-1 Supplement
  // ... omitted
];
```

There was an issue: it only did part of the array. So I commanded:

don't be lazy, finish the whole UnicodeRanges array

It did, but the result was too long so I had to click the Continue button. It worked and the generated code seemed correct so I applied it.

For binary search the array must be sorted, so I commanded:

Sort the array by first element

It did, the result looked correct so I accepted the change. Hmm, I can get used to having a programming slave.

Updating the code to implement binary search

I copied getUnicodeRangeFor to chat and commanded:

change the implementation to use binary search over UnicodeRanges array

It gave me:

```js
function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    if (value >= range[0] && value <= range[1]) {
      return lastPosition;
    }
  }

  // Binary search
  let left = 0;
  let right = UnicodeRanges.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const range = UnicodeRanges[mid];
    if (value < range[0]) {
      right = mid - 1;
    } else if (value > range[1]) {
      left = mid + 1;
    } else {
      return range[2]; // Return the range index
    }
  }
  return -1; // Not found
}
```

It looks correct so I accepted it.

We achieved the goal: replaced linear search with binary search using cheap AI labor. But I want more. More speed.

Changing array of arrays to flat array

A more efficient representation of the ranges would be a flat array:

```js
const UnicodeRanges = [
  0x0000, 0x007f, 0, // 0 - Basic Latin
  0x0080, 0x00ff, 1, // 1 - Latin-1 Supplement
];
```

Let's see if our AI slave can do it:

change the whole UnicodeRanges array to flat array, preserve comments, don't be lazy and do all of it

I pre-emptively commanded the AI to convert the whole array. It did it.

Now is the time to update getUnicodeRangeFor(). It's a simple change I could do myself: now we just multiply the index by 3 to get the range definition. But I kind of like commanding the AI so I did it again:

update getUnicodeRangeFor for new flat array representations of UnicodeRanges

It gave me:

```js
function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }

  // Binary search
  let left = 0;
  let right = UnicodeRanges.length / 3 - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const index = mid * 3;
    const rangeStart = UnicodeRanges[index];
    const rangeEnd = UnicodeRanges[index + 1];
    if (value < rangeStart) {
      right = mid - 1;
    } else if (value > rangeEnd) {
      left = mid + 1;
    } else {
      return UnicodeRanges[index + 2]; // Return the range index
    }
  }
  return -1; // Not found
}
```

It does seem correct, but let's do one more tweak:

move binary search code to its own function

And it worked:

```js
function binarySearchUnicodeRange(value) {
  let left = 0;
  let right = UnicodeRanges.length / 3 - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const index = mid * 3;
    const rangeStart = UnicodeRanges[index];
    const rangeEnd = UnicodeRanges[index + 1];
    if (value < rangeStart) {
      right = mid - 1;
    } else if (value > rangeEnd) {
      left = mid + 1;
    } else {
      return UnicodeRanges[index + 2]; // Return the range index
    }
  }
  return -1; // Not found
}

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }
  // Perform binary search
  return binarySearchUnicodeRange(value);
}
```

Inlining some of the binary search

It's good but we haven't yet reached the levels of insanity I've hinted at. UnicodeRanges is a fixed array. We could "unroll" the binary search into code. It would be very painful to do manually. I would have to write a custom code generator to ensure it's correct, but maybe AI can do it for us?

unroll first 3 levels of binarySearchUnicodeRange

To my surprise, it worked. AI understood what I wanted and gave me this:

```js
function getUnicodeRangeFor(value, lastPosition = -1) {
  // ... omitted
  const length = UnicodeRanges.length / 3;
  let mid = Math.floor(length / 2) * 3;
  if (value < UnicodeRanges[mid]) {
    mid = Math.floor(mid / 2);
    if (value < UnicodeRanges[mid]) {
      mid = Math.floor(mid / 2);
    } else if (value > UnicodeRanges[mid + 1]) {
      mid = mid + Math.floor(mid / 2);
    } else {
      return UnicodeRanges[mid + 2];
    }
  } else if (value > UnicodeRanges[mid + 1]) {
    mid = mid + Math.floor(mid / 2);
    if (value < UnicodeRanges[mid]) {
      mid = mid - Math.floor(mid / 4);
    } else if (value > UnicodeRanges[mid + 1]) {
      mid = mid + Math.floor(mid / 4);
    } else {
      return UnicodeRanges[mid + 2];
    }
  } else {
    return UnicodeRanges[mid + 2];
  }
  // ... omitted
}
```

Can we do even better?

unroll the values of UnicodeRanges[mid] etc. and replace with constants

Holy cow, it worked:

```js
function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }

  // Unrolled binary search with constants
  if (value < 0x4E00) { // CJK Unified Ideographs start
    if (value < 0x0530) { // Armenian start
      if (value < 0x0250) { // IPA Extensions start
        if (value < 0x0100) { // Latin Extended-A start
          if (value < 0x0080) { // Latin-1 Supplement start
            return 0; // Basic Latin
          }
          return 1; // Latin-1 Supplement
        }
        if (value < 0x0180) {
          return 2; // Latin Extended-A
        }
        return 3; // Latin Extended-B
      }
      if (value < 0x0300) { // Combining Diacritical Marks start
        if (value < 0x02B0) { // Spacing Modifier Letters start
          return 4; // IPA Extensions
        }
        return 5; // Spacing Modifier Letters
      }
      if (value < 0x0370) { // Greek and Coptic start
        return 6; // Combining Diacritical Marks
      }
      if (value < 0x0400) { // Cyrillic start
        return 7; // Greek and Coptic
      }
      return 9; // Cyrillic
    }
    // Continue with binary search for remaining ranges
    let left = 10; // Index of Armenian in UnicodeRanges
    let right = 102; // Index of CJK Unified Ideographs in UnicodeRanges
    while (left <= right) {
      const mid = Math.floor((left + right) / 2) * 3;
      if (value < UnicodeRanges[mid]) {
        right = Math.floor(mid / 3) - 1;
      } else if (value > UnicodeRanges[mid + 1]) {
        left = Math.floor(mid / 3) + 1;
      } else {
        return UnicodeRanges[mid + 2];
      }
    }
  } else {
    // Binary search for ranges after CJK Unified Ideographs
    let left = 103; // Index after CJK Unified Ideographs in UnicodeRanges
    let right = UnicodeRanges.length / 3 - 1;
    while (left <= right) {
      const mid = Math.floor((left + right) / 2) * 3;
      if (value < UnicodeRanges[mid]) {
        right = Math.floor(mid / 3) - 1;
      } else if (value > UnicodeRanges[mid + 1]) {
        left = Math.floor(mid / 3) + 1;
      } else {
        return UnicodeRanges[mid + 2];
      }
    }
  }
  return -1; // Not found
}
```

Is it correct? I have no idea. It would be easy to write a test that uses UnicodeRanges as the source of truth and tests getUnicodeRangeFor() for every possible value (a sketch of such a test is at the end of this article).

Can I just ask AI to do it faster?

So far I micro-managed the AI coder. I was telling it exactly what kind of change I wanted and commanded the AI until it did what I wanted.

A good boss delegates. A good boss tells the worker the high-level goal and leaves implementation details to the worker. Let's start from scratch and command the AI thusly:

could you make a faster implementation of getUnicodeRangeFor()? You can change the representation of UnicodeRanges to achieve that goal

Holy cow again. The AI figured out the binary search:

```js
const UnicodeRanges = [
  { start: 0x0000, end: 0x007f, index: 0 }, // Basic Latin
  { start: 0x0080, end: 0x00ff, index: 1 }, // Latin-1 Supplement
  { start: 0x0100, end: 0x017f, index: 2 }, // Latin Extended-A
  // ... (other ranges)
  { start: 0x1f000, end: 0x1f09f, index: 122 }, // Domino Tiles - Mahjong Tiles
].sort((a, b) => a.start - b.start);

function getUnicodeRangeFor(value, lastPosition = -1) {
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    if (value >= range.start && value <= range.end) {
      return range.index;
    }
  }

  let left = 0;
  let right = UnicodeRanges.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const range = UnicodeRanges[mid];
    if (value >= range.start && value <= range.end) {
      return range.index;
    }
    if (value < range.start) {
      right = mid - 1;
    } else {
      left = mid + 1;
    }
  }
  return -1;
}
```

The object {start, end, index} is less efficient than my flat array representation and it didn't unroll the binary search, but the core idea of binary search is there. But just for kicks, maybe there's more:

can you make it even more efficient?

It did something slightly different, but really the same. I'm afraid no bonus this quarter. Ok, one last try:

but could you go all out and use every possible implementation trick you know to optimize getUnicodeRangeFor and UnicodeRanges representation?

Fuck me, I have no idea if the generated code is correct, but it does seem it would be faster than the non-unrolled binary search I came up with earlier:

```js
// Preprocess UnicodeRanges into a more efficient structure
const BUCKET_SIZE = 256;
const MAX_UNICODE = 0x10FFFF;
const buckets = new Uint16Array(Math.ceil((MAX_UNICODE + 1) / BUCKET_SIZE));
const ranges = [];

UnicodeRanges.forEach((range, index) => {
  for (let i = 0; i < range.length; i += 2) {
    const start = range[i];
    const end = range[i + 1];
    ranges.push({ start, end, index });
    const startBucket = start >>> 8;
    const endBucket = end >>> 8;
    for (let j = startBucket; j <= endBucket; j++) {
      buckets[j] |= (1 << (index & 15));
    }
  }
});

ranges.sort((a, b) => a.start - b.start);

// Create a compact representation of ranges
const compactRanges = new Uint32Array(ranges.length * 3);
ranges.forEach((range, i) => {
  compactRanges[i * 3] = range.start;
  compactRanges[i * 3 + 1] = range.end;
  compactRanges[i * 3 + 2] = range.index;
});

function getUnicodeRangeFor(value, lastPosition = -1) {
  if (value > MAX_UNICODE) return -1;

  // Check last position first
  if (lastPosition !== -1) {
    const start = compactRanges[lastPosition * 3];
    const end = compactRanges[lastPosition * 3 + 1];
    if (value >= start && value <= end) {
      return compactRanges[lastPosition * 3 + 2];
    }
  }

  // Quick check using buckets
  const bucket = buckets[value >>> 8];
  if (bucket === 0) return -1;

  // Binary search within the relevant ranges
  let left = 0;
  let right = compactRanges.length / 3 - 1;
  while (left <= right) {
    const mid = (left + right) >>> 1;
    const start = compactRanges[mid * 3];
    const end = compactRanges[mid * 3 + 1];
    if (value < start) {
      right = mid - 1;
    } else if (value > end) {
      left = mid + 1;
    } else {
      return compactRanges[mid * 3 + 2];
    }
  }
  return -1;
}
```

In retrospect, using Uint32Array for the UnicodeRanges array was a no-brainer. It's more compact in memory and the values will probably be compared as integers, not floats, so it should be faster to scan than an array of floats.

The buckets are not necessary. They seem to be for a quick "doesn't exist" check, which is not important; I believe most lookups are for values that exist. I'm humbled that just asking for super duper optimization made the AI produce something I didn't think of.

More optimization ideas

I can't help myself. Those are ideas I didn't ask AI to implement.

UnicodeRanges is small. A linear search of a compact Uint32Array representation where we just have (start, end) values for each range would be faster than binary search due to cache lines. We could start the search in the middle of the array and scan half the data going forward or backwards.

We could also store ranges smaller than 0x10000 in a Uint16Array and larger ones in a Uint32Array. And do linear search starting in the middle.

Since the values are smaller than 256, we could encode the first 0xffff values in 64kB as a Uint8Array and the rest as a Uint32Array. That would probably be the fastest on average, because I believe most lookups are for Unicode chars smaller than 0xffff.

Finally, we could calculate the frequency of each range in a representative sample of PDF documents, check the ranges in order of that frequency, fully unrolled into code, without any tables.

Conclusions

AI is a promising way to do tedious code refactoring. If I didn't have the AI, I would have to write a program to e.g. convert UnicodeRanges to a flat representation. It's simple and therefore doable, but it certainly would take longer than the few minutes it took me to command the AI.

The final unrolling of getUnicodeRangeFor() would probably never happen. It would require writing a sophisticated code generator, which would be a big project by itself.

AI can generate buggy code so it needs to be carefully reviewed. The unrolled binary search could not be verified by review; it would need a test. But hey, I could command my AI sidekick to write the test for me.

There was this idea of organizing programming teams into a master programmer and coding grunts. The job of the master programmer, the thinking went, was to generate high-level ideas and have coding grunts implement them. Turns out we can't organize people that way, but now we can use AI to be our coding grunt.

Prompt engineering is a thing. I wasted a bunch of time doing incremental improvements. I should have started by asking for super-duper optimization.

The productivity gain is real. The whole thing took me about an hour. For this particular task, easily 2x compared to not using cheap AI labor. Imagine you're running a software business and instead of spending 2 months on a task, you only spend 1 month.

I'll be using more AI for coding in the future.
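For reference, here is a hedged sketch of the exhaustive test mentioned above: compare the optimized function against a naive linear scan over the original nested array, which serves as the source of truth (UnicodeRangesOrig is my assumed name for that original array; it's not from the post):

```js
// Naive reference: linear scan of the original nested UnicodeRanges.
function referenceRangeFor(value) {
  for (let i = 0; i < UnicodeRangesOrig.length; i++) {
    const range = UnicodeRangesOrig[i];
    for (let j = 0; j < range.length; j += 2) {
      if (value >= range[j] && value <= range[j + 1]) {
        return i;
      }
    }
  }
  return -1;
}

// Exhaustively check every possible Unicode code point.
for (let value = 0; value <= 0x10ffff; value++) {
  const expected = referenceRangeFor(value);
  const got = getUnicodeRangeFor(value);
  if (got !== expected) {
    throw new Error(
      `mismatch at 0x${value.toString(16)}: got ${got}, expected ${expected}`
    );
  }
}
```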

Implementing Notion-like table of contents in JavaScript

Notion-like table of contents in JavaScript

Long web pages benefit from having a table of contents. Especially technical, reference documentation. As a reader you want a way to quickly navigate to a specific part of the documentation.

This article describes how I implemented a table of contents for the documentation page of my Edna note-taking application. It took only a few hours. Here's the full code.

A good toc

A good table of contents is:

- always available
- unobtrusive

A table of contents cannot be always visible. Space is always at a premium and should be used for the core functionality of a web page. For a documentation page the core is documentation text, so space should be used to show documentation. But a toc should always be available in some compact form.

I noticed that Notion implemented toc in a nice way. Since great artists steal, I decided to implement similar behavior for Edna's documentation.

When hidden, we show a mini toc i.e. for each toc entry we have a gray rectangle. A black rectangle indicates the current position in the document. It's small and unobtrusive. When you hover the mouse over that area we show the actual toc. Clicking a title goes to that part of the page.

Implementing table of contents

My implementation can be added to any page.

Grabbing toc elements

I assume h1 to h6 elements mark table of contents entries. I use their text as the text of the entry. After the page loads I build the HTML for the toc. I grab all header elements:

```js
function getAllHeaders() {
  return Array.from(document.querySelectorAll("h1, h2, h3, h4, h5, h6"));
}
```

Each toc entry is represented by:

```js
class TocItem {
  text = "";
  hLevel = 0;
  nesting = 0;
  element;
}
```

text is what we show to the user. hLevel is 1 … 6 for h1 … h6. nesting is like hLevel but sanitized; we use it to indent text in the toc, to show the tree structure of the content. element is the actual HTML element. We remember it so that we can scroll to that element with JavaScript.

I create an array of TocItem for each header element on the page:

```js
function buildTocItems() {
  let allHdrs = getAllHeaders();
  let res = [];
  for (let el of allHdrs) {
    /** @type {string} */
    let text = el.innerText.trim();
    text = removeHash(text);
    text = text.trim();
    let hLevel = parseInt(el.tagName[1]);
    let h = new TocItem();
    h.text = text;
    h.hLevel = hLevel;
    h.nesting = 0;
    h.element = el;
    res.push(h);
  }
  return res;
}

function removeHash(str) {
  return str.replace(/#$/, "");
}
```

Generate toc HTML

Toc wrapper

Here's the high-level structure. .toc-wrapper has 2 children:

- .toc-mini, always visible, shows an overview of the toc
- .toc-list, hidden by default, shown on hover over .toc-wrapper

The wrapper is always shown in the upper right corner using fixed position:

```css
.toc-wrapper {
  position: fixed;
  top: 1rem;
  right: 1rem;
  z-index: 50;
}
```

You can adjust top and right for your needs. When the toc is too long to be fully shown on screen, we must make it scrollable; default scrollbars in Chrome are also large, so the real code additionally styles them to be smaller and less intrusive.

When the user hovers over .toc-wrapper, we switch display from .toc-mini to .toc-list:

```css
.toc-wrapper:hover > .toc-mini {
  display: none;
}
.toc-wrapper:hover > .toc-list {
  display: flex;
}
```

Generate mini toc

We want to generate the following HTML:

```html
<div class="toc-mini">
  <div class="toc-item-mini toc-light">▃</div>
  <!-- ... repeat for every TocItem -->
</div>
```

▃ is a Unicode character that is a filled rectangle covering the bottom 30% of the character. We use a very small font because it's only a compact navigation helper.

.toc-light is a gray color. By removing this class we make the rectangle the default black, which marks the current position in the document.

```css
.toc-mini {
  display: flex;
  flex-direction: column;
  font-size: 6pt;
  cursor: pointer;
}

.toc-light {
  color: lightgray;
}
```

Generating HTML in vanilla JavaScript is not great, but it works for small things:

```js
function genTocMini(items) {
  let tmp = "";
  let t = `<div class="toc-item-mini toc-light">▃</div>`;
  for (let i = 0; i < items.length; i++) {
    tmp += t;
  }
  return `<div class="toc-mini">` + tmp + `</div>`;
}
```

items is an array of TocItem we get from buildTocItems(). We mark the items with the toc-item-mini class so that we can query them all easily.

Generate toc list

The table of contents list is only slightly more complicated:

```html
<div class="toc-list">
  <div title="{title}" class="toc-item toc-trunc {ind}" onclick=tocGoTo({n})>{text}</div>
  <!-- ... repeat for every TocItem -->
</div>
```

{ind} is the name of the indent class, like:

```css
.toc-ind-1 {
  padding-left: 4px;
}
```

tocGoTo(n) is a function that scrolls the page to show the n-th TocItem.element at the top.

```js
function genTocList(items) {
  let tmp = "";
  let t = `<div title="{title}" class="toc-item toc-trunc {ind}" onclick=tocGoTo({n})>{text}</div>`;
  let n = 0;
  for (let h of items) {
    let s = t;
    s = s.replace("{n}", n);
    let ind = "toc-ind-" + h.nesting;
    s = s.replace("{ind}", ind);
    s = s.replace("{text}", h.text);
    s = s.replace("{title}", h.text);
    tmp += s;
    n++;
  }
  return `<div class="toc-list">` + tmp + `</div>`;
}
```

.toc-trunc is for limiting the width of the toc and gracefully truncating it:

```css
.toc-trunc {
  max-width: 32ch;
  min-width: 12ch;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}
```

Putting it all together

Here's the code that runs at page load, generates the HTML and appends it to the page:

```js
function genToc() {
  tocItems = buildTocItems();
  fixNesting(tocItems);
  const container = document.createElement("div");
  container.className = "toc-wrapper";
  let s = genTocMini(tocItems);
  let s2 = genTocList(tocItems);
  container.innerHTML = s + s2;
  document.body.appendChild(container);
}
```

Navigating

Showing / hiding the toc list is done in CSS. When the user clicks a toc item, we need to show it at the top of the page:

```js
let tocItems = [];

function tocGoTo(n) {
  let el = tocItems[n].element;
  let y = el.getBoundingClientRect().top + window.scrollY;
  let offY = 12;
  y -= offY;
  window.scrollTo({
    top: y,
  });
}
```

We remembered the HTML element in TocItem.element, so all we need to do is scroll to it to show it at the top of the page. You can adjust offY e.g. if you show a navigation bar at the top that overlays the content, you want to make offY at least the height of the navigation bar.

Updating toc mini to reflect current position

When the user scrolls the page we want to reflect that in the toc mini by changing the color of the corresponding rectangle from gray to black. On the scroll event we calculate which visible TocItem.element is closest to the top of the window.

```js
function updateClosestToc() {
  let closestIdx = -1;
  let closestDistance = Infinity;

  for (let i = 0; i < tocItems.length; i++) {
    let tocItem = tocItems[i];
    const rect = tocItem.element.getBoundingClientRect();
    const distanceFromTop = Math.abs(rect.top);
    if (
      distanceFromTop < closestDistance &&
      rect.bottom > 0 &&
      rect.top < window.innerHeight
    ) {
      closestDistance = distanceFromTop;
      closestIdx = i;
    }
  }
  if (closestIdx >= 0) {
    console.log("Closest element:", closestIdx);
    let els = document.querySelectorAll(".toc-item-mini");
    let cls = "toc-light";
    for (let i = 0; i < els.length; i++) {
      let el = els[i];
      if (i == closestIdx) {
        el.classList.remove(cls);
      } else {
        el.classList.add(cls);
      }
    }
  }
}

window.addEventListener("scroll", updateClosestToc);
```

All together now

After the page loads I run:

```js
genToc();
updateClosestToc();
```

Which I achieve by including this in the HTML:

```html
<script src="/help.js" defer></script>
</body>
```

Possible improvements

Software is never finished. Software can always be improved. I have 2 ideas for further improvements.

Always visible when there's enough space

Most of the time my browser window uses half of a 13 to 15 inch screen. I'm aggravated when websites don't work well at that size. At that size there's not enough space to always show the toc. But if the user chooses a wider browser window, it makes sense to use the available space and always show the table of contents.

Keyboard navigation

It would be nice to navigate the table of contents with the keyboard, in addition to the mouse. For example:

- t would show the table of contents
- Esc would dismiss it
- up / down arrows would navigate the toc tree
- Enter would navigate to the selected part and dismiss the toc
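One detail the snippets above gloss over: genToc() calls fixNesting(), which the article never shows. Based on the TocItem description ("nesting is like hLevel but sanitized"), here is a plausible sketch of what it might do; this is my assumption, not the article's actual code:

```js
// Hypothetical sketch: map the distinct h-levels actually present on the
// page to consecutive 0-based nesting levels, so a page that only uses
// h2 and h4 still indents as two levels (0 and 1) rather than 1 and 3.
function fixNesting(items) {
  let levels = [...new Set(items.map((it) => it.hLevel))].sort((a, b) => a - b);
  for (let it of items) {
    it.nesting = levels.indexOf(it.hLevel);
  }
}
```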

Porting a medium-sized Vue application to Svelte 5

Porting a medium-sized Vue application to Svelte 5

The short version: porting from Vue to Svelte is pretty straightforward and Svelte 5 is a nice upgrade to Svelte 4.

Why port?

I'm working on Edna, a note-taking application for developers. It started as a fork of Heynote. I've added a bunch of features, most notably managing multiple notes.

Heynote is written in Vue. Vue is similar enough to Svelte that I was able to add features without really knowing Vue, but Svelte is what I use for all my other projects. At some point I had invested enough effort (over 350 commits) into Edna that I decided to port it from Vue to Svelte. That way I can write future code faster (I know Svelte much better than Vue) and re-use code from my other Svelte projects.

Since Svelte 5 is about to be released, I decided to try it out.

There were 10 .vue components. It took me about 3 days to port everything.

Adding Svelte 5 to the build pipeline

I started by adding Svelte 5 and converting the simplest component. In the above commit I:

- installed Svelte 5 and its Vite plugin by adding them to package.json
- updated tailwind.config.cjs to also scan .svelte files
- added the Svelte plugin to vite.config.js to run the Svelte compiler on .svelte and .svelte.js files during build
- deleted Help.vue, which is not related to porting, I just wasn't using it anymore
- started converting the smallest component, AskFSPermissions.vue, as AskFSPermissions.svelte

In the next commit I:

- finished porting AskFSPermissions.vue
- tweaked tsconfig.json so that VSCode type-checks .svelte files
- replaced AskFSPermissions.vue with the Svelte 5 version

Here replacing was easy because the component was a stand-alone component. All I had to do was replace Vue's:

```js
app = createApp(AskFSPermissions);
app.mount("#app");
```

with Svelte 5:

```js
const args = {
  target: document.getElementById("app"),
};
appSvelte = mount(AskFSPermissions, args);
```

Overall porting strategy

The next part was harder. Edna's structure is: App.vue is the main component which shows / hides other components depending on state and desired operations.

My preferred way of porting would be to start with leaf components and port them to Svelte one by one. However, I haven't found an easy way of using .svelte components from .vue components. It's possible: a Svelte 5 component can be imported and mounted into an arbitrary html element, and I could pass props down to it. If the project was bigger (say weeks of porting) I would try to make that work so that I have a working app at all times.

Given that I estimated I could port it quickly, I went with a different strategy: I created a mostly empty App.svelte and started porting components, starting with the simplest leaf components. I didn't have a working app, but I could see and test the components I had ported so far.

This strategy had its challenges. Namely: most of the state is not there, so I had to fake it for a while. For example, the first component I ported was TopNav.vue, which displays the name of the current note in the top upper part of the screen. The problem was: I hadn't ported the logic to load the file yet. For a while I had to fake the state i.e. I created a noteName variable with a dummy value. With each ported component I would port the App.vue parts needed by the component.

Replacing third-party components

Most of the code in Edna is written by me (or comes from the original Heynote project) and doesn't use third-party Vue libraries. There are 2 exceptions: I wanted to show notification messages and have a context menu.

Showing notification messages isn't hard: for another project I wrote a Svelte component for that in a few hours. But since I didn't know Vue well, it would have taken me much longer, possibly days. For that reason I had opted to use a third-party toast notifications Vue library. The same goes for the menu component. Even more so: implementing a menu component is complicated, at least a few days of effort.

When porting to Svelte I replaced the third-party vue-toastification library with my own code. At under 100 loc it was trivial to write. For the context menu I re-used the context menu I wrote for my notepad2 project. It's a more complicated component, so it took longer to port.

Vue => Svelte 5 porting

Vue and Svelte have a very similar structure, so porting is straightforward and mostly mechanical. The big picture (a small before / after sketch follows at the end of this article):

- <template> becomes the Svelte template: remove <template> and replace Vue control flow directives with Svelte equivalents. For example <div v-if="foo"> becomes {#if foo}<div>{/if}
- setup() can be done either at top-level, when the component is imported, or in $effect( () => { ... } ) when the component is mounted
- data() becomes variables. Some of them are regular JavaScript variables and some of them become reactive $state()
- props becomes $props()
- mounted() becomes $effect( () => { ... } )
- methods() become regular JavaScript functions
- computed() becomes $derived.by( () => { ... } )
- ref() becomes $state()
- $emit('foo') becomes an onfoo callback prop. It could also be an event, but Svelte 5 recommends callback props over events
- @click becomes onclick
- v-model="foo" becomes bind:value={foo}
- {{ foo }} in an HTML template becomes { foo }
- ref="foo" becomes bind:this={foo}
- :disabled="!isEnabled" becomes disabled={!isEnabled}
- CSS was scoped so it didn't need any changes

Svelte 5

At the time of this writing Svelte 5 is a Release Candidate and the creators tell you not to use it in production. Guess what, I'm using it in production. It works and it's stable. I think Svelte 5 devs operate from a mindset of "abundance of caution". All software has bugs, including Svelte 4. If Svelte 5 doesn't work, you'll know it.

Coming from Svelte 4, Svelte 5 is a nice upgrade. One small project is too early to have deep thoughts, but I like it so far. It's easy to learn new ways of doing things. It's easy to convert Svelte 4 to Svelte 5, even without any tools. Things are even more compact and more convenient than in Svelte 4. {#snippet} adds functionality that I was missing from Svelte 4.
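To make the mapping above concrete, here is a minimal hedged sketch of a counter component in Svelte 5, with the rough Vue equivalents noted in comments (a hypothetical example, not code from Edna):

```svelte
<script>
  // Vue: props: ["label"]            -> $props()
  let { label } = $props();
  // Vue: data() { return { n: 0 } }  -> $state()
  let n = $state(0);
  // Vue: computed: { double() { return this.n * 2 } } -> $derived()
  let double = $derived(n * 2);
</script>

<!-- Vue: <button @click="n++">{{ label }}: {{ n }} ({{ double }})</button> -->
<button onclick={() => n++}>{label}: {n} ({double})</button>
```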

Changing font size in Windows dialog in C++

How to dynamically change font size in a Windows dialog

Windows's win32 API is old and crufty. Many things that are trivial to do in HTML are difficult in win32. One of those things is changing the size of the font used by your native, desktop app.

I encountered this in SumatraPDF. A user asked for a way to increase the font size. I introduced the UIFontSize option, but implementing it was difficult and time consuming. One of the issues was changing the font size used in dialogs. This article describes how I did it. The method is based on https://stackoverflow.com/questions/14370238/can-i-dynamically-change-the-font-size-of-a-dialog-window-created-with-c-in-vi

How dialogs work

SumatraPDF defines a bunch of dialogs in SumatraPDF.rc. Here's a find dialog:

```
IDD_DIALOG_FIND DIALOGEX 0, 0, 247, 52
STYLE DS_SETFONT | DS_MODALFRAME | DS_FIXEDSYS | WS_POPUP | WS_CAPTION | WS_SYSMENU
CAPTION "Find"
FONT 8, "MS Shell Dlg", 400, 0, 0x1
BEGIN
    LTEXT           "&Find what:",IDC_STATIC,6,8,60,9
    EDITTEXT        IDC_FIND_EDIT,66,6,120,13,ES_AUTOHSCROLL
    CONTROL         "&Match case",IDC_MATCH_CASE,"Button",BS_AUTOCHECKBOX | WS_TABSTOP,6,24,180,9
    LTEXT           "Hint: Use the F3 key for finding again",IDC_FIND_NEXT_HINT,6,37,180,9,WS_DISABLED
    DEFPUSHBUTTON   "Find",IDOK,191,6,50,14
    PUSHBUTTON      "Cancel",IDCANCEL,191,24,50,14
END
```

.rc files are compiled by the resource compiler rc.exe and embedded in the resources section of a PE .exe file. The compiled version is a binary blob with a stable format. At runtime we can get that binary blob from resources and pass it to the DialogBoxIndirectParam() function to create a dialog.

How to change the font size of a dialog at runtime

DIALOGEX tells us it's an extended dialog, which has a different binary layout than a non-extended DIALOG. As you can see, part of the dialog definition is a font definition:

```
FONT 8, "MS Shell Dlg", 400, 0, 0x1
```

To provide a FONT you also need to specify the DS_SETFONT or DS_FIXEDSYS flag. We're asking for the MS Shell Dlg font with a size of 8 points (12 pixels). 400 specifies standard weight (800 would be a bold font).

Unfortunately the binary blob is generated at compilation time, and we want to change the font size when the application runs. The simplest way to achieve that is to patch the binary blob in memory.

The code for changing dialog font size at runtime

You can find the full code at https://github.com/sumatrapdfreader/sumatrapdf/blob/b6aed9e7d257510ff82fee915506ce2e75481c64/src/SumatraDialogs.cpp#L20 It uses a small amount of SumatraPDF base code, so you'll need to lightly massage it to use it in your own code.

The layout of the binary blob is documented at http://msdn.microsoft.com/en-us/library/ms645398(v=VS.85).aspx In C++ this is represented by the following struct:

```cpp
#pragma pack(push, 1)
struct DLGTEMPLATEEX {
    WORD dlgVer;     // 0x0001
    WORD signature;  // 0xFFFF
    DWORD helpID;
    DWORD exStyle;
    DWORD style;
    WORD cDlgItems;
    short x, y, cx, cy;
    /*
    sz_Or_Ord menu;
    sz_Or_Ord windowClass;
    WCHAR title[titleLen];
    WORD fontPointSize;
    WORD fontWWeight;
    BYTE fontIsItalic;
    BYTE fontCharset;
    WCHAR typeface[stringLen];
    */
};
#pragma pack(pop)
```

#pragma pack(push, 1) tells the C++ compiler not to add padding between struct members. The part after x, y, cx, cy is commented out because sz_Or_Ord and WCHAR [] are variable length, which can't be represented in a C++ struct.

fontPointSize is the value we need to patch. But first we need to get a copy of the binary blob:

```cpp
DLGTEMPLATE* DupTemplate(int dlgId) {
    HRSRC dialogRC = FindResourceW(nullptr, MAKEINTRESOURCE(dlgId), RT_DIALOG);
    CrashIf(!dialogRC);
    HGLOBAL dlgTemplate = LoadResource(nullptr, dialogRC);
    CrashIf(!dlgTemplate);
    void* orig = LockResource(dlgTemplate);
    size_t size = SizeofResource(nullptr, dialogRC);
    CrashIf(size == 0);
    DLGTEMPLATE* ret = (DLGTEMPLATE*)memdup(orig, size);
    UnlockResource(orig);
    return ret;
}
```

dlgId is from the .rc file (e.g. IDD_DIALOG_FIND for our find dialog). Most of it is win32 APIs; memdup() makes a copy of a memory block.

Here's the code to patch the font size:

```cpp
static void SetDlgTemplateExFont(DLGTEMPLATE* tmp, int fontSize) {
    CrashIf(!IsDlgTemplateEx(tmp));
    DLGTEMPLATEEX* tpl = (DLGTEMPLATEEX*)tmp;
    CrashIf(!HasDlgTemplateExFont(tpl));
    u8* d = (u8*)tpl;
    d += sizeof(DLGTEMPLATEEX);
    // sz_Or_Ord menu
    d = SkipSzOrOrd(d);
    // sz_Or_Ord windowClass;
    d = SkipSzOrOrd(d);
    // WCHAR[] title
    d = SkipSz(d);
    // WORD pointSize;
    WORD* wd = (WORD*)d;
    fontSize = ToFontPointSize(fontSize);
    *wd = fontSize;
}
```

We start at the end of the fixed-size portion of the blob (d += sizeof(DLGTEMPLATEEX)). We then skip the variable-length fields menu, windowClass and title and patch the font size in points. SumatraPDF code operates in pixels, so it has to convert that to Windows points:

```cpp
static int ToFontPointSize(int fontSize) {
    int res = (fontSize * 72) / 96;
    return res;
}
```

Here's how we skip past sz_Or_Ord fields:

```cpp
/*
Type: sz_Or_Ord
A variable-length array of 16-bit elements that identifies a menu resource
for the dialog box. If the first element of this array is 0x0000, the dialog
box has no menu and the array has no other elements. If the first element is
0xFFFF, the array has one additional element that specifies the ordinal value
of a menu resource in an executable file. If the first element has any other
value, the system treats the array as a null-terminated Unicode string that
specifies the name of a menu resource in an executable file.
*/
static u8* SkipSzOrOrd(u8* d) {
    WORD* pw = (WORD*)d;
    WORD w = *pw++;
    if (w == 0x0000) {
        // no menu
    } else if (w == 0xffff) {
        // menu id followed by another WORD item
        pw++;
    } else {
        // anything else: zero-terminated WCHAR*
        WCHAR* s = (WCHAR*)pw;
        while (*s) {
            s++;
        }
        s++;
        pw = (WORD*)s;
    }
    return (u8*)pw;
}
```

Strings are zero-terminated utf-16:

```cpp
static u8* SkipSz(u8* d) {
    WCHAR* s = (WCHAR*)d;
    while (*s) {
        s++;
    }
    s++; // skip terminating zero
    return (u8*)s;
}
```

To make the code more robust, we check that the dialog is extended and has font information to patch:

```cpp
static bool IsDlgTemplateEx(DLGTEMPLATE* tpl) {
    return tpl->style == MAKELONG(0x0001, 0xFFFF);
}

static bool HasDlgTemplateExFont(DLGTEMPLATEEX* tpl) {
    DWORD style = tpl->style & (DS_SETFONT | DS_FIXEDSYS);
    return style != 0;
}
```

Changing the font name

It's also possible to change the font name, but it's slightly harder (which is why I didn't implement it). WCHAR typeface[] is an inline null-terminated string that is the name of the font. To change it we would also have to move the data that follows it.

The roads not taken

There are other ways to achieve this. A dialog is just a HWND. In the WM_INITDIALOG message we could iterate over all controls, change their font with the WM_SETFONT message, and then resize the controls and the window. That's much more work than our solution. We just patch the font size and let Windows do the font setting and resizing.

Another option would be to generate the binary blob representing dialogs at runtime. It would require writing more code, but then we could define new dialogs in C++ code that wouldn't be that much different than .rc syntax. I want to explore that solution because it would also allow adding a simple layout system to simplify defining the dialogs. In .rc files everything must be absolutely positioned. The visual dialog editor helps a bit but is unreliable, and I need resizing logic anyway because after translating strings absolute positioning doesn't work.
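For completeness, here is a hedged sketch of how the pieces from the implemented solution might be glued together when creating the dialog. The wrapper function and the freeing call are my assumptions, not SumatraPDF's actual code:

```cpp
// Hypothetical glue: duplicate the template, patch the font size (in pixels,
// as SumatraPDF's code expects), then create the dialog from the patched blob.
static INT_PTR ShowFindDialogWithFontSize(HWND parent, DLGPROC dlgProc, int fontSizePx) {
    DLGTEMPLATE* tpl = DupTemplate(IDD_DIALOG_FIND);
    if (IsDlgTemplateEx(tpl)) {
        SetDlgTemplateExFont(tpl, fontSizePx); // converts px -> points internally
    }
    INT_PTR res = DialogBoxIndirectParamW(nullptr, tpl, parent, dlgProc, 0);
    free(tpl); // assumes memdup() allocates with malloc()
    return res;
}
```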

How I implemented wc in the browser in 3 days

Building wc in the browser

From time to time I like to run wc -l on my source code to see how much code I wrote. For those not in the know: wc -l shows the number of lines in files. Actually, what I have to do is more like find -name "*.go" | xargs wc -l because wc isn't particularly good at handling directories. I just want to see the number of lines in all my source files, man. I don't want to google the syntax of find and xargs for the hundredth time.

After learning about the File System API I decided to write a tool that does just that as a web app. No need to install software. I did just that and you can use it yourself. Here's how it sees itself:

The rest of this article describes how I would have done it if I did it.

Building software quickly

It only took me 3 days, which is a testament to how productive the web platform can be. My weapons of choice are:

- Svelte for frontend
- Tailwind CSS for CSS
- JSDoc for static typing of JavaScript
- File System API to access files and directories on your computer
- Vite for a bundler and dev server
- Render to deploy

For a small project Svelte and Tailwind CSS are arguably overkill. I used them because I standardized on that toolset. Standardization allows me to re-use prior experience and sometimes even code.

Why those technologies? Svelte is React without the bloat. Try it and you'll love it. Tailwind CSS is CSS but more productive. You have to try it to believe it. JSDoc is a happy medium between no types at all and TypeScript. I have great internal resistance to switching to TypeScript. Maybe 5 years from now.

And none of that would be possible without browser APIs that allow access to files on your computer. Which Firefox doesn't implement because they are happy to lose market share to browsers that implement useful features. Clearly $3 million a year is not enough to buy yourself a CEO with an understanding of the obvious.

Implementation tidbits

Getting a list of files

To get a recursive listing of files in a directory use showDirectoryPicker to get a FileSystemDirectoryHandle. Call dirHandle.values() to get a list of directory entries. Recurse if an entry is a directory.

Not all browsers support that API. To detect if it works:

```js
/**
 * @returns {boolean}
 */
export function isIFrame() {
  let isIFrame = false;
  try {
    // in iframe, those are different
    isIFrame = window.self !== window.top;
  } catch {
    // do nothing
  }
  return isIFrame;
}

/**
 * @returns {boolean}
 */
export function supportsFileSystem() {
  return "showDirectoryPicker" in window && !isIFrame();
}
```

Because people on Hacker News always complain about slow, bloated software I took pains to make my code fast. One of those pains was using an array instead of an object to represent a file system entry. Wait, now HN people will complain that I'm optimizing prematurely. Listen buddy, Steve Wozniak wrote assembly in hex and he liked it. In comparison, optimizing the memory layout of the most frequently used object in JavaScript is like drinking champagne on Jeff Bezos' yacht.

Here's a JavaScript trick for optimizing the memory layout of objects with a fixed number of fields: derive your class from an Array.

Deriving a class from an Array

A little known thing about JavaScript is that an Array is just an object; you can derive your class from it and add methods, getters and setters. You get the compact layout of an array and the convenience of accessors. Here's a sketch of how I implemented the FsEntry object:

```js
// a directory tree. each element is either a file:
// [file, dirHandle, name, path, size, null]
// or directory:
// [[entries], dirHandle, name, path, size, null]
// extra null value is for the caller to stick additional data
// without the need to re-allocate the array
// if you need more than 1, use an object

// handle (file or dir), parentHandle (dir), size, path, dirEntries, meta
const handleIdx = 0;
const parentHandleIdx = 1;
const sizeIdx = 2;
const pathIdx = 3;
const dirEntriesIdx = 4;
const metaIdx = 5;

export class FsEntry extends Array {
  get size() {
    return this[sizeIdx];
  }
  // ... rest of the accessors
}
```

We have 6 slots in the array and we can access them as e.g. entry[sizeIdx]. We can hide this implementation detail by writing a getter like the FsEntry.size shown above.

Reading a directory recursively

Once you get a FileSystemDirectoryHandle by using window.showDirectoryPicker() you can read the content of the directory. Here's one way to implement a recursive read of a directory:

```js
/**
 * @param {FileSystemDirectoryHandle} dirHandle
 * @param {Function} skipEntryFn
 * @param {string} dir
 * @returns {Promise<FsEntry>}
 */
export async function readDirRecur(
  dirHandle,
  skipEntryFn = dontSkip,
  dir = dirHandle.name
) {
  /** @type {FsEntry[]} */
  let entries = [];
  // @ts-ignore
  for await (const handle of dirHandle.values()) {
    if (skipEntryFn(handle, dir)) {
      continue;
    }
    const path = dir == "" ? handle.name : `${dir}/${handle.name}`;
    if (handle.kind === "file") {
      let e = await FsEntry.fromHandle(handle, dirHandle, path);
      entries.push(e);
    } else if (handle.kind === "directory") {
      let e = await readDirRecur(handle, skipEntryFn, path);
      e.path = path;
      entries.push(e);
    }
  }
  let res = new FsEntry(dirHandle, null, dir);
  res.dirEntries = entries;
  return res;
}
```

The function skipEntryFn is called for every entry and allows the caller to decide not to include a given entry. You can, for example, skip a directory like .git. It can also be used to show progress of reading the directory to the user, as it happens asynchronously.

Showing the files

I use tables and I'm not ashamed. It's still the best technology to display, well, a table of values where cells are sized to content and columns are aligned. Flexbox doesn't remember anything across rows so it can't align columns. Grid can lay things out properly, but I haven't found a way to easily highlight the whole row when the mouse is over it. With CSS you can only target individual cells in a grid, not rows. With a table I just style <tr class="hover:bg-gray-100">. That's Tailwind speak for: on mouse hover set the background color to light gray.

A folder can contain other folders, so we need recursive components to implement it. Svelte supports that with <svelte:self>.

I implemented it as a tree view where you can expand folders to see their content. It's one big table for everything, but I needed to indent each expanded folder to make it look like a tree. It was a bit tricky. I went with an indent property in my Folder component. It starts at 0 and goes +1 for each level of nesting. Then I style the first file name column as <td class="ind-{indent}">...</td> and use these CSS styles:

```css
:global(.ind-1) {
  padding-left: 0.5rem;
}
:global(.ind-2) {
  padding-left: 1rem;
}
/* ... up to .ind-17 */
```

Except it only goes up to .ind-17. Yes, if you have deeper nesting, it won't show correctly. I'll wait for a bug report before increasing it further.

Calculating line count

You can get the size of the file from FileSystemFileEntry. For source code I want to see the number of lines. It's quite trivial to calculate:

```js
/**
 * @param {Blob} f
 * @returns {Promise<number>}
 */
export async function lineCount(f) {
  if (f.size === 0) {
    // empty files have no lines
    return 0;
  }
  let ab = await f.arrayBuffer();
  let a = new Uint8Array(ab);
  let nLines = 0;
  // if last character is not newline, we must add +1 to line count
  let toAdd = 0;
  for (let b of a) {
    // line endings are:
    // CR (13) LF (10) : windows
    // LF (10)         : unix
    // CR (13)         : mac
    // mac is very rare so we just count 10 as they count
    // windows and unix lines
    if (b === 10) {
      toAdd = 0;
      nLines++;
    } else {
      toAdd = 1;
    }
  }
  return nLines + toAdd;
}
```

It doesn't handle Mac files that use CR for newlines. It's ok to write buggy code as long as you document it.

I also skip known binary files (.png, .exe etc.) and known "not mine" directories like .git and node_modules. Small considerations like that matter.

Remembering opened directories

I typically use it many times on the same directories, and it's a pain to pick the same directory over and over again. A FileSystemDirectoryHandle can be stored in IndexedDB, so I implemented a history of opened directories using a persisted store backed by IndexedDB.

Asking for permissions

When it comes to accessing files and directories on disk you can't ask for forgiveness, you have to ask for permission. The user grants permissions in window.showDirectoryPicker() and the browser remembers them for a while, but they expire quite quickly. You need to re-check and re-ask for permission to FileSystemFileHandle and FileSystemDirectoryHandle before each access:

```js
export async function verifyHandlePermission(fileHandle, readWrite) {
  const options = {};
  if (readWrite) {
    options.mode = "readwrite";
  }
  // Check if permission was already granted. If so, return true.
  if ((await fileHandle.queryPermission(options)) === "granted") {
    return true;
  }
  // Request permission. If the user grants permission, return true.
  if ((await fileHandle.requestPermission(options)) === "granted") {
    return true;
  }
  // The user didn't grant permission, so return false.
  return false;
}
```

If permissions are still valid from before, it's a no-op. If not, the browser will show a dialog asking for permissions. If you ask for write permissions, Chrome will show 2 confirmation dialogs vs. 1 for read-only access. I start with read-only access and, if needed, ask again to get write (or delete) permissions.

Deleting files and directories

Deleting files has nothing to do with showing line counts, but it was easy to implement and useful, so I added it. You need to remember the FileSystemDirectoryHandle of the parent directory.

To delete a file: parentDirHandle.removeEntry("foo.txt")

To delete a directory: parentDirHandle.removeEntry("node_modules", {recursive: true})

Getting bit by a multi-threading bug

JavaScript doesn't have multiple threads, so you can't have all those nasty bugs? Right? Right? Yes and no. Async is not multi-threading, but it does create non-obvious execution flows.

I had a bug: I noticed that some .txt files were showing a line count of 0 even though they clearly did have lines. I went bug hunting. I checked the lineCount function. Seemed ok. I added console.log(), I stepped through the code. Time went by and my frustration level was reaching DEFCON 1. Thankfully, before I reached cocked pistol I had an epiphany.

You see, JavaScript has async where some code can interleave with other code. The browser can splice those async "threads" with UI code. No threads means there are no data races i.e. writing memory values that another thread is in the middle of reading. But we do have non-obvious execution flows. Here's how my code worked:

- get a list of files (async)
- show the files in UI
- calculate line counts for all files (async)
- update UI to show line counts after we get them all

Async is great for users: calculating line counts could take a long time as we need to read all those files. If this process wasn't async it would block the UI. Thanks to async there are enough checkpoints for the browser to process UI events in between processing files.

The issue was that the function to calculate line counts was using an array I got from reading a directory. I passed the same array to the Folder component to show the files. And I sorted the array to show files in a human-friendly order. In JavaScript sorting mutates the array, and that array was partially processed by the line-counting function. As a result, if the series of events was unfortunate enough, I would skip some files in line counting. They would be re-sorted to a position that the line-counting function thought it had already counted. Result: no lines for you!

A happy ending and an easy fix: Folder makes a copy of the array, so sorting doesn't affect the line-counting process (a sketch of that fix is at the end of this article).

The future

No software is ever finished, but I arrived at a point where it does the majority of the job I wanted, so I shipped it. There is a feature I would find useful: statistics for each extension. How many lines are in .go files vs. .js files etc.? But I'm holding off implementing it until:

- I really, really want it
- I get feature requests from people who really, really want it

You can look at the source code. It's source visible but not open source.
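As referenced above, here is a hedged sketch of the copy-before-sort fix; the function name and sort key are my assumptions, though the path accessor is implied by the FsEntry code shown earlier:

```js
// Sort a copy for display; the original entries array is still being
// consumed by the async line-counting pass and must not be mutated.
function sortedForDisplay(entries) {
  return [...entries].sort((a, b) => a.path.localeCompare(b.path));
}
```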


More in programming

ChatGPT Would be a Decent Policy Advisor

Revealed: How the UK tech secretary uses ChatGPT for policy advice by Chris Stokel-Walker for the New Scientist

Setting policy for strategy.

This book’s introduction started by defining strategy as “making decisions.” Then we dug into exploration, diagnosis, and refinement: three chapters where you could argue that we didn’t decide anything at all. Clarifying the problem to be solved is the prerequisite of effective decision making, but eventually decisions do have to be made. Here in this chapter on policy, and the following chapter on operations, we finally start to actually make some decisions. In this chapter, we’ll dig into: How we define policy, and how setting policy differs from operating policy as discussed in the next chapter The structured steps for setting policy How many policies should you set? Is it preferable to have one policy, many policies, or does it not matter much either way? Recurring kinds of policies that appear frequently in strategies Why it’s valuable to be intentional about your strategy’s altitude, and how engineers and executives generally maintain different altitudes in their strategies Criteria to use for evaluating whether your policies are likely to be impactful How to develop novel policies, and why it’s rare Why having multiple bundles of alternative policies is generally a phase in strategy development that indicates a gap in your diagnosis How policies that ignore constraints sound inspirational, but accomplish little Dealing with ambiguity and uncertainty created by missing strategies from cross-functional stakeholders By the end, you’ll be ready to evaluate why an existing strategy’s policies are struggling to make an impact, and to start iterating on policies for strategy of your own. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts. What is policy? Policy is interpreting your diagnosis into a concrete plan. That plan will be a collection of decisions, tradeoffs, and approaches. They’ll range from coding practices, to hiring mandates, to architectural decisions, to guidance about how choices are made within your organization. An effective policy solves the entirety of the strategy’s diagnosis, although the diagnosis itself is encouraged to specify which aspects can be ignored. For example, the strategy for working with private equity ownership acknowledges in its diagnosis that they don’t have clear guidance on what kind of reduction to expect: Based on general practice, it seems likely that our new Private Equity ownership will expect us to reduce R&D headcount costs through a reduction. However, we don’t have any concrete details to make a structured decision on this, and our approach would vary significantly depending on the size of the reduction. Faced with that uncertainty, the policy simply acknowledges the ambiguity and commits to reconsider when more information becomes available: We believe our new ownership will provide a specific target for Research and Development (R&D) operating expenses during the upcoming financial year planning. We will revise these policies again once we have explicit targets, and will delay planning around reductions until we have those numbers to avoid running two overlapping processes. There are two frequent points of confusion when creating policies that are worth addressing directly: Policy is a subset of strategy, rather than the entirety of strategy, because policy is only meaningful in the context of the strategy’s diagnosis. 
For example, the “N-1 backfill policy” makes sense in the context of new, private equity ownership. The policy wouldn’t work well in a rapidly expanding organization. Any strategy without a policy is useless, but you’ll also find policies without context aren’t worth much either. This is particularly unfortunate, because so often strategies are communicated without those critical sections. Policy describes how tradeoffs should be made, but it doesn’t verify how the tradeoffs are actually being made in practice. The next chapter on operations covers how to inspect an organization’s behavior to ensure policies are followed. When reworking a strategy to be more readable, it often makes sense to merge policy and operation sections together. However, when drafting strategy it’s valuable to keep them separate. Yes, you might use a weekly meeting to review whether the policy is being followed, but whether it’s an effective policy is independent of having such a meeting, and what operational mechanisms you use will vary depending on the number of policies you intend to implement. With this definition in mind, now we can move onto the more interesting discussion of how to set policy. How to set policy Every part of writing a strategy feels hard when you’re doing it, but I personally find that writing policy either feels uncomfortably easy or painfully challenging. It’s never a happy medium. Fortunately, the exploration and diagnosis usually come together to make writing your policy simple: although sometimes that simple conclusion may be a difficult one to swallow. The steps I follow to write a strategy’s policy are: Review diagnosis to ensure it captures the most important themes. It doesn’t need to be perfect, but it shouldn’t have omissions so obvious that you can immediately identify them. Select policies that address the diagnosis. Explicitly match each policy to one or more diagnoses that it addresses. Continue adding policies until every diagnosis is covered. This is a broad instruction, but it’s simpler than it sounds because you’ll typically select from policies identified during your exploration phase. However, there certainly is space to tweak those policies, and to reapply familiar policies to new circumstances. If you do find yourself developing a novel policy, there’s a later section in this chapter, Developing novel policies, that addresses that topic in more detail. Consolidate policies in cases where they overlap or adjoin. For example, two policies about specific teams might be generalized into a policy about all teams in the engineering organization. Backtest policy against recent decisions you’ve made. This is particularly effective if you maintain a decision log in your organization. Mine for conflict once again, much as you did in developing your diagnosis. Emphasize feedback from teams and individuals with a different perspective than your own, but don’t wholly eliminate those that you agree with. Just as it’s easy to crowd out opposing views in diagnosis if you don’t solicit their input, it’s possible to accidentally crowd out your own perspective if you anchor too much on others’ perspectives. Consider refinement if you finish writing, and you just aren’t sure your approach works – that’s fine! Return to the refinement phase by deploying one of the refinement techniques to increase your conviction. Remember that we talk about strategy like it’s done in one pass, but almost all real strategy takes many refinement passes. 
The steps of writing policy are relatively pedestrian, largely because you’ve done so much of the work already in the exploration, diagnosis, and refinement steps. If you skip those phases, you could still follow the above steps for writing policy, but the expected quality of the policy itself would be far lower.

How many policies?

Addressing the entirety of the diagnosis is often complex, which is why most strategies feature a set of policies rather than just one. The strategy for decomposing a monolithic application is not one policy deciding not to decompose, but a series of four policies:

1. Business units should always operate in their own code repository and monolith.
2. New integrations across business unit monoliths should be done using gRPC.
3. Except for new business unit monoliths, we don’t allow new services.
4. Merge existing services into business-unit monoliths where you can.

Four isn’t universally the right number either. It’s simply the number that was required to solve that strategy’s diagnosis. With an excellent diagnosis, your policies will often feel inevitable, and perhaps even boring. That’s great: what makes a policy good is that it’s effective, not that it’s novel or inspiring.

Kinds of policies

While there are many policies you can write, I’ve found they generally fall into one of four major categories: approvals, allocations, direction, and guidance. This section introduces those categories.

Approvals define the process for making a recurring decision. This might require invoking an architecture advice process, or it might require involving an authority figure like an executive. In the Index post-acquisition integration strategy, there were a number of complex decisions to be made, and the approval mechanism was:

    Escalations come to paired leads: given our limited shared context across teams, all escalations must come to both Stripe’s Head of Traffic Engineering and Index’s Head of Engineering.

This allowed the acquired and acquiring teams to start building trust between each other by ensuring both were consulted before any decision was finalized. On the other hand, the user data access strategy’s approval policy was more focused on managing corporate risk:

    Exceptions must be granted in writing by CISO. While our overarching Engineering Strategy states that we follow an advisory architecture process as described in Facilitating Software Architecture, the customer data access policy is an exception and must be explicitly approved, with documentation, by the CISO. Start that process in the #ciso channel.

These two approval processes had different goals, so they made tradeoffs differently. There are many ways to tweak approval, allowing for different tradeoffs between safety, productivity, and trust.

Allocations describe how resources are split across multiple potential investments. Allocations are the most concrete statement of organizational priority, and they also articulate the organization’s belief about how productivity happens in teams. Some companies believe you go fast by swarming more people onto critical problems. Other companies believe you go fast by forcing teams to solve problems without additional headcount. Both can work, and each teaches you something important about the company’s beliefs. The strategy on Uber’s service migration has two concrete examples of allocation policies.
The first describes the infrastructure engineering team’s allocation between manual provisioning tasks and investing in a self-service provisioning platform:

    Constrain manual provisioning allocation to maximize investment in self-service provisioning. The service provisioning team will maintain a fixed allocation of one full time engineer on manual service provisioning tasks. We will move the remaining engineers to work on automation to speed up future service provisioning. This will degrade manual provisioning in the short term, but the alternative is permanently degrading provisioning by the influx of new service requests from newly hired product engineers.

The second allocation policy is implicitly noted in this strategy’s diagnosis, where it describes the allocation policy in the engineering organization’s higher-altitude strategy:

    Within infrastructure engineering, there is a team of four engineers responsible for service provisioning today. While our organization is growing at a similar rate as product engineering, none of that additional headcount is being allocated directly to the team working on service provisioning. We do not anticipate this changing.

Allocation policies often create a surprising amount of clarity for the team, and I include them in almost every policy I write, either explicitly or implicitly in a higher-altitude strategy.

Direction provides explicit instruction on how a decision must be made. This is the right tool when you know where you want to go, and exactly how you want to get there. Direction is appropriate for problems you understand clearly, where you value consistency more than empowering individual judgment. Direction works well when you need an unambiguous policy that doesn’t leave room for interpretation. For example, Calm’s policy for working in the monolith:

    We write all code in the monolith. It has been ambiguous if new code (especially new application code) should be written in our JavaScript monolith, or if all new code must be written in a new service outside of the monolith. This is no longer ambiguous: all new code must be written in the monolith. In the rare case that there is a functional requirement that makes writing in the monolith implausible, then you should seek an exception as described below.

In that case, the team couldn’t agree on what should go into the monolith. Individuals would often make incompatible decisions, so creating consistency required removing personal judgment from the equation. Sometimes judgment is the issue, and sometimes consistency is difficult due to misaligned incentives. A good example of this comes from the strategy on working with new Private Equity ownership:

    We will move to an “N-1” backfill policy, where departures are backfilled with a less senior level. We will also institute a strict maximum of one Principal Engineer per business unit.

It’s likely that hiring managers would simply ignore this backfill policy if it were stated more softly, although sometimes less forceful policies are useful.

Guidance provides a recommendation about how a decision should be made. Guidance is useful when there’s enough nuance, ambiguity, or complexity that you can explain the desired destination, but you can’t mandate the path to reaching it.
One example of guidance comes from the Index acquisition integration strategy:

    Minimize changes to tokenization environment: because point-of-sale devices directly work with customer payment details, the API that directly supports the point-of-sale device must live within our secured environment where payment details are stored. However, any other functionality must not be added to our tokenization environment.

This might read like direction, but it’s clarifying the desired outcome of avoiding unnecessary complexity in the tokenization environment. However, it’s not able to articulate what complexity is necessary, so ultimately it’s guidance because it requires significant judgment to interpret.

A second example of guidance comes from the strategy on decomposing a monolithic codebase:

    Merge existing services into business-unit monoliths where you can. We believe that each choice to move existing services back into a monolith should be made “in the details” rather than from a top-down strategy perspective. Consequently, we generally encourage teams to wind down their existing services outside of their business unit’s monolith, but defer to teams to make the right decision for their local context.

This is another case of knowing the desired outcome, but encountering too much uncertainty to direct the team on how to get there. If you ask five engineers whether it’s possible to merge a given service back into a monolithic codebase, they’ll probably disagree. That’s fine, and it highlights the value of guidance: it makes it possible to make incremental progress in areas where more concrete direction would cause confusion.

When you’re working on a strategy’s policy section, it’s important to consider all of these categories. Which of them feels most natural to use will vary depending on your team and role, but they’re all usable:

- If you’re a developer productivity team, you might have to lean heavily on guidance in your policies, paired with increased support for that guidance within the details of your platform.
- If you’re an executive, you might lean heavily on direction. Indeed, you might lean too heavily on direction, where guidance often works better for areas where you understand the destination but not the path.
- If you’re a product engineering organization, you might have to narrow the scope of your direction to the engineers within that organization to deal with the realities of complex cross-organization dynamics.

Finally, if you have a clear approach you want to take that doesn’t fit cleanly into any of these categories, don’t let this framework dissuade you. Give it a try, and adapt if it doesn’t initially work out.

Maintaining strategy altitude

The chapter on when to write engineering strategy introduced the concept of strategy altitude: being deliberate about where certain kinds of policies are created within your organization. Without repeating that section in its entirety, it’s particularly relevant when you set policy to consider how your new policies eliminate flexibility within your organization. Consider these two somewhat opposing strategies:

- Stripe’s Sorbet strategy only worked in an organization that enforced the use of a single programming language across (essentially) all teams.
- Uber’s service migration strategy worked well in an organization that was unwilling to enforce consistent programming language adoption across teams.

Stripe’s organization-altitude policy took away the freedom of individual teams to select their preferred technology stack.
In return, they unlocked the ability to centralize investment in a powerful way. Uber went the opposite way, unlocking the ability of teams to pick their preferred technology stack, while significantly reducing their centralized teams’ leverage. Both altitudes make sense. Both have consequences.

Criteria for effective policies

In The Engineering Executive’s Primer’s chapter on engineering strategy, I introduced three criteria for evaluating policies. They ought to be applicable, enforced, and create leverage. Defining those a bit:

- Applicable: it can be used to navigate complex, real scenarios, particularly when making tradeoffs.
- Enforced: teams will be held accountable for following the guiding policy.
- Create leverage: create compounding or multiplicative impact.

The last of these three, create leverage, made sense in the context of a book about engineering executives, but probably doesn’t make as much sense here. Some policies certainly should create leverage (e.g. empowering a developer experience team by restricting new services), but others might not (e.g. moving to an N-1 backfill policy). Outside the executive context, what’s important isn’t necessarily creating leverage, but that a policy solves for part of the diagnosis. That leaves the other two, being applicable and enforced, both of which are necessary for a policy to actually address the diagnosis. Any policy which you can’t determine how to apply, or aren’t willing to enforce, simply won’t be useful.

Let’s apply these criteria to a handful of potential policies. First, let’s think about policies we might write to improve the talent density of our engineering team:

- “We only hire world-class engineers.” This isn’t applicable, because it’s unclear what a world-class engineer means. Because there’s no mutually agreeable definition in this policy, it’s also not consistently enforceable.
- “We only hire engineers that get at least one ‘strong yes’ in scorecards.” This is applicable, because there’s a clear definition. It’s enforceable, depending on the willingness of the organization to reject seemingly good candidates who don’t happen to get a strong yes.

Next, let’s think about a policy regarding code reuse within a codebase:

- “We follow a strict Don’t Repeat Yourself policy in our codebase.” There’s room for debate within a team about whether two pieces of code are truly duplicative, but this is generally applicable. Because there’s room for debate, deciding how to enforce it is a very context-specific determination.
- “Code authors are responsible for determining if their contributions violate Don’t Repeat Yourself, and rewriting them if they do.” This is much more applicable, because now only a single person’s judgment is needed to assess the potential repetition. In some ways, this policy is also more enforceable, because there’s no longer any ambiguity around who decides whether a piece of code is a repetition. The challenge is that enforceability now depends on one individual, and making this policy effective will require holding individuals accountable for the quality of their judgment. An organization that’s unwilling to distinguish between good and bad judgment won’t get any value out of the policy.

This is a good example of how a good policy in one organization might become a poor policy in another.
If you ever find yourself wanting to include a policy that for some reason either can’t be applied or can’t be enforced, stop to ask yourself what you’re trying to accomplish, and consider whether there’s a different policy better suited to that goal.

Developing novel policies

My experience is that there are vanishingly few truly novel policies to write. There’s almost always someone else who has already done something similar to your intended approach. Calm’s engineering strategy is such a case: the details are particular to the company, but the general approach is common across the industry. The most likely place to find truly novel policies is during the adoption phase of a new widespread technology, such as the rise of ubiquitous mobile phones, cloud computing, or large language models. Even then, as explored in the strategy for adopting large language models, the new technology can be engaged with as a generic technology:

    Develop an LLM-backed process for reactivating departed and suspended drivers in mature markets. Through modeling our driver lifecycle, we determined that improving onboarding time will have little impact on the total number of active drivers. Instead, we are focusing on mechanisms to reactivate departed and suspended drivers, which is the only opportunity to meaningfully impact active drivers.

You could simply replace “LLM” with “data-driven” and it would be equally readable. In this way, policy can generally sidestep areas of uncertainty by being a bit abstract. This avoids being overly specific about topics you simply don’t know much about.

However, even if your policy isn’t novel to the industry, it might still be novel to you or your organization. The steps that I’ve found useful to debug novel policies are the same steps as running a condensed version of the strategy process, with a focus on exploration and refinement:

1. Collect a number of similar policies, with a focus on how those policies differ from the policy you are creating.
2. Create a systems model to articulate how this policy will work, and also how it will differ from the similar policies you’re considering.
3. Run a strategy testing cycle for your proto-policy to discover any unknown-unknowns about how it works in practice.

Whether you run into this scenario is largely a function of the extent of your, and your organization’s, experience. Early in my career, I found myself doing novel (for me) strategy work very frequently; these days I rarely find myself doing novel work, instead focusing on adapting well-known policies to new circumstances.

Are competing policy proposals an anti-pattern?

When creating policy, you’ll often have to engage with the question of whether you should develop one preferred policy or a series of potential policies to pick from. Developing these is a useful stage of setting policy, but rather than helping you refine your policy, I’d encourage you to think of this as exposing gaps in your diagnosis. For example, when Stripe developed the Sorbet ruby-typing tooling, there was debate between two policies:

1. Should we build a ruby-typing tool to allow a centralized team to gradually migrate the company to a typed codebase?
2. Should we migrate the codebase to a preexisting strongly typed language like Golang or Java?

These were, initially, equally valid hypotheses. It was only by clarifying our diagnosis around resourcing that it became clear that incurring the bulk of costs in a centralized team was clearly preferable to spreading the costs across many teams.
Specifically, we recognized that we wanted to prioritize short-term product engineering velocity, even if it led to a longer migration overall.

If you do develop multiple policy options, I encourage you to move the alternatives into an appendix rather than including them in the core of your strategy document. This will make it easier for readers of your final version to understand how to follow your policies, and they are the most important long-term users of your written strategy.

Recognizing constraints

A similar problem to competing policy proposals is developing a policy that you cannot possibly fund. It’s easy to get enamored with policies that you can’t meaningfully enforce or resource, but that’s bad policy, even if it would work in an alternate universe where enforcing or resourcing it were possible. To consider a few examples:

- The strategy for controlling access to user data might have proposed requiring manual approval by a second party for every access to customer data. However, that would have gone nowhere.
- Our approach to Uber’s service migration might have required more staffing for the infrastructure engineering team, but we knew that wasn’t going to happen, so it would have been a meaningless policy proposal to make.
- The strategy for navigating private equity ownership might have argued that new ownership should not hold engineering accountable to a new standard on spending. But they would have just invalidated that strategy in the next financial planning period.

If you find a policy that contemplates an impractical approach, it doesn’t only indicate that the policy is a poor one; it also suggests that your diagnosis is missing an important pillar. Rather than debating the policy options, the fastest path to resolution is to align on the diagnosis that would invalidate potential paths forward. In cases where aligning on the diagnosis isn’t possible, for example because you simply don’t understand the possibilities of a new technology, as encountered in the strategy for adopting LLMs, you’ve typically found a valuable opportunity to use strategy refinement to build alignment.

Dealing with missing strategies

At a recent company offsite, we were debating which policies we might adopt to deal with annual plans that kept getting derailed after less than a month. Someone remarked that this would be much easier if we could get the executive team to commit to a clearer, written strategy about which business units we were prioritizing. They were, of course, right. It would be much easier. Unfortunately, it goes back to the problem we discussed in the diagnosis chapter about reframing blockers into diagnosis. If a strategy from the company or a peer function is missing, the empowering thing to do is to include the absence in your diagnosis and move forward.

Sometimes, even when you do this, it’s easy to fall back into the belief that you cannot set a policy because a peer function might set a conflicting policy in the future. Whether you’re an executive or an engineer, you’ll never have all the details you want to make the ideal policy. Meaningful leadership requires taking meaningful risks, which never becomes comfortable.

Summary

After working through this chapter, you know how to develop policy, how to assemble policies to solve your diagnosis, and how to avoid a number of the frequent challenges that policy writers encounter. At this point, there’s only one phase of strategy left to dig into: operating the policies you’ve created.

Fast and random sampling in SQLite

I was building a small feature for the Flickr Commons Explorer today: show a random selection of photos from the entire collection. I wanted a fast and varied set of photos.

This meant getting a random sample of rows from a SQLite table (because the Explorer stores all its data in SQLite). I’m happy with the code I settled on, but it took several attempts to get right.

Approach #1: ORDER BY RANDOM()

My first attempt was pretty naïve – I used an ORDER BY RANDOM() clause to sort the table, then limit the results:

    SELECT * FROM photos ORDER BY random() LIMIT 10

This query works, but it was slow – about half a second to sample a table with 2 million photos (which is very small by SQLite standards). This query would run on every request for the homepage, so that latency is unacceptable.

It’s slow because it forces SQLite to generate a value for every row, then sort all the rows, and only then does it apply the limit. SQLite is fast, but there’s only so fast you can sort millions of values.

I found a suggestion from Stack Overflow user Ali to do a random sort on the id column first, pick my IDs from that, and only fetch the whole row for the photos I’m selecting:

    SELECT * FROM photos
    WHERE id IN (
        SELECT id FROM photos ORDER BY RANDOM() LIMIT 10
    )

This means SQLite only has to load the rows it’s returning, not every row in the database. This query was over three times faster – about 0.15s – but that’s still slower than I wanted.

Approach #2: WHERE rowid > (…)

Scrolling down the Stack Overflow page, I found an answer by Max Shenfield with a different approach:

    SELECT * FROM photos
    WHERE rowid > (
        ABS(RANDOM()) % (SELECT max(rowid) FROM photos)
    )
    LIMIT 10

The rowid is a unique identifier that’s used as a primary key in most SQLite tables, and it can be looked up very quickly. SQLite automatically assigns a unique rowid unless you explicitly tell it not to, or create your own integer primary key.

This query works by picking a point between the biggest and smallest rowid values used in the table, then getting the rows with rowids which are higher than that point. If you want to know more, Max’s answer has a more detailed explanation.

This query is much faster – around 0.0008s – but I didn’t go this route. The result is more like a random slice than a random sample. In my testing, it always returned contiguous rows – 101, 102, 103, … – which isn’t what I want. The photos in the Commons Explorer database were inserted in upload order, so photos with adjacent row IDs were uploaded at around the same time and are probably quite similar. I’d get one photo of an old plane, then nine more photos of other planes. I want more variety!

(This behaviour isn’t guaranteed – if you don’t add an ORDER BY clause to a SELECT query, then the order of results is undefined. SQLite is returning rows in rowid order in my table, and a quick Google suggests that’s pretty common, but that may not be true in all cases. It doesn’t affect whether I want to use this approach, but I mention it here because I was confused about the ordering when I read this code.)

Approach #3: Select random rowid values outside SQLite

Max’s answer was the first time I’d heard of rowid, and it gave me an idea – what if I chose random rowid values outside SQLite? This is a less “pure” approach because I’m not doing everything in the database, but I’m happy with that if it gets the result I want.

Here’s the procedure I came up with:

1. Create an empty list to store our sample.
2. Find the highest rowid that’s currently in use:

       sqlite> SELECT MAX(rowid) FROM photos;
       1913389

3. Use a random number generator to pick a rowid between 1 and the highest rowid. If we’ve already got this rowid, discard it and generate a new one. (The rowid is a signed, 64-bit integer, and SQLite assigns rowids starting from 1, so 1 is the minimum value we need to consider.)

       >>> import random
       >>> random.randint(1, max_rowid)
       196476

4. Look for a row with that rowid:

       SELECT * FROM photos WHERE rowid = 196476

5. If such a row exists, add it to our sample. If we have enough items in our sample, we’re done. Otherwise, return to step 3 and generate another rowid.
6. If such a row doesn’t exist, return to step 3 and generate another rowid.
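Here’s what that procedure might look like as a single Python function, using the standard library’s sqlite3 module. This is a minimal sketch rather than the Explorer’s actual code: the function name and parameters are my own, and only the photos table name comes from the queries above.

    import random
    import sqlite3

    def random_sample(db_path, size=10):
        # Sketch: pick random rowids until we've collected `size` rows
        # that actually exist in the table.
        conn = sqlite3.connect(db_path)
        max_rowid = conn.execute("SELECT MAX(rowid) FROM photos").fetchone()[0]

        sample = []
        seen_rowids = set()

        while len(sample) < size:
            rowid = random.randint(1, max_rowid)

            # If we've already tried this rowid, discard it.
            if rowid in seen_rowids:
                continue
            seen_rowids.add(rowid)

            # Look for a row with this rowid; it may not exist if rows
            # were deleted, in which case we just try another rowid.
            row = conn.execute(
                "SELECT * FROM photos WHERE rowid = ?", (rowid,)
            ).fetchone()
            if row is not None:
                sample.append(row)

        return sample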
This requires a bit more code, but it returns a diverse sample of photos, which is what I really care about. It’s a bit slower, but still plenty fast enough (about 0.001s).

This approach is best for tables where the rowid values are mostly contiguous – it would be slower if there are lots of rowids between 1 and the max that don’t exist. If there are large gaps in rowid values, you might try multiple missing entries before finding a valid row, slowing down the query. You might want to try something different, like tracking valid rowid values separately (there’s a sketch of that idea at the end of this post).

This is a good fit for my use case, because photos don’t get removed from Flickr Commons very often. Once a row is written, it sticks around, and over 97% of the possible rowid values do exist.

Summary

Here are the four approaches I tried:

    Approach                     Performance (for 2M rows)   Notes
    ORDER BY RANDOM()            ~0.5s                       Slowest, easiest to read
    WHERE id IN (SELECT id …)    ~0.15s                      Faster, still fairly easy to understand
    WHERE rowid > ...            ~0.0008s                    Returns clustered results
    Random rowid in Python       ~0.001s                     Fast and returns varied results; requires code outside SQL

I’m using the random rowid in Python approach in the Commons Explorer, trading code complexity for speed. I’m using this random sample to render a web page, so it’s important that it returns quickly – when I was testing ORDER BY RANDOM(), I could feel myself waiting for the page to load.

But I’ve used ORDER BY RANDOM() in the past, especially for asynchronous data pipelines where I don’t care about absolute performance. It’s simpler to read and easier to see what’s going on.

Now it’s your turn – visit the Commons Explorer and see what random gems you can find. Let me know if you spot anything cool!
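As promised, here’s one way the “track valid rowid values separately” idea could look. This is a sketch under assumptions, not the author’s implementation: it loads every rowid into memory once and samples from that list, trading an upfront scan (and a few megabytes of memory for ~2M rows) for never hitting a missing row.

    import random
    import sqlite3

    def random_sample_dense(db_path, size=10):
        conn = sqlite3.connect(db_path)

        # Load every valid rowid once; this list could be cached between
        # requests for a table that rarely changes.
        rowids = [r[0] for r in conn.execute("SELECT rowid FROM photos")]

        # random.sample never repeats, and every chosen rowid is
        # guaranteed to exist, so no retry loop is needed.
        chosen = random.sample(rowids, size)

        placeholders = ",".join("?" * len(chosen))
        return conn.execute(
            f"SELECT * FROM photos WHERE rowid IN ({placeholders})", chosen
        ).fetchall()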
