More from Krzysztof Kowalczyk blog
How AI beat me at code optimization game

When I started writing this article I did not expect AI to beat me at optimizing JavaScript code. But it did.

I’m really passionate about optimizing JavaScript. Some say it’s a mental illness but I like my code to go balls to the wall fast. I feel the need. The need for speed.

Optimizing code often requires tedious refactoring. Can we delegate the tedious parts to AI? Can I just have ideas and get AI to be my programming slave? Let’s find out.

Optimizing Unicode range lookup with AI

In my experiment I used Cursor with the Claude 3.5 Sonnet model. I assume it could be done with other tools / models.

I was browsing pdf.js code and saw this function:

const UnicodeRanges = [
  [0x0000, 0x007f], // 0 - Basic Latin
  ... omitted
  [0x0250, 0x02af, 0x1d00, 0x1d7f, 0x1d80, 0x1dbf], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  ... omitted
];

function getUnicodeRangeFor(value, lastPosition = -1) {
  // TODO: create a map range => position, sort the ranges and cache it.
  // Then we can make a binary search for finding a range for a given unicode.
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    for (let i = 0, ii = range.length; i < ii; i += 2) {
      if (value >= range[i] && value <= range[i + 1]) {
        return lastPosition;
      }
    }
  }
  for (let i = 0, ii = UnicodeRanges.length; i < ii; i++) {
    const range = UnicodeRanges[i];
    for (let j = 0, jj = range.length; j < jj; j += 2) {
      if (value >= range[j] && value <= range[j + 1]) {
        return i;
      }
    }
  }
  return -1;
}

This is a simple function: given a Unicode character it returns its Unicode range (a number 0 ... UnicodeRanges.length - 1). The number is cleverly encoded implicitly as the array index.

The code already has a TODO about a potential optimization: sort UnicodeRanges and do binary search. There’s a minor difficulty: some Unicode ranges have multiple intervals, so we have to change UnicodeRanges to include the range number explicitly as a third element:

const UnicodeRanges = [
  [0x0000, 0x007f, 0], // 0 - Basic Latin
  [0x0250, 0x02af, 4], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  [0x1d00, 0x1d7f, 4], // 4 - IPA Extensions - Phonetic Extensions - Phonetic Extensions Supplement
  ... omitted
];

Then sort it and change the implementation of getUnicodeRangeFor(). Re-writing UnicodeRanges is simple but tedious and I don’t feel like doing it. Can AI do that change for us?

Changing representation

I selected the whole UnicodeRanges array, added it to chat with Ctrl + Shift + L and commanded:

Change representation of UnicodeRanges array where each entry is 3 elements: [start, end, rangeNo]; preserve comments; if rangeNo has multiple ranges, split into separate entries

Cursor responded with:

const UnicodeRanges = [
  [0x0000, 0x007f, 0], // 0 - Basic Latin
  [0x0080, 0x00ff, 1], // 1 - Latin-1 Supplement
  ... omitted
];

There was an issue: it only did part of the array. So I commanded:

don’t be lazy, finish the whole UnicodeRanges array

It did, but the result was too long so I had to click the Continue button. It worked and the generated code seemed correct, so I applied it.

For binary search, the array must be sorted, so I commanded:

Sort the array by first element

It did, the result looked correct, so I accepted the change.

Hmm, I can get used to having a programming slave.
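For comparison, doing that representation change without AI would mean writing a small throwaway script. Here’s a minimal sketch (mine, not from the article), assuming the original array-of-arrays is available under a hypothetical name UnicodeRangesOld; note that unlike the AI it wouldn’t preserve the per-range comments, which is part of what makes the manual route tedious:

// split every range into [start, end, rangeNo] triples, one per interval
function convertUnicodeRanges(oldRanges) {
  const out = [];
  for (let rangeNo = 0; rangeNo < oldRanges.length; rangeNo++) {
    const r = oldRanges[rangeNo];
    for (let i = 0; i < r.length; i += 2) {
      out.push([r[i], r[i + 1], rangeNo]);
    }
  }
  // sort by interval start so binary search works
  out.sort((a, b) => a[0] - b[0]);
  return out;
}

const UnicodeRanges = convertUnicodeRanges(UnicodeRangesOld);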
Updating the code to implement binary search

I copied getUnicodeRangeFor to chat and commanded:

change the implementation to use binary search over UnicodeRanges array

It gave me:

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    if (value >= range[0] && value <= range[1]) {
      return lastPosition;
    }
  }

  // Binary search
  let left = 0;
  let right = UnicodeRanges.length - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const range = UnicodeRanges[mid];

    if (value < range[0]) {
      right = mid - 1;
    } else if (value > range[1]) {
      left = mid + 1;
    } else {
      return range[2]; // Return the range index
    }
  }

  return -1; // Not found
}

It looks correct, so I accepted it.

We achieved the goal: we replaced linear search with binary search using cheap AI labor. But I want more. More speed.

Changing array of arrays to flat array

A more efficient representation of the ranges would be a flat array:

const UnicodeRanges = [
  0x0000, 0x007f, 0, // 0 - Basic Latin
  0x0080, 0x00ff, 1, // 1 - Latin-1 Supplement
];

Let’s see if our AI slave can do it:

change the whole UnicodeRanges array to flat array, preserve comments, don’t be lazy and do all of it

I pre-emptively commanded the AI to convert the whole array. It did it.

Now is the time to update getUnicodeRangeFor(). It’s a simple change I could do myself: we just multiply the index by 3 to get the range definition. But I kind of like commanding the AI, so I did it again:

update getUnicodeRangeFor for new flat array representations of UnicodeRanges

It gave me:

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }

  // Binary search
  let left = 0;
  let right = UnicodeRanges.length / 3 - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const index = mid * 3;
    const rangeStart = UnicodeRanges[index];
    const rangeEnd = UnicodeRanges[index + 1];

    if (value < rangeStart) {
      right = mid - 1;
    } else if (value > rangeEnd) {
      left = mid + 1;
    } else {
      return UnicodeRanges[index + 2]; // Return the range index
    }
  }

  return -1; // Not found
}

It does seem correct, but let’s do one more tweak:

move binary search code to its own function

And it worked:

function binarySearchUnicodeRange(value) {
  let left = 0;
  let right = UnicodeRanges.length / 3 - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const index = mid * 3;
    const rangeStart = UnicodeRanges[index];
    const rangeEnd = UnicodeRanges[index + 1];

    if (value < rangeStart) {
      right = mid - 1;
    } else if (value > rangeEnd) {
      left = mid + 1;
    } else {
      return UnicodeRanges[index + 2]; // Return the range index
    }
  }

  return -1; // Not found
}

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }

  // Perform binary search
  return binarySearchUnicodeRange(value);
}

Inlining some of the binary search

It’s good, but we haven’t yet reached the levels of insanity I’ve hinted at. UnicodeRanges is a fixed array. We could “unroll” the binary search into code. It would be very painful to do manually. I would have to write a custom code generator to ensure it’s correct, but maybe AI can do it for us?
unroll first 3 levels of binarySearchUnicodeRange

To my surprise, it worked. The AI understood what I wanted and gave me this:

function getUnicodeRangeFor(value, lastPosition = -1) {
  ... omitted
  const length = UnicodeRanges.length / 3;
  let mid = Math.floor(length / 2) * 3;

  if (value < UnicodeRanges[mid]) {
    mid = Math.floor(mid / 2);
    if (value < UnicodeRanges[mid]) {
      mid = Math.floor(mid / 2);
    } else if (value > UnicodeRanges[mid + 1]) {
      mid = mid + Math.floor(mid / 2);
    } else {
      return UnicodeRanges[mid + 2];
    }
  } else if (value > UnicodeRanges[mid + 1]) {
    mid = mid + Math.floor(mid / 2);
    if (value < UnicodeRanges[mid]) {
      mid = mid - Math.floor(mid / 4);
    } else if (value > UnicodeRanges[mid + 1]) {
      mid = mid + Math.floor(mid / 4);
    } else {
      return UnicodeRanges[mid + 2];
    }
  } else {
    return UnicodeRanges[mid + 2];
  }
  ... omitted
}

Can we do even better?

unroll the values of UnicodeRanges[mid] etc. and replace with constants

Holy cow, it worked:

function getUnicodeRangeFor(value, lastPosition = -1) {
  // Check the last position first
  if (lastPosition !== -1) {
    const index = lastPosition * 3;
    if (value >= UnicodeRanges[index] && value <= UnicodeRanges[index + 1]) {
      return UnicodeRanges[index + 2];
    }
  }

  // Unrolled binary search with constants
  if (value < 0x4E00) { // CJK Unified Ideographs start
    if (value < 0x0530) { // Armenian start
      if (value < 0x0250) { // IPA Extensions start
        if (value < 0x0100) { // Latin Extended-A start
          if (value < 0x0080) { // Latin-1 Supplement start
            return 0; // Basic Latin
          }
          return 1; // Latin-1 Supplement
        }
        if (value < 0x0180) {
          return 2; // Latin Extended-A
        }
        return 3; // Latin Extended-B
      }
      if (value < 0x0300) { // Combining Diacritical Marks start
        if (value < 0x02B0) { // Spacing Modifier Letters start
          return 4; // IPA Extensions
        }
        return 5; // Spacing Modifier Letters
      }
      if (value < 0x0370) { // Greek and Coptic start
        return 6; // Combining Diacritical Marks
      }
      if (value < 0x0400) { // Cyrillic start
        return 7; // Greek and Coptic
      }
      return 9; // Cyrillic
    }
    // Continue with binary search for remaining ranges
    let left = 10; // Index of Armenian in UnicodeRanges
    let right = 102; // Index of CJK Unified Ideographs in UnicodeRanges
    while (left <= right) {
      const mid = Math.floor((left + right) / 2) * 3;
      if (value < UnicodeRanges[mid]) {
        right = Math.floor(mid / 3) - 1;
      } else if (value > UnicodeRanges[mid + 1]) {
        left = Math.floor(mid / 3) + 1;
      } else {
        return UnicodeRanges[mid + 2];
      }
    }
  } else {
    // Binary search for ranges after CJK Unified Ideographs
    let left = 103; // Index after CJK Unified Ideographs in UnicodeRanges
    let right = UnicodeRanges.length / 3 - 1;
    while (left <= right) {
      const mid = Math.floor((left + right) / 2) * 3;
      if (value < UnicodeRanges[mid]) {
        right = Math.floor(mid / 3) - 1;
      } else if (value > UnicodeRanges[mid + 1]) {
        left = Math.floor(mid / 3) + 1;
      } else {
        return UnicodeRanges[mid + 2];
      }
    }
  }

  return -1; // Not found
}

Is it correct? I have no idea. It would be easy to write a test that uses UnicodeRanges as the source of truth and tests getUnicodeRangeFor() for every possible value.

Can I just ask AI to do it faster?

So far I micro-managed the AI coder. I was telling it exactly what kind of change I wanted and commanded the AI until it did what I wanted.

A good boss delegates. A good boss tells the worker the high-level goal and leaves the implementation details to the worker.

Let’s start from scratch and command the AI thusly:

could you make a faster implementation of getUnicodeRangeFor()?
You can change the representation of UnicodeRanges to achieve that goal

Holy cow again. The AI figured out the binary search:

const UnicodeRanges = [
  { start: 0x0000, end: 0x007f, index: 0 }, // Basic Latin
  { start: 0x0080, end: 0x00ff, index: 1 }, // Latin-1 Supplement
  { start: 0x0100, end: 0x017f, index: 2 }, // Latin Extended-A
  // ... (other ranges)
  { start: 0x1f000, end: 0x1f09f, index: 122 }, // Domino Tiles - Mahjong Tiles
].sort((a, b) => a.start - b.start);

function getUnicodeRangeFor(value, lastPosition = -1) {
  if (lastPosition !== -1) {
    const range = UnicodeRanges[lastPosition];
    if (value >= range.start && value <= range.end) {
      return range.index;
    }
  }

  let left = 0;
  let right = UnicodeRanges.length - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const range = UnicodeRanges[mid];

    if (value >= range.start && value <= range.end) {
      return range.index;
    }

    if (value < range.start) {
      right = mid - 1;
    } else {
      left = mid + 1;
    }
  }

  return -1;
}

The object {start, end, index} is less efficient than my flat array representation and it didn’t unroll the binary search, but the core idea of binary search is there.

But just for kicks, maybe there’s more:

can you make it even more efficient?

It did something slightly different, but really the same. I’m afraid no bonus this quarter.

Ok, one last try:

but could you go all out and use every possible implementation trick you know to optkmize getUnicodeRangeFor and UnicodeRanges representation?

Fuck me, I have no idea if the generated code is correct, but it does seem it would be faster than the non-unrolled binary search I came up with earlier:

// Preprocess UnicodeRanges into a more efficient structure
const BUCKET_SIZE = 256;
const MAX_UNICODE = 0x10FFFF;
const buckets = new Uint16Array(Math.ceil((MAX_UNICODE + 1) / BUCKET_SIZE));
const ranges = [];

UnicodeRanges.forEach((range, index) => {
  for (let i = 0; i < range.length; i += 2) {
    const start = range[i];
    const end = range[i + 1];
    ranges.push({ start, end, index });
    const startBucket = start >>> 8;
    const endBucket = end >>> 8;
    for (let j = startBucket; j <= endBucket; j++) {
      buckets[j] |= (1 << (index & 15));
    }
  }
});

ranges.sort((a, b) => a.start - b.start);

// Create a compact representation of ranges
const compactRanges = new Uint32Array(ranges.length * 3);
ranges.forEach((range, i) => {
  compactRanges[i * 3] = range.start;
  compactRanges[i * 3 + 1] = range.end;
  compactRanges[i * 3 + 2] = range.index;
});

function getUnicodeRangeFor(value, lastPosition = -1) {
  if (value > MAX_UNICODE) return -1;

  // Check last position first
  if (lastPosition !== -1) {
    const start = compactRanges[lastPosition * 3];
    const end = compactRanges[lastPosition * 3 + 1];
    if (value >= start && value <= end) {
      return compactRanges[lastPosition * 3 + 2];
    }
  }

  // Quick check using buckets
  const bucket = buckets[value >>> 8];
  if (bucket === 0) return -1;

  // Binary search within the relevant ranges
  let left = 0;
  let right = compactRanges.length / 3 - 1;

  while (left <= right) {
    const mid = (left + right) >>> 1;
    const start = compactRanges[mid * 3];
    const end = compactRanges[mid * 3 + 1];

    if (value < start) {
      right = mid - 1;
    } else if (value > end) {
      left = mid + 1;
    } else {
      return compactRanges[mid * 3 + 2];
    }
  }

  return -1;
}

In retrospect, using a Uint32Array for the UnicodeRanges array was a no-brainer. It’s more compact in memory and the values will probably be compared as integers, not floats, so it should be faster to scan than an array of floats.

The buckets are not necessary.
They seem to be for a quick “doesn’t exist” check, which is not important. I believe most lookups are for values that do exist.

I’m humbled that just asking for super-duper optimization made the AI produce something I didn’t think of.

More optimization ideas

I can’t help myself. These are ideas I didn’t ask the AI to implement.

UnicodeRanges is small. A linear search of a compact Uint32Array representation, where we just have (start, end) values for each range, would be faster than binary search due to cache lines. We could start the search in the middle of the array and scan half the data going forward or backwards.

We could also store ranges smaller than 0x10000 in a Uint16Array and larger ones in a Uint32Array, and do a linear search starting in the middle.

Since the range numbers are smaller than 256, we could encode the answers for the first 0xffff values in 64 kB as a Uint8Array and the rest as a Uint32Array. That would probably be the fastest on average, because I believe most lookups are for Unicode chars smaller than 0xffff.

Finally, we could calculate the frequency of each range in a representative sample of PDF documents, check the ranges based on that frequency, fully unrolled into code, without any tables.

Conclusions

AI is a promising way to do tedious code refactoring. If I didn’t have the AI, I would have to write a program to e.g. convert UnicodeRanges to a flat representation. It’s simple and therefore doable, but it certainly would take longer than the few minutes it took me to command the AI. The final unrolling of getUnicodeRangeFor() would probably never happen. It would require writing a sophisticated code generator, which would be a big project by itself.

AI can generate buggy code, so it needs to be carefully reviewed. The unrolled binary search could not be verified by review; it would need a test. But hey, I could command my AI sidekick to write the test for me (a sketch of such a test is below).

There was this idea of organizing programming teams into a master programmer and coding grunts. The job of the master programmer, the thinking went, was to generate high-level ideas and have the coding grunts implement them. It turns out we can’t organize people that way, but now we can use AI as our coding grunt.

Prompt engineering is a thing. I wasted a bunch of time doing incremental improvements. I should have started by asking for super-duper optimization.

The productivity gains are real. The whole thing took me about an hour. For this particular task, easily 2x compared to not using cheap AI labor. Imagine you’re running a software business and instead of spending 2 months on a task, you only spend 1 month.

I’ll be using more AI for coding in the future.
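Here’s roughly what that test could look like: a minimal sketch (mine, not generated by the AI), assuming the original, unmodified array-of-arrays is kept around under a hypothetical name UnicodeRangesOriginal as the source of truth:

// reference implementation: the original linear scan over the
// unmodified array-of-arrays representation
function getUnicodeRangeForReference(value) {
  for (let i = 0; i < UnicodeRangesOriginal.length; i++) {
    const range = UnicodeRangesOriginal[i];
    for (let j = 0; j < range.length; j += 2) {
      if (value >= range[j] && value <= range[j + 1]) {
        return i;
      }
    }
  }
  return -1;
}

// exhaustive check of every possible code point
function testGetUnicodeRangeFor() {
  for (let value = 0; value <= 0x10ffff; value++) {
    const expected = getUnicodeRangeForReference(value);
    const got = getUnicodeRangeFor(value);
    if (got !== expected) {
      throw new Error(
        `mismatch for 0x${value.toString(16)}: got ${got}, expected ${expected}`
      );
    }
  }
}

testGetUnicodeRangeFor();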
Notion-like table of contents in JavaScript

Long web pages benefit from having a table of contents. Especially technical, reference documentation. As a reader you want a way to quickly navigate to a specific part of the documentation.

This article describes how I implemented a table of contents for the documentation page of my Edna note-taking application. It took only a few hours. Here’s the full code.

A good toc

A good table of contents is:

always available
unobtrusive

A table of contents cannot be always visible. Space is always at a premium and should be used for the core functionality of a web page. For a documentation page the core is the documentation text, so space should be used to show documentation. But the toc should always be available in some compact form.

I noticed that Notion implemented their toc in a nice way. Since great artists steal, I decided to implement similar behavior for Edna’s documentation.

When hidden, we show a mini toc, i.e. for each toc entry we have a gray rectangle. A black rectangle indicates the current position in the document. It’s small and unobtrusive.

When you hover the mouse over that area we show the actual toc. Clicking on a title goes to that part of the page.

Implementing table of contents

My implementation can be added to any page.

Grabbing toc elements

I assume h1 to h6 elements mark table of contents entries and I use their text as the text of the entry. After the page loads I build the HTML for the toc. I grab all header elements:

function getAllHeaders() {
  return Array.from(document.querySelectorAll("h1, h2, h3, h4, h5, h6"));
}

Each toc entry is represented by:

class TocItem {
  text = "";
  hLevel = 0;
  nesting = 0;
  element;
}

text is what we show to the user. hLevel is 1 … 6 for h1 … h6. nesting is like hLevel but sanitized; we use it to indent text in the toc, to show the tree structure of the content. element is the actual HTML element. We remember it so that we can scroll to that element with JavaScript.

I create a TocItem for each header element on the page:

function buildTocItems() {
  let allHdrs = getAllHeaders();
  let res = [];
  for (let el of allHdrs) {
    /** @type {string} */
    let text = el.innerText.trim();
    text = removeHash(text);
    text = text.trim();
    let hLevel = parseInt(el.tagName[1]);
    let h = new TocItem();
    h.text = text;
    h.hLevel = hLevel;
    h.nesting = 0;
    h.element = el;
    res.push(h);
  }
  return res;
}

function removeHash(str) {
  return str.replace(/#$/, "");
}

Generate toc HTML

Toc wrapper

Here’s the high-level structure. .toc-wrapper has 2 children:

.toc-mini, always visible, shows an overview of the toc
.toc-list, hidden by default, shown on hover over .toc-wrapper

The wrapper is always shown in the upper right corner using fixed position:

.toc-wrapper {
  position: fixed;
  top: 1rem;
  right: 1rem;
  z-index: 50;
}

You can adjust top and right for your needs. When the toc is too long to be fully shown on screen, we must make it scrollable. Also, the default scrollbars in Chrome are large, so we make them smaller and less intrusive.

When the user hovers over .toc-wrapper, we switch the display from .toc-mini to .toc-list:

.toc-wrapper:hover > .toc-mini {
  display: none;
}
.toc-wrapper:hover > .toc-list {
  display: flex;
}

Generate mini toc

We want to generate the following HTML:

<div class="toc-mini">
  <div class="toc-item-mini toc-light">▃</div>
  ... repeat for every TocItem
</div>

▃ is a Unicode character that is a filled rectangle covering the bottom 30% of the character. We use a very small font because it’s only a compact navigation helper.
.toc-light is a gray color. By removing this class we make it the default black, which marks the current position in the document:

.toc-mini {
  display: flex;
  flex-direction: column;
  font-size: 6pt;
  cursor: pointer;
}

.toc-light {
  color: lightgray;
}

Generating HTML in vanilla JavaScript is not great, but it works for small things:

function genTocMini(items) {
  let tmp = "";
  let t = `<div class="toc-item-mini toc-light">▃</div>`;
  for (let i = 0; i < items.length; i++) {
    tmp += t;
  }
  return `<div class="toc-mini">` + tmp + `</div>`;
}

items is the array of TocItem we get from buildTocItems(). We mark the items with the toc-item-mini class so that we can query them all easily.

Generate toc list

The table of contents list is only slightly more complicated:

<div class="toc-list">
  <div title="{title}" class="toc-item toc-trunc {ind}" onclick=tocGoTo({n})>{text}</div>
  ... repeat for every TocItem
</div>

{ind} is the name of the indent class, like:

.toc-ind-1 {
  padding-left: 4px;
}

tocGoTo(n) is a function that scrolls the page to show the n-th TocItem.element at the top.

function genTocList(items) {
  let tmp = "";
  let t = `<div title="{title}" class="toc-item toc-trunc {ind}" onclick=tocGoTo({n})>{text}</div>`;
  let n = 0;
  for (let h of items) {
    let s = t;
    s = s.replace("{n}", n);
    let ind = "toc-ind-" + h.nesting;
    s = s.replace("{ind}", ind);
    s = s.replace("{text}", h.text);
    s = s.replace("{title}", h.text);
    tmp += s;
    n++;
  }
  return `<div class="toc-list">` + tmp + `</div>`;
}

.toc-trunc is for limiting the width of the toc and gracefully truncating entries that are too long:

.toc-trunc {
  max-width: 32ch;
  min-width: 12ch;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}

Putting it all together

Here’s the code that runs at page load, generates the HTML and appends it to the page:

function genToc() {
  tocItems = buildTocItems();
  fixNesting(tocItems);
  const container = document.createElement("div");
  container.className = "toc-wrapper";
  let s = genTocMini(tocItems);
  let s2 = genTocList(tocItems);
  container.innerHTML = s + s2;
  document.body.appendChild(container);
}

Navigating

Showing / hiding the toc list is done in CSS. When the user clicks a toc item, we need to show it at the top of the page:

let tocItems = [];

function tocGoTo(n) {
  let el = tocItems[n].element;
  let y = el.getBoundingClientRect().top + window.scrollY;
  let offY = 12;
  y -= offY;
  window.scrollTo({
    top: y,
  });
}

We remembered the HTML element in TocItem.element, so all we need to do is scroll to it to show it at the top of the page. You can adjust offY, e.g. if you show a navigation bar at the top that overlays the content, you want to make offY at least the height of the navigation bar.

Updating toc mini to reflect current position

When the user scrolls the page we want to reflect that in the mini toc by changing the color of the corresponding rectangle from gray to black. On the scroll event we calculate which visible TocItem.element is closest to the top of the window:
function updateClosestToc() {
  let closestIdx = -1;
  let closestDistance = Infinity;

  for (let i = 0; i < tocItems.length; i++) {
    let tocItem = tocItems[i];
    const rect = tocItem.element.getBoundingClientRect();
    const distanceFromTop = Math.abs(rect.top);
    if (
      distanceFromTop < closestDistance &&
      rect.bottom > 0 &&
      rect.top < window.innerHeight
    ) {
      closestDistance = distanceFromTop;
      closestIdx = i;
    }
  }

  if (closestIdx >= 0) {
    console.log("Closest element:", closestIdx);
    let els = document.querySelectorAll(".toc-item-mini");
    let cls = "toc-light";
    for (let i = 0; i < els.length; i++) {
      let el = els[i];
      if (i == closestIdx) {
        el.classList.remove(cls);
      } else {
        el.classList.add(cls);
      }
    }
  }
}

window.addEventListener("scroll", updateClosestToc);

All together now

After the page loads I run:

genToc();
updateClosestToc();

Which I achieve by including this in the HTML:

<script src="/help.js" defer></script>
</body>

Possible improvements

Software is never finished. Software can always be improved. I have 2 ideas for further improvements.

Always visible when enough space

Most of the time my browser window uses half of a 13 to 15 inch screen. I’m aggravated when websites don’t work well at that size. At that size there’s not enough space to always show the toc. But if the user chooses a wider browser window, it makes sense to use the available space and always show the table of contents.

Keyboard navigation

It would be nice to navigate the table of contents with the keyboard, in addition to the mouse. For example (a rough sketch follows below):

t would show the table of contents
Esc would dismiss it
up / down arrows would navigate the toc tree
Enter would navigate to the selected part and dismiss the toc
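A hypothetical sketch of how that keyboard handling could be wired up (this is not part of the current implementation; it assumes an extra toc-open class with matching CSS rules that force .toc-list visible, and it doesn’t yet highlight the selected item):

let tocVisible = false;
let selectedIdx = 0;

function showToc(show) {
  tocVisible = show;
  // assumes CSS like:
  // .toc-wrapper.toc-open > .toc-mini { display: none; }
  // .toc-wrapper.toc-open > .toc-list { display: flex; }
  document.querySelector(".toc-wrapper").classList.toggle("toc-open", show);
}

window.addEventListener("keydown", (ev) => {
  // don't steal keys from text inputs
  if (ev.target instanceof HTMLInputElement || ev.target instanceof HTMLTextAreaElement) {
    return;
  }
  if (!tocVisible) {
    if (ev.key === "t") {
      showToc(true);
      ev.preventDefault();
    }
    return;
  }
  switch (ev.key) {
    case "Escape":
      showToc(false);
      break;
    case "ArrowDown":
      selectedIdx = Math.min(selectedIdx + 1, tocItems.length - 1);
      break;
    case "ArrowUp":
      selectedIdx = Math.max(selectedIdx - 1, 0);
      break;
    case "Enter":
      tocGoTo(selectedIdx);
      showToc(false);
      break;
    default:
      return;
  }
  ev.preventDefault();
});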
Porting a medium-sized Vue application to Svelte 5

The short version: porting from Vue to Svelte is pretty straightforward and Svelte 5 is a nice upgrade to Svelte 4.

Why port?

I’m working on Edna, a note taking application for developers. It started as a fork of Heynote. I’ve added a bunch of features, most notably managing multiple notes.

Heynote is written in Vue. Vue is similar enough to Svelte that I was able to add features without really knowing Vue, but Svelte is what I use for all my other projects. At some point I had invested enough effort (over 350 commits) into Edna that I decided to port it from Vue to Svelte. That way I can write future code faster (I know Svelte much better than Vue) and re-use code from my other Svelte projects.

Since Svelte 5 is about to be released, I decided to try it out.

There were 10 .vue components. It took me about 3 days to port everything.

Adding Svelte 5 to build pipeline

I started by adding Svelte 5 and converting the simplest component. In that commit:

I installed Svelte 5 and its vite plugin by adding them to package.json
updated tailwind.config.cjs to also scan .svelte files
added the Svelte plugin to vite.config.js to run the Svelte compiler on .svelte and .svelte.js files during build
deleted Help.vue, which is not related to porting, I just wasn’t using it anymore
started converting the smallest component, AskFSPermissions.vue, as AskFSPermissions.svelte

In the next commit:

I finished porting AskFSPermissions.vue
I tweaked tsconfig.json so that VSCode type-checks .svelte files
I replaced AskFSPermissions.vue with the Svelte 5 version

Here replacing was easy because the component was a stand-alone component. All I had to do was to replace Vue’s:

app = createApp(AskFSPermissions);
app.mount("#app");

with Svelte 5:

const args = {
  target: document.getElementById("app"),
};
appSvelte = mount(AskFSPermissions, args);

Overall porting strategy

The next part was harder. Edna’s structure is: App.vue is the main component which shows / hides other components depending on state and desired operations.

My preferred way of porting would be to start with leaf components and port them to Svelte one by one. However, I haven’t found an easy way of using .svelte components from .vue components. It’s possible: a Svelte 5 component can be imported and mounted into an arbitrary HTML element, and I could pass props down to it. If the project were bigger (say, weeks of porting) I would try to make it work so that I have a working app at all times.

Given that I estimated I could port it quickly, I went with a different strategy: I created a mostly empty App.svelte and started porting components, starting with the simplest leaf components. I didn’t have a working app, but I could see and test the components I had ported so far.

This strategy had its challenges, namely: most of the state was not there, so I had to fake it for a while. For example, the first component I ported was TopNav.vue, which displays the name of the current note in the upper part of the screen. The problem was: I hadn’t ported the logic to load the file yet. For a while I had to fake the state, i.e. I created a noteName variable with a dummy value. With each ported component I would also port the parts of App.vue needed by the component.

Replacing third-party components

Most of the code in Edna is written by me (or comes from the original Heynote project) and doesn’t use third-party Vue libraries. There are 2 exceptions: I wanted to show notification messages and have a context menu.
Showing notification messages isn’t hard: for another project I wrote a Svelte component for that in a few hours. But since I didn’t know Vue well, it would have taken me much longer, possibly days. For that reason I opted to use a third-party toast notifications Vue library.

The same goes for the menu component. Even more so: implementing a menu component is complicated, at least a few days of effort.

When porting to Svelte I replaced the third-party vue-toastification library with my own code. At under 100 lines of code it was trivial to write. For the context menu I re-used the context menu I wrote for my notepad2 project. It’s a more complicated component, so it took longer to port.

Vue => Svelte 5 porting

Vue and Svelte have a very similar structure, so porting is straightforward and mostly mechanical. The big picture (a tiny before / after example is at the end of this post):

<template> becomes the Svelte template. Remove <template> and replace Vue control flow directives with the Svelte equivalents. For example <div v-if="foo"> becomes {#if foo}<div>...</div>{/if}
setup() can be done either at the top level, when the component is imported, or in $effect(() => { ... }) when the component is mounted
data() becomes variables. Some of them are regular JavaScript variables and some of them become reactive $state()
props becomes $props()
mounted() becomes $effect(() => { ... })
methods() become regular JavaScript functions
computed() becomes $derived.by(() => { ... })
ref() becomes $state()
$emit('foo') becomes an onfoo callback prop. It could also be an event, but Svelte 5 recommends callback props over events
@click becomes onclick
v-model="foo" becomes bind:value={foo}
{{ foo }} in an HTML template becomes { foo }
ref="foo" becomes bind:this={foo}
:disabled="!isEnabled" becomes disabled={!isEnabled}
CSS was scoped so it didn’t need any changes

Svelte 5

At the time of this writing Svelte 5 is a Release Candidate and the creators tell you not to use it in production. Guess what, I’m using it in production. It works and it’s stable. I think the Svelte 5 devs operate from a mindset of “abundance of caution”. All software has bugs, including Svelte 4. If Svelte 5 doesn’t work, you’ll know it.

Coming from Svelte 4, Svelte 5 is a nice upgrade. One small project is too early to have deep thoughts, but I like it so far. It’s easy to learn the new ways of doing things. It’s easy to convert Svelte 4 to Svelte 5, even without any tools. Things are even more compact and more convenient than in Svelte 4. {#snippet} adds functionality that I was missing in Svelte 4.
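To make the mapping concrete, here is a tiny invented before / after sketch (not one of Edna’s actual components):

<!-- Counter.vue (Vue, Options API) -->
<template>
  <div v-if="visible">
    <span>{{ label }}: {{ doubled }}</span>
    <button :disabled="!enabled" @click="increment">+1</button>
  </div>
</template>

<script>
export default {
  props: ["label", "enabled"],
  data() {
    return { count: 0, visible: true };
  },
  computed: {
    doubled() {
      return this.count * 2;
    },
  },
  methods: {
    increment() {
      this.count++;
      this.$emit("changed", this.count);
    },
  },
};
</script>

<!-- Counter.svelte (the same component after porting to Svelte 5) -->
<script>
  let { label, enabled, onchanged } = $props();
  let count = $state(0);
  let visible = $state(true);
  let doubled = $derived(count * 2);

  function increment() {
    count++;
    onchanged?.(count);
  }
</script>

{#if visible}
  <div>
    <span>{label}: {doubled}</span>
    <button disabled={!enabled} onclick={increment}>+1</button>
  </div>
{/if}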
Building wc in the browser

From time to time I like to run wc -l on my source code to see how much code I wrote. For those not in the know: wc -l shows the number of lines in files.

Actually, what I have to do is more like find -name "*.go" | xargs wc -l, because wc isn’t particularly good at handling directories. I just want to see the number of lines in all my source files, man. I don’t want to google the syntax of find and xargs for the hundredth time.

After learning about the File System API I decided to write a tool that does just that as a web app. No need to install software. I did just that and you can use it yourself. Here’s how it sees itself.

The rest of this article describes how I would have done it if I did it.

Building software quickly

It only took me 3 days, which is a testament to how productive the web platform can be. My weapons of choice are:

Svelte for frontend
Tailwind CSS for CSS
JSDoc for static typing of JavaScript
File System API to access files and directories on your computer
vite for a bundler and dev server
render to deploy

For a small project Svelte and Tailwind CSS are arguably overkill. I used them because I standardized on that toolset. Standardization allows me to re-use prior experience and sometimes even code.

Why those technologies? Svelte is React without the bloat. Try it and you’ll love it. Tailwind CSS is CSS but more productive. You have to try it to believe it. JSDoc is a happy medium between no types at all and TypeScript. I have great internal resistance to switching to TypeScript. Maybe 5 years from now.

And none of that would be possible without browser APIs that allow access to files on your computer. Which Firefox doesn’t implement, because they are happy to lose market share to browsers that implement useful features. Clearly $3 million a year is not enough to buy yourself a CEO with an understanding of the obvious.

Implementation tidbits

Getting list of files

To get a recursive listing of files in a directory, use showDirectoryPicker to get a FileSystemDirectoryHandle. Call dirHandle.values() to get a list of directory entries. Recurse if an entry is a directory.

Not all browsers support that API. To detect if it works:

/**
 * @returns {boolean}
 */
export function isIFrame() {
  let isIFrame = false;
  try {
    // in iframe, those are different
    isIFrame = window.self !== window.top;
  } catch {
    // do nothing
  }
  return isIFrame;
}

/**
 * @returns {boolean}
 */
export function supportsFileSystem() {
  return "showDirectoryPicker" in window && !isIFrame();
}

Because people on Hacker News always complain about slow, bloated software, I took pains to make my code fast. One of those pains was using an array instead of an object to represent a file system entry.

Wait, now HN people will complain that I’m optimizing prematurely. Listen buddy, Steve Wozniak wrote assembly in hex and he liked it. In comparison, optimizing the memory layout of the most frequently used object in JavaScript is like drinking champagne on Jeff Bezos’ yacht.

Here’s a JavaScript trick for optimizing the memory layout of objects with a fixed number of fields: derive your class from Array.

Deriving a class from an Array

A little known thing about JavaScript is that an Array is just an object, and you can derive your class from it and add methods, getters and setters. You get the compact layout of an array and the convenience of accessors.

Here’s a sketch of how I implemented the FsEntry object:

// a directory tree. each element is either a file:
// [file, dirHandle, name, path, size, null]
// or directory:
// [[entries], dirHandle, name, path, size, null]
// extra null value is for the caller to stick additional data
// without the need to re-allocate the array
// if you need more than 1, use an object
// handle (file or dir), parentHandle (dir), size, path, dirEntries, meta
const handleIdx = 0;
const parentHandleIdx = 1;
const sizeIdx = 2;
const pathIdx = 3;
const dirEntriesIdx = 4;
const metaIdx = 5;

export class FsEntry extends Array {
  get size() {
    return this[sizeIdx];
  }
  // ... rest of the accessors
}

We have 6 slots in the array and we can access them as e.g. entry[sizeIdx]. We can hide this implementation detail by writing a getter, like the size getter on FsEntry shown above.

Reading a directory recursively

Once you get a FileSystemDirectoryHandle by using window.showDirectoryPicker(), you can read the contents of the directory. Here’s one way to implement a recursive read of a directory:

/**
 * @param {FileSystemDirectoryHandle} dirHandle
 * @param {Function} skipEntryFn
 * @param {string} dir
 * @returns {Promise<FsEntry>}
 */
export async function readDirRecur(
  dirHandle,
  skipEntryFn = dontSkip,
  dir = dirHandle.name
) {
  /** @type {FsEntry[]} */
  let entries = [];
  // @ts-ignore
  for await (const handle of dirHandle.values()) {
    if (skipEntryFn(handle, dir)) {
      continue;
    }
    const path = dir == "" ? handle.name : `${dir}/${handle.name}`;
    if (handle.kind === "file") {
      let e = await FsEntry.fromHandle(handle, dirHandle, path);
      entries.push(e);
    } else if (handle.kind === "directory") {
      let e = await readDirRecur(handle, skipEntryFn, path);
      e.path = path;
      entries.push(e);
    }
  }
  let res = new FsEntry(dirHandle, null, dir);
  res.dirEntries = entries;
  return res;
}

The function skipEntryFn is called for every entry and allows the caller to decide not to include a given entry. You can, for example, skip a directory like .git. It can also be used to show the progress of reading the directory to the user, since the reading happens asynchronously.

Showing the files

I use tables and I’m not ashamed. It’s still the best technology to display, well, a table of values where cells are sized to content and columns are aligned. Flexbox doesn’t remember anything across rows so it can’t align columns. Grid can lay things out properly, but I haven’t found a way to easily highlight the whole row when the mouse is over it. With CSS you can only target individual cells in a grid, not rows. With a table I just style <tr class="hover:bg-gray-100">. That’s Tailwind speak for: on mouse hover, set the background color to light gray.

A folder can contain other folders, so we need recursive components to implement it. Svelte supports that with <svelte:self>. I implemented it as a tree view where you can expand folders to see their content.

It’s one big table for everything, but I needed to indent each expanded folder to make it look like a tree. It was a bit tricky. I went with an indent property in my Folder component. It starts at 0 and goes +1 for each level of nesting. Then I style the first (file name) column as <td class="ind-{indent}">...</td> and use these CSS styles:

<style>
  :global(.ind-1) {
    padding-left: 0.5rem;
  }
  :global(.ind-2) {
    padding-left: 1rem;
  }
  /* ... up to .ind-17 */
</style>

It only goes up to .ind-17. Yes, if you have deeper nesting, it won’t show correctly. I’ll wait for a bug report before increasing it further.

Calculating line count

You can get the size of the file from the FileSystemFileEntry. For source code I want to see the number of lines.
It’s quite trivial to calculate:

/**
 * @param {Blob} f
 * @returns {Promise<number>}
 */
export async function lineCount(f) {
  if (f.size === 0) {
    // empty files have no lines
    return 0;
  }
  let ab = await f.arrayBuffer();
  let a = new Uint8Array(ab);
  let nLines = 0;
  // if last character is not newline, we must add +1 to line count
  let toAdd = 0;
  for (let b of a) {
    // line endings are:
    // CR (13) LF (10) : windows
    // LF (10)         : unix
    // CR (13)         : mac
    // mac is very rare so we just count 10, as that counts
    // windows and unix lines
    if (b === 10) {
      toAdd = 0;
      nLines++;
    } else {
      toAdd = 1;
    }
  }
  return nLines + toAdd;
}

It doesn’t handle Mac files that use CR for newlines. It’s ok to write buggy code as long as you document it.

I also skip known binary files (.png, .exe etc.) and known “not mine” directories like .git and node_modules. Small considerations like that matter.

Remembering opened directories

I typically use the tool many times on the same directories and it’s a pain to pick the same directory over and over again. A FileSystemDirectoryHandle can be stored in IndexedDB, so I implemented a history of opened directories as a store persisted in IndexedDB (a sketch of how that can be done is at the end of this post).

Asking for permissions

When it comes to accessing files and directories on disk you can’t ask for forgiveness, you have to ask for permission. The user grants permission in window.showDirectoryPicker() and the browser remembers it for a while, but it expires quite quickly. You need to re-check and re-ask for permission to a FileSystemFileHandle or FileSystemDirectoryHandle before each access:

export async function verifyHandlePermission(fileHandle, readWrite) {
  const options = {};
  if (readWrite) {
    options.mode = "readwrite";
  }
  // Check if permission was already granted. If so, return true.
  if ((await fileHandle.queryPermission(options)) === "granted") {
    return true;
  }
  // Request permission. If the user grants permission, return true.
  if ((await fileHandle.requestPermission(options)) === "granted") {
    return true;
  }
  // The user didn't grant permission, so return false.
  return false;
}

If permissions are still valid from before, it’s a no-op. If not, the browser will show a dialog asking for permissions. If you ask for write permissions, Chrome will show 2 confirmation dialogs vs. 1 for read-only access. I start with read-only access and, if needed, ask again to get write (or delete) permissions.

Deleting files and directories

Deleting files has nothing to do with showing line counts, but it was easy to implement and useful, so I added it.

You need to remember the FileSystemDirectoryHandle of the parent directory.

To delete a file: parentDirHandle.removeEntry("foo.txt")

To delete a directory: parentDirHandle.removeEntry("node_modules", { recursive: true })

Getting bit by a multi-threading bug

JavaScript doesn’t have multiple threads, so you can’t have all those nasty bugs? Right? Right?

Yes and no. Async is not multi-threading, but it does create non-obvious execution flows.

I had a bug: I noticed that some .txt files were showing a line count of 0 even though they clearly did have lines. I went bug hunting. I checked the lineCount function: seemed ok. I added console.log(), I stepped through the code. Time went by and my frustration level was approaching DEFCON 1. Thankfully, before I reached cocked pistol, I had an epiphany.

You see, JavaScript has async, where some code can interleave with other code. The browser can splice those async “threads” with UI code. No threads means there are no data races, i.e. writing memory values that another thread is in the middle of reading. But we do have non-obvious execution flows.

Here’s how my code worked:

get a list of files (async)
show the files in UI
calculate line counts for all files (async)
update UI to show line counts after we get them all

Async is great for users: calculating line counts could take a long time as we need to read all those files. If this process weren’t async it would block the UI. Thanks to async there are enough checkpoints for the browser to process UI events in between processing files.

The issue was that the function calculating line counts was using an array I got from reading the directory. I passed the same array to the Folder component to show the files. And I sorted the array to show the files in a human-friendly order.

In JavaScript sorting mutates the array, and that array was partially processed by the line counting function. As a result, if the series of events was unfortunate enough, I would skip some files in line counting: they would be re-sorted to a position the line counting function thought it had already counted. Result: no lines for you!

A happy ending and an easy fix: Folder makes a copy of the array, so sorting doesn’t affect the line counting process.

The future

No software is ever finished, but I arrived at a point where it does the majority of the job I wanted, so I shipped it.

There is a feature I would find useful: statistics for each extension. How many lines in .go files vs. .js files, etc.? But I’m holding off on implementing it until:

I really, really want it
I get feature requests from people who really, really want it

You can look at the source code. It’s source visible but not open source.
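As mentioned above, directory handles can be persisted. Here’s a minimal sketch (not the actual code from the app; the database and store names are made up) of storing FileSystemDirectoryHandle values in IndexedDB. Handles are structured-cloneable, so they can be stored directly as values:

// hypothetical names, not the ones used by the app
const DB_NAME = "wc-history";
const STORE = "dirHandles";

function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = () => req.result.createObjectStore(STORE);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

export async function rememberDirHandle(dirHandle) {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE, "readwrite");
    // the handle itself is the value; its name is a good-enough key
    tx.objectStore(STORE).put(dirHandle, dirHandle.name);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

export async function getRememberedDirHandles() {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction(STORE).objectStore(STORE).getAll();
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// before using a remembered handle, re-verify permissions, e.g.:
// await verifyHandlePermission(dirHandle, false);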
More in programming
To be a successful founder, you have to believe that what you're working on is going to work — despite knowing it probably won't! That sounds like an oxymoron, but it's really not. Believing that what you're building is going to work is an essential component of coming to work with the energy, fortitude, and determination it's going to require to even have a shot. Knowing it probably won't is accepting the odds of that shot. It's simply the reality that most things in business don't work out. At least not in the long run. Most businesses fail. If not right away, then eventually. Yet the world economy is full of entrepreneurs who try anyway. Not because they don't know the odds, but because they've chosen to believe they're special. The best way to balance these opposing points — the conviction that you'll make it work, the knowledge that it probably won't — is to do all your work in a manner that'll make you proud either way. If it doesn't work, you still made something you wouldn't be ashamed to put your name on. And if it does work, you'll beam with pride from making it on the basis of something solid. The deep regret from trying and failing only truly hits when you look in the mirror and see Dostoevsky staring back at you with this punch to the gut: "Your worst sin is that you have destroyed and betrayed yourself for nothing." Oof. Believe it's going to work. Build it in a way that makes you proud to sign it. Base your worth as a human on something greater than a business outcome.
Understand how computers represent numbers and perform operations at the bit level before diving into assembly
I recently went into a deep dive on “UART” and will publish a much longer article on the topic. This is just a recap of the basics to help put things in context. Many tutorials focus on using UART over USB, which adds many layers of abstraction, hiding what it actually is. Here, I deliberately … Continue reading How to use “real” UART →
You know about Critical Race Theory, right? It says that if there’s an imbalance in, say, income between races, it must be due to discrimination. This is what wokism seems to be, and it’s moronic and false. The right wing has invented something equally stupid. Introducing Critical Trade Theory, stolen from this tweet. If there’s an imbalance in trade between countries, it must be due to unfair practices. (not due to the obvious, like one country is 10x richer than the other) There’s really only one way the trade deficits will go away, and that’s if trade goes to zero (or maybe if all these countries become richer than America). Same thing with the race deficits, no amount of “leg up” bullshit will change them. Why are all the politicians in America anti-growth anti-reality idiots who want to drive us into the poor house? The way this tariff shit is being done is another stupid form of anti-merit benefits to chosen groups of people, with a whole lot of grift to go along with it. Makes me just not want to play.
One of the most memorable quotes in Arthur Miller’s Death of a Salesman comes from Uncle Ben, who describes his path to becoming wealthy as, “When I was seventeen, I walked into the jungle, and when I was twenty-one I walked out. And by God I was rich.” I wish I could describe the path to learning engineering strategy in similar terms, but by all accounts it’s a much slower path. Two decades in, I am still learning more from each project I work on. This book has aimed to accelerate your learning path, but my experience is that there’s still a great deal left to learn, despite what this book has hoped to accomplish.

This final chapter is focused on the remaining advice I have to give on how you can continue to improve at strategy long after reading this book’s final page. Inescapably, this chapter has become advice on writing your own strategy for improving at strategy. You are already familiar with my general suggestions on creating strategy, so this chapter provides focused advice on creating your own plan to get better at strategy. It covers:

Exploring strategy creation to find strategies you can learn from via public and private resources, and through creating learning communities
How to diagnose the strategies you’ve found, to ensure you learn the right lessons from each one
Policies that will help you find ways to perform and practice strategy within your organization, whether or not you have organizational authority
Operational mechanisms to hold yourself accountable to developing a strategy practice
My final benediction to you as a strategy practitioner who has finished reading this book

With that preamble, let’s write this book’s final strategy: your personal strategy for developing your strategy practice.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Exploring strategy creation

Ideally, we’d start our exploration of how to improve at engineering strategy by reading broadly from the many publicly available examples. Unfortunately, there simply aren’t many easily available works to learn from others’ experience. Nonetheless, resources do exist, and we’ll discuss the three categories that I’ve found most useful:

Public resources on engineering strategy, such as companies’ engineering blogs
Private and undocumented strategies available through your professional network
Learning communities that you build together, including ongoing learning circles

Each of these is explored in its own section below.

Public resources

While there aren’t as many public engineering strategy resources as I’d like, I’ve found that there are still a reasonable number available. This book collects a number of such resources in the appendix of engineering strategy resources. That appendix also includes some individuals’ blog posts that are adjacent to this topic. You can go a long way by searching and prompting your way into these resources.

As you read them, it’s important to recognize that public strategies are often misleading, as discussed previously in evaluating strategies. Everyone writing in public has an agenda, and that agenda often means that they’ll omit important details to make themselves, or their company, come off well. Make sure you read between the lines rather than taking things too literally.
Private resources

Ironically, where public resources are hard to find, I’ve found it much easier to find privately held strategy resources. While private recollections are still prone to inaccuracies, the incentives to massage the truth are less pronounced. The most useful sources I’ve found are:

peers’ stories – strategies are often oral histories, and they are shared freely among peers within and across companies. As you build out your professional network, you can usually get access to any company’s engineering strategy on any topic by just asking. There are brief exceptions. Even a close peer won’t share a sensitive strategy before its existence becomes obvious externally, but they’ll be glad to after it does. People tend to over-estimate how much information companies can keep private anyway: even reading recent job postings can usually expose a surprising amount about a company.

internal strategy archaeologists – while surprisingly few companies formally collect their strategies into a repository, the stories are informally collected by the tenured members of the organization. These folks are the company’s strategy archaeologists, and you can learn a great deal by explicitly consulting them.

becoming a strategy archaeologist yourself – whether or not you’re a tenured member of your company, you can learn a tremendous amount by starting to build your own strategy repository. As you start collecting them, you’ll interest others in contributing their strategies as well. As discussed in Staff Engineer’s section on the Write five then synthesize approach to strategy, over time you can foster a culture of documentation where one didn’t exist before. Even better, building that culture doesn’t require any explicit authority, just an ongoing show of excitement.

There are other sources as well, ranging from attending the hallway track in conferences to organizing dinners where stories are shared with a commitment to privacy.

Working in community

My final suggestion for seeing how others work on strategy is to form a learning circle. I formed a learning circle when I first moved into an executive role, and at this point have been running it for more than five years. What’s surprised me the most is how much I’ve learned from it. There are a few reasons why ongoing learning circles are exceptional for sharing strategy:

Bi-directional discussion allows so much more learning and understanding than mono-directional communication like conference talks or documents.
Groups allow you to learn from others’ experiences and others’ questions, rather than having to guide the entire learning yourself.
Continuity allows you to see the strategy at inception, during the rollout, and after it’s been in practice for some time.
Trust is built slowly, and you only get the full details about a problem when you’ve already successfully held trust about smaller things. An ongoing group makes this sort of sharing feasible where a transient group does not.

Although putting one of these communities together requires a commitment, they are the best mechanism I’ve found. As a final secret, many people get stuck on how they can get invited to an existing learning circle, but that’s almost always the wrong question to be asking. If you want to join a learning circle, make one. That’s how I got invited to mine.

Diagnosing your prior and current strategy work

Collecting strategies to learn from is a valuable part of learning. You also have to determine what lessons to learn from each strategy.
For example, you have to determine whether Calm’s approach to resourcing Engineering-driven projects is something to copy or something to avoid.

What I’ve found effective is to apply the strategy rubric we developed in the “Is this strategy any good?” chapter to each of the strategies you’ve collected. Even by splitting a strategy into its various phases, you’ll learn a lot. Applying the rubric to each phase will teach you more. Each time you do this to another strategy, you’ll get a bit faster at applying the rubric, and you’ll start to see interesting, recurring patterns.

As you dig into a strategy that you’ve split into phases and applied the evaluation rubric to, here are a handful of questions that I’ve found interesting to ask myself:

How long did it take to determine a strategy’s initial phase could be improved? How high was the cost to fund that initial phase’s discovery?
Why did the strategy reach its final stage and get repealed or replaced? How long did that take to get there?
If you had to pick only one, did this strategy fail in its approach to exploration, diagnosis, policy or operations?
To what extent did the strategy outlive the tenure of its primary author? Did it get repealed quickly after their departure, did it endure, or was it perhaps replaced during their tenure?
Would you generally repeat this strategy, or would you strive to avoid repeating it? If you did repeat it, what conditions seem necessary to make it a success?
How might you apply this strategy to your current opportunities and challenges?

It’s not necessary to work through all of these questions for every strategy you’re learning from. I often try to pick the two that I think might be most interesting for a given strategy.

Policy for improving at strategy

At a high level, there are just a few key policies to consider for improving your strategic abilities. The first is implementing strategy, and the second is practicing implementing strategy. While those are indeed the starting points, there are a few more detailed options worth consideration:

If your company has existing strategies that are not working, debug one and work to fix it. If you lack the authority to work at the company scope, then decrease altitude until you find an altitude you can work at. Perhaps setting Engineering organizational strategies is beyond your circumstances, but strategy for your team is entirely accessible.

If your company has no documented strategies, document one to make it debuggable. Again, if operating at a high altitude isn’t attainable for some reason, operate at a lower altitude that is within reach.

If your company’s or team’s strategies are effective but have low adoption, see if you can iterate on operational mechanisms to increase adoption. Many such mechanisms require no authority at all, such as low-noise nudges or the model-document-share approach.

If existing strategies are effective and have high adoption, see if you can build excitement for a new strategy. Start by mining for which problems Staff-plus engineers and senior managers believe are important. Once you find one, you have a valuable strategy vein to start mining.

If you don’t feel comfortable sharing your work internally, then try writing proposals while only sharing them to a few trusted peers. You can even go further to only share proposals with trusted external peers, perhaps within a learning circle that you create or join.

Trying all of these at once would be overwhelming, so I recommend picking one in any given phase.
If you aren’t able to make traction, then try another until something works. It’s particularly important to recognize in your diagnosis where things are not working–perhaps you simply don’t have the sponsorship you need to enforce strategy, so you need to switch towards suggesting strategies instead–and you’ll find something that works.

What if you’re not allowed to do strategy?

If you’re looking to find one, you’ll always unearth a reason why it’s not possible to do strategy in your current environment. If you’ve convinced yourself that there’s simply no policy that would allow you to do strategy in your current role, then the two most useful levers I’ve found are:

Lower your altitude – there’s always a scale where you can perform strategy, even if it’s just your team or even just yourself. Only you can forbid yourself from developing personal strategies.

Practice rather than perform – organizations can only absorb so much strategy development at a given time, so sometimes they won’t be open to you doing more strategy. In that case, you should focus on practicing strategy work rather than directly performing it. Only you can stop yourself from practice.

Don’t believe the hype: you can always do strategy work.

Operating your strategy improvement policies

As the refrain goes, even the best policies don’t accomplish much if they aren’t paired with operational mechanisms to ensure the policies actually happen, and debug why they aren’t happening. Although it’s tempting to ignore operations when it comes to our personal habits, I think that would be a mistake: our personal habits have the most significant long-term impact on ourselves, and are the easiest habits to ignore since others generally won’t ask about them. The mechanisms I’d recommend:

Explicitly track the strategies that you’ve implemented, refined, documented, or read. This should be in a document, spreadsheet or folder where you can explicitly see if you have or haven’t done the work.

Review your tracked strategies every quarter: are you working on the expected number and in the expected way? If not, why not? Ideally, your review should be done in community with a peer or a learning circle. It’s too easy to deceive yourself, it’s much harder to trick someone else.

If your periodic review ever discovers that you’re simply not doing the work you expected, sit down for an hour with someone that you trust–ideally someone equally or more experienced than you–and debug what’s going wrong. Commit to doing this before your next periodic review.

Tracking your personal habits can feel a bit odd, but it’s something I highly recommend. I’ve been setting and tracking personal goals for some time now—for example, in my 2024 year in review—and have benefited greatly from it.

Too busy for strategy

Many companies convince themselves that they’re too much in a rush to make good decisions. I’ve certainly gotten stuck in this view at times myself, although at this point in my career I find it increasingly difficult to not recognize that I have a number of tools to create time for strategy, and an obligation to do strategy rather than inflict poor decisions on the organizations I work in. Here’s my advice for creating time:

If you’re not tracking how often you’re creating strategies, then start there.
If you’ve not worked on a single strategy in the past six months, then start with one.
If implementing a strategy has been prohibitively time consuming, then focus on practicing a strategy instead.
If you do try all those things and still aren’t making progress, then accept your reality: you don’t view doing strategy as particularly important. Spend some time thinking about why that is, and if you’re comfortable with your answer, then maybe this is a practice you should come back to later. Final words At this point, you’ve read everything I have to offer on drafting engineering strategy. I hope this has refined your view on what strategy can be in your organization, and has given you the tools to draft a more thoughtful future for your corner of the software engineering industry. What I’d never ask is for you to wholly agree with my ideas here. They are my best thinking on this topic, but strategy is a topic where I’m certain Hegel’s world view is the correct one: even the best ideas here are wrong in interesting ways, and will be surpassed by better ones.