More from Daniel Marino
I’ve always been fascinated to see what apps or workflows others are using in their day-to-day lives. Every now and then I learn about a new app or some cool trick I didn’t previously know. I doubt anyone seriously cares about what I’m using, but I figured I’d list it all out anyway—if for no other reason than to keep a historical record at this point in time.

## Applications

- Alfred — I have a lifetime license, and I like it. No point in fixing something that isn’t broken. I primarily use it for app switching, but I also use it for math and to search for gifs.
- Aseprite — Sometimes I do pixel art! Even if the UI is clunky and some keyboard shortcuts aren’t always convenient, there are some unique features that help facilitate creating pixel art.
- Audacity — I rarely use it, but sometimes it’s easier to make some quick audio edits with Audacity than to use a full-blown DAW.
- Bear — This is the note-taking, task-tracking app I’ve landed on. The UI is beautiful and it feels snappy. It syncs, so I can use it on my iPhone too.
- Chrome — I used Arc for the better part of 2024, but after they announced they were done working on it to focus on a new AI-powered browser, I peaced out. There are a couple of features I really missed, but I was able to find some extensions to fill those gaps: Copy Current Tab URL, Meetings Page Auto Closer for Zoom, Open Figma app, and JSON Formatter.
- Figma — I use it because it’s what we use at work. I’m happy enough with Figma.
- iTerm2 — Has a few features I like that make me choose it over Mac’s native Terminal app.
- Pixelmator Pro — I haven’t paid the Adobe tax for a long time, and it feels good. I started using Pixelmator because at the time it was the best alternative available. I’m comfortable with Pixelmator at this point. I don’t really use image editors often these days, so I probably won’t switch anytime soon.
- Reaper — My DAW of choice when composing music. It’s very customizable, easyish to learn, and the price is right. It also has a die-hard community, so I’m always able to find help when I need it.
- VS Code — I’ve tried a lot of code editors. I prefer Sublime’s UI over VS Code’s, but VS Code does a lot of things more easily than Sublime does, so I put up with the UI.
- YouTube Music — I still miss Rdio. YouTube Music works well enough, I guess. Paying for YouTube Music has the benefit of not seeing ads on YouTube.

## Command-line Tools

These aren’t apps per se, but these are some tools that I use to help manage packages or that I use regularly when developing.

- Deno
- Eleventy
- Homebrew
- pure
- statikk
- Vite
- Volta
- yt-dlp

## Equipment

I have one computer and I use it for everything, and I’m okay with that. It’s more than powerful enough for work, composing music, making games, and occasionally playing games. Although I have a dedicated home office, lately I tend to work more on the go, often with just my laptop—whether that’s at a cafe, a coworking space, or even just moving around the house.

- 2021 M1 MacBook Pro
- AKG K240 Studio Headphones
- Arturia MiniLab MKII Controller
- Behringer UMC202HD USB Audio Interface
- Fender Squier Strat Guitar
- Fender Squier Bass Guitar
- Shure SM57

## Virtual Instruments

This is quite specific to composing music, so if that doesn’t interest you, feel free to stop reading here. This list is not exhaustive, as I’m regularly trying out new VSTs. These are some staples that I use:

- 🎹 Arturia Analog Lab V (Intro) — My Arturia controller came with this software. It has over 500 presets, and I love exploring the variety of sounds.
- 🎸 Bass Grinder (Free) — I recently came across this VST, and it has a great crunchy overdrive sound for bass guitar.
- 🥁 Manda Audio Power Drum Kit — Even though you can use this for free, I paid the $9 because it is fantastic. The drums sound real and are great for all styles of music.
- 🎸 ML Amped Roots (Free) — What I like about this is that I get a great metal guitar sound out of the box without having to add pedals or chain other effects.
- 🥁 ML Drums (Free) — I just started experimenting with this, and the drum tones are amazing. The free setup is pretty limited, but I like how I can add on to the base drum kit to meet my needs rather than having to buy one big, extensive drum VST.
- 🎹 Spitfire LABS — More variety of eclectic sounds.

I also use several built-in VSTs made by Reaper for delay, EQ, reverb, pitch-shifting, and other effects. Reaper’s VSTs are insanely powerful, more than enough for my needs, and much less CPU intensive.
Over the past couple of years I’ve gotten into journaling. Recently I’ve been using a method where you’re given a single inspirational word as a prompt, and go from there. Unfortunately, the process of finding, saving, and accessing inspirational words was a bit of a chore:

1. Google a list of “366 inspirational words”.
2. Get taken to a blog bloated with ads and useless content, all in an effort to generate SEO cred.
3. Copy/paste the words into Notion.
4. Fix how the words get formatted because Notion is weird, and I have OCD about formatting text.

While this gets the job done, I felt like there was room to make this a more pleasant experience. All I really wanted was a small website that serves a single inspirational word on a daily basis without cruft or ads. This would allow me to get the content I want without having to scroll through a long list. I also don't want to manage or store the list of words. Once I've curated a list of words, I want to be done with it.

## Creating a microsite

I love a good microsite, and so I decided to create one that takes all the chore out of obtaining a [daily inspirational word](https://starzonmyarmz.github.io/daily-inspirational-word/). The website is built with all vanilla tech, and doesn’t use any frameworks! It’s nice and lean, and its footprint is only 6.5 KB.

### Inspirational words

While I’m not a huge fan of AI, I did leverage ChatGPT to obtain 366 inspirational words. The benefit to ChatGPT was being able to get it to return the words as an array—cutting down on the tedium of having to convert the words I already had into an array. The words are stored in their own JSON file, and I use an async/await function to pull in the words, and then process the data upon return (a rough sketch of what this can look like follows at the end of this post).

## Worth the effort

I find these little projects fun and exciting because the scope is super tight, and makes for a great opportunity to learn new things. It’s definitely an overengineered solution to my problem, but it is a much more pleasant experience. And perhaps it will serve other people as well.
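As promised above, here’s a minimal sketch of what fetching the word list and picking a daily word could look like. The file name `words.json`, the `#word` element id, and the day-of-year indexing are my assumptions for illustration, not necessarily how the live site is wired up:

```js
// Fetch the word list and pick today's word by day of year.
// Assumes words.json is a plain JSON array of 366 strings.
async function getDailyWord() {
  const response = await fetch("./words.json");
  const words = await response.json();

  // Day of year (1–366) based on the visitor's local date.
  const now = new Date();
  const startOfYear = new Date(now.getFullYear(), 0, 0);
  const dayOfYear = Math.floor((now - startOfYear) / 86_400_000);

  return words[(dayOfYear - 1) % words.length];
}

getDailyWord().then((word) => {
  // "#word" is a hypothetical element id used for this demo.
  document.querySelector("#word").textContent = word;
});
```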
The other day I shared why I prefer coding prototypes rather than using design apps to create them. My prototyping environment has evolved over the years. I love to hear how others build prototypes, so I thought I’d share where I’m at now. Maybe you’ll find it useful.

## A single repository

I currently have a single GitHub repo housing all of my prototypes. I do this primarily so I don’t have to remember where any given prototype lives. They all live in the same place! Another benefit is that if I pull in a library or some CSS component, I can reuse it in other prototypes without having to go out and grab it from the source again.

## My old setup

In the past I used Sinatra hosted on Heroku. Having Ruby and a basic server (such as WEBrick) in the backend was pretty nice: I could code up some fairly complex prototypes with realistic URL schemes. Gems! If I needed fake data, I could use the Faker gem. If I wanted a table with 100 rows, I could easily generate that with a super simple loop. But it got clunky. Spinning up a new prototype wasn’t very efficient. Setting up the URLs took time. Deploying to Heroku wasn’t always straightforward. Heroku also got rid of their free plan, and I didn’t want to go looking for a new service. Maybe I’m making excuses.

## A dumb server

Now I just use HTML, CSS (vanilla), and JavaScript with no special backend. I don’t have Node.js running, and I don’t use a package manager like NPM or Yarn. To start a server I navigate to the prototype directory in iTerm and run statikk. Easy peasy. This setup requires no upkeep, so I can focus on actually prototyping! I have a basic file structure for keeping prototypes separate. I typically use Preact for scaffolding. To import Preact or other NPM packages I use esm.sh (there’s a rough sketch of this at the end of this post). When I push changes to the GitHub repo it’s deployed to GitHub Pages. I can then share a public URL with folks that need to review the prototype.

## One glaring problem

There’s only one issue I’ve run into using this setup, and it’s not even related to the setup! The Porchlight design system (which we use at Harvest) doesn’t have its styles or components available to consume publicly via CDN. Womp womp. I can get around the CSS issue: I end up copying the compiled CSS from the design system and pasting it into a new file in my prototype environment. And I’m kind of out of luck with the JavaScript: I have to code these up from scratch. Although, I suppose I could copy the compiled components and paste them into my prototype environment. The easiest fix is probably to introduce a package manager, but I’d rather not. We have talked about making the design system’s CSS and components available via CDN–it’s just a matter of getting around to it.

## Prototype!

So that’s my current prototyping setup. Maybe it will help inspire you to set up your own prototyping environment. Whether you use code or a design app–you should prototype!
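As mentioned above, here’s a rough sketch of the kind of scaffold I’m describing. The file name `app.js` and the component are just for illustration; the key part is importing Preact (and htm) straight from esm.sh, so there’s no package manager or build step involved:

```js
// app.js - a bare-bones prototype entry point.
// Preact and htm are loaded straight from esm.sh, so there's no npm install step.
import { h, render } from "https://esm.sh/preact";
import htm from "https://esm.sh/htm";

// htm gives JSX-like templating without a build step.
const html = htm.bind(h);

function Prototype() {
  return html`
    <main>
      <h1>Quick prototype</h1>
      <p>Served locally with a dumb static server, deployed via GitHub Pages.</p>
    </main>
  `;
}

render(html`<${Prototype} />`, document.body);
```

The prototype’s `index.html` only needs a `<script type="module" src="app.js"></script>` tag pointing at it, and any dumb static server can serve the directory.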
More in programming
I took an amazing trip to SE Asia last month, including Angkor Wat. I had a hard time finding good reading or other resources to learn from before I went, in part because Amazon is awash in AI garbage. Here are some books and podcasts I found useful about the Khmer empire in general and Angkor in particular:

- Ancient Angkor by Michael Freeman and Claude Jacques. The closest thing to a coffee-table book to preview what you will see. The practical information is outdated, but the pictures and descriptions are good.
- Empire Podcast #185: The God Kings of Angkor Wat by William Dalrymple and Anita Anand. An entertaining and fully detailed account of the Khmer empire. It’s basically an excerpt from Dalrymple’s new book The Golden Road: How Ancient India Transformed the World.
- Fall of Civilizations Podcast #5: The Khmer Empire by Paul Cooper. Another history, not quite as magically well told as Dalrymple but full of good information.
- Angkor and the Khmer Civilization by Michael D. Coe. A highly recommended history of the Khmer region. Honestly I found this very dry and too detailed, but I did learn from it.
- Lonely Planet Pocket Guide: Siem Reap & the Temples of Angkor. We didn’t use this much, but it seemed like a useful practical guide. OTOH it dates to 2018, so things have changed.

My other advice for visiting Siem Reap and Angkor is: go. It is amazing. Plan for at least two full days of touristing there. Hire a private guide and driver if you can; it is absolutely worth it. (Email me for a recommendation.)
Often you’ll see a disorganized collection of ideas labeled as a “strategy.” Even when they’re dense with ideas, these can be hard to parse, and they’re a major reason why most engineers will claim their company doesn’t have a clear strategy, even though my experience is that all companies follow some strategy, even if it’s undocumented. This chapter lays out a repeatable, structured approach to drafting strategy. It introduces each step of that approach, and each step is then detailed further in its respective chapter. Here we’ll cover:

- How these five steps fit together to facilitate creating strategy, especially by preventing practitioners from skipping steps that feel awkward or challenging.
- Step 1: Exploring the wider industry’s ideas and practices around the strategy you’re working on. Exploration is understanding what recent research might change your approach, and how the state of the art has changed since you last tackled a similar problem.
- Step 2: Diagnosing the details of your problem. It’s hard to slow down to understand your problem clearly before attempting to solve it, but it’s even more difficult to solve anything well without a clear diagnosis.
- Step 3: Refinement is taking a raw, unproven set of ideas and testing them against reality. Three techniques are introduced to support this validation process: strategy testing, systems modeling, and Wardley mapping.
- Step 4: Policy makes the tradeoffs and decisions to solve your diagnosis. These can range from specifying how software is architected, to how pull requests are reviewed, to how headcount is allocated within an organization.
- Step 5: Operations are the concrete mechanisms that translate policy into an active force within your organization. These can be nudges that remind you about code changes without associated tests, or weekly meetings where you study progress on a migration.
- Whether these steps are sacred or are open to adaptation and experimentation, including when you personally should persevere in attempting steps that don’t feel effective.

From this chapter’s launching point, you’ll have the high-level summaries of each step in strategy creation, and can decide where you want to read further. This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

## How the steps become strategy

Creating effective strategy is not rote incantation of a formula. You can’t merely follow these steps to guarantee that you’ll create a great strategy. However, what I’ve found over and over is that strategies fail more due to avoidable errors than from fundamentally unsound thinking. Busy people skip steps, especially steps they dislike or have failed at before. These steps are the scaffolding to avoid those errors. By practicing routinely, you’ll build powerful habits and intuition around which approach is most appropriate for the current strategy you’re working on. They also help turn strategy into a community practice that you, your colleagues, and the wider engineering ecosystem can participate in together. Each step is an input that flows into the next step. Your exploration is the foundation of a solid diagnosis. Your diagnosis helps you search the infinite space of policy for what you need now. Operational mechanisms help you turn policy into an active force supporting your strategy rather than an abstract treatise.
If you’re skeptical of the steps, you should certainly maintain your skepticism, but do give them a few tries before discarding them entirely. You may also appreciate the discussion in the chapter on bridging between theory and practice when doing strategy.

## Explore

Exploration is the deliberate practice of searching through a strategy’s problem and solution spaces before allowing yourself to commit to a given approach. It’s understanding how other companies and teams have approached similar questions, and whether their approaches might also work well for you. It’s also learning why what brought you so much success at your former employer isn’t necessarily the best solution for your current organization. The Uber service migration strategy used exploration to understand the service ecosystem by reading industry literature:

> As a starting point, we find it valuable to read Large-scale cluster management at Google with Borg which informed some elements of the approach to Kubernetes, and Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center which describes the Mesos/Aurora approach.

It also used a Wardley map to explore the cloud compute ecosystem. For more detail, read the Exploration chapter.

## Diagnose

Diagnosis is your attempt to correctly recognize the context that the strategy needs to solve before deciding on the policies to address that context. Starting from your exploration’s learnings, and your understanding of your current circumstances, building a diagnosis forces you to delay thinking about solutions until you fully understand your problem’s nuances. A diagnosis can be largely data driven, such as the navigating a Private Equity ownership transition strategy:

> Our Engineering headcount costs have grown by 15% YoY this year, and 18% YoY the prior year. Headcount grew 7% and 9% respectively, with the difference between headcount and headcount costs explained by salary band adjustments (4%), a focus on hiring senior roles (3%), and increased hiring in higher cost geographic regions (1%).

It can also be less data driven, instead aiming to summarize a problem, such as the Index acquisition strategy’s summary of the known and unknown elements of the technical integration prior to the acquisition closing:

> We will need to rapidly integrate the acquired startup to meet this timeline. We only know a small number of details about what this will entail. We do know that point-of-sale devices directly operate on payment details (e.g. the point-of-sale device knows the credit card details of the card it reads). Our compliance obligations restrict such activity to our “tokenization environment”, a highly secured and isolated environment with direct access to payment details. This environment converts payment details into a unique token that other environments can utilize to operate against payment details without the compliance overhead of having direct access to the underlying payment details.

The approach, and challenges, of developing a diagnosis are detailed in the Diagnosis chapter.

## Refine (Test, Map & Model)

Strategy refinement is a toolkit of methods to identify which parts of your diagnosis are most important, and verify that your approach to solving the diagnosis actually works. This chapter delves into the details of using three methods in particular: strategy testing, systems modeling, and Wardley mapping.

(Figure: an example of a systems modeling diagram.)
These techniques are also demonstrated in the strategy case studies, such as the Wardley map of the LLM ecosystem, or the systems model of backfilling roles without downleveling them. For more detail, read the Refinement chapter.

### Why isn’t refinement earlier (or later)?

A frequent point of disagreement is that refinement should occur before the diagnosis. Another is that mapping and modeling are two distinct steps, and mapping should occur before diagnosis, and modeling should occur after policy. A third is that refinement ought to be the final step of strategy, turning the steps into a looping cycle. These are all reasonable observations, so let me unpack my rationale for this structure. By far the biggest risk for most strategies is not that you model too early or map too late, but instead that you simply skip both steps entirely. My foremost concern is minimizing the required investment into mapping and modeling such that more folks do these steps at all. Refining after exploring and diagnosing allows you to concentrate your efforts on a smaller number of load-bearing areas. That said, it’s common to refine many places in your strategy creation. You’re just as likely to have three small refinement steps as one bigger one.

## Policy

Policy is interpreting your diagnosis into a concrete plan. This plan also needs to work, which requires careful study of what’s worked within your company, and what new ideas you’ve discovered while exploring the current problem. Policies can range from providing directional guidance, such as the user data controls strategy’s guidance:

> Good security discussions don’t frame decisions as a compromise between security and usability. We will pursue multi-dimensional tradeoffs to simultaneously improve security and efficiency. Whenever we frame a discussion on trading off between security and utility, it’s a sign that we are having the wrong discussion, and that we should rethink our approach.
>
> We will prioritize mechanisms that can both automatically authorize and automatically document the rationale for accesses to customer data. The most obvious example of this is automatically granting access to a customer support agent for users who have an open support ticket assigned to that agent. (And removing that access when that ticket is reassigned or resolved.)

To committing not to make a decision until later, as practiced in the Index acquisition strategy:

> Defer making a decision regarding the introduction of Java to a later date: the introduction of Java is incompatible with our existing engineering strategy, but at this point we’ve also been unable to align stakeholders on how to address this decision. Further, we see attempting to address this issue as a distraction from our timely goal of launching a joint product within six months. We will take up this discussion after launching the initial release.

This chapter further goes into evaluating policies, overcoming ambiguous circumstances that make it difficult to decide on an approach, and developing novel policies. For full detail, read the Policy chapter.

## Operations

Even the best policies have to be interpreted. There will be new circumstances their authors never imagined, and the policies may be in effect long after their authors have left the organization. Operational mechanisms are the concrete implementation of your policy. The simplest mechanisms are an explicit escalation path, as shown in Calm’s product engineering strategy:

> Exceptions are granted by the CTO, and must be in writing.
>
> The above policies are deliberately restrictive. Sometimes they may be wrong, and we will make exceptions to them. However, each exception should be deliberate and grounded in concrete problems we are aligned both on solving and how we solve them. If we all scatter towards our preferred solution, then we’ll create negative leverage for Calm rather than serving as the engine that advances our product.

From that starting point, the mechanisms can get far more complex. This chapter works through evaluating mechanisms, composing an operational plan, and the most common sorts of operational mechanisms that I’ve seen across strategies. For more detail, read the Operations chapter.

## Is the structure sacrosanct?

When someone’s struggling to write a strategy document, one of the first tools someone will often recommend is a strategy template. Templates are great: they reduce the ambiguity of an already broad project into something more tractable. If you’re wondering if you should use a template to craft strategy: sure, go ahead! However, I find that well-meaning, thoughtful templates often turn into lumbering, callous documents that serve no one well. The secret to a good template is that someone has to own it, and that person has to care about the template writer first and foremost, rather than the various constituencies that want to insert requirements into the strategy creation process. The security, compliance, and cost of your plans matter a lot, but many organizations layer more and more requirements into these sorts of documents until the idea of writing them becomes prohibitively painful. The best advice I can give someone attempting to write strategy is that you should discard every element of strategy that gets in your way, as long as you can explain what that element was intended to accomplish. For example, if you’re drafting a strategy and you don’t find any operational mechanisms that fit, that’s fine: discard that section. Ultimately, the structure is not sacrosanct; it’s the thinking behind the sections that really matters. This topic is explored in more detail in the chapter on Making engineering strategies more readable.

## Summary

Now you know the foundational steps to conducting strategy. From here, you can dive into the details with the strategy case studies like How should you adopt LLMs?, or you can maintain a high altitude starting with how exploration creates the foundation for an effective strategy. Whichever you start with, I encourage you to eventually work through both to get the full perspective.
Logic for Programmers v0.8 now out! The new release has minor changes: new formatting for notes and a better introduction to predicates. I would have rolled it all into v0.9 next month, but I like the monthly cadence. Get it here!

## Betteridge's Law of Software Engineering Specialness

In There is No Automatic Reset in Engineering, Tim Ottinger asks:

> Do the other people have to live with January 2013 for the rest of their lives? Or is it only engineering that has to deal with every dirty hack since the beginning of the organization?

Betteridge's Law of Headlines says that if a journalism headline ends with a question mark, the answer is probably "no". I propose a similar law relating to software engineering specialness:1

> If someone asks if some aspect of software development is truly unique to just software development, the answer is probably "no".

Take the idea that "in software, hacks are forever." My favorite example of this comes from a different profession. The Dewey Decimal System hierarchically categorizes books by discipline. For example, Covered Bridges of Pennsylvania has Dewey number 624.37: 6-- is the technology discipline, 62- is engineering, 624 is civil engineering, and 624.3 is "special types of bridges". I have no idea what the last 0.07 means, but you get the picture. Now if you look at the 6-- "technology" breakdown, you'll see that there's no "software" subdiscipline. This is because Dewey preallocated the whole technology block in 1876; new topics were instead to be added to the 00- "general-knowledge" catch-all. Eventually 005 was assigned to "software development", meaning The C Programming Language lives at 005.133. Incidentally, another late addition to the general knowledge block is 001.9: "controversial knowledge". And that's why my hometown library shelved the C++ books right next to The Mothman Prophecies. How's that for technical debt?

If anything, fixing hacks in software is significantly easier than in other fields. This came up when I was interviewing classic engineers. Kludges happened all the time, but "refactoring" them out is expensive. Need to house a machine that's just two inches taller than the room? Guess what, you're cutting a hole in the ceiling. (Even if we restrict the question to other departments in a software company, we can find kludges that are horrible to undo. I once worked for a company which landed an early contract by adding a bespoke support agreement for that one customer. That plagued them for years afterward.)

That's not to say that there aren't things that are different about software vs other fields!2 But I think that most of the time, when we say "software development is the only profession that deals with XYZ", it's only because we're ignorant of how those other professions work.

Short newsletter because I'm way behind on writing my April Cools. If you're interested in April Cools, you should try it out! I make it way harder on myself than it actually needs to be—everybody else who participates finds it pretty chill.

1. Ottinger caveats it with "engineering, software or otherwise", so I think he knows that other branches of engineering, at least, have kludges. ↩
2. The "software is different" idea that I'm most sympathetic to is that in software, the tools we use and the products we create are made from the same material. That's unusual at least in classic engineering. Then again, plenty of machinists have made their own lathes and mills! ↩
We're spending just shy of $1.5 million/year on AWS S3 at the moment to host files for Basecamp, HEY, and everything else. The only way we were able to get the pricing that low was by signing a four-year contract. That contract expires this summer, June 30, so that's our departure date for the final leg of our cloud exit.

We've already racked the replacement from Pure Storage in our two primary data centers: a combined 18 petabytes, securely replicated a thousand miles apart. It's a gorgeous rack full of blazing-fast NVMe storage modules, with each card in the chassis now capable of storing 150TB. Pure Storage comes with an S3-compatible API, so there's no need for Ceph, MinIO, or any of the other object storage software solutions you might need if you were trying to do this exercise on commodity hardware. This makes it pretty easy from the app side to do the swap.

But there's still work to do. We have to transfer almost six petabytes out of S3. In an earlier age, that egress alone would have cost hundreds of thousands of dollars in fees. But now AWS offers a free 60-day egress window for anyone who wants to leave, so that drops the cost to $0. Nice! It takes a while to transfer that much data, though. Even on the fat 40-Gbit pipe we have set aside for the purpose, it'll probably take at least three weeks, once you factor in overhead and some babysitting of the process (rough math at the end of this post).

That's when it's good to remind ourselves why June 30th matters. And the reminder math pencils out in nice, round numbers for easy recollection: if we don't get this done in time, we'll be paying a cool five thousand dollars a day to continue to use S3 (if all the files are still there). Yikes! That's $35,000/week! That's $150,000/month! Pretty serious money for a company of our size.

But so are the savings. Over five years, it'll now be almost five million! Maybe even more, depending on the growth in files we need to store for customers. About $1.5 million for the Pure Storage hardware, and a bit less than a million over five years for warranty and support. But those big numbers always seem a bit abstract to me. The idea of paying $5,000/day, if we miss our departure date, is awfully concrete in comparison.
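Here's the rough math behind that three-week transfer estimate, using the round numbers above (these are back-of-envelope assumptions, not measured throughput):

```js
// Rough transfer-time estimate for moving ~6 PB over a 40 Gbit/s link.
// All values are assumptions taken from the figures quoted in the post.
const bytesToMove = 6e15;          // ~6 petabytes
const linkBitsPerSecond = 40e9;    // 40 Gbit/s pipe
const idealSeconds = (bytesToMove * 8) / linkBitsPerSecond;
const idealDays = idealSeconds / 86_400;

console.log(idealDays.toFixed(1)); // ≈ 13.9 days at 100% line rate
// Real-world protocol overhead, retries, and babysitting easily stretch
// that toward the three-week estimate mentioned above.
```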