More from Making software better without sacrificing user experience.
Bringing dwm Shortcuts to GNOME
2023-11-02

The dwm window manager is my standard "go-to" for most of my personal laptop environments. For desktops with larger, higher resolution monitors I tend to lean towards using GNOME. The GNOME DE is fairly solid for my own purposes. This article isn't going to deep dive into GNOME itself, but instead highlight some minor configuration changes I make to mimic a few dwm shortcuts.

For reference, I'm running GNOME 45.0 on Ubuntu 23.10.

Setting Up Fixed Workspaces

When I use dwm I tend to have a hard-set number of tags to cycle through (normally 4-5). Unfortunately, dynamic workspaces (i.e. tags) are the default in GNOME. My personal preference is a fixed number, which we can set by opening Settings > Multitasking and selecting "Fixed number of workspaces".

[Screenshot of GNOME's Multitasking Settings GUI]

Setting Our Keybindings

Now all that is left is to mimic the dwm keyboard shortcuts, in this case: ALT + $num for switching between workspaces and ALT + SHIFT + $num for moving windows across workspaces. These keyboard shortcuts can be altered under Settings > Keyboard > View and Customize Shortcuts > Navigation. You'll want to edit both "Switch to workspace n" and "Move window to workspace n".

[Screenshot of GNOME's keyboard shortcut GUI: switch to workspace]
[Screenshot of GNOME's keyboard shortcut GUI: move window to workspace]

That's it. You're free to include even more custom keyboard shortcuts (open web browser, lock screen, hibernate, etc.), but this is a solid starting point. Enjoy tweaking GNOME!
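If you'd rather script this than click through Settings, the same changes can be made with gsettings. This is a rough sketch for a four-workspace setup; the keys are the standard GNOME/Mutter schema keys, but it's worth double-checking them on your install with `gsettings list-recursively | grep workspace`:

    # Use a fixed number of workspaces instead of dynamic ones
    gsettings set org.gnome.mutter dynamic-workspaces false
    gsettings set org.gnome.desktop.wm.preferences num-workspaces 4

    # ALT+1..4 switches workspaces, ALT+SHIFT+1..4 moves the focused window.
    # Note: on some layouts GNOME stores SHIFT+number as the shifted symbol
    # (e.g. "<Shift><Alt>exclam"), so the GUI may record the move shortcuts
    # more reliably than this loop does.
    for i in 1 2 3 4; do
        gsettings set org.gnome.desktop.wm.keybindings "switch-to-workspace-$i" "['<Alt>$i']"
        gsettings set org.gnome.desktop.wm.keybindings "move-to-workspace-$i" "['<Shift><Alt>$i']"
    done

The nice part of the gsettings route is that you can drop it into a dotfiles script and get the same dwm-ish behaviour on a fresh machine.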
Installing Older Versions of MongoDB on Arch Linux
2023-09-11

I've recently been using Arch Linux for my main work environment on my ThinkPad X260. It's been great. As someone who is constantly drawn to minimalist operating systems such as Alpine or OpenBSD, it's nice to use something like Arch that boasts that same minimalist approach but with greater documentation/support.

Another major reason for the switch was the need to run older versions of "services" locally. Most people would simply suggest using Docker or vmm, but I personally run projects in self-contained, personalized directories on my system itself. I am aware of the irony in that statement... but that's just my personal preference. So I thought I would share my process of setting up an older version of MongoDB (3.4 to be precise) on Arch Linux.

AUR to the Rescue

You will need to target the specific version of MongoDB using the very awesome AUR packages:

    yay -S mongodb34-bin

Follow the instructions and you'll be good to go. Don't forget to create the /data/db directory and give it proper permissions:

    mkdir -p /data/db
    chmod -R 777 /data/db

What About My "Tools"?

If you plan to use MongoDB, then you most likely want the core database tools (mongorestore, mongodump, etc.). The problem is that you can't use the default mongodb-tools package when working with older versions of MongoDB itself. The package will complain about conflicts and ask you to override your existing version. This is not what we want. So, you'll have to build from source locally:

    git clone https://github.com/mongodb/mongo-tools
    cd mongo-tools
    ./make build

Then you'll need to copy the built executables into the proper directory in order to use them from the terminal:

    cp bin/* /usr/local/bin/

And that's it! Now you can run mongod directly or use systemctl to enable it by default. Hopefully this helps anyone else curious about running older (or even outdated!) versions of MongoDB.
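Once the package and tools are in place, a quick sanity check never hurts. Here's a minimal sketch; the systemd unit name below is an assumption, so check what mongodb34-bin actually ships (for example with pacman -Ql mongodb34-bin):

    mongod --version        # should report db version v3.4.x
    mongodump --version     # confirms the locally built tools are on your PATH

    # Run the server in the foreground against the data directory created above
    mongod --dbpath /data/db

    # Or let systemd manage it (unit name may differ depending on the package)
    sudo systemctl enable --now mongodb.service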
Converting HEIF Images with macOS Automator
2023-07-21

When you save or export photos from iOS to iCloud, they often end up in HEIF or HEIC format. Both macOS and iOS have no problem working with these formats, but a lot of software programs will not even recognize these filetypes. The obvious step would just be to convert them via an application or online service, right? Not so fast! Wouldn't it be much cleaner if we could simply right-click our HEIF or HEIC files and convert them directly in Finder? Well, I've got some good news for you...

Basic Requirements

- You will need to have Homebrew installed
- You will need to install the libheif package through Homebrew: brew install libheif

Creating Our Custom Automator Script

For this example script we are going to convert the image to JPG format. You can freely change this to whatever format you wish (PNG, TIFF, etc.). We're just keeping things basic for this tutorial. Don't worry if you've never worked with Automator before, because setting things up is incredibly simple.

1. Open the macOS Automator from the Applications folder
2. Select "Quick Action" from the first prompt
3. Set "Workflow receives current" to "image files"
4. Set the "in" dropdown to "Finder"
5. From the left pane, select Library > Utilities
6. From the presented choices in the next pane, drag and drop "Run Shell Script" into the far right pane
7. Set "Pass input" to "as arguments"
8. Enter the following code as your script and type ⌘-S to save (name it something like "Convert HEIC/HEIF to JPG")

    for f in "$@"
    do
        /opt/homebrew/bin/heif-convert "$f" "${f%.*}.jpg"
    done

Making Edits

If you ever have the need to edit this script (for example, changing the default format to PNG), you will need to navigate to your ~/Library/Services folder and open your custom HEIF Quick Action in the Automator application. Simple as that. Happy converting!

If you're interested, I also have some other Automator scripts available:

- Batch Converting Images to webp with macOS Automator
- Convert Files to HTML with macOS Automator Quick Actions
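If you only need a one-off batch conversion and don't feel like setting up a Quick Action, the same heif-convert binary works straight from the terminal. A small sketch, assuming Homebrew on Apple silicon (Intel Macs install to /usr/local/bin instead):

    # Convert every HEIC file in the current folder to JPG,
    # leaving the original files untouched.
    for f in *.heic *.HEIC; do
        [ -e "$f" ] || continue   # skip the literal pattern if nothing matches
        /opt/homebrew/bin/heif-convert "$f" "${f%.*}.jpg"
    done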
Blogging for 7 Years
2023-06-24

My first public article was posted on June 28th 2016. That was seven years ago. In that time, quite a lot has changed in my life both personally and professionally. So, I figured it would be interesting to reflect on these years and document it for my own personal records. My hope is that this is something I could start doing every 5 or 10 years (if I can keep going that long!). This way, my blog also serves as a "time capsule" or museum of the past...

Fun Facts

This Blog:

I originally started blogging on bradleytaunt.com using WordPress, but since then I have changed both my main domain and blog infrastructure multiple times. At a glance I have used:

- Jekyll
- Hugo
- Blot
- Static HTML/CSS
- PHPetite
- Shinobi
- pblog
- barf (currently using!)

Personal:

As with anyone over time, the personal side of my life has seen the biggest updates:

- Married the love of my life (after knowing each other for ~14 years!)
- Moved out into rural Ontario for some peace and quiet
- Had three wonderful kids with said wife (two boys and a girl)
- Started noticing grey sprinkles in my stubble (I guess I can officially call myself a "grey beard"?)

Professionally:

- Pivoted heavily into UX research and design for a handful of years (after working mostly with web front-ends)
- Recently switched back into a more fullstack development role to challenge myself and learn more

Nothing Special

This post isn't anything ground-breaking, but for me it's nice to reflect on the time passed and remember how much can change in such little time. Hopefully I'll be right back here in another 7 years and maybe you'll still be reading along with me!
More in programming
If Git had a nemesis, it'd be large files. Large files bloat Git's storage, slow down git clone, and wreak havoc on Git forges. In 2015, GitHub released Git LFS—a Git extension that hacked around problems with large files. But Git LFS added new complications and storage costs. Meanwhile, the Git project has been quietly working on large files. And while LFS ain't dead yet, the latest Git release shows the path towards a future where LFS is, finally, obsolete.

What you can do today: replace Git LFS with Git partial clone

Git LFS works by storing large files outside your repo. When you clone a project via LFS, you get the repo's history and small files, but skip large files. Instead, Git LFS downloads only the large files you need for your working copy.

In 2017, the Git project introduced partial clones that provide the same benefits as Git LFS:

"Partial clone allows us to avoid downloading [large binary assets] in advance during clone and fetch operations and thereby reduce download times and disk usage." – Partial Clone Design Notes, git-scm.com

Git's partial clone and LFS both make for:

- Small checkouts – On clone, you get the latest copy of big files instead of every copy.
- Fast clones – Because you avoid downloading large files, each clone is fast.
- Quick setup – Unlike shallow clones, you get the entire history of the project—you can get to work right away.

What is a partial clone?

A Git partial clone is a clone with a --filter. For example, to avoid downloading files bigger than 100KB, you'd use:

    git clone --filter='blob:limit=100k' <repo>

Later, Git will lazily download any files over 100KB you need for your checkout.

By default, if I git clone a repo with many revisions of a noisome 25 MB PNG file, then cloning is slow and the checkout is obnoxiously large:

    $ time git clone https://github.com/thcipriani/noise-over-git
    Cloning into '/tmp/noise-over-git'...
    ...
    Receiving objects: 100% (153/153), 1.19 GiB

    real    3m49.052s

Almost four minutes to check out a single 25MB file!

    $ du --max-depth=0 --human-readable noise-over-git/.
    1.3G    noise-over-git/.
    $ ^ 🤬

And 50 revisions of that single 25MB file eat 1.3GB of space. But a partial clone side-steps these problems:

    $ git config --global alias.pclone 'clone --filter=blob:limit=100k'
    $ time git pclone https://github.com/thcipriani/noise-over-git
    Cloning into '/tmp/noise-over-git'...
    ...
    Receiving objects: 100% (1/1), 24.03 MiB

    real    0m6.132s

    $ du --max-depth=0 --human-readable noise-over-git/.
    49M     noise-over-git/
    $ ^ 😻 (the same size as a git lfs checkout)

My filter made cloning 97% faster (3m 49s → 6s), and it reduced my checkout size by 96% (1.3GB → 49M)!

But there are still some caveats here. If you run a command that needs data you filtered out, Git will need to make a trip to the server to get it. So, commands like git diff, git blame, and git checkout will require a trip to your Git host to run. But, for large files, this is the same behavior as Git LFS. Plus, I can't remember the last time I ran git blame on a PNG 🙃.
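If you're curious how a partial clone is wired up under the hood, the filter is just ordinary git config, and you can list exactly which objects were skipped. A sketch, assuming a reasonably recent Git (the --refetch flag needs Git 2.36 or newer); treat the "drop the filter" step as a starting point rather than gospel:

    # The clone's filter and promisor settings are plain config entries
    git config --get remote.origin.promisor              # -> true
    git config --get remote.origin.partialclonefilter    # -> blob:limit=100k

    # List objects that were filtered out and not yet downloaded
    # (missing objects are printed with a leading "?")
    git rev-list --objects --all --missing=print | grep '^?'

    # Checking out or diffing one of those files triggers an on-demand fetch.
    # To turn the partial clone back into a full clone, drop the filter and
    # refetch everything:
    git config --unset remote.origin.partialclonefilter
    git fetch --refetch origin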
Why go to the trouble? What's wrong with Git LFS?

Git LFS foists Git's problems with large files onto users. And the problems are significant:

- 🖕 High vendor lock-in – When GitHub wrote Git LFS, the other large file systems—Git Fat, Git Annex, and Git Media—were agnostic about the server-side. But GitHub locked users to their proprietary server implementation and charged folks to use it.[1]
- 💸 Costly – GitHub won because it let users host repositories for free. But Git LFS started as a paid product. Nowadays, there's a free tier, but you're dependent on the whims of GitHub to set pricing. Today, a 50GB repo on GitHub will cost $40/year for storage. In contrast, storing 50GB on Amazon's S3 standard storage is $13/year.
- 😰 Hard to undo – Once you've moved to Git LFS, it's impossible to undo the move without rewriting history.
- 🌀 Ongoing set-up costs – All your collaborators need to install Git LFS. Without Git LFS installed, your collaborators will get confusing, metadata-filled text files instead of the large files they expect.

The future: Git large object promisors

Large files create problems for Git forges, too. GitHub and GitLab put limits on file size[2] because big files cost more money to host. Git LFS keeps server-side costs low by offloading large files to CDNs. But the Git project has a new solution.

Earlier this year, Git merged a new feature: large object promisors. Large object promisors aim to provide the same server-side benefits as LFS, minus the hassle to users.

"This effort aims to especially improve things on the server side, and especially for large blobs that are already compressed in a binary format. This effort aims to provide an alternative to Git LFS" – Large Object Promisors, git-scm.com

What is a large object promisor?

Large object promisors are special Git remotes that only house large files. In the bright, shiny future, large object promisors will work like this:

1. You push a large file to your Git host.
2. In the background, your Git host offloads that large file to a large object promisor.
3. When you clone, the Git host tells your Git client about the promisor.
4. Your client will clone from the Git host, and automagically nab large files from the promisor remote.

But we're still a ways off from that bright, shiny future. Git large object promisors are still a work in progress. Pieces of large object promisors merged to Git in March of 2025. But there's more to do and open questions yet to answer. And so, for today, you're stuck with Git LFS for giant files. But once large object promisors see broad adoption, maybe GitHub will let you push files bigger than 100MB.

The future of large files in Git is Git. The Git project is thinking hard about large files, so you don't have to. Today, we're stuck with Git LFS. But soon, the only obstacle for large files in Git will be your half-remembered, ominous hunch that it's a bad idea to stow your MP3 library in Git.

Edited by Refactoring English

[1] Later, other Git forges made their own LFS servers. Today, you can push to multiple Git forges or use an LFS transfer agent, but all this makes set up harder for contributors. You're pretty much locked-in unless you put in extra effort to get unlocked.
[2] File size limits: 100MB for GitHub, 100MB for GitLab.com.
Conrad Irwin has an article on the Zed blog, "Why LLMs Can't Really Build Software". He says it boils down to this:

"the distinguishing factor of effective engineers is their ability to build and maintain clear mental models"

We do this by:

- Building a mental model of what you want to do
- Building a mental model of what the code does
- Reducing the difference between the two

It's kind of an interesting observation about how we (as humans) problem solve vs. how we use LLMs to problem solve:

- With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution.
- With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself.

One adds information — complexity you might even say — to solve a problem. The other eliminates it.

You know what that sort of makes me think of? NPM driven development. Solving problems with LLMs is like solving front-end problems with NPM: the "solution" comes through installing more and more things — adding more and more context, i.e. more and more packages.

- LLM: Problem? Add more context.
- NPM: Problem? There's a package for that.

Contrast that with a solution that comes through simplification. You don't add more context. You simplify your mental model so you need less to solve a problem — if you solve it at all, perhaps you eliminate the problem entirely! Rather than install another package to fix what ails you, you simplify your mental model, which often eliminates the problem you had in the first place; thus eliminating the need to solve any problem at all, or to add any additional context or complexity (or dependency).

As I'm typing this, I'm thinking of that image of the evolution of the Raptor engine, where it evolved in simplicity. This stands in contrast to my work with LLMs, which often want more and more context from me to get to a generative solution.

I know, I know. There's probably a false equivalence here. This entire post started as a note and I just kept going. This post itself needs further thought and simplification. But that'll have to come in a subsequent post, otherwise this never gets published lol.
Measuring, analyzing, and optimizing loops using Linux perf, Top-Down Microarchitectural Analysis, and the CPU’s micro-op cache
You can just change things! That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome? Well, yes! We've made a micro fork of Chromium for Omarchy (our new 37signals Linux distribution). Just to add one feature needed for live theming. And now it's released as a package anyone can install on any flavor of Arch using the AUR (Arch User Repository). We got it all done in just four days. From idea, to solicitation, to successful patch, to release, to incorporation. And now it'll be part of the next release of Omarchy. There are no speed limits in open source. Nobody to ask for permission. You have the code, so you can make the change. All you need is skill and will (and maybe, if you need someone else to do it for you, a $5,000 incentive 😄).
Jan Miksovsky lays out his idea for website creation as content transformation. He starts by talking about tools that hide what's happening "under the hood":

"A framework's marketing usually pretends it is unnecessary for you to understand how its core transformation works — but without that knowledge, you can't achieve the beautiful range of results you see in the framework's sample site gallery."

This is a great callout. Tools will say, "You don't have to worry about the details." But the reality is, you end up worrying about the details — at least to some degree. Why? Because what you want to build is full of personalization. That's how you differentiate yourself, which means you're going to need a tool that's expressive enough to help you.

So the question becomes: how hard is it to understand the details that are being intentionally hidden away? A lot of the time those details are not exposed directly. Instead they're exposed through configuration. But configuration doesn't really help you learn how something works. I mean, how many of you have learned how TypeScript works under the hood by using tsconfig.json? As Jan says:

"Configuration can lead to as many problems as it solves"

Nailed it. He continues:

"Configuring software is itself a form of programming, in fact a rather difficult and often baroque form. It can take more data files or code to configure a framework's transformation than to write a program that directly implements that transformation itself."

I'm not a DevOps person, but that sounds like DevOps in a nutshell right there. (It also perfectly encapsulates my feelings on trying to set up configuration in GitHub Actions.)

Jan moves beyond site creation to also discuss site hosting. He gives good reasons for keeping your website's architecture simple and decoupled from your hosting provider (something I've been a long time proponent of):

"These site hosting platforms typically charge an ongoing subscription fee. (Some offer a free tier that may meet your needs.) The monthly fee may not be large, but it's forever. Ten years from now you'll probably still want your content to be publicly available, but will you still be happy paying that monthly fee? If you stop paying, your site disappears."

In subscription pricing, any price (however small) is recurring. Stated differently: pricing is forever.

Anyhow, it's a good read from Jan and lays out his vision for why he's building Web Origami: a tool that encourages you to understand (and customize) how you transform content to a website. He just launched version 0.4.0, which has some exciting stuff I'm excited to try out further (I'll have to write about all that later).