More from ./techtipsy
I’ve written about abusing USB storage devices in the past, with a passing mention that I’m too cheap to buy an IODD device. Then I bought one. I’ve always liked the promise of tools like Ventoy: you only need to carry the one storage device that boots anything you want. Unfortunately I still can’t trust Ventoy, so I’m forced to look elsewhere.

The hardware

I decided to get the IODD ST400 for 122 EUR (about 124 USD) off of Amazon Germany, since it was for some reason cheaper than getting it from iodd.shop directly. SATA SSDs are cheap and plentiful, so the ST400 made the most sense to me. The device came with one USB cable, with type A and type C ends. The device itself has a USB type C port, which I like a lot. The buttons are functional and clicky, but incredibly loud.

Setting it up

Before you get started with this device, I highly recommend glancing over the official documentation. The text is poorly translated in some parts, but overall it gets the job done. Inserting the SSD was reasonably simple: it slotted in well and did not move around after assembly. Getting the back cover off was tricky, but I’d rather have that than deal with a loose back cover that comes off when it shouldn’t.

The most important step is the filesystem choice. You can choose between NTFS, FAT32 and exFAT. Due to the 4 GB maximum file size limitation on FAT32, you will probably want to go with either NTFS or exFAT. Once you have a filesystem on the SSD, you can start copying various installers and tools onto it and mount them!

The interface is unintuitive. I had to keep the manual close when testing mine, but eventually I figured out what I can and cannot do.

Device emulation

Whenever you connect the IODD device to a powered-on PC, it presents itself as multiple devices:

- normal hard drive: the whole IODD filesystem is visible here, and you can also store other files and backups here if you want to
- optical media drive: this is where your installation media (ISO files) ends up, read-only
- virtual drives (up to 3 at a time): VHD files that represent virtual hard drives, but are seen as actual storage devices on the PC

This combination of devices is incredibly handy. For example, you can boot an actual Fedora Linux installation as one of the virtual drives and make a backup of the files on the PC right to the IODD storage itself. S.M.A.R.T. information also seems to be passed through properly for the disk that’s inside.

Tech tip: to automatically mount your current selection of virtual drives and ISO file at boot, hold down the “9” button for about 3 seconds. The button also has an exit logo on it. Without this step, booting an ISO or virtual drive becomes tricky, as you’ll have to spam the “select boot drive” key on the PC while also navigating the menus on the IODD device to mount the ISO.

The performance is okay. The drive speeds are limited to SATA II speeds, which means that your read/write speeds cap out at about 250 MB/s. Latency will depend a lot on the drive, but it stays mostly in the sub-millisecond range on my SSD. The GNOME Disks benchmark does show a notable chunk of reads having a 5 millisecond latency. The drive does not seem to exhibit any throttling under sustained loads, so at least it’s better than a normal USB stick. The speeds seem to be the same for all emulated devices, with latencies and speeds being within spitting distance of each other.
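Coming back to the filesystem step from the setup section: if you’re preparing the SSD from a Linux machine instead of Windows, exFAT can be set up with the usual partitioning and formatting tools. A minimal sketch, assuming the IODD shows up as /dev/sdX (check lsblk first and replace the device name; the partition label is just an example):

    # Identify the right disk first; formatting the wrong device is unrecoverable.
    lsblk -o NAME,SIZE,MODEL

    # Create a single partition spanning the whole disk, then format it as exFAT.
    sudo parted --script /dev/sdX mklabel gpt mkpart primary 1MiB 100%
    sudo mkfs.exfat -L IODD /dev/sdX1

mkfs.exfat comes from the exfatprogs package on most distributions; with older exFAT tools the label flag may be -n instead of -L.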
The firmware sucks, actually

The IODD ST400 is a great idea that’s been turned into a good product, but the firmware is terrible enough to almost make me regret the purchase.

The choice of filesystems available (FAT32, NTFS, exFAT) is very Windows-centric, but at least it comes with the upside of being supported on most popular platforms, including Linux and Mac. Not great, not terrible.

The folder structure has some odd limitations. For example, you can only have 32 items within a folder. If you have more than that, you have to use nested folders. This sounds like a hard cap written somewhere within the device firmware itself. I’m unlikely to hit such limits myself, and it doesn’t seem to affect the actual storage; the device itself just isn’t able to handle that many files within a directory listing.

The most annoying issue has turned out to be defragmentation. In 2025! It’s a known limitation that’s handily covered in the official IODD documentation. On Windows, you can fix it by using a disk defragmentation tool, which is really not recommended on an SSD. On Linux, I have not yet found a way to do that, so I’ve resorted to simply making a backup of the contents of the drive, formatting the disk, and copying it all back again. This is a frustrating issue that only comes up when you try to use a virtual hard drive. It would absolutely suck to hit this error while in the field.

The way virtual drives are handled is also less than ideal. You can only use fixed VHD files that are not sparse, which again seems to be a limitation of the firmware.

Tech tip: if you’re on Linux and want to convert a raw disk image (such as a disk copied with dd) to a VHD file, you can use a command like this one:

    qemu-img convert -f raw -O vpc -o subformat=fixed,force_size source.img target.vhd

The firmware really is the worst part of this device. What I would love to see is a device like the IODD but with free and open source firmware. Ventoy has proven that there is a market for a solution that makes juggling installation media easy, but it can’t emulate hardware devices. An IODD-like device can.

Encryption and other features

I didn’t test these because I don’t really need them; my Linux installers hardly need protecting from prying eyes.

Conclusion

The IODD ST400 is a good device with a proven market, but the firmware makes me refrain from outright recommending it to everyone, at least not at this price. If it were to cost something like 30-50 EUR/USD, I would not mind the firmware issues at all.
When you’re dealing with a particularly large service with a slow deployment pipeline (15-30 minutes) and a rollback delay of up to 10 minutes, you’re going to need feature toggles (some also call them feature flags) to turn those half-hour nerve-racking major incidents into a small whoopsie-daisy that you can fix in a few seconds.

Make a change, gate it behind a feature toggle, release, enable the feature toggle and monitor the impact. If there is an issue, you can immediately roll it back with one HTTP request (or database query [1]). If everything looks good, you can remove the usage of the feature toggle from your code and move on with other work. Need to roll out the new feature gradually? Implement the feature toggle as a percentage and increase it as you go.

It’s really that simple, and you don’t have to pay 500 USD a month to get similar functionality from a service provider and make critical paths in your application depend on them [2]. As my teammate once said, our service is perfectly capable of breaking down on its own.

All you really need is one database table containing the keys and values for the feature toggles, and two HTTP endpoints: one to GET the current value of a feature toggle, and one to POST a new value for an existing one. New feature toggles will be introduced using tools like Flyway or Liquibase, and the same method can also be used for deleting them later on. You can also add convenience columns containing timestamps, such as created and modified, to track when a toggle was introduced and when it was last changed.

However, there are a few considerations to take into account when setting up such a system.

Feature toggles implemented as database table rows can work fantastically, but you should also monitor how often they get used. If you implement a feature toggle on a hot path in your service, then you can easily generate thousands of queries per second. A properly set up feature toggle system can sustain that without any issues on any competent database engine, but you should still monitor the impact and remove unused feature toggles as soon as possible.

For hot code paths (1000+ requests/second) you might be better off implementing feature toggles as application properties. There’s no call to the database and reading a static property is darn fast, but you lose the ability to update it while the application is running. Alternatively, you can rely on the same database-based feature toggle system and keep a cached copy in memory, refreshing it from time to time. Toggling won’t be as responsive, as it will depend on the cache expiry time, but the reduced load on the database is often worth it.

If your service receives contributions from multiple teams, or you have very anxious product managers who fill your backlog faster than you can say “story points”, then it’s a good idea to also introduce expiration dates for your feature toggles, with ample warning time to properly remove them. Using this method, you can make sure that old feature toggles get properly removed, as there is no better prioritization reason than a looming major incident. You don’t want them to stick around for years on end; that’s just wasteful and clutters up your codebase.

If your feature toggling needs are a bit more complicated, then you may need to invest more time in your DIY solution, or you can use one of the SaaS options if you really want to; just account for the added expense and reliance on yet another third-party service.
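To make the shape of this concrete, here’s a rough sketch of what the database side and the two endpoints could look like in practice. Everything here is illustrative: the table, column and endpoint names are made up, and a Postgres database is assumed.

    # Schema for the toggle table, applied via a Flyway/Liquibase-style migration in practice.
    psql "$DATABASE_URL" -c "
      CREATE TABLE feature_toggle (
        key      text PRIMARY KEY,
        value    text NOT NULL,
        created  timestamptz NOT NULL DEFAULT now(),
        modified timestamptz NOT NULL DEFAULT now()
      );"

    # Read the current value of a toggle.
    curl -s https://my-service.example.com/internal/feature-toggles/new-billing-flow

    # Roll the change back with one request during an incident.
    curl -s -X POST https://my-service.example.com/internal/feature-toggles/new-billing-flow -d 'value=false'

The GET endpoint only needs to select the value for a key, and the POST updates it and bumps the modified timestamp; a percentage-based rollout is just a value like 25 that the application can compare against something stable, such as a hash of the user ID.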
At work, I help manage a business-critical monolith that handles thousands of requests per second during peak hours, and the simple approach has served us very well. All it took was one motivated developer and about a day to implement, document and communicate the solution to our stakeholders. Skip the latter two steps, and you can be done within two hours, tops.

[1] Letting inexperienced developers touch the production database is a fantastic way to take down your service, and a very expensive way to learn about database locks. ↩︎

[2] I hate to refer to specific Hacker News comments like this, but there’s just something about paying 6000 USD a year for such a service that I just can’t understand. Has the Silicon Valley mindset gone too far? Or are US-based developers just way too expensive, resulting in these types of services sounding reasonable? You can hire a senior developer in Estonia for that amount of money for 2-3 weeks (including all taxes), and they can pop in and implement a feature toggle system in a few hours at most. The response comment with the status page link highlighting multiple outages for LaunchDarkly is the cherry on top. ↩︎
I liked Ubuntu. For a very long time, it was the sensible default option. Around 2016, I used the Ubuntu GNOME flavor, and after they ditched the Unity desktop environment, GNOME became the default desktop. I was really happy with it, both for work and personal computing needs. Estonian ID card software was also officially supported on Ubuntu, which made Ubuntu a good choice for family members. But then something changed.

Upgrades suck

Like many Ubuntu users, I stuck to the long-term support releases and upgraded every two years to the next major version. There was just one tiny little issue: every upgrade broke something. Usually it was a relatively minor issue, with some icons, fonts or themes being a bit funny. Sometimes things went completely wrong.

The worst upgrade was the one I did on my mother’s laptop. During the upgrade process from Ubuntu 20.04 to 22.04, everything blew up spectacularly. The UI froze and the machine was completely unresponsive. After a 30-minute wait and a forced restart, the installation was absolutely fucked. In frustration, I ended up installing Windows so that I wouldn’t have to support Ubuntu.

Another family member, another upgrade. This is one that they did themselves on Lubuntu 18.04, upgrading to the latest version. The result: Firefox shortcuts stopped working, the status bar contained duplicate icons, and random errors popped up after logging in. After making sure that the ID card software works on Fedora 40, I installed that instead. All they need is a working browser, and that’s too difficult for Ubuntu to handle.

Snaps ruined Ubuntu

Snaps. I hate them. They sound great in theory, but the poor implementation and heavy-handed push by Canonical have been a mess.

Snaps auto-update by default. Great for security [1], but horrible for users who want to control what their personal computer is doing. Snaps get forced upon users as more and more system components are switched from Debian-based packages to Snaps, which breaks compatibility and functionality and introduces a lot of new issues. You can upgrade your Ubuntu installation and then discover that your browser is now contained within a Snap, the desktop shortcut for it doesn’t work, and your government ID card no longer works for logging in to your bank.

Snaps also destroy productivity. A colleague was struggling to get any work done because the desktop environment on their Ubuntu installation was flashing certain UI elements, being unresponsive and blocking them from doing any work. Apparently the whole GNOME desktop environment is a Snap now, and that led to issues. The fix was super easy, barely an inconvenience:

- roll back to the previous version of the GNOME snap
- restart
- still broken
- update to the latest version again
- restart
- still broken
- restart again
- it is fixed now

What was the issue? Absolutely no clue, but a day’s worth of developer productivity was completely wasted. Some of these issues have probably been fixed by now, but if I executed migration projects at my day job with a similar track record, I would be fired [2].

Snaps done right: Flatpak

Snaps can be implemented in a way that doesn’t suck for end users. It’s called Flatpak. Flatpaks work reasonably well, you can update them whenever you want, and they are optional. Your Firefox installation won’t suddenly turn into a Flatpak overnight. On the Steam Deck, Flatpaks are the main distribution method for user-installed apps, and I don’t mind that at all.
The only issue is the software selection: not every app is available as a Flatpak just yet.

Consider Fedora

Fedora works fine. It’s not perfect, but I like it. At this point I’ve used it for longer than Ubuntu, and unless IBM ruins it for all of us, I think it will be a perfectly cromulent distro to get work done on. Hopefully it’s not too late for Canonical to reconsider their approach to building a Linux distro.

[1] The xz backdoor demonstrated that getting the latest versions of all software can also be problematic from the security angle. ↩︎

[2] Technical failures themselves are not the issue, but not responding to users’ feedback and not testing things certainly is, especially if you keep repeatedly making the same mistake. ↩︎
In November 2024, my blog was down for over 24 hours. Here’s what I learned from this absolute clusterfuck of an incident.

Lead-up to the incident

I was browsing through photos on my Nextcloud instance. Everything was fine, until Nextcloud started generating preview images for older photos. This process is quite resource intensive, but generally manageable. However, this time the images were high quality photos in the 10-20 MB size range. Nextcloud crunched through those, but spawned so many processes that it ended up using all the available memory on my home server. And thus, the server was down.

This could have been solved by a forced reboot. Things were complicated by the simple fact that I was 120 kilometers away from my server, and I had no IPMI-like device set up. So I waited. 50 minutes later, I successfully logged in to my server over SSH again! The load averages were in the three-digit realm, but the system was mostly operational. I thought that it would be a good idea to restart the server, since who knows what might’ve gone wrong while the server was handling the out-of-memory situation. I reboot. The server doesn’t seem to come back up. Fuck.

The downtime

The worst part of the downtime was that I was simply unable to immediately fix it due to being 120 kilometers away from the server. My VPN connection back home was also hosted right there on the server, using this Docker image. I eventually got around to fixing this issue the next day when I could finally get hands-on with the server, my trusty ThinkPad T430.

I open the lid and am greeted with the console login screen. This means that the machine did boot. I log in to the server over SSH and quickly open htop. My htop configuration shows metrics like systemd state, and it was showing 20+ failed services. This is very unusual. lsblk and mount show that the storage is there.

What was the issue? Well, apparently the Docker daemon was not starting. I searched for the error messages and ended up on this GitHub issue. I tried the fix, which involved deleting the Docker folder with all the containers and configuration, and restarted the daemon and containers. Everything is operational once again. I then rebooted the server. Everything is down again, with the same issue.

And thus began an 8+ hour troubleshooting session that ran late into the night. 04:00-ish late, on a Monday. I tried everything that I could come up with:

- used the btrfs Docker storage driver instead of the default overlay one
  - Docker is still broken after a reboot
- replaced everything with podman
  - I could not get podman to play well with my containers and IPv6 networking
- considered switching careers
  - tractors are surprisingly expensive!

I’m unable to put into words how frustrating this troubleshooting session was. The sleep deprivation, the lack of helpful information, the failed attempts at finding solutions. I’m usually quite calm and very rarely feel anger, but during these hours I felt enraged.

The root cause

The root cause will make more sense after you understand the storage setup I had at the time. The storage on my server consisted of four 4 TB SSDs: two were mounted inside the laptop, and the remaining two were connected via USB-SATA adapters. The filesystem in use was btrfs, both on the OS drive and on the 4x 4 TB storage pool. To avoid hitting the OS boot drive with unnecessary writes, I had moved the Docker data root to a separate btrfs subvolume on the main storage pool.

What was the issue?
Apparently the Docker daemon on Fedora Server is able to start up before every filesystem has been mounted. In this case, the Docker daemon started up before the subvolume containing all the Docker images, containers and networks was mounted. I tested this theory by moving the Docker storage back to /var/lib/docker, which lives on the root filesystem, and after a reboot everything remained functional.

In the past, I ran a similar setup, but with the Docker storage on the SATA SSDs that are mounted inside the laptop over a native SATA connection. With the addition of the two USB-connected SSDs, the mounting process took longer for the whole pool, which resulted in a race condition between the Docker daemon startup and the storage being mounted.

Fixing the root cause

The fix for Docker starting up before all of your storage is mounted is actually quite elegant. The Docker service definition is contained in /etc/systemd/system/docker.service. You can override this configuration by creating a new directory at /etc/systemd/system/docker.service.d and dropping a file with the name override.conf in there with the following contents:

    [Unit]
    RequiresMountsFor=/containerstorage

The rest of the service definition remains the same, and your customized configuration won’t be overwritten by a Docker version update. The RequiresMountsFor setting prevents the Docker service from starting up before that particular mount exists. You can specify multiple mount points on the same line, separated by spaces.

    [Unit]
    RequiresMountsFor=/containerstorage /otherstorage /some/other/mountpoint

You can also specify the mount points over multiple lines if you prefer.

    [Unit]
    RequiresMountsFor=/containerstorage
    RequiresMountsFor=/otherstorage
    RequiresMountsFor=/some/other/mountpoint

If you’re using systemd unit files for controlling containers, then you can use the same systemd setting to prevent your containers from starting up before the storage that a container depends on is mounted.

Avoiding the out of memory incident

Nextcloud taking down my home server for 50 minutes was not the root cause; it only highlighted an issue that had been there for days at that point. That doesn’t mean that this area can’t be improved. After this incident, every Docker Compose file that I use includes resource limits on all containers. When defining the limits, I started with very conservative values based on the average resource usage observed in docker stats output. Over the past few months I’ve had to continuously tweak the limits, especially the memory ones, due to the containers themselves running out of memory when the limits were set too low. Apparently software is getting increasingly more resource hungry.

An example Docker Compose file with resource limits looks like this:

    name: nextcloud
    services:
      nextcloud:
        container_name: nextcloud
        volumes:
          - /path/to/nextcloud/stuff:/data
        deploy:
          resources:
            limits:
              cpus: "4"
              memory: 2gb
        image: docker.io/nextcloud:latest
        restart: always
      nextcloud-db:
        container_name: nextcloud-db
        volumes:
          - /path/to/database:/var/lib/postgresql/data
        deploy:
          resources:
            limits:
              cpus: "4"
              memory: 2gb
        image: docker.io/postgres:16
        restart: always

In this example, each container is able to use up to 4 CPU cores and a maximum of 2 GB of memory. And just like that, Nextcloud is unable to take down my server by eating up all the available memory.
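If you need a starting point for those limit values, a one-off snapshot of what each container is currently using can be pulled from docker stats; a minimal sketch (the format string just trims the output down to the interesting columns):

    # Print a single snapshot of per-container CPU and memory usage, without live updates.
    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

From there, the limits in the Compose file can be set comfortably above the observed averages and tightened over time.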
Yes, I’m aware of the Preview Generator Nextcloud app. I have it, but over multiple years of running Nextcloud, I have not found it to be very effective against the resource-hungry preview image generation happening during user interactions.

Decoupling my VPN solution from Docker

With this incident, it also became clear that running the gateway to your home network inside a container is a really stupid idea. I’ve mitigated this issue by taking the WireGuard configuration generated by the container and moving it to the host (a rough sketch of that is included at the end of this post). I also used this as an opportunity to get a to-do list item done and used this guide to add IPv6 support inside the virtual WireGuard network. I can now access IPv6 networks everywhere I go!

I briefly considered setting WireGuard up on my OpenWrt-powered router, but I decided against it, as I’d like to own one computer that I don’t screw up with my configuration changes.

Closing thoughts

I have not yet faced an incident this severe, even at work. The impact wasn’t that big, I guess a hundred people were not able to read my blog, but the stress levels were off the charts for me during the troubleshooting process.

I’ve long advocated for self-hosting and running basic and boring solutions, with the main benefits being ease of maintenance, ease of troubleshooting and low cost. This incident is a good reminder that even the most basic setups can have complicated issues associated with them. At least I got it fixed and learned about a new systemd unit setting, which is nice. Still better than handling Kubernetes issues.
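As referenced above, here’s roughly what moving the WireGuard setup from the container to the host looks like, assuming the config generated by the container is a standard wg-quick file named wg0.conf (both of those are assumptions; adjust paths and names to your setup):

    # Copy the generated config out of the container's volume to the host (example path).
    sudo cp /path/to/container/config/wg0.conf /etc/wireguard/wg0.conf
    sudo chmod 600 /etc/wireguard/wg0.conf

    # Bring the tunnel up now and on every boot, using the wg-quick unit from wireguard-tools.
    sudo systemctl enable --now wg-quick@wg0

This way the VPN no longer depends on the Docker daemon being healthy.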
I was in a pinch. I needed to make a full disk backup of a PC, but I had no external storage device with me to store it on. The local Wi-Fi network was also way too slow to transfer the disk over it. All I had was my laptop with an Ethernet port, a Fedora Linux USB stick, and a short Ethernet cable.

I took the following steps:

- boot the target machine up with the Fedora Linux installer in a live environment
- modify the SSH configuration on the target machine to allow root user login with a password
  - it’s OK to do this on a temporary setup like this one, but don’t do it on an actual Linux server
- set a password for the root user on the target machine
  - only required because a live environment usually does not set one for the root user
- connect both laptops with the Ethernet cable
- set static IPv4 addresses on both machines using the network settings [1] (a command-line alternative is sketched at the end of this post)
  - edit the “Wired” connection and open the IPv4 tab
  - example IP address on the target: 192.168.100.5
  - example IP address on my laptop: 192.168.100.1
  - make sure to set the netmask to 255.255.255.0 on both!
- verify that the SSH connection to the target machine works
- back up the disk to your local machine using ssh and dd, for example:

    ssh root@192.168.100.5 "dd if=/dev/sda" | dd of=disk-image-backup.iso status=progress

  - replace /dev/sda with the correct drive name!

And just like that, I backed up the 120 GB SSD at gigabit speeds to my laptop. I’ve used a similar approach in the past when switching between laptops by running a live environment on both machines and copying the disk over with dd bit by bit. You’ll also save time by not having to copy the data over twice, first to an external storage device and then to the target device.

[1] There’s probably a simpler way to do this with IPv6 magic, but I have not tested it yet. ↩︎
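As mentioned in the steps above, the static IPv4 addresses can also be set from the terminal with NetworkManager’s nmcli instead of the GUI. A minimal sketch, assuming the wired profile is called “Wired connection 1” (check the actual name with nmcli connection show):

    # On the laptop receiving the backup; use 192.168.100.5/24 on the target machine instead.
    nmcli connection modify "Wired connection 1" ipv4.method manual ipv4.addresses 192.168.100.1/24
    nmcli connection up "Wired connection 1"

The /24 suffix is the same thing as the 255.255.255.0 netmask from the GUI steps.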
More in technology
99% of the time I want my software to be as fast as humanly possible. I want UI elements that respond quickly, and I want keyboard shortcuts to do as much as possible. But 1% of the time, for whatever reason, I just like when a computer takes a long
The age-old combination of physical locks and keys, although reliable, also comes with a few drawbacks, such as when you lose the key or you want to share access with someone else remotely. Davide Gomba has recognized this and built the MKR Keylock project as a way to address some of these shortcomings. Starting with an existing electronic […] The post MKR Keylock is an open-source IoT keypad for your front door appeared first on Arduino Blog.
Week 7 of my humanities crash course had me exploring ancient Mesopotamia with a side trip to northern India. I also watched an Iranian film that had me pondering the meaning of life.

Readings

This week, I read two short ancient texts: the Epic of Gilgamesh and the Dhammapada. Let’s tackle them in order.

I’d never read any ancient Mesopotamian literature, so this was all new to me: the pantheon, story, style, etc. were thrillingly unfamiliar. Gilgamesh is around 1,500 years older than Homer, and it shows: there are lots of repetitive passages and what felt like archaic writing. But human nature hasn’t changed much in 4,700 years. People still love, hate, drink, eat, etc. – and they still fear death.

Gilgamesh is the awe-inspiring, despotic king of Uruk. The gods answer his beleaguered subjects’ prayers in the form of Enkidu, a rival who becomes Gilgamesh’s friend. They embark on several heroic exploits and end up pissing off the gods. As a result, the gods condemn Enkidu to death. Despondent and fearing for his own death, Gilgamesh goes in search of the secret of immortality. His travels take him to Utnapishtim, immortal survivor of the great flood. Our hero finds a plant that restores youth, but loses it. By the end of the story, he accepts his fate as a mortal.

The story moves fast and is surprisingly engaging. It includes early versions of ideas that would resurface later in the Bible. (Most obviously, Noah and the flood.) There’s also some material that probably wouldn’t pass muster in our prudish time.

The Dhammapada is one of the central Buddhist scriptures. I was familiar with several of these texts but hadn’t read the whole thing. Gioia notes that he recommended it because of its length, but there are also obvious connections with Gilgamesh. Several verses in the Dhammapada deal with attachment. For example, here’s verse 215:

From affection comes grief;

Gilgamesh suffers from such an attachment. Here’s the moment of Enkidu’s death:

He touched his heart but it did not beat, nor did he lift his eyes again. When Gilgamesh touched his heart it did not beat. So Gilgamesh laid a veil, as one veils the bride, over his friend. He began to rage like a lion, like a lioness robbed of her whelps. This way and that he paced round the bed, he tore out his hair and strewed it around. He dragged off his splendid robes and flung them down as though they were abominations.

He wishes to hold on:

Then Gilgamesh issued a proclamation through the land, he summoned them all, the coppersmiths, the goldsmiths, the stone-workers, and commanded them, ‘Make a statue of my friend.’ The statue was fashioned with a great weight of lapis lazuli for the breast and of gold for the body. A table of hard-wood was set out, and on it a bowl of carnelian filled with honey, and a bowl of lapis lazuli filled with butter. These he exposed and offered to the Sun; and weeping he went away.

You can probably relate if you’ve ever lost someone dear. Human nature.

Audiovisual

Gioia recommended Stravinsky’s Rite of Spring and Wagner’s Overtures. I’m very familiar with both, so I didn’t spend much time with either this week. For a new take on one of these familiar classics, check out Fazil Say’s astonishing piano version of the Rite of Spring. Here’s a short portion:

Gioia also recommended looking at ancient Mesopotamian art. I didn’t spend as much time on this as I would’ve liked.
That said, this introductory lecture provided context while highlighting major works of art and architecture:

I took a different approach to cinema this week. Rather than go by an AI recommendation, I went down the old-fashioned route. (I.e., Google.) Specifically, I thought this would be a good opportunity to check out Iranian cinema. I’ve heard good things about Iranian films, but had never seen one. Googling led me to this article on Vulture. After reading through the list, I picked Abbas Kiarostami’s TASTE OF CHERRY.

Yet again, I’ve gravitated towards a film about a middle-aged man in despair. (Is the Universe trying to tell me something?) Kiarostami effectively uses a minimalist style to explore what makes life meaningful despite (or perhaps because of) its finitude.

Reflections

There’s a pattern here. This week’s works dealt with core issues people have grappled with since we became people. The big one: how do we deal with death? Not just the impending death of everyone we love, but our own.

Gilgamesh offers the traditional “Western” answer: “I can’t even.” So, fight it! He looks for a MacGuffin that’ll let him go on living and perhaps bring his loved ones back. It’s an idea that has had many progeny in our mythologies. And it’s not just the stuff of fiction: the impulse is still alive and well. (Pardon the pun.)

The Buddha offers a different approach: non-attachment. It’ll be easier to let go if you don’t become enmeshed with things, people, and your own sense of being. So you train your mind so it won’t hang on. (Even the idea of “your mind” is suspect.) The price: not feeling either extreme. No despair, no elation.

Kiarostami’s film suggests a third approach: accepting the inevitability of death while reveling in the experience of being alive. (You could argue this is part of the Buddhist way as well.) I won’t say more in case you haven’t seen TASTE OF CHERRY, but suffice it to say the film employs a clever structural trick to wake you from your slumber.

Grappling with these kinds of issues is the point of studying the humanities. Yes, I know you’re busy. I’m busy too. But some day, the busyness will stop – as will everything else. I’m committed to living an examined life, and that requires thinking about stuff we’d rather put aside so we can get on with the next Zoom meeting.

Notes on Note-taking

I’m also committed to the other point of this humanities project: learning how to learn better in this AI age. This week, I continued tweaking my note-taking approach. I took notes in the Drafts app as I read, building an outline as I went through the week. I wrote down the main points I learned and things I’d like to share with you. I then elaborated this outline on one of my morning walks. My mind works better when my body is moving and clear from the day’s detritus.

I also tweaked my note-taking approach around the readings. I had an LLM summarize the reading and then used that as a refresher to write a summary in my own words. I’ve done the same in previous weeks. What’s different now is that I then pasted my summary into a ChatGPT window with a simple prompt:

I read The Epic of Gilgamesh. What is wrong with this description of the story?: Gilgamesh is king of Uruk. He’s described as the strongest and most beautiful man in the world. He’s also something of a despot. He befriends Enkidu, a wild man who is almost as strong as Gilgamesh. They go on several adventures, which entail opposing the wishes of one of the Mesopotamian deities.
Eventually, the gods are angered and decree Enkidu must die. Grief-stricken, Gilgamesh goes in search for the secret of eternal life, only to learn that human lives are limited. He returns to lead his people with this newfound wisdom.

The LLM offered a helpful response that clarified nuances I’d missed:

- Gilgamesh wasn’t “something of a despot”; he was a tyrant. The gods created Enkidu as a counterbalance to answer his subjects’ calls for relief.
- The taming and channeling of this force of nature through initiation into human pleasures is an important aspect of the story I’d left out.
- Details about Gilgamesh and Enkidu’s transgressions against the gods. (These seemed less relevant for a high-level summary.)
- The fact that Gilgamesh isn’t just searching for immortality because he’s grief-stricken over Enkidu’s death; he’s also fearing for his own life.
- The end of my summary was wrong; the book doesn’t suggest Gilgamesh changed as a result of his experiences.

This last point is important. In writing my summary, I made stuff up that wasn’t in the book. I attribute my error to the fact that I expect closure from my stories. Gilgamesh precedes Aristotle’s Poetics; its authors were under no compulsion to offer the hero a redemption arc. Which is to say, humans hallucinate too – and LLMs can correct us.

Up Next

Next week, we’re reading ancient Egyptian literature. I couldn’t find an ebook of the text suggested by Gioia, so I’m going with another Penguin book, Writings from Ancient Egypt. I studied some Egyptian architecture in college and look forward to revisiting this part of the world and its history.

Check out Gioia’s post for the full syllabus. I’ve started a YouTube playlist to bookmark all the videos I’m sharing in this course. And as a reminder, I’m also sharing these posts via Substack if you’d like to subscribe and comment.
My fellow F1 fans should watch this to make sure they know the main rule changes coming in this season. I totally missed that the fastest lap point was going away. Unsecured and still-using-the-default-password cameras are quite the thing…always always always change your passwords, people! This is a
Last month I completed my first year at EnterpriseDB. I'm on the team that built and maintains pglogical and who, over the years, contributed a good chunk of the logical replication functionality that exists in community Postgres. Most of my work, our work, is in C and Rust with tests in Perl and Python. Our focus these days is a descendant of pglogical called Postgres Distributed which supports replicating DDL, tunable consistency across the cluster, etc. This post is about how I got here.

Black boxes

I was a web developer from 2014-2021†. I wrote JavaScript and HTML and CSS and whatever server-side language: Python or Go or PHP. I was a hands-on engineering manager from 2017-2021. I was pretty clueless about databases and indeed database knowledge was not a serious part of any interview I did.

Throughout that time (2014-2021) I wanted to move my career forward as quickly as possible so I spent much of my free time doing educational projects and writing about them on this blog (or previous incarnations of it). I learned how to write primitive HTTP servers, how to write little parsers and interpreters and compilers. It was a virtuous cycle because the internet (Hacker News anyway) liked reading these posts and I wanted to learn how the black boxes worked.

But I shied away from data structures and algorithms (DSA) because they seemed complicated and useless to the work that I did. That is, until 2020 when an inbox page I built started loading more and more slowly as the inbox grew. My coworker pointed me at Use The Index, Luke and the DSA scales fell from my eyes. I wanted to understand this new black box so I built a little in-memory SQL database with support for indexes.

I'm a college dropout so even while I was interested in compilers and interpreters earlier in my career I never dreamed I could get a job working on them. Only geniuses and PhDs did that work and I was neither. The idea of working on a database felt the same. However, I could work on little database side projects like I had done before on other topics, so I did. Or a series of explorations of Raft implementations, others' and my own.

Startups

From 2021-2023 I tried to start a company and when that didn't pan out I joined TigerBeetle as a cofounder to work on marketing and community. It was during this time I started the Software Internals Discord and /r/databasedevelopment which have since kind of exploded in popularity among professionals and academics in database and distributed systems.

TigerBeetle was my first job at a database company, and while I contributed bits of code I was not a developer there. It was a way into the space. And indeed it was an incredible learning experience both on the cofounder side and on the database side. I wrote articles with King and Joran that helped teach and affirm for myself the basics of databases and consensus-based distributed systems.

Holding out

When I left TigerBeetle in 2023 I was still not sure if I could get a job as an actual database developer. My network had exploded since 2021 (when I started my own company that didn't pan out) so I had no trouble getting referrals at database companies. But my background kept leading hiring managers to suggest putting me on cloud teams doing orchestration in Go around a database rather than working on the database itself.

I was unhappy with this type-casting so I held out while unemployed and continued to write posts and host virtual hackweeks messing with Postgres and MySQL.
I started the first incarnation of the Software Internals Book Club during this time, reading Designing Data Intensive Applications with 5-10 other developers in Bryant Park. During this time I also started the NYC Systems Coffee Club.

Postgres

After about four months of searching I ended up with three good offers, all to do C and Rust development on Postgres (extensions) as an individual contributor. Working on extensions might sound like the definition of not-sexy, but Postgres APIs are so loosely abstracted it's really as if you're working on Postgres itself.

You can mess with almost anything in Postgres so you have to be very aware of what you're doing. And when you can't mess with something in Postgres because an API doesn't yet exist, companies have the tendency to just fork Postgres so they can. (This tendency isn't specific to Postgres, almost every open-source database company seems to have a long-running internal fork or two of the database.)

EnterpriseDB

Two of the three offers were from early-stage startups and after more than 3 years being part of the earliest stages of startups I was happy for a break. But the third offer was from one of the biggest contributors to Postgres, a 20-year old company called EnterpriseDB. (You can probably come up with different rankings of companies using different metrics so I'm only saying EnterpriseDB is one of the biggest contributors.) It seemed like the best place to be to learn a lot and contribute something meaningful.

My coworkers are a mix of Postgres veterans (people who contributed the WAL to Postgres, who contributed MVCC to Postgres, who contributed logical decoding and logical replication, who contributed parallel queries; the list goes on and on) but also my developer-coworkers are people who started at EnterpriseDB on technical support, or who were previously Postgres administrators. It's quite a mix. Relatively few geniuses or PhDs, despite what I used to think, but they certainly work hard and have hard-earned experience.

Anyway, I've now been working at EnterpriseDB for over a year so I wanted to share this retrospective. I also wanted to cover what it's like coming from engineering management and founding companies to going back to being an individual contributor. (Spoiler: incredibly enjoyable.) But it has been hard enough to make myself write this much so I'm calling it a day. :)

I wrote a post about the winding path I took from web developer to database developer over 10 years. pic.twitter.com/tf8bUDRzjV — Phil Eaton (@eatonphil) February 15, 2025

† From 2011-2014 I also did contract web development but this was part-time while I was in school.