In December 2024, I did something I had never done before: I participated in a short (~6 hour) Dungeons and Dragons campaign. It was the nerdiest thing ever, and I loved it!

The setting

After another day of keeping a critical production service up, the whole team met up at Kvest to play Dungeons and Dragons as a team event. The game room was small but cozy, with ambient lighting, music and countless figurines on shelves setting the mood, and it was situated in the basement of the building, adding to the whole nerdy/geeky vibe. I guess you could say that it, too, was a dungeon of sorts. The lights and music matched the events of the campaign. Enter a cave? The room gets dimmer, quieter. Fight starts? Boss music! After an introduction to D&D, character sheets, character creation and some good pizza, we got started.

The experience

Before I recount the story and my character, I want to summarize the experience of the D&D campaign itself. Before this...
a month ago


More from ./techtipsy

I'm done with Ubuntu

I liked Ubuntu. For a very long time, it was the sensible default option. Around 2016, I used the Ubuntu GNOME flavor, and after they ditched the Unity desktop environment, GNOME became the default option. I was really happy with it, both for work and personal computing needs. Estonian ID card software was also officially supported on Ubuntu, which made Ubuntu a good choice for family members. But then something changed.

Upgrades suck

Like many Ubuntu users, I stuck to the long-term support releases and upgraded every two years to the next major version. There was just one tiny little issue: every upgrade broke something. Usually it was a relatively minor issue, with some icons, fonts or themes being a bit funny. Sometimes things went completely wrong.

The worst upgrade was the one I did on my mother's laptop. During the upgrade process from Ubuntu 20.04 to 22.04, everything blew up spectacularly. The UI froze and the machine was completely unresponsive. After a 30-minute wait and a forced restart, the installation was absolutely fucked. In frustration, I ended up installing Windows so that I wouldn't have to support Ubuntu.

Another family member, another upgrade. This one they did themselves, from Lubuntu 18.04 to the latest version. The result: Firefox shortcuts stopped working, the status bar contained duplicate icons, and random errors popped up after logging in. After making sure that the ID card software works on Fedora 40, I installed that instead. All they need is a working browser, and that's too difficult for Ubuntu to handle.

Snaps ruined Ubuntu

Snaps. I hate them. They sound great in theory, but the poor implementation and the heavy-handed push by Canonical have been a mess. Snaps auto-update by default. Great for security¹, but horrible for users who want to control what their personal computer is doing. Snaps also get forced upon users as more and more system components are switched from Debian-based packages to Snaps, which breaks compatibility and functionality and introduces a lot of new issues. You can upgrade your Ubuntu installation and then discover that your browser is now contained within a Snap, the desktop shortcut for it doesn't work, and your government ID card no longer works for logging in to your bank.

Snaps also destroy productivity. A colleague was struggling to get any work done because the desktop environment on their Ubuntu installation was flashing certain UI elements, being unresponsive and blocking them from doing any work. Apparently the whole GNOME desktop environment is a Snap now, and that led to issues. The fix was super easy, barely an inconvenience (rough snap commands are sketched at the end of this post):

- roll back to the previous version of the GNOME snap
- restart: still broken
- update to the latest version again
- restart: still broken
- restart again: it is fixed now

What was the issue? Absolutely no clue, but a day's worth of developer productivity was completely wasted. Some of these issues have probably been fixed by now, but if I executed migration projects at my day job with a similar track record, I would be fired.²

Snaps done right: Flatpak

Snaps can be implemented in a way that doesn't suck for end users. It's called Flatpak. Flatpaks work reasonably well, you can update them whenever you want, and they are optional. Your Firefox installation won't suddenly turn into a Flatpak overnight. On the Steam Deck, Flatpaks are the main distribution method for user-installed apps, and I don't mind it at all. The only issue is the software selection: not every app is available as a Flatpak just yet.

Consider Fedora

Fedora works fine. It's not perfect, but I like it. At this point I've used it for longer than Ubuntu, and unless IBM ruins it for all of us, I think it will be a perfectly cromulent distro to get work done on. Hopefully it's not too late for Canonical to reconsider their approach to building a Linux distro.

1. The xz backdoor demonstrated that getting the latest versions of all software can also be problematic from the security angle. ↩︎
2. Technical failures themselves are not the issue, but not responding to users' feedback and not testing things certainly is, especially if you keep repeatedly making the same mistake. ↩︎
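For reference, a rough sketch of what that rollback/update dance looks like as snap commands. The exact name of the GNOME platform snap varies by release, so gnome-42-2204 below is a guess; check what's actually installed with snap list:

```sh
# Find the actual name of the GNOME snap on the system
snap list | grep -i gnome

# Roll back to the previously installed revision (snap name is a guess)
sudo snap revert gnome-42-2204

# Later: move back to the latest revision
sudo snap refresh gnome-42-2204
```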

7 hours ago 3 votes
Why my blog was down for over 24 hours in November 2024

In November 2024, my blog was down for over 24 hours. Here's what I learned from this absolute clusterfuck of an incident.

Lead-up to the incident

I was browsing through photos on my Nextcloud instance. Everything was fine, until Nextcloud started generating preview images for older photos. This process is quite resource intensive, but generally manageable. However, this time the images were high quality photos in the 10-20 MB size range. Nextcloud crunched through those, but ended up spawning so many processes that it used all the available memory on my home server. And thus, the server was down.

This could have been solved by a forced reboot. Things were complicated by the simple fact that I was 120 kilometers away from my server, and I had no IPMI-like device set up. So I waited. 50 minutes later, I successfully logged in to my server over SSH again! The load averages were in the three-digit realm, but the system was mostly operational. I thought that it would be a good idea to restart the server, since who knows what might've gone wrong while the server was handling the out-of-memory situation. I reboot. The server doesn't seem to come back up. Fuck.

The downtime

The worst part of the downtime was that I was simply unable to fix it immediately, due to being 120 kilometers away from the server. My VPN connection back home was also hosted right there on the server, using this Docker image. I eventually got around to fixing this issue the next day, when I could finally get hands-on with the server, my trusty ThinkPad T430.

I open the lid and am greeted with the console login screen. This means that the machine did boot. I log in to the server over SSH and quickly open htop. My htop configuration shows metrics like systemd state, and it was showing 20+ failed services. This is very unusual. lsblk and mount show that the storage is there. What was the issue? Well, apparently the Docker daemon was not starting. I searched for the error messages and ended up on this GitHub issue. I tried the fix, which involved deleting the Docker folder with all the containers and configuration, and restarted the daemon and containers. Everything is operational once again. I then rebooted the server. Everything is down again, with the same issue.

And thus began an 8+ hour troubleshooting session that ran late into the night. 04:00-ish late, on a Monday. I tried everything that I could come up with:

- used the btrfs Docker storage driver instead of the default overlay one: Docker was still broken after a reboot
- replaced everything with Podman: I could not get Podman to play well with my containers and IPv6 networking
- considered switching careers: tractors are surprisingly expensive!

I'm unable to put into words how frustrating this troubleshooting session was. The sleep deprivation, the lack of helpful information, the failed attempts at finding solutions. I'm usually quite calm and very rarely feel anger, but during these hours I felt enraged.

The root cause

The root cause will make more sense after you understand the storage setup I had at the time. The storage on my server consisted of four 4 TB SSDs: two were mounted inside the laptop, and the remaining two were connected via USB-SATA adapters. The filesystem in use was btrfs, both on the OS drive and the 4x 4TB storage pool. To avoid hitting the OS boot drive with unnecessary writes, I moved the Docker data root to a separate btrfs subvolume on the main storage pool (a sketch of that relocation follows below).
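For context, a minimal sketch of how such a relocation is typically configured, via the data-root setting in /etc/docker/daemon.json. The path here is an assumption, chosen to match the mount point used in the fix below:

```sh
# Point Docker's data root at the storage pool (path is an assumption)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "data-root": "/containerstorage/docker"
}
EOF

# Restart the daemon so it picks up the new data root
sudo systemctl restart docker
```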
So what was the issue? Apparently the Docker daemon on Fedora Server can start up before all filesystems are mounted. In this case, the Docker daemon started up before the subvolume containing all the Docker images, containers and networks was mounted. I tested this theory by moving the Docker storage back to /var/lib/docker, which lives on the root filesystem, and after a reboot everything remained functional. In the past, I ran a similar setup, but with the Docker storage on the SATA SSDs that are mounted inside the laptop over a native SATA connection. With the addition of two USB-connected SSDs, the mounting process took longer for the whole pool, which resulted in a race condition between the Docker daemon startup and the storage being mounted.

Fixing the root cause

The fix for Docker starting up before all of your storage is mounted is actually quite elegant. The Docker service definition is contained in /etc/systemd/system/docker.service. You can override this configuration by creating a new directory at /etc/systemd/system/docker.service.d and dropping a file with the name override.conf in there with the following contents:

```ini
[Unit]
RequiresMountsFor=/containerstorage
```

The rest of the service definition remains the same, and your customized configuration won't be overwritten by a Docker version update. The RequiresMountsFor setting prevents the Docker service from starting up before that particular mount exists. You can specify multiple mount points on the same line, separated by spaces.

```ini
[Unit]
RequiresMountsFor=/containerstorage /otherstorage /some/other/mountpoint
```

You can also specify the mount points over multiple lines if you prefer.

```ini
[Unit]
RequiresMountsFor=/containerstorage
RequiresMountsFor=/otherstorage
RequiresMountsFor=/some/other/mountpoint
```

If you're using systemd unit files for controlling containers, then you can use the same systemd setting to prevent your containers from starting up before the storage that the container depends on is mounted (a sketch of such a unit follows after this section).

Avoiding the out of memory incident

Nextcloud taking down my home server for 50 minutes was not the root cause, it only highlighted an issue that had been there for days at that point. That doesn't mean that this area can't be improved. After this incident, every Docker Compose file that I use includes resource limits on all containers. When defining the limits, I started with very conservative values based on the average resource usage as observed from docker stats output. Over the past few months I've had to continuously tweak the limits, especially the memory ones, due to the containers themselves running out of memory when the limits were set too low. Apparently software is getting increasingly more resource hungry. An example Docker Compose file with resource limits looks like this:

```yaml
name: nextcloud
services:
  nextcloud:
    container_name: nextcloud
    volumes:
      - /path/to/nextcloud/stuff:/data
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 2gb
    image: docker.io/nextcloud:latest
    restart: always
  nextcloud-db:
    container_name: nextcloud-db
    volumes:
      - /path/to/database:/var/lib/postgresql/data
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 2gb
    image: docker.io/postgres:16
    restart: always
```

In this example, each container is able to use up to 4 CPU cores and a maximum of 2 GB of memory. And just like that, Nextcloud is unable to take down my server by eating up all the available memory.
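To tie the RequiresMountsFor advice back to the fix above, here's a minimal sketch of such a container unit. It assumes a pre-created container named nextcloud and the same hypothetical /containerstorage mount point; the unit name and paths are illustrative:

```ini
# /etc/systemd/system/nextcloud-container.service (illustrative name and paths)
[Unit]
Description=Nextcloud container
Requires=docker.service
After=docker.service
# Same trick as the daemon override: wait for the container's storage
RequiresMountsFor=/containerstorage

[Service]
# Assumes a container named "nextcloud" was created beforehand
ExecStart=/usr/bin/docker start -a nextcloud
ExecStop=/usr/bin/docker stop nextcloud
Restart=always

[Install]
WantedBy=multi-user.target
```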
Yes, I'm aware of the Preview Generator Nextcloud app. I have it, but over multiple years of running Nextcloud, I have not found it to be very effective against the resource-hungry preview image generation that happens during user interactions.

Decoupling my VPN solution from Docker

This incident also made it clear that running the gateway to your home network inside a container is a really stupid idea. I've mitigated this issue by taking the WireGuard configuration generated by the container and moving it to the host (a sketch of that move follows at the end of this post). I also used this as an opportunity to get a to-do list item done and used this guide to add IPv6 support inside the virtual WireGuard network. I can now access IPv6 networks everywhere I go! I briefly considered setting WireGuard up on my OpenWrt-powered router, but I decided against it, as I'd like to own one computer that I don't screw up with my configuration changes.

Closing thoughts

I have not yet faced an incident this severe, even at work. The impact wasn't that big, I guess a hundred people were not able to read my blog, but the stress levels were off the charts for me during the troubleshooting process. I've long advocated for self-hosting and running basic and boring solutions, with the main benefits being ease of maintenance, troubleshooting and low cost. This incident is a good reminder that even the most basic setups can have complicated issues associated with them. At least I got it fixed and learned about a new systemd unit setting, which is nice. Still better than handling Kubernetes issues.
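As an appendix to the VPN section above, a sketch of what moving the container-generated WireGuard configuration to the host can look like, assuming wireguard-tools is installed on the host and the container produced a standard wg0.conf (paths and interface name are assumptions):

```sh
# Copy the config the container generated to the host (paths are assumptions)
sudo cp /path/to/container/config/wg0.conf /etc/wireguard/wg0.conf
sudo chmod 600 /etc/wireguard/wg0.conf

# Run WireGuard natively via systemd, independent of Docker
sudo systemctl enable --now wg-quick@wg0
```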

2 weeks ago 24 votes
Backing up another PC with a single Ethernet cable

I was in a pinch. I needed to make a full disk backup of a PC, but I had no external storage device with me to store it on. The local Wi-Fi network was also way too slow to transfer the disk over it. All I had was my laptop with an Ethernet port, a Fedora Linux USB stick, and a short Ethernet cable.

I took the following steps:

- boot the target machine up with the Fedora Linux installer in a live environment
- modify the SSH configuration on the target machine to allow root user login with a password
  - it's OK to do this on a temporary setup like this one, but don't do it on an actual Linux server
- set a password for the root user on the target machine
  - only required because a live environment usually does not set one for the root user
- connect both laptops with the Ethernet cable
- set static IPv4 addresses on both machines using network settings¹ (a terminal-based alternative is sketched after this post)
  - edit the "Wired" connection and open the IPv4 tab
  - example IP address on target: 192.168.100.5
  - example IP address on my laptop: 192.168.100.1
  - make sure to set the netmask to 255.255.255.0 on both!
- verify that the SSH connection to the target machine works
- back up the disk to your local machine using ssh and dd (replace /dev/sda with the correct drive name!):

```sh
ssh root@192.168.100.5 "dd if=/dev/sda" | dd of=disk-image-backup.iso status=progress
```

And just like that, I backed up the 120 GB SSD at gigabit speeds to my laptop. I've used a similar approach in the past when switching between laptops, by running a live environment on both machines and copying the disk over with dd, bit by bit. You'll also save time by not having to copy the data twice: first to an external storage device, and then to the target device.

1. There's probably a simpler way to do this with IPv6 magic, but I have not tested it yet. ↩︎
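As a terminal-based alternative to the static IP step above, here's a sketch using NetworkManager's nmcli, as found in Fedora live environments. The connection name is a guess; check the real one with nmcli connection show:

```sh
# On my laptop (the machine receiving the backup);
# the /24 prefix is the 255.255.255.0 netmask
sudo nmcli connection modify "Wired connection 1" \
  ipv4.method manual ipv4.addresses 192.168.100.1/24
sudo nmcli connection up "Wired connection 1"

# On the target machine, the same idea with its own address
sudo nmcli connection modify "Wired connection 1" \
  ipv4.method manual ipv4.addresses 192.168.100.5/24
sudo nmcli connection up "Wired connection 1"
```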

3 weeks ago 24 votes
The IPv6 situation on Docker is good now!

Good news, everyone! Doing IPv6 networking stuff on Docker is actually good now! I've recently started reworking my home server setup to be more IPv6 compatible, and as part of that I learned that during the summer of 2024 Docker shipped an update that eliminated a lot of the configuration and tweaking previously necessary to support IPv6. There is no need to change the daemon configuration any longer; it just works on Docker Engine v27 and later.

Examples

If your host has a working IPv6 setup and you want to listen on port 80 on both IPv4 and IPv6, then you don't have to do anything special. However, the container will only have an IPv4 address internally. You can verify this by listing all the Docker networks via sudo docker network ls and running sudo docker network inspect network-name-here for the one associated with your container (a scripted version of this check is sketched after this post). For services like nginx that log the source IP address, this is problematic, as every incoming IPv6 request will be logged with the Docker network gateway IP address, such as 10.88.0.1.

```yaml
name: nginx
services:
  nginx:
    container_name: nginx
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always
```

If you want the container to have both an IPv4 and an IPv6 address within the Docker network, you can create a new network and enable IPv6 on it.

```yaml
name: nginx
services:
  nginx:
    container_name: nginx
    networks:
      - nginx-network
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always
networks:
  nginx-network:
    enable_ipv6: true
```

There are situations where it's handy to have a static IP address for a container within the Docker network. If you need help coming up with a unique local IPv6 address range, you can use this tool.

```yaml
name: nginx
services:
  nginx:
    container_name: nginx
    networks:
      nginx-network:
        ipv4_address: 10.69.42.5
        ipv6_address: fdec:cc68:5178::abba
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always
networks:
  nginx-network:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "10.69.42.0/24"
        - subnet: "fdec:cc68:5178::/64"
```

If you choose the host network driver, your container will operate within the same networking space as your container host. If the host handles both IPv4 and IPv6 networking, then your container will happily operate with both. However, due to reduced network isolation, this has some security implications that you must take into account.

```yaml
name: nginx
services:
  nginx:
    container_name: nginx
    network_mode: host
    # ports are not relevant with host network mode
    image: docker.io/library/nginx
    restart: always
```

If you want your container to only accept connections on select interfaces, such as a WireGuard connection, then you will need to specify the IP addresses in the ports section. Here's one example with both IPv4 and IPv6.

```yaml
name: nginx
services:
  nginx:
    container_name: nginx
    networks:
      - nginx-network
    ports:
      - 10.69.42.5:80:80
      - "[fdec:cc68:5178::beef]:80:80"
    image: docker.io/library/nginx
    restart: always
networks:
  nginx-network:
    enable_ipv6: true
```

What about Podman?

I've given up on Podman. Before doing things the IPv6 way, Podman was functional for the most part, requiring a few tweaks to get things working. I have not managed to get Podman to play fair with IPv6. No matter what I did, I could not get it to listen on certain ports and access my services; the ports would always be filtered out.

Conclusion

I'm genuinely happy to see that IPv6 support has gotten better in Docker, and I hope that this short introduction helps those out there looking to do things the IPv6 way with containers.
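As a small addendum: the verification step mentioned in the first example can be scripted. A sketch using the network and container names from the examples above:

```sh
# List all Docker networks, then check which addresses the nginx container got
sudo docker network ls
sudo docker network inspect nginx-network \
  --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}} {{.IPv6Address}}{{println}}{{end}}'
```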

a month ago 58 votes

More in technology

The next few vids you'll watch on YouTube

I know it's a bit of a cliché, but count me in the group of white men with whom 2021's Inside really connected in a huge way. Anyway, now All Eyes on Me is going to be stuck in my head for the next month.

15 hours ago 2 votes
8 Million Requests Later, We Made The SolarWinds Supply Chain Attack Look Amateur

Surprise surprise, we've done it again. We've demonstrated an ability to compromise significantly sensitive networks, including governments, militaries, space agencies, cyber security companies, supply chains, software development systems and environments, and more. “Ugh, won’t they just stick to creating poor-quality memes?” we

yesterday 5 votes
AI is going to break how schools work – but that will be good

The awkward reality the government seems scared to talk about

yesterday 3 votes
What's the deal with magnetic fields?

I apologize, but there won't be any Insane Clown Posse jokes in this article.

yesterday 3 votes