I liked Ubuntu. For a very long time, it was the sensible default option. Around 2016, I used the Ubuntu GNOME flavor, and after they ditched the Unity desktop environment, GNOME became the default option. I was really happy with it, both for work and personal computing needs. Estonian ID card software was also officially supported on Ubuntu, which made Ubuntu a good choice for family members. But then something changed.

Upgrades suck

Like many Ubuntu users, I stuck to the long-term support releases and upgraded every two years to the next major version. There was just one tiny little issue: every upgrade broke something. Usually it was a relatively minor issue, with some icons, fonts or themes being a bit funny. Sometimes things went completely wrong. The worst upgrade was the one I did on my mother's laptop. During the upgrade process from Ubuntu 20.04 to 22.04, everything blew up spectacularly. The UI froze, the machine was completely unresponsive. After a 30-minute wait and...
5 hours ago


More from ./techtipsy

Why my blog was down for over 24 hours in November 2024

In November 2024, my blog was down for over 24 hours. Here’s what I learned from this absolute clusterfuck of an incident.

Lead-up to the incident

I was browsing through photos on my Nextcloud instance. Everything was fine, until Nextcloud started generating preview images for older photos. This process is quite resource intensive, but generally manageable. This time, however, the images were high-quality photos in the 10-20 MB size range. Nextcloud crunched through those, but spawned so many processes that it used up all the available memory on my home server. And thus, the server was down.

This could have been solved by a forced reboot. Things were complicated by the simple fact that I was 120 kilometers away from my server, and I had no IPMI-like device set up. So I waited. 50 minutes later, I successfully logged in to my server over SSH again! The load averages were in the three-digit realm, but the system was mostly operational. I thought it would be a good idea to restart the server, since who knows what might’ve gone wrong while it was handling the out-of-memory situation. I reboot. The server doesn’t seem to come back up. Fuck.

The downtime

The worst part of the downtime was that I was simply unable to fix it immediately due to being 120 kilometers away from the server. My VPN connection back home was also hosted right there on the server, using this Docker image. I eventually got around to fixing the issue the next day, when I could finally get hands-on with the server, my trusty ThinkPad T430.

I open the lid and am greeted with the console login screen, which means that the machine did boot. I log in to the server over SSH and quickly open htop. My htop configuration shows metrics like systemd state, and it was showing 20+ failed services. This is very unusual. lsblk and mount show that the storage is there. What was the issue? Well, apparently the Docker daemon was not starting.
I searched for the error messages and ended up on this GitHub issue. I tried the fix, which involved deleting the Docker folder with all the containers and configuration, then restarted the daemon and containers. Everything was operational once again. I then rebooted the server. Everything was down again, with the same issue.

And thus began an 8+ hour troubleshooting session that ran late into the night. 04:00-ish late, on a Monday. I tried everything I could come up with:

- used the btrfs Docker storage driver instead of the default overlay one
  - Docker was still broken after a reboot
- replaced everything with podman
  - I could not get podman to play well with my containers and IPv6 networking
- considered switching careers
  - tractors are surprisingly expensive!

I’m unable to put into words how frustrating this troubleshooting session was. The sleep deprivation, the lack of helpful information, the failed attempts at finding solutions. I’m usually quite calm and very rarely feel anger, but during these hours I felt enraged.

The root cause

The root cause will make more sense once you understand the storage setup I had at the time. The storage on my server consisted of four 4 TB SSDs: two were mounted inside the laptop, and the remaining two were connected via USB-SATA adapters. The filesystem in use was btrfs, both on the OS drive and the 4x 4 TB storage pool. To avoid hitting the OS boot drive with unnecessary writes, I had moved the Docker data root to a separate btrfs subvolume on the main storage pool.

What was the issue? Apparently the Docker daemon on Fedora Server is able to start up before every filesystem is mounted. In this case, the Docker daemon started up before the subvolume containing all the Docker images, containers and networks was mounted. I tested this theory by moving the Docker storage back to /var/lib/docker, which lives on the root filesystem, and after a reboot everything remained functional.
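For reference, relocating the Docker data root is a daemon configuration change. A minimal sketch of /etc/docker/daemon.json, assuming the storage pool subvolume is mounted at /containerstorage (the daemon must be restarted for it to take effect):

```json
{
  "data-root": "/containerstorage/docker"
}
```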
In the past, I ran a similar setup, but with the Docker storage on the SATA SSDs that are mounted inside the laptop over a native SATA connection. With the addition of the two USB-connected SSDs, the mounting process for the whole pool took longer, which resulted in a race condition between the Docker daemon startup and the storage being mounted.

Fixing the root cause

The fix for Docker starting up before all of your storage is mounted is actually quite elegant. The Docker service definition is contained in /etc/systemd/system/docker.service. You can override this configuration by creating a new directory at /etc/systemd/system/docker.service.d and dropping a file named override.conf in there with the following contents:

```
[Unit]
RequiresMountsFor=/containerstorage
```

The rest of the service definition remains the same, and your customized configuration won’t be overwritten by a Docker version update. The RequiresMountsFor setting prevents the Docker service from starting up before that particular mount exists. You can specify multiple mount points on the same line, separated by spaces:

```
[Unit]
RequiresMountsFor=/containerstorage /otherstorage /some/other/mountpoint
```

You can also specify the mount points over multiple lines if you prefer:

```
[Unit]
RequiresMountsFor=/containerstorage
RequiresMountsFor=/otherstorage
RequiresMountsFor=/some/other/mountpoint
```

If you’re using systemd unit files for controlling containers, you can use the same systemd setting to prevent your containers from starting up before the storage they depend on is mounted.

Avoiding the out-of-memory incident

Nextcloud taking down my home server for 50 minutes was not the root cause; it only highlighted an issue that had been there for days at that point. That doesn’t mean this area can’t be improved. After this incident, every Docker Compose file that I use includes resource limits on all containers.
When defining the limits, I started with very conservative values based on the average resource usage observed from docker stats output. Over the past few months I’ve had to continuously tweak the limits, especially the memory ones, due to the containers themselves running out of memory when the limits were set too low. Apparently software is getting increasingly resource hungry.

An example Docker Compose file with resource limits looks like this:

```
name: nextcloud
services:
  nextcloud:
    container_name: nextcloud
    volumes:
      - /path/to/nextcloud/stuff:/data
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 2gb
    image: docker.io/nextcloud:latest
    restart: always
  nextcloud-db:
    container_name: nextcloud-db
    volumes:
      - /path/to/database:/var/lib/postgresql/data
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 2gb
    image: docker.io/postgres:16
    restart: always
```

In this example, each container can use up to 4 CPU cores and a maximum of 2 GB of memory. And just like that, Nextcloud is unable to take down my server by eating up all the available memory. Yes, I’m aware of the Preview Generator Nextcloud app. I have it, but over multiple years of running Nextcloud, I have not found it to be very effective against the resource-hungry preview image generation that happens during user interactions.

Decoupling my VPN solution from Docker

With this incident, it also became clear that running the gateway to your home network inside a container was a really stupid idea. I’ve mitigated this issue by taking the WireGuard configuration generated by the container and moving it to the host. I also used this as an opportunity to tick off a to-do list item and used this guide to add IPv6 support inside the virtual WireGuard network. I can now access IPv6 networks everywhere I go! I briefly considered setting up WireGuard on my OpenWrt-powered router, but I decided against it, as I’d like to own one computer that I don’t screw up with my configuration changes.
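For a rough idea of what moving WireGuard to the host looks like: the container-generated config lands at /etc/wireguard/wg0.conf and gets enabled with wg-quick@wg0. This is a sketch only; the keys, addresses and port below are placeholders, not my actual configuration:

```
# /etc/wireguard/wg0.conf - illustrative values only
[Interface]
Address = 10.8.0.1/24, fd42:42:42::1/64
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32, fd42:42:42::2/128
```

Enabling it with systemctl enable --now wg-quick@wg0 keeps the tunnel independent of the Docker daemon's health.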
Closing thoughts

I have not yet faced an incident this severe, even at work. The impact wasn’t that big, I guess a hundred people were not able to read my blog, but my stress levels were off the charts during the troubleshooting process. I’ve long advocated for self-hosting and running basic, boring solutions, with the main benefits being ease of maintenance, ease of troubleshooting and low cost. This incident is a good reminder that even the most basic setups can have complicated issues associated with them. At least I got it fixed and learned about a new systemd unit setting, which is nice. Still better than handling Kubernetes issues.

2 weeks ago 24 votes
Backing up another PC with a single Ethernet cable

I was in a pinch. I needed to make a full disk backup of a PC, but I had no external storage device with me to store it on. The local Wi-Fi network was also way too slow to transfer the disk over it. All I had was my laptop with an Ethernet port, a Fedora Linux USB stick, and a short Ethernet cable.

I took the following steps:

- boot the target machine up with the Fedora Linux installer in a live environment
- modify the SSH configuration on the target machine to allow root user login with a password
  - it’s OK to do this on a temporary setup like this one, but don’t do it on an actual Linux server
- set a password for the root user on the target machine
  - only required because a live environment usually does not set one for the root user
- connect both laptops with the Ethernet cable
- set static IPv4 addresses on both machines using network settings 1
  - edit the “Wired” connection and open the IPv4 tab
  - example IP address on target: 192.168.100.5
  - example IP address on my laptop: 192.168.100.1
  - make sure to set the netmask to 255.255.255.0 on both!
- verify that the SSH connection to the target machine works
- back up the disk to your local machine using ssh and dd
  - example: ssh root@192.168.100.5 "dd if=/dev/sda" | dd of=disk-image-backup.iso status=progress
  - replace /dev/sda with the correct drive name!

And just like that, I backed up the 120 GB SSD at gigabit speeds to my laptop. I’ve used a similar approach in the past when switching between laptops, by running a live environment on both machines and copying the disk over with dd, bit by bit. You’ll also save time by not having to copy the data over twice, first to an external storage device and then to the target device.

1. there’s probably a simpler way to do this with IPv6 magic, but I have not tested it yet ↩︎
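After a backup like this, it's worth verifying that the image actually matches the source before wiping anything. The idea is to checksum both sides and compare; a sketch that simulates it locally with files (on a real backup you would run sha256sum over /dev/sda on the target via ssh instead):

```shell
# Stand-ins for the real drive and the ssh | dd pipe: copy a file and
# compare checksums of source and backup.
dd if=/dev/urandom of=source.img bs=1M count=4 2>/dev/null   # fake "source drive"
dd if=source.img of=backup.img bs=1M 2>/dev/null             # fake "backup over ssh"

src=$(sha256sum source.img | cut -d' ' -f1)
dst=$(sha256sum backup.img | cut -d' ' -f1)

# A byte-for-byte copy must yield identical hashes.
[ "$src" = "$dst" ] && echo "checksums match"
```

The same pattern works for the real thing: hash the raw device on the target machine, hash the local image file, and compare the two strings.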

3 weeks ago 24 votes
My very first Dungeons and Dragons campaign

In December 2024, I did something that I had never done before: I participated in a short (~6 hours) Dungeons and Dragons campaign. It was the nerdiest thing ever, and I loved it!

The setting

After another day of keeping a critical production service up, the whole team met up at Kvest to play Dungeons and Dragons as a team event. The game room was small but cozy, with ambient lighting, music and countless figurines on shelves setting the mood, and situated in the basement of the building, adding to the whole nerdy/geeky vibe. I guess you could say that it, too, was a dungeon of sorts. The lights and music matched the events of the campaign. Enter a cave? The room gets dimmer, quieter. Fight starts? Boss music!

After an introduction to D&D, character sheets, character creation and some good pizza, we got started.

The experience

Before I write about my recollections of the story and my character, I want to summarize the experience of the D&D campaign itself. Before this experience, I only knew about D&D from Stranger Things, as a thing that the main characters were into. I didn’t really get the appeal of it. During the campaign, I got it.

The group I was with (my teammates from work) ended up working together very well.1 There was no shortage of humorous situations, and knowing who we were and what our personalities are outside the campaign made some situations even more fun. Looking back, the way our group approached situations had a lot of parallels with how we approach challenges at work. We took our time, investigated, asked questions, and were very suspicious of anything that behaved in a way that we could not understand, just like with that one critical production service that we all try to keep alive.

Personally, the highlight was the fact that my imagination and creativity started working again. I can still picture the scenes and situations we were in during the campaign. I have not felt like that since the time I read Harry Potter books as a teenager.
Doing a campaign like this after a long workday was perhaps not ideal, but the experience as a whole was still worth it.

The campaign

Now, the campaign. It was not a very long campaign, but at times it sure felt like one, largely because of how we operated as a group. We each got to pick a character class, and I ended up being a High Elf Warlock. Dumbfounded, I actually had to ask what that sequence of letters meant, which resulted in my team learning that I have never watched Lord of the Rings.2 Eventually I vaguely understood what I was, and came up with my character.

Meet Borkus McDorkus, a high elf warlock. Borkus was a happy fellow, often accompanied by a cloud of smoke and a positive attitude. He would often end up being a bit slow to react to things. Borkus wore a black top hat, which was how the McDorkus family dressed, and had a distinct plant with multiple green leaves attached to the side of it, for good luck. Borkus sported a black trenchcoat and boots, which was the style at the time.

This is the part where our dungeon master, referred to as the DM moving forward, got started. The DM described the setting, and we ended up being split into two groups. We were brave knights who had trained under the wing of a local lord, and had recently started our lives on our own. After a few months, we would all end up at a local bar in a village, and got to socializing with the other half of the group.

Our group would end up screwing around at the bar, eating awful porridge, asking about a mysterious end-of-workday chime that was unexpectedly heard in the middle of the day, and stealing some coins from the money box to pay the barkeeper for the horrible porridge. After a local miner stormed in screaming and then ran away, we would end up investigating and walking towards the center of the village, where we met up with the head of the village. Apparently there had been an accident at a local mine and they needed help.
One member of our group asked, multiple times, about other work that might need doing, things like killing dragons or saving orphans. We were repeatedly informed that there were no dragons to kill and no orphans to save. After that, the group was very eager to get working and started moving toward the mine.

Our walk towards the mine took us to a big wooden boat, where we heard some knocking and banging, with tools. What followed was half an hour of trying to convince the boat captain to take us to the mine, since we were hesitant to pay the price of 8 silver coins. Borkus (that’s me!) used some trickery to convince the captain that we were going to pay him one gold coin instead of the 8 silver coins he had asked for, because he was such a good guy (and we had failed at threatening the captain multiple times before).3 Just one thing: this coin was special, and would only materialize once the captain kept his word and brought us to the mine. The captain was hesitant, but agreed to the deal. The coin would end up not materializing, as it was an illusion.

We arrived in a mountainous place, got off the boat, which apparently had big legs on it to traverse the swampy area, and started walking on a path. When presented with a choice of going forward or taking a left turn towards what looked like an abandoned mine, we of course went left, broke in, and went down the increasingly darker and colder mineshaft. It took us a long time, but eventually we got to a small room where there had seemingly been a mining accident. The strongest members of the group started throwing rocks out of the way, until one of us found a hand that had been ripped off from a body. That didn’t seem to scare anyone, and eventually we got to the point where we saw a door on the other side of the rubble.

The door was special. It was illuminated in a very mysterious way and had some scribbles on it that we could not understand. We approached this door very carefully, discussing the next steps.
One of us ended up trying to open it, and in a flash they were gone! What followed was about half an hour of testing the door. What happens if you throw a rock at the door? Nothing. Throwing rocks at the handle? Nothing. What if we touch the door handle with the ripped-off hand that we found earlier? Flash, and it was gone. Okay, what if we use a rope to try to pull the door handle? Nothing. What if we agree that one of us touches the handle, and wherever we end up, we try to bang on the walls as hard as possible to signal that we got to the other side in one piece? Nope, another member of the group was gone in a flash, and the remaining ones did not hear anything. After lengthy discussions between the rest of us, we ended up all touching the door handle and going away, somewhere.

Meanwhile, in that somewhere, the first member of the group had flashed into a room with paintings, chairs and chests full of valuables. There was also this one guy frozen in place with a terrified look. Flash. The ripped-off hand popped in. Flash, flash, flash. The whole group eventually popped into this room, and we began investigating it. Some of us sat on the chairs and seemingly fell asleep, followed by certain paintings in the room changing. At one point, something happened: monsters! Also, those from our group who had sat on the chairs and seemingly fallen asleep sprung back into action, but they were now evil!

At this point in the campaign, the DM brought out a small miniature that represented the room we were in, and we placed our miniature characters on it based on where we were positioned in our minds. We then got to roll for initiative, which determined our order of attacking, and got fighting! Borkus was first, but ended up rolling really damn poorly, so any magic and crossbow attacks were very ineffective. Damnit, Borkus. Luckily, others in the group did better, and we eventually defeated the evil creatures. Unfortunately, we took casualties (RIP barbarian).
With the room calm, there was one chair that was empty. Borkus, who had become sober during the fight, knew what he was destined to do (and definitely not because I was physically exhausted myself), and sat on that chair. Roll the credits!

Closing thoughts

Huge shout-out to:

- our DM, who did a genuinely good job getting us immersed in the campaign and guiding us through playing it
- Karoliine, our engineering manager, who set up this team event for us
- the team, who made this experience special with our sense of humor, ingenuity and creativity shining throughout the campaign

I was exhausted after the campaign. I regret nothing. If I wasn’t time-deficient, I would do it all over again. Borkus must be avenged!

1. we also work well together at work, so that makes sense. ↩︎
2. this knowledge was quickly followed up with a proposal to do a movie night of the Lord of the Rings trilogy, extended version and all. ↩︎
3. the exchange rate for 1 gold coin is 10 silver coins, so that was a very good offer. ↩︎

a month ago 35 votes
The IPv6 situation on Docker is good now!

Good news, everyone! Doing IPv6 networking stuff on Docker is actually good now! I’ve recently started reworking my home server setup to be more IPv6 compatible, and as part of that I learned that during the summer of 2024 Docker shipped an update that eliminated a lot of the configuration and tweaking previously necessary to support IPv6. There is no need to change the daemon configuration any longer; it just works on Docker Engine v27 and later.

Examples

If your host has a working IPv6 setup and you want to listen on port 80 on both IPv4 and IPv6, then you don’t have to do anything special. However, the container will only have an IPv4 address internally. You can verify this by listing all the Docker networks via sudo docker network ls and running sudo docker network inspect network-name-here for the one associated with your container. For services like nginx that log the source IP address, this is problematic, as every incoming IPv6 request will be logged with the Docker network gateway IP address, such as 10.88.0.1.

```
name: nginx
services:
  nginx:
    container_name: nginx
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always
```

If you want the container to have both an IPv4 and an IPv6 address within the Docker network, you can create a new network and enable IPv6 on it.

```
name: nginx
services:
  nginx:
    container_name: nginx
    networks:
      - nginx-network
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always

networks:
  nginx-network:
    enable_ipv6: true
```

There are situations where it’s handy to have a static IP address for a container within the Docker network. If you need help coming up with a unique local IPv6 address range, you can use this tool.
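The linked tool aside, a unique local address prefix per RFC 4193 is just fd00::/8 plus 40 random bits. A rough shell sketch of the same idea (variable names are mine):

```shell
# Generate a random RFC 4193 unique local IPv6 /48 prefix:
# "fd" followed by a 40-bit random global ID.
rand=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')   # 5 random bytes -> 10 hex chars

# Assemble fdXX:XXXX:XXXX::/48 from the random hex digits.
prefix="fd${rand:0:2}:${rand:2:4}:${rand:6:4}::/48"
echo "$prefix"
```

Subnets for individual Docker networks can then be carved out of that /48, such as a /64 per network.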
```
name: nginx
services:
  nginx:
    container_name: nginx
    networks:
      nginx-network:
        ipv4_address: 10.69.42.5
        ipv6_address: fdec:cc68:5178::abba
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always

networks:
  nginx-network:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "10.69.42.0/24"
        - subnet: "fdec:cc68:5178::/64"
```

If you choose the host network driver, your container will operate within the same networking space as the container host. If the host handles both IPv4 and IPv6 networking, then your container will happily operate with both. However, due to the reduced network isolation, this has some security implications that you must take into account.

```
name: nginx
services:
  nginx:
    container_name: nginx
    network_mode: host
    # ports are not relevant with host network mode
    image: docker.io/library/nginx
    restart: always
```

If you want your container to only accept connections on select interfaces, such as a WireGuard connection, then you will need to specify the IP addresses in the ports section. Here’s one example with both IPv4 and IPv6.

```
name: nginx
services:
  nginx:
    container_name: nginx
    networks:
      - nginx-network
    ports:
      - 10.69.42.5:80:80
      - "[fdec:cc68:5178::beef]:80:80"
    image: docker.io/library/nginx
    restart: always

networks:
  nginx-network:
    enable_ipv6: true
```

What about Podman?

I’ve given up on Podman. Before doing things the IPv6 way, Podman was functional for the most part, requiring a few tweaks to get things working. I have not managed to get Podman to play fair with IPv6. No matter what I did, I could not get it to listen on certain ports and access my services; the ports would always be filtered out.

Conclusion

I’m genuinely happy to see that IPv6 support has gotten better with Docker, and I hope that this short introduction helps those out there looking to do things the IPv6 way with containers.

a month ago 58 votes

More in technology

8 Million Requests Later, We Made The SolarWinds Supply Chain Attack Look Amateur

Surprise surprise, we've done it again. We've demonstrated an ability to compromise significantly sensitive networks, including governments, militaries, space agencies, cyber security companies, supply chains, software development systems and environments, and more. “Ugh, won’t they just stick to creating poor-quality memes?” we

22 hours ago 5 votes
The next few vids you'll watch on YouTube

I know it's a bit of a cliché, but count me in the group of white men with whom 2021's Inside really connected in a huge way. Anyway, now All Eyes on Me is going to be stuck in my head for the next month.

13 hours ago 2 votes
AI is going to break how schools work – but that will be good

The awkward reality the government seems scared to talk about

yesterday 3 votes
What's the deal with magnetic fields?

I apologize, but there won't be any Insane Clown Posse jokes in this article.

yesterday 3 votes