In November 2024, my blog was down for over 24 hours. Here’s what I learned from this absolute clusterfuck of an incident.

Lead-up to the incident

I was browsing through photos on my Nextcloud instance. Everything was fine, until Nextcloud started generating preview images for older photos. This process is quite resource intensive, but generally manageable. This time, however, the images were high quality photos in the 10-20 MB size range. Nextcloud crunched through those, but spawned so many processes that it used up all the available memory on my home server. And thus, the server was down.

This could have been solved by a forced reboot. Things were complicated by the simple fact that I was 120 kilometers away from my server, and I had no IPMI-like device set up. So I waited. 50 minutes later, I successfully logged in to my server over SSH again! The load averages were in the three-digit realm, but the system was mostly operational. I thought that it would be a good idea to restart the server, since who knows what might’ve gone wrong while the server was handling the out-of-memory situation.

I reboot. The server doesn’t seem to come back up. Fuck.

The downtime

The worst part of the downtime was that I was simply unable to immediately fix it due to being 120 kilometers away from the server. My VPN connection back home was also hosted right there on the server, using this Docker image. I eventually got around to fixing this issue the next day, when I could finally get hands-on with the server, my trusty ThinkPad T430.

I open the lid and am greeted by the console login screen. This means that the machine did boot. I log in to the server over SSH and quickly open htop. My htop configuration shows metrics like systemd state, and it was showing 20+ failed services. This is very unusual. lsblk and mount show that the storage is there. What was the issue? Well, apparently the Docker daemon was not starting.
I searched for the error messages and ended up on this GitHub issue. I tried the fix, which involved deleting the Docker folder with all the containers and configuration, and restarted the daemon and containers. Everything is operational once again. I then rebooted the server. Everything is down again, with the same issue.

And thus began an 8+ hour troubleshooting session that ran late into the night. 04:00-ish late, on a Monday. I tried everything that I could come up with:

- used the btrfs Docker storage driver instead of the default overlay one
  - Docker is still broken after a reboot
- replaced everything with podman
  - I could not get podman to play well with my containers and IPv6 networking
- considered switching careers
  - tractors are surprisingly expensive!

I’m unable to put into words how frustrating this troubleshooting session was. The sleep deprivation, the lack of helpful information, the failed attempts at finding solutions. I’m usually quite calm and very rarely feel anger, but during these hours I felt enraged.

The root cause

The root cause will make more sense once you understand the storage setup I had at the time. The storage on my server consisted of four 4 TB SSD-s: two were mounted inside the laptop, and the remaining two were connected via USB-SATA adapters. The filesystem in use was btrfs, both on the OS drive and on the 4x 4 TB storage pool. To avoid hitting the OS boot drive with unnecessary writes, I had moved the Docker data root to a separate btrfs subvolume on the main storage pool.

What was the issue? Apparently the Docker daemon on Fedora Server is able to start up before all filesystems are mounted. In this case, the Docker daemon started up before the subvolume containing all the Docker images, containers and networks was mounted. I tested this theory by moving the Docker storage back to /var/lib/docker, which lives on the root filesystem, and after a reboot everything remained functional.
In the past, I ran a similar setup, but with the Docker storage on the SATA SSD-s that are mounted inside the laptop over a native SATA connection. With the addition of the two USB-connected SSD-s, the mounting process for the whole pool took longer, which resulted in a race condition between the Docker daemon startup and the storage being mounted.

Fixing the root cause

The fix for Docker starting up before all of your storage is mounted is actually quite elegant. The Docker service definition is contained in /etc/systemd/system/docker.service. You can override this configuration by creating a new directory at /etc/systemd/system/docker.service.d and dropping a file named override.conf in there with the following contents:

```
[Unit]
RequiresMountsFor=/containerstorage
```

The rest of the service definition remains the same, and your customized configuration won’t be overwritten by a Docker version update. The RequiresMountsFor setting prevents the Docker service from starting up before that particular mount exists. You can specify multiple mount points on the same line, separated by spaces.

```
[Unit]
RequiresMountsFor=/containerstorage /otherstorage /some/other/mountpoint
```

You can also specify the mount points over multiple lines if you prefer.

```
[Unit]
RequiresMountsFor=/containerstorage
RequiresMountsFor=/otherstorage
RequiresMountsFor=/some/other/mountpoint
```

If you’re using systemd unit files for controlling containers, then you can use the same systemd setting to prevent your containers from starting up before the storage that the container depends on is mounted.

Avoiding the out of memory incident

Nextcloud taking down my home server for 50 minutes was not the root cause, it only highlighted an issue that had been there for days at that point. That doesn’t mean that this area can’t be improved. After this incident, every Docker Compose file that I use includes resource limits on all containers.
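If you’d rather script the drop-in than create it by hand, the steps can be sketched like this. This is a minimal sketch: the /containerstorage path matches the mount point used above, and the file is generated locally first so you can inspect it before installing it (the install commands need root, so they’re left commented out):

```shell
# Generate the drop-in locally; /containerstorage is the mount point
# the Docker data root lives on in this setup.
cat > override.conf <<'EOF'
[Unit]
RequiresMountsFor=/containerstorage
EOF

# Install and apply on the actual server (uncomment there):
# sudo mkdir -p /etc/systemd/system/docker.service.d
# sudo cp override.conf /etc/systemd/system/docker.service.d/override.conf
# sudo systemctl daemon-reload
# sudo systemctl restart docker
# systemctl show docker --property=RequiresMountsFor   # verify it took effect
```

The systemctl show command at the end lets you confirm the override was picked up without rebooting.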
When defining the limits, I started with very conservative values based on the average resource usage observed in docker stats output. Over the past few months I’ve had to continuously tweak the limits, especially the memory ones, due to the containers themselves running out of memory when the limits were set too low. Apparently software is getting increasingly resource hungry.

An example Docker Compose file with resource limits looks like this:

```yaml
name: nextcloud

services:
  nextcloud:
    container_name: nextcloud
    volumes:
      - /path/to/nextcloud/stuff:/data
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 2gb
    image: docker.io/nextcloud:latest
    restart: always

  nextcloud-db:
    container_name: nextcloud-db
    volumes:
      - /path/to/database:/var/lib/postgresql/data
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 2gb
    image: docker.io/postgres:16
    restart: always
```

In this example, each container is able to use up to 4 CPU cores and a maximum of 2 GB of memory. And just like that, Nextcloud is unable to take down my server by eating up all the available memory.

Yes, I’m aware of the Preview Generator Nextcloud app. I have it, but over multiple years of running Nextcloud, I have not found it to be very effective against the resource-hungry preview image generation that happens during user interactions.

Decoupling my VPN solution from Docker

With this incident, it also became clear that running the gateway to your home network inside a container was a really stupid idea. I’ve mitigated this issue by taking the WireGuard configuration generated by the container and moving it to the host. I also used this as an opportunity to get a to-do list item done and used this guide to add IPv6 support inside the virtual WireGuard network. I can now access IPv6 networks everywhere I go! I briefly considered setting WireGuard up on my OpenWrt-powered router, but I decided against it, as I’d like to own one computer that I don’t screw up with my configuration changes.
Closing thoughts

I have not yet faced an incident this severe, even at work. The impact wasn’t that big, I guess a hundred people were not able to read my blog, but my stress levels were off the charts during the troubleshooting process. I’ve long advocated for self-hosting and running basic, boring solutions, with the main benefits being ease of maintenance, ease of troubleshooting and low cost. This incident is a good reminder that even the most basic setups can have complicated issues associated with them. At least I got it fixed and learned about a new systemd unit setting, which is nice. Still better than handling Kubernetes issues.
I was in a pinch. I needed to make a full disk backup of a PC, but I had no external storage device with me to store it on. The local Wi-Fi network was also way too slow to transfer the disk over it. All I had was my laptop with an Ethernet port, a Fedora Linux USB stick, and a short Ethernet cable. I took the following steps:

- boot the target machine up with the Fedora Linux installer in a live environment
- modify the SSH configuration on the target machine to allow root user login with a password
  - it’s OK to do this on a temporary setup like this one, but don’t do it on an actual Linux server
- set a password for the root user on the target machine
  - only required because a live environment usually does not set one for the root user
- connect both laptops with the Ethernet cable
- set static IPv4 addresses on both machines using network settings1
  - edit the “Wired” connection and open the IPv4 tab
  - example IP address on target: 192.168.100.5
  - example IP address on my laptop: 192.168.100.1
  - make sure to set the netmask to 255.255.255.0 on both!
- verify that the SSH connection to the target machine works
- back up the disk to your local machine using ssh and dd
  - example: ssh root@192.168.100.5 "dd if=/dev/sda" | dd of=disk-image-backup.iso status=progress
  - replace /dev/sda with the correct drive name!

And just like that, I backed up the 120 GB SSD at gigabit speeds to my laptop. I’ve used a similar approach in the past when switching between laptops, by running a live environment on both machines and copying the disk over with dd, bit by bit. You’ll also save time by not having to copy the data over twice, first to an external storage device and then to the target device.

1. there’s probably a simpler way to do this with IPv6 magic, but I have not tested it yet ↩︎
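The backup command above is just two dd processes joined by a pipe. Here’s the same pattern demonstrated on a small local file, which is an easy way to sanity-check it before pointing it at a real drive (the file names here are made up for the demo):

```shell
# Create a 4 MB stand-in for /dev/sda.
dd if=/dev/urandom of=source.img bs=1M count=4 2>/dev/null

# Same shape as: ssh root@192.168.100.5 "dd if=/dev/sda" | dd of=disk-image-backup.iso
# The pipe carries the raw bytes, so the copy is bit-for-bit identical.
dd if=source.img 2>/dev/null | dd of=copy.img 2>/dev/null

# Verify that the two images match.
cmp source.img copy.img && echo "images match"
```

Because dd on the reading side just streams bytes to stdout, anything that can carry a byte stream, a pipe, ssh, even netcat, can sit in the middle.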
In December 2024, I did something that I had never done before: I participated in a short (~6 hours) Dungeons and Dragons campaign. It was the nerdiest thing ever, and I loved it!

The setting

After another day of keeping a critical production service up, the whole team met up at Kvest to play Dungeons and Dragons as a team event. The game room was small but cozy, with ambient lighting, music and countless figurines on shelves setting the mood, and it was situated in the basement of the building, adding to the whole nerdy/geeky vibe. I guess you could say that it, too, was a dungeon of sorts. The lights and music matched the events of the campaign. Enter a cave? The room gets dimmer, quieter. Fight starts? Boss music!

After an introduction to D&D, character sheets, character creation and some good pizza, we got started.

The experience

Before I write about my recollections of the story and my character, I want to summarize the experience of the D&D campaign itself. Before this experience, I only knew about D&D from Stranger Things, as a thing that the main characters were into. I didn’t really get the appeal of it. During the campaign, I got it.

The group I was with (my teammates from work) ended up working together very well.1 There was no shortage of humorous situations, and knowing who we are and what our personalities are like outside the campaign made some situations even more fun. Looking back, the way our group approached situations had a lot of parallels with how we approach challenges at work. We took our time, investigated, asked questions, and were very suspicious of anything that behaved in a way that we could not understand, just like with that one critical production service that we all try to keep alive.

Personally, the highlight was the fact that my imagination and creativity started working again. I can still picture the scenes and situations we were in during the campaign. I have not felt like that since the time I read the Harry Potter books as a teenager.
Doing a campaign like this after a long workday was perhaps not ideal, but the experience as a whole was still worth it.

The campaign

Now, the campaign. It was not a very long campaign, but at times it sure felt like one, largely because of how we operated as a group. We each got to pick a character class, and I ended up being a High Elf Warlock. Dumbfounded, I actually had to ask what that sequence of letters meant, which resulted in my team learning that I have never watched Lord of the Rings.2 Eventually I vaguely understood what I was, and came up with my character.

Meet Borkus McDorkus, a high elf warlock. Borkus was a happy fellow, often accompanied by a cloudy smoke and a positive attitude. He would often end up being a bit slow to react to things. Borkus wore a black top hat, which was how the McDorkus family dressed, and it had a distinct plant with multiple green leaves attached to the side of it, for good luck. Borkus sported a black trenchcoat and boots, which was the style at the time.

This is the part where our dungeon master, referred to as the DM moving forward, got started. The DM described the setting, and we ended up being split into two groups. We were brave knights that had trained under the wing of a local lord, and had recently started our lives on our own. After a few months, we would all end up at a local bar in a village, and got to socializing with the other half of the group. Our group would end up screwing around at the bar, eating awful porridge, asking about a mysterious end-of-workday chime that was unexpectedly heard in the middle of the day, and stealing some coins from the money box to pay the barkeeper for the horrible porridge. After a local miner stormed in screaming and then ran away, we ended up investigating and walking towards the center of the village, where we met the head of the village. Apparently there had been an accident at a local mine and they needed help.
One member of our group asked, multiple times, about other work that might need to get done, things like killing dragons or saving orphans. We were informed, repeatedly, that there were no dragons to kill or orphans to save. After that, the group was very eager to get working and started moving toward the mine.

Our walk towards the mine took us to a big wooden boat, where we heard some knocking and banging, with tools. What followed was half an hour of trying to convince the boat captain to take us to the mine, since we were hesitant to pay the price of 8 silver coins. Borkus (that’s me!) used some trickery to convince the captain that we were going to pay him one gold coin instead of the 8 silver coins he had asked for, because he was such a good guy (and we had failed at threatening the captain multiple times before).3 Just one thing: this coin was special, and would only materialize once the captain kept his word and brought us to the mine. The captain was hesitant, but agreed to the deal. The coin would end up not materializing, as it was an illusion.

We arrived in a mountainous place, got off the boat, which apparently had big legs on it to traverse the swampy area, and started walking on a path. When presented with a choice of going forward or taking a left turn towards what looked like an abandoned mine, we of course went left, broke in, and went down the increasingly darker and colder mineshaft. It took us a long time, but eventually we got to a small room where there had seemingly been a mining accident. The strongest members of the group started throwing rocks out of the way, until one of us found a hand that had been ripped off from its body. That didn’t seem to scare anyone, and eventually we got to the point where we saw a door on the other side of the rubble.

The door was special. It was illuminated in a very mysterious way and had some scribbles on it that we could not understand. We approached this door very carefully, discussing the next steps.
One of us ended up trying to open it, and in a flash, they were gone! What followed was about half an hour of testing the door. What happens if you throw a rock at the door? Nothing. Throwing rocks at the handle? Nothing. What if we touch the door handle with the ripped-off hand that we found earlier? Flash, and it was gone. Okay, what if we use a rope to try to pull the door handle? Nothing. What if we agree that one of us touches the handle, and wherever they end up, they try to bang on the walls as hard as possible to signal that they got to the other side in one piece? Nope, another member of the group was gone in a flash, and the remaining ones did not hear anything. After lengthy discussions, the rest of us ended up all touching the door handle and going away, somewhere.

Meanwhile, in that somewhere, the first member of the group had flashed into a room with paintings, chairs and chests full of valuables. There was also one guy frozen in place with a terrified look. Flash. The ripped-off hand popped in. Flash, flash, flash. The whole group eventually popped into this room, and we began investigating it. Some of us sat on the chairs and seemingly fell asleep, followed by certain paintings in the room changing. At one point, something happened: monsters! Also, those from our group who had sat on the chairs and seemingly fallen asleep sprang back into action, but they were now evil!

At this point in the campaign, the DM brought out a small miniature that represented the room we were in, and we placed our miniature characters on it based on where we were positioned in our minds. We then got to roll for initiative, which determined our order of attacking, and got fighting! Borkus was first, but ended up rolling really damn poorly, so any magic and crossbow attacks were very ineffective. Damnit, Borkus. Luckily, others in the group did better, and we eventually defeated the evil creatures. Unfortunately, we took casualties (RIP barbarian).
With the room calm, there was one chair that remained empty. Borkus, who had sobered up during the fight, knew what he was destined to do (and definitely not because I was physically exhausted myself), and sat on that chair. Roll the credits!

Closing thoughts

Huge shout-out to:

- our DM, who did a genuinely good job getting us immersed in the campaign and guiding us through playing it
- Karoliine, our engineering manager, who set up this team event for us
- the team, who made this experience special, with our sense of humor, ingenuity and creativity shining throughout the campaign

I was exhausted after the campaign. I regret nothing. If I weren’t so time-deficient, I would do it all over again.

Borkus must be avenged!

1. we also work well together at work, so that makes sense. ↩︎
2. this knowledge was quickly followed up with a proposal to do a movie night of the Lord of the Rings trilogy, extended version and all. ↩︎
3. the exchange rate for 1 gold coin is 10 silver coins, so that was a very good offer. ↩︎
Good news, everyone! Doing IPv6 networking stuff on Docker is actually good now! I’ve recently started reworking my home server setup to be more IPv6 compatible, and as part of that I learned that during the summer of 2024, Docker shipped an update that eliminated a lot of the configuration and tweaking previously necessary to support IPv6. There is no need to change the daemon configuration any longer, it just works on Docker Engine v27 and later.

Examples

If your host has a working IPv6 setup and you want to listen on port 80 on both IPv4 and IPv6, then you don’t have to do anything special. However, the container will only have an IPv4 address internally. You can verify this by listing all the Docker networks via sudo docker network ls and running sudo docker network inspect network-name-here for the one associated with your container. For services like nginx that log the source IP address, this is problematic, as every incoming IPv6 request will be logged with the Docker network gateway IP address, such as 10.88.0.1.

```yaml
name: nginx

services:
  nginx:
    container_name: nginx
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always
```

If you want the container to have both an IPv4 and an IPv6 address within the Docker network, you can create a new network and enable IPv6 on it.

```yaml
name: nginx

services:
  nginx:
    container_name: nginx
    networks:
      - nginx-network
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always

networks:
  nginx-network:
    enable_ipv6: true
```

There are situations where it’s handy to have a static IP address for a container within the Docker network. If you need help coming up with a unique local IPv6 address range, you can use this tool.
```yaml
name: nginx

services:
  nginx:
    container_name: nginx
    networks:
      nginx-network:
        ipv4_address: 10.69.42.5
        ipv6_address: fdec:cc68:5178::abba
    ports:
      - 80:80
    image: docker.io/library/nginx
    restart: always

networks:
  nginx-network:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "10.69.42.0/24"
        - subnet: "fdec:cc68:5178::/64"
```

If you choose the host network driver, your container will operate within the same networking space as the container host. If the host handles both IPv4 and IPv6 networking, then your container will happily operate with both. However, due to the reduced network isolation, this has some security implications that you must take into account.

```yaml
name: nginx

services:
  nginx:
    container_name: nginx
    network_mode: host
    # ports are not relevant with host network mode
    image: docker.io/library/nginx
    restart: always
```

If you want your container to only accept connections on select interfaces, such as a WireGuard connection, then you will need to specify the IP addresses in the ports section. Here’s one example with both IPv4 and IPv6.

```yaml
name: nginx

services:
  nginx:
    container_name: nginx
    networks:
      - nginx-network
    ports:
      - 10.69.42.5:80:80
      - "[fdec:cc68:5178::beef]:80:80"
    image: docker.io/library/nginx
    restart: always

networks:
  nginx-network:
    enable_ipv6: true
```

What about Podman?

I’ve given up on Podman. Before doing things the IPv6 way, Podman was functional for the most part, requiring a few tweaks to get things working. I have not managed to get Podman to play fair with IPv6. No matter what I did, I could not get it to listen on certain ports and access my services, the ports would always be filtered out.

Conclusion

I’m genuinely happy to see that IPv6 support has gotten better in Docker, and I hope that this short introduction helps those out there looking to do things the IPv6 way with containers.
Just like most people out there, I have some files that are irreplaceable, such as cat pictures. At one point I had a few single-board computers sitting idle, namely the Orange Pi Zero and the LattePanda V1, and a few 1 TB SSD-s. I hate idle hardware, so I did the most sensible thing and assembled a fleet of networked offsite backups for backing up the most important data. My setup is based on various flavors of Linux, but the ideas will likely translate well to other operating systems and solutions.

Networking

The most important part is the networking. The offsite backup endpoints connect to my home server over a WireGuard network. The home server is, well, the server, and the backup endpoints are clients. I like this WireGuard Docker image a lot because it generates the server and client configurations automatically, but you can use plain WireGuard or a completely different networking solution to connect all the devices together. Some use Tailscale to make the setup process easier, but I like to keep things as self-hosted as possible.

I’m not a networking expert, but here’s how I’ve set up my network. For this example, the WireGuard network operates in the 10.13.69.0/24 range.1 To only allow traffic between the devices and avoid tunneling everything through the home server, set the AllowedIPs setting to AllowedIPs = 10.13.69.0/24,10.13.69.1. We want to be able to access the backup endpoints, and nothing more. All the devices have a static IP address in that network, such as 10.13.69.1 for the home server, 10.13.69.2 for a backup endpoint, and so on.

The PersistentKeepalive = 25 option is present in the client configurations so that I don’t lose the ability to access the backup endpoints. With it, all the backup endpoints call back to the home server from time to time. The aforementioned Docker image automatically adds it to the generated configuration using the PERSISTENTKEEPALIVE_PEERS=all option. This setting is crucial.
Without it, I sometimes ran into problems trying to connect from my home server to a backup endpoint, and that’s something you can’t easily alleviate without having physical access to the backup endpoints, which are offsite.

Remove the DNS configuration from the generated WireGuard client configurations, as you don’t need it for this purpose. Optionally, edit the /etc/hosts file on the home server and backup endpoints so that you can access your backup endpoints using simple hostnames, like orangepizero. An example row can look like this: 10.13.69.6 orangepizero.

If your WireGuard server operates in a network with a dynamic external IP address, as is common with many home internet connections, I recommend getting yourself a domain name that you can update whenever your IP address changes and using that in your WireGuard client configurations. Without this, an IP address change will result in your backup endpoints being inaccessible. You’ll also likely need to set up port forwarding and/or traffic rules for your backup endpoints to be able to connect back to your WireGuard server.

Once you have the WireGuard connection set up and SSH running on the backup endpoints, you should be able to drop the backup endpoints into any network that you have permission for. Ask your friends and family, and sweeten the deal by offering free technical support or help in some other area in return. The cost of running a single-board computer 24/7 is minuscule, with the typical power consumption being 1-3 W, so that won’t be much of a concern.

Making backups

For making the actual backups, you have all sorts of options. I rely on rsync to copy the data over. It’s simple and it works, and that’s all I expect from it. Example command: rsync -aAXvz /folder/to/back/up/ backupuser@backupendpoint:/backup/ --delete. The files will be compressed during transit with the -z option, and with --delete you’ll ensure that the target folder has all the files from the source, and nothing else.
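Putting the pieces above together, a client configuration for one backup endpoint might look roughly like this. This is a sketch, not a generated config: the addresses, AllowedIPs value and keepalive follow the examples above, while the keys, domain name and port are placeholders and assumptions:

```
[Interface]
# Static address for this backup endpoint inside the WireGuard network.
Address = 10.13.69.2/24
PrivateKey = <endpoint-private-key>
# No DNS line: it is not needed for this setup.

[Peer]
# The home server.
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Only route traffic for the WireGuard network through the tunnel.
AllowedIPs = 10.13.69.0/24,10.13.69.1
# Keep the tunnel alive so the home server can always reach this endpoint.
PersistentKeepalive = 25
```

The Endpoint line is where the dynamic-IP advice above matters: a domain name you control survives an external IP address change, a hardcoded IP does not.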
The backup storage is specified in /etc/fstab with the nofail option present. This ensures that in case the disk dies, the backup endpoint will still boot properly, allowing me to access the machine to troubleshoot the issue and/or force a desperate reboot to try to fix things. A good alternative approach is to mount and unmount the remote disk manually as part of the backup script.

The backup storage uses the btrfs filesystem, and I use btrbk to take snapshots of its contents. If I accidentally delete all the files on the backup endpoint, then I can still recover from that situation, because the data is still present in the snapshots. 30 days is a good retention period: enough time to save the data in case of an accidental deletion, but short enough to avoid the backup disk getting full. If you don’t want to use filesystem-level snapshots, then tools like restic are a good alternative. It can also operate over SSH, and you can configure snapshot retention policies in your backup script. Just make sure not to lose the encryption password, and verify the backups once in a while.

Deployment

I manage my backup endpoints using some cobbled-together Ansible roles. I’ve perfected it to the point where the only manual steps are flashing the OS and setting up the storage; the rest is handled via Ansible. I’d like to share my work here, but it would make Jeff Geerling cry. Maybe one day I’ll take the time to improve things…

Maintenance

All the backup endpoints update and reboot themselves regularly. It’s just the sensible thing to do. Every 6-12 months I also do major OS version updates. It’s risky because of the whole offsite aspect of the solution, but so far I haven’t been burned.

Monitoring

Monitoring is an area where I have some room for improvement. So far, I’ve set up Prometheus node-exporter on all of the backup endpoints, and my home server keeps track of how the backup endpoints are doing.
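Combining the rsync command with a mount check gives a small per-endpoint backup script. This is a sketch built on the examples above: the orangepizero hostname comes from the /etc/hosts example, while the exact paths and the remote mountpoint check are assumptions about the setup, not the author’s exact script. The script is written to a file and syntax-checked so it can be inspected before use:

```shell
cat > backup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Skip the run if the endpoint's nofail-mounted backup disk is missing.
ssh backupuser@orangepizero 'mountpoint -q /backup' \
    || { echo "backup disk not mounted on endpoint, aborting" >&2; exit 1; }

# -z compresses in transit, --delete mirrors deletions to the target.
rsync -aAXvz /folder/to/back/up/ backupuser@orangepizero:/backup/ --delete
EOF
chmod +x backup.sh
bash -n backup.sh && echo "script parses"
```

Run from a systemd timer or cron on the home server, this fails loudly instead of quietly rsyncing into an unmounted /backup directory on the endpoint’s SD card.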
This allows me to check once in a while if any of the backup endpoints have fallen off the network, or if a backup disk is getting full.

Issues I’ve faced

I’ve had this system running for a few years now, and it’s mostly stable! There have been some issues I’ve faced as well, though. Some are very specific to certain hardware, but I think there’s value in mentioning them.

- I once blew up a backup server because of an Ansible configuration issue. That meant that I had to physically go pick up the server to re-image it.
- The Orange Pi Zero was running quite hot, resulting in stability issues, so I put together a really janky cooling solution. Hell, I did the same for the LattePanda as well. It might look horrific, but the extra cooling has fixed all the stability problems on both boards.
- The lack of a real-time clock on the LattePanda has required me to make its backup script a bit special. I can’t rely on a systemd timer that automatically reboots the machine once in a while, so instead that part is present in the backup script. The issue is that the LattePanda boots up with the time set in the past, and once it gets the actual time from the network, it will run all sorts of tasks because enough time has seemingly passed! This included the reboot timer as well.
- At one point, the power supply on the LattePanda just died, and it was very visible on my graphs. That required a replacement.

Conclusion

That’s how I back up the most important data. I hope that this has given you inspiration to take your own backup approach to the next level!

1. yes, I do plan to move this setup to IPv6 eventually. ↩︎
When I first saw this abomination (in a presentation by my friend Andrew Hinton,) I assumed it meant to appeal to early adopters who couldn’t let go of the idea of driving behind a horse. But there was a deeper logic here. At the time, cars shared roads with horse-drawn vehicles. Horsey Horseless was meant to keep motorcars from freaking out the horses. Whether it worked or not doesn’t matter. The important thing to note is people were grappling with the implications of the new technology on the product typology given the existing context. We’re in that situation now. Horsey Horseless is a metaphor for an approach to product evolution after the introduction of a disruptive new technology. To wit, designers seek to align the new technology with existing infrastructure and mental models by “grafting a horse.” Consider how many current products are “adding AI” by including a button that opens a chatbox alongside familiar UI. Here’s Gmail: Gmail’s Gemini AI panel. In this case, the email client UI is a sort of horse’s head that lets us use the new technology without disrupting our workflows. It’s a temporary hack. New products will appear that rethink use cases from the new technology’s unique capabilities. Why have a chat panel on an email client when AI can obviate the need for email altogether? Today, email is assumed infrastructure. Other products expect users to have an email address and a client app to access it. That might not always stand. Eventually, such awkward compromises will go away. But it takes time. We’re entering that liminal period now. It’s exciting – even if it produces weird chimeras for a while.
A quick intro to interfacing common OLED displays to bare-metal microcontrollers.