I’ve changed my home server setup a lot over the past decade, mainly because I keep changing the goals all the time. I’ve now realized why that keeps happening: I want the perfect home server.

What is the perfect home server? I’d phrase it like this: the perfect home server uses very little power, offers plenty of affordable storage and provides a lot of performance when it’s actually being relied upon. In my case, low power means less than 5 W while idling, storage means 10+ TB of redundant storage to cover data resilience and integrity concerns, and performance means about 4 modern CPU cores’ worth (low-to-midrange desktop CPU performance).

I seem to only ever get one or two of those at most. Low power usage? Your performance will likely suffer, and you can’t run too many storage drives. You can run SSD-s, but they are not affordable if you need higher capacities. Lots of storage? Well, there goes the low power consumption goal, especially if you run 3.5" hard drives. Lots of performance? Lots of power consumed!

There’s just something that annoys me whenever I do things on my home server and have to wait longer than I should, and yet I’m bothered when my monitoring tells me that my home server is using 50+ watts.1

I keep an eye out for developments in the self-hosting and home server spaces in the hope that I’ll one day stumble upon the holy grail: that one server that fits all my needs. I’ve gotten close, but no matter what setup I have, there’s always something that keeps bothering me.

I’ve seen a few attempts at the perfect home server, covered by various tech reviewers, but they always have at least one critical flaw. Sometimes the whole package is actually great and the functionality rocks, and then you find that the hardware contains prototype-level solutions that balloon the power consumption to over 30 W. Or the price is over 1000 USD/EUR, not including the drives.
Or it’s only available in certain markets, and the shipping and import duties destroy its value proposition. There is no affordable platform out there that provides great performance, flexibility and storage space, all while being quiet and using very little power.2

Desktop PC-s repurposed as home servers can provide room for a lot of storage, and they are by design very flexible, but the trade-off is the higher power consumption of the setup.

Single board computers use very little power, but they can’t provide a lot of performance, and connecting storage to them gets tricky and is overall limited. They can also get surprisingly expensive.

NAS boxes provide a lot of storage space and are generally low power if you exclude the power consumption of the hard drives, but the cheaper ones are not that performant, and the performant ones cost almost as much as a high-end PC.

Laptops can be used as home servers; they are quite efficient and performant, but they lack the flexibility and storage options of desktop PC-s and NAS boxes. You can slap a USB-based DAS onto one to add storage, but I’ve had poor experiences with these under high load, meaning that this approach can’t be relied on if you care about your data and server stability.

Then there’s the option of buying used versions of all of the above. Great bang for buck, but you’re likely taking a hit on the power efficiency front due to the simple fact that technology keeps evolving and getting more efficient.

I’m still hopeful that one day a device will exist that ticks all the boxes while also being priced affordably, but I’m afraid that it’s just a pipe dream. There are builds out there that fill almost every need, but the parts list is very specific, and the bulk of the power consumption wins come from using SSD-s instead of hard drives, which makes it less affordable.

In the meantime I guess I’ll keep rocking my ThinkPad-as-a-server approach and praying that the USB-attached storage does not cause major issues.
1. perhaps it’s an undiagnosed medical condition. Homeserveritis? ↩︎
2. if there is one, then let me know, you can find the contact details below! ↩︎
Yes, you read that right. I’m a prolific open-source influencer now.

Some years ago I set up a Google Alert with my name, for fun. Who knows what it might show one day? On the 7th of February, it fired an alert. Turns out that my thoughts on Ubuntu were somewhat popular, and they ended up being ingested by an AI slop generator over at Fudzilla, with no links back to the source or anything.1

Not only that, but their spicy autocomplete confabulation bot (a large language model) completely butchered the article, leaving out critical information, which led to one reader gloating about Windows.

Not linking back to the original source? Not a good start. Misrepresenting my work? Insulting. Giving a Windows user the opportunity to boast about how happy they are with using it? Absolutely unacceptable.

Here’s the full article in case they ever delete their poor excuse of a “news” “article”.

1. two can play at that game. ↩︎
I’ve written about abusing USB storage devices in the past, with a passing mention that I’m too cheap to buy an IODD device. Then I bought one.

I’ve always liked the promise of tools like Ventoy: you only need to carry the one storage device that boots anything you want. Unfortunately I still can’t trust Ventoy, so I’m forced to look elsewhere.

The hardware

I decided to get the IODD ST400 for 122 EUR (about 124 USD) off of Amazon Germany, since it was for some reason cheaper than getting it from iodd.shop directly. SATA SSD-s are cheap and plentiful, so the ST400 made the most sense to me.

The device came with one USB cable, with type A and type C ends. The device itself has a USB type C port, which I like a lot. The buttons are functional and clicky, but incredibly loud.

Setting it up

Before you get started with this device, I highly recommend glancing over the official documentation. The text is poorly translated in some parts, but overall it gets the job done.

Inserting the SSD was reasonably simple: it slotted in well and would not move around after assembling it. Getting the back cover off was tricky, but I’d rather have that than have to deal with a loose back cover that comes off when it shouldn’t.

The most important step is the filesystem choice. You can choose between NTFS, FAT32 or exFAT. Due to the 4 GB maximum file size limitation on FAT32, you will probably want to go with either NTFS or exFAT.

Once you have a filesystem on the SSD, you can start copying various installers and tools onto it and mount them!

The interface is unintuitive. I had to keep the manual close when testing mine, but eventually I figured out what I can and cannot do.
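If you’re scripting the initial copy of installers onto the drive, a quick pre-flight check against FAT32’s file size limit can save you a surprise mid-copy. Here’s a minimal sketch of such a check; the helper name is my own invention, not an IODD tool, and it assumes Python 3.9+ is available:

```python
# Hypothetical helper: find files that exceed the FAT32 maximum file
# size (2**32 - 1 bytes, i.e. one byte short of 4 GiB).
from pathlib import Path

FAT32_MAX_FILE_SIZE = 2**32 - 1  # bytes

def files_too_big_for_fat32(directory: str) -> list[str]:
    """Return paths of files that would not fit on a FAT32 filesystem."""
    return [
        str(p)
        for p in Path(directory).rglob("*")
        if p.is_file() and p.stat().st_size > FAT32_MAX_FILE_SIZE
    ]
```

If this returns anything for your collection of ISO files, go with NTFS or exFAT instead.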
Device emulation

Whenever you connect the IODD device to a powered-on PC, it will present itself as multiple devices:

- normal hard drive: the whole IODD filesystem is visible here, and you can also store other files and backups here if you want to
- optical media drive: this is where your installation media (ISO files) will end up, read-only
- virtual drives (up to 3 at a time): VHD files that represent virtual hard drives, but are seen as actual storage devices on the PC

This combination of devices is incredibly handy. For example, you can boot an actual Fedora Linux installation as one of the virtual drives and make a backup of the files on the PC right to the IODD storage itself. S.M.A.R.T. information also seems to be passed through properly for the disk that’s inside.

Tech tip: to automatically mount your current selection of virtual drives and ISO file at boot, hold down the “9” button for about 3 seconds. The button also has an exit logo on it. Without this step, booting an ISO or virtual drive becomes tricky, as you’ll have to spam the “select boot drive” key on the PC while navigating the menus on the IODD device to mount the ISO.

The performance is okay. The drive speeds are limited to SATA II speeds, which means that your read/write speeds cap out at about 250 MB/s. Latency will depend a lot on the drive, but it stays mostly in the sub-millisecond range on my SSD. The GNOME Disks benchmark does show a notable chunk of reads having a 5 millisecond latency. The drive does not seem to exhibit any throttling under sustained loads, so at least it’s better than a normal USB stick. The speeds seem to be the same for all emulated devices, with latencies and speeds being within spitting distance of each other.

The firmware sucks, actually

The IODD ST400 is a great idea that’s been turned into a good product, but the firmware is terrible enough to almost make me regret the purchase.
The choice of filesystems available (FAT32, NTFS, exFAT) is very Windows-centric, but at least it comes with the upside of being supported on most popular platforms, including Linux and Mac. Not great, not terrible.

The folder structure has some odd limitations. For example, you can only have 32 items within a folder. If you have more items than that, you have to use nested folders. This sounds like a hard cap written somewhere within the device firmware itself. I’m unlikely to hit such limits myself, and it doesn’t seem to affect the actual storage; the device itself just isn’t able to handle that many files within a directory listing.

The most annoying issue has turned out to be defragmentation. In 2025! It’s a known limitation that’s handily covered in the IODD documentation. On Windows, you can fix it by using a disk defragmentation tool, which is really not recommended on an SSD. On Linux, I have not yet found a way to do that, so I’ve resorted to simply making a backup of the contents of the drive, formatting the disk, and copying it all back again. This is a frustrating issue that only comes up when you try to use a virtual hard drive. It would absolutely suck to hit this error while in the field.

The way virtual drives are handled is also less than ideal. You can only use fixed VHD files that are not sparse, which seems to again be a limitation of the firmware.

Tech tip: if you’re on Linux and want to convert a raw disk image (such as a disk copied with dd) to a VHD file, you can use a command like this one:

qemu-img convert -f raw -O vpc -o subformat=fixed,force_size source.img target.vhd

The firmware really is the worst part of this device. What I would love to see is a device like the IODD but with free and open source firmware. Ventoy has proven that there is a market for a solution that makes juggling installation media easy, but it can’t emulate hardware devices. An IODD-like device can.
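If you want to double-check that a conversion actually produced a fixed (non-sparse) VHD, you can look at the file’s tail: in the published VHD format, a fixed image is the raw disk data followed by a 512-byte footer whose first bytes are the ASCII cookie "conectix". A minimal sketch of that check (my own helper based on the format description, not an IODD or qemu tool):

```python
# Heuristic check that a file ends with a VHD footer, which fixed-size
# VHD images carry as their last 512 bytes. The footer's first 8 bytes
# are the ASCII cookie "conectix".
def looks_like_fixed_vhd(path: str) -> bool:
    with open(path, "rb") as f:
        f.seek(0, 2)        # jump to end of file to learn its size
        size = f.tell()
        if size < 512:
            return False    # too small to even contain a footer
        f.seek(size - 512)  # the footer occupies the last 512 bytes
        footer = f.read(512)
    return footer.startswith(b"conectix")
```

This only confirms the footer is present; it doesn’t validate the footer’s checksum or the “fixed” disk type field, which a stricter check could also parse.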
Encryption and other features

I didn’t test these because I don’t really need them; I really don’t need to protect my Linux installers from prying eyes.

Conclusion

The IODD ST400 is a good device with a proven market, but the firmware makes me refrain from outright recommending it to everyone, at least not at this price. If it were to cost something like 30-50 EUR/USD, I would not mind the firmware issues at all.
When you’re dealing with a particularly large service with a slow deployment pipeline (15-30 minutes) and a rollback delay of up to 10 minutes, you’re going to need feature toggles (some also call them feature flags) to turn those half-an-hour nerve-wracking major incidents into a small whoopsie-daisy that you can fix in a few seconds.

Make a change, gate it behind a feature toggle, release, enable the feature toggle and monitor the impact. If there is an issue, you can immediately roll it back with one HTTP request (or database query1). If everything looks good, you can remove the usage of the feature toggle from your code and move on with other work. Need to roll out the new feature gradually? Implement the feature toggle as a percentage and increase it as you go.

It’s really that simple, and you don’t have to pay 500 USD a month to get similar functionality from a service provider and make critical paths in your application depend on them.2 As my teammate once said, our service is perfectly capable of breaking down on its own.

All you really need is one database table containing the keys and values for the feature toggles, and two HTTP endpoints: one to GET the current value of a feature toggle, and one to POST a new value for an existing one. New feature toggles can be introduced using tools like Flyway or Liquibase, and the same method can be used for deleting them later on. You can also add convenience columns containing timestamps, such as created and modified, to track when a toggle was introduced and when it was last changed.

However, there are a few considerations to take into account when setting up such a system.

Feature toggles implemented as database table rows can work fantastically, but you should also monitor how often they get used. If you implement a feature toggle on a hot path in your service, then you can easily generate thousands of queries per second.
A properly set up feature toggles system can sustain that load without any issues on any competent database engine, but you should still monitor the impact and remove unused feature toggles as soon as possible.

For hot code paths (1000+ requests/second) you might be better off implementing feature toggles as application properties. There’s no call to the database, and reading a static property is darn fast, but you lose out on the ability to update it while the application is running. Alternatively, you can rely on the same database-based feature toggles system and keep a cached copy in memory, refreshing it from time to time. Toggling won’t be as responsive, as it will depend on the cache expiry time, but the reduced load on the database is often worth it.

If your service receives contributions from multiple teams, or you have very anxious product managers who fill your backlog faster than you can say “story points”, then it’s a good idea to also introduce expiration dates for your feature toggles, with ample warning time to properly remove them. Using this method, you can make sure that old feature toggles get properly removed, as there is no better prioritization reason than a looming major incident. You don’t want them to stick around for years on end; that’s just wasteful and clutters up your codebase.

If your feature toggling needs are a bit more complicated, then you may need to invest more time in your DIY solution, or you can use one of the SaaS options if you really want to; just account for the added expense and reliance on yet another third-party service.

At work, I help manage a business-critical monolith that handles thousands of requests per second during peak hours, and the simple approach has served us very well. All it took was one motivated developer and about a day to implement, document and communicate the solution to our stakeholders. Skip the latter two steps, and you can be done within two hours, tops.
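The one-table approach with an in-memory cached copy can be sketched in a few dozen lines. This is a minimal illustration using Python and SQLite; the table, column and class names are my own, and in a real service the get/set methods would sit behind the GET/POST endpoints rather than being called directly:

```python
# Minimal sketch of a database-backed feature toggle store with a
# TTL-based in-memory cache. Illustrative only; names are invented.
import sqlite3
import time

class FeatureToggles:
    def __init__(self, conn: sqlite3.Connection, cache_ttl_seconds: float = 30.0):
        self.conn = conn
        self.cache_ttl = cache_ttl_seconds
        self._cache: dict[str, tuple[bool, float]] = {}  # key -> (value, fetched_at)
        # One table holds every toggle, plus convenience timestamps.
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS feature_toggle (
                   key      TEXT PRIMARY KEY,
                   enabled  INTEGER NOT NULL,
                   created  TEXT NOT NULL DEFAULT (datetime('now')),
                   modified TEXT NOT NULL DEFAULT (datetime('now'))
               )"""
        )

    def set(self, key: str, enabled: bool) -> None:
        # The POST endpoint would call something like this.
        self.conn.execute(
            """INSERT INTO feature_toggle (key, enabled) VALUES (?, ?)
               ON CONFLICT(key) DO UPDATE
               SET enabled = excluded.enabled, modified = datetime('now')""",
            (key, int(enabled)),
        )
        self._cache.pop(key, None)  # drop the stale cached value

    def get(self, key: str) -> bool:
        # The GET endpoint would call something like this; the cache
        # keeps hot code paths from hammering the database.
        hit = self._cache.get(key)
        if hit is not None and time.monotonic() - hit[1] < self.cache_ttl:
            return hit[0]
        row = self.conn.execute(
            "SELECT enabled FROM feature_toggle WHERE key = ?", (key,)
        ).fetchone()
        value = bool(row[0]) if row else False  # unknown toggles default to off
        self._cache[key] = (value, time.monotonic())
        return value
```

Note the design choice of defaulting unknown toggles to off: a missing row should fail safe rather than accidentally enabling a half-finished feature.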
1. letting inexperienced developers touch the production database is a fantastic way to take down your service, and a very expensive way to learn about database locks. ↩︎
2. I hate to refer to specific Hacker News comments like this, but there’s just something about paying 6000 USD a year for such a service that I just can’t understand. Has the Silicon Valley mindset gone too far? Or are US-based developers just way too expensive, resulting in these types of services sounding reasonable? You can hire a senior developer in Estonia for that amount of money for 2-3 weeks (including all taxes), and they can pop in and implement a feature toggles system in a few hours at most. The response comment with the status page link highlighting multiple outages for LaunchDarkly is the cherry on top. ↩︎
I liked Ubuntu. For a very long time, it was the sensible default option. Around 2016 I used the Ubuntu GNOME flavor, and after Canonical ditched the Unity desktop environment, GNOME became the default option. I was really happy with it, both for work and personal computing needs. Estonian ID card software was also officially supported on Ubuntu, which made it a good choice for family members.

But then something changed.

Upgrades suck

Like many Ubuntu users, I stuck to the long-term support releases and upgraded every two years to the next major version. There was just one tiny little issue: every upgrade broke something. Usually it was a relatively minor issue, with some icons, fonts or themes being a bit funny. Sometimes things went completely wrong.

The worst upgrade was the one I did on my mother’s laptop. During the upgrade process from Ubuntu 20.04 to 22.04, everything blew up spectacularly. The UI froze and the machine was completely unresponsive. After a 30-minute wait and a forced restart, the installation was absolutely fucked. In frustration, I ended up installing Windows so that I wouldn’t have to support Ubuntu.

Another family member, another upgrade. This is one that they did themselves on Lubuntu 18.04, upgrading to the latest version. The result: Firefox shortcuts stopped working, the status bar contained duplicate icons, and random errors popped up after logging in. After making sure that the ID card software works on Fedora 40, I installed that instead. All they need is a working browser, and that’s too difficult for Ubuntu to handle.

Snaps ruined Ubuntu

Snaps. I hate them. They sound great in theory, but the poor implementation and heavy-handed push by Canonical have been a mess.

Snaps auto-update by default. Great for security1, but horrible for users who want to control what their personal computer is doing.
Snaps get forced upon users as more and more system components are forcibly switched from Debian-based packages to Snaps, which breaks compatibility and functionality and introduces a lot of new issues. You can upgrade your Ubuntu installation and then discover that your browser is now contained within a Snap, the desktop shortcut for it doesn’t work, and your government ID card no longer works for logging in to your bank.

Snaps also destroy productivity. A colleague was struggling to get any work done because the desktop environment on their Ubuntu installation was flashing certain UI elements, being unresponsive and blocking them from doing any work. Apparently the whole GNOME desktop environment is a Snap now, and that led to issues. The fix was super easy, barely an inconvenience:

- roll back to the previous version of the GNOME snap
- restart
- still broken
- update to the latest version again
- restart
- still broken
- restart again
- it is fixed now

What was the issue? Absolutely no clue, but a day’s worth of developer productivity was completely wasted. Some of these issues have probably been fixed by now, but if I executed migration projects at my day job with a similar track record, I would be fired.2

Snaps done right: Flatpak

Snaps can be implemented in a way that doesn’t suck for end users. It’s called Flatpak. Flatpaks work reasonably well, you can update them whenever you want, and they are optional. Your Firefox installation won’t suddenly turn into a Flatpak overnight. On the Steam Deck, Flatpaks are the main distribution method for user-installed apps, and I don’t mind that at all. The only issue is the software selection: not every app is available as a Flatpak just yet.

Consider Fedora

Fedora works fine. It’s not perfect, but I like it. At this point I’ve used it for longer than Ubuntu, and unless IBM ruins it for all of us, I think it will remain a perfectly cromulent distro to get work done on.
Hopefully it’s not too late for Canonical to reconsider their approach to building a Linux distro.

1. the xz backdoor demonstrated that getting the latest versions of all software can also be problematic from the security angle. ↩︎
2. technical failures themselves are not the issue, but not responding to users’ feedback and not testing things certainly is, especially if you keep repeatedly making the same mistake. ↩︎
Introduction
Selecting the RAM
Opening up
Replacing the RAM
Reassembly
References

Introduction

I do virtually all of my hobby and home computing on Linux and MacOS: the MacOS stuff on a laptop, and almost all Linux work on a desktop PC. The desktop PC has Windows installed on it as well, but it’s too much of a hassle to reboot, so it never gets used in practice.

Recently, I’ve been working on a project that requires a lot of Spice simulations. NGspice works fine under Linux, but it doesn’t come standard with a GUI and, more importantly, the simulations often refuse to converge once your design becomes a little bit bigger. Tired of fighting against the tool, I switched to LTspice from Analog Devices. It’s free to use, and while it supports Windows and MacOS in theory, the Mac version is many years behind the Windows one and nearly unusable.

After dual-booting into Windows too many times, a Best Buy deal appeared on my BlueSky timeline for an HP laptop for just $330. The specs were pretty decent too:

- AMD Ryzen 5 7000
- 17.3” 1080p screen
- 512GB SSD
- 8 GB RAM
- Full size keyboard
- Windows 11

Someone at the HP marketing department spent long hours coming up with a suitable name and settled on “HP Laptop 17”. I generally don’t pay attention to what’s available on the PC laptop market, but it’s hard to really go wrong at this price, so I took the plunge. Worst case, I’d return it.

We’re now 8 weeks later and the laptop is still firmly in my possession. In fact, I’ve used it way more than I thought I would. I haven’t noticed any performance issues, the screen is pretty good, the SSD is larger than what I need for this limited use case, and, surprisingly, the trackpad is better than that of any Windows laptop I’ve ever used, though that’s not a high bar. It doesn’t come close to MacBook quality, but palm rejection is solid and it’s seriously good at moving the mouse around in CAD applications.

The two worst parts are the plasticky keyboard and the 8GB of RAM.
I honestly can’t quantify whether or not the RAM has a practical impact, but I decided to upgrade it anyway. In this blog post, I go through the steps of doing this upgrade.

Important: there’s a good chance that you will damage your laptop when trying this upgrade, and you will almost certainly void your warranty. Do this at your own risk!

Selecting the RAM

The laptop wasn’t designed to be upgradable, so you can’t find any official resources about it. And with such a generic name, there are guaranteed to be multiple hardware versions of the same product. To have reasonable confidence that you’re buying the correct RAM, check the full product name first. You can find it on the bottom:

Mine is an HP Laptop 17-cp3005dx. There’s some conflicting information about being able to upgrade the thing. The Best Buy Q&A page says:

The HP 17.3” Laptop Model 17-cp3005dx RAM and Storage are soldered to the motherboard, and are not upgradeable on this model.

This is flat out wrong for my device. After a bit of Googling around, I learned that it has a single 8GB DDR4 SODIMM 260-pin RAM stick, but that the motherboard has 2 RAM slots and can support up to 2x32GB. I bought a kit with Crucial 2x16GB 3200MHz SODIMMs from Amazon. As I write this, the price is $44.

Opening up

Removing the screws

This is the easy part. There are 10 screws at the bottom, 6 of which are hidden underneath the 2 rubber anti-slip strips. It’s easy to peel these strips loose. It’s also easy to put them back without losing the stickiness.

Removing the bottom cover

The bottom cover is held in place by those annoying plastic tabs. If you have a plastic spudger or prying tool, now is the time to use it. I didn’t, so I used a small screwdriver instead. Chances are high that you’ll leave some tiny scuff marks on the plastic casing. I found it easiest to open the top lid a bit, place the laptop on its side, and start on the left and right side of the keyboard.
After that, it’s a matter of working your way down the long sides at the front and back of the laptop. There are power and USB connectors right against the side of the bottom panel, so be careful not to poke the spudger or screwdriver inside the case. It’s a bit of a jarring process, going back and forth and making steady progress. In addition to all the clips around the border of the bottom panel, there are also a few in the center that latch on to the side of the battery. But after enough wiggling and creaking sounds, the panel should come loose.

Replacing the RAM

As expected, there are 2 SODIMM slots, one of which is populated with a 3200MHz 8GB RAM stick. At the bottom right of the image below, you can also see the SSD slot. If you don’t enjoy the process of opening up the laptop and want to upgrade to a larger drive as well, now would be the time for that.

New RAM in place! It’s always a good idea to test the surgery before reassembly:

Success!

Reassembly

Reassembly of the laptop is much easier than taking it apart. Everything simply clicks together. The only minor surprise was that both anti-slip strips became a little bit longer…

References

- Memory Upgrade for HP 17-cp3005dx Laptop
- Upgrading Newer HP 17.3” Laptop With New RAM And M.2 NVMe SSD (different model with Intel CPU, but the case is the same)