I liked Ubuntu. For a very long time, it was the sensible default option. Around 2016, I used the Ubuntu GNOME flavor, and after they ditched the Unity desktop environment, GNOME became the default option. I was really happy with it, both for work and personal computing needs. Estonian ID card software was also officially supported on Ubuntu, which made it a good choice for family members. But then something changed.

Upgrades suck

Like many Ubuntu users, I stuck to the long-term support releases and upgraded every two years to the next major version. There was just one tiny little issue: every upgrade broke something. Usually it was a relatively minor issue, with some icons, fonts or themes being a bit funny. Sometimes things went completely wrong. The worst upgrade was the one I did on my mother's laptop. During the upgrade process from Ubuntu 20.04 to 22.04, everything blew up spectacularly. The UI froze, the machine was completely unresponsive. After a 30-minute wait and...
a month ago


More from ./techtipsy

I yearn for the perfect home server

I’ve changed my home server setup a lot over the past decade, mainly because I keep changing the goals all the time. I’ve now realized why that keeps happening: I want the perfect home server.

What is the perfect home server? I’d phrase it like this: the perfect home server uses very little power, offers plenty of affordable storage and provides a lot of performance when it’s actually being relied upon. In my case, low power means less than 5 W while idling, storage means 10+ TB of redundant storage for data resilience and integrity concerns, and performance means about 4 modern CPU cores’ worth (low-to-midrange desktop CPU performance).

I seem to only ever get one or two at most. Low power usage? Your performance will likely suffer, and you can’t run too many storage drives. You can run SSD-s, but they are not affordable if you need higher capacities. Lots of storage? Well, there goes the low power consumption goal, especially if you run 3.5" hard drives. Lots of performance? Lots of power consumed!

There’s just something that annoys me whenever I do things on my home server and I have to wait longer than I should, and yet I’m bothered when my monitoring tells me that my home server is using 50+ watts.1

I keep an eye out for developments in the self-hosting and home server spaces with the hope that I’ll one day stumble upon the holy grail: that one server that fits all my needs. I’ve gotten close, but no matter what setup I have, there’s always something that keeps bothering me.

I’ve seen a few attempts at the perfect home server, covered by various tech reviewers, but they always have at least one critical flaw. Sometimes the whole package is actually great, the functionality rocks, and then you find that the hardware contains prototype-level solutions that result in the power consumption ballooning to over 30 W. Or the price is over 1000 USD/EUR, not including the drives.
Or it’s only available in certain markets, and the shipping and import duties destroy its value proposition. There is no affordable platform out there that provides great performance, flexibility and storage space, all while being quiet and using very little power.2

Desktop PC-s repurposed as home servers can provide room for a lot of storage, and they are by design very flexible, but the trade-off is the higher power consumption of the setup.

Single board computers use very little power, but they can’t provide a lot of performance, and connecting storage to them gets tricky and is overall limited. They can also get surprisingly expensive.

NAS boxes provide a lot of storage space and are generally low power if you exclude the power consumption of hard drives, but the cheaper ones are not that performant, and the performant ones cost almost as much as a high-end PC.

Laptops can be used as home servers; they are quite efficient and performant, but they lack the flexibility and storage options of desktop PC-s and NAS boxes. You can slap a USB-based DAS onto one to add storage, but I’ve had poor experiences with these under high load, meaning that this approach can’t be relied on if you care about your data and server stability.

Then there’s the option of buying used versions of all of the above. Great bang for buck, but you’re likely taking a hit on the power efficiency part due to the simple fact that technology keeps evolving and getting more efficient.

I’m still hopeful that one day a device will exist that ticks all the boxes while also being priced affordably, but I’m afraid that it’s just a pipe dream. There are builds out there that fill almost every need, but the parts list is very specific, and the bulk of the power consumption wins come from using SSD-s instead of hard drives, which makes it less affordable. In the meantime, I guess I’ll keep rocking my ThinkPad-as-a-server approach and praying that the USB-attached storage does not cause major issues.
1. perhaps it’s an undiagnosed medical condition. Homeserveritis? ↩︎
2. if there is one, then let me know, you can find the contact details below! ↩︎

2 days ago 7 votes
Turns out that I'm a 'prolific open-source influencer' now

Yes, you read that right. I’m a prolific open-source influencer now.

Some years ago I set up a Google Alert with my name, for fun. Who knows what it might show one day? On 7th of February, it fired an alert. Turns out that my thoughts on Ubuntu were somewhat popular, and it ended up being ingested by an AI slop generator over at Fudzilla, with no links back to the source or anything.1 Not only that, but their spicy autocomplete confabulation bot (a large language model) completely butchered the article, leaving out critical information, which led to one reader gloating about Windows.

Not linking back to the original source? Not a good start. Misrepresenting my work? Insulting. Giving a Windows user the opportunity to boast about how happy they are with using it? Absolutely unacceptable.

Here’s the full article in case they ever delete their poor excuse of a “news” “article”.

1. two can play at that game. ↩︎

2 weeks ago 15 votes
IODD ST400 review: great idea, good product, terrible firmware

I’ve written about abusing USB storage devices in the past, with a passing mention that I’m too cheap to buy an IODD device. Then I bought one.

I’ve always liked the promise of tools like Ventoy: you only need to carry the one storage device that boots anything you want. Unfortunately I still can’t trust Ventoy, so I’m forced to look elsewhere.

The hardware

I decided to get the IODD ST400 for 122 EUR (about 124 USD) off of Amazon Germany, since it was for some reason cheaper than getting it from iodd.shop directly. SATA SSD-s are cheap and plentiful, so the ST400 made the most sense to me.

The device came with one USB cable, with type A and type C ends. The device itself has a USB type C port, which I like a lot. The buttons are functional and clicky, but incredibly loud.

Setting it up

Before you get started with this device, I highly recommend glancing over the official documentation. The text is poorly translated in some parts, but overall it gets the job done.

Inserting the SSD was reasonably simple, it slotted in well and would not move around after assembling it. Getting the back cover off was tricky, but I’d rather have that than have to deal with a loose back cover that comes off when it shouldn’t.

The most important step is the filesystem choice. You can choose between NTFS, FAT32 or exFAT. Due to the maximum file size limitation of 4 GB on FAT32, you will probably want to go with either NTFS or exFAT. Once you have a filesystem on the SSD, you can start copying various installers and tools on it and mount them!

The interface is unintuitive. I had to keep the manual close when testing mine, but eventually I figured out what I can and cannot do.
Device emulation

Whenever you connect the IODD device to a powered on PC, it will present itself as multiple devices:

normal hard drive: the whole IODD filesystem is visible here, and you can also store other files and backups as well if you want to
optical media drive: this is where your installation media (ISO files) will end up, read only
virtual drives (up to 3 at a time): VHD files that represent virtual hard drives, but are seen as actual storage devices on the PC

This combination of devices is incredibly handy. For example, you can boot an actual Fedora Linux installation as one of the virtual drives, and make a backup of the files on the PC right to the IODD storage itself. S.M.A.R.T. information also seems to be passed through properly for the disk that’s inside.

Tech tip: to automatically mount your current selection of virtual drives and ISO file at boot, hold down the “9” button for about 3 seconds. The button also has an exit logo on it. Without this step, booting an ISO or virtual drive becomes tricky, as you’ll have to spam the “select boot drive” key on the PC while also navigating the menus on the IODD device to mount the ISO.

The performance is okay. The drive speeds are limited to SATA II speeds, which means that your read/write speeds cap out at about 250 MB/s. Latency will depend a lot on the drive, but it stays mostly in the sub-millisecond range on my SSD. The GNOME Disks benchmark does show a notable chunk of reads having a 5 millisecond latency. The drive does not seem to exhibit any throttling under sustained loads, so at least it’s better than a normal USB stick. The speeds seem to be the same for all emulated devices, with latencies and speeds being within spitting distance.

The firmware sucks, actually

The IODD ST400 is a great idea that’s been turned into a good product, but the firmware is terrible enough to almost make me regret the purchase.
The choice of filesystems available (FAT32, NTFS, exFAT) is very Windows-centric, but at least it comes with the upside of being supported on most popular platforms, including Linux and Mac. Not great, not terrible.

The folder structure has some odd limitations. For example, you can only have 32 items within a folder. If you have more than that, you have to use nested folders. This sounds like a hard cap written somewhere within the device firmware itself. I’m unlikely to hit such limits myself, and it doesn’t seem to affect the actual storage; the device itself just isn’t able to handle that many files within a directory listing.

The most annoying issue has turned out to be defragmentation. In 2025! It’s a known limitation that’s handily documented in the IODD documentation. On Windows, you can fix it by using a disk defragmentation tool, which is really not recommended on an SSD. On Linux, I have not yet found a way to do that, so I’ve resorted to simply making a backup of the contents of the drive, formatting the disk, and copying it all back again. This is a frustrating issue that only comes up when you try to use a virtual hard drive. It would absolutely suck to hit this error while in the field.

The way virtual drives are handled is also less than ideal. You can only use fixed VHD files that are not sparse, which seems to again be a limitation of the firmware.

Tech tip: if you’re on Linux and want to convert a raw disk image (such as a disk copied with dd) to a VHD file, you can use a command like this one:

qemu-img convert -f raw -O vpc -o subformat=fixed,force_size source.img target.vhd

The firmware really is the worst part of this device. What I would love to see is a device like IODD but with free and open source firmware. Ventoy has proven that there is a market for a solution that makes juggling installation media easy, but it can’t emulate hardware devices. An IODD-like device can.
Encryption and other features

I didn’t test those because I don’t really need those features myself; I really don’t need to protect my Linux installers from prying eyes.

Conclusion

The IODD ST400 is a good device with a proven market, but the firmware makes me refrain from outright recommending it to everyone, at least not at this price. If it were to cost something like 30-50 EUR/USD, I would not mind the firmware issues at all.

3 weeks ago 14 votes
Feature toggles: just roll your own!

When you’re dealing with a particularly large service with a slow deployment pipeline (15-30 minutes), and a rollback delay of up to 10 minutes, you’re going to need feature toggles (some also call them feature flags) to turn those half-an-hour nerve-wracking major incidents into a small whoopsie-daisy that you can fix in a few seconds.

Make a change, gate it behind a feature toggle, release, enable the feature toggle and monitor the impact. If there is an issue, you can immediately roll it back with one HTTP request (or database query1). If everything looks good, you can remove the usage of the feature toggle from your code and move on with other work. Need to roll out the new feature gradually? Implement the feature toggle as a percentage and increase it as you go.

It’s really that simple, and you don’t have to pay 500 USD a month to get similar functionality from a service provider and make critical paths in your application depend on them.2 As my teammate once said, our service is perfectly capable of breaking down on its own.

All you really need is one database table containing the keys and values for the feature toggles, and two HTTP endpoints: one to GET the current value of a feature toggle, and one to POST a new value for an existing one. New feature toggles can be introduced using tools like Flyway or Liquibase, and the same method can be used for deleting them later on. You can also add convenience columns containing timestamps, such as created and modified, to track when these were introduced and when the last change was made.

However, there are a few considerations to take into account when setting up such a system. Feature toggles implemented as database table rows can work fantastically, but you should also monitor how often these get used. If you implement a feature toggle on a hot path in your service, then you can easily generate thousands of queries per second.
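To make the "one table, two endpoints" idea concrete, here is a minimal sketch in Python using SQLite. The post doesn't prescribe a schema or names, so the table name, function names and the percentage-bucketing helper are all made up for illustration; in a real service the functions would sit behind the GET and POST endpoints, and the INSERT would live in a Flyway or Liquibase migration.

```python
import hashlib
import sqlite3

# Hypothetical schema: one row per toggle, plus the convenience timestamps
# mentioned above. All names here are illustrative, not from the post.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feature_toggle (
        key      TEXT PRIMARY KEY,
        value    TEXT NOT NULL,
        created  TEXT DEFAULT CURRENT_TIMESTAMP,
        modified TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
# A migration tool would normally introduce new toggles like this one.
conn.execute("INSERT INTO feature_toggle (key, value) VALUES ('new-checkout', '25')")
conn.commit()

def get_toggle(key):
    """What the GET endpoint would return: the current value, or None."""
    row = conn.execute(
        "SELECT value FROM feature_toggle WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else None

def set_toggle(key, value):
    """What the POST endpoint would do: update an existing toggle only."""
    cur = conn.execute(
        "UPDATE feature_toggle SET value = ?, modified = CURRENT_TIMESTAMP "
        "WHERE key = ?",
        (value, key),
    )
    conn.commit()
    return cur.rowcount == 1

def enabled_for(key, user_id):
    """Percentage rollout: hash the user id into a stable 0-99 bucket,
    so the same user always gets the same answer as the percentage grows."""
    percentage = int(get_toggle(key) or 0)
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percentage
```

With the toggle set to '25', roughly a quarter of users land in an enabled bucket; bumping the value widens the rollout without redeploying, and the one-request rollback is just a `set_toggle(key, '0')`.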
A properly set up feature toggles system can sustain it without any issues on any competent database engine, but you should still try to monitor the impact and remove unused feature toggles as soon as possible.

For hot code paths (1000+ requests/second) you might be better off implementing feature toggles as application properties. There’s no call to the database and reading a static property is darn fast, but you lose out on the ability to update it while the application is running. Alternatively, you can rely on the same database-based feature toggles system and keep a cached copy in memory, refreshing it from time to time. Toggling won’t be as responsive, as it will depend on the cache expiry time, but the reduced load on the database is often worth it.

If your service receives contributions from multiple teams, or you have very anxious product managers that fill your backlog faster than you can say “story points”, then it’s a good idea to also introduce expiration dates for your feature toggles, with ample warning time to properly remove them. Using this method, you can make sure that old feature toggles get properly removed, as there is no better prioritization reason than a looming major incident. You don’t want them to stick around for years on end; that’s just wasteful and clutters up your codebase.

If your feature toggling needs are a bit more complicated, then you may need to invest more time in your DIY solution, or you can use one of the SaaS options if you really want to; just account for the added expense and reliance on yet another third-party service.

At work, I help manage a business-critical monolith that handles thousands of requests per second during peak hours, and the simple approach has served us very well. All it took was one motivated developer and about a day to implement, document and communicate the solution to our stakeholders. Skip the latter two steps, and you can be done within two hours, tops.
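The cached-copy approach described above can be sketched in a few lines: serve toggle reads from memory and only go back to the database when the cache expires. The class and its names are hypothetical; `fetch_all` stands in for whatever query loads all toggle rows, and the clock is injectable so the expiry logic can be tested without sleeping.

```python
import time

# Sketch of a TTL-cached toggle reader, assuming a fetch_all callable that
# returns (key, value) pairs, e.g. from SELECT key, value FROM the toggles table.
class CachedToggles:
    def __init__(self, fetch_all, ttl_seconds=30.0, clock=time.monotonic):
        self._fetch_all = fetch_all
        self._ttl = ttl_seconds
        self._clock = clock          # injectable for testing
        self._cache = {}
        self._expires_at = 0.0       # forces a refresh on the first read

    def get(self, key, default=None):
        now = self._clock()
        if now >= self._expires_at:
            # Cache expired: reload every toggle in one query.
            self._cache = dict(self._fetch_all())
            self._expires_at = now + self._ttl
        return self._cache.get(key, default)
```

The trade-off is exactly the one described above: a toggle flip becomes visible within one TTL at most, but the database sees one query per TTL per instance instead of one per request.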
1. letting inexperienced developers touch the production database is a fantastic way to take down your service, and a very expensive way to learn about database locks. ↩︎
2. I hate to refer to specific Hacker News comments like this, but there’s just something about paying 6000 USD a year for such a service that I just can’t understand. Has the Silicon Valley mindset gone too far? Or are US-based developers just way too expensive, resulting in these types of services sounding reasonable? You can hire a senior developer in Estonia for that amount of money for 2-3 weeks (including all taxes), and they can pop in and implement a feature toggles system in a few hours at most. The response comment with the status page link that’s highlighting multiple outages for LaunchDarkly is the cherry on top. ↩︎

3 weeks ago 14 votes

More in technology

Humanities Crash Course Week 10: Greek Drama

Week 10 of the humanities crash course had me reading (and listening to) classic Greek plays. I also listened to the blues and watched a movie starring a venerable recently departed actor. How do they connect? Perhaps they don’t. Let’s find out.

Readings

The plan for this week included five classic Greek tragedies and one comedy: Sophocles’s Oedipus Rex, Oedipus at Colonus, and Antigone, Aeschylus’s Agamemnon, Euripides’s The Bacchae, and Aristophanes’s Lysistrata.

The tragedies by Sophocles form a trilogy. Oedipus Rex is by far the most famous: the titular character discovers he’s not just responsible for his father’s death, but inadvertently married his widowed mother in its wake. Much sadness ensues. The other two plays continue the story. Oedipus at Colonus has him and his daughters seeking protection in a foreign land as his sons duke it out over his throne. In Antigone, Oedipus’s daughter faces the consequences of burying her brother after his demise in that struggle. In both plays, sadness ensues.

Agamemnon dramatizes a story we’ve already encountered in the Odyssey: the titular king returns home only to be betrayed and murdered by his wife and her lover. The motive? The usual: revenge, lust, power. Sadness ensues.

The Bacchae centers on the cult of the demigod Dionysus. He comes to Thebes to avenge a slanderous rumor and spread his own cult. Not recognizing him, King Pentheus arrests him and persecutes his followers, a group of women that includes Pentheus’s mother, Agave. In ecstatic frenzy, Agave and the women tear Pentheus apart. Again, not light fare.

Lysistrata, a comedy, was a respite. Looking to put an end to the Peloponnesian War, a group of women led by the titular character convince Greek women to go on a sex strike until the men stop the fighting. For such an old play, it’s surprisingly funny. (More on this below.)

These plays are very famous, but I’d never read them.
This time, I heard dramatizations of Sophocles’s plays and an audiobook of The Bacchae, and read ebooks of the remaining two. The dramatizations were the most powerful and understandable, but reading Lysistrata helped me appreciate the puns.

Audiovisual

Music: Gioia recommended classic blues tunes. I listened to Apple Music collections for Blind Lemon Jefferson and Blind Willie Johnson. I also revisited an album of blues music compiled for Martin Scorsese’s film series, The Blues. My favorite track here is Lead Belly’s C.C. Rider, a song that’s lived rent free in my brain the last several days.

Art: Gioia recommended looking at Greek pottery. I studied some of this in college and didn’t spend much time looking again.

Cinema: rather than something related to the readings, I sought out a movie starring Gene Hackman, who died a couple of weeks ago. I opted for Francis Ford Coppola’s THE CONVERSATION, which is about the ethics of privacy-invading technologies. Even though the movie is fifty-one years old, that description should make it clear that it’s highly relevant today.

Reflections

I was surprised by the freshness of the plays. Yes, most namechecks are meaningless without notes. (That’s an advantage books have over audiobooks.) But the stories deal with timeless themes: truth-seeking, repression, free will vs. predestination, the influence of religious belief on our actions, relations between the sexes, etc.

Unsurprisingly, some of these themes are also central to THE CONVERSATION. I sensed parallels between Oedipus and the film’s protagonist, Harry Caul. ChatGPT provided useful insights. (Spoilers here for both the play and movie – but c’mon, these are old works!) Both characters investigate the truth only to find painful revelations about themselves. Both believe that gaining knowledge will help them control events – but their efforts only lead to self-destruction. Both misunderstand key pieces of evidence.
Both end up “isolated, ruined by their own knowledge, and stripped of their former identity.” (I liked how ChatGPT phrased this!) Both stories explore the limits of perception: it’s possible to see (and record) and remain ignorant of the truth. Heavy stuff – as drama is wont to be.

But for me, the bigger surprise in exploring these works was Lysistrata. Humor is highly contextual: even contemporary stuff doesn’t play well across cultures. But this ancient Greek play is filled with randy situations and double entendres that are still funny.

Much rides on the translation. The edition I read was translated by Jack Lindsay, and I marveled at his skills. It must’ve been challenging to get the rhymes and puns in and still make the story work. A note in the text mentioned that the Spartans in the story were translated to sound like Scots to make them relatable to the intended English audience. (!)

Obviously, none of these ancient texts I’ve been reading were written in English. That will change in the latter stages of the course. I’m wondering if I should read texts originally written in Spanish and Italian in those languages, since I can. (But what would that do to my notes and running interactions with the LLMs? It’s an opportunity to explore…)

Notes on Note-taking

Part of why I’m undertaking this course is to experiment with note-taking and LLMs. This week, I tried a few new things.

First, before reading each play, I read through its synopsis in Wikipedia. This helped me understand the narrative thread and themes and generally get oriented in unfamiliar terrain.

Second, I tried a new cadence for capturing notes. These are short plays; I read one per day. (Except The Bacchae, which I read over two days.) During my early morning journaling sessions, I wrote down a synopsis of the play I’d read the previous day. Then, I asked GPT-4o for comments on the synopsis. The LLM invariably pointed out important things I’d missed.
The point wasn’t making more complete notes, but helping me understand and remember better by writing down my fresh memories and reviewing them through a “third party.” I was forced to be clear and complete, since I knew I’d be asking for feedback.

Third, I added new sections to my notes for each work. After the synopsis, I asked GPT-4o for an outline explaining why the work is considered important. I read these outlines and reflected on them. Then, I asked for criticisms, both modern and contemporary, that could be leveled against these works.

Frankly, this is risky. One of my guidelines has been to stick to prompts where I can verify the LLM’s output. If I ask for a summary of a work I’ve just read, I’ll have a better shot at knowing whether the LLM is hallucinating. But in this case, I’m asking for stuff that I won’t be able to validate. Still, I’m not using these prompts to generate authoritative texts. Instead, the answers help me consider the work from different perspectives. The LLM helps me step outside my experience – and that’s one of the reasons for studying the humanities.

Up Next

Gioia scheduled Marcus Aurelius and Epictetus for week 11. I’ve read Meditations twice and loved it, and will revisit it now more systematically. But since I’m already familiar with this work, I’ll also spend more time with the Bible – the Book of Job, in particular. In addition to Job itself, I plan to read Mark Larrimore’s The Book of Job: A Biography, which explores its background. It’ll be the first time in the course that I read a work about a work. (As you may surmise, I’m keen on Job.) This will also be the first physical book I read in the course.

Otherwise, I’m sticking with Gioia’s recommendations. Check out his post for the full syllabus. Again, there’s a YouTube playlist for the videos I’m sharing here. I’m also sharing these posts via Substack if you’d like to subscribe and comment. See you next week!

14 hours ago 1 vote
Performance of the Python 3.14 tail-call interpreter

About a month ago, the CPython project merged a new implementation strategy for their bytecode interpreter. The initial headline results were very impressive, showing a 10-15% performance improvement on average across a wide range of benchmarks on a variety of platforms. Unfortunately, as I will document in this post, these impressive performance gains turned out to be primarily due to inadvertently working around a regression in LLVM 19. When benchmarked against a better baseline (such as GCC, clang-18, or LLVM 19 with certain tuning flags), the performance gain drops to 1-5% or so, depending on the exact setup.

51 minutes ago 1 vote
Reading List 03/08/2025

China’s industrial diplomacy, streetlights and crime, deorbiting Starlink satellites, a proposed canal across Thailand, a looming gas turbine shortage, and more.

yesterday 2 votes
iPhone 16e review in progress: battery life

You can never do too much battery testing, but after a week with this phone I've got some impressions to share.

yesterday 2 votes
Real WordPress Security

One thing you’ll see on every host that offers WordPress is claims about how secure they are; however, they don’t put their money where their mouth is. When you dig deeper, you’ll find that if your site actually gets hacked, they’ll hit you with remediation fees that can run from hundreds to thousands of dollars. They may try … Continue reading Real WordPress Security →

yesterday 2 votes