I've always been fond of the idea of the Raspberry Pi: an energy-efficient, small, cheap but capable computer. An ideal home server. Until the Pi 4, the Pi was not that capable, and only with the relatively recent Pi 5 (fall 2023) do I feel the Pi is OK performance-wise, although still hampered by SD card performance1. And the Pi isn't that cheap either. The Pi 5 can be fitted with an NVMe SSD, but for me it's too little, too late. Because I feel there is a type of computer on the market that is much more compelling than the Pi. I'm talking about the tinyminimicro home lab 'revolution' started by servethehome.com about four years ago (2020).

A 1L mini PC (Elitedesk 705 G4) with a Raspberry Pi 5 on top

During the pandemic, the Raspberry Pi was in short supply and people started looking for alternatives. The people at servethehome realised that these small enterprise desktop PCs could be a good option. Dell (micro), Lenovo (tiny) and HP (mini) all make these small desktop PCs, which...
8 months ago


More from Louwrentius

My 71 TiB ZFS NAS after 10 years and zero drive failures

My 4U 71 TiB ZFS NAS built with twenty-four 4 TB drives is over 10 years old and still going strong. Although now on its second motherboard and power supply, the system has yet to experience a single drive failure (knock on wood). Zero drive failures in ten years, how is that possible?

Let's talk about the drives first

The 4 TB HGST drives have roughly 6000 hours on them after ten years. You might think something's off and you'd be right. That's only about 250 days worth of runtime. And therein lies the secret of drive longevity (I think): turn the server off when you're not using it. According to people on Hacker News I have my bearings wrong: the chance of having zero drive failures over 10 years for 24 drives is much higher than I thought it was. So this good result may not be related to turning my NAS off and keeping it off most of the time.

My NAS is turned off by default. I only turn it on (remotely) when I need to use it. I use a script to turn the IoT power bar on, and once the BMC (Baseboard Management Controller) is done booting, I use IPMI to turn on the NAS itself. I could have used Wake-on-LAN as an alternative. Once I'm done using the server, I run a small script that turns the server off, waits a few seconds and then turns the wall socket off. It wasn't enough for me to just turn off the server but leave the motherboard, and thus the BMC, powered, because that's a constant 7 watts (about two Raspberry Pis at idle) being wasted 24/7. This process works for me because I run other services on low-power devices such as Raspberry Pi 4s, or on servers that use much less power when idling than my 'big' NAS. This process reduces my energy bill considerably (the primary motivation) and also seems great for hard drive longevity.

Although zero drive failures to date is awesome, N=24 is not very representative and I could just be very lucky. Yet it was the same story with the predecessor of this NAS, a machine with twenty 1 TB Samsung Spinpoint F1 drives (remember those?), and I also had zero drive failures during its operational lifespan (~5 years).

The motherboard (died once)

Although the drives are still OK, I had to replace the motherboard a few years ago. The failure mode of the motherboard was interesting: it was impossible to get into the BIOS and it would occasionally fail to boot. I tried the obvious, like removing the CMOS battery, but to no avail. Fortunately, the motherboard1 was still available on eBay for a decent price, so that ended up not being a big deal.

ZFS

ZFS has worked fine all these years. I've switched operating systems over the years and I never had an issue importing the pool into the new OS install. If I were to build a new storage server, I would definitely use ZFS again. I run a zpool scrub on the drives a few times a year2. The scrub has never found a single checksum error. I must have run so many scrubs that more than a petabyte of data has been read from the drives (all drives combined), and ZFS never had to kick in.

I'm not surprised by this result at all. Drives tend to fail most often in two modes: total failure, where the drive isn't even detected, and bad sectors (read or write failures). There is a third failure mode, but it's extremely rare: silent data corruption. Silent data corruption is 'silent' because the disk isn't aware it delivered corrupted data, or the SATA connection didn't detect any checksum errors. However, due to all the low-level checksumming, this risk is extremely small. It's a real risk, don't get me wrong, but it's a small risk.
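The post doesn't show the power-on script itself, but the flow it describes is: switch the IoT power bar on, wait for the BMC to finish booting, then power on the server over IPMI. Below is a minimal Python sketch of that sequence, assuming a power bar with a simple HTTP API and a standard ipmitool lanplus setup; the URL, hostnames and credentials are placeholders, not values from the post.

#!/usr/bin/env python3
# Illustrative sketch of the remote power-on flow, not the author's actual script.
import subprocess
import time
from urllib.request import urlopen

POWER_BAR_URL = "http://powerbar.lan/relay/0?turn=on"  # hypothetical smart power bar endpoint
BMC_HOST = "nas-bmc.lan"                               # hypothetical BMC address
IPMI_USER = "admin"                                    # placeholder credentials
IPMI_PASS = "secret"


def ipmi(*args: str) -> subprocess.CompletedProcess:
    """Run an ipmitool command against the BMC over the network (lanplus)."""
    return subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", IPMI_USER, "-P", IPMI_PASS, *args],
        capture_output=True,
    )


def main() -> None:
    # 1. Switch the wall socket / power bar on.
    urlopen(POWER_BAR_URL, timeout=5)

    # 2. Wait until the BMC has booted and answers IPMI queries.
    while ipmi("chassis", "power", "status").returncode != 0:
        time.sleep(10)

    # 3. Power on the server itself.
    ipmi("chassis", "power", "on")


if __name__ == "__main__":
    main()

The power-off script described in the post would be the reverse: a clean shutdown of the server (for example ipmitool chassis power soft, or a shutdown over SSH), a short wait, and then switching the power bar off.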
To me, it's a risk you mostly care about at scale, in datacenters4, but for residential usage, it's totally reasonable to accept the risk3. But ZFS is not that difficult to learn and if you are well-versed in Linux or FreeBSD, it's absolutely worth checking out. Just remember!

Sound levels (It's Oh So Quiet)

This NAS is very quiet for a NAS (video with audio). But to get there, I had to do some work. The chassis contains three sturdy 12V fans that cool the 24 drive cages. These fans are extremely loud if they run at their default speed. But because they are so beefy, they are fairly quiet when they run at idle RPM5, yet they still provide enough airflow most of the time. Running at idle speed alone was not enough, though, as the drives would heat up eventually, especially when they are being read from or written to.

Fortunately, the particular Supermicro motherboard I bought at the time allows all fan headers to be controlled through Linux. So I decided to create a script that sets the fan speed according to the temperature of the hottest drive in the chassis. I actually visited a math-related subreddit and asked for an algorithm that would best fit my need for a silent setup that also keeps the drives cool. Somebody recommended using a "PID controller", which I knew nothing about. So I wrote some Python, stole some example Python PID controller code, and tweaked the parameters to find a balance between noise and cooling performance. The script has worked very well over the years and has kept the drives at 40C or below. PID controllers are awesome and I feel they should be used in much more equipment that controls fans, temperature and so on, instead of 'dumb' on/off behaviour or less 'dumb' lookup tables.

Networking

I started out with quad-port gigabit network controllers and used network bonding to get around 450 MB/s network transfer speeds between various systems. This setup required a ton of UTP cables, so eventually I got bored with that and bought some cheap Infiniband cards; those worked fine and I could reach around 700 MB/s between systems. When I decided to move away from Ubuntu and back to Debian, I faced a problem: the Infiniband cards didn't work anymore and I could not figure out how to fix it. So I bought some second-hand 10Gbit Ethernet cards and those work totally fine to this day.

The dead power supply

When you turn this system on, all drives spin up at once (no staggered spin-up) and that draws around 600W for a few seconds. I remember the power supply was rated for 750W and that the 12-volt rail should have been able to deliver enough power, but it would sometimes cut out at boot nonetheless.

UPS (or lack thereof)

For many years I used a beefy UPS with the system to protect against power failure, just to be able to shut down cleanly during an outage. This worked fine, but I noticed that the UPS used another 10+ watts on top of the usage of the server, and I decided it had to go. Losing the system due to power shenanigans is a risk I accept.

Backups (or a lack thereof)

My most important data is backed up thrice. But a lot of data stored on this server isn't important enough for me to back up. I rely on replacement hardware and ZFS protecting against data loss due to drive failure. And if that's not enough, I'm out of luck. I've accepted that risk for 10 years. Maybe one day my luck will run out, but until then, I enjoy what I have.

Future storage plans (or lack thereof)

To be frank, I don't have any.
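The fan-control script isn't included in the post, but a PID loop for this job is short: measure the hottest drive, compare it against a setpoint (the post mentions keeping drives at 40C or below), and adjust the fan duty cycle based on the error. Here is a minimal sketch; the two helper functions and the gain values are placeholders, since the real script reads drive temperatures and drives Supermicro fan headers in a board-specific way.

#!/usr/bin/env python3
# Illustrative PID fan-control sketch, not the author's actual script.
import time

SETPOINT = 40.0              # target temperature of the hottest drive (Celsius)
KP, KI, KD = 4.0, 0.1, 1.0   # placeholder gains; in practice tuned by experiment
INTERVAL = 30                # seconds between measurements


def read_hottest_drive_temp() -> float:
    """Placeholder: return the temperature of the hottest drive (e.g. via smartctl)."""
    raise NotImplementedError


def set_fan_duty(percent: float) -> None:
    """Placeholder: set fan speed as a percentage of maximum (board-specific)."""
    raise NotImplementedError


def main() -> None:
    integral = 0.0
    previous_error = 0.0
    while True:
        error = read_hottest_drive_temp() - SETPOINT
        integral += error * INTERVAL
        derivative = (error - previous_error) / INTERVAL
        previous_error = error

        output = KP * error + KI * integral + KD * derivative
        # Clamp the duty cycle: never fully off, never above 100%.
        set_fan_duty(max(20.0, min(100.0, 20.0 + output)))
        time.sleep(INTERVAL)


if __name__ == "__main__":
    main()

The appeal over a lookup table is that the loop converges on whatever fan speed keeps the hottest drive at the setpoint, regardless of ambient temperature or workload.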
I built this server back in the day because I didn't want to shuffle data around due to storage space constraints, and I still have ample space left. I have a spare motherboard, CPU, memory and a spare HBA card, so I'm quite likely able to revive the system if something breaks. As hard drive sizes have increased tremendously, I may eventually move away from the 24-drive-bay chassis to a smaller form factor. It's possible to create the same amount of redundant storage space with only 6-8 hard drives with RAIDZ2 (RAID 6) redundancy. Yet storage is always expensive. Another likely scenario is that in the coming years this system eventually dies, I decide not to replace it at all, and my storage hobby comes to an end.

1. I needed the same board because the server uses four PCIe slots: 3 x HBA and 1 x 10Gbit NIC. ↩
2. It takes ~20 hours to complete a scrub and it uses a ton of power while doing so. As I'm on a dynamic power tariff, I run it on 'cheap' days. ↩
3. Every time I listen to ZFS enthusiasts, you get the impression that you are taking insane risks with your data if you don't run ZFS. I disagree; it all depends on context and circumstances. ↩
4. Enterprise hard drives used in servers and SANs had larger sector sizes to accommodate even more checksumming data to protect against silent data corruption. ↩
5. Because there is little airflow by default, I had to add a fan to cool the four PCIe cards (HBA and networking) or they would have gotten way too hot. ↩

5 months ago 11 votes
AI is critically important but not for you

Before ChatGPT caused a sensation, big tech companies like Facebook and Apple were betting their future growth on virtual reality. But I'm convinced that virtual reality will never be a mainstream thing. If you ever used VR you know why: a heavy thing on your head that messes up your hair, and nausea. The focus on virtual reality felt like desperation to me. The desperation of big tech companies trying to find new growth, ideally a monopoly they control1, to satisfy the demands of shareholders. And then OpenAI dropped ChatGPT and all the big tech companies started to pivot so fast, because in contrast to VR, AI doesn't involve making people nauseated or making them look silly.

It's probably obvious that I feel it's not about AI itself. It is really about huge tech companies that have found a new way to sustain growth a bit longer, now that all other markets have been saturated. Flush with cash, they went nuts and bought up all the AI accelerator hardware2, which in turn uses unspeakable amounts of energy to train new large language models. Despite all the hype, current AI technology is at its core a very sophisticated statistical model. It's all about probabilities; it can't actually reason. As I see it, work done by AI thus can't be trusted. Depending on the specific application, that may be less of an issue, but it is a fundamental limitation of current technology. And this gives me pause, as it limits the application where it is most wanted: to control labour. To reduce the cost of headcount and to suppress wages. As AI tools become capable enough, it would be irresponsible towards shareholders not to pursue this direction.

All this just to illustrate that the real value of AI is not for the average person in the street. The true value is for those bigger companies who can keep on growing, and the rest is just collateral damage. But I wonder: when the AI hype is over, what new hype will take its place? I can't see it. I can't think of it. But I recognise that the internet created efficiencies that are convenient, yet social media weaponised this convenience to exploit our fundamental human weaknesses. As shareholder value rose, social media slowly chipped away at the fabric of our society: trust.

1. I sold my Oculus Rift CV1 long ago; I lost hundreds of dollars of content, but I refuse to create a Facebook/Meta account. ↩
2. Climate change accelerators. ↩

9 months ago 9 votes
How to run Victron veconfigure on a Mac

Introduction

Victron Multiplus-II inverter/chargers are configured with the veconfigure1 tool. Unfortunately this is a Windows-only tool, but there is still a way for Apple users to run this tool without any problems. Tip: if you've never worked with the Terminal app on macOS, it might not be an easy process, but I've done my best to make it as simple as I can. A tool called 'Wine' makes it possible to run Windows applications on macOS. There are some caveats, but none of those apply to veconfigure; this tool runs great! I won't cover in this tutorial how to make the MK-3 USB cable work. This tutorial is only meant for people who have a Cerbo GX or similar device, or run VenusOS, which can be used to remotely configure the Multiplus device(s).

Step 1: install brew on macOS

Brew is a tool that can install additional software.
- Visit https://brew.sh and copy the install command
- Open the Terminal app on your Mac and paste the command
- Now press 'Enter' or return
It can take a few minutes for 'brew' to install.

Step 2: install Wine

Enter the following two commands in the terminal:
brew tap homebrew/cask-versions
brew install --cask --no-quarantine wine-stable

Download Victron veconfigure

- Visit this page
- Scroll to the section "VE Configuration tools for VE.Bus Products"
- Click on the link "Ve Configuration Tools"
- You'll be asked if it's OK to download this file (VECSetup_B.exe), which is OK

Start the veconfigure installer with Wine

- Open a terminal window
- Run cd
- Enter the command wine Downloads\VECSetup_B.exe
- Observe that the veconfigure Windows setup installer starts
- Click on Next, Next, Install and Finish
- veconfigure will run for the first time

How to start veconfigure after you close the app

- Open a terminal window
- Run cd
- Run cd .wine/drive_c/Program\ Files\ \(x86\)/VE\ Configure\ tools/
- Run wine VEConfig.exe
- Observe that veconfigure starts

Allow veconfigure access to files in your Mac Downloads folder

- Open a terminal window
- Run cd
- Run cd .wine/drive_c/
- Run ln -s ~/Downloads

We just made the Downloads directory on your Mac accessible to the veconfigure software. If you put the .RSVC files in the Downloads folder, you can edit them. Please follow the instructions for remote configuration of the Multiplus II.

1. Click on the "Ve Configuration Tools" link in the "VE Configuration tools for VE.Bus Products" section. ↩

11 months ago 14 votes
Tunneling Elixir cluster network traffic over Wireguard

Introduction

The other day I was supporting a customer with an Elixir-based platform that makes use of Elixir's libcluster, so messages on one host can be passed to other hosts. This can - for example - enable live updates for all users, even if they are not communicating with the same application server.

Encryption

Elixir's libcluster does support encrypted communication using TLS certificates, but even with the help of an application developer I was struggling to make it work:

"severity":"warn","message":"[libcluster:example] unable to connect to :\"app@Host-B\"

I'm absolutely open to the idea that we did something wrong and that certificate-based encryption would work, but we were time-constrained and decided to opt for another solution that seemed simpler and easier to maintain.

Wireguard as the encrypted transport

I deployed a Wireguard mesh network between all application servers using Ansible, which was straightforward. We just provisioned all hosts into the /etc/hosts file to keep things simple. The table below shows a simplified example of the setup.

Hostname  IP-address   Wireguard hostname  Wireguard IP-address
Host-A    10.0.10.123  Host-A-wg           192.168.0.1
Host-B    10.0.11.231  Host-B-wg           192.168.0.2

The Elixir applications would only know about the Host-A-wg and Host-B-wg hostnames and thus communicate over the encrypted VPN tunnel.

The problem with Wireguard and libcluster

The key issue with libcluster is that when Host-A connects to Host-B, it uses the Host-B-wg DNS hostname. But the actual hostname of Host-B is - you guessed it - 'Host-B'. This means there is a mismatch and, for reasons unknown to me, the libcluster connection will fail. So the target hostname as configured in libcluster must match the hostname of the actual host! Since libcluster seems to make the use of domain names mandatory, using IP addresses was not an option. If we pointed Host-B to its Wireguard IP address (192.168.0.2), the problem would be solved. However, in that case Wireguard would no longer know about the external 10.0.11.231 IP address and would itself try to reach 192.168.0.2 - an address that only exists inside the tunnel. So the Wireguard tunnel would never be created.

The solution

The solution is not that elegant, but it works. We still point the Host-B domain name to the Wireguard IP address of 192.168.0.2, but we create an additional DNS record specifically for Wireguard, pointing to 10.0.11.231, so it can set up the VPN tunnel. This is what /etc/hosts looks like on Host-A:

10.0.10.123 Host-A
192.168.0.2 Host-B
10.0.11.231 Host-B-wg

And this is what /etc/hosts looks like on Host-B:

10.0.11.231 Host-B
192.168.0.1 Host-A
10.0.10.123 Host-A-wg

Evaluation

Although all choices are a tradeoff, for us the Wireguard-based solution makes the most sense, especially now that we have an encrypted tunnel between all hosts and any future communication between hosts can be encrypted without any additional effort.
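As an aside, for a fixed set of hosts like this, a libcluster topology is often configured with the Epmd strategy and an explicit host list. A minimal sketch - assuming node names of the form app@<hostname>, as the warning above suggests, and assuming the Epmd strategy (the post doesn't say which strategy was used) - could look like this:

# config/config.exs - illustrative sketch, not the customer's actual configuration
config :libcluster,
  topologies: [
    example: [
      strategy: Cluster.Strategy.Epmd,
      config: [
        # These hostnames now resolve to the Wireguard addresses via /etc/hosts,
        # so cluster traffic flows over the encrypted tunnel.
        hosts: [:"app@Host-A", :"app@Host-B"]
      ]
    ]
  ]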

a year ago 9 votes

More in technology

Reading list 02/22/25

Waymo’s factory, a map of US land values, ships in the Arctic Circle, battery industry trends, and more.

13 hours ago 6 votes
How Core Git Developers Configure Git

What `git config` settings should be defaults by now? Here are some settings that even the core developers change.

18 hours ago 5 votes
WordCamp Asia and Maha Kumbh Mela

It’s been fantastic being in the Philippines for this year’s WordCamp Asia. We have attendees from 71 countries, over 1,800 tickets sold, and contributor day had over 700 people! It’s an interesting contrast to US and EU WordCamps as well in that the audience is definitely a lot younger, and there’s very little interest in …

20 hours ago 3 votes
Odds and Ends #59: The end of "woke capitalism"

Plus the government did the stupid thing after all.

18 hours ago 2 votes
Algorithms are breaking how we think

Today, Alec Watson posted a video titled “Algorithms are breaking how we think” on his YouTube channel, Technology Connections. The whole thing is excellent and very well argued. The main thrust is: people seem increasingly less mindful about the stuff they engage with. Watson argues that this is bad, and I agree. A little while ago I watched a video by Hank Green called “$4.5M to Spray Alcoholic Rats with Bobcat Urine”. Green has been banging this drum for a while. He hits some of the same notes as Watson, but from a different angle. This last month has been a lot, and I’ve withdrawn from news and social media quite a bit because of it. Part of this is because I’ve been very busy with work, but it’s also because I’ve felt overwhelmed. There are now a lot of bad-faith actors in positions of power. Part of their game plan is to spray a mass of obviously false, intellectually shallow, enraging nonsense into the world as quickly as possible. At a certain point the bullshit seeps in if you’re soaking in it. The ability to control what you see next is powerful. I think it would be great if more people started being a bit more choosy about who they give that control to.

an hour ago 2 votes