As described earlier, I set up a RAID 6 array consisting of physical 1 TB disks and 'virtual' 1 TB disks that are in fact two 0.5 TB disks in RAID 0. I wanted to upgrade to Lenny because the new kernel that ships with Lenny supports growing a RAID 6 array. After installing Lenny, the RAID 0 devices were running smoothly, but they were not recognised as part of the RAID 6, so the array was running in degraded mode. That is bad.

Lenny ships with a newer version of mdadm that requires the presence of the mdadm.conf file. The mdadm.conf file contains these lines:

    #DEVICE partitions
    #DEVICE /dev/md*

After I uncommented the "DEVICE /dev/md*" line and generated a new initramfs with:

    update-initramfs -u

the RAID 0 devices were recognised as part of a RAID array and everything was OK again. So mdadm must be instructed to check whether /dev/md? devices are members of a RAID array. I guess this is also relevant if you are running RAID 10 as a mirrored stripe or a striped mirror.


More from Louwrentius

Bose SoundLink on-ear headphones battery replacement

Skip to the bottom two paragraphs for instructions on how to replace the battery.

I bought my Bose SoundLink on-ear Bluetooth headphones for 250 Euros around 2017 and I really like them. They are small, light, comfortable and can easily fit in a coat pocket when folded. Up until now (about 7 years later) I have replaced the ear cushions in 2019 (€25) and 2024 (€18). Early 2025, battery capacity had deteriorated to the point where it became noticeable. The battery was clearly dying.

Unfortunately these headphones aren't designed for easy battery replacement:

- Bose hasn't published instructions on how to replace the battery, doesn't offer a replacement battery and hasn't documented which battery type/model is used.
- The left 'head phone' has two Torx security screws and most people won't have the appropriate screwdriver for this size.
- There is soldering involved.

I wanted to try a battery replacement anyway, as I hate to throw away a perfectly good, working product just because the battery has worn out. Maybe at some point the headband needs replacing, but with a fresh battery, these headphones can last another 7 years. Let's prevent a bit of e-waste with a little bit of cost and effort. Most of all, the cost of this battery replacement is much lower than a new pair of headphones: the battery was €18 including taxes and shipping.

Right to repair should include easy battery replacement

Although my repair seems to have worked out fine, it requires enough effort that most people won't even try. For this reason, I feel that it should be mandatory by law that:

- Batteries in any product must be user-replaceable (no special equipment or soldering required).
- Batteries must be provided by the vendor until 10 years after the last day the product was sold (unless it's a standard format like AA(A) or 18650).
- Batteries must be provided at max 10% of the cost of the original product.
- The penalty for non-compliance should be high enough that it won't be regarded as the cost of doing business.

For that matter, all components that may wear down over time should be user-replaceable.

What you need to replace the battery

- The exact battery type: ahb571935pct-01 (350 mAh) (notice the three wires!)
- A Phillips #0 screwdriver / bit
- A Torx T6H security screwdriver / bit (iFixit kits have them)
- A soldering iron
- Solder
- Heat shrink for 'very thin wire'
- A multimeter (optional)
- A bit of tape to 'cap off' bare battery leads

Please note that I found another battery, ahb571935pct-03, with similar specifications (capacity and voltage), but I don't know if it will fit.

Putting the headphone ear cushion back on can actually be the hardest part of the process; you need to be firm, and this process is documented by Bose.

Battery replacement steps I took

Make sure you don't short the wires on the old or new battery during replacement. The battery is located in the left 'head phone'.

- Use a multimeter to check that your new battery isn't dead (it should read 3+ volts).
- Remove the ear cushion from the left 'head phone' very gently so as not to tear the rim.
- Remove the two Phillips screws that keep the driver (speaker) in place.
- Remove the two Torx screws (you may have to press a bit harder).
- Remove the speaker and be careful not to snap the wire.
- Gently remove the battery from the 'head phone'.
- Cut the wires close to the old battery (one by one!) and cover the wires on the battery to prevent a short.
- Strip the three wires from the headphones a tiny bit (just a few mm).
- Put a short piece of heat shrink on each of the three wires of the battery.
- Solder each wire to the correct wire in the ear cup.
- Adjust the location of the heat shrink over the freshly soldered joint. Use the soldering iron close to the heat shrink to shrink it (don't touch anything); this can take some time, so be patient.
- Check that the heat shrink is fixed in place and can't move.
- Put the battery into its specific location in the back of the 'head phone'.
- Test the headphones briefly before reassembling them.
- Reassemble the 'head phone' (consider leaving out the two Torx screws).
- Dispose of the old battery in a responsible manner.

My 71 TiB ZFS NAS after 10 years and zero drive failures

My 4U 71 TiB ZFS NAS built with twenty-four 4 TB drives is over 10 years old and still going strong. Although now on its second motherboard and power supply, the system has yet to experience a single drive failure (knock on wood). Zero drive failures in ten years, how is that possible?

Let's talk about the drives first

The 4 TB HGST drives have roughly 6000 hours on them after ten years. You might think something's off, and you'd be right: that's only about 250 days' worth of runtime. And therein lies the secret of drive longevity (I think): turn the server off when you're not using it.

According to people on Hacker News I have my bearings wrong. The chance of having zero drive failures over 10 years for 24 drives is much higher than I thought it was. So this good result may not be related to turning my NAS off and keeping it off most of the time.

My NAS is turned off by default. I only turn it on (remotely) when I need to use it. I use a script to turn the IoT power bar on, and once the BMC (Baseboard Management Controller) is done booting, I use IPMI to turn on the NAS itself (I could have used Wake-on-LAN as an alternative). Once I'm done using the server, I run a small script that turns the server off, waits a few seconds and then turns the wall socket off.

It wasn't enough for me to just turn off the server but leave the motherboard, and thus the BMC, powered, because that's a constant 7 watts (about two Raspberry Pis at idle) being wasted 24/7. This process works for me because I run other services on low-power devices such as Raspberry Pi 4s, or on servers that use much less power when idling than my 'big' NAS. This process reduces my energy bill considerably (the primary motivation) and also seems great for hard drive longevity.

Although zero drive failures to date is awesome, N=24 is not very representative and I could just be very lucky. Yet it was the same story with the predecessor of this NAS, a machine with 20 drives (1 TB Samsung Spinpoint F1s, remember those?), and I also had zero drive failures during its operational lifespan (~5 years).

The motherboard (died once)

Although the drives are still OK, I had to replace the motherboard a few years ago. The failure mode of the motherboard was interesting: it was impossible to get into the BIOS and it would occasionally fail to boot. I tried the obvious things like removing the CMOS battery, but to no avail. Fortunately, the motherboard [1] was still available on eBay for a decent price, so that ended up not being a big deal.

ZFS

ZFS worked fine for all these years. I've switched operating systems over the years and I never had an issue importing the pool into the new OS install. If I were to build a new storage server, I would definitely use ZFS again.

I run a zpool scrub on the drives a few times a year [2]. The scrub has never found a single checksum error. I must have run so many scrubs that more than a petabyte of data must have been read from the drives (all drives combined), and ZFS never had to kick in.

I'm not surprised by this result at all. Drives tend to fail most often in two modes:

- Total failure, where the drive isn't even detected
- Bad sectors (read or write failures)

There is a third failure mode, but it's extremely rare: silent data corruption. Silent data corruption is 'silent' because the disk isn't aware it delivered corrupted data, or the SATA connection didn't detect any checksum errors. However, due to all the low-level checksumming, this risk is extremely small. It's a real risk, don't get me wrong, but it's a small risk.
To me, it's a risk you mostly care about at scale, in datacenters [4], but for residential usage it's totally reasonable to accept the risk [3]. But ZFS is not that difficult to learn, and if you are well-versed in Linux or FreeBSD, it's absolutely worth checking out.

Sound levels (It's Oh So Quiet)

This NAS is very quiet for a NAS (video with audio). But to get there, I had to do some work. The chassis contains three sturdy 12V fans that cool the 24 drive cages. These fans are extremely loud if they run at their default speed. But because they are so beefy, they are fairly quiet when they run at idle RPM [5], yet they still provide enough airflow most of the time. Running at idle speeds was not enough, though, as the drives would heat up eventually, especially when they were being read from or written to.

Fortunately, the particular Supermicro motherboard I bought at the time allows all fan headers to be controlled through Linux. So I decided to create a script that sets the fan speed according to the temperature of the hottest drive in the chassis. I actually visited a math-related subreddit and asked for an algorithm that would best fit my need for a silent setup that also keeps the drives cool. Somebody recommended using a "PID controller", which I knew nothing about. So I wrote some Python, stole some example Python PID controller code, and tweaked the parameters to find a balance between sound and cooling performance. The script has worked very well over the years and has kept the drives at 40°C or below. PID controllers are awesome, and I feel they should be used in much more equipment that controls fans, temperature and so on, instead of 'dumb' on/off behaviour or less 'dumb' lookup tables.
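The actual script is Python and isn't included in the post; purely to illustrate the PID idea it describes, here is a minimal sketch in C, where the gains, the 40°C setpoint, and the hottest()/setfan() helpers are all hypothetical stand-ins (in reality you'd read drive temperatures via something like smartctl and write a PWM duty cycle):

    /* Illustrative PID fan-control loop -- NOT the author's Python script.
       hottest() and setfan() are hypothetical stand-ins. */
    #include <stdio.h>
    #include <unistd.h>

    static double hottest(void)      { return 38.0; }                  /* stub */
    static void   setfan(double pct) { printf("fan: %.0f%%\n", pct); } /* stub */

    int main(void)
    {
        const double kp = 5.0, ki = 0.1, kd = 1.0; /* gains found by tweaking */
        double target = 40.0, integ = 0.0, prev = 0.0;

        for (;;) {
            double err = hottest() - target;  /* positive = too hot */
            integ += err;                     /* corrects steady-state error */
            double duty = kp * err + ki * integ + kd * (err - prev);
            if (duty < 20.0)  duty = 20.0;    /* always keep some airflow */
            if (duty > 100.0) duty = 100.0;
            setfan(duty);
            prev = err;
            sleep(10);                        /* poll every 10 seconds */
        }
    }

Compared with a lookup table, the integral term slowly ramps the fans just enough to hold the hottest drive at the setpoint, instead of stepping between fixed speeds.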
Networking

I started out with quad-port gigabit network controllers and used network bonding to get around 450 MB/s of network transfer speed between various systems. This setup required a ton of UTP cables, so eventually I got bored with that and bought some cheap InfiniBand cards. That worked fine; I could reach around 700 MB/s between systems. When I decided to move away from Ubuntu and back to Debian, I faced a problem: the InfiniBand cards didn't work anymore and I could not figure out how to fix it. So I bought some second-hand 10Gbit Ethernet cards, and those work totally fine to this day.

The dead power supply

When you turn this system on, all drives spin up at once (no staggered spin-up) and that draws around 600 W for a few seconds. I remember that the power supply was rated for 750 W and the 12 volt rail should have been able to deliver enough power, but it would sometimes cut out at boot nonetheless.

UPS (or lack thereof)

For many years I used a beefy UPS with the system to protect against power failure, just to be able to shut down cleanly during an outage. This worked fine, but I noticed that the UPS used another 10+ watts on top of the usage of the server, and I decided it had to go. Losing the system due to power shenanigans is a risk I accept.

Backups (or a lack thereof)

My most important data is backed up thrice. But a lot of the data stored on this server isn't important enough to me to back up. I rely on replacement hardware and ZFS protecting against data loss due to drive failure. And if that's not enough, I'm out of luck. I've accepted that risk for 10 years. Maybe one day my luck will run out, but until then, I enjoy what I have.

Future storage plans (or lack thereof)

To be frank, I don't have any. I built this server back in the day because I didn't want to shuffle data around due to storage space constraints, and I still have ample space left. I have a spare motherboard, CPU, memory and a spare HBA card, so I'm quite likely able to revive the system if something breaks.

As hard drive sizes have increased tremendously, I may eventually move away from the 24-drive-bay chassis to a smaller form factor. It's possible to create the same amount of redundant storage space with only 6-8 hard drives with RAIDZ2 (RAID 6) redundancy. Yet storage is always expensive. Another likely scenario is that in the coming years this system eventually dies, I decide not to replace it at all, and my storage hobby comes to an end.

1. I needed the same board because the server uses four PCIe slots: 3 x HBA and 1 x 10Gbit NIC.
2. It takes ~20 hours to complete a scrub and it uses a ton of power while doing so. As I'm on a dynamic power tariff, I run it on 'cheap' days.
3. Every time I listen to ZFS enthusiasts, you get the impression you are taking insane risks with your data if you don't run ZFS. I disagree; it all depends on context and circumstances.
4. Enterprise hard drives used in servers and SANs had larger sector sizes to accommodate even more checksumming data to protect against silent data corruption.
5. Because there is little airflow by default, I had to add a fan to cool the four PCIe cards (HBA and networking) or they would have gotten way too hot.

The Raspberry Pi 5 is no match for a tini-mini-micro PC

I've always been fond of the idea of the Raspberry Pi: an energy-efficient, small, cheap but capable computer. An ideal home server. Until the Pi 4, the Pi was not that capable, and only with the relatively recent Pi 5 (fall 2023) do I feel the Pi is OK performance-wise, although still hampered by SD card performance [1]. And the Pi isn't that cheap either. The Pi 5 can be fitted with an NVMe SSD, but for me it's too little, too late, because I feel there is a type of computer on the market that is much more compelling than the Pi. I'm talking about the tinyminimicro home lab 'revolution' started by servethehome.com about four years ago (2020).

A 1L mini PC (Elitedesk 705 G4) with a Raspberry Pi 5 on top

During the pandemic, the Raspberry Pi was in short supply and people started looking for alternatives. The people at servethehome realised that these small enterprise desktop PCs could be a good option. Dell (Micro), Lenovo (Tiny) and HP (Mini) all make these small desktop PCs, which are also known as 1L (one litre) PCs. These mini PCs are not cheap [2] when bought new, but older models are sold at a very steep discount as enterprises offload old models by the thousands on the second-hand market (through intermediaries). Although these computers are often several years old, they are still much faster than a Raspberry Pi (including the Pi 5) and can hold more RAM. I decided to buy two HP Elitedesk mini PCs to try them out, one based on AMD and the other based on Intel.

The hardware

                         Elitedesk 800 G3 (Intel)   Elitedesk 705 G4 (AMD)
    CPU                  Intel i5-6500 (65W)        AMD Ryzen 3 PRO 2200GE (35W)
    RAM                  16 GB (max 32 GB)          16 GB (max 32 GB)
    Storage              250 GB (SATA SSD)          250 GB (NVMe SSD)
    Network              1Gb (Intel)                1Gb (Realtek)
    WiFi                 Not installed              Not installed
    Display              2 x DP, 1 x VGA            3 x DP
    Remote management    Yes                        No
    Idle power           4 W                        10 W
    Price                €160                       €115

The AMD-based system is cheaper, but you 'pay' in higher idle power usage. In absolute terms 10 watts is still decent, but the Intel model directly competes with the Pi 5 on idle power consumption.

Elitedesk 705 left, Elitedesk 800 right

Regarding display output, these devices have two fixed DisplayPort outputs, but there is one port that is configurable: it can be DisplayPort, VGA or HDMI. Depending on the supplier you may be able to configure this option, or you can buy the port modules separately for €15-€25 online.

Both models seem to be equipped with socketed CPUs. Although options for this form factor are limited, it's possible to upgrade.

Comparing cost with the Pi 5

The Raspberry Pi 5 with (max) 8 GB of RAM costs ~91 Euro, almost exactly the same price as the AMD-based mini PC [3] in its base configuration (8 GB RAM). Yet with the Pi, you still need:

- a power supply (€13)
- a case (€11)
- an SD card or NVMe SSD (€10-€45)
- an NVMe hat (€15) (optional, but it makes the comparison fairer)

It's true that I'm comparing a new computer to a second-hand device, and you can decide if that matters in this case. With a complete Pi 5 at around €160 including taxes and shipping, the AMD-based 1L PC is clearly the cheaper and still more capable option.

Comparing performance with the Pi 5

The first two rows in this table show the Geekbench 6 scores of the Intel and AMD mini PCs I bought for evaluation. I've added the benchmark results of some other computers I have access to, just to provide some context.
    CPU                                    Single-core   Multi-core
    AMD Ryzen 3 PRO 2200GE (35W)           1148          3343
    Intel i5-6500 (65W)                    1307          3702
    Mac Mini M2                            2677          9984
    Mac Mini i3-8100B                      1250          3824
    HP Microserver Gen8 (Xeon E3-1200v2)   744           2595
    Raspberry Pi 5                         806           1861
    Intel i9-13900K                        2938          21413
    Intel E5-2680 v2                       558           5859

Sure, these mini PCs won't come close to modern hardware like the Apple M2 or the Intel i9. But if we look at the performance of the mini PCs, we can observe that:

- The Intel i5-6500 is 13% faster in single-core than the AMD Ryzen 3 PRO.
- Both the Intel and AMD processors are 42%-62% faster than the Pi 5 in single-core performance.

Storage (performance)

If there's one thing that really holds the Pi back, it's the SD card storage. If you buy a decent SD card (A1/A2) that doesn't have terrible random IOPS performance, you realise that you can get a SATA or NVMe SSD for almost the same price that has more capacity and much better (random) I/O performance. With the Pi 5, NVMe SSD storage isn't standard and requires an extra hat. I feel that the missing integrated NVMe storage option is a missed opportunity that, in my view, hurts the Pi 5.

In contrast, the Intel-based mini PC came with a SATA SSD in a special mounting bracket. That bracket also contains a small fan to keep the underlying NVMe storage (not present) cooled.

There is a fan under the SATA SSD

The AMD-based mini PC was equipped with an NVMe SSD and was not equipped with the SSD mounting bracket. The low price must come from somewhere... However, both systems have support for SATA SSD storage, an 80mm NVMe SSD and a small 2230 slot for a WiFi card. There seems to be no room in the 705 G4 to put in a small SSD, but there are adapters available that convert the WiFi slot into a slot usable for an extra NVMe SSD, which might be an option for the 800 G3.

Noise levels (subjective)

Both systems are barely audible at idle, but you will notice them (if you're sensitive to that sort of thing). The AMD system becomes quite loud under full load. The Intel system also becomes loud under full load, but much more like a Mac Mini: the noise is less loud and more tolerable in my view.

Idle power consumption

Elitedesk 800 (Intel)

I can get the Intel-based Elitedesk 800 G3 down to 3.5 watts at idle. Let that sink in for a moment: that's about the same power draw as the Raspberry Pi 5 at idle!

Just installing Debian 12 instead of Windows 10 makes the idle power consumption drop from 10-11 watts to around 7 watts. Then, on Debian:

- run apt install powertop
- run powertop --auto-tune (saves ~2 watts)
- unplug the monitor (run headless) (saves ~1 watt)

To make the powertop --auto-tune setting persistent, put it in /etc/rc.local:

    #!/usr/bin/env bash
    powertop --auto-tune
    exit 0

Then apply chmod +x /etc/rc.local.

So, for about the same idle power draw you get much more performance, and you can go beyond the max 8 GB RAM of the Pi 5.

Elitedesk 705 (AMD)

I managed to get this system to 10-11 watts at idle, but it was a pain to get there. I measured around 11 watts of idle power consumption running the preinstalled Windows 11 (with a monitor connected). After installing Debian 12, the system used 18 watts at idle, and so began a journey of many hours trying to solve this problem. The culprit was the integrated Radeon Vega GPU.
To solve the problem you have to:

- configure the 'BIOS' to only use UEFI
- reinstall Debian 12 using UEFI
- install the appropriate firmware with apt install firmware-amd-graphics

If you boot the computer in legacy 'BIOS' mode, the AMD Radeon firmware won't load no matter what you try. You can see this by issuing the commands:

    rmmod amdgpu
    modprobe amdgpu

You may notice errors on the physical console or in the logs saying that the GPU driver isn't loaded because it's missing firmware (a lie). This whole process got me to around 12 watts at idle. To get to ~10 watts idle, you also need to run powertop --auto-tune and disconnect the monitor, as stated in the 'Intel' section earlier. Given the whole picture, 10-11 watts at idle is perfectly OK for a home server, and if you just want the cheapest option possible, this is still a fine system.

KVM virtualisation

I'm running vanilla KVM (Debian 12) on these mini PCs and it works totally fine. I've created multiple virtual machines without issue and performance seems perfectly adequate.

Boot performance

From the moment I pressed the power button to SSH connecting, it took 17 seconds for the Elitedesk 800. The Elitedesk 705 took 33 seconds until I got an SSH shell. These boot times include the 5-second boot delay of the GRUB bootloader screen that is the default for Debian 12.

Remote management support

Some of you may be familiar with IPMI (iLO, DRAC, and so on), which is standard on most servers. But there is also similar technology for (enterprise) desktops: Intel AMT/ME is a technology used for remote out-of-band management of computers. It can be an interesting feature in a homelab environment, but I have no need for it. If you want to try it, you can follow this guide. For most people it may be best to disable the AMT/ME feature, as it has a history of security vulnerabilities. This may not be a huge issue within a trusted home network, but you have been warned. The AMD-based Elitedesk 705 didn't come with equivalent remote management capabilities, as far as I can tell.

Alternatives

The models discussed here are older models that were selected for a particular price point. Newer models from Lenovo, HP and Dell come with more modern processors which are faster and have more cores. They are often also priced significantly higher. If you are looking for low-power small-form-factor PCs with more potent or customisable hardware, you may want to look at second-hand NUC-format PCs.

Stacking multiple mini PCs

The AMD-based Elitedesk 705 G4 is closed at the top, so it's possible to stack other mini PCs on top. The Intel-based Elitedesk 800 G3 has a perforated top enclosure, and putting another mini PC on top might suffocate the CPU fan. As you can see, the bottom/foot of the mini PC doubles as a VESA mount and has four screw holes. By putting some screws in those holes, you can effectively create standoffs that give the machine below enough space to breathe (maybe you can use actual standoffs).

Evaluation and conclusion

I think these second-hand 1L tinyminimicro PCs are better suited to play the role of home (lab) server than the Raspberry Pi (5). The increased CPU performance, the built-in SATA/NVMe support, the option to go beyond 8 GB of RAM (up to 32 GB) and the price point on the second-hand market really make a difference. I love the Raspberry Pi and I still have a ton of Pi 4s. This solar-powered blog is hosted on a Pi 4 because of its low power consumption and the availability of GPIO pins for the solar status display.
That said, unless the Raspberry Pi becomes a lot cheaper (and more potent), I'm not so sure it's such a compelling home server. This blog post featured on the front page of Hacker News.

1. Even a decent-quality SD card is no match (in terms of random IOPS and sequential throughput) for a regular SATA or NVMe SSD. The fact that the Pi 5 has no on-board NVMe support is a huge shortcoming in my view.
2. In the sense that you can buy a ton of fully decked-out Pi 5s for the price of one such system.
3. The base price included the external power brick and 256 GB of NVMe storage.

AI is critically important but not for you

Before ChatGPT caused a sensation, big tech companies like Facebook and Apple were betting their future growth on virtual reality. But I'm convinced that virtual reality will never be a mainstream thing. If you have ever used VR, you know why:

- a heavy thing on your head that messes up your hair
- nausea

The focus on virtual reality felt like desperation to me: the desperation of big tech companies trying to find new growth, ideally a monopoly they control [1], to satisfy the demands of shareholders. And then OpenAI dropped ChatGPT, and all the big tech companies started to pivot so fast, because contrary to VR, AI doesn't involve making people nauseated and look silly.

It's probably obvious that I feel it's not about AI itself. It is really about huge tech companies that have found a new way to sustain growth a bit longer, now that all other markets have been saturated. Flush with cash, they went nuts and bought up all the AI accelerator hardware [2], which in turn uses unspeakable amounts of energy to train new large language models.

Despite all the hype, current AI technology is at its core a very sophisticated statistical model. It's all about probabilities; it can't actually reason. As I see it, work done by AI thus can't be trusted. Depending on the specific application that may be less of an issue, but it is a fundamental limitation of current technology. And this gives me pause, as it limits the application where it is most wanted: to control labour. To reduce the cost of headcount and to suppress wages. As AI tools become capable enough, it would be irresponsible towards shareholders not to pursue this direction.

All this just to illustrate that the real value of AI is not for the average person in the street. The true value is for those bigger companies who can keep on growing, and the rest is just collateral damage. But I wonder: when the AI hype is over, what new hype will take its place? I can't see it. I can't think of it. But I recognise that the internet created efficiencies that are convenient, yet social media weaponised this convenience to exploit our fundamental human weaknesses. As shareholder value rose, social media slowly chipped away at the fabric of our society: trust.

1. I sold my Oculus Rift CV1 long ago; I lost hundreds of dollars of content, but I refuse to create a Facebook/Meta account.
2. Climate change accelerators.

How to run Victron VEConfigure on a Mac

Introduction

Victron MultiPlus-II inverter/chargers are configured with the VEConfigure [1] tool. Unfortunately this is a Windows-only tool, but there is still a way for Apple users to run it without any problems.

Tip: if you've never worked with the Terminal app on macOS, it might not be an easy process, but I've done my best to make it as simple as I can.

A tool called 'Wine' makes it possible to run Windows applications on macOS. There are some caveats, but none of those apply to VEConfigure; this tool runs great!

I won't cover in this tutorial how to make the MK-3 USB cable work. This tutorial is only meant for people who have a Cerbo GX or similar device, or run Venus OS, which can be used to remotely configure the MultiPlus device(s).

Step 1: install brew on macOS

Brew is a tool that can install additional software.

- Visit https://brew.sh and copy the install command
- Open the Terminal app on your Mac and paste the command
- Press 'Enter' (return)

It can take a few minutes for 'brew' to install.

Step 2: install Wine

Enter the following two commands in the terminal:

    brew tap homebrew/cask-versions
    brew install --cask --no-quarantine wine-stable

Download Victron VEConfigure

- Visit this page
- Scroll to the section "VE Configuration tools for VE.Bus Products"
- Click on the link "Ve Configuration Tools"
- You'll be asked if it's OK to download this file (VECSetup_B.exe), which is OK

Start the VEConfigure installer with Wine

- Open a terminal window
- Run cd (this puts you in your home directory)
- Enter the command wine Downloads\VECSetup_B.exe
- Observe that the VEConfigure Windows setup installer starts
- Click next, next, install and finish
- VEConfigure will run for the first time

How to start VEConfigure after you close the app

- Open a terminal window
- Run cd
- Run cd .wine/drive_c/Program\ Files\ \(x86\)/VE\ Configure\ tools/
- Run wine VEConfig.exe
- Observe that VEConfigure starts

Allow VEConfigure access to files in your Mac Downloads folder

- Open a terminal window
- Run cd
- Run cd .wine/drive_c/
- Run ln -s ~/Downloads

We just made the Downloads directory on your Mac accessible to the VEConfigure software. If you put the .RSVC files in the Downloads folder, you can edit them. Please follow the instructions for remote configuration of the MultiPlus II.

1. Click on the "Ve Configuration Tools" link in the "VE Configuration tools for VE.Bus Products" section.


More in technology

Greatest Hits

I’ve been blogging now for approximately 8,465 days since my first post on Movable Type. My colleague Dan Luu helped me compile some of the “greatest hits” from the archives of ma.tt; perhaps some posts will stir some memories for you as well: Where Did WordCamps Come From? (2023) A look back at how Foo …

Let's give PRO/VENIX a barely adequate, pre-C89 TCP/IP stack (featuring Slirp-CK)

Many years ago I bought a copy of TCP/IP Illustrated (what would now be called the first edition, prior to the 2011 update) for a hundred-odd bucks on sale, and it has now sat on my bookshelf, encased in its original shrinkwrap, for at least twenty years. It would be fun to put up the 4.4BSD data structures poster it came with, but that would require opening it. Fortunately, today we have many more excellent and comprehensive documents on the subject, and more importantly, we've recently brought back up an oddball platform that doesn't have networking either: our DEC Professional 380 running the System V-based PRO/VENIX V2.0, which you met a couple of articles back. The DEC Professionals are a notoriously incompatible member of the PDP-11 family and, short of DECnet (DECNA) support in its unique Professional Operating System, there's officially no other way you can get one on a network — let alone the modern Internet. Are we going to let that stop us? Of course not: we'll give it a barely adequate, pre-C89 TCP/IP stack of its own, with Crypto Ancienne as a proxy for TLS 1.3. And, as we'll discuss, if you can get this thing on the network, you can get almost anything on the network! Easily portable and painfully verbose source code is included.

Recall from our lengthy history of DEC's early misadventures with personal computers that, in Digital's ill-advised plan to avoid the DEC Pros cannibalizing low-end sales from their categorical PDP-11 minicomputers, Digital's Small Systems Group deliberately made the DEC Professional series nearly totally incompatible, despite the fact they used the same CPUs. In the initial roll-out strategy in 1982, the Pros (as well as their sibling systems, the Rainbow and the DECmate II) were only supposed to be mere desktop office computers — the fact that the Pros were PDP-11s internally was mostly treated as an implementation detail. The idea backfired spectacularly against the IBM PC when the Pros and their promised office software failed to arrive on time, and in 1984 DEC retooled around a new concept of explicitly selling the Pros as desktop PDP-11s. This required porting the operating systems that PDP-11 minis typically ran: RSX-11M Plus was already there as the low-level layer of the Professional Operating System (P/OS), and DEC internally ported RT-11 (as PRO/RT-11) and COS. PDP-11s were also famous for running Unix, and so DEC needed a Unix for the Pro as well, though eventually only one official option was ever available: a port of VenturCom's Venix, based on V7 Unix and later System V Release 2.0, called PRO/VENIX. After the last article, I had the distinct pleasure of being contacted by Paul Kleppner, the company's first paid employee in 1981, who was part of the group at VenturCom that did the Pro port and stayed at the company until 1988. Venix was originally developed from V6 Unix on the PDP-11/23, incorporating the real-time kernel extensions (such as semaphores and asynchronous I/O) of Myron Zimmerman, then a postdoc in physics at MIT; Kleppner's father was the professor of the lab Zimmerman worked in. Zimmerman founded VenturCom in 1981 to capitalize on the emerging Unix market, becoming one of the earliest commercial Unix licensees. Venix-11 was subsequently based on the later V7 Unix, as was Venix/86, which was the first Unix on the IBM PC in January 1983 and was ported to the DEC Rainbow as Venix/86R. In addition to its real-time extensions and enhanced segmentation capability, critical for memory management in smaller 16-bit address spaces, it also included a full desktop graphics package.
Notably, DEC themselves were also a Unix licensee through their Unix Engineering Group and already had an enhanced V7 Unix of their own running on the PDP-11, branded initially as V7M. Subsequently the UEG developed a port of 4.2BSD with some System V components for the VAX and planned to release it as Ultrix-32, simultaneously retconning V7M as Ultrix-11 even though it had little in common with the VAX release. Paul recalls that DEC did attempt a port of Ultrix-11 to the Pro 350 themselves but ran into intractable performance problems. By then the clock was ticking on the Pro relaunch, and the issues with Ultrix-11 likely prompted DEC to look for alternatives. Crucially, Zimmerman had managed to upgrade Venix-11's kernel while still keeping it small, a vital aspect on his 11/23, which lacked split instruction and data addressing and would otherwise have had to page a larger kernel in and out. Moreover, the 11/23 used an F-11 CPU — the same CPU as the original Professional 350 and 325. DEC quickly commissioned VenturCom to port their own system over to the Pro, which Paul says was a real win for VenturCom, and the first release came out in July 1984, complete with its real-time features intact and graphics support for the Pro's bitmapped screen. It was upgraded ("PRO/VENIX Rev 2.0") in October 1984, adding support for the new top-of-the-line DEC Professional 380, and then switched to System V (SVR2) in July 1985 with PRO/VENIX V2.0. (For its part, Ultrix-11 was released as such in 1984 as well, but never for the Pro series.) Keep that kernel version history in mind for when we get to the oddiments of the C compiler.

As for networking, though, with the exception of UUCP over serial, none of these early versions of Venix on either the PDP-11 or 8086 supported any kind of network connectivity out of the box — officially, the only Pro operating system to support the Ethernet upgrade option was P/OS 2.0. Although all Pros have a 15-pin AUI network port, it isn't activated until an Ethernet CTI card is installed. (While Stan P. found mention of a third-party networking product called Fusion by Network Research Corporation which could run on PRO/VENIX, Paul's recollection is that this package ran into technical problems with kernel size during development. No examples of the PRO/VENIX version have so far been located and it may never have actually been released. You'll hear about it if a copy is found. The unofficial Pro 2.9BSD port also supports the network card, but that was always an under-the-table thing.) Since we run Venix on our Pro, our only realistic option to get this machine on the 'Nets is over a serial port — and since PRO/VENIX supports using only the RS-423 port as a remote terminal, and that port is twice as fast, we'll keep it for logins and file exchange over Kermit (which also has no TCP/IP overhead) and use the lower-speed printer port for our serial IP implementation. Using the printer port also provides us with a nice challenge: if our stack works acceptably well at 4800bps, it should do even better at higher speeds if we port it elsewhere. On the Pro, we connect to our upstream host using a BCC05 cable (in the middle of this photograph), which terminates in a regular 25-pin RS-232 on the other end.

Now for the software part. There are other small TCP/IP stacks, notably things like Adam Dunkels' lwIP and so on.
But even SVR2 Venix is by present standards an old Unix with a much less extensive libc and a more primitive C compiler — in a short while you'll see just how primitive — and relatively modern code like lwIP's would require a lot of porting. Ideally we'd like a very minimal, indeed barely adequate, stack that can do simple tasks and can be expressed in a fashion acceptable to a now antiquated compiler. Once we've written it, it would be nice if it were also easily portable to other very limited systems, even by directly translating it to assembly language if necessary.

What we want this barebones stack to accomplish will inform its design. We won't be running it as a server: we'd have to keep the machine and the hardware on 24-7 to make such a use case meaningful. The Ethernet option was reportedly competent at server tasks, but Ethernet has more bandwidth, and that card also has additional on-board hardware. Let's face the cold reality: as a server, we'd find interacting with it over the serial port unsatisfactory at best, and we'd use up a lot of power and MTBF keeping it on more than we'd like to. Therefore, we really should optimize for the client case, which means we also only need to run the client when we're performing a network task.

Similarly, on a machine with no remote login capacity — like, I dunno, a C64 — the person on the console gets it all. Therefore, we really should optimize for the single-user case, which means we can simplify our code substantially by merely dealing with sockets sequentially, one at a time, without having to worry about routing packets we get on the serial port to other tasks or multiplexing them. Doing so would require extra work for dual-socket protocols like FTP, but we're already going to use directly-attached Kermit for that, and if we really want file transfer over TCP/IP there are other choices. (On a larger antique system with multiple serial ports, we could consider a setup where each user uses a separate outgoing serial port as their own link, which would also work under this scheme.) Some of you may find this conflicts hard with your notion of what a "stack" should provide, but I also argue that the breadth of a full-service driver would be wasted on a limited configuration like this and be unnecessarily more complex to write and test. Worse, in many cases, is better, and I assert this particular case is one of them.

Keeping the above in mind, what are appropriate client tasks for a microcomputer from 1984, now over 40 years old — even a fairly powerful one by the standards of the time — to do over a slow TCP/IP link? Simple text-based protocols come to mind: finger, gopher, HTTP/1.x and the like. (Crypto Ancienne's carl can serve as an HTTP-to-HTTPS proxy to handle the TLS part, if necessary.) We could use protocols like these to download and/or view files from systems that aren't directly connected, or to send and receive status information. One task that is also likely common is an interactive terminal connection (e.g., Telnet, rlogin) to another host. However, as a client, this particular deployment is still likely to hit the same sorts of latency problems, for the same reasons we would experience them connecting to it as a server. The other tasks here are not highly sensitive to latency, require only a single "connection" and no multiplexing, and are simple protocols which are easy to implement. Let's call this feature set our minimum viable product.

Because we're writing only for a couple of specific use cases, and to make them even more explicit and easy to translate, we're going to take the unusual approach of having each of these clients handle their own raw packets in a bytewise manner.
For the actual serial link we're going to go even more barebones and use old-school RFC 1055 SLIP instead of PPP (uncompressed, too, not even Van Jacobson CSLIP). This is trivial to debug and straightforward to write, and if we do so in a relatively encapsulated fashion, we could consider swapping in CSLIP or PPP later on. A couple of utility functions will do the IP checksum algorithm and the reading and writing of the serial port, and DNS and some aspects of TCP also get their own utility subroutines, but otherwise all of the programs we will create will read and write their own network datagrams, using the SLIP code to send and receive over the wire. The C we will write will also be intentionally very constrained, using bytewise operations and assuming nothing about endianness, and using as little of the C standard library as possible. For types, you only need some sort of 32-bit long, which need not be native, an int of at least 16 bits, and a char type — which can be signed, and in fact has to be to run on earlier Venices (read on). You can run the entirety of the code with just malloc/free, read/write/open/close, strlen/strcat, sleep, rand/srand and time for the srand seed (and fprintf for printing debugging information, if desired). On a system with little or no operating system support, almost all of these primitive library functions are easy to write or simulate, and we won't even assume we're capable of non-blocking reads, despite the fact that Venix can do them. After all, from that which little is demanded, even less is expected.
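The utility code itself doesn't appear at this point in the post, but to make "bytewise, assuming nothing about endianness" concrete, here is a sketch of the RFC 1071 Internet checksum in that spirit — my code, not BASS's, written to the compiler constraints the post goes on to describe (K&R definitions, short names, & 0xff masking because plain char may be signed):

    /* RFC 1071 Internet checksum, summed byte-pairwise so the result is
       independent of host endianness. */
    #include <stdio.h>

    long cksum(buf, len)
    char *buf;
    int len;
    {
        long sum = 0;
        int i = 0;

        while (len > 1) {                     /* add each 16-bit word */
            sum += ((long)(buf[i] & 0xff) << 8) + (buf[i + 1] & 0xff);
            i += 2;
            len -= 2;
        }
        if (len)                              /* odd trailing byte */
            sum += (long)(buf[i] & 0xff) << 8;
        while (sum >> 16)                     /* fold carries back in */
            sum = (sum & 0xffff) + (sum >> 16);
        return ~sum & 0xffff;                 /* one's complement */
    }

    int main()
    {
        char h[4];
        h[0] = 0x45; h[1] = 0x00; h[2] = 0x00; h[3] = 0x1c;
        printf("%04lx\n", cksum(h, 4));       /* prints bae3 */
        return 0;
    }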
In the olden days you put a Unix system on a SLIP link with slattach, which effectively makes a serial port directly into a network interface. Such an arrangement would be the most flexible approach from the user's perspective, because you necessarily have a fixed, bindable external address, but obviously such a scheme didn't scale over time. With the proliferation of dialup Unix shell accounts in the late 1980s and early 1990s, closed-source tools like 1993's The Internet Adapter ("TIA") could provide the SLIP and later PPP link just by running them from a shell prompt. Because they synthesize artificial local IP addresses — sort of NAT before the concept explicitly existed — the architecture of such tools prevented directly creating listening sockets, though for some situations this could be considered more of a feature than a bug. Any needed external ports could be proxied by the software anyway, and later network clients tended not to require them, so for most tasks it was more than sufficient. Closed-source and proprietary SLIP/PPP-over-shell solutions like TIA were eventually displaced by open-source alternatives, most notably SLiRP. SLiRP (hereafter Slirp so I don't gouge my eyes out) emerged in 1995 and used a similar architecture to TIA, handing out virtual addresses on a synthetic network and bridging that network to the Internet through the host system. It rapidly became the SLIP/PPP shell solution of choice, leading to its outright ban by some shell ISPs who claimed it violated their terms of service. As direct SLIP/PPP dialup became more common than shell accounts — during which time yours truly upgraded to a 56K Mac modem I still have around here somewhere — Slirp eventually became most useful for connecting small devices via their serial ports (PDAs and mobile phones especially, but really anything; subsets of Slirp are still used in emulators today, like QEMU, for a similar purpose) to a LAN. By a shocking and completely contrived coincidence, that's exactly what we'll be doing!

Slirp has not been officially maintained since 2006. There is no package in Fedora, which is my usual desktop Linux, and the one in Debian reportedly has issues. A stack of patch sets circulated thereafter, but the planned 1.1 release never happened, and other crippling bugs remain, some of which were addressed in other patches that don't seem to have made it into any release, source or otherwise. If you tried to build Slirp from source on a modern system and it just immediately exits, you got bit. I have incorporated those patches and a couple of my own for port naming and the configure script, plus some additional fixes, into an unofficial "Slirp-CK" which is on Github. It builds the same way as prior versions and is tested on Fedora Linux. I'm working on getting it functional on current macOS also.

Next, I wrote up our four basic functional clients: ping, DNS lookup, an NTP client (it doesn't set the clock; it just shows you the stratum, refid and time, which you can use for your own purposes), and a TCP client. The TCP client accepts strings up to a defined maximum length, opens the connection, sends those strings (optionally separated by CRLF), and then reads the reply until the connection closes. This all seemed to work great on the Linux box, which you yourself can play with as a toy stack (directions at the end). Unfortunately, I then pushed it over to the Pro with Kermit and the compiler immediately started complaining.

SLIP is a very thin layer on IP packets. There are exactly four metabytes, for which I created preprocessor defines: a SLIP packet ends with SLIP_END, or hex $c0. Where this byte must occur within a packet, it is replaced by a two-byte sequence for unambiguity, SLIP_ESC SLIP_ESC_END, or hex $db $dc, and where the escape byte itself must occur within a packet, it gets a different two-byte sequence, SLIP_ESC SLIP_ESC_ESC, or hex $db $dd. Although I initially set out to use defines and symbols everywhere instead of naked bytes, and wrote slip.c on that basis, I eventually settled on raw bytes with copious comments so it was clear what was intended to be sent.
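To make the escaping rule concrete, here is a small illustrative encoder — my sketch, not the actual slip.c, with serout() standing in for the real serial-port write:

    /* Send one datagram with RFC 1055 SLIP framing:
       $c0 -> $db $dc, $db -> $db $dd, terminated by $c0. */
    #include <stdio.h>

    int serout(c)                     /* stand-in for the serial write */
    int c;
    {
        printf("%02x ", c & 0xff);
        return 0;
    }

    int slipsnd(p, len)
    char *p;
    int len;
    {
        int i, c;

        for (i = 0; i < len; i++) {
            c = p[i] & 0xff;          /* avoid sign-extended chars */
            if (c == 0xc0) {          /* SLIP_END inside the payload */
                serout(0xdb);
                serout(0xdc);
            } else if (c == 0xdb) {   /* SLIP_ESC inside the payload */
                serout(0xdb);
                serout(0xdd);
            } else
                serout(c);
        }
        serout(0xc0);                 /* SLIP_END terminates the datagram */
        return 0;
    }

    int main()
    {
        char pkt[3];
        pkt[0] = 0x45; pkt[1] = 0xc0; pkt[2] = 0xdb;
        slipsnd(pkt, 3);              /* prints: 45 db dc db dd c0 */
        printf("\n");
        return 0;
    }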
Settling on raw bytes probably saved me a lot of work renaming everything, because I dimly recalled that early C compilers, including System V's, limit their identifiers to eight characters (the so-called "Ritchie limit"). At this point I probably should have simply removed the defines entirely for consistency with their absence elsewhere, but I went ahead and trimmed them down to more opaque, pithy identifiers. That wasn't the only problem, though. I originally had two functions in slip.c, slip_start and slip_stop, and the compiler didn't like that either, despite each appearing to have a unique eight-character prefix. That's because their symbols in the object file are actually prepended with various metacharacters like _ and ~, so effectively you only get seven characters in function identifiers — an issue the compiler's error message fails to explain clearly.

The next problem: there's no unsigned char, at least not in PRO/VENIX Rev. 2.0, which I want to support because it's more common, and presumably not in the original versions of PRO/VENIX and Venix-11 either. (This type does exist in PRO/VENIX V2.0, but that's because it's System V and has a later C compiler.) In fact, the unsigned keyword didn't exist at all in the earliest C compilers, and even when it did, it couldn't be applied to every basic type. Although unsigned char was introduced in V7 Unix and is documented as legal in the PRO/VENIX manual, and it does exist in Venix/86 2.1 (which is also a V7 Unix derivative), the PDP-11 and 8086 C compilers have different lineages, and Venix's V7 PDP-11 compiler definitely doesn't support it. I suspect this may not have been intended, because unsigned int works (unsigned long would be pointless on this architecture, and indeed correctly generates Misplaced 'long' on both versions of PRO/VENIX). Regardless of why, the plain char type on the PDP-11 is signed, and for compatibility reasons here we'll have no choice but to use it. Recall that when C89 was being codified, plain char was left as an ambiguous type, since some platforms (notably PDP-11 and VAX) made it signed by default and others made it unsigned, and C89 was more about codifying existing practice than establishing new ones. That's why the same test program behaves one way on a modern 64-bit platform where plain char is unsigned (e.g., my POWER9 workstation), differently there with the type changed explicitly to signed char, similarly (accounting for different sizes of int) on PRO/VENIX V2.0 (again, which is System V) — and differently still on PRO/VENIX Rev. 2.0.

The differences in int size we expect, but there's other weird stuff going on here. The PRO/VENIX manual lists all the various permutations of type conversions and what gets turned into what where, but since the manual is already wrong about unsigned char, I don't think we can trust the documentation for this part either. Our best bet is to move values into an int and mask off any propagated sign bits before doing comparisons or math, which is agonizing but reliable. That means throwing around a lot of seemingly superfluous & 0xff to make sure we don't get negative numbers where we don't want them.

Once I got it built, however, there were lots of bugs. Many were because it turns out the compiler isn't too good with 32-bit long, which is not a native type on the 16-bit PDP-11. One piece of the NTP client worked on my regular Linux desktop but didn't work in Venix: the intermediate shifts used to assemble a 32-bit value are too large and overshoot, even though they should be in range for a long. On the POWER9 (accounting for the different semantics of %lx) the result comes out as expected, but on Venix the second shift blows out the value. We can get an idea of why from the generated assembly in the adb debugger (here from PRO/VENIX V2.0, since I could cut and paste from the Kermit session). (Parenthetical notes: csav is a small subroutine that pushes volatiles r2 through r4 on the stack and turns r5 into the frame pointer; the corresponding cret unwinds this. The initial branch in this main is used to reserve additional stack space, but is often practically a no-op.) The first shift is at ~main+024. Remember the values are octal, so 010 == 8. r0 is 16 bits wide — there are no 32-bit registers — so an eight-bit shift is fine. When we get to the second shift, however, it's the same instruction on just one register (030 == 24) and the overflow is never checked; in fact, the compiler never shifts the second part of the long at all, so the result is zero. The second problem in this example is that the compiler never treats the constant as a long, even though statically there's no way it can fit in a 16-bit int.
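The post's code samples here were images and didn't survive extraction; the failing pattern it describes — assembling the 32-bit NTP seconds field from four bytes and subtracting the 1970 epoch offset — can be reconstructed roughly as follows (my reconstruction, not the original source):

    /* Reconstruction of the gotcha described above.
       buf holds the four big-endian bytes of an NTP timestamp. */
    #include <stdio.h>

    long broken(buf)
    char *buf;
    {
        /* On Venix the shifts happen in 16-bit int width: the 24-bit
           shift overshoots the register and the high half of the long
           is never produced.  The epoch constant is likewise never
           treated as a long. */
        return ((buf[0] & 0xff) << 24) + ((buf[1] & 0xff) << 16)
             + ((buf[2] & 0xff) << 8) + (buf[3] & 0xff) - 2208988800;
    }

    long fixed(buf)
    char *buf;
    {
        long t, epoch;

        t = buf[0] & 0xff;                /* widen to long FIRST... */
        t = (t << 8) | (buf[1] & 0xff);   /* ...then shift 8 bits at a time */
        t = (t << 8) | (buf[2] & 0xff);
        t = (t << 8) | (buf[3] & 0xff);
        epoch = 2208988800;               /* force the constant into a long */
        return t - epoch;
    }

    int main()
    {
        char b[4];
        b[0] = 0xeb; b[1] = 0x07; b[2] = 0x9f; b[3] = 0x40;
        printf("%ld\n", fixed(b));        /* 1734156480, a date in late 2024 */
        return 0;
    }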
To get around those two gotchas on both Venices, I rewrote the code to build the value in an intermediate long, shifting eight bits at a time, much as in the fixed() sketch above. An alternative to a second variable is to explicitly mark the epoch constant itself as long, e.g., by casting it, which also works.

Here's another example for your entertainment. At least some sort of pseudo-random number generator is crucial, especially for TCP when selecting the pseudo-source port and initial sequence numbers, or otherwise Slirp seemed to get very confused because we would "reuse" things a lot. Unfortunately, the obvious typical idiom to seed it, srand(time(NULL)), doesn't work: srand() expects a 16-bit int but time(NULL) returns a 32-bit long, and it turns out the compiler passes only the 16 most significant bits of the time — i.e., the ones least likely to change — to srand(). The disassembly proves it (contents trimmed for display here; since this is a static binary, we can see everything we're calling). At the point we call the glue code for time from main, the value under the stack pointer (i.e., r6) is cleared immediately beforehand, since we're passing NULL (at ~main+06). We then invoke the system call, which per the Venix manual for time(2) uses two registers for the 32-bit result, namely r0 (high bits) and r1 (low bits). We passed a null pointer, so the values remain in those registers and aren't written anywhere (branch at _time+014). When we return to ~main+014, however, we only put r0 on the stack for srand (remember that r5 is being used as the frame pointer; see the disassembly I provided for csav), and r1 is completely ignored. Why would this happen? It's because time(2) isn't declared anywhere in /usr/include or /usr/include/sys (the two C include directories), nor for that matter rand(3) or srand(3). This is true of both Rev. 2.0 and V2.0. Since the symbols are statically present in the standard library, linking will still work, but since the compiler doesn't know what it's supposed to be working with, it assumes int and fails to handle both halves of the long. One option is to manually declare everything ourselves. However, from the assembly at _time+016 we do know that if we pass a pointer, the entire long value will get placed there — so we can seed from the lower bits instead, and there is sufficient entropy there for our purpose (though obviously not a cryptographically secure PRNG). Interestingly, the Venix manual recommends using the time as the seed, but doesn't include any sample code.
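The original snippet for this was also an image; a reconstruction of the workaround the post describes — pass a pointer so the full 32-bit time lands in memory, then seed from the low half — would look roughly like this:

    /* Seed rand() from the LOW 16 bits of the time.
       srand(time(NULL)) would get only the high, nearly-constant half on
       this compiler, because time() isn't declared anywhere on Venix. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>   /* for modern hosts; Venix has no declaration at all */

    int main()
    {
        long t;

        time(&t);                     /* whole 32-bit value written through the pointer */
        srand((int)(t & 0xffff));     /* the low bits change every second */
        printf("%d\n", rand());
        return 0;
    }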
At any rate, this was enough to make the pieces work for IP, ICMP and UDP, but TCP would bug out after just a handful of packets. As it happens, Venix has rather small serial buffers by modern standards: tty(7), based on the TIOCQCNT ioctl(2), appears to have just a 256-byte read buffer (sg_ispeed is only char-sized). If we don't make adjustments for this, we'll start losing framing when the buffer gets overrun, as in this extract from a test build with debugging dumps on and a maximum segment size/window of 512 bytes, where the bytes marked by dashes are from the remote end and the bytes separated by dots are what the SLIP driver is scanning for framing and/or throwing away; you'll note there is obvious ASCII data in them. If we make the TCP MSS and window on our client side 256 bytes, there is still retransmission, but the connection is more reliable since overrun occurs less often, and this seems to work better than a hard cap on the maximum transmission unit (e.g., "mtu 256") from SLiRP's side.

The only consequence of dropping the TCP MSS and window size is that the TCP client is currently hard-coded to just send one packet at the beginning (this aligns with how you'd do finger, HTTP/1.x, gopher, etc.), and that datagram uses the same size, which necessarily limits how much can be sent. If I did the extra work to split this over several datagrams, it obviously wouldn't be a problem anymore, but I'm lazy, and worse is better!

The connection can be made somewhat more reliable still by improving the SLIP driver's notion of framing. RFC 1055 only specifies that the SLIP end byte (i.e., $c0) occur at the end of a SLIP datagram, though it also notes it was proposed very early on that it could start datagrams too — if two occur back to back, it just looks like a zero-length or otherwise obviously invalid entity which can be trivially discarded. However, since there's no guarantee or requirement that the remote link will do this, we can't assume it either. We also can't just look for a $45 byte (i.e., IPv4 with a 20-byte header) because that's an ASCII character ('E') and appears frequently in text payloads. However, $45 followed by a valid DSCP/ECN byte is much less frequent, and most of the time this second byte will be either $00, $08 or $10; we don't currently support ECN (maybe we should) and we wouldn't find other DSCP values meaningful anyway. The SLIP driver uses these sequences to find the start of a datagram and $c0 to end it. While that doesn't solve the overflow issue, it means the SLIP driver will be less likely to go out of framing when the buffer does overrun, and it can thus better recover when the remote side retransmits.
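A tiny sketch of that start-of-datagram test (my formulation, not the driver's actual code):

    /* Heuristic from the text: a datagram plausibly begins at $45
       (IPv4, header length 5) followed by a common DSCP/ECN byte. */
    #include <stdio.h>

    int isstart(a, b)
    int a, b;
    {
        a &= 0xff;
        b &= 0xff;
        if (a != 0x45)                /* not an IPv4 version/IHL byte */
            return 0;
        return b == 0x00 || b == 0x08 || b == 0x10;
    }

    int main()
    {
        printf("%d %d\n", isstart(0x45, 0x00), isstart(0x45, 0x45));
        return 0;                     /* prints: 1 0 ("EE" is just text) */
    }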
And, well, that's it. There are still glitches to bang out, but it's good enough to grab Hacker News. To build Slirp-CK, go to the src/ directory, run configure and then run make (parallel make is fine, I use -j24 on my POWER9). Connect your two serial ports together with a null modem, which I assume will be /dev/ttyUSB0 and /dev/ttyUSB1. Start Slirp-CK with a command line like ./slirp -b 4800 "tty /dev/ttyUSB1", adjusting the baud and path to your serial port, and take note of the specified virtual and nameserver addresses. Unlike the given directions, you can just kill it with Control-C when you're done; the five zeroes are only if you're running your connection over standard output such as direct shell dial-in (this is a retrocomputing blog so some of you might).

To see the debug version in action, next go to the BASS directory and just do a make. You'll get a billion warnings, but it should still work with current gcc and clang because I specifically request -std=c89. If you use a different path for your serial port (i.e., not /dev/ttyUSB0), edit slip.c before you compile.

You don't do anything like ifconfig with these tools; you always provide the tools the client IP address they'll use (or create an alias or script to do so). As an initial example, try pinging the host with slirp already running. Because I'm super-lazy, you separate the components of the IPv4 address with spaces, not dots. In Slirp-land, 10.0.2.2 is always the host you are connected to. You can see the ICMP packet being sent, the bytes being scanned by the SLIP driver for framing (the ones with dots), and then the reply (with dashes). These datagram dumps have already been pre-processed for SLIP metabytes. Unfortunately, you may not be able to ping other hosts through Slirp because there's no backroute, but you could try this with a direct SLIP connection, an exercise left for the reader.

If Slirp doesn't want to respond and you're sure your serial port works (try testing both ends with Kermit?), you can recompile it with -DDEBUG (change this in the generated Makefile) and pass your intended debug level like -d 1 or -d 3. You'll get a file called slirp_debug with some agonizingly detailed information so you can see if it's actually getting the datagrams and/or liking the datagrams it gets.

For nslookup, ntp and minisock, the second address becomes your accessible recursive nameserver (or use -i to provide an IP). The DNS dump is also given in debug mode, with slashes for the DNS answer section. nslookup and ntp are otherwise self-explanatory. minisock takes a server name (or IP) and port, followed by optional strings. The strings, up to 255 characters total (in this version), are immediately sent with CR-LFs between them unless you specify -n. If you specify no strings, none are sent. It then waits on that port for data and exits when the socket closes. This is how we did the HTTP/1.0 requests in the screenshots (a hypothetical example appears at the end of this post).

On the DEC Pro side, this has been tested on my trusty DEC Professional 380 running PRO/VENIX V2.0. It should compile and run on a 325 or 350, and on at least PRO/VENIX Rev. 2.0, though I don't have any hardware for this and Xhomer's serial port emulation is not good enough for this purpose (so unfortunately you'll need a real DEC Pro until I or Tarek get around to fixing it). The easiest way to get it over there is Kermit. Assuming you have this already, connect your host and the Pro on the "real" serial port at 9600bps. Make sure both sides are set to binary, push all the files over (except the Markdown documentation unless you really want it), and then do a make -f Makefile.venix (it may have been renamed to makefile.venix; adjust accordingly).

Establishing the link is as simple as connecting your server's serial port to the other end of the BCC05 or equivalent from the Pro and starting Slirp to talk to that port (on my system, it's even the same port, so the same command line suffices). If you experience issues with the connection, the easiest fix is to just bounce Slirp — because there are no timeouts, there are also no retransmits. I don't know if this is hitting bugs in Slirp or in my code, though it's probably the latter. Nevertheless, I've been able to run stuff most of the day without issue. It's nice to have a simple network option and the personal satisfaction of having written it myself.

There are many acknowledged deficiencies, mostly because I assume little about the system itself and tried to keep everything very simple. There are no timeouts and thus no retransmits, and if you break the TCP connection in the middle there will be no proper teardown. Also, because I used Slirp for the other side (as many others will), and because my internal network is full of machines that have no idea what IPv6 is, there is no IPv6 support. I agree there should be, and SLIP doesn't care whether it gets IPv4 or IPv6, but for now that would require patching Slirp, which is a job I just don't feel up to at the moment. I'd also like to support at least CSLIP in the future.

In the meantime, if you want to try this on other operating systems, the system-dependent portions are in compat.h and slip.c, with a small amount in ntp.c for handling time values. You will likely want to change where your serial ports are, the speed they run at, and how the port is made "raw" in slip.c.
You should also add any extra #includes to compat.h that your system requires. I'd love to hear about it running in other places. Slirp-CK remains under the original modified Slirp license and BASS is under the BSD 2-clause license. You can get Slirp-CK and BASS on GitHub.
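As a purely hypothetical illustration of the minisock invocation described above (the host and request line are placeholders of mine, and exactly how the terminating blank line is produced depends on the CR-LF behavior noted earlier), an HTTP/1.0 fetch might look something like:

    ./minisock example.com 80 "GET / HTTP/1.0" ""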

15 hours ago
Transactions are a protocol

Transactions are not an intrinsic part of a storage system. Any storage system can be made transactional: Redis, S3, the filesystem, etc. Delta Lake and Orleans demonstrated techniques to make S3 (or cloud storage in general) transactional. Epoxy demonstrated techniques to make Redis (and any other system) transactional. And of course there's always good old Two-Phase Commit. If you don't want to read those papers, I wrote about a simplified implementation of Delta Lake and also wrote about a simplified MVCC implementation over a generic key-value storage layer (a toy sketch of that idea follows at the end of this post).

It is both the beauty and the burden of transactions that they are not intrinsic to a storage system. Postgres and MySQL and SQLite have transactions. But you don't need to use them. It isn't possible to require you to use transactions. Many developers, myself a few years ago included, do not know why you should use them. (Hint: read Designing Data-Intensive Applications.)

And you can take it even further by ignoring the transaction layer of an existing transactional database and implementing your own transaction layer, as Convex has done (the Epoxy paper above also does this). It isn't entirely clear that you have a lot to lose by implementing your own transaction layer, since the indexes you'd want on the version field of a value would only be as expensive or slow as any other secondary index in a transactional database. Though why you'd do this isn't obvious either (I would like to read about this from Convex some time).

It's useful to see transaction protocols as another tool in your system design tool chest when you care about consistency, atomicity, and isolation, especially as you build systems that span data systems. Maybe, as Ben Hindman hinted at the last NYC Systems, even proprietary APIs will eventually provide something like two-phase commit so physical systems outside our control can become transactional too.
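For a concrete flavor of that generic key-value approach, here's a toy sketch (my own, not code from the posts or papers referenced above): each write stores a new (key, version) pair, and a transaction reads the newest version older than its snapshot.

    /* Minimal MVCC-over-KV sketch. A "transaction" is just the snapshot
       version it started at; reads see only versions committed before it. */
    #include <stdio.h>
    #include <string.h>

    struct entry { char key[32]; unsigned version; char value[32]; };

    static struct entry store[128];   /* stand-in for any KV backend */
    static int nentries;
    static unsigned next_version = 1;

    static unsigned begin(void) { return next_version; }  /* take a snapshot */

    static void put(const char *k, const char *v)
    {
        struct entry *e = &store[nentries++];
        strcpy(e->key, k);
        strcpy(e->value, v);
        e->version = next_version++;  /* "commit" assigns the version */
    }

    /* Read the newest version of k visible to the snapshot. */
    static const char *get(const char *k, unsigned snapshot)
    {
        const char *best = NULL;
        unsigned best_v = 0;
        int i;
        for (i = 0; i < nentries; i++)
            if (strcmp(store[i].key, k) == 0 &&
                store[i].version < snapshot && store[i].version >= best_v) {
                best_v = store[i].version;
                best = store[i].value;
            }
        return best;
    }

    int main(void)
    {
        unsigned snap;
        put("x", "1");
        snap = begin();
        put("x", "2");                    /* commits after snap began */
        printf("%s\n", get("x", snap));   /* prints 1, not 2 */
        return 0;
    }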

21 hours ago
Humanities Crash Course Week 16: The Art of War

In week 16 of the humanities crash course, I revisited the Tao Te Ching and The Art of War. I just re-read the Tao Te Ching last year, so I only revisited my notes now. I've also read The Art of War a few times, but decided to revisit it now anyway.

Readings

Both books are related. The Art of War is older; Sun Tzu wrote it around 500 BCE, at a time when war was becoming more "professionalized" in China. The book aims to convey what had (or hadn't) worked on the battlefield. The starting point is conflict. There's an enemy we're looking to defeat. The best victory is achieved without engagement. That's not always possible, so the book offers pragmatic suggestions on tactical maneuvers and such. It gives good advice for situations involving conflict, which is why it has influenced leaders (including businesspeople) through the centuries:

- It's better to win before any shots are fired (i.e., through cunning and calculation.)
- Use deception.
- Don't let conflicts drag on.
- Understand the context to use it to your advantage.
- Keep your forces unified and disciplined.
- Adapt to changing conditions on the ground.
- Consider economics and logistics.
- Gather intelligence on the opposition.

The goal is winning through foresight rather than brute force — good advice!

The Tao Te Ching, written by Lao Tzu around the late 4th century BCE, is the central text in Taoism, a philosophy that aims for skillful action by aligning with the natural order of the universe — i.e., doing through "non-doing" and transcending distinctions (which aren't present in reality but layered onto experiences by humans.) Tao means Way, as in the Way to achieve such alignment. The book is a guide to living the Tao. (Living in Tao?) But as it makes clear from its very first lines, you can't really talk about it: the Tao precedes language. It's a practice — and the practice entails non-striving.

Audiovisual

Music: Gioia recommended the Beatles (The White Album, Sgt. Pepper's, and Abbey Road) and the Rolling Stones (Let It Bleed, Beggars Banquet, and Exile on Main Street.) I'd heard all three Rolling Stones albums before, but don't know them by heart (like I do with the Beatles.) So I revisited all three. Some songs sounded a bit cringe-y, especially after having heard "real" blues a few weeks ago. Of the three albums, Exile on Main Street sounds most authentic. (Perhaps because of the band members' altered states?) In any case, it sounded most "in the Tao" to me — that is, as though the musicians surrendered to the experience of making this music. It's about as rock 'n roll as it gets.

Arts: Gioia recommended looking at Chinese architecture. As usual, my first thought was to look for short documentaries or lectures on YouTube. I was surprised by how little there was. Instead, I read the webpage Gioia suggested.

Cinema: Since we headed again to China, I took in another classic Chinese film that had long been on my to-watch list: Wong Kar-wai's IN THE MOOD FOR LOVE. I found it more Confucian than Taoist, although its slow pacing, gentleness, focus on details, and passivity strike something of a Taoist mood.

Reflections

When reading the Tao Te Ching, I'm often reminded of this passage from the Gospel of Matthew:

No man can serve two masters: for either he will hate the one, and love the other; or else he will hold to the one, and despise the other. Ye cannot serve God and mammon. Therefore I say unto you, Take no thought for your life, what ye shall eat, or what ye shall drink; nor yet for your body, what ye shall put on.
Is not the life more than meat, and the body than raiment? Behold the fowls of the air: for they sow not, neither do they reap, nor gather into barns; yet your heavenly Father feedeth them. Are ye not much better than they? Which of you by taking thought can add one cubit unto his stature? And why take ye thought for raiment? Consider the lilies of the field, how they grow; they toil not, neither do they spin: And yet I say unto you, That even Solomon in all his glory was not arrayed like one of these. Wherefore, if God so clothe the grass of the field, which to day is, and to morrow is cast into the oven, shall he not much more clothe you, O ye of little faith? Therefore take no thought, saying, What shall we eat? or, What shall we drink? or, Wherewithal shall we be clothed? (For after all these things do the Gentiles seek:) for your heavenly Father knoweth that ye have need of all these things. But seek ye first the kingdom of God, and his righteousness; and all these things shall be added unto you. Take therefore no thought for the morrow: for the morrow shall take thought for the things of itself. Sufficient unto the day is the evil thereof.

The Tao Te Ching is older and from a different culture, but "Consider the lilies of the field, how they grow; they toil not, neither do they spin" has always struck me as very Taoistic: both texts emphasize non-striving and putting your trust in a higher order.

Even though it's older still, that spirit is also evident in The Art of War. It's not merely letting things happen, but aligning mindfully with the needs of the time. Sometimes we must fight. Best to do it quickly and efficiently. And better yet if the conflict can be settled before it begins.

Notes on Note-taking

This week, I started using ChatGPT's new o3 model. Its answers are a bit better than what I got with previous models, but there are downsides. For one thing, o3 tends to format answers as tables rather than lists. This works well if you use ChatGPT in a wide window, but is less useful on a mobile device or (as in my case) in a narrow window to the side. This is how I usually use ChatGPT on my Mac: in a narrow window. o3's responses often include tables that get cut off in this window. For another, replies take much longer as the AI does more "research" in the background. As a result, it feels less conversational than 4o — which changes how I interact with it. I'll play more with o3 for work, but for this use case, I'll revert to 4o.

Up Next

Gioia recommends Apuleius's The Golden Ass. I've never read this, and frankly feel wary about returning to the period of Roman decline. (Too close to home?) But I'll approach it with an open mind.

Again, there's a YouTube playlist for the videos I'm sharing here. I'm also sharing these posts via Substack if you'd like to subscribe and comment. See you next week!

14 hours ago
My approach to teaching electronics

Explaining the reasoning behind my series of articles on electronics -- and asking for your thoughts.

yesterday