As you might have read from my previous post on this topic, I have a pretty neat “cloud” gaming setup running: one powerful desktop PC, one virtual machine with a GPU attached to it, and client machines that games can be streamed to. To keep things simple (and because my current ISP is unbelievably bad), I have so far used streaming only over my local network. However, there were some pretty annoying issues with this setup:

- The AMD GPU encoder is bad. 720p streaming was the best I could do if I wanted reasonable performance.
- The Windows 10 VM would sometimes fail to start the host encoder, and the GPU drivers reported issues. This was fixed with a couple of reboots of the VM. I suspect this may be related to the notorious GPU reset issues that AMD cards are infamous for in the VFIO world, but I wasn’t interested in digging that deep into the topic.
- The stream would sometimes randomly crash or lag. This might have been down to the AMD GPU or Parsec...
over a year ago


More from ./techtipsy

About the time I trashed my mother's laptop

Around 2003, my mother had a laptop: the Compaq Armada 1592DT. It ran Windows Me, the worst Windows to ever exist, and had a whopping 96 MB of RAM and a 3 GB hard drive. My mother used it for important stuff, and I played games on it. Given the limitations of the 3 GB hard drive, this soon led to a conflict: there was no room to store any new games!

I did my best to make additional room by running the disk cleaner utility, disabling unnecessary Windows features and deleting some PDF catalogues that my mother had downloaded, but there was still a constant lack of space. Armed with a lack of knowledge about computers, I went further and found a tool that promised to make more room on the hard drive. I can’t remember what it was, but it had a nice graphical user interface where the space on the drive was represented as a pie chart. To my amazement, I could slide that pie chart to make it so that 90% of the drive was free space! I went full speed ahead with it. What followed was a crash, and upon rebooting I was presented with a black screen. Oops.

My mother ended up taking it to a repair shop for 1200 EEK, which was a lot of money at the time. The repair shop ended up installing Windows 98 SE on it, which felt like a downgrade at the time, but in retrospect it was an improvement over Windows Me.

I had no idea what I was doing back then, but I assume that the tool I was playing with was some sort of a partition manager that had no safeguards in place to avoid shrinking and reformatting operating system partitions. Or if it did, then it made ignoring the big warning signs way too easy. Still 100% user error on my part. If only I had known that reinstalling Windows was a relatively simple operation, but it took a solid 4-5 years until I did my first installation of Windows all by myself.

a week ago 13 votes
Fairphone Fairbuds XL review: admirable goals, awful product

I bought the Fairphone Fairbuds XL with my own money at a recent sale for 186.75 EUR, plus 15 EUR for shipping to Estonia. The normal price for these headphones is 239 EUR. This post is not sponsored.

I admire what Fairphone wants to achieve, even going as far as getting the Fairphone 5 as a replacement for my iPhone X. Having failed to repair my current headphones, I went ahead and decided to get the Fairphone Fairbuds XL, as they also advertise an active noise-cancelling feature, and I like the Fairphone brand. Disclaimer: this review is going to be entirely subjective and based on my opinions and experiences with other audio products in the past. I also have tinnitus.1 I consulted the rtings.com review before purchasing the product to get an idea about what to expect as a consumer.

The comparison headphones

The main point of comparison for this review is going to be the Sony WH-1000XM3, which are premium high-end wireless Bluetooth headphones with active noise-cancelling (before that feature broke). These headphones retailed at a higher price during 2020 (about 300-400 EUR), so they are technically a tier above the Fairbuds XL, but given that their successor, the WH-1000XM4, can be bought for 239 EUR new (and often about 200-ish EUR on sale!), it is a fair comparison in my view.

After I replaced the ear cushions on my Sony WH-1000XM3 headset, the active noise-cancelling feature started being flaky (popping and loud noises occurring with NC on). No amount of cleaning or calibrating fixed it, and even the authorized repair shop could not do anything about it. I diagnosed the issue to be with the internal noise-cancelling microphones and found that these failing is a very common issue for these headsets, even for newer versions. I am unable to compare the active noise-cancelling performance side-by-side, but I can say that the NC performance on the Sony WH-1000XM3 was simply excellent when it did work, no doubt about it.

The Fairphone shop experience

The first issue I had with the product was actually buying it. For some reason, the form would not accept my legal name, which has the letter “Õ” in it, a common vowel in Estonia. Knowing how poorly Javascript-based client-side validation can be built, I pulled a pro gamer move and copy-pasted my name into the form, which bypassed the faulty check altogether. A similar issue occurred with the address field, as we also have the letter “Ä” (and “Ö” and “Ü”, for that matter).

The name

I can understand why Fairphone went with the name “Fairbuds XL”; it kind of made sense in their audio product line, and Apple set a precedent with AirPods Max. However, there is such a big missed opportunity here: they could’ve called the product… Fairphones. Yes, it would cause some confusion with their other product line, which is the Fairphone, but at least I would find the name more amusing.

Packaging

The packaging for the headphones is quite similar to what you’d get with the Fairphone 5: lots of cardboard and seemingly no plastic or otherwise problematic materials. Aside from the headphones themselves, you also get a nice egg bag, meant to protect your headphones when travelling with them. It’s okay, but nothing special, and it won’t protect your headphones from physical damage should they fall or get thrown around in a backpack. The Sony headphones come with a solid hardcase, which has done a fantastic job of protecting the headphones over the last 4 years.
Longevity of a device depends both on repairability and durability, which is why a hard case would benefit the Fairbuds XL a lot.

Factory defect

My experience with the Fairbuds XL was off to a rocky start. I noticed that the USB-C cable that connects both sides of the headphones was inserted incorrectly. The headphones worked fine, but you could feel the flat USB-C cable being twisted inside the headband. The fix was to carefully push the headband back, disconnect the USB-C cable from the headphones, flip the cable around and reconnect it. Not a good first impression, but at least the fix was simple enough.

Fit and feel

The Fairbuds XL are not as comfortable as the reference headphones. The ear cushions and headrest are quite hard and not as soft as on the Sony WH-1000XM3. If you get the fit just right, then you probably won’t have issues with wearing these for a few hours at a time, but I found myself adjusting them often to stop them from hurting my ears and head even during a short test. The ear cups lack any kind of swiveling, which likely contributes to the comparatively poor fit. Our ears are angled ever-so-slightly forwards, and the Sony WH-1000XM3 feel so much better on the ears as a result of their swiveling aspect.

I also noticed that you can hear some components inside the headphones rattling when moving your head. This noise is very noticeable even during music playback, and you don’t need to move your head a lot to hear it. In my view, this is a serious defect in the product.

When the headphones are folded in, the USB-C cable gets bent in the process and gets forced against one of the ear cushions. I suspect that within months or years of use, either the cable will fail or the ear cushion will get a permanent imprint of the USB-C cable.

The sound

I’m not impressed with the sound that the Fairbuds XL produce. They are not in the same class as the Sony WH-1000XM3, with the default equalizer sounding incredibly bland. Most instruments and sounds are bland and not as clear; that’s the best I can describe it. The Fairbuds app can be used to tune the sound via the equalizer, and out of all the presets I’ve found “Boston” to be the most pleasant one to use. Unfortunately the UI does not show how the presets customize the values in the equalizer, which makes tweaking a preset all that much harder. Compared to the Sony WH-1000XM3, I miss the crispy sound and the all-encompassing bass, which can really bring out all the satisfying details. The fact that I had used the Sony headphones for almost 5 years at this point may also just mean that I had gotten used to how they sound.

Active noise-cancelling

The active noise-cancelling performance is nowhere near that of the Sony WH-1000XM3-s. The effect is very minor, and you’ll be hearing most of the surrounding sounds. Touching the active noise-cancelling microphones on the sides of the headphones will also make a loud sound inside the speaker, and walking around in a room will result in the headphones making wind noises. Because of this, I consider the active noise-cancelling functionality to be functionally broken.

Microphone quality

I used the Fairbuds XL in a work call, and based on feedback from other attendees, the microphone quality over Bluetooth can be categorized as barely passable, getting a solid 2 points out of 5. To be fair, Bluetooth microphone quality is also not great on the Sony WH-1000XM3-s, but compared to the Fairphone Fairbuds XL, they are still subjectively better.
Fairbuds app

The Fairbuds app is very simple, and you’d mainly want to use it for setting the equalizer settings and upgrading the firmware. The rest of the functionality seems to be a bunch of links to Fairphone articles and guides.

The first time I installed the app, it told me that firmware upgrade version V90 was available. During the first attempt, the progress bar stopped. On the second attempt it almost reached the end, and did not complain about a firmware upgrade being available after that. The third attempt came after I had reinstalled the app. And there it was, the version V90 update, again. This time it got stuck at 1%. I’m probably still on the older version of the firmware, but I honestly can’t tell.

Bluetooth multi-device connecting

This is a feature that I didn’t know I needed in my life. With the reference Sony WH-1000XM3-s, whenever I wanted to switch where I listen to music from, I had to disconnect from my phone and then reconnect on the desktop, which was an annoying and manual process. With the Fairbuds XL, I can connect the headphones to both my laptop and phone and play media wherever; the headphones will switch to whichever device I’m actually using! This, too, has its quirks, and there might be a small delay when playing media on the other device, but I’ve grown so accustomed to using this feature that I can’t imagine going back to anything else. This feature is not unique to the Fairbuds XL, as other modern wireless headphones are also likely to boast it, but this is the first time I’ve had the opportunity to try it out myself. It’s a tremendous quality of life improvement for me.

However, this, too, is not perfect. If I have the headphones connected to my phone and laptop, and I change to headset mode on the laptop for a meeting, then the playback on the phone will be butchered until I completely disconnect the headphones from the laptop. This seems like a firmware issue to me.

The controls

The Fairbuds XL have one button and one joystick. The button controls the active noise-cancelling settings (NC on, Ambient sound, NC off), plus the Bluetooth pairing mode. The joystick is used to turn the device on, switch songs and control the volume, and likely some other settings that relate to accepting calls and the like.

Coming from the Sony WH-1000XM3, I have to say that I absolutely LOVE having physical buttons again! It’s so much easier to change the volume level, skip songs and start/stop playback with a physical button compared to the asinine touch surface solution that Sony has going on. The joystick is not perfect: skipping a song can be a little bit tricky due to how the joystick is positioned, and you can’t always get a good grip because your fingers hit the rest of the headphone assembly. That’s the only concern I have with it. If the joystick were a little bit concave and larger, some of these actions would be easier for those of us with modest/large thumbs.

The audio cue for skipping songs is a bit annoying and seemingly cannot be disabled. The sound effect resembles someone hitting a golf ball with a very poor driver. The ANC settings button is alright, but it’s not possible to quickly cycle between the three modes; you have to fully listen to the nice lady speaking before you can move on to the next setting. I wish that clicking the button in rapid succession would skip through the modes faster.
USB-C port functionality

I was curious to see if the Fairbuds XL worked as normal headphones if I just connected them to my PC using a USB-C cable. To my surprise, they did! The audio quality was not as good as over Bluetooth, and the volume controls depended on which virtual device you select in your operating system. The Sony WH-1000XM3 do not work like this; their USB-C port is for charging only as far as I’ve tested, but they do have an actual 3.5mm port for wired use.

If you connect a charging cable while connected over Bluetooth, the Fairbuds XL will pause momentarily and then continue playback while charging the battery. This is incredibly handy for a wireless device, especially in situations where you have an important meeting coming up and you’re just about to run out of battery. The Sony WH-1000XM3 will simply power off when you connect a charger cable, rendering them unusable while charging.

Annoying issues

For some reason, whenever I charge my Fairbuds XL, they magically turn on again and I have to shut them off a second time. I’m never quite sure if I’ve managed to shut the headphones off. They play the jingle that indicates they’re powered off, but then I come back later and find that they’re powered on again.

Customer care experience

I was so unhappy with the product that I tried out the refunding process for the Fairphone Fairbuds XL. I ordered the Fairbuds XL on 2025-02-10 and received them on 2025-02-14, shipped to Estonia. According to Fairphone’s own materials, I can return the headphones without any questions asked, assuming that my use of them matches what can be done at a physical store:

For Fairphone Products, including gift cards, you purchased on the Fairphone Webshop, you have a legal right to change your mind within 14 days and receive a refund amounting to the purchase price of the products and the costs of delivery and return. You are entitled to cancel your purchase within fourteen (14) days from the day the products were delivered to you, without explanation and without any penalties. In the case of a Cool-off, Fairphone may reduce the refund of the purchase price (including delivery costs) to reflect any reduction in the value of the Products, if this has been caused by your handling them in a way which would not normally be permitted in a shop. This means You are entitled to turn on and inspect Your purchased device to familiarise yourself with its properties and ensure that it is working correctly – comparable to the conditions that are permitted within a shop.

I followed their instructions and filed a support ticket on 2025-02-16. On 2025-02-25, I had not yet received any contact from Fairphone and I asked them again under the same ticket. On 2025-03-07, I received an automated message that apologized for the delay and asked me not to make any additional tickets on the matter. I’m still waiting for an update on the support ticket over a month later, while the headphones sit in the original packaging. Based on the experiences of others in the Fairphone community forum, it seems that unacceptably large delays in customer service are the norm for Fairphone.

Fairphone, if you want to succeed as a company, you need to make sure that the one part of your company that’s directly interfacing with your actual paying customers is appropriately staffed and resourced. A bad customer support experience can turn off a brand evangelist overnight.
Closing thoughts

I want Fairphone to succeed in their mission, but products like these do not further the cause. The feature set of the Fairbuds XL seems competent, and I’m willing to give a pass on a few minor issues if the overall experience is good, but the unimpressive sound profile, broken active noise-cancelling mode, multiple quality issues and poor customer service mean that I can’t in good conscience recommend the Fairphone Fairbuds XL, not even on sale. Perhaps fewer resources should be spent on rebranding and more on engineering good products.

Remember dubstep being a thing? Yeah, so do I. That, plus a little bit of mandatory military service, can do a lot of damage to hearing. ↩︎

a month ago 19 votes
I yearn for the perfect home server

I’ve changed my home server setup a lot over the past decade, mainly because I keep changing the goals all the time. I’ve now realized why that keeps happening: I want the perfect home server.

What is the perfect home server? I’d phrase it like this: the perfect home server uses very little power, offers plenty of affordable storage and provides a lot of performance when it’s actually being relied upon. In my case, low power means less than 5 W while idling, storage means 10+ TB of redundant storage for data resilience and integrity concerns, and performance means about 4 modern CPU cores’ worth (low-to-midrange desktop CPU performance).

I seem to only ever get one of these, or two at most. Low power usage? Your performance will likely suffer, and you can’t run too many storage drives. You can run SSD-s, but they are not affordable if you need higher capacities. Lots of storage? Well, there goes the low power consumption goal, especially if you run 3.5" hard drives. Lots of performance? Lots of power consumed! There’s just something that annoys me whenever I do things on my home server and have to wait longer than I should, and yet I’m bothered when my monitoring tells me that my home server is using 50+ watts.1

I keep an eye out for developments in the self-hosting and home server spaces with the hope that I’ll one day stumble upon the holy grail, that one server that fits all my needs. I’ve gotten close, but no matter what setup I have, there’s always something that keeps bothering me. I’ve seen a few attempts at the perfect home server, covered by various tech reviewers, but they always have at least one critical flaw. Sometimes the whole package is actually great, the functionality rocks, and then you find that the hardware contains prototype-level solutions that result in the power consumption ballooning to over 30 W. Or the price is over 1000 USD/EUR, not including the drives. Or it’s only available in certain markets and the shipping and import duties destroy its value proposition.

There is no affordable platform out there that provides great performance, flexibility and storage space, all while being quiet and using very little power.2

- Desktop PC-s repurposed as home servers can provide room for a lot of storage, and they are by design very flexible, but the trade-off is the higher power consumption of the setup.
- Single board computers use very little power, but they can’t provide a lot of performance, and connecting storage to them gets tricky and is overall limited. They can also get surprisingly expensive.
- NAS boxes provide a lot of storage space and are generally low power if you exclude the power consumption of hard drives, but the cheaper ones are not that performant, and the performant ones cost almost as much as a high-end PC.
- Laptops can be used as home servers; they are quite efficient and performant, but they lack the flexibility and storage options of desktop PC-s and NAS boxes. You can slap a USB-based DAS onto one to add storage, but I’ve had poor experiences with these under high load, meaning that this approach can’t be relied on if you care about your data and server stability.
- Then there’s the option of buying used versions of all of the above. Great bang for buck, but you’re likely taking a hit on the power efficiency part due to the simple fact that technology keeps evolving and getting more efficient.

I’m still hopeful that one day a device will exist that ticks all the boxes while also being priced affordably, but I’m afraid that it’s just a pipe dream.
There are builds out there that fill in almost every need, but the parts list is very specific and the bulk of the power consumption wins come from using SSD-s instead of hard drives, which makes it less affordable. In the meantime I guess I’ll keep rocking my ThinkPad-as-a-server approach and praying that the USB-attached storage does not cause major issues.

Perhaps it’s an undiagnosed medical condition. Homeserveritis? ↩︎

If there is one, then let me know, you can find the contact details below! ↩︎

a month ago 26 votes
Turns out that I'm a 'prolific open-source influencer' now

Yes, you read that right. I’m a prolific open-source influencer now.

Some years ago I set up a Google Alert with my name, for fun. Who knows what it might show one day? On the 7th of February, it fired an alert. Turns out that my thoughts on Ubuntu were somewhat popular, and they ended up being ingested by an AI slop generator over at Fudzilla, with no links back to the source or anything.1 Not only that, but their choice of spicy autocomplete confabulation bot, pardon, large language model completely butchered the article, leaving out critical information, which led to one reader gloating about Windows.

Not linking back to the original source? Not a good start. Misrepresenting my work? Insulting. Giving a Windows user the opportunity to boast about how happy they are with using it? Absolutely unacceptable.

Here’s the full article in case they ever delete their poor excuse of a “news” “article”.

Two can play at that game. ↩︎

2 months ago 26 votes
IODD ST400 review: great idea, good product, terrible firmware

I’ve written about abusing USB storage devices in the past, with a passing mention that I’m too cheap to buy an IODD device. Then I bought one.

I’ve always liked the promise of tools like Ventoy: you only need to carry the one storage device that boots anything you want. Unfortunately I still can’t trust Ventoy, so I’m forced to look elsewhere.

The hardware

I decided to get the IODD ST400 for 122 EUR (about 124 USD) off of Amazon Germany, since it was for some reason cheaper than getting it from iodd.shop directly. SATA SSD-s are cheap and plentiful, so the ST400 made the most sense to me. The device came with one USB cable, with type A and type C ends. The device itself has a USB type C port, which I like a lot. The buttons are functional and clicky, but incredibly loud.

Setting it up

Before you get started with this device, I highly recommend glancing over the official documentation. The text is poorly translated in some parts, but overall it gets the job done.

Inserting the SSD was reasonably simple; it slotted in well and would not move around after assembly. Getting the back cover off was tricky, but I’d rather have that than a loose back cover that comes off when it shouldn’t.

The most important step is the filesystem choice. You can choose between NTFS, FAT32 or exFAT. Due to the maximum file size limitation of 4 GB on FAT32, you will probably want to go with either NTFS or exFAT. Once you have a filesystem on the SSD, you can start copying various installers and tools onto it and mount them!

The interface is unintuitive. I had to keep the manual close when testing mine, but eventually I figured out what I can and cannot do.

Device emulation

Whenever you connect the IODD device to a powered-on PC, it will present itself as multiple devices:

- normal hard drive: the whole IODD filesystem is visible here, and you can also store other files and backups if you want to
- optical media drive: this is where your installation media (ISO files) will end up, read only
- virtual drives (up to 3 at a time): VHD files that represent virtual hard drives, but are seen as actual storage devices on the PC

This combination of devices is incredibly handy. For example, you can boot an actual Fedora Linux installation as one of the virtual drives and make a backup of the files on the PC right to the IODD storage itself. S.M.A.R.T. information also seems to be passed through properly for the disk that’s inside.

Tech tip: to automatically mount your current selection of virtual drives and ISO file at boot, hold down the “9” button for about 3 seconds. The button also has an exit logo on it. Without this step, booting an ISO or virtual drive becomes tricky, as you’ll have to spam the “select boot drive” key on the PC while navigating the menus on the IODD device to mount the ISO.

The performance is okay. The drive speeds are limited to SATA II speeds, which means that your read/write speeds cap out at about 250 MB/s. Latency will depend a lot on the drive, but it stays mostly in the sub-millisecond range on my SSD. The GNOME Disks benchmark does show a notable chunk of reads having a 5 millisecond latency. The drive does not seem to exhibit any throttling under sustained loads, so at least it’s better than a normal USB stick. The speeds seem to be the same for all emulated devices, with latencies and speeds being within spitting distance of each other.
The firmware sucks, actually

The IODD ST400 is a great idea that’s been turned into a good product, but the firmware is terrible enough to almost make me regret the purchase.

The choice of filesystems available (FAT32, NTFS, exFAT) is very Windows-centric, but at least it comes with the upside of being supported on most popular platforms, including Linux and Mac. Not great, not terrible.

The folder structure has some odd limitations. For example, you can only have 32 items within a folder; if you have more than that, you have to use nested folders. This sounds like a hard cap written somewhere within the device firmware itself. I’m unlikely to hit such limits myself, and it doesn’t seem to affect the actual storage; the device itself just isn’t able to handle that many files within a directory listing.

The most annoying issue has turned out to be defragmentation. In 2025! It’s a known limitation that’s handily documented in the IODD documentation. On Windows, you can fix it by using a disk defragmentation tool, which is really not recommended on an SSD. On Linux, I have not yet found a way to do that, so I’ve resorted to simply making a backup of the contents of the drive, formatting the disk, and copying it all back again. This is a frustrating issue that only comes up when you try to use a virtual hard drive. It would absolutely suck to hit this error while in the field.

The way virtual drives are handled is also less than ideal. You can only use fixed VHD files that are not sparse, which seems to again be a limitation of the firmware.

Tech tip: if you’re on Linux and want to convert a raw disk image (such as a disk copied with dd) to a VHD file, you can use a command like this one:

qemu-img convert -f raw -O vpc -o subformat=fixed,force_size source.img target.vhd

The firmware really is the worst part of this device. What I would love to see is a device like the IODD but with free and open source firmware. Ventoy has proven that there is a market for a solution that makes juggling installation media easy, but it can’t emulate hardware devices. An IODD-like device can.

Encryption and other features

I didn’t test those because I don’t really need those features myself; I really don’t need to protect my Linux installers from prying eyes.

Conclusion

The IODD ST400 is a good device with a proven market, but the firmware makes me refrain from outright recommending it to everyone, at least not at this price. If it were to cost something like 30-50 EUR/USD, I would not mind the firmware issues at all.

2 months ago 29 votes

More in technology

Greatest Hits

I’ve been blogging now for approximately 8,465 days since my first post on Movable Type. My colleague Dan Luu helped me compile some of the “greatest hits” from the archives of ma.tt; perhaps some posts will stir some memories for you as well: Where Did WordCamps Come From? (2023) A look back at how Foo …

21 hours ago 2 votes
Let's give PRO/VENIX a barely adequate, pre-C89 TCP/IP stack (featuring Slirp-CK)

Many years ago I bought TCP/IP Illustrated (what would now be called the first edition, prior to the 2011 update) for a hundred-odd bucks on sale, and it has now sat on my bookshelf, encased in its original shrinkwrap, for at least twenty years. It would be fun to put up the 4.4BSD data structures poster it came with, but that would require opening it. Fortunately, today we have many more excellent and comprehensive documents on the subject, and more importantly, we've recently brought back up an oddball platform that doesn't have networking either: our DEC Professional 380 running the System V-based PRO/VENIX V2.0, which you met a couple articles back. The DEC Professionals are a notoriously incompatible member of the PDP-11 family and, short of DECnet (DECNA) support in its unique Professional Operating System, there's officially no other way you can get one on a network — let alone the modern Internet. Are we going to let that stop us? (Pictured: a Crypto Ancienne proxy for TLS 1.3.) And, as we'll discuss, if you can get this thing on the network, you can get almost anything on the network! Easily portable and painfully verbose source code is included.

Recall from our lengthy history of DEC's early misadventures with personal computers that, in Digital's ill-advised plan to avoid the DEC Pros cannibalizing low-end sales from their categorical PDP-11 minicomputers, Digital's Small Systems Group deliberately made the DEC Professional series nearly totally incompatible despite the fact they used the same CPUs. In their initial roll-out strategy in 1982, the Pros (as well as their sibling systems, the Rainbow and the DECmate II) were only supposed to be mere desktop office computers — the fact the Pros were PDP-11s internally was mostly treated as an implementation detail. The idea backfired spectacularly against the IBM PC when the Pros and their promised office software failed to arrive on time, and in 1984 DEC retooled around a new concept of explicitly selling the Pros as desktop PDP-11s. This required porting the operating systems that PDP-11 minis typically ran: RSX-11M Plus was already there as the low-level layer of the Professional Operating System (P/OS), and DEC internally ported RT-11 (as PRO/RT-11) and COS. PDP-11s were also famous for running Unix, and so DEC needed a Unix for the Pro as well, though eventually only one official option was ever available: a port of VenturCom's Venix, based on V7 Unix and later System V Release 2.0, called PRO/VENIX.

After the last article, I had the distinct pleasure of being contacted by Paul Kleppner, the company's first paid employee in 1981, who was part of the group at VenturCom that did the Pro port and stayed at the company until 1988. Venix was originally developed from V6 Unix on the PDP-11/23, incorporating the real-time kernel extensions (such as semaphores and asynchronous I/O) of Myron Zimmerman, then a postdoc in physics at MIT; Kleppner's father was the professor of the lab Zimmerman worked in. Zimmerman founded VenturCom in 1981 to capitalize on the emerging Unix market, becoming one of the earliest commercial Unix licensees. Venix-11 was subsequently based on the later V7 Unix, as was Venix/86, which was the first Unix on the IBM PC in January 1983 and was ported to the DEC Rainbow as Venix/86R. In addition to its real-time extensions and enhanced segmentation capability, critical for memory management in smaller 16-bit address spaces, it also included a full desktop graphics package.
Notably, DEC themselves were also a Unix licensee through their Unix Engineering Group and already had an enhanced V7 Unix of their own running on the PDP-11, branded initially as V7M. Subsequently the UEG developed a port of 4.2BSD with some System V components for the VAX and planned to release it as Ultrix-32, simultaneously retconning V7M as Ultrix-11 even though it had little in common with the VAX release. Paul recalls that DEC did attempt a port of Ultrix-11 to the Pro 350 themselves but ran into intractable performance problems. By then the clock was ticking on the Pro relaunch, and the issues with Ultrix-11 likely prompted DEC to look for alternatives.

Crucially, Zimmerman had managed to upgrade Venix-11's kernel while still keeping it small, a vital aspect on his 11/23, which lacked split instruction and data addressing and would otherwise have had to page a larger kernel in and out. Moreover, the 11/23 used an F-11 CPU — the same CPU as the original Professional 350 and 325. DEC quickly commissioned VenturCom to port their own system over to the Pro, which Paul says was a real win for VenturCom, and the first release came out in July 1984, complete with its real-time features intact and graphics support for the Pro's bitmapped screen. It was upgraded ("PRO/VENIX Rev 2.0") in October 1984, adding support for the new top-of-the-line DEC Professional 380, and then switched to System V (SVR2) in July 1985 with PRO/VENIX V2.0. (For its part, Ultrix-11 was released as such in 1984 as well, but never for the Pro series.) Keep that kernel version history in mind for when we get to oddiments of the C compiler.

As for networking, though, with the exception of UUCP over serial, none of these early versions of Venix on either the PDP-11 or 8086 supported any kind of network connectivity out of the box — officially the only Pro operating system to support its Ethernet upgrade option was P/OS 2.0. Although all Pros have a 15-pin AUI network port, it isn't activated until an Ethernet CTI card is installed. (While Stan P. found mention of a third-party networking product called Fusion by Network Research Corporation which could run on PRO/VENIX, Paul's recollection is that this package ran into technical problems with kernel size during development. No examples of the PRO/VENIX version have so far been located and it may never have actually been released. You'll hear about it if a copy is found. The unofficial Pro 2.9BSD port also supports the network card, but that was always an under-the-table thing.)

Since we run Venix on our Pro, that means currently our only realistic option to get this on the 'Nets is also over a serial port: specifically the lower speed printer port, which we'll use for our serial IP implementation. PRO/VENIX supports using only the RS-423 port as a remote terminal, and because it's twice as fast, it's more convenient for logins and file exchange over Kermit (which also has no TCP/IP overhead). Using the printer port also provides us with a nice challenge: if our stack works acceptably well at 4800bps, it should do even better at higher speeds if we port it elsewhere. On the Pro, we connect to our upstream host using a BCC05 cable (in the middle of this photograph), which terminates in a regular 25-pin RS-232 on the other end.

Now for the software part. There are other small TCP/IP stacks, notably things like Adam Dunkels' lwIP and so on.
But even SVR2 Venix is by present standards an old Unix with a much less extensive libc and a more primitive C compiler — in a short while you'll see just how primitive — and relatively modern code like lwIP's would require a lot of porting. Ideally we'd like a very minimal, indeed barely adequate, stack that can do simple tasks and can be expressed in a fashion acceptable to a now antiquated compiler. Once we've written it, it would be nice if it were also easily portable to other very limited systems, even by directly translating it to assembly language if necessary.

What we want this barebones stack to accomplish will inform its design. Do we want to run it as a server? That would mean keeping the software and the hardware up 24-7 to make such a use case meaningful. The Ethernet option was reportedly competent at server tasks, but Ethernet has more bandwidth, and that card also has additional on-board hardware. Let's face the cold reality: as a server, we'd find interacting with it over the serial port unsatisfactory at best, and we'd use up a lot of power and MTBF keeping it on more than we'd like to. Therefore, we really should optimize for the client case, which means we also only need to run the client when we're performing a network task.

Similarly, this is effectively a single-user machine: with no remote login capacity, like, I dunno, a C64, the person on the console gets it all. Therefore, we really should optimize for the single user case, which means we can simplify our code substantially by merely dealing with sockets sequentially, one at a time, without having to worry about routing packets we get on the serial port to other tasks or multiplexing them. Doing so would require extra work for dual-socket protocols like FTP, but we're already going to use directly-attached Kermit for that, and if we really want file transfer over TCP/IP there are other choices. (On a larger antique system with multiple serial ports, we could consider a setup where each user uses a separate outgoing serial port as their own link, which would also work under this scheme.) Some of you may find this conflicts with your notion of what a "stack" should provide, but I also argue that the breadth of a full-service driver would be wasted on a limited configuration like this and be unnecessarily more complex to write and test. Worse, in many cases, is better, and I assert this particular case is one of them.

Keeping the above in mind, what are appropriate client tasks for a microcomputer from 1984, now over 40 years old — even a fairly powerful one by the standards of the time — to do over a slow TCP/IP link? Fetching documents or files over simple protocols is one obvious option (Crypto Ancienne's carl can serve as an HTTP-to-HTTPS proxy to handle the TLS part, if necessary). We could use protocols like these to download and/or view files from systems that aren't directly connected, or to send and receive status information. One task that is also likely common is an interactive terminal connection (e.g., Telnet, rlogin) to another host. However, as a client, this particular deployment is still likely to hit the same sorts of latency problems for the same reasons we would experience connecting to it as a server. The other tasks here are not highly sensitive to latency, require only a single "connection" and no multiplexing, and are simple protocols which are easy to implement. Let's call this feature set our minimum viable product.

Because we're writing only for a couple of specific use cases, and to make them even more explicit and easy to translate, we're going to take the unusual approach of having each of these clients handle their own raw packets in a bytewise manner.
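To make that concrete, here is a minimal sketch of the style being described (my illustration, not the actual BASS source; the helper names get16 and cksum are made up). Values are assembled from individual chars with & 0xff masks so that neither endianness nor a signed plain char can bite us, and the standard Internet checksum (RFC 1071) is computed the same bytewise way. Written as C89 for a modern host; the real Venix compiler would want K&R-style definitions.

    #include <stdio.h>

    /* Read a 16-bit network-order value from a packet buffer, masking
       each char because plain char may be signed (as on the PDP-11). */
    int get16(char *p)
    {
        return ((p[0] & 0xff) << 8) | (p[1] & 0xff);
    }

    /* RFC 1071 Internet checksum over len bytes, done bytewise. */
    unsigned int cksum(char *buf, int len)
    {
        long sum = 0;
        int i = 0;

        while (len > 1) {
            sum += ((long)(buf[i] & 0xff) << 8) | (buf[i + 1] & 0xff);
            i += 2;
            len -= 2;
        }
        if (len > 0)                  /* odd trailing byte */
            sum += (long)(buf[i] & 0xff) << 8;
        while (sum >> 16)             /* fold the carries back in */
            sum = (sum & 0xffffL) + (sum >> 16);
        return (unsigned int)(~sum & 0xffff);
    }

    int main(void)
    {
        char hdr[4];
        hdr[0] = 0x45; hdr[1] = 0x00; hdr[2] = 0x00; hdr[3] = 0x54;
        printf("first word: %04x, checksum: %04x\n",
               get16(hdr), cksum(hdr, 4));
        return 0;
    }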
For the actual serial link we're going to go even more barebones and use old-school RFC 1055 SLIP instead of PPP (uncompressed, too, not even Van Jacobson CSLIP). This is trivial to debug and straightforward to write, and if we do so in a relatively encapsulated fashion, we could consider swapping in CSLIP or PPP later on. A couple of utility functions handle the IP checksum algorithm and reading and writing the serial port, and DNS and some aspects of TCP also get their own utility subroutines, but otherwise all of the programs we will create will read and write their own network datagrams, using the SLIP code to send and receive over the wire.

The C we will write will also be intentionally very constrained, using bytewise operations, assuming nothing about endianness, and using as little of the C standard library as possible. For types, you only need some sort of 32-bit long, which need not be native, an int of at least 16 bits, and a char type — which can be signed, and in fact has to be to run on earlier Venices (read on). You can run the entirety of the code with just malloc/free, read/write/open/close, strlen/strcat, sleep, rand/srand and time for the srand seed (and fprintf for printing debugging information, if desired). On a system with little or no operating system support, almost all of these primitive library functions are easy to write or simulate, and we won't even assume we're capable of non-blocking reads, despite the fact Venix can do so. After all, from that which little is demanded, even less is expected.

Historically, a serial connection to the Internet was made with a tool like slattach, which effectively makes a serial port directly into a network interface. Such an arrangement would be the most flexible approach from the user's perspective, because you necessarily have a fixed, bindable external address, but obviously such a scheme didn't scale over time. With the proliferation of dialup Unix shell accounts in the late 1980s and early 1990s, closed-source tools like 1993's The Internet Adapter ("TIA") could provide the SLIP and later PPP link just by running them from a shell prompt. Because they synthesize artificial local IP addresses, a sort of NAT before the concept explicitly existed, the architecture of such tools prevented directly creating listening sockets — though for some situations this could be considered more of a feature than a bug. Any needed external ports could be proxied by the software anyway, and later network clients tended not to require them, so for most tasks it was more than sufficient.

Closed-source and proprietary SLIP/PPP-over-shell solutions like TIA were eventually displaced by open source alternatives, most notably SLiRP. SLiRP (hereafter Slirp so I don't gouge my eyes out) emerged in 1995 and used a similar architecture to TIA, handing out virtual addresses on a synthetic network and bridging that network to the Internet through the host system. It rapidly became the SLIP/PPP shell solution of choice, leading to its outright ban by some shell ISPs who claimed it violated their terms of service. As direct SLIP/PPP dialup became more common than shell accounts, during which time yours truly upgraded to a 56K Mac modem I still have around here somewhere, Slirp eventually became most useful for connecting small devices via their serial ports (PDAs and mobile phones especially, but really anything; subsets of Slirp are still used in emulators today like QEMU for a similar purpose) to a LAN. By a shocking and completely contrived coincidence, that's exactly what we'll be doing!
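Before getting into Slirp's state of repair, here is a minimal sketch of the RFC 1055 framing we'll be using on the wire (my illustration, not the project's actual slip.c; serial_put is a stand-in for whatever routine writes one byte to the port). Every datagram is terminated with the END byte, and any occurrence of END or ESC inside the payload is replaced with a two-byte escape sequence:

    #include <stdio.h>

    #define SLIP_END     0xc0   /* frame terminator */
    #define SLIP_ESC     0xdb   /* escape introducer */
    #define SLIP_ESC_END 0xdc   /* ESC + this = literal 0xc0 */
    #define SLIP_ESC_ESC 0xdd   /* ESC + this = literal 0xdb */

    static void serial_put(int c)   /* hypothetical byte writer */
    {
        printf("%02x ", c & 0xff);
    }

    /* Send len payload bytes as one SLIP frame. */
    void slip_send(char *buf, int len)
    {
        int i, c;

        for (i = 0; i < len; i++) {
            c = buf[i] & 0xff;      /* mask: plain char may be signed */
            if (c == SLIP_END) {
                serial_put(SLIP_ESC);
                serial_put(SLIP_ESC_END);
            } else if (c == SLIP_ESC) {
                serial_put(SLIP_ESC);
                serial_put(SLIP_ESC_ESC);
            } else {
                serial_put(c);
            }
        }
        serial_put(SLIP_END);       /* terminate the datagram */
    }

    int main(void)
    {
        char pkt[3];
        pkt[0] = 0x45; pkt[1] = (char)0xc0; pkt[2] = 0x10;
        slip_send(pkt, 3);          /* prints: 45 db dc 10 c0 */
        putchar('\n');
        return 0;
    }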
Slirp has not been officially maintained since 2006. There is no package in Fedora, which is my usual desktop Linux, and the one in Debian reportedly has issues. A stack of patch sets circulated thereafter, but the planned 1.1 release never happened, and other crippling bugs remain, some of which were addressed in other patches that don't seem to have made it into any release, source or otherwise. If you tried to build Slirp from source on a modern system and it just immediately exits, you got bit. I have incorporated those patches, plus a couple of my own covering port naming and the configure script and some additional fixes, into an unofficial "Slirp-CK" which is on Github. It builds the same way as prior versions and is tested on Fedora Linux. I'm working on getting it functional on current macOS also.

Next, I wrote up our four basic functional clients: ping, DNS lookup, NTP client (it doesn't set the clock, just shows you the stratum, refid and time, which you can use for your own purposes), and TCP client. The TCP client accepts strings up to a defined maximum length, opens the connection, sends those strings (optionally separated by CRLF), and then reads the reply until the connection closes. This all seemed to work great on the Linux box, which you yourself can play with as a toy stack (directions at the end). Unfortunately, I then pushed it over to the Pro with Kermit, and the compiler immediately started complaining.

SLIP is a very thin layer on IP packets. There are exactly four metabytes, which I created preprocessor defines for. A SLIP packet ends with SLIP_END, or hex $c0. Where this must occur within a packet, it is replaced by a two-byte sequence for unambiguity, SLIP_ESC SLIP_ESC_END, or hex $db $dc, and where the escape byte must occur within a packet, it gets a different two-byte sequence, SLIP_ESC SLIP_ESC_ESC, or hex $db $dd. Although I initially set out to use defines and symbols everywhere instead of naked bytes, and wrote slip.c on that basis, I eventually settled on raw bytes afterwards, using copious comments so it was clear what was intended to be sent. That probably saved me a lot of work renaming everything, because I dimly recalled that early C compilers, including System V, limit their identifiers to eight characters (the so-called "Ritchie limit"). At this point I probably should have simply removed the defines entirely for consistency with their absence elsewhere, but I went ahead and trimmed them down to more opaque, pithy identifiers.

That wasn't the only problem, though. I originally had two functions in slip.c, slip_start and slip_stop, and the compiler didn't like that either, despite each appearing to have a unique eight-character prefix. That's because their symbols in the object file are actually prepended with various metacharacters like _ and ~, so effectively you only get seven characters in function identifiers, an issue the error message fails to explain clearly.
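To see why, consider this hedged example (hypothetical, not the real slip.c source). Both names below look distinct within eight characters, but once the compiler prepends _ to each symbol, only seven characters of the original names survive, and both truncate to the same symbol:

    /* An old linker that keeps eight significant characters would see
       _slip_start and _slip_stop both as "_slip_st" and complain about
       a duplicate symbol; a modern toolchain compiles this without fuss. */
    int slip_start(void) { return 1; }
    int slip_stop(void)  { return 0; }

    int main(void)
    {
        return slip_start() + slip_stop();
    }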
The next problem: there's no unsigned char, at least not in PRO/VENIX Rev. 2.0, which I want to support because it's more common, and presumably not in the original versions of PRO/VENIX and Venix-11 either. (This type does exist in PRO/VENIX V2.0, but that's because it's System V and has a later C compiler.) In fact, the unsigned keyword didn't exist at all in the earliest C compilers, and even when it did, it couldn't be applied to every basic type. Although unsigned char was introduced in V7 Unix and is documented as legal in the PRO/VENIX manual, and it does exist in Venix/86 2.1, which is also a V7 Unix derivative, the PDP-11 and 8086 C compilers have different lineages, and Venix's V7 PDP-11 compiler definitely doesn't support it. I suspect this may not have been intended, because unsigned int works (unsigned long would be pointless on this architecture, and indeed correctly generates Misplaced 'long' on both versions of PRO/VENIX). Regardless of why, however, the plain char type on the PDP-11 is signed, and for compatibility reasons here we'll have no choice but to use it.

Recall that when C89 was being codified, plain char was left as an ambiguous type, since some platforms (notably PDP-11 and VAX) made it signed by default and others made it unsigned, and C89 was more about codifying existing practice than establishing new ones. On a modern 64-bit platform where plain char is unsigned (e.g., my POWER9 workstation), a test program behaves one way; change the type explicitly to signed char and it behaves differently. Accounting for different sizes of int, the signed behaviour seems similar on PRO/VENIX V2.0 (again, which is System V), but the exact same program on PRO/VENIX Rev. 2.0 behaves a bit differently still. The differences in int size we expect, but there are other kinds of weird stuff going on here. The PRO/VENIX manual lists all the various permutations of type conversions and what gets turned into what where, but since the manual is already wrong about unsigned char, I don't think we can trust the documentation for this part either. Our best bet is to move values into int and mask off any propagated sign bits before doing comparisons or math, which is agonizing, but reliable. That means throwing around a lot of seemingly superfluous & 0xff to make sure we don't get negative numbers where we don't want them.

Once I got it built, however, there were lots of bugs. Many were because it turns out the compiler isn't too good with 32-bit long, which is not a native type on the 16-bit PDP-11. Part of the NTP client worked on my regular Linux desktop but didn't work in Venix. The first problem is that the intermediate shifts are too large and overshoot, even though they should be in range for a long: on the POWER9 (accounting for the different semantics of %lx) the value comes out as expected, but on Venix, the second shift blows out the value. We can get an idea of why from the generated assembly in the adb debugger (here from PRO/VENIX V2.0, since I could cut and paste from the Kermit session). (Parenthetical notes: csav is a small subroutine that pushes volatiles r2 through r4 on the stack and turns r5 into the frame pointer; the corresponding cret unwinds this. The initial branch in this main is used to reserve additional stack space, but is often practically a no-op.) The first shift is at ~main+024. Remember the values are octal, so 010 == 8. r0 is 16 bits wide — no 32-bit registers — so an eight-bit shift is fine. When we get to the second shift, however, it's the same instruction on just one register (030 == 24) and the overflow is never checked. In fact, the compiler never shifts the second part of the long at all, so the result is zero. The second problem in this example is that the compiler never treats the constant as a long, even though statically there's no way it can fit in a 16-bit int.
To get around those two gotchas on both Venices, I rewrote it using a second variable so that no intermediate shift overshoots. An alternative to a second variable is to explicitly mark the epoch constant itself as long, e.g., by casting it, which also works.

Here's another example for your entertainment. At least some sort of pseudo-random number generator is crucial, especially for TCP when selecting the pseudo-source port and initial sequence numbers, or otherwise Slirp seemed to get very confused because we would "reuse" things a lot. Unfortunately, the obvious typical idiom to seed it like srand(time(NULL)) doesn't work: srand() expects a 16-bit int but time(NULL) returns a 32-bit long, and it turns out the compiler only passes the 16 most significant bits of the time — i.e., the ones least likely to change — to srand(). The disassembly proves it (contents trimmed for display here; since this is a static binary, we can see everything we're calling). At the time we call the glue code for time from main, the value under the stack pointer (i.e., r6) is cleared immediately beforehand, since we're passing NULL (at ~main+06). We then invoke the system call, which per the Venix manual for time(2) uses two registers for the 32-bit result, namely r0 (high bits) and r1 (low bits). We passed a null pointer, so the values remain in those registers and aren't written anywhere (branch at _time+014). When we return to ~main+014, however, we only put r0 on the stack for srand (remember that r5 is being used as the frame pointer; see the disassembly I provided for csav) and r1 is completely ignored.

Why would this happen? It's because time(2) isn't declared anywhere in /usr/include or /usr/include/sys (the two C include directories), nor for that matter rand(3) or srand(3). This is true of both Rev. 2.0 and V2.0. Since the symbols are statically present in the standard library, linking will still work, but since the compiler doesn't know what it's supposed to be working with, it assumes int and fails to handle both halves of the long. One option is to manually declare everything ourselves. However, from the assembly at _time+016 we do know that if we pass a pointer, the entire long value will get placed there, so we can pass a pointer and take the lower bits ourselves. This gets the lower bits, and there is sufficient entropy for our purpose (though obviously not a cryptographically-secure PRNG). Interestingly, the Venix manual recommends using the time as the seed, but doesn't include any sample code.
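Since the original listings don't survive in this excerpt, here is a hedged reconstruction of both workarounds (my code, not the author's; the demo bytes and names are invented, and the demo assumes a 64-bit long on the host):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NTP_EPOCH 2208988800L   /* seconds from 1900 to 1970; the L
                                       suffix keeps the constant a long */

    int main(void)
    {
        char b[4];
        long v;
        time_t t;   /* on Venix this would be a plain long, declared by
                       hand, since time(2) has no header declaration */

        b[0] = (char)0xe9; b[1] = 0x51; b[2] = 0x5f; b[3] = 0x00;

        /* Build the 32-bit value one byte at a time. No single shift
           exceeds eight bits, so the old compiler never has to shift a
           long by 24 in one go, and each char is masked against sign
           propagation. */
        v = b[0] & 0xff;
        v = (v << 8) | (b[1] & 0xff);
        v = (v << 8) | (b[2] & 0xff);
        v = (v << 8) | (b[3] & 0xff);
        v -= NTP_EPOCH;             /* NTP seconds to Unix seconds */

        /* Seed the PRNG from the low bits of the time: pass a pointer
           so the whole 32-bit result is written to memory, then take
           the low bits ourselves instead of letting the compiler pass
           only the high half. */
        time(&t);
        srand((int)(t & 0x7fff));

        printf("unix time from ntp: %ld, sample: %d\n", v, rand());
        return 0;
    }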
At any rate, this was enough to make the pieces work for IP, ICMP and UDP, but TCP would bug out after just a handful of packets. As it happens, Venix has rather small serial buffers by modern standards: tty(7), based on the TIOCQCNT ioctl(2), appears to have just a 256-byte read buffer (sg_ispeed is only char-sized). If we don't make adjustments for this, we'll start losing framing when the buffer gets overrun, as in this extract from a test build with debugging dumps on and a maximum segment size/window of 512 bytes. Here, the bytes marked by dashes are the remote end and the bytes separated by dots are what the SLIP driver is scanning for framing and/or throwing away; you'll note there is obvious ASCII data in them. If we make the TCP MSS and window on our client side 256 bytes, there is still retransmission, but the connection is more reliable, since overrun occurs less often, and this seems to work better than a hard cap on the maximum transmission unit (e.g., "mtu 256") from SLiRP's side.

The only consequence of dropping the TCP MSS and window size is that the TCP client is currently hard-coded to just send one packet at the beginning (this aligns with how you'd do finger, HTTP/1.x, gopher, etc.), and that datagram uses the same size, which necessarily limits how much can be sent. If I did the extra work to split this over several datagrams, it obviously wouldn't be a problem anymore, but I'm lazy, and worse is better!

The connection can be made somewhat more reliable still by improving the SLIP driver's notion of framing. RFC 1055 only specifies that the SLIP end byte (i.e., $c0) occur at the end of a SLIP datagram, though it also notes that it was proposed very early on that it could also start datagrams — i.e., if two occur back to back, then it just looks like a zero-length or otherwise obviously invalid entity which can be trivially discarded. However, since there's no guarantee or requirement that the remote link will do this, we can't assume it either. We also can't just look for a $45 byte (i.e., IPv4 and a 20-byte header) because that's an ASCII character and appears frequently in text payloads. However, $45 followed by a valid DSCP/ECN byte is much less frequent, and most of the time this byte will be either $00, $08 or $10; we don't currently support ECN (maybe we should) and we wouldn't find other DSCP values meaningful anyway. The SLIP driver uses these sequences to find the start of a datagram and $c0 to end it. While that doesn't solve the overflow issue, it means the SLIP driver will be less likely to go out of framing when the buffer does overrun, and thus can better recover when the remote side retransmits. A sketch of this heuristic follows below.
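Here is that sketch (again my illustration rather than the actual SLIP driver): scan the incoming bytes for $45 followed by a plausible DSCP/ECN byte to mark the start of a datagram, and let $c0 end it.

    #include <stdio.h>

    /* $45 is 'E' in ASCII and common in text, but $45 followed by a
       plausible DSCP/ECN byte ($00, $08 or $10) is much rarer, so we
       use that pair to guess where an IPv4 header begins. */
    static int plausible_dscp(int c)
    {
        return c == 0x00 || c == 0x08 || c == 0x10;
    }

    /* Return the index where a datagram plausibly starts, or -1. */
    int find_frame_start(char *buf, int len)
    {
        int i;

        for (i = 0; i + 1 < len; i++) {
            if ((buf[i] & 0xff) == 0x45 && plausible_dscp(buf[i + 1] & 0xff))
                return i;
        }
        return -1;
    }

    int main(void)
    {
        char line[6];
        line[0] = 'E'; line[1] = 'x';   /* ASCII text: rejected */
        line[2] = (char)0xc0;           /* end of a previous frame */
        line[3] = 0x45; line[4] = 0x00; /* real IPv4 header: accepted */
        line[5] = 0x3c;
        printf("frame starts at index %d\n", find_frame_start(line, 6));
        return 0;
    }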
If Slirp doesn't want to respond and you're sure your serial port works (try testing both ends with Kermit?), you can recompile it with -DDEBUG (change this in the generated Makefile) and pass your intended debug level like -d 1 or -d 3. You'll get a file called slirp_debug with some agonizingly detailed information so you can see if it's actually getting the datagrams and/or liking the datagrams it gets.

For nslookup, ntp and minisock, the second address becomes your accessible recursive nameserver (or use -i to provide an IP). The DNS dump is also shown in debug mode, with slashes marking the DNS answer section. nslookup and ntp are otherwise self-explanatory; minisock takes a server name (or IP) and port, followed by optional strings. The strings, up to 255 characters total (in this version), are immediately sent with CR-LFs between them unless you specify -n; if you specify no strings, none are sent. It then waits on that port for data and exits when the socket closes. This is how we did the HTTP/1.0 requests in the screenshots.

On the DEC Pro, this has been tested on my trusty DEC Professional 380 running PRO/VENIX V2.0. It should compile and run on a 325 or 350, and on at least PRO/VENIX Rev. 2.0, though I don't have any hardware for this, and Xhomer's serial port emulation is not good enough for this purpose (so unfortunately you'll need a real DEC Pro until I or Tarek get around to fixing it).

The easiest way to get it over there is Kermit. Assuming you have this already, connect your host and the Pro on the "real" serial port at 9600bps. Make sure both sides are set to binary and just push all the files over (except the Markdown documentation, unless you really want it), and then do a make -f Makefile.venix (it may have been renamed to makefile.venix; adjust accordingly).

Establishing the link is as simple as connecting your server's serial port to the other end of the BCC05 or equivalent from the Pro and starting Slirp to talk to that port (on my system it's even the same port, so the same command line suffices). If you experience issues with the connection, the easiest fix is to just bounce Slirp: because there are no timeouts, there are also no retransmits. I don't know if this is hitting bugs in Slirp or in my code, though it's probably the latter. Nevertheless, I've been able to run stuff most of the day without issue. It's nice to have a simple network option and the personal satisfaction of having written it myself.

There are many acknowledged deficiencies, mostly because I assume little about the system itself and tried to keep everything very simplistic. There are no timeouts and thus no retransmits, and if you break the TCP connection in the middle there will be no proper teardown. Also, because I used Slirp for the other side (as many others will), and because my internal network is full of machines that have no idea what IPv6 is, there is no IPv6 support. I agree there should be, and SLIP itself doesn't care whether it carries IPv4 or IPv6, but for now that would require patching Slirp, which is a job I just don't feel up to at the moment. I'd also like to support at least CSLIP in the future.

In the meantime, if you want to try this on other operating systems, the system-dependent portions are in compat.h and slip.c, with a small amount in ntp.c for handling time values. You will likely need to change where your serial ports are, the speed they run at, and how the port is made "raw" in slip.c.
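As a concrete illustration of the minisock calling convention described above, a basic HTTP/1.0 request might look something like the line below. The host name is arbitrary, I'm guessing at the shell quoting, and depending on how the trailing CR-LF is handled you may need a second empty string to produce the blank line that ends the request:

    minisock example.com 80 "GET / HTTP/1.0" ""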
On the porting note, you should also add any extra #includes to compat.h that your system requires. I'd love to hear about it running in other places. Slirp-CK remains under the original modified Slirp license and BASS is under the BSD 2-clause license. You can get Slirp-CK and BASS on Github.

15 hours ago 2 votes
Transactions are a protocol

Transactions are not an intrinsic part of a storage system. Any storage system can be made transactional: Redis, S3, the filesystem, etc. Delta Lake and Orleans demonstrated techniques to make S3 (or cloud storage in general) transactional. Epoxy demonstrated techniques to make Redis (and any other system) transactional. And of course there's always good old Two-Phase Commit. If you don't want to read those papers, I wrote about a simplified implementation of Delta Lake and also wrote about a simplified MVCC implementation over a generic key-value storage layer.

It is both the beauty and the burden of transactions that they are not intrinsic to a storage system. Postgres and MySQL and SQLite have transactions, but you don't need to use them; it isn't possible to require you to use transactions. Many developers (myself a few years ago included) do not know why you should use them. (Hint: read Designing Data Intensive Applications.)

And you can take it even further by ignoring the transaction layer of an existing transactional database and implementing your own transaction layer, as Convex has done (the Epoxy paper above also does this). It isn't entirely clear that you have a lot to lose by implementing your own transaction layer, since the indexes you'd want on the version field of a value would only be as expensive or slow as any other secondary index in a transactional database. Though why you'd do this isn't obvious either (I would like to read about this from Convex some time).

It's useful to see transaction protocols as another tool in your system design tool chest when you care about consistency, atomicity, and isolation, especially as you build systems that span data systems. Maybe, as Ben Hindman hinted at the last NYC Systems, even proprietary APIs will eventually provide something like two-phase commit so physical systems outside our control can become transactional too.
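To make the "transactions are a protocol" point concrete, here is a toy sketch of the two-phase commit shape in C. Everything in it (names, types, structure) is illustrative rather than taken from any of the systems above, and it waves away the hard parts, like durably logging the decision and retrying until every participant hears it:

    #include <stddef.h>

    /* the coordinator only needs each participant to answer prepare
       and then obey commit/abort; whether the participant is Redis,
       S3 or a filesystem is irrelevant to the protocol */
    enum vote { VOTE_COMMIT, VOTE_ABORT };

    struct participant {
        enum vote (*prepare)(void *state); /* phase 1: promise durability */
        void (*commit)(void *state);       /* phase 2, all voted yes */
        void (*abort)(void *state);        /* phase 2, someone voted no */
        void *state;
    };

    /* returns 0 if the transaction committed, -1 if it aborted */
    int two_phase_commit(struct participant *ps, size_t n)
    {
        size_t i, j;

        for (i = 0; i < n; i++) {
            if (ps[i].prepare(ps[i].state) != VOTE_COMMIT) {
                /* roll back everyone who already prepared */
                for (j = 0; j < i; j++)
                    ps[j].abort(ps[j].state);
                return -1;
            }
        }
        for (i = 0; i < n; i++)
            ps[i].commit(ps[i].state);
        return 0;
    }

All the storage-specific work lives behind those three function pointers, which is exactly the sense in which the transaction is a protocol layered on top rather than a property of the store.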

21 hours ago 2 votes
Humanities Crash Course Week 16: The Art of War

In week 16 of the humanities crash course, I revisited the Tao Te Ching and The Art of War. I just re-read the Tao Te Ching last year, so I only revisited my notes now. I've also read The Art of War a few times, but decided to revisit it now anyway.

Readings

Both books are related. The Art of War is older; Sun Tzu wrote it around 500 BCE, at a time when war was becoming more "professionalized" in China. The book aims to convey what had (or hadn't) worked on the battlefield.

The starting point is conflict. There's an enemy we're looking to defeat. The best victory is achieved without engagement. That's not always possible, so the book offers pragmatic suggestions on tactical maneuvers and such. It gives good advice for situations involving conflict, which is why it has influenced leaders (including businesspeople) through the centuries:

- It's better to win before any shots are fired (i.e., through cunning and calculation).
- Use deception.
- Don't let conflicts drag on.
- Understand the context to use it to your advantage.
- Keep your forces unified and disciplined.
- Adapt to changing conditions on the ground.
- Consider economics and logistics.
- Gather intelligence on the opposition.

The goal is winning through foresight rather than brute force. Good advice!

The Tao Te Ching, written by Lao Tzu around the late 4th century BCE, is the central text in Taoism, a philosophy that aims for skillful action by aligning with the natural order of the universe: doing through "non-doing" and transcending distinctions (which aren't present in reality but are layered onto experience by humans). Tao means Way, as in the Way to achieve such alignment. The book is a guide to living the Tao. (Living in Tao?) But as it makes clear from its very first lines, you can't really talk about it: the Tao precedes language. It's a practice, and the practice entails non-striving.

Audiovisual

Music: Gioia recommended the Beatles (The White Album, Sgt. Pepper's, and Abbey Road) and the Rolling Stones (Let It Bleed, Beggars Banquet, and Exile on Main Street). I'd heard all three Rolling Stones albums before, but don't know them by heart (like I do with the Beatles), so I revisited all three. Some songs sounded a bit cringe-y, especially after having heard "real" blues a few weeks ago. Of the three albums, Exile on Main Street sounds most authentic. (Perhaps because of the band members' altered states?) In any case, it sounded most "in the Tao" to me; that is, as though the musicians surrendered to the experience of making this music. It's about as rock 'n roll as it gets.

Arts: Gioia recommended looking at Chinese architecture. As usual, my first thought was to look for short documentaries or lectures on YouTube. I was surprised by how little there was. Instead, I read the webpage Gioia suggested.

Cinema: Since we headed again to China, I took in another classic Chinese film that had long been on my to-watch list: Wong Kar-wai's IN THE MOOD FOR LOVE. I found it more Confucian than Taoist, although its slow pacing, gentleness, focus on details, and passivity strike something of a Taoist mood.

Reflections

When reading the Tao Te Ching, I'm often reminded of this passage from the Gospel of Matthew:

No man can serve two masters: for either he will hate the one, and love the other; or else he will hold to the one, and despise the other. Ye cannot serve God and mammon. Therefore I say unto you, Take no thought for your life, what ye shall eat, or what ye shall drink; nor yet for your body, what ye shall put on.
Is not the life more than meat, and the body than raiment? Behold the fowls of the air: for they sow not, neither do they reap, nor gather into barns; yet your heavenly Father feedeth them. Are ye not much better than they? Which of you by taking thought can add one cubit unto his stature? And why take ye thought for raiment? Consider the lilies of the field, how they grow; they toil not, neither do they spin: And yet I say unto you, That even Solomon in all his glory was not arrayed like one of these. Wherefore, if God so clothe the grass of the field, which to day is, and to morrow is cast into the oven, shall he not much more clothe you, O ye of little faith? Therefore take no thought, saying, What shall we eat? or, What shall we drink? or, Wherewithal shall we be clothed? (For after all these things do the Gentiles seek:) for your heavenly Father knoweth that ye have need of all these things. But seek ye first the kingdom of God, and his righteousness; and all these things shall be added unto you. Take therefore no thought for the morrow: for the morrow shall take thought for the things of itself. Sufficient unto the day is the evil thereof.

The Tao Te Ching is older and from a different culture, but "Consider the lilies of the field, how they grow; they toil not, neither do they spin" has always struck me as very Taoistic: both texts emphasize non-striving and putting your trust in a higher order.

Even though it's older still, that spirit is also evident in The Art of War. It's not merely letting things happen, but aligning mindfully with the needs of the time. Sometimes we must fight. Best to do it quickly and efficiently. And better yet if the conflict can be settled before it begins.

Notes on Note-taking

This week, I started using ChatGPT's new o3 model. Its answers are a bit better than what I got with previous models, but there are downsides.

For one thing, o3 tends to format answers in tables rather than lists. This works well if you use ChatGPT in a wide window, but is less useful on a mobile device or (as in my case) in a narrow window to the side. This is how I usually use ChatGPT on my Mac: in a narrow window. o3's responses often include tables that get cut off in this window.

For another, replies take much longer as the AI does more "research" in the background. As a result, it feels less conversational than 4o, which changes how I interact with it. I'll play more with o3 for work, but for this use case, I'll revert to 4o.

Up Next

Gioia recommends Apuleius's The Golden Ass. I've never read this, and frankly feel wary about returning to the period of Roman decline. (Too close to home?) But I'll approach it with an open mind.

Again, there's a YouTube playlist for the videos I'm sharing here. I'm also sharing these posts via Substack if you'd like to subscribe and comment. See you next week!

14 hours ago 1 votes
My approach to teaching electronics

Explaining the reasoning behind my series of articles on electronics -- and asking for your thoughts.

yesterday 2 votes