More from Louwrentius
Skip to the bottom two paragraphs for instructions on how to replace the battery.

I bought my Bose SoundLink on-ear Bluetooth headphones for 250 Euros around 2017 and I really like them. They are small, light, comfortable and can easily fit in a coat pocket when folded. Up until now (about 7 years later) I have replaced the ear cushions in 2019 (€25) and 2024 (€18).

Early 2025, battery capacity had deteriorated to the point where it became noticeable. The battery was clearly dying. Unfortunately, these headphones aren't designed for easy battery replacement:

- Bose hasn't published instructions on how to replace the battery, doesn't offer a replacement battery and hasn't documented which battery type/model is used.
- The left 'head phone' has two Torx security screws and most people won't have the appropriate screwdriver for this size.
- There is soldering involved.

I wanted to try a battery replacement anyway, as I hate to throw away a perfectly good, working product just because the battery has worn out. Maybe at some point the headband will need replacing, but with a fresh battery, these headphones can last another 7 years. Let's prevent a bit of e-waste with a little bit of cost and effort. Most of all, the cost of this battery replacement is much lower than a new pair of headphones: the battery was €18 including taxes and shipping.

Right to repair should include easy battery replacement

Although my repair seems to have worked out fine, it requires enough effort that most people won't even try. For this reason, I feel that it should be mandatory by law that:

- Batteries in any product must be user-replaceable (no special equipment or soldering required).
- Batteries must be provided by the vendor until 10 years after the last day the product was sold (unless it's a standard format like AA(A) or 18650).
- Batteries must be provided at max 10% of the cost of the original product.
- The penalty for non-compliance should be high enough that it won't be regarded as the cost of doing business.

For that matter, all components that may wear down over time should be user-replaceable.

What you need to replace the battery

- Buy the exact battery type: ahb571935pct-01 (350mAh) (notice the three wires!)
- A Phillips #0 screwdriver / bit
- A Torx T6H security screwdriver / bit (iFixit kits have them)
- A soldering iron
- Solder
- Heat shrink for 'very thin wire'
- A multimeter (optional)
- A bit of tape to 'cap off' bare battery leads

Please note that I found another battery, ahb571935pct-03, with similar specifications (capacity and voltage), but I don't know if it will fit.

Putting the headphone ear cushion back on can actually be the hardest part of the process: you need to be firm, and this process is documented by Bose.

Battery replacement steps I took

Make sure you don't short the wires on the old or new battery during replacement. The battery is located in the left 'head phone'.

1. Use a multimeter to check that your new battery isn't dead (it should read 3+ volts).
2. Remove the ear cushion from the left 'head phone' very gently, so as not to tear the rim.
3. Remove the two Phillips screws that keep the driver (speaker) in place.
4. Remove the two Torx screws (you may have to press a bit harder).
5. Remove the speaker and be careful not to snap the wire.
6. Gently remove the battery from the 'head phone'.
7. Cut the wires close to the old battery (one by one!) and cover the wires on the battery to prevent a short.
8. Strip the three wires from the headphones a tiny bit (just a few mm).
9. Put a short piece of heat shrink on each of the three wires of the battery.
10. Solder each wire to the correct wire in the ear cup.
11. Adjust the location of the heat shrink over the freshly soldered joint. Use the soldering iron close to the heat shrink to shrink it (don't touch anything); this can take some time, be patient.
12. Check that the heat shrink is fixed in place and can't move.
13. Put the battery into its specific location in the back of the 'head phone'.
14. Test the headphones briefly before reassembling.
15. Reassemble the 'head phone' (consider leaving out the two Torx screws).
16. Dispose of the old battery in a responsible manner.

My 4U 71 TiB ZFS NAS built with twenty-four 4 TB drives is over 10 years old and still going strong. Although now on its second motherboard and power supply, the system has yet to experience a single drive failure (knock on wood). Zero drive failures in ten years, how is that possible?

Let's talk about the drives first

The 4 TB HGST drives have roughly 6000 hours on them after ten years. You might think something's off, and you'd be right: that's only about 250 days' worth of runtime. And therein lies the secret of drive longevity (I think): turn the server off when you're not using it.

According to people on Hacker News, I have my bearings wrong. The chance of having zero drive failures over 10 years for 24 drives is much higher than I thought it was. So this good result may not be related to turning my NAS off and keeping it off most of the time.

My NAS is turned off by default. I only turn it on (remotely) when I need to use it. I use a script to turn the IoT power bar on and, once the BMC (Baseboard Management Controller) is done booting, I use IPMI to turn on the NAS itself. I could have used Wake-on-LAN as an alternative. Once I'm done using the server, I run a small script that turns the server off, waits a few seconds and then turns the wall socket off. It wasn't enough for me to just turn off the server but leave the motherboard, and thus the BMC, powered, because that's a constant 7 watts (about two Raspberry Pis at idle) being wasted 24/7.

This process works for me because I run other services on low-power devices such as Raspberry Pi 4s, or on servers that use much less power when idling than my 'big' NAS. This process reduces my energy bill considerably (the primary motivation) and also seems great for hard drive longevity.
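For illustration, such a power-up/power-down routine could look like the minimal sketch below. This is not the actual script: the BMC address, credentials and timings are assumptions, and set_wall_socket() is only a placeholder for whatever API the IoT power bar exposes (a Tasmota/Shelly-style HTTP call, a zigbee2mqtt publish, and so on).

```python
#!/usr/bin/env python3
"""Rough sketch of a power-up / power-down routine as described above.

Not the actual script: BMC address, credentials and timings are made up,
and set_wall_socket() stands in for the IoT power bar's own API.
"""
import subprocess
import sys
import time

BMC_HOST = "192.168.1.50"   # assumed BMC address
BMC_USER = "admin"          # assumed credentials
BMC_PASS = "secret"


def ipmi(*args: str) -> int:
    """Run an ipmitool command against the BMC over the network."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True).returncode


def set_wall_socket(on: bool) -> None:
    """Placeholder: switch the IoT power bar on or off here."""
    print(f"wall socket -> {'on' if on else 'off'}")


def power_up() -> None:
    set_wall_socket(True)
    # Wait until the BMC has finished booting and answers IPMI requests.
    while ipmi("chassis", "power", "status") != 0:
        time.sleep(5)
    ipmi("chassis", "power", "on")


def power_down() -> None:
    ipmi("chassis", "power", "soft")  # ask the OS for a clean shutdown
    time.sleep(60)                    # give it time to finish
    set_wall_socket(False)


if __name__ == "__main__":
    power_down() if "off" in sys.argv[1:] else power_up()
```

Wake-on-LAN, mentioned above as an alternative, would replace the IPMI power-on step with a magic packet sent to the NIC; the rest of the sequence stays the same.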
Although zero drive failures to date is awesome, N=24 is not very representative and I could just be very lucky. Yet it was the same story with the predecessor of this NAS, a machine with 20 drives (1 TB Samsung Spinpoint F1s, remember those?), and I also had zero drive failures during its operational lifespan (~5 years).

The motherboard (died once)

Although the drives are still OK, I had to replace the motherboard a few years ago. The failure mode of the motherboard was interesting: it was impossible to get into the BIOS and it would occasionally fail to boot. I tried the obvious, like removing the CMOS battery, but to no avail. Fortunately, the motherboard[1] was still available on eBay for a decent price, so that ended up not being a big deal.

ZFS

ZFS worked fine for all these years. I've switched operating systems over the years and I never had an issue importing the pool back into the new OS install. If I were to build a new storage server, I would definitely use ZFS again.

I run a zpool scrub on the drives a few times a year[2]. The scrub has never found a single checksum error. I must have run so many scrubs that more than a petabyte of data has been read from the drives (all drives combined), and ZFS never had to kick in.

I'm not surprised by this result at all. Drives tend to fail most often in two modes:

- total failure, where the drive isn't even detected
- bad sectors (read or write failures)

There is a third failure mode, but it's extremely rare: silent data corruption. Silent data corruption is 'silent' because the disk isn't aware it delivered corrupted data, or the SATA connection didn't detect any checksum errors. However, due to all the low-level checksumming, this risk is extremely small. It's a real risk, don't get me wrong, but it's a small risk. To me, it's a risk you mostly care about at scale, in datacenters[4], but for residential usage it's totally reasonable to accept the risk[3]. But ZFS is not that difficult to learn, and if you are well-versed in Linux or FreeBSD, it's absolutely worth checking out. Just remember!

Sound levels (It's Oh So Quiet)

This NAS is very quiet for a NAS (video with audio). But to get there, I had to do some work. The chassis contains three sturdy 12V fans that cool the 24 drive cages. These fans are extremely loud if they run at their default speed. But because they are so beefy, they are fairly quiet when they run at idle RPM[5], yet they still provide enough airflow most of the time. Running at idle speeds was not enough, though, as the drives would heat up eventually, especially when they are being read from or written to.

Fortunately, the particular Supermicro motherboard I bought at the time allows all fan headers to be controlled through Linux. So I decided to create a script that sets the fan speed according to the temperature of the hottest drive in the chassis. I actually visited a math-related subreddit and asked for an algorithm that would best fit my need to create a silent setup and also keep the drives cool. Somebody recommended a "PID controller", which I knew nothing about. So I wrote some Python, stole some example Python PID controller code, and tweaked the parameters to find a balance between sound and cooling performance. The script has worked very well over the years and has kept the drives at 40°C or below.

PID controllers are awesome, and I feel they should be used in much more equipment that controls fans, temperature and so on, instead of 'dumb' on/off behaviour or less 'dumb' lookup tables.
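For illustration, a minimal PID fan-control loop along those lines is sketched below. This is not the actual script: the hwmon/PWM paths, the gains, the base duty cycle and the polling interval are assumptions and will differ per motherboard and drive setup.

```python
#!/usr/bin/env python3
"""Minimal PID fan-control loop in the spirit of the script described above.

Not the actual script: the hwmon paths, gains, base duty cycle and polling
interval are assumptions that differ per motherboard and drive setup.
"""
import glob
import time

SETPOINT = 40.0                 # keep the hottest drive at or below 40 degrees C
KP, KI, KD = 4.0, 0.05, 1.0     # placeholder gains, to be tuned by experiment
BASE_PWM = 60                   # idle fan duty cycle (0-255)
PWM_PATH = "/sys/class/hwmon/hwmon2/pwm1"   # assumed fan PWM control file


def hottest_drive_temp() -> float:
    """Highest temperature in degrees C across hwmon sensors (assumes drivetemp)."""
    temps = []
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp1_input"):
        with open(path) as f:
            temps.append(int(f.read()) / 1000.0)
    return max(temps)


def set_fan_pwm(duty: float) -> None:
    """Write a clamped 0-255 duty cycle to the fan PWM file."""
    with open(PWM_PATH, "w") as f:
        f.write(str(int(max(0, min(255, duty)))))


integral = 0.0
prev_error = 0.0
while True:
    error = hottest_drive_temp() - SETPOINT                 # positive when drives run hot
    integral = max(-200.0, min(200.0, integral + error))    # naive anti-windup
    derivative = error - prev_error
    prev_error = error
    # Spin the fans up as the hottest drive climbs above the setpoint.
    set_fan_pwm(BASE_PWM + KP * error + KI * integral + KD * derivative)
    time.sleep(30)
```

The proportional term does most of the work; the integral term slowly removes any remaining offset from the setpoint, and the gains above are placeholders that would need tuning against real temperature data.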
Networking

I started out with quad-port gigabit network controllers and used network bonding to get around 450 MB/s network transfer speeds between various systems. This setup required a ton of UTP cables, so eventually I got bored with that and bought some cheap InfiniBand cards. That worked fine; I could reach around 700 MB/s between systems. As I decided to move away from Ubuntu and back to Debian, I faced a problem: the InfiniBand cards didn't work anymore and I could not figure out how to fix it. So I decided to buy some second-hand 10Gbit Ethernet cards, and those work totally fine to this day.

The dead power supply

When you turn this system on, all drives spin up at once (no staggered spin-up) and that draws around 600W for a few seconds. I remember that the power supply was rated for 750W and the 12 volt rail would have been able to deliver enough power, but it would sometimes cut out at boot nonetheless.

UPS (or lack thereof)

For many years I used a beefy UPS with the system to protect against power failure, just to be able to shut down cleanly during an outage. This worked fine, but I noticed that the UPS used another 10+ watts on top of the usage of the server, and I decided it had to go. Losing the system due to power shenanigans is a risk I accept.

Backups (or a lack thereof)

My most important data is backed up thrice. But a lot of data stored on this server isn't important enough for me to back up. I rely on replacement hardware and ZFS protecting against data loss due to drive failure. And if that's not enough, I'm out of luck. I've accepted that risk for 10 years. Maybe one day my luck will run out, but until then, I enjoy what I have.

Future storage plans (or lack thereof)

To be frank, I don't have any. I built this server back in the day because I didn't want to shuffle data around due to storage space constraints, and I still have ample space left. I have a spare motherboard, CPU, memory and a spare HBA card, so I'm quite likely able to revive the system if something breaks.

As hard drive sizes have increased tremendously, I may eventually move away from the 24-drive-bay chassis into a smaller form factor. It's possible to create the same amount of redundant storage space with only 6-8 hard drives with RAIDZ2 (RAID 6) redundancy. Yet storage is always expensive. Another likely scenario is that in the coming years this system eventually dies and I decide not to replace it at all, and my storage hobby will come to an end.

[1] I needed the same board, because the server uses four PCIe slots: 3 x HBA and 1 x 10Gbit NIC.
[2] It takes ~20 hours to complete a scrub and it uses a ton of power while doing so. As I'm on a dynamic power tariff, I run it on 'cheap' days.
[3] Every time I listen to ZFS enthusiasts, you get the impression you are taking insane risks with your data if you don't run ZFS. I disagree; it all depends on context and circumstances.
[4] Enterprise hard drives used in servers and SANs had larger sector sizes to accommodate even more checksumming data to protect against silent data corruption.
[5] Because there is little airflow by default, I had to add a fan to cool the four PCIe cards (HBA and networking) or they would have gotten way too hot.
I've always been fond of the idea of the Raspberry Pi: an energy-efficient, small, cheap but capable computer. An ideal home server. Until the Pi 4, the Pi was not that capable, and only with the relatively recent Pi 5 (fall 2023) do I feel the Pi is OK performance-wise, although still hampered by SD card performance[1]. And the Pi isn't that cheap either. The Pi 5 can be fitted with an NVMe SSD, but for me it's too little, too late, because I feel there is a type of computer on the market that is much more compelling than the Pi. I'm talking about the tinyminimicro home lab 'revolution' started by servethehome.com about four years ago (2020).

A 1L mini PC (Elitedesk 705 G4) with a Raspberry Pi 5 on top

During the pandemic, the Raspberry Pi was in short supply and people started looking for alternatives. The people at servethehome realised that these small enterprise desktop PCs could be a good option. Dell (Micro), Lenovo (Tiny) and HP (Mini) all make these small desktop PCs, which are also known as 1L (one liter) PCs. These mini PCs are not cheap[2] when bought new, but older models are sold at a very steep discount as enterprises offload old models by the thousands on the second-hand market (through intermediaries). Although these computers are often several years old, they are still much faster than a Raspberry Pi (including the Pi 5) and can hold more RAM. I decided to buy two HP Elitedesk mini PCs to try them out, one based on AMD and the other based on Intel.

The Hardware

|                   | Elitedesk 800 G3 (Intel) | Elitedesk 705 G4 (AMD)       |
|-------------------|--------------------------|------------------------------|
| CPU               | Intel i5-6500 (65W)      | AMD Ryzen 3 PRO 2200GE (35W) |
| RAM               | 16 GB (max 32 GB)        | 16 GB (max 32 GB)            |
| Storage           | 250 GB (SATA SSD)        | 250 GB (NVMe)                |
| Network           | 1Gb (Intel)              | 1Gb (Realtek)                |
| WiFi              | Not installed            | Not installed                |
| Display           | 2 x DP, 1 x VGA          | 3 x DP                       |
| Remote management | Yes                      | No                           |
| Idle power        | 4 W                      | 10 W                         |
| Price             | €160                     | €115                         |

The AMD-based system is cheaper, but you 'pay' in higher idle power usage. In absolute terms 10 watts is still decent, but the Intel model directly competes with the Pi 5 on idle power consumption.

Elitedesk 705 left, Elitedesk 800 right

Regarding display output, these devices have two fixed DisplayPort outputs, but there is one port that is configurable: it can be DisplayPort, VGA or HDMI. Depending on the supplier you may be able to configure this option, or you can buy the modules separately for €15-€25 online. Both models seem to be equipped with socketed CPUs. Although upgrade options for this form factor are limited, it's possible to upgrade.

Comparing cost with the Pi 5

The Raspberry Pi 5 with (max) 8 GB of RAM costs ~91 Euro, almost exactly the same price as the AMD-based mini PC[3] in its base configuration (8 GB RAM). Yet, with the Pi, you still need:

- a power supply (€13)
- a case (€11)
- an SD card or NVMe SSD (€10-€45)
- an NVMe hat (€15) (optional, but it would make the comparison fairer)

It's true that I'm comparing a new computer to a second-hand device, and you can decide if that matters in this case. With a complete Pi 5 at around €160 including taxes and shipping, the AMD-based 1L PC is clearly the cheaper and still more capable option.
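A quick tally of those numbers (a sketch; the prices are the ones quoted above and vary per shop, with the storage line spanning the SD-card-to-NVMe range):

```python
# Tallying the Pi 5 extras listed above (euros).
board, psu, case, nvme_hat = 91, 13, 11, 15
storage_low, storage_high = 10, 45                          # SD card vs. NVMe SSD

total_low = board + psu + case + nvme_hat + storage_low     # 140
total_high = board + psu + case + nvme_hat + storage_high   # 175
print(f"complete Pi 5: EUR {total_low} to {total_high}")    # vs EUR 115 / 160 for the used mini PCs
```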
Comparing performance with the Pi 5

The first two rows in this table show the Geekbench 6 scores of the Intel and AMD mini PCs I bought for evaluation. I've added the benchmark results of some other computers I have access to, just to provide some context.

| CPU                                 | Single-core | Multi-core |
|-------------------------------------|-------------|------------|
| AMD Ryzen 3 PRO 2200GE (35W)        | 1148        | 3343       |
| Intel i5-6500 (65W)                 | 1307        | 3702       |
| Mac Mini M2                         | 2677        | 9984       |
| Mac Mini i3-8100B                   | 1250        | 3824       |
| HP Microserver Gen8 Xeon E3-1200v2  | 744         | 2595       |
| Raspberry Pi 5                      | 806         | 1861       |
| Intel i9-13900K                     | 2938        | 21413      |
| Intel E5-2680 v2                    | 558         | 5859       |

Sure, these mini PCs won't come close to modern hardware like the Apple M2 or the Intel i9. But if we look at the performance of the mini PCs, we can observe that:

- The Intel i5-6500 CPU is 13% faster in single-core than the AMD Ryzen 3 PRO.
- Both the Intel and AMD processors are 42%-62% faster than the Pi 5 in single-core performance.

Storage (performance)

If there's one thing that really holds the Pi back, it's the SD card storage. If you buy a decent SD card (A1/A2) that doesn't have terrible random IOPS performance, you realise that you can get a SATA or NVMe SSD for almost the same price that has more capacity and much better (random) IO performance. With the Pi 5, NVMe SSD storage isn't standard and requires an extra hat. I feel that the missing integrated NVMe storage option is a missed opportunity that - in my view - hurts the Pi 5.

In contrast, the Intel-based mini PC came with a SATA SSD in a special mounting bracket. That bracket also contained a small fan to keep the underlying NVMe storage (not present) cool.

There is a fan under the SATA SSD

The AMD-based mini PC was equipped with an NVMe SSD but not with the SSD mounting bracket. The low price must come from somewhere... However, both systems have support for SATA SSD storage, an 80mm NVMe SSD and a small 2230 slot for a WiFi card. There seems to be no room in the 705 G4 to put in a small SSD, but there are adapters available that convert the WiFi slot to a slot usable for an extra NVMe SSD, which might be an option for the 800 G3.

Noise levels (subjective)

Both systems are barely audible at idle, but you will notice them (if you're sensitive to that sort of thing). The AMD system seems to become quite loud under full load. The Intel system also became loud under full load, but much more like a Mac Mini: the noise is less loud and more tolerable in my view.

Idle power consumption

Elitedesk 800 (Intel)

I can get the Intel-based Elitedesk 800 G3 down to 3.5 watts at idle. Let that sink in for a moment: that's about the same power draw as the Raspberry Pi 5 at idle! Just installing Debian 12 instead of Windows 10 makes the idle power consumption drop from 10-11 watts to around 7 watts. Then, on Debian, you:

- run apt install powertop
- run powertop --auto-tune (saves ~2 watts)
- unplug the monitor (run headless) (saves ~1 watt)

You have to put the powertop --auto-tune command in /etc/rc.local so it runs at every boot:

#!/usr/bin/env bash
powertop --auto-tune
exit 0

Then apply chmod +x /etc/rc.local.

So, for about the same idle power draw you get so much more performance, and you can go beyond the max 8 GB RAM of the Pi 5.

Elitedesk 705 (AMD)

I managed to get this system to 10-11 watts at idle, but it was a pain to get there. I measured around 11 watts idle power consumption running the preinstalled Windows 11 (with a monitor connected). After installing Debian 12, the system used 18 watts at idle, and so began a journey of many hours trying to solve this problem. The culprit is the integrated Radeon Vega GPU.
To solve the problem you have to:

- configure the 'BIOS' to only use UEFI
- reinstall Debian 12 using UEFI
- install the appropriate firmware with apt install firmware-amd-graphics

If you boot the computer using legacy 'BIOS' mode, the AMD Radeon firmware won't load no matter what you try. You can see this by issuing the commands:

rmmod amdgpu
modprobe amdgpu

You may notice errors on the physical console or in the logs claiming that the GPU driver isn't loaded because it's missing firmware (a lie). This whole process got me to around 12 watts at idle. To get to ~10 watts idle you also need to run powertop --auto-tune and disconnect the monitor, as stated in the 'Intel' section earlier. Given the whole picture, 10-11 watts at idle is perfectly okay for a home server, and if you just want the cheapest option possible, this is still a fine system.

KVM Virtualisation

I'm running vanilla KVM (Debian 12) on these mini PCs and it works totally fine. I've created multiple virtual machines without issue and performance seemed perfectly adequate.

Boot performance

From the moment I pressed the power button to SSH connecting, it took 17 seconds for the Elitedesk 800. The Elitedesk 705 took 33 seconds until I got an SSH shell. These boot times include the 5-second boot delay of the GRUB bootloader screen that is the default for Debian 12.

Remote management support

Some of you may be familiar with IPMI (iLO, DRAC, and so on), which is standard on most servers. But there is also similar technology for (enterprise) desktops: Intel AMT/ME is a technology used for remote out-of-band management of computers. It can be an interesting feature in a homelab environment, but I have no need for it. If you want to try it, you can follow this guide. For most people, it may be best to disable the AMT/ME feature as it has a history of security vulnerabilities. This may not be a huge issue within a trusted home network, but you have been warned. The AMD-based Elitedesk 705 didn't come with equivalent remote management capabilities as far as I can tell.

Alternatives

The models discussed here are older models that were selected for a particular price point. Newer models from Lenovo, HP and Dell are equipped with more modern processors, which are faster and have more cores. They are often also priced significantly higher. If you are looking for low-power small form factor PCs with more potent or customisable hardware, you may want to look at second-hand NUC form factor PCs.

Stacking multiple mini PCs

The AMD-based Elitedesk 705 G4 is closed at the top, so it's possible to stack other mini PCs on top. The Intel-based Elitedesk 800 G3 has a perforated top enclosure, and putting another mini PC on top might suffocate the CPU fan. As you can see, the bottom/foot of the mini PC doubles as a VESA mount and has four screw holes. By putting some screws in those holes, you can effectively create standoffs that give the machine below enough space to breathe (maybe you can use actual standoffs).

Evaluation and conclusion

I think these second-hand 1L tinyminimicro PCs are better suited to play the role of home (lab) server than the Raspberry Pi (5). The increased CPU performance, the built-in SSD/NVMe support, the option to go beyond 8 GB of RAM (up to 32 GB) and the price point on the second-hand market really make a difference. I love the Raspberry Pi and I still have a ton of Pi 4s. This solar-powered blog is hosted on a Pi 4 because of the low power consumption and the availability of GPIO pins for the solar status display.
That said, unless the Raspberry Pi becomes a lot cheaper (and more potent), I'm not so sure it's such a compelling home server.

This blog post was featured on the front page of Hacker News.

[1] Even a decent quality SD card is no match (in terms of random IOPS and sequential throughput) for a regular SATA or NVMe SSD. The fact that the Pi 5 has no on-board NVMe support is a huge shortcoming in my view.
[2] In the sense that you can buy a ton of fully decked-out Pi 5s for the price of one such system.
[3] The base price included the external power brick and 256 GB NVMe storage.
Before ChatGPT caused a sensation, big tech companies like Facebook and Apple were betting their future growth on virtual reality. But I'm convinced that virtual reality will never be a mainstream thing. If you have ever used VR, you know why:

- a heavy thing on your head that messes up your hair
- nausea

The focus on virtual reality felt like desperation to me: the desperation of big tech companies trying to find new growth, ideally a monopoly they control[1], to satisfy the demands of shareholders. And then OpenAI dropped ChatGPT and all the big tech companies started to pivot so fast, because unlike VR, AI doesn't involve making people nauseated and look silly.

It's probably obvious that I feel it's not about AI itself. It is really about huge tech companies that have found a new way to sustain growth a bit longer, now that all other markets have been saturated. Flush with cash, they went nuts and bought up all the AI accelerator hardware[2], which in turn uses unspeakable amounts of energy to train new large language models.

Despite all the hype, current AI technology is at its core a very sophisticated statistical model. It's all about probabilities; it can't actually reason. As I see it, work done by AI thus can't be trusted. Depending on the specific application, that may be less of an issue, but it is a fundamental limitation of current technology. And this gives me pause, as it limits the application where it is most wanted: to control labour. To reduce the cost of headcount and to suppress wages. As AI tools become capable enough, it would be irresponsible towards shareholders not to pursue this direction.

All this just to illustrate that the real value of AI is not for the average person in the street. The true value is for those bigger companies who can keep on growing, and the rest is just collateral damage. But I wonder: when the AI hype is over, what new hype will take its place? I can't see it. I can't think of it. But I recognise that the internet created efficiencies that are convenient, yet social media weaponised this convenience to exploit our fundamental human weaknesses. As shareholder value rose, social media slowly chipped away at the fabric of our society: trust.

[1] I sold my Oculus Rift CV1 long ago. I lost hundreds of dollars of content, but I refuse to create a Facebook/Meta account.
[2] Climate change accelerators.
Introduction

Victron Multiplus-II inverter/chargers are configured with the veconfigure[1] tool. Unfortunately this is a Windows-only tool, but there is still a way for Apple users to run this tool without any problems.

Tip: if you've never worked with the Terminal app on macOS, it might not be an easy process, but I've done my best to make it as simple as I can.

A tool called 'Wine' makes it possible to run Windows applications on macOS. There are some caveats, but none of those apply to veconfigure; this tool runs great!

I won't cover in this tutorial how to make the MK-3 USB cable work. This tutorial is only meant for people who have a Cerbo GX or similar device, or run VenusOS, which can be used to remotely configure the Multiplus device(s).

Step 1: install brew on macOS

Brew is a tool that can install additional software.

- Visit https://brew.sh and copy the install command
- Open the Terminal app on your Mac and paste the command
- Now press 'Enter' or return

It can take a few minutes for 'brew' to install.

Step 2: install Wine

Enter the following two commands in the terminal:

brew tap homebrew/cask-versions
brew install --cask --no-quarantine wine-stable

Download Victron veconfigure

- Visit this page
- Scroll to the section "VE Configuration tools for VE.Bus Products"
- Click on the link "Ve Configuration Tools"
- You'll be asked if it's OK to download this file (VECSetup_B.exe), which is OK

Start the veconfigure installer with Wine

- Open a terminal window
- Run cd
- Enter the command wine Downloads\VECSetup_B.exe
- Observe that the veconfigure Windows setup installer starts
- Click on next, next, install and Finish
- veconfigure will run for the first time

How to start veconfigure after you close the app

- Open a terminal window
- Run cd
- Run cd .wine/drive_c/Program\ Files\ \(x86\)/VE\ Configure\ tools/
- Run wine VEConfig.exe
- Observe that veconfigure starts

Allow veconfigure access to files in your Mac Downloads folder

- Open a terminal window
- Run cd
- Run cd .wine/drive_c/
- Run ln -s ~/Downloads

We just made the Downloads directory on your Mac accessible to the veconfigure software. If you put the .RSVC files in the Downloads folder, you can edit them. Please follow the instructions for remote configuration of the Multiplus II.

[1] Click on the "Ve Configuration Tools" link in the "VE Configuration tools for VE.Bus Products" section.
More in technology
When 3D printing matured from an industrial edge case to a mainstream commercial technology in the 2010s, it captured the imaginations of everyone from schoolteachers to fashion designers. But if there’s one group that really, really got excited about 3D printing, it was makers. When 3D printers became commercially available, they knew that everything was […] The post What can you do with Arduino and a new 3D printer? appeared first on Arduino Blog.
There are a handful of instruments that are staples of modern music, like guitars and pianos. And then there are hundreds of other instruments that were invented throughout history and then fell into obscurity without much notice. The Luminaphone, invented by Harry Grindell Matthews and unveiled in 1925, is a particularly bizarre example. Few people […] The post Recreating a bizarre century-old electronic instrument appeared first on Arduino Blog.
Sometimes I think I should pivot my career to home automation critic, because I have many opinions on the state of the home automation industry---and they're pretty much all critical.

Virtually every time I bring up home automation, someone says something about the superiority of the light switch. Controlling lights is one of the most obvious applications of home automation, and there is a roughly century long history of developments in light control---yet, paradoxically, it is an area where consumer home automation continues to struggle. An analysis of how and why billion-dollar tech companies fail to master the simple toggling of lights in response to human input will have to wait for a future article, because I will have a hard time writing one without descending into incoherent sobbing about the principles of scene control and the interests of capital. Instead, I want to just dip a toe into the troubled waters of "smart lighting" by looking at one of its earliest precedents: low-voltage lighting control.

A source I generally trust, the venerable "old internet" website Inspectapedia, says that low-voltage lighting control systems date back to about 1946. The earliest conclusive evidence I can find of these systems is a newspaper ad from 1948, but let's be honest, it's a holiday and I'm only making a half effort on the research. In any case, the post-war timing is not a coincidence. The late 1940s were a period of both rapid (sub)urban expansion and high copper prices, and the original impetus for relay systems seems to have been the confluence of these two.

But let's step back and explain what a relay or low-voltage lighting control system is. First, I am not referring to "low voltage lighting" meaning lights that run on 12 or 24 volts DC or AC, as was common in landscape lighting and is increasingly common today for integrated LED lighting. Low-voltage lighting control systems are used for conventional 120VAC lights.

In the most traditional construction, e.g. in the 1940s, lights would be served by a "hot" wire that passed through a wall box containing a switch. In many cases the neutral (likely shared with other fixtures) went directly from the light back to the panel, bypassing the switch... running both the hot and neutral through the switch box did not become conventional until fairly recently, to the chagrin of anyone installing switches that require a neutral for their own power, like timers or "smart" switches. The problem with this is that it lengthens the wiring runs. If you have a ceiling fixture with two different switches in a three-way arrangement, say in a hallway in a larger house, you could be adding nearly 100' in additional wire to get the hot to the switches and the runner between them. The cost of that wiring, in the mid-century, was quite substantial. Considering how difficult it is to find an employee to unlock the Romex cage at Lowes these days, I'm not sure that's changed that much.

There are different ways of dealing with this. In the UK, the "ring main" served in part to reduce the gauge (and thus cost) of outlet wiring, but we never picked up that particular eccentricity in the US (for good reason). In commercial buildings, it's not unusual for lighting to run on 240v for similar reasons, but 240v is discouraged in US residential wiring. Besides, the mid-century was an age of optimism and ambition in electrical technology, the days of Total Electric Living. Perhaps the technology of the relay, refined by so many innovations of WWII, could offer a solution.

Switch wiring also had to run through wall cavities, an irritating requirement in single-floor houses where much of the lighting wiring could be contained to the attic. The wiring of four-way and other multi-switch arrangements could become complex and require a lot more wall runs, discouraging builders from providing switches in the most convenient places. What if relays also made multiple switches significantly easier to install and relocate?

You probably get the idea. In a typical low-voltage lighting control system, a transformer provides a low voltage like 24VAC, much the same as used by doorbells. The light switches simply toggle the 24VAC control power to the coils of relays. Some (generally older) systems powered the relay continuously, but most used latching relays. In this case, all light switches are momentary, with an "on" side and an "off" side. This could be a paddle that you push up or down (much like a conventional light switch), a bar that you push the left or right sides of, or a pair of two push buttons.

In most installations, all of the relays were installed together in a single enclosure, usually in the attic where the high-voltage wiring to the actual lights would be fairly short. The 24VAC cabling to the switches was much smaller gauge, and depending on the jurisdiction might not require any sort of license to install. Many systems had enclosures with separate high voltage and low voltage components, or mounted the relays on the outside of an enclosure such that the high voltage wiring was inside and low voltage outside. Both arrangements helped to meet code requirements for isolating high and low voltage systems and provided a margin of safety in the low voltage wiring. That provided additional cost savings as well; low voltage wiring was usually installed without any kind of conduit or sheathed cable.

By 1950, relay lighting controls were making common appearances in real estate listings. A feature piece on the "Melody House," a builder's model home, in the Tacoma News Tribune reads thus:

Newest features in the house are the low voltage touch plate and relay system lighting controls, with wide plates instead of snap buttons---operated like the stops of a pipe organ, with the merest flick of a finger.

The comparison to a pipe organ is interesting, first in its assumption that many readers were familiar with typical organ stops. Pipe organs were, increasingly, one of the technological marvels of the era: while the concept of the pipe organ is very old, this same era saw electrical control systems (replete with relays!) significantly reduce the cost and complexity of organ consoles. What's more, the tonewheel electric organ had become well-developed and started to find its way into homes.

The comparison is also interesting because of its deficiencies. The Touch-Plate system described used wide bars, which you pressed the left or right side of---you could call them momentary SPDT rocker switches if you wanted. There were organs with similar rocker stops but I do not think they were common in 1950. My experience is that such rocker switch stops usually indicate a fully digital control system, where they make momentary action unobtrusive and avoid state synchronization problems. I am far from an expert on organs, though, which is why I haven't yet written about them. If you have a guess at which type of pipe organ console our journalist was familiar with, do let me know.

Touch-Plate seems to have been one of the first manufacturers of these systems, although I can't say for sure that they invented them. Interestingly, Touch-Plate is still around today, but their badly broken WordPress site ("Welcome to the new touch-plate.com" despite it actually being touchplate.com) suggests they may not do much business. After a few pageloads their WordPress plugin WAF blocked me for "exceed[ing] the maximum number of page not found errors per minute for humans." This might be related to my frustration that none of the product images load. It seems that the Touch-Plate company has mostly pivoted to reselling imported LED lighting (touchplateled.com), so I suppose the controls business is withering on the vine.

The 1950s saw a proliferation of relay lighting control brands, with GE introducing a particularly popular system with several generations of fixtures. Kyle Switch Plates, who sell replacement switch plates (what else?), list options for Remcon, Sierra, Bryant, Pyramid, Douglas, and Enercon systems in addition to the two brands we have met so far. As someone who pays a little too much attention to light switches, I have personally seen four of these brands, three of them still in use and one apparently abandoned in place.

Now, you might be thinking that simply economizing wiring by relocating the switches does not constitute "home automation," but there are other features to consider. For one, low-voltage light control systems made it feasible to install a lot more switches. Houses originally built with them often go a little wild with the n-way switching, every room providing lightswitches at every door. But there is also the possibility of relay logic. From the same article:

The necessary switches are found in every room, but in the master bedroom there is a master control panel above the bed, from where the house and yard may be flooded with instant light in case of night emergency.

Such "master control panels" were a big attraction for relay lighting, and the finest homes of the 1950s and 1960s often displayed either a grid of buttons near the head of the master bed, or even better, a GE "Master Selector" with a curious system of rotary switches. On later systems, timers often served as auxiliary switches, so you could schedule exterior lights. With a creative installer, "scenes" were even possible by wiring switches to arbitrary sets of relays (this required DC or half-wave rectified control power and diodes to isolate the switches from each other).

Many of these relay control systems are still in use today. While they are quite outdated in a certain sense, the design is robust and the simple components mean that it's usually not difficult to find replacement parts when something does fail. The most popular system is the one offered by GE, using their RR series relays (RR3, RR4, etc., to the modern RR9). That said, GE suggests a modernization path to their LightSweep system, which is really a 0-10v analog dimming controller that has the add-on ability to operate relays.

The failure modes are mostly what you would expect: low voltage wiring can chafe and short, or the switches can become stuck. This tends to cause the lights to stick on or off, and the continuous current through the relay coil often burns it out. The fix requires finding the stuck switch or short and correcting it, and then replacing the relay.

One upside of these systems that persists today is density: the low voltage switches are small, so with most systems you can fit 3 per gang.
Another is that they still make N-way switching easier. There is arguably a safety benefit, considering the reduction in mains-voltage wire runs. Yet we rarely see such a thing installed in homes newer than around the '80s.

I don't know that I can give a definitive explanation of the decline of relay lighting control, but reduced prices for copper wiring were probably a main factor. The relays added a failure point, which might lead to a perception of unreliability, and the declining familiarity of electricians means that installing a relay system could be expensive and frustrating today.

What really interests me about relay systems is that they weren't really replaced... the idea just went away. It's not like modern homes are providing a master control panel in the bedroom using some alternative technology. I mean, some do, those with prices in the eight digits, but you'll hardly ever see it. That gets us to the tension between residential lighting and architectural lighting control systems. In higher-end commercial buildings, and in environments like conference rooms and lecture halls, there's a well established industry building digital lighting control systems. Today, DALI is a common standard for the actual lighting control, but if you look at a range of existing buildings you will find everything from completely proprietary digital distributed dimming to 0-10v analog dimming to central dimmer racks (similar to traditional theatrical lighting).

Relay lighting systems were, in a way, a nascent version of residential architectural lighting control. And the architectural lighting control industry continues to evolve. If there is a modern equivalent to relay lighting, it's something like Lutron QSX. That's a proprietary digital lighting (and shade) control system, marketed for both residential and commercial use. QSX offers a wide range of attractive wall controls, tight integration to Lutron's HomeSense home automation platform, and a price tag that'll make your eyes water. Lutron has produced many generations of these systems, and you could make an argument that they trace their heritage back to the relay systems of the 1940s. But they're just priced way beyond the middle-class home.

And, well, I suppose that requires an argument based on economics. Prices have gone up. Despite tract construction being a much older idea than people often realize, it seems clear that today's new construction homes have been "value engineered" to significantly lower feature and quality levels than those of the mid-century---but they're a lot bigger. There is a sort of maxim that today's home buyers don't care about anything but square footage, and if you've seen what Pulte or D. R. Horton are putting up... well, I never knew that 3,000 sq ft could come so cheap, and look it too. Modern new-construction homes just don't come with the gizmos that older ones did, especially in the '60s and '70s. Looking at the sales brochure for a new development in my own Albuquerque ("Estates at La Cuentista"), besides 21st century suburbanization (Gated Community! "East Access to Paseo del Norte" as if that's a good thing!) most of the advertised features are "big." I'm serious!
If you look at the "More Innovation Built In" section, the "innovations" are a home office (more square footage), storage (more square footage), indoor and outdoor gathering spaces (to be fair, only the indoor ones are square footage), "dedicated learning areas" for kids (more square footage), and a "basement or bigger garage" for a home gym (more square footage). The only thing in the entire innovation section that I would call a "technical" feature is water filtration. You can scroll down for more details, and you get to things like "space for a movie room" and a finished basement described eight different ways.

Things were different during the peak of relay lighting in the '60s. A house might only be 1,600 sq ft, but the builder would deck it out with an intercom (including multi-room audio of a primitive sort), burglar alarm, and yes, relay lighting. All of these technologies were a lot newer and people were more excited about them; I bring up Total Electric Living a lot because of an aesthetic obsession but it was a large-scale advertising and partnership campaign by the electrical industry (particularly Westinghouse) that gave builders additional cross-promotion if they included all of these bells and whistles. Remember, that was when people were watching those old videos about the "kitchen of the future." What would a 2025 "Kitchen of the Future" promotional film emphasize? An island bigger than my living room and a nook for every meal, I assume.

Features like intercoms and even burglar alarms have become far less common in new construction, and even if they were present I don't think most buyers would use them. But that might seem a little odd, right, given the push towards home automation? Well, built-in home automation options have existed for longer than any of today's consumer solutions, but "built in" is a liability for a technology product. There are practical reasons, in that built-in equipment is harder to replace, but there's also a lamer commercial reason. Consumer technology companies want to sell their products like consumer technology, so they've recontextualized lighting control as "IoT" and "smart" and "AI" rather than something an electrician would hook up.

While I was looking into relay lighting control systems, I ran into an interesting example. The Lutron Lu Master Lumi 5. What a name! Lutron loves naming things like this. The Lumi 5 is a 1980s era product with essentially the same features as a relay system, but architected in a much stranger way. It is, essentially, five three-way switches in a box with remote controls. That means that each of the actual light switches in the house (which could also be dimmers) needs mains-voltage wiring, including runner, back to the Lumi 5 "interface." Pressing a button on one of the Lutron wall panels toggles the state of the relay in the "interface" cabinet, toggling the light. But, since it's all wired as a three-way switch, toggling the physical switch at the light does the same thing.

As is typical when combining n-way switches and dimming, the Lumi 5 has no control over dimmers. You can only dim a light up or down at the actual local control; the Lumi 5 can just toggle the dimmer on and off using the 3-way runner. The architecture also means that you have two fundamentally different types of wall panels in your house: local switches or dimmers wired to each light, and the Lu Master panels with their five buttons for the five circuits, along with "all on" and "all off."

The Lumi 5 "interface" uses simple relay logic to implement a few more features. Five mains-voltage-level inputs can be wired to time clocks, so that you can schedule any combination(s) of the circuits to turn on and off. The manual recommends models including one with an astronomical clock for sunrise/sunset. An additional input causes all five circuits to turn on; it's suggested for connection to an auxiliary relay on a burglar alarm, to turn all of the lights on should the alarm be triggered.

The whole thing is strange and fascinating. It is basically a relay lighting control system, like so many before it, but using a distinctly different wiring convention. I think the main reason for the odd wiring was to accommodate dimmers, an increasingly popular option in the 1980s that relay systems could never really contend with. It doesn't have the cost advantages of relay systems at all, it will definitely be more expensive! But it adds some features over the fancy Lutron switches and dimmers you were going to install anyway.

The Lu Master is the transitional stage between relay lighting systems and later architectural lighting controls, and it straddled too the end of relay light control in homes. It gives an idea of where relay light control in homes would have evolved, had the whole technology not been doomed to the niche zone of conference centers and universities. If you think about it, the Lu Master fills the most fundamental roles of home automation in lighting: control over multiple lights in a convenient place, scheduling and triggers, and an emergency function. It only lacks scenes, which I think we can excuse considering that the simple technology it uses does not allow it to adjust dimmers. And all of that with no Node-RED in sight!

Maybe that conveys what most frustrates me about the "home automation" industry: it is constantly reinventing the wheel, an oligopoly of tech companies trying to drag people's homes into their "ecosystem." They do so by leveraging the buzzword of the moment, IoT to voice assistants to, I guess now AI?, to solve a basic set of problems that were pretty well solved at least as early as 1948. That's not to deny that modern home automation platforms have features that old ones don't. They are capable of incredibly sophisticated things! But realistically, most of their users want only very basic functionality: control in convenient places, basic automation, scenes. It wouldn't sting so much if all these whiz-bang general purpose computers were good at those tasks, but they aren't.

For the very most basic tasks, things like turning on and off a group of lights, major tech ecosystems like HomeKit provide a user experience that is significantly worse than the model home of 1950. You could install a Lutron system, and it would solve those fundamental tasks much better... for a much higher price. But it's not like Lutron uses all that money to be an absolute technical powerhouse, a center of innovation at the cutting edge. No, even the latest Lutron products are really very simple, technically. The technical leaders here, Google, Apple, are the companies that can't figure out how to make a damn light switch. The problem with modern home automation platforms is that they are too ambitious. They are trying to apply enormously complex systems to very simple tasks, and thus contaminating the simplest of electrical systems with all the convenience and ease of a Smart TV.
Sometimes that's what it feels like this whole industry is doing: adding complexity while the core decays. From automatic programming to AI coding agents, video terminals to Electron, the scope of the possible expands while the fundamentals become more and more irritating.

But back to the real point: I hope you learned about some cool light switches. Check out the Kyle Switch Plates reference and you'll start seeing these in buildings and homes, at least if you live in an area that built up during the era that they were common (1950s to the 1970s).
Is there anything more irritating than living with a partner who procrastinates on their share of the chores? Even if it isn’t malicious, it sure is annoying. Taking out the trash is YouTuber CircuitCindy’s boyfriend’s responsibility, but he often fails to do the task in a timely manner. That forced Cindy to implement a sinister […] The post YouTuber builds robot to make boyfriend take out the trash appeared first on Arduino Blog.