If you background a bash or shell process, how do you determine if it has finished? Since inter-process communication is not possible using shell scripts, people often resort to while loops or other polling mechanisms to determine whether some process has stopped. However, the one player that knows best whether a process has finished is the process itself. So if only this process could tell the parent or some other process about this... The solution is to use a FIFO, or 'pipe'. A listener process reads the pipe and executes a command for every message received through it. This was already built into PPSS. However, PPSS had this dirty while loop that polls every x seconds to determine if there are still running workers. If not, PPSS finishes itself. However, while loops and polling mechanisms are evil, dirty and bad. The nicest solution is to make PPSS fully asynchronous. To achieve this, every job must tell PPSS that it has finished. PPSS already has this listening process that listens to the...
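The mechanism described above can be sketched in a few lines of shell. This is a minimal illustration, not actual PPSS code; the names (SIG_PIPE, worker) are made up. Each background worker writes a message into a named pipe when it finishes, and the listener simply blocks on the pipe instead of polling:

```shell
#!/bin/sh
# Workers report completion through a named pipe; the listener blocks
# on the pipe and exits once every worker has reported in.
WORKERS=3
TMP=$(mktemp -d)
SIG_PIPE="$TMP/signal_pipe"
mkfifo "$SIG_PIPE"

# Open the pipe read-write on fd 3 so reads block without re-opening
# the FIFO and writers never hang waiting for a reader.
exec 3<> "$SIG_PIPE"

worker() {
    # ... the real job would run here ...
    sleep 0.1
    # Tell the listener we are done - no polling required.
    echo "FINISHED worker $1" >&3
}

for i in $(seq 1 "$WORKERS"); do
    worker "$i" &
done

# Listener: wake up only when a worker reports in.
done_count=0
while [ "$done_count" -lt "$WORKERS" ]; do
    read -r msg <&3
    done_count=$((done_count + 1))
done
wait
exec 3<&-
rm -rf "$TMP"
echo "all $WORKERS workers finished"
```

Because writes to a pipe smaller than PIPE_BUF are atomic, the completion messages from concurrent workers cannot interleave, so a single `read` per message is safe.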
over a year ago


More from Louwrentius

Bose SoundLink on-ear headphones battery replacement

Skip to the bottom two paragraphs for instructions on how to replace the battery.

I bought my Bose SoundLink on-ear Bluetooth headphones for 250 Euros around 2017 and I really like them. They are small, light, comfortable and can easily fit in a coat pocket when folded. Up until now (about 7 years later) I have replaced the ear cushions in 2019 (€25) and 2024 (€18). Early 2025, battery capacity had deteriorated to a point where it became noticeable. The battery was clearly dying.

Unfortunately these headphones aren't designed for easy battery replacement:

- Bose hasn't published instructions on how to replace the battery, doesn't offer a replacement battery and hasn't documented which battery type/model is used.
- The left 'head phone' has two Torx security screws and most people won't have the appropriate screwdriver for this size.
- There is soldering involved.

I wanted to try a battery replacement anyway as I hate to throw away a perfectly good, working product just because the battery has worn out. Maybe at some point the headband needs replacing, but with a fresh battery, these headphones can last another 7 years. Let's prevent a bit of e-waste with a little bit of cost and effort. Most of all, the cost of this battery replacement is much lower than a new pair of headphones, as the battery was €18 including taxes and shipping.

Right to repair should include easy battery replacement

Although my repair seems to have worked out fine, it requires enough effort that most people won't even try. For this reason, I feel that it should be mandatory by law that:

- Batteries in any product must be user-replaceable (no special equipment or soldering required).
- Batteries must be provided by the vendor until 10 years after the last day the product was sold (unless it's a standard format like AA(A) or 18650).
- Batteries must be provided at max 10% of the cost of the original product.
- The penalty for non-compliance should be high enough that it won't be regarded as the cost of doing business.

For that matter, all components that may wear down over time should be user-replaceable.

What you need to replace the battery

- The exact battery type: ahb571935pct-01 (350mAh) (notice the three wires!)
- A Philips #0 screwdriver / bit
- A Torx T6H security screwdriver / bit (iFixit kits have them)
- A soldering iron
- Solder
- Heat shrink for 'very thin wire'
- A multimeter (optional)
- A bit of tape to 'cap off' bare battery leads

Please note that I found another battery, ahb571935pct-03, with similar specifications (capacity and voltage), but I don't know if it will fit. Putting the headphone ear cushion back on can actually be the hardest part of the process; you need to be firm, and this process is documented by Bose.

Battery replacement steps I took

Make sure you don't short the wires on the old or new battery during replacement. The battery is located in the left 'head phone'.

- Use a multimeter to check that your new battery isn't dead (it should read 3+ volts).
- Remove the ear cushion from the left 'head phone' very gently so as not to tear the rim.
- Remove the two Philips screws that keep the driver (speaker) in place.
- Remove the two Torx screws (you may have to press a bit harder).
- Remove the speaker and be careful not to snap the wire.
- Gently remove the battery from the 'head phone'.
- Cut the wires close to the old battery (one by one!) and cover the wires on the battery to prevent a short.
- Strip the three wires from the headphones a tiny bit (just a few mm).
- Put a short piece of heat shrink on each of the three wires of the battery.
- Solder each wire to the correct wire in the ear cup.
- Adjust the location of the heat shrink over the freshly soldered joint. Use the soldering iron close to the heat shrink to shrink it (don't touch anything); this can take some time, be patient.
- Check that the heat shrink is fixed in place and can't move.
- Put the battery into its specific location in the back of the 'head phone'.
- Test the headphones briefly before reassembling them.
- Reassemble the 'head phone' (consider leaving out the two Torx screws).
- Dispose of the old battery in a responsible manner.

6 months ago 58 votes
My 71 TiB ZFS NAS after 10 years and zero drive failures

My 4U 71 TiB ZFS NAS built with twenty-four 4 TB drives is over 10 years old and still going strong. Although now on its second motherboard and power supply, the system has yet to experience a single drive failure (knock on wood). Zero drive failures in ten years, how is that possible?

Let's talk about the drives first

The 4 TB HGST drives have roughly 6000 hours on them after ten years. You might think something's off, and you'd be right. That's only about 250 days' worth of runtime. And therein lies the secret of drive longevity (I think): turn the server off when you're not using it.

According to people on Hacker News, I have my bearings wrong. The chance of having zero drive failures over 10 years for 24 drives is much higher than I thought it was. So this good result may not be related to turning my NAS off and keeping it off most of the time.

My NAS is turned off by default. I only turn it on (remotely) when I need to use it. I use a script to turn the IoT power bar on, and once the BMC (Baseboard Management Controller) is done booting, I use IPMI to turn on the NAS itself. But I could have used Wake-on-LAN as an alternative. Once I'm done using the server, I run a small script that turns the server off, waits a few seconds and then turns the wall socket off. It wasn't enough for me to just turn off the server but leave the motherboard, and thus the BMC, powered, because that's a constant 7 watts (about two Raspberry Pis at idle) being wasted 24/7.

This process works for me because I run other services on low-power devices such as Raspberry Pi 4s, or on servers that use much less power when idling than my 'big' NAS. This process reduces my energy bill considerably (the primary motivation) and also seems great for hard drive longevity. Although zero drive failures to date is awesome, N=24 is not very representative and I could just be very lucky.
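The power-on/power-off routine described above can be sketched as two small shell functions. The `ipmitool` chassis subcommands are standard, but the smart power bar URL, the BMC address and the credentials are placeholders (assumptions), and the actual scripts are not published in the post:

```shell
# Hypothetical addresses and credentials - adjust for your own setup.
NAS_BMC="192.168.1.20"
IPMI_USER="admin"
IPMI_PASS="secret"

nas_on() {
    # 1. Switch the IoT power bar socket on (the API depends on your
    #    device; this Tasmota-style URL is an assumption).
    curl -s "http://powerbar.local/cm?cmnd=Power%20On" >/dev/null
    # 2. Wait until the BMC has finished booting and answers IPMI.
    until ipmitool -I lanplus -H "$NAS_BMC" -U "$IPMI_USER" -P "$IPMI_PASS" \
            chassis power status >/dev/null 2>&1; do
        sleep 5
    done
    # 3. Power on the server itself.
    ipmitool -I lanplus -H "$NAS_BMC" -U "$IPMI_USER" -P "$IPMI_PASS" \
        chassis power on
}

nas_off() {
    # Ask the OS for a clean shutdown, give it time, then cut wall power.
    ipmitool -I lanplus -H "$NAS_BMC" -U "$IPMI_USER" -P "$IPMI_PASS" \
        chassis power soft
    sleep 60
    curl -s "http://powerbar.local/cm?cmnd=Power%20Off" >/dev/null
}
```

Wake-on-LAN (e.g. via the `wakeonlan` utility) could replace step 3, as the post notes, but then the BMC wait in step 2 is unnecessary.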
Yet, it was the same story with the predecessor of this NAS, a machine with 20 drives (1 TB Samsung Spinpoint F1s, remember those?), and I also had zero drive failures during its operational lifespan (~5 years).

The motherboard (died once)

Although the drives are still OK, I had to replace the motherboard a few years ago. The failure mode of the motherboard was interesting: it was impossible to get into the BIOS and it would occasionally fail to boot. I tried the obvious things like removing the CMOS battery, but to no avail. Fortunately, the motherboard1 was still available on eBay for a decent price, so that ended up not being a big deal.

ZFS

ZFS has worked fine all these years. I've switched operating systems over the years and I never had an issue importing the pool into the new OS install. If I were to build a new storage server, I would definitely use ZFS again. I run a zpool scrub on the drives a few times a year2. The scrub has never found a single checksum error. I must have run so many scrubs that more than a petabyte of data has been read from the drives (all drives combined), and ZFS never had to kick in.

I'm not surprised by this result at all. Drives tend to fail most often in two modes:

- Total failure, where the drive isn't even detected
- Bad sectors (read or write failures)

There is a third failure mode, but it's extremely rare: silent data corruption. Silent data corruption is 'silent' because the disk isn't aware it delivered corrupted data, or the SATA connection didn't detect any checksum errors. However, due to all the low-level checksumming, this risk is extremely small. It's a real risk, don't get me wrong, but it's a small one. To me, it's a risk you mostly care about at scale, in datacenters4, but for residential usage, it's totally reasonable to accept it3. But ZFS is not that difficult to learn, and if you are well-versed in Linux or FreeBSD, it's absolutely worth checking out. Just remember!
Sound levels (It's Oh So Quiet)

This NAS is very quiet for a NAS (video with audio). But to get there, I had to do some work. The chassis contains three sturdy 12V fans that cool the 24 drive cages. These fans are extremely loud if they run at their default speed. But because they are so beefy, they are fairly quiet when they run at idle RPM5, yet they still provide enough airflow, most of the time. But running at idle speeds was not enough, as the drives would heat up eventually, especially when being read from or written to.

Fortunately, the particular Supermicro motherboard I bought at the time allows all fan headers to be controlled through Linux. So I decided to create a script that sets the fan speed according to the temperature of the hottest drive in the chassis. I actually visited a math-related subreddit and asked for an algorithm that would best fit my need for a silent setup that also keeps the drives cool. Somebody recommended a "PID controller", which I knew nothing about. So I wrote some Python, stole some example Python PID controller code, and tweaked the parameters to find a balance between sound and cooling performance. The script has worked very well over the years and has kept the drives at 40°C or below. PID controllers are awesome and I feel they should be used in much more equipment that controls fans, temperature and so on, instead of 'dumb' on/off behaviour or less 'dumb' lookup tables.

Networking

I started out with quad-port gigabit network controllers and used network bonding to get around 450 MB/s transfer speeds between various systems. This setup required a ton of UTP cables, so eventually I got bored with that and bought some cheap Infiniband cards. That worked fine; I could reach around 700 MB/s between systems. As I decided to move away from Ubuntu and back to Debian, I faced a problem: the Infiniband cards didn't work anymore and I could not figure out how to fix it.
So I decided to buy some second-hand 10Gbit Ethernet cards, and those work totally fine to this day.

The dead power supply

When you turn this system on, all drives spin up at once (no staggered spinup) and that draws around 600W for a few seconds. I remember that the power supply was rated for 750W and the 12-volt rail should have been able to deliver enough power, but it would sometimes cut out at boot nonetheless.

UPS (or lack thereof)

For many years, I used a beefy UPS with the system to protect against power failure, just to be able to shut down cleanly during an outage. This worked fine, but I noticed that the UPS used another 10+ watts on top of the usage of the server, and I decided it had to go. Losing the system due to power shenanigans is a risk I accept.

Backups (or a lack thereof)

My most important data is backed up three times. But a lot of the data stored on this server isn't important enough for me to back up. I rely on replacement hardware and ZFS protecting against data loss due to drive failure. And if that's not enough, I'm out of luck. I've accepted that risk for 10 years. Maybe one day my luck will run out, but until then, I enjoy what I have.

Future storage plans (or lack thereof)

To be frank, I don't have any. I built this server back in the day because I didn't want to shuffle data around due to storage space constraints, and I still have ample space left. I have a spare motherboard, CPU, memory and a spare HBA card, so I'm quite likely able to revive the system if something breaks. As hard drive sizes have increased tremendously, I may eventually move away from the 24-drive-bay chassis to a smaller form factor. It's possible to create the same amount of redundant storage space with only 6-8 hard drives using RAIDZ2 (RAID 6) redundancy. Yet storage is always expensive. Another likely scenario is that in the coming years this system eventually dies and I decide not to replace it at all, and my storage hobby will come to an end.
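The fan-control approach described earlier (the author's real script is Python, which is not reproduced in the post) can be sketched as a fixed-point PID step in shell. The gains, setpoint and the PWM sysfs path are illustrative assumptions, not values from the post:

```shell
# Toy PID step in integer shell arithmetic: temperatures in whole
# degrees C, gains scaled by 100 to stay in integer math.
KP=400; KI=20; KD=100   # illustrative gains (x100)
SETPOINT=40             # target temperature of the hottest drive
integral=0; prev_err=0

pid_step() {
    temp="$1"
    err=$((temp - SETPOINT))
    integral=$((integral + err))
    deriv=$((err - prev_err))
    prev_err=$err
    # Combine the three terms, rescale, clamp to a PWM duty of 0-255.
    out=$(( (KP * err + KI * integral + KD * deriv) / 100 ))
    [ "$out" -lt 0 ] && out=0
    [ "$out" -gt 255 ] && out=255
    echo "$out"
}

# In a real control loop you would read the hottest drive temperature
# (for example via smartctl) and write the duty cycle to the fan's PWM
# node, e.g. (path is an assumption, varies per motherboard):
#   while true; do
#       duty=$(pid_step "$(hottest_drive_temp)")
#       echo "$duty" > /sys/class/hwmon/hwmon2/pwm1
#       sleep 10
#   done
```

The appeal over a lookup table, as the post argues, is that the integral term slowly ramps the fans until the temperature error actually disappears, instead of parking at a fixed speed per temperature band.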
1. I needed the same board, because the server uses four PCIe slots: 3 x HBA and 1 x 10Gbit NIC. ↩
2. It takes ~20 hours to complete a scrub and it uses a ton of power while doing so. As I'm on a dynamic power tariff, I run it on 'cheap' days. ↩
3. Every time I listen to ZFS enthusiasts, you get the impression that you are taking insane risks with your data if you don't run ZFS. I disagree; it all depends on context and circumstances. ↩
4. Enterprise hard drives used in servers and SANs had larger sector sizes to accommodate even more checksumming data to protect against silent data corruption. ↩
5. Because there is little airflow by default, I had to add a fan to cool the four PCIe cards (HBA and networking) or they would have gotten way too hot. ↩

12 months ago 44 votes
The Raspberry Pi 5 is no match for a tini-mini-micro PC

I've always been fond of the idea of the Raspberry Pi: an energy-efficient, small, cheap but capable computer. An ideal home server. Until the Pi 4, the Pi was not that capable, and only with the relatively recent Pi 5 (fall 2023) do I feel the Pi is OK performance-wise, although still hampered by SD card performance1. And the Pi isn't that cheap either. The Pi 5 can be fitted with an NVME SSD, but for me it's too little, too late. Because I feel there is a type of computer on the market that is much more compelling than the Pi. I'm talking about the tinyminimicro home lab 'revolution' started by servethehome.com about four years ago (2020).

A 1L mini PC (Elitedesk 705 G4) with a Raspberry Pi 5 on top

During the pandemic, the Raspberry Pi was in short supply and people started looking for alternatives. The people at servethehome realised that these small enterprise desktop PCs could be a good option. Dell (Micro), Lenovo (Tiny) and HP (Mini) all make these small desktop PCs, which are also known as 1L (one liter) PCs. These mini PCs are not cheap2 when bought new, but older models are sold at a very steep discount as enterprises offload old models by the thousands on the second-hand market (through intermediaries). Although these computers are often several years old, they are still much faster than a Raspberry Pi (including the Pi 5) and can hold more RAM. I decided to buy two HP Elitedesk mini PCs to try them out, one based on AMD and the other based on Intel.

The Hardware

                     Elitedesk Mini 800 G3    Elitedesk Mini 705 G4
CPU                  Intel i5-6500 (65W)      AMD Ryzen 3 PRO 2200GE (35W)
RAM                  16 GB (max 32 GB)        16 GB (max 32 GB)
Storage              250 GB (SSD)             250 GB (NVME)
Network              1Gb (Intel)              1Gb (Realtek)
WiFi                 Not installed            Not installed
Display              2 x DP, 1 x VGA          3 x DP
Remote management    Yes                      No
Idle power           4 W                      10 W
Price                €160                     €115

The AMD-based system is cheaper, but you 'pay' in higher idle power usage.
In absolute terms 10 watts is still decent, but the Intel model directly competes with the Pi 5 on idle power consumption.

Elitedesk 705 left, Elitedesk 800 right (click to enlarge)

Regarding display output, these devices have two fixed DisplayPort outputs, but there is one port that is configurable: it can be DisplayPort, VGA or HDMI. Depending on the supplier you may be able to configure this option, or you can buy the modules separately for €15-€25 online.

Click on image for official specs in PDF format

Both models seem to be equipped with socketed CPUs. Although options for this form factor are limited, it's possible to upgrade.

Comparing cost with the Pi 5

The Raspberry Pi 5 with (max) 8 GB of RAM costs ~91 Euro, almost exactly the same price as the AMD-based mini PC3 in its base configuration (8GB RAM). Yet, with the Pi, you still need:

- a power supply (€13)
- a case (€11)
- an SD card or NVME SSD (€10-€45)
- an NVME hat (€15) (optional, but would make the comparison fairer)

It's true that I'm comparing a new computer to a second-hand device, and you can decide if that matters in this case. With a complete Pi 5 at around €160 including taxes and shipping, the AMD-based 1L PC is clearly the cheaper and still more capable option.

Comparing performance with the Pi 5

The first two rows in this table show the Geekbench 6 scores of the Intel and AMD mini PCs I bought for evaluation. I've added the benchmark results of some other computers I have access to, just to provide some context.

CPU                                   Single-core    Multi-core
AMD Ryzen 3 PRO 2200GE (32W)          1148           3343
Intel i5-6500 (65W)                   1307           3702
Mac Mini M2                           2677           9984
Mac Mini i3-8100B                     1250           3824
HP Microserver Gen8 Xeon E3-1200v2    744            2595
Raspberry Pi 5                        806            1861
Intel i9-13900k                       2938           21413
Intel E5-2680 v2                      558            5859

Sure, these mini PCs won't come close to modern hardware like the Apple M2 or the Intel i9.
But if we look at the performance of the mini PCs, we can observe that:

- The Intel i5-6500T CPU is 13% faster in single-core than the AMD Ryzen 3 PRO.
- Both the Intel and AMD processors are 42% - 62% faster than the Pi 5 in single-core performance.

Storage (performance)

If there's one thing that really holds the Pi back, it's the SD card storage. If you buy a decent SD card (A1/A2) that doesn't have terrible random IOPS performance, you realise that you can get a SATA or NVME SSD for almost the same price, with more capacity and much better (random) IO performance. With the Pi 5, NVME SSD storage isn't standard and requires an extra hat. I feel that the missing integrated NVME storage option is a missed opportunity that - in my view - hurts the Pi 5.

In contrast, the Intel-based mini PC came with a SATA SSD in a special mounting bracket. That bracket also contained a small fan to keep the underlying NVME storage (not present) cooled.

There is a fan under the SATA SSD (click to enlarge)

The AMD-based mini PC was equipped with an NVME SSD and was not equipped with the SSD mounting bracket. The low price must come from somewhere... However, both systems support SATA SSD storage, an 80mm NVME SSD and a small 2230 slot for a WiFi card. There seems to be no room in the 705 G4 to put in a small SSD, but there are adapters available that convert the WiFi slot into a slot usable for an extra NVME SSD, which might be an option for the 800 G3.

Noise levels (subjective)

Both systems are barely audible at idle, but you will notice them (if you're sensitive to that sort of thing). The AMD system seems to become quite loud under full load. The Intel system also became loud under full load, but much more like a Mac Mini: the noise is less loud and more tolerable, in my view.

Idle power consumption

Elitedesk 800 (Intel)

I can get the Intel-based Elitedesk 800 G3 to 3.5 watts at idle. Let that sink in for a moment.
That's about the same power draw as the Raspberry Pi 5 at idle! Just installing Debian 12 instead of Windows 10 makes the idle power consumption drop from 10-11 watts to around 7 watts. Then on Debian, you:

- run apt install powertop
- run powertop --auto-tune (saves ~2 watts)
- unplug the monitor (run headless) (saves ~1 watt)

To make the powertop --auto-tune command run at boot, put it in /etc/rc.local:

#!/usr/bin/env bash
powertop --auto-tune
exit 0

Then make the file executable with chmod +x /etc/rc.local.

So, for about the same idle power draw you get so much more performance, and you can go beyond the max 8GB RAM of the Pi 5.

Elitedesk 705 (AMD)

I managed to get this system to 10-11 watts at idle, but it was a pain to get there. I measured around 11 watts idle power consumption running the preinstalled Windows 11 (with a monitor connected). After installing Debian 12, the system used 18 watts at idle, and so began a journey of many hours trying to solve this problem.

The culprit is the integrated Radeon Vega GPU. To solve the problem you have to:

- configure the 'BIOS' to only use UEFI
- reinstall Debian 12 using UEFI
- install the appropriate firmware with apt install firmware-amd-graphics

If you boot the computer in legacy 'BIOS' mode, the AMD Radeon firmware won't load no matter what you try. You can see this by issuing the commands:

rmmod amdgpu
modprobe amdgpu

You may notice errors on the physical console or in the logs that the GPU driver isn't loaded because it's missing firmware (a lie). This whole process got me to around 12 watts at idle. To get to ~10 watts idle you also need to run powertop --auto-tune and disconnect the monitor, as stated in the 'Intel' section earlier. Given the whole picture, 10-11 watts at idle is perfectly OK for a home server, and if you just want the cheapest option possible, this is still a fine system.

KVM Virtualisation

I'm running vanilla KVM (Debian 12) on these mini PCs and it works totally fine.
I've created multiple virtual machines without issue and performance seemed perfectly adequate.

Boot performance

From the moment I pressed the power button to SSH connecting, it took 17 seconds for the Elitedesk 800. The Elitedesk 705 took 33 seconds until I got an SSH shell. These boot times include the 5-second boot delay in the GRUB bootloader screen that is the default for Debian 12.

Remote management support

Some of you may be familiar with IPMI (iLO, DRAC, and so on), which is standard on most servers. But there is also similar technology for (enterprise) desktops. Intel AMT/ME is a technology used for remote out-of-band management of computers. It can be an interesting feature in a homelab environment, but I have no need for it. If you want to try it, you can follow this guide. For most people, it may be best to disable the AMT/ME feature as it has a history of security vulnerabilities. This may not be a huge issue within a trusted home network, but you have been warned. The AMD-based Elitedesk 705 didn't come with equivalent remote management capabilities as far as I can tell.

Alternatives

The models discussed here are older models that were selected for a particular price point. Newer models from Lenovo, HP and Dell are equipped with more modern processors which are faster and have more cores. They are often also priced significantly higher. If you are looking for low-power small-form-factor PCs with more potent or customisable hardware, you may want to look at second-hand NUC-formfactor PCs.

Stacking multiple mini PCs

The AMD-based Elitedesk 705 G4 is closed at the top, and it's possible to stack other mini PCs on top. The Intel-based Elitedesk 800 G3 has a perforated top enclosure, and putting another mini PC on top might suffocate the CPU fan. As you can see, the bottom/foot of the mini PC doubles as a VESA mount and has four screw holes.
By putting some screws in those holes, you can effectively create standoffs that give the machine below enough space to breathe (maybe you can use actual standoffs).

Evaluation and conclusion

I think these second-hand 1L tinyminimicro PCs are better suited to play the role of home (lab) server than the Raspberry Pi (5). The increased CPU performance, the built-in SSD/NVME support, the option to go beyond 8 GB of RAM (up to 32 GB) and the price point on the second-hand market really make a difference. I love the Raspberry Pi and I still have a ton of Pi 4s. This solar-powered blog is hosted on a Pi 4 because of its low power consumption and the availability of GPIO pins for the solar status display. That said, unless the Raspberry Pi becomes a lot cheaper (and more potent), I'm not so sure it's such a compelling home server.

This blog post featured on the front page of Hacker News.

1. Even a decent quality SD card is no match (in terms of random IOPS and sequential throughput) for a regular SATA or NVME SSD. The fact that the Pi 5 has no on-board NVME support is a huge shortcoming in my view. ↩
2. In the sense that you can buy a ton of fully decked-out Pi 5s for the price of one such system. ↩
3. The base price included the external power brick and 256GB NVME storage. ↩

a year ago 86 votes
AI is critically important but not for you

Before ChatGPT caused a sensation, big tech companies like Facebook and Apple were betting their future growth on virtual reality. But I'm convinced that virtual reality will never be a mainstream thing. If you ever used VR you know why:

- A heavy thing on your head that messes up your hair
- Nausea

The focus on virtual reality felt like desperation to me. The desperation of big tech companies trying to find new growth, ideally a monopoly they control1, to satisfy the demands of shareholders. And then OpenAI dropped ChatGPT, and all the big tech companies pivoted so fast, because contrary to VR, AI doesn't involve making people nauseated and look silly.

It's probably obvious that I feel it's not about AI itself. It is really about huge tech companies that have found a new way to sustain growth a bit longer, now that all other markets have been saturated. Flush with cash, they went nuts and bought up all the AI accelerator hardware2, which in turn uses unspeakable amounts of energy to train new large language models.

Despite all the hype, current AI technology is at its core a very sophisticated statistical model. It's all about probabilities; it can't actually reason. As I see it, work done by AI thus can't be trusted. Depending on the specific application, that may be less of an issue, but it is a fundamental limitation of current technology. And this gives me pause, as it limits the application where it is most wanted: to control labour. To reduce the cost of headcount and to suppress wages. As AI tools become capable enough, it would be irresponsible towards shareholders not to pursue this direction.

All this just to illustrate that the real value of AI is not for the average person in the street. The true value is for those bigger companies who can keep on growing, and the rest is just collateral damage. But I wonder: when the AI hype is over, what new hype will take its place? I can't see it. I can't think of it.
But I recognise that the internet created efficiencies that are convenient, yet social media weaponised this convenience to exploit our fundamental human weaknesses. As shareholder value rose, social media slowly chipped away at the fabric of our society: trust.

1. I sold my Oculus Rift CV1 long ago; I lost hundreds of dollars of content, but I refuse to create a Facebook/Meta account. ↩
2. Climate change accelerators. ↩

a year ago 37 votes
How to run victron veconfigure on a mac

Introduction

Victron Multiplus-II inverter/chargers are configured with the veconfigure1 tool. Unfortunately this is a Windows-only tool, but there is still a way for Apple users to run it without any problems.

Tip: if you've never worked with the Terminal app on MacOS, it might not be an easy process, but I've done my best to make it as simple as I can.

A tool called 'Wine' makes it possible to run Windows applications on MacOS. There are some caveats, but none of them apply to veconfigure; this tool runs great! I won't cover in this tutorial how to make the MK-3 USB cable work. This tutorial is only meant for people who have a Cerbo GX or similar device, or run VenusOS, which can be used to remotely configure the Multiplus device(s).

Step 1: install brew on MacOS

Brew is a tool that can install additional software.

- Visit https://brew.sh and copy the install command
- Open the Terminal app on your Mac and paste the command
- Press 'Enter' or Return

It can take a few minutes for 'brew' to install.
Step 2: install wine

Enter the following two commands in the terminal:

brew tap homebrew/cask-versions
brew install --cask --no-quarantine wine-stable

Download Victron veconfigure

- Visit this page
- Scroll to the section "VE Configuration tools for VE.Bus Products"
- Click on the link "Ve Configuration Tools"
- You'll be asked if it's OK to download this file (VECSetup_B.exe), which it is

Start the veconfigure installer with wine

- Open a terminal window
- Run cd
- Run wine Downloads\VECSetup_B.exe
- Observe that the veconfigure Windows setup installer starts
- Click on Next, Next, Install and Finish
- veconfigure will run for the first time

Click on the top left button on the video to enlarge. These are the actual install steps:

How to start veconfigure after you close the app

- Open a terminal window
- Run cd
- Run cd .wine/drive_c/Program\ Files\ \(x86\)/VE\ Configure\ tools/
- Run wine VEConfig.exe
- Observe that veconfigure starts

Allow veconfigure access to files in your Mac Downloads folder

- Open a terminal window
- Run cd
- Run cd .wine/drive_c/
- Run ln -s ~/Downloads

We just made the Downloads directory on your Mac accessible to the veconfigure software. If you put the .RSVC files in the Downloads folder, you can edit them. Please follow the instructions for remote configuration of the Multiplus II.

1. Click on the "Ve Configuration Tools" link in the "VE Configuration tools for VE.Bus Products" section. ↩
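The "start veconfigure after you close the app" steps above can be wrapped in a small shell function so you don't have to retype the long path each time. This is a convenience sketch; the path is the one used in the post and may differ if your Wine prefix is configured differently:

```shell
# Launch veconfigure from its install location inside the Wine prefix.
launch_veconfigure() {
    # Path used in the post; adjust if your Wine prefix differs.
    dir="$HOME/.wine/drive_c/Program Files (x86)/VE Configure tools"
    cd "$dir" || { echo "veconfigure not found in $dir" >&2; return 1; }
    wine VEConfig.exe
}
```

Put the function in your ~/.zshrc (the default shell on modern MacOS) and a plain `launch_veconfigure` in any terminal will start the tool.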

a year ago 63 votes

More in technology

illbruck SONEX

SONEX kills disk drive hum.

10 hours ago 2 votes
the video lunchbox

An opening note: would you believe that I have been at this for five years, now? If I planned ahead better, I would have done this on the five-year anniversary, but I missed it. Computers Are Bad is now five years and four months old. When I originally launched CAB, it was my second attempt at keeping up a blog. The first, which I had called 12 Bit Word, went nowhere and I stopped keeping it up. One of the reasons, I figured, is that I had put too much effort into it. CAB was a very low-effort affair, which was perhaps best exemplified by the website itself. It was monospace and 80 characters wide, a decision that I found funny (in a shitposty way) and generated constant complaints. To be fair, if you didn't like the font, it was "user error:" I only ever specified "monospace" and I can't be blamed that certain platforms default to Courier. But there were problems beyond the appearance; the tool that generated the website was extremely rough and made new features frustrating to implement. Over the years, I have not invested much (or really any) effort in promoting CAB or even making it presentable. I figured my readership, interested in vintage computing, would probably put up with it anyway. That is at least partially true, and I am not going to put any more effort into promotion, but some things have changed. Over time I have broadened my topics quite a bit, and I now regularly write about things that I would have dropped as "off topic" three or four years ago. Similarly, my readership has broadened, and probably to a set of people that find 80 characters of monospace text less charming. I think I've also changed my mind in some ways about what is "special" about CAB. One of the things that I really value about it, that I don't think comes across to readers well, is the extent to which it is what I call artisanal internet. It's like something you'd get at the farmer's market. 
What I mean by this is that CAB is a website generated by a static site generator that I wrote, and a newsletter sent by a mailing list system that I wrote, and you access them by connecting directly to a VM that I administer, on a VM cluster that I administer, on hardware that I own, in a rack that I lease in a data center in downtown Albuquerque, New Mexico. This is a very old-fashioned way of doing things, now, and one of the ironies is that it is a very expensive way of doing things. It would be radically cheaper and easier to use wordpress.com, and it would probably go down less often and definitely go down for reasons that are my fault less often. But I figure people listen to me in part because I don't use wordpress.com, because I have weird and often impractical opinions about how to best contribute to internet culture. I spent a week on a cruise ship just recently, and took advantage of the great deal of time I had to look at the sea to also get some work done. Strategically, I decided, I want to keep the things that are important to me (doing everything myself) and move on from the things that are not so important (the website looking, objectively, bad). So this is all a long-winded announcement that I am launching, with this post, a complete rewrite of the site generator and partial rewrite of the mailing list manager. This comes with several benefits to you. First, computer.rip is now much more readable and, arguably, better looking. Second, it should be generally less buggy (although to be fair I had eliminated most of the problems with the old generator through sheer brute force over the years). Perhaps most importantly, the emails sent to the mailing list are no longer the unrendered Markdown files. I originally didn't use markup of any kind, so it was natural to just email out the plaintext files. But then I wanted links, and then I wanted pictures, leading me to implement Markdown in generating the webpages... 
but I just kept emailing out the plaintext files. I strongly considered switching to HTML emails as a solution and mostly finished the effort, but in the end I didn't like it. HTML email is a massive pain in the ass and, I think, distasteful. Instead, I modified a Markdown renderer to create human-readable plaintext output. Things like links and images will still be a little weird in the plaintext emails, but vastly better than they were before.

I expect some problems to surface when I put this all live. It is quite possible that RSS readers will consider the most recent ten posts to all be new again due to a change in how the article IDs are generated. I tried to avoid that happening but, look, I'm only going to put so much time into testing and I've found RSS readers to be surprisingly inconsistent. If anything else goes weird, please let me know.

There has long been a certain connection between the computer industry and the art of animation. The computer, with a frame-oriented raster video output, is intrinsically an animation machine. Animation itself is an exacting, time-consuming process that has always relied on technology to expand the frontier of the possible. Walt Disney, before he was a business magnate, was a technical innovator in animation. He made great advances in cel animation techniques during the 1930s, propelling the Disney Company to fame not only by artistic achievement but also by reducing the cost and time involved in creating feature-length animated films. Most readers will be familiar with the case of Pixar, a technical division of Lucasfilm that operated primarily as a computer company before its 1986 spinoff under computer executive Steve Jobs---who led the company through a series of creative successes that overshadowed the company's technical work until it was known to most only as a film studio.

Animation is hard.
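The idea of rendering Markdown to readable plaintext (rather than HTML) is simple enough to sketch. This is purely an illustration of the approach, not the author's actual renderer: a few regex passes that turn links into "text (url)", images into bracketed placeholders, and strip emphasis and heading markers.

```python
import re

def markdown_to_plaintext(md: str) -> str:
    """Render a small subset of Markdown as human-readable plaintext.

    A rough sketch of the idea described above, not the actual
    renderer used by the site: links and images keep their URLs
    visible inline, and formatting markers are removed.
    """
    text = md
    # Images: ![alt](url) -> [image: alt <url>]  (run before the link pass)
    text = re.sub(r'!\[([^\]]*)\]\(([^)]+)\)', r'[image: \1 <\2>]', text)
    # Links: [text](url) -> text (url)
    text = re.sub(r'\[([^\]]+)\]\(([^)]+)\)', r'\1 (\2)', text)
    # Emphasis: strip **bold**, *em*, _em_ markers
    text = re.sub(r'(\*\*|\*|_)(.+?)\1', r'\2', text)
    # Headings: drop leading # markers, keeping the heading text
    text = re.sub(r'^#+\s*', '', text, flags=re.MULTILINE)
    return text
```

A real renderer would walk the Markdown parser's syntax tree instead of using regexes, but the output contract is the same: plaintext a human can read in a mail client, with the URLs still recoverable.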
There are several techniques, but most ultimately come down to an animator using experience, judgment, and trial and error to get a series of individually composed frames to combine into fluid motion. Disney worked primarily in cel animation: each element of each frame was hand-drawn, but on independent transparent sheets. Each frame was created by overlaying the sheets like layers in a modern image editor. The use of separate cels made composition and corrections easier, by allowing the animator to move and redraw single elements of the final image, but it still took a great deal of experience to produce a reasonable result.

The biggest challenge was in anticipating how motion would appear. From the era of Disney's first work, problems like registration (consistent positioning of non-moving objects) had been greatly simplified by the use of clear cels and alignment pegs on the animator's desk that held cels in exact registration for tracing. But some things in an animation are supposed to move; I would say that's what makes it animation. There was no simple jig for ensuring that motion would come out smoothly, especially for complex movements like a walking or gesturing character. The animator could flip two cels back and forth, but that was about as good as they could get without dedicating the animation to film.

For much of the mid-century, a typical animation workflow looked like this: a key animator would draw out the key frames in final or near-final quality, establishing the most important moments in the animation, the positions and poses of the characters. The key animator or an assistant would then complete a series of rough pencil sketches for the frames that would need to go in between. These sketches were sent to the photography department for a "pencil test." In the photography department, a rostrum camera was used: a cinema camera, often 16mm, permanently mounted on an adjustable stand that pointed it down at a flat desk.
The rostrum camera looked a bit like a photographic enlarger and worked much the same way, but backwards: the photographer laid out the cels or sketches on the desk, adjusted the position and focus of the camera for the desired framing, and then exposed one frame. This process was repeated, over and over and over, a simple economy that explains the common use of a low 12 FPS frame rate in animation. Once the pencil test had been photographed, the film went to the lab where it was developed, and then returned to the animation studio where the production team could watch it played on a cinema projector in a viewing room. Ideally, any problems would be identified during this first viewing before the key frames and pencil sketches were sent to the small army of assistant animators. These workers would refine the cels and redraw the pencil sketches in part by tracing, creating the "in between" frames of the final animation. Any needed changes were costly, even when caught at the earliest stage, as it usually took a full day for the photography department to return a new pencil test (making the pencil test very much analogous to the dailies used in film).

What separated the most skilled animators from amateurs, then, was often their ability to visualize the movement of their individual frames by imagination. They wanted to get it right the first time.

Graphics posed a challenge to computers for similar reasons. Even a very basic drawing involves a huge number of line segments, which a computer will need to process individually during rendering. Add properties such as color, consider the practicalities of rasterizing, and then make it all move: just the number of simple arithmetic problems involved in computer graphics becomes enormous. It is not a coincidence that we picture all early computer systems as text-only, although it is a bit unfair. Graphical output is older than many realize, originating with vector-mode CRT displays in the 1950s.
Still, early computer graphics were very slow. Vector-mode displays were often paired with high-end scientific computers and you could still watch them draw in real time. Early graphics-intensive computer applications like CAD used specialized ASICs for drawing and yet provided nothing like the interactivity we expect from computers today. The complexity of computer graphics ran head-first against an intense desire for more capable graphical computers, driven most prominently by the CAD industry. Aerospace and other advanced engineering fields were undergoing huge advancements during the second half of the 20th century. World War II had seen adoption of the jet engine, for example, machines which were extremely powerful but involved complex mathematics and a multitude of 3D parts that made them difficult for a human to reason over. The new field of computer-aided design promised a revolutionary leap in engineering capability, but ironically, the computers were quite bad at drawing. In the first decades, CAD output was still being sent to traditional draftsmen for final drawings. The computers were not only slow, but unskilled at the art of drafting: limitations on the number and complexity of the shapes that computers could render limited them to only very basic drawings, without the extensive annotations that would be needed for manufacturing.

During the 1980s, the "workstation" began to replace the mainframe in engineering applications. Today, "workstation" mostly just identifies PCs that are often extra big and always extra expensive. Historically, workstations were a different class of machines from PCs that often employed fundamentally different architectures. Workstations were often RISC, an architecture selected for better mathematical performance, frequently ran UNIX or a derivative, and featured the first examples of what we now call a GPU. Some things don't change: they were also very big, and very expensive.
It was the heady days of the space program and the Concorde, then, that brought us modern computer graphics. The intertwined requirements for scientific computing, numerical simulation, and computer graphics that emerged from Cold War aerospace and weapons programs forged a strong bond between high-end computing and graphics. One could perhaps say that the nexus between AI and GPUs today is an extension of this era, although I think it's a bit of a stretch given the text-heavy applications. The echoes of the dawn of computer graphics are much quieter today, but still around. They persist, for example, in the heavy emphasis on computer visualization seen throughout scientific computing but especially in defense-related fields. They persist also in the names of the companies born in that era, names like Silicon Graphics and Mentor Graphics.

The development of video technology, basically the combination of preexisting television technology with new video tape recorders, led to a lot of optimizations in film. Video was simply not of good enough quality to displace film for editing and distribution, but it was fast and inexpensive. For example, beginning in the 1960s filmmakers began to adopt a system called "video assist." A video camera was coupled to the film camera, either side-by-side with matched lenses or even sharing the same lens via a beam splitter. By running a video tape recorder during filming, the crew could generate something like an "instant daily" and play the tape back on an on-set TV. For the first time, a director could film a scene and then immediately rewatch it. Video assist was a huge step forward, especially in the television industry, where it furthered the marriage of film techniques and television techniques for the production of television dramas.

It certainly seems that there should be a similar technique for animation. It's not easy, though.
Video technology was all designed around sequences of frames in a continuous analog signal, not individual images stored discretely. With the practicalities of video cameras and video recorders, it was surprisingly difficult to capture single frames and then play them back to back.

In the 1970s, animators Bruce Lyon and John Lamb developed the Lyon-Lamb Video Animation System (VAS). The original version of the VAS was a large workstation that replaced a rostrum camera with a video camera, monitor, and a custom video tape recorder. Much like the film rostrum camera, the VAS allowed an operator to capture a single frame at a time by composing it on the desk. Unlike the traditional method, the resulting animation could be played back immediately on the included monitor. The VAS was a major innovation in cel animation, and netted both an Academy Award and an Emmy for technical achievement. While it's difficult to say for sure, it seems like a large portion of the cel-animated features of the '80s used the VAS for pencil tests. The system was particularly well-suited to rotoscoping, overlaying animation on live-action images. Through a combination of analog mixing techniques and keying, the VAS could directly overlay an animator's work on the video, radically accelerating the process. To demonstrate the capability, John Lamb created a rotoscoped music video for the Tom Waits song "The One That Got Away." The resulting video, titled "Tom Waits for No One," was probably the first rotoscoped music video as well as the first production created with the video rotoscope process. As these landmarks often do, it languished in obscurity until it was quietly uploaded to YouTube in 2006.

The VAS was not without its limitations. It was large, and it was expensive. Even later generations of the system, greatly miniaturized through the use of computerized controls and more modern tape recorders, came in at over $30,000 for a complete system.
And the VAS was designed around the traditional rostrum camera workflow, intended for a dedicated operator working at a desk. For many smaller studios the system was out of reach, and for forms of animation that were not amenable to top-down photography on a desk, the VAS wasn't feasible.

There are some forms of animation that are 3D---truly 3D. Disney had produced pseudo-3D scenes by mounting cels under a camera on multiple glass planes, for example, but it was obviously possible to do so in a more complete form by the use of animated sculptures or puppets. Practical challenges seem to have left this kind of animation mostly unexplored until the rise of its greatest producer, Will Vinton.

Vinton grew up in McMinnville, Oregon, but left to study at UC Berkeley. His time in Berkeley left him not only with an architecture degree (although he had studied filmmaking as well), but also a friendship with Bob Gardiner. Gardiner had a prolific and unfortunately short artistic career, in which he embraced many novel media including the hologram. Among his inventions, though, seems to have been claymation itself: Gardiner was fascinated with sculpting and posing clay figures, and demonstrated the animation potential to Vinton. Vinton, in turn, developed a method of using his student film camera to photograph the clay scenes frame by frame.

Their first full project together, Closed Mondays, took the Academy Award for Best Animated Short Film in 1975. It was notable not only for the moving clay sculptures, but for its camerawork. Vinton had realized that in claymation, where scenes are composed in real 3D space, the camera can be moved from frame to frame just like the figures. Not long after this project, Vinton and Gardiner split up. Gardiner seems to have been a prolific artist in that way where he could never stick to one thing for very long, and Vinton had a mind towards making a business out of this new animation technology.
It was Vinton who christened it Claymation, then a trademark of his new studio. Vinton returned to his home state and opened Will Vinton Studios in Portland. Vinton Studios released a series of successful animated shorts in the '70s, and picked up work on numerous other projects, contributing for example to the "Wizard of Oz" film sequel "Return to Oz" and the Disney film "Captain EO." By far Vinton Studios' most famous contributions to our culture, though, are their advertising projects. Will Vinton Studios brought us the California Raisins, the Noid, and walking, talking M&M's.

Will Vinton Studios struggled with producing claymation at commercial scale. Shooting with film cameras, it took hours to see the result. Claymation scenes were more difficult to rework than cel animation, setting an even larger penalty for reshoots. Most radically, claymation scenes had to be shot on sets, with camera and light rigging. Reshooting sections without continuity errors was as challenging as animating those sections in the first place. To reduce rework, they used pencil tests: quicker, lower-effort versions of scenes shot to test the lighting, motion, and sound synchronization before photography with a film camera. Their pencil tests were apparently captured on a crude system of customized VCRs, allowing the animator to see the previous frame on a monitor as they composed the next, and then to play back the whole sequence. It was better than working from film, but it was still slow going.

The area from Beaverton to Hillsboro, in Oregon near Portland, is sometimes called "the silicon forest" largely on the influence of Intel and Tektronix. As in the better known silicon valley, these two keystone companies were important not only on their own, but also as the progenitors of dozens of new companies. Tektronix, in particular, had a steady stream of employees leaving to start their own businesses. Among these alumni was Mentor Graphics.
Mentor Graphics was an early player in electronic design automation (EDA), sort of like a field of CAD specialized to electronics. Mentor products assisted not just in the physical design of circuit boards and ICs, but also simulation and validation of their functionality. Among the challenges of EDA is its fundamentally graphical nature: the final outputs of EDA are often images, masks for photolithographic manufacturing processes, and engineers want to see both manufacturing drawings and logical diagrams as they work on complex designs.

When Mentor started out in 1981, EDA was in its infancy and relied mostly on custom hardware. Mentor went a different route, building a suite of software products that ran on Motorola 68000-based workstations from Apollo. The all-software architecture had cost and agility advantages, and Mentor outpaced their competition to become the field's leader.

Corporations want growth, and by the 1990s Mentor had a commanding position in EDA and went looking for other industries to which their graphics-intensive software could be applied. One route they considered was, apparently, animation: computer animation was starting to take off, and there were very few vendors for not just the animation software but the computer platforms capable of rendering the product. In the end, Mentor shied away: companies like Silicon Graphics and Pixar already had a substantial lead, and animation was an industry that Mentor knew little about.

As best I can tell, though, it was this brief investigation of a new market that exposed Mentor engineering managers Howard Mozeico and Arthur Babitz to the animation industry. I don't know much about their career trajectories in the years shortly after, only that they both decided to leave Mentor for their own reasons. Arthur Babitz went into independent consulting, and found a client reminiscent of his work at Mentor, an established animation studio that was expanding into computer graphics: Will Vinton Studios.
Babitz's work at Will Vinton Studios seems to have been largely unrelated to claymation, but it exposed him to the process, and he watched the way they used jury-rigged VCRs and consumer video cameras to preview animations. Just a couple of years later, Mozeico and Babitz talked about their experience with animation at Mentor, a field they were both still interested in. Babitz explained the process he had seen at Will Vinton Studios, and his ideas for improving it. Both agreed that they wanted to figure out a sort of retirement enterprise, what we might now call a "lifestyle business": they each wanted to found a company that would keep them busy, but not too busy.

The pair incorporated Animation Toolworks, headquartered in Mozeico's Sherwood, Oregon home. In 1998 Animation Toolworks hit trade shows with the Video Lunchbox. The engineering was mostly by Babitz, the design and marketing by Mozeico, and the manufacturing done on contract by a third party. The device took its name from its form factor, a black crinkle paint box with a handle on top of its barn-roof-shaped lid. It was something like the Lyon Lamb VAS, if it was portable, digital, and relatively inexpensive.

The Lunchbox was essentially a framegrabber, a compact and simplified version of the computer framegrabbers that were coming into use in the animation industry. You plugged a video camera into the input, and a television monitor into the output. You could see the output of the camera, live, on the monitor while you composed a scene. Then, one press of a button captured a single frame and stored it. With a press of another button, you could swap between the stored frame and the live image, helping to compose the next. You could even enable an automatic "flip-flop" mode that alternated the two rapidly, for hands-free adjustment.
Each successive press of the capture button stored another frame to the Lunchbox's memory, and buttons allowed you to play the entire set of stored frames as a loop, or manually step forward or backward through the frames. And that was basically it: there were a couple of other convenience features like an intervalometer (for time lapse) and the ability to record short sections of real-time video, but complete operation of the device was really very simple. That seems to have been one of its great assets. The Lunchbox was much easier to sell after Mozeico gave a brief demonstration and said that that was all there was to it.

To professionals, the Lunchbox was a more convenient, more reliable, and more portable version of the video tape recorder or computer framegrabber systems they were already using for pencil tests. Early customers of Animation Toolworks included Will Vinton Studios alongside other animation giants like Disney, MTV, and Academy Award-winning animator Mark Osborne. Animation Toolworks press quoted animators from these firms commenting on the simplicity and ease of use, saying that it had greatly sped up the animation test process. In a review for Animation World Magazine, Kellie-Bea Rainey wrote:

In most cases, computers as framegrabbers offer more complications than solutions. Many frustrations stem from the complexity of learning the computer, the software and its constant upgrades. But one of the things Gary Schwartz likes most about the LunchBox is that the system requires no techno-geeks. "Computers are too complex and the technology upgrades are so frequent that the learning curve keeps you from mastering the tools. It seems that computers are taking the focus off the art. The Video LunchBox has a minimum learning curve with no upgrade manuals. Everything is in the box, just plug it in."

Indeed, the Lunchbox was so simple that it caught on well beyond the context of professional studios. It is remembered most as an educational tool.
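The whole control surface of the device, as described above, amounts to a frame store with a handful of operations. As a purely hypothetical toy model (the real Lunchbox was dedicated video hardware, not software), it might look like this:

```python
class Lunchbox:
    """A toy software model of the Video Lunchbox's controls.

    Illustrative only: capture stores one frame per button press,
    flip_flop alternates the last stored frame with the live image,
    step moves through stored frames, and play_loop replays the set.
    """

    def __init__(self):
        self.frames = []   # stored frames, in capture order
        self.cursor = 0    # position for manual stepping

    def capture(self, live_frame):
        """One press of the capture button stores the live frame."""
        self.frames.append(live_frame)

    def flip_flop(self, live_frame):
        """Alternate the last stored frame with the live image, forever."""
        while True:
            yield self.frames[-1]
            yield live_frame

    def step(self, direction):
        """Step forward (+1) or backward (-1) through stored frames."""
        self.cursor = (self.cursor + direction) % len(self.frames)
        return self.frames[self.cursor]

    def play_loop(self, times=1):
        """Play the entire set of stored frames as a loop."""
        return self.frames * times
```

The appeal of the real device is visible even in the model: there is no project file, no timeline, no configuration, just a growing list of frames and three ways to look at them.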
Disney used the Lunchbox for teaching cel animation in a summer program, but closer to home, the Lunchbox made its way to animation enthusiast and second-grade teacher Carrie Caramella. At Redmond, Oregon's John Tuck Elementary School, Caramella acted as director of a student production team that brought their short film "The Polka Dot Day" to the Northwest Film Center's Young People's Film and Video Festival. During the early 2000s, after-school and summer animation programs proliferated, many using claymation, and almost all using the Video Lunchbox. At $3,500, the Video Lunchbox was not exactly cheap. It cost more than some of the more affordable computer-based options, but it was so much easier to use, and so much more durable, that it was very much at home in a classroom. Caramella: "By using the lunchbox, we receive instant feedback because the camera acts as an eye. It is also child-friendly, and you can manipulate the film a lot more."

Caramella championed animation at John Tuck, finding its uses in other topics. A math teacher worked with students to make a short animation of a chicken. In a unit on compound words, Caramella led students in animating their two words together: a sun and a flower dance; the word is "sunflower." Butter and milk, base and ball.

In Lake Oswego, an independent summer program called Earthlight Studios took up the system. With the lunchbox, Corey's black-and-white drawings spring to life, two catlike anime characters circling each other with broad-edged swords. It's the opening seconds of what he envisions will be an action-adventure film.

We can imagine how cringeworthy these student animations must be to their creators today, but early-'00s education was fascinated with multimedia and it seems rare that technology served the instructional role so well. It was in this context that I crossed paths with the Lunchbox.
As a kid, I went to a summer animation program at OMSI---a claymation program, which I hazily remember was sponsored by a Will Vinton Studios employee. In an old industrial building beside the museum, we made crude clay figures and then made them crudely walk around. The museum's inventory of Lunchboxes already showed their age, but they worked, in a way that was so straightforward that I think hardly any time was spent teaching operation of the equipment. It was a far cry from an elementary school film project in which, as I recall, nearly an entire day of class time was burned trying to get video off of a DV camcorder and into iMovie.

Mozeico and Babitz aimed for modest success, and that was exactly what they found. Animation Toolworks got started on so little capital that it turned a profit the first year, and by the second year the two made a comfortable salary---and that was all the company would ever really do. Mozeico and Babitz continued to improve on the concept. In 2000, they launched the Lunchbox Sync, which added an audio recorder and the ability to cue audio clips at specific frame numbers. In 2006, the Lunchbox DV added digital video.

By the mid-2000s, computer multimedia technology had improved by leaps and bounds. Framegrabbers and real-time video capture devices were affordable, and animation software on commodity PCs overtook the Lunchbox on price and features. Still, the ease of use and portability of the Lunchbox was a huge appeal to educators. By 2005 Animation Toolworks was basically an educational technology company, and in the following years computers overtook them in that market as well.

The era of the Lunchbox is over, in more ways than one. A contentious business maneuver by Phil Knight saw Will Vinton pushed out of Will Vinton Studios. He was replaced by Phil Knight's son, Travis Knight, and the studio rebranded to Laika.
The company has struggled under its new management, and Laika has not achieved the renaissance of stop-motion that some thought Coraline might bring about. Educational technology has shifted its focus, as a business, to a sort of lightweight version of corporate productivity platforms that is firmly dominated by Google. Animation Toolworks was still selling the Lunchbox DV as late as 2014, but by 2016 Mozeico and Babitz had fully retired and offered support on existing units only. Mozeico died in 2017, crushed under a tractor on his own vineyard. There are worse ways to go. Arthur Babitz is a Hood River County Commissioner.

Kellie-Bea Rainey: I took the two-minute tutorial and taped it to the wall. I cleaned off a work table and set up a stage and a character. Then I put my Sharp Slimcam on a tripod... Once the camera was plugged into the LunchBox, I focused it on my animation set-up. Next, I plugged in my monitor. All the machines were on and all the lights were green, standing by. It's time to hit the red button on the LunchBox and animate! Yippee! Look Houston, we have an image! That was quick, easy and most of all, painless. I want to do more, and more, and even more. The next time you hear from me I'll be having fun, teaching my own animation classes and making my own characters come to life. I think Gary Schwartz says it best, "The LunchBox brings the student back to what animation is all about: art, self-esteem, results and creativity."

I think we're all a little nostalgic for the way technology used to be. I know I am. But there is something to be said for a simple device, from a small company, that does a specific thing well. I'm not sure that I have ever, in my life, used a piece of technology that was as immediately compelling as the Video Lunchbox. There are numerous modern alternatives, replete with USB and Bluetooth and iPad apps. Somehow I am confident that none of them are quite as good.

Comics from June 1983 Issue of Today Magazine

Your latest serving of computing related humor

The Things Conference 2025: shape the future of IoT with Arduino!

We’re excited to announce that the Arduino team is returning to Amsterdam as an ecosystem partner at The Things Conference 2025, the world’s leading LoRaWAN event, taking place September 23rd-24th. This year, we’re bringing more tech, more insights, and more real-world use cases than ever – to give you all the tools you need to future-proof […]

App Clip Local Experiences have consumed my day

Okay, I have to be doing something astronomically stupid, right? This should be working? I’m playing around with an App Clip and want to just run it on the device as a test, but no matter how I set things up nothing ever works. If you see what I’m doing wrong let me know and I’ll update this, and hopefully we can save someone else in the future a few hours of banging their head!

Xcode

App Clips require some setup in App Store Connect, so Apple provides a way when you’re just testing things to side step all that: App Clip Local Experiences. I create a new sample project called IceCreamStore, which has the bundle ID com.christianselig.IceCreamStore. I then go to File > New > Target… > App Clip. I choose the Product Name “IceCreamClip”, and it automatically gets the bundle ID com.christianselig.IceCreamStore.Clip. I run both the main target and the app clip target on my iOS 18.6 phone and everything shows up perfectly, so let’s go onto actually configuring the Local Experience.

Local Experience setup

I go to Settings.app > Developer > App Clips Testing > Local Experiences > Register Local Experience, and then input the following details:

URL Prefix: https://boop.com/beep/
Bundle ID: com.christianselig.IceCreamStore.Clip (note the Apple guide above says to use the Clip’s bundle ID, but I have tried both)
Title: Test1
Subtitle: Test2
Action: Open

Upon saving, I then send myself a link to https://boop.com/beep/123 in iMessage, and upon tapping on it… nothing, it just tries to open that URL in Safari rather than in an App Clip (as it presumably should?). Same thing if I paste the URL into Safari’s address bar directly.

Help

What’s the deal here, what am I doing wrong? Is my App Store Connect account conspiring against me? I’ve tried on multiple iPhones on both iOS 18 and 26, and the incredible Matt Heaney (wrangler of App Clips) even kindly spent a bunch of time also pulling his hair out over this.
We even tried to see if my devices were somehow banned from using App Clips, but nope, production apps using App Clips work fine! If you figure this out you would be my favorite person. 😛
