More from markround.com
Earlier today, I got an email alerting me to an angrier-than-usual comment on this website. It was a proper keyboard warrior rant accusing me of all sorts of misdeeds revolving around “forcing ads down people’s throats”. I replied saying that there had never been any ads on this site, never will be, and that I detest the enshittification trend of the modern Internet too. I have also found much of today’s web unbearable without tools such as Pi-Hole and a VPN; I use Firefox with adblockers whenever possible and, generally speaking, if a site forces me to disable my ad-blocker I’ll simply stop visiting. Then I had a sinking feeling.

Years ago, when I migrated this site from a PHP codebase to a static site generator, I’d enabled Disqus comments as it (at the time) seemed a reasonable alternative to dealing with all the spam, moderation, flame wars and handling of user data that comes with a comments engine. I’d never really paid that much attention to it as it never got that much use, and I certainly had never noticed any ads or other junk being injected in amongst my content. But then, I go to somewhat extreme lengths to avoid that sort of crap, and had a horrible feeling that maybe I’d simply never noticed anything untoward because I’d been blocking it.

So I downloaded Google Chrome (ugh!), browsed to this site without any kind of protection or blocking, and was confronted with an absolute abomination of link-farm “chumbox” adverts littering the bottom half of the page. Even though I’m currently on holiday, I immediately disabled the Disqus integration (yay for GitOps and CI/CD pipelines!) and can only apologise for the eyesore and god-awful mess that I was unwittingly inflicting on people.

I’m not sure exactly when Disqus got that bad. It certainly wasn’t when I initially set it up, but I guess this is another example of why we can’t have nice things. So to my angry anonymous poster: you were right, I apologise (but you were still kind of a dick about it), and holy crap do I despise what we’ve done to the web.
Discussion on Hacker News
Discussion on lobste.rs

If you ever get a chance to look through the classic Amiga OS source-code still floating around some murky corners of the internet, it is a thing of beauty: an inspirational piece of computing history, with capabilities that were unmatched for the time. Remember, this all originally ran on a computer released in the 1980s with 512KB of memory, a 7MHz 68000 16-bit CPU, and a single floppy drive with 880KB of storage. On these limited specs, AmigaOS provided a pre-emptive multi-tasking operating system, a full set of GUI primitives and a built-in “Workbench” interface, expansion card auto-configuration and a fully-featured filesystem with some unique and powerful capabilities. Although to be fair, the AmigaDOS parts do literally come from a different time (and possibly planet) - but more on that later.

Oh and of course, there was that amazing chipset that meant even that humble base could do things like this - while PCs of the time were basically office boxes that occasionally bleeped and home computers still loaded games from cassette tape. There’s understandably a lot of online interest in those parts of the Amiga as they’re the most impressive in an obvious “wow!” way. But while that was what drew me to the Amiga when I was a kid (and the demo/cracking/BBS scene heavily influenced me), I’ve always been more of a systems geek at heart. I’ve always loved building tools and platforms, and have long been fascinated with the world of operating systems. Apart from reading through the source code (where that’s legally available, of course…) I think there’s no better way to explore and understand a system - and the mindset that produced it - than to develop for it.

What follows is a brain-dump of what I’ve learned about developing for AmigaOS, from classic 68k-powered hardware to modern PowerPC systems like the X5000. I’ll cover development environments, modern workflows like CI builds on containerised infrastructure, distribution of packages and even a look back in time before C existed, thanks to AmigaDOS’s odd heritage.

Table Of Contents

- SetCmd
- SDK Updates
- Editors
- Native hardware
- Modern development
- AmigaDOS is weird
- Distribution
- Archive
- Documentation
- Installer
- File Sites
- External Documentation
- The way forward is back?
- Wrap-up

SetCmd

There are plenty of guides and videos on setting up an old-school game or demo-coding environment, but all of what follows is in the context of developing a systems tool in C, as that’s the language of AmigaOS. I started a real-life project partly to solve a small problem I had (switching between different versions of commands/tools at the Amiga CLI) but mainly to explore and dig deeper into the OS that influenced me so much as a teenager. SetCmd was the result, and is a very simple AmigaOS 4 PowerPC package. I’m working (very slowly) on porting it to run on classic AmigaOS and variants, but it has to be said that this is my first time writing C in any meaningful capacity beyond wrestling with pointers at university. The source code is on GitLab if you want to take a look, but bear in mind that, despite having owned Amigas since they were released, I’m a total newbie at most of this! I wrote it to have fun, explore AmigaOS, set up build environments and figure out how to package it up for re-distribution.

I have written a bit about my development setup in the past, but things have changed a fair bit since then - so without further ado, here’s my development environment and thoughts in 2023.
SDK Updates

Whilst things do move at a glacial pace in the world of AmigaOS 4/PPC, there have been a few big updates. A-EON’s Enhancer Software has had several releases, each adding new applications and developer APIs. As well as shipping their own versions of key Amiga OS applications and utilities, they are also now installing several core AmigaDOS command replacements. I tend to skip the installation of these as I’ve encountered a few edge cases where they don’t quite behave like the original OS 4 commands, but from recent discussions online it appears as if they are preparing for their own “clean-room” re-implementation and modernisation of Amiga OS 4, presumably in order to free themselves from the eternal legal shenanigans with Hyperion et al. I’m not going to get into that raging dumpster fire here, but it’ll be interesting to see what comes of this.

On the Hyperion side, they released a big SDK Update for OS 4 including updated GCC toolchains, cross-compilers, profilers and loads of updated SDKs. Bearing in mind the ancient GCC 4.x toolchain that had been in place for years, it was great to have a more modern environment.

On the classic Amiga front, Hyperion have also been pushing ahead with their updated AmigaOS 3.x for 68k-powered Amigas. Now on version 3.2.2.1 (I’ve got my boxed set of CD and floppy disks), there have also been several NDK (“Native Developer Kit”) updates providing updated APIs and tools. AmiKit also released a great “all-in-one” environment called DevPack which includes a huge range of languages (C, Assembly, Amos, Lua, Basic…) and NDKs all configured and ready to go. As a quick and easy way of setting up a development environment on classic Amigas, it’s hard to beat and saves a lot of manual downloading, configuring and gluing everything together.

Editors

In my last update 3 years ago, I’d more-or-less settled on using a GUI VIM derivative. While I’m still a die-hard VIM user at $DAYJOB, I really appreciate the modern comforts of e.g. VSCode. Thanks to the amazing work of George Sokianos, there is now an OS 4 package of the awesome Lite-XL editor along with a comprehensive set of plugins. Here’s what a hacking session on my X5000 looks like:

In that session you can see, alongside Lite-XL, two terminal windows: I’m compiling and running the PowerPC and classic 68k versions of setcmd thanks to the cross-compilers and native “Petunia” JIT 68k emulation built into OS 4. More on that later, but while we’re talking about classic Amigas…

Native hardware

Emulation is fine, but nothing beats running on the actual hardware! In the lead image to this article you can see my treasured Amiga 1200 (with an older 8-bit friend in the background running my TNFS site) expanded with an Apollo Vampire accelerator. An Ethernet or WiFi adapter is more or less essential when it comes to transferring data around, and fortunately the Vampire card is capable of network connectivity, high-resolution display output and other niceties, but can easily be switched back to a more “stock” environment. I do still occasionally use my licensed copy of CubicIDE but, due to the age of this architecture, I tend to keep my tools light and have settled on the simple Jano editor or sometimes CygnusEd for old time’s sake. My build toolchain is provided by DevPack and its included VBCC compiler.
I use the vbcc_target_m68k-amigaos target with this makefile to build:

Modern development

I’m (sadly) not always in front of my Amigas, but these days a modern laptop and cloud-native tools offer a lot of flexibility, particularly with the advanced state of emulation. I use VSCode as my editor, and a containerised cross-compiler toolchain built by - again! - George Sokianos to target both 68k and PPC platforms. I can build my project on any system capable of running OCI containers, e.g.

docker run \
  --user $UID:$GID \
  -v ./:/opt/code \
  walkero/docker4amigavbcc:latest-m68k-amd64 \
  make -f makefile.docker

Testing and running the code is made easy by the very advanced state of emulators. On Windows, WinUAE is the gold standard and can emulate everything from an original 1985-vintage A1000 up to modern systems with PowerPC accelerators, graphics cards and other devices. I have it multi-booting into clean Hyperion and Commodore/classic OS environments, with my source code directory shared as a virtual hard-drive: I can compile in seconds with Docker, and then straight away test the resulting binary in my emulated Amiga.

Source code is kept up to date between systems using Git; on the Amiga X5000 I use the port of SimpleGit, which is now bundled with the latest Hyperion SDK under SDK:c/sgit. I haven’t yet found a suitable Git solution for the classic Amigas, so on those I use a makeshift AmigaDOS shell script that uses Backup to copy files over to a network mount in an rsync-like fashion. I also keep meaning to test running AmigaOS 4.1 under QEMU, as support for this has greatly improved and it looks a lot simpler than the currently convoluted process of getting “classic OS 4” running on an emulated 68k Amiga with a PPC accelerator configured. But for now, the WinUAE approach is working pretty well.

Another advantage of having a containerised build-chain is that, combined with Git and Drone running on my personal Kubernetes clusters, I can build and package my code with a simple git push wherever I am:

AmigaDOS is weird

Although AmigaOS is frequently lauded for its sophistication and elegance, there is a notable “oddness” about the AmigaDOS components which handle storage I/O, devices and filesystems. The original developers of the Amiga had an ambitious DOS system planned, but in the end Commodore had to purchase the Tripos operating system and port parts of it to the Amiga due to deadline challenges. This mismatch is all the more pronounced as Tripos was written in BCPL - which, in turn, influenced the B programming language, which begat the C we all know and… well, tolerate, in my case. So it really is a look back into computing history, and remnants of this still remain even in the “modern” AmigaOS 4.x and other derivatives.

Once you start diving into AmigaDOS code, you end up face-to-face with this legacy and need to convert back and forth between BCPL and C data types. For example, BCPL strings are not NULL-terminated; instead, they have a length in the first byte and the characters follow. And pointers are similarly alien. This is why my code has stuff like this littered through it:

// Convert the new node to a BPTR
new_node_bptr = MKBADDR(new_node);

// Set the new path
cli->cli_CommandDir = new_node_bptr;

As the NDK include file dos.h explains: “All BCPL data must be long word aligned. BCPL pointers are the long word address (i.e. byte address divided by 4 (>>2))”.
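To make that a little more concrete, here’s a minimal sketch of copying a BCPL string into an ordinary C buffer, using the BADDR() helper from dos.h shown just below. This isn’t taken from SetCmd - the function name and buffer handling are purely illustrative:

#include <string.h>
#include <exec/types.h>
#include <dos/dos.h>

/* Copy a BCPL string (length byte first, no terminator) into a C buffer. */
void bstr_to_cstr(BSTR bstr, char *out, size_t outlen)
{
    UBYTE *b = (UBYTE *)BADDR(bstr);  /* BPTR -> real byte address         */
    size_t len = b[0];                /* first byte holds the length       */

    if (len >= outlen)
        len = outlen - 1;             /* clamp to the destination buffer   */

    memcpy(out, &b[1], len);          /* characters follow the length byte */
    out[len] = '\0';                  /* add the NUL that BCPL leaves out  */
}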
It also includes helper macros like MKBADDR to help with the conversion, as most DOS system calls use BCPL pointers in arguments:

/* Convert BPTR to typical C pointer */
#define BADDR(x) ((APTR)((ULONG)(x) << 2))

/* Convert address into a BPTR */
#define MKBADDR(x) (((LONG)(x)) >> 2)

All in all, a fascinating look back into an obscure branch of computing history, but it hasn’t furthered my appreciation of pointers any!

Distribution

If you want to distribute your project to a wider audience, there are a few Amiga-specific things you can do that make life much easier for people running AmigaOS. These all make use of some pretty cool bits of Amiga technology and have been widely adopted by native software since their introduction in the early 1990s.

Archive

Just use LHA format archives. It’s the standard archive format on Amigas and, even though there are modern (and technically better) alternatives, .lha files are as ubiquitous as e.g. .zip or .tar.gz packages on other systems and can also be handled by low-spec machines. There are ports of CLI tools and GUIs available on all platforms to handle these archives and, while the syntax can be a little different, it’s quite easy to use. See my AmigaDOS script that builds the SetCmd release artifact for a practical example.

Documentation

Along with a basic README.txt, it’s a great practice to distribute more detailed documentation in AmigaGuide format. Another area where the Amiga was way ahead of its time - AmigaGuide is a hypertext format commonly used for application manuals, although people have even used it to publish disk magazines! Introduced in 1992, AmigaGuide is supported as standard by the bundled tools on any AmigaOS (or clone/derivative), so you can include formatting, links and other content in your documentation. You can see the AmigaGuide docs I include with SetCmd in the screenshot above, or view the source to see what the syntax looks like. It’s pretty simple to write and is much like any other markup or formatting code. You can set basic parameters:

@Width 72
@wordwrap

Documents have “Nodes” which can be linked to, e.g.

@Node About "About SetCMD"
@{"About" Link About}

And formatting is much like HTML with opening and closing tags:

@{b}@{u} Bold and Underlined! @{uu}@{ub}

Installer

A fantastic addition to AmigaOS, the system installer utility reads a developer-provided script which handles copying files, comparing versions, modifying system scripts and so on, in a standardized fashion. You can pass useful information and configuration which controls the Installer tool through standard Amiga tooltypes, and it uses a sort of LISP-ey syntax which again runs on all Amigas and derivatives. The syntax takes some getting used to, but the best source of documentation is the AmigaGuide documentation found in the Installer dev package. As an example, here’s an excerpt of the block of code which copies the setcmd program file over to a previously created directory:

(copyfiles
    (source "setcmd")
    (dest #dname)
    (prompt ("Copy SetCmd program file?"))
    (confirm "expert")
    (all)
    (help @copyfiles-help)
)

And for running commands, you can use the run command, along with cat (short for “concatenate”) to build up the command string:

(run (cat "C:MakeLink FROM " #dname "/cmds/setcmd/release TO SETCMD:setcmd SOFT"))

I found the best approach was to examine other Installer scripts to get a feel for common practices and idioms.
Here’s my simplistic Install_SetCmd script, and if you want to see something more complex, there’s always the AmigaOS installation scripts, or the Qt installer for OS 4, which taught me a lot.

File Sites

The Amiga doesn’t have a universal package manager, so files are usually downloaded manually and installed from the .lha archives. The go-to place for Amiga software for all systems is AmiNet. It’s the biggest repository of Amiga packages on the internet (and, at one point in the mid-90s, was actually the largest software repository of any platform) and now also supports hosting packages for Amiga OS 4, MorphOS and AROS alongside classic 68k fare. There are also smaller, platform-focused sites, e.g. os4depot.net for OS 4, morphos-storage.net for MorphOS and so on. Getting your package uploaded and accepted into the repository is broadly the same for all of these: you FTP your package up according to the naming standards, and supply a Readme file which provides the required metadata, like this excerpt in AmiNet format:

Short: Switch between versions of software
Author: amiga@markround.com (Mark Dastmalchi-Round)
Type: util/shell
Version: 1.1.0
Architecture: ppc-amigaos >= 4.0.0
Distribution: Aminet

There are some Amiga-native GUI tools that assist with creating these files, but the specs (e.g. for os4depot.net) are pretty straightforward. And here’s the end result - my package available on os4depot.net and aminet.

External Documentation

When trying to learn or re-learn everything from C to AmigaDOS scripting, I found a few great resources. However, as with most things in Amiga-land, there’s an extraordinarily high “bus factor” for many websites and my biggest recommendation is to use native tools or save local copies of anything you find! With that said, here are my essential Amiga bookmarks:

- Autodocs references. There are lots of websites where you can browse the auto-generated docs from the SDK header files, like this one, with clickable links to jump between things. If I’m actually at an Amiga though, there are some useful native tools I prefer that can index and search through the local headers. The screenshot above shows the standard “AutoDoc Reader” freeware tool viewing the equivalent of a man page for the AmigaDOS library, alongside the AmigaGuide Installer language reference.
- http://www.pjhutchison.org/tutorial/amiga_c.html - an amazing site. This is what inspired me to pick up a compiler again and get to work. There’s a great refresher on the C language itself, and then it dives into Amiga-specific coding with everything from low-level library access to sound, GUI programming and more.
- The Amiga OS Dev wiki is a goldmine, although it can take a little searching to find what you’re after. It’s mostly OS 4-focused, but because all Amiga systems share a common ancestor it’s usually pretty applicable to all platforms. Specific articles that I found useful include: OS 4 Migration Guide, Programming in the Amiga Environment, and Fundamental Types.
- https://www.amiga-news.de is a great news aggregator site for all things Amiga, and also has a bunch of exclusive articles on programming in the AmigaOS environment - like this recent article on GUI programming. Well worth a read - thanks to Daniel Reimann for pointing out the great content to me!
- And lastly, there are great threads I constantly found on amigans.net, which is where a lot of OS 4/”Next Gen” technical discussion happens. For classic systems, I found the Coders discussions on the English Amiga Board an invaluable resource.
The way forward is back?

When I started this project, it was really a way to get acquainted with my new X5000. Since then, I’ve decided to port my codebase back to the classic Amiga, as well as explore porting over to other Amiga-like systems such as MorphOS and AROS. This leads to some choices:

From a packaging and distribution point of view, a 68k binary is pretty much the universal standard in Amiga land. It can run natively on classic Amigas, and modern systems like AmigaOS 4.x and MorphOS can run 68k binaries through translation. In a manner similar to how Apple has handled processor transitions on the Mac, it’s a pretty seamless experience, and I run a lot of classic 68k software on my X5000. As long as you aren’t “banging on the metal”, it works really well and integrates smoothly with the rest of the system.

The original 68k AmigaOS from Commodore is also pretty much the standard for source-code compatibility; code targeting this release can be built on most of the derivatives and later systems with very little (if any) modification. On AmigaOS 4, for example, you can simply add -D__USE_INLINE__ to your makefiles and, in theory, build from a common codebase. If you start the other way, as I did, and initially write targeting AmigaOS 4, it’s harder to port to other systems. For example, I originally followed the AmigaOS 4 programming style, which favours prefixing library calls with interface names. This isn’t compatible with any other system, so the easiest way to port to the more Amiga-like platforms is to refactor this code back to the classic style of calling system functions. I do plan on building platform-specific binaries using #defines, so I can, for example, use functions like dos.library/AddCmdPathNode on OS 4 that I otherwise have to implement manually. While a lot of higher-level layers (like e.g. MUI for building graphical applications) are shared across platforms, this is probably the best bet for adding features from one platform that aren’t available on others.

Honestly though, at this point, if you just want to get started I’d suggest you target classic AmigaOS compatibility and build a 68k binary. I’d personally target AmigaOS 3.x, or 2.1 if you want to support a wider range of truly vintage systems; 1.x is fascinating from a retro-geek perspective but lacks a lot of the nice features that came with later systems. Everything else - the installer, archive format, documentation format and so on - is cross-platform anyway and supported from OS 2.1 and up. MorphOS, AROS and OS 4 are really fun systems to explore, and I highly recommend checking them out if this article has whetted your appetite (and you can find a system to run them on!), but classic is the easiest way to get your code out to the wider world, and ironically provides a better codebase for future porting and native binaries than my “working backwards” approach.

Wrap-up

So that’s about the sum total of what I’ve picked up over the last few years, anyway! I still enjoy working on my Amigas when I get some “hacking on code in the evening” time, and in particular I find AmigaOS 4 on my X5000 a refreshing blend of retro appeal and just about enough modern convenience to use it for development tasks, or even for writing this article itself. My A1200 continues to impress me with how much utility there is in such a small box and is a wonderful distraction from the modern era of bloated systems and applications.
It is perhaps an evolutionary dead-end, but it’s still a lot of fun and is one of the rare occasions these days where I actually feel in control of my computer. Working backwards in time from OS 4.1 to my classic Amigas has also really given me greater insight into, and appreciation for, what the Amiga engineers managed to pull off back then. If you’re in any way interested in computer history - or simply want to give something truly different a try - you should definitely check out AmigaOS. I hope this quick type BRAIN: > WEB: dump provides you with some good starting points, and maybe gets you coding too!
Discussion on Hacker News
Discussion on lobste.rs

I’ve long been a die-hard BeOS fan and have been running the open-source recreation Haiku for many years. I think it’s interesting to explore the “alternative OS” world and consider some great ideas that for whatever reason never caught on elsewhere. The way Haiku handles package management and its alternative approach to an “immutable system” is one of those ideas I find really cool. Here’s what it looks like from a desktop user’s perspective - there’s all the usual stuff like an “app store”, package updater, repositories of packages and so on:

It’s all there and works well - it’s easily as smooth as any desktop Linux experience. However, it’s the implementation details behind the scenes that make it so interesting to me. Haiku takes a refreshingly new approach to package management: despite the user experience “feeling” like a traditional package manager - say, something like apt or dnf - it has seamless support for:

- Immutable system directories
- Rollback to previous states
- User-managed packages separated from system packages
- And a whole bunch of infrastructure and tooling to support multiple package repositories, building packages from source and more.

As a geek, I find it beautifully elegant and an idea that I’d love to see other platforms exploring. Let’s take a closer look, starting with how Haiku handles the split between “system” and “user” directories.

Filesystem layout

Running a df command from the Haiku shell (BASH is the default, but others like ZSH can be easily installed) shows the following:

~> df -h
 Mount             Type      Total     Free      Flags   Device
----------------- --------- --------- --------- ------- ------------------------
/boot             bfs       232.9 GiB 219.6 GiB QAM-P-W /dev/disk/scsi/0/0/0/0
/boot/system      packagefs   4.0 KiB   4.0 KiB QAM-P--
/boot/home/config packagefs   4.0 KiB   4.0 KiB QAM-P--

In Haiku, the root / is a RAM-based virtual filesystem set up by the kernel when it boots. All other filesystems and devices are mounted under /, so /boot refers to the entire boot volume - and not a boot partition as is the case with e.g. most Linux installs. The /boot/system mountpoint is essentially the system directory which makes up the Haiku OS and installed applications. In the root directory, there are a bunch of symlinks that point there for convenience:

~> ls -l /
total 6
lrwxrwxrwx 1 user root   16 Feb 13 14:18 bin -> /boot/system/bin
drwxr-xr-x 1 user root 2048 Mar 31  2021 boot
drwxr-xr-x 1 user root    0 Feb 13 14:18 dev
lrwxrwxrwx 1 user root   25 Feb 13 14:18 etc -> /boot/system/settings/etc
lrwxrwxrwx 1 user root    5 Feb 13 14:18 Haiku -> /boot
lrwxrwxrwx 1 user root   26 Feb 13 14:18 packages -> /boot/system/package-links
lrwxrwxrwx 1 user root   12 Feb 13 14:18 system -> /boot/system
lrwxrwxrwx 1 user root   22 Feb 13 14:18 tmp -> /boot/system/cache/tmp
lrwxrwxrwx 1 user root   16 Feb 13 14:18 var -> /boot/system/var

So we can access e.g. /boot/system as /system and so on. Note that this mount point is backed by packagefs - a virtual filesystem that presents a merged view of all the packages installed. It’s sort of like an overlay filesystem (commonly used by container runtimes) in the Linux world. This mount point is read-only:

~> touch /system/test
touch: cannot touch '/system/test': Read-only file system

We can, however, write to anything in our home directory, which ensures a clean separation of system and user data/configuration.
NOTE: Due to its BeOS ancestry, Haiku is not currently a multi-user system, so there is only one /boot/home directory and the user is effectively the sole administrator account. As I understand it, the scaffolding is present to support multiple users, but it won’t be a priority until after R1 is released. However, this opens up the later possibility for packages to be installed and configured on a per-user basis.

The Package Daemon and packagefs

There’s a great overview of how all these components fit together in the developer documentation, but here’s my layperson’s understanding of it…

Haiku has many packages available, ranging from system components and development libraries to end-user applications. There are even recent ports of Wine, LibreOffice and KDE Applications. These are available as .hpkg files which are placed in a special packages directory, and the packagefs service mounts the contents into e.g. the /boot/system mountpoint. Unlike traditional package management tools, the contents of the package are not unarchived and copied into the filesystem; when you’re looking at /boot/system, you’re essentially looking at a collection of multiple packages and their contents, all dynamically mounted and made available on-the-fly as a read-only, virtual filesystem. This is why /boot/system is read-only: it’s not a “real” filesystem and only exists as a virtual union of all the different packages that are activated at that point in time.

NOTE: There are some exceptions to this, as some directories such as packages, cache, var etc. are writeable. These are called “shine-through” directories, which reside on the underlying BFS volume. You can read more about these in the developer documentation. And while we’re geeking out, BeFS itself is definitely also worth investigating!

When packagefs detects that a new .hpkg file has been copied to the packages directory, it performs some checks, such as searching for dependencies or conflicts. If everything is OK, it “activates” the package and the contents are then available.

Installing a package

Here’s an example of installing a package using the CLI pkgman tool, showing what happens behind the scenes. First, let’s install an example package, in this case something simple like the awesome CLI Pipe Viewer tool. It’s very much like running apt-get, yum or other similar tools on a Linux system:

~> pkgman install pv
Validating checksum for Haiku...done.
100% repochecksum-1 [64 bytes]
Validating checksum for HaikuPorts...done.
The following changes will be made:
  in system:
    install package pv-1.6.6-2 from repository HaikuPorts
100% pv-1.6.6-2-x86_64.hpkg [40.57 KiB]
Validating checksum for https://eu.hpkg.haiku-os.org/haikuports/master/x86_64/current/packages/pv-1.6.6-2-x86_64.hpkg...done.
[system] Applying changes ...
[system] Changes applied. Old activation state backed up in "state_2023-02-13_14:33:27"
[system] Cleaning up ...
[system] Done.

If we now look at /system/packages we can see the downloaded .hpkg file (which can also be inspected with CLI or Desktop tools):

~> ls -l /system/packages/pv-1.6.6-2-x86_64.hpkg
-rw-r--r-- 1 user root 41543 Feb 13 14:33 /system/packages/pv-1.6.6-2-x86_64.hpkg

Sure enough, the package daemon picked it up and mounted its contents so they are available through the merged packagefs under /boot/system.
Here’s where the pv binary now appears installed:

~> ls -l /system/bin/pv
-r-xr-xr-x 1 user root 70136 Aug  6  2018 /system/bin/pv

Uninstalling is as simple as pkgman uninstall pv, which removes the .hpkg file; the contents then “disappear” from /boot/system.

State and Rollback

What’s really nice about this is that it enables a simple and elegant way of rolling back your system state to a previous set of packages, or even OS releases. If you check the last output from the pkgman install pv command above, you’ll see there’s a message saying that an “old activation state” is being backed up to a newly-created directory. The package manager does this at the start of every transaction, and if we check that directory we’ll see a simple plain-text file called activated-packages:

~> head -n5 /system/packages/administrative/state_2023-02-13_14\:33\:27/activated-packages
netcat-1.10-4-x86_64.hpkg
ncurses6-6.3-2-x86_64.hpkg
mpfr3-3.1.6-6-x86_64.hpkg
mesa_swpipe-22.0.5-2-x86_64.hpkg
mesa_devel-22.0.5-2-x86_64.hpkg

This contains a record of the exact package versions that were installed at that time. If any packages were to be upgraded, then the state directory will also contain a set of packages to facilitate roll-back. Here’s what it looks like from the Haiku Desktop:

Along with a listing of old packages in a state directory, the currently installed packages are shown on the top left. You can see, for example, that BASH in the current system is at 5.1, but in an old backup state from 2021 it was 5.0. You can also clean these old state directories up according to your needs - such as only keeping the last 30 days of state to preserve disk space.

Tying this all together, the Haiku Boot Loader can make use of activation lists and saved packages to get back to any particular state very easily - all it has to do is pick a backup package activation file, and only activate the packages found in it when booting. You can select a backup state from the “Select Boot Volume” option in the boot loader and your system will boot back to the previous state using the archived packages and activation list. Your virtual /boot/system directory will then be reverted to the desired state - and this all happens on-the-fly at boot time; there are no long roll-back processes or destructive operations, and you can reboot into different states at will.

User packages

So far, I’ve just been talking about the packagefs mountpoint at /boot/system, but there’s also another mountpoint shown in my first df -h command which lives under /boot/home/config. Here’s what exists under that directory:

~/config> tree -d /boot/home/config -L 1
/boot/home/config
├── apps
├── cache
├── data
├── non-packaged
├── packages
├── settings
└── var

This is more-or-less a copy of the system directories, but they are specific to my user account. This means I can use this location to install software and test new packages out without “polluting” the system directories. And when Haiku gets full multi-user support, this also means each user can have their own distinct set of packages available.

A couple of points of interest as an aside:

- For non-native software packages (e.g. something installed with ./configure && make install) you can use ~/config/non-packaged. This is somewhat similar to /usr/local on Unix systems.
- The settings directory is where Haiku-packaged software keeps local configuration. To use an analogy again, it’s sort of like $XDG_CONFIG_HOME on other Unix-like systems.
Haiku software tends to make use of this directory: for example, instead of SSH keeping its configuration under ~/.ssh, it’s stored under a directory in ~/config/settings:

~/config> ls -l settings/ssh/
total 32
-rw------- 1 user root 1238 Mar 31  2021 authorized_keys
-rw------- 1 user root  476 Mar 31  2021 config
-rw------- 1 user root 1679 Mar 31  2021 id_rsa
-rw------- 1 user root  410 Mar 31  2021 id_rsa.pub
-rw------- 1 user root 1790 Dec 18  2021 known_hosts

Building your own packages

I’ll finish off by showing an example of building a user package and installing it using Haikuporter and the community HaikuPorts collection. These tools can be used to build and customise a massive amount of software for Haiku, ranging from old BeOS tools to a complete LibreOffice installation.

NOTE: Think of building from HaikuPorts as being like the FreeBSD “ports” collection. All the HaikuPorts packages are available as binaries through their repository, via pkgman or the HaikuDepot graphical desktop application. As an end-user, you typically do not need to use Haikuporter unless you want to customise a package, create a new one, or submit patches/bug-fixes. I’m just using it as an example and because it’s cool!

After setting up HaikuPorts and Haikuporter by following the instructions, I can now build a package from source. I’ll use the GUI FTP client “FTP Positive” as an example:

~/haikuports/haiku-apps> alias hp="haikuporter -S -j8 --no-source-packages --get-dependencies"
~/haikuports/haiku-apps> hp ftppositive
Checking if any dependency-infos need to be updated ...
Looking for stale dependency-infos ...
----------------------------------------------------------------------
haiku-apps::ftppositive-1.2.2
/boot/home/haikuports/haiku-apps/ftppositive/ftppositive-1.2.2.recipe
----------------------------------------------------------------------
Skipping download of source for 48a5acdfe0981697018abf151a82802f4f3e500e.tar.gz
Validating checksum of 48a5acdfe0981697018abf151a82802f4f3e500e.tar.gz
Unpacking source of 48a5acdfe0981697018abf151a82802f4f3e500e.tar.gz
... Build output truncated ...
mimesetting files for package ftppositive-1.2.2-7-x86_64.hpkg ...
creating package ftppositive-1.2.2-7-x86_64.hpkg ...
----- Package Info ----------------
header size: 80
heap size: 211523
TOC size: 1110
package attributes size: 695
total size: 211603
-----------------------------------
waiting for build package ftppositive-1.2.2-7 to be deactivated
grabbing ftppositive-1.2.2-7-x86_64.hpkg and moving it to /boot/home/haikuports/packages/ftppositive-1.2.2-7-x86_64.hpkg

The last lines of output show me that a .hpkg file has been created. I can then simply copy the package to my local ~/config/packages directory to “activate” it. Once again, packagefs will check it and then add the contents to the /boot/home/config directory. Here’s what it looks like on the Haiku desktop:

Note the top two windows “stuck” together using the built-in stacking and tiling window manager! You can see the package has been activated after copying it into the user’s packages directory. It’s installed the application to /boot/home/config/apps/FtpPositive/ instead of the system directories, and has added a DeskBar menu entry which has also been picked up and “merged” into the global system Applications menu - where it appears alongside all the system-installed packages.

Conclusion

Like the rest of Haiku, package management is a refreshingly different and well thought-out experience.
Given how niche an OS it is, it still surprises me how polished the system is - I have it running natively on an old Lenovo ThinkStation SFF PC and it absolutely flies. Apart from my Amigas, it’s easily my favourite “hacking on code in the evening” system and, for many tasks, is perfectly capable of being a daily-driver.

There’s a fascinating world of alternative OSes out there, many of them following entirely different paradigms from the current mainstream world of Windows/Mac/Linux. I’ve had the good fortune to be exposed to these different ways of thinking all the way back to my school days in the UK, where RISC OS was commonplace in the classroom and Amigas and Ataris ruled the playground. From this once-rich polyculture of competing processor architectures and platforms, the world seemingly consolidated itself around an x86 or ARM processor running something based on Windows or Unix. Which gets the job done, but it’s y’know… a little boring.

There are so many interesting - and some downright crazy - ideas that either failed in the commercial marketplace, were never given a proper chance (yeah, I’m still salty about Commodore killing the Amiga) or only found a hobbyist following. But here’s the thing: just because they never got wide adoption doesn’t mean they were wrong. I think it’s great that people are exploring new ideas, continuing the lineage of legacy systems, and simply creating stuff because they damn well feel like it and it’s fun! There’s lots of really interesting code out there, and it makes you wonder what an alternate timeline would have looked like if Apple had used BeOS as the basis for their next operating system. What if VMS had “won” over UNIX? What if Commodore had known what to do with the Amiga?

Anyway, I hope this has whetted your appetite for all things Alt-OS, and Haiku in particular. If you haven’t yet tried it, there are detailed instructions on their website, and I highly recommend taking it for a spin. Happy Hacking!
In Part 3 I covered the backend server processes and protocols, CI/CD pipelines and unit tests I used to build the TNFS site. In this (much shorter) part, I’d like to take a step back from the hardcore geekery and wrap up with my thoughts on the whole thing.

Other Sites

But before that, I’m going to explore a little part of the rest of the TNFS universe. After all, this project is intended to build a community site, and the Speccy has one of the friendliest retro computing communities out there. My site isn’t the only one - there’s a whole network of TNFS servers on the public internet, and the protocol has also been adopted for 8-bit Atari systems. There’s currently no central directory as such (although work is underway to create an index system using DNS TXT records), but there’s a forum thread that gets regular updates, and I have a links section on my site’s main menu. To give you an idea of some of the great content others have built, here’s a quick overview and screenshots of some of my favourite Speccy TNFS sites…

Spectranet-related

vexed4.alioth.net

A very useful site to have bookmarked in one of the Spectranet’s slots. Hosted by the developer of the Spectranet’s firmware, it has some nice lists of classic and modern (2000s-era) games and demos, and also some internet utilities including IRC and Twitter gateways. The first option in the menu is an online firmware update utility for Spectranet cards. The updater doesn’t work on an emulated Spectranet but works great on the real hardware - this is one of the first sites you should visit!

The weird and wonderful

tnfs.bytedelight.com

This site takes up the default slot in Spectranet cards purchased from ByteDelight.com and serves as a demonstration. When you first configure your card, it’ll boot straight into this site, which displays a snazzy “SkyNet” intro sequence informing you that your Speccy has been taken over, and all your base are belong to them.

zx.zapto.org

This wonderful fever-dream of a site was produced by “p13z”, a regular on the Spectrum Computing Forums. It’s an impressive demonstration of what can be achieved using plain Sinclair BASIC - it’s sort of an interactive adventure where you start off by driving away from a police car against a neat parallax-scrolling background. You then end up wandering around various locations including a car-park frequented by “doggers” (and a telephone box which lets you connect to other TNFS sites), some standing stones where you can sample some “banging mushrooms”, and other weird delights. Utterly bonkers in a classic Speccy kinda way…

Gateways

zx.desertkun.in

Home of the Channels project, which provides a ZX Spectrum browser for forums and imageboards. If you boot into this site, set zx.desertkun.in as your proxy in the opening screen, and you can connect to internet forums and message boards including the Spectrum Computing Forums! There’s a Docker image and source code available so you can run and customise things on your own systems. This uses a similar approach to my site, where the heavy lifting (in this case, TLS processing and connecting to/parsing the data from websites) is shifted to a more capable modern environment. Awesome work!

irata.online

Now this one is a proper rabbit hole you could spend days exploring. It’s a fascinating part of computer history I had never encountered before: a visionary system that hosted some of the earliest ever online message boards, and which apparently inspired the creation of Castle Wolfenstein and Lotus Notes!
Their website explains it: “IRATA.ONLINE is provided for the benefit of retro-computing users to have a place to socialize, and develop interesting multi-user, interactive, and graphical games and social applications. It descends from the historical PLATO system, a massive time-sharing system that lasted from 1962 until NovaNET was closed in 2015.”

The TNFS site hosts a terminal application that connects to the PLATO system (other retro systems are also supported) and you can even browse it through the web at https://js.irata.online/. Amazing stuff, and well worth spending some time checking out.

nihirash.net

This TNFS site hosts the uGophy Gopher client for the Spectrum. This lets you browse “gopherspace” from your Speccy - try out gopher.floodgap.com as a starting point. You can use a web browser to access the HTTP proxy at https://gopher.floodgap.com/gopher/ to get a useful list of links to other Gopher sites still in operation, including the Veronica-2 search engine.

zxnet.co.uk

As well as a couple of “experiments” (a multi-player tank game and a music jukebox), the zxnet.co.uk TNFS site hosts a copy of snapCterm (see below) and an IRC gateway. The IRC gateway allows you to connect to any public IRC server network and chat with other users - try libera.chat to get started, and use their excellent online help guides to find some interesting channels to join.

snapCterm

snapCterm is a terminal emulator that provides the basics of an 80-column ANSI terminal for the plus machines. It supports ASCII characters as well as ANSI escape and colour codes. This means you can use it to connect to Telnet servers, e.g. a Linux system running telnetd, which makes it possible to access the many old-school BBS systems still running. This is how we used to “social network”, kids, before this new-fangled web thing took over… There’s a great curated list of systems to connect to at https://www.telnetbbsguide.com/ including all those “elite” sites that used to be advertised all over Amiga demo/warez scene productions. You can see in the screenshot a connection starting to The Sanctuary BBS, running on AmiExpress (still being updated!), which used to be Fairlight’s World Headquarters back in the day. snapCterm is available on my TNFS site (2nd menu page, option 3) as well as on sites like zxnet.co.uk.

File sites

tnfs.millhill.org

This is an archive of the zx.kupo.be TNFS site, which at one point hosted one of the largest collections of games and files for the Speccy. Sadly, Adam Colley, who ran the site, passed away in 2020. Kupo was one of the first sites I connected to when I first got my Spectranet card, and looking through the code gave me the inspiration to start my own site. His work is archived here by someone who was in close contact with him before he passed, and it’s fitting that his site lives on and will be remembered by the community.

retrojen.org

As well as a nice little “blizzard” intro, retrojen.org hosts around 500 classic Spectrum scene demo tapes. Use the QAOP keys to move around the menu system, ENTER to select a title, 1 to jump to the first page and F to find a production by its release date. A great little archive of some old gems! Thanks for the recent post on my message wall, too :)

szeliga.zapto.org

This site hosts a curated collection of modern (mid-2000s onwards) games and titles released for the Speccy. There are some great games here that showcase the best of the current Speccy scene.
Many of these titles are also available on my site, but it’s great to have them all arranged in a curated date order like this. I highly recommend trying out 2019’s “Yazzie”, a fantastic platformer/puzzle game that is ridiculously addictive.

Wrap Up

Since the site went online in January 2021, it’s grown to nearly 10k lines of code, handled over 11,500 connections and gathered a small community of users chatting, playing games and leaving messages. It’s connected me to a wonderful (if slightly crazy) network of people who care about this little squidgey-keyed black box from the 80s as much as I do. It’s been a real pleasure seeing the first users come to the site, and I still have lots of plans for the future! Just a few of the bits and pieces in various Git branches right now:

- Proper pagination indication on all menus (page x of y type stuff)
- Message Board improvements (big speed boost and longer retention)
- Reply to comments - this opens up the possibility of a proper message board system!
- More files to upload and more curated lists

I’m open to suggestions! If you have any ideas or fancy writing a text article for the site, let me know…

In the words of one of the messages from a user: “It’s mad being online with a 40yr old computer designed around a tape recorder.” Perhaps it’s even crazier to drag the whole shebang into the modern age and cobble together infra-as-code pipelines, unit testing and gratuitous amounts of Kubernetes around it all. I did use a lot of my current-day practical knowledge during this project, but the most enjoyable parts were when I was doing stuff like studying the decades-old intricacies of the VAL$ function. Sometimes, introducing constraints like an ancient 8-bit processor with only 48KB of usable RAM can really force you to think “outside the box” and produce the most creative hacks. Plus, in this day and age it’s a lot of fun to do something just for the sheer hell of it! I’m also definitely going to add Sinclair BASIC to my “skills” on LinkedIn now ;)

Before I close, I’d just like to add a note of thanks to everyone who’s used my site, suggested features, left messages or helped me out with my many technical queries on forums. It’s been an absolute blast and I look forward to the next 40 years of Spectrum hacking! -Mark, February 2022.
More in programming
I was chatting with a friend recently, and she mentioned an annoyance when reading fanfiction on her iPad. She downloads fic from AO3 as EPUB files, and reads it in the Kindle app – but the files don’t have a cover image, and so the preview thumbnails aren’t very readable. She’s downloaded several hundred stories, and these thumbnails make it difficult to find things in the app’s “collections” view.

This felt like a solvable problem. There are tools to add cover images to EPUB files, if you already have the image. The EPUB file embeds some key metadata, like the title and author. What if you had a tool that could extract that metadata, auto-generate an image, and use it as the cover? So I built that.

It’s a small site where you upload EPUB files you’ve downloaded from AO3, the site generates a cover image based on the metadata, and it gives you an updated EPUB to download. The new covers show the title and author in large text on a coloured background, so they’re much easier to browse in the Kindle app.

If you’d find this helpful, you can use it at alexwlchan.net/my-tools/add-cover-to-ao3-epubs/

Otherwise, I’m going to explain how it works, and what I learnt from building it. There are three steps to this process:

- Open the existing EPUB to get the title and author
- Generate an image based on that metadata
- Modify the EPUB to insert the new cover image

Let’s go through them in turn.

Open the existing EPUB

I’ve not worked with EPUB before, and I don’t know much about it. My first instinct was to look for Python EPUB libraries on PyPI, but there was nothing appealing. The results were either very specific tools (convert EPUB to/from format X) or very unmaintained (the top result was last updated in April 2014). I decided to try writing my own code to manipulate EPUBs, rather than using somebody else’s library.

I had a vague memory that EPUB files are zips, so I changed the extension from .epub to .zip and tried unzipping one – and it turns out that yes, it is a zip file, and the internal structure is fairly simple. I found a file called content.opf which contains metadata as XML, including the title and author I’m looking for:

<?xml version='1.0' encoding='utf-8'?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="uuid_id">
  <metadata xmlns:opf="http://www.idpf.org/2007/opf" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:calibre="http://calibre.kovidgoyal.net/2009/metadata">
    <dc:title>Operation Cameo</dc:title>
    <meta name="calibre:timestamp" content="2025-01-25T18:01:43.253715+00:00"/>
    <dc:language>en</dc:language>
    <dc:creator opf:file-as="alexwlchan" opf:role="aut">alexwlchan</dc:creator>
    <dc:identifier id="uuid_id" opf:scheme="uuid">13385d97-35a1-4e72-830b-9757916d38a7</dc:identifier>
    <meta name="calibre:title_sort" content="operation cameo"/>
    <dc:description><p>Some unusual orders arrive at Operation Mincemeat HQ.</p></dc:description>
    <dc:publisher>Archive of Our Own</dc:publisher>
    <dc:subject>Fanworks</dc:subject>
    <dc:subject>General Audiences</dc:subject>
    <dc:subject>Operation Mincemeat: A New Musical - SpitLip</dc:subject>
    <dc:subject>No Archive Warnings Apply</dc:subject>
    <dc:date>2023-12-14T00:00:00+00:00</dc:date>
  </metadata>
  …

That dc: prefix was instantly familiar from my time working at Wellcome Collection – this is Dublin Core, a standard set of metadata fields used to describe books and other objects.
I’m unsurprised to see it in an EPUB; this is exactly how I’d expect it to be used. I found an article that explains the structure of an EPUB file, which told me that I can find the content.opf file by looking at the root-path element inside the mandatory META-INF/container.xml file which is in every EPUB. I wrote some code to find the content.opf file, then a few XPath expressions to extract the key fields, and I had the metadata I needed.

Generate a cover image

I sketched a simple cover design which shows the title and author. I wrote the first version of the drawing code in Pillow, because that’s what I’m familiar with. It was fine, but the code was quite flimsy – it didn’t wrap properly for long titles, and I couldn’t get custom fonts to work.

Later I rewrote the app in JavaScript, so I had access to the HTML canvas element. This is another tool that I hadn’t worked with before, so a fun chance to learn something new. The API felt fairly familiar, similar to other APIs I’ve used to build HTML elements. This time I did implement some line wrapping – there’s a measureText() API for canvas, so you can see how much space text will take up before you draw it. I break the text into words, and keep adding words to a line until measureText tells me the line is going to overflow the page. I have lots of ideas for how I could improve the line wrapping, but it’s good enough for now. I was also able to get fonts working, so I picked Georgia to match the font used for titles on AO3. Here are some examples:

I had several ideas for choosing the background colour. I’m trying to help my friend browse her collection of fic, and colour would be a useful way to distinguish things – so how do I use it? I realised I could get the fandom from the EPUB file, so I decided to use that. I use the fandom name as a seed to a random number generator, then I pick a random colour. This means that all the fics in the same fandom will get the same colour – for example, all the Star Wars stories are a shade of red, while Star Trek are a bluey-green. This was a bit harder than I expected, because it turns out that JavaScript doesn’t have a built-in seeded random number generator – I ended up using some snippets from a Stack Overflow answer, where bryc has written several pseudorandom number generators in plain JavaScript.

I didn’t realise until later, but I designed something similar to the placeholder book covers in the Apple Books app. I don’t use Apple Books that often so it wasn’t a deliberate choice to mimic this style, but clearly it was somewhere in my subconscious. One difference is that Apple’s app seems to be picking from a small selection of background colours, whereas my code can pick a much nicer variety of colours. Apple’s choices will have been pre-approved by a designer to look good, but I think mine is more fun.

Add the cover image to the EPUB

My first attempt to add a cover image used pandoc:

pandoc input.epub --output output.epub --epub-cover-image cover.jpeg

This approach was no good: although it added the cover image, it destroyed the formatting in the rest of the EPUB. This made it easier to find the fic, but harder to read once you’d found it.

An EPUB file I downloaded from AO3, before/after it was processed by pandoc.

So I tried to do it myself, and it turned out to be quite easy! I unzipped another EPUB which already had a cover image. I found the cover image in OPS/images/cover.jpg, and then I looked for references to it in content.opf.
I found two elements that referred to cover images:

<?xml version="1.0" encoding="UTF-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" unique-identifier="PrimaryID">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">
    <meta name="cover" content="cover-image"/>
    …
  </metadata>
  <manifest>
    <item id="cover-image" href="images/cover.jpg" media-type="image/jpeg" properties="cover-image"/>
    …
  </manifest>
</package>

This gave me the steps for adding a cover image to an EPUB file: add the image file to the zipped bundle, then add these two elements to the content.opf.

Where am I going to deploy this?

I wrote the initial prototype of this in Python, because that’s the language I’m most familiar with. Python has all the libraries I need:

- The zipfile module can unpack and modify the EPUB/ZIP
- The xml.etree or lxml modules can manipulate XML
- The Pillow library can generate images

I built a small Flask web app: you upload the EPUB to my server, my server does some processing, and sends the EPUB back to you. But for such a simple app, do I need a server?

I tried rebuilding it as a static web page, doing all the processing in client-side JavaScript. That’s simpler for me to host, and it doesn’t involve a round-trip to my server. That has lots of other benefits – it’s faster, less of a privacy risk, and doesn’t require a persistent connection. I love static websites, so can they do this? Yes! I just had to find a different set of libraries:

- The JSZip library can unpack and modify the EPUB/ZIP, and is the only third-party code I’m using in the tool
- Browsers include DOMParser for manipulating XML
- I’ve already mentioned the HTML <canvas> element for rendering the image

This took a bit longer because I’m not as familiar with JavaScript, but I got it working. As a bonus, this makes the tool very portable. Everything is bundled into a single HTML file, so if you download that file, you have the whole tool. If my friend finds this tool useful, she can save the file and keep a local copy of it – she doesn’t have to rely on my website to keep using it.

What should it look like?

My first design was very “engineer brain” – I just put the basic controls on the page. It was fine, but it wasn’t good. That might be okay, because the only person I need to be able to use this app is my friend – but wouldn’t it be nice if other people were able to use it? If they’re going to do that, they need to know what it is – most people aren’t going to read a 2,500 word blog post to understand a tool they’ve never heard of. (Although if you have read this far, I appreciate you!)

I started designing a proper page, including some explanations and descriptions of what the tool is doing. I got something that felt pretty good, including FAQs and acknowledgements, and I added a grey area for the part where you actually upload and download your EPUBs, to draw the user’s eye and make it clear this is the important stuff. But even with that design, something was missing. I realised I was telling you I’d create covers, but not showing you what they’d look like. Aha! I sat down and made up a bunch of amusing titles for fanfic and fanfic authors, so now you see a sample of the covers before you upload your first EPUB:

This makes it clearer what the app will do, and was a fun way to wrap up the project.

What did I learn from this project?

Don’t be scared of new file formats

My first instinct was to look for a third-party library that could handle the “complexity” of EPUB files.
Where am I going to deploy this?

I wrote the initial prototype of this in Python, because that’s the language I’m most familiar with. Python has all the libraries I need:

The zipfile module can unpack and modify the EPUB/ZIP
The xml.etree or lxml modules can manipulate XML
The Pillow library can generate images

I built a small Flask web app: you upload the EPUB to my server, my server does some processing, and it sends the EPUB back to you. But for such a simple app, do I need a server?

I tried rebuilding it as a static web page, doing all the processing in client-side JavaScript. That’s simpler for me to host, and it doesn’t involve a round-trip to my server. That has lots of other benefits – it’s faster, less of a privacy risk, and doesn’t require a persistent connection.

I love static websites, so can they do this? Yes! I just had to find a different set of libraries:

The JSZip library can unpack and modify the EPUB/ZIP, and is the only third-party code I’m using in the tool
Browsers include DOMParser for manipulating XML
I’ve already mentioned the HTML <canvas> element for rendering the image

This took a bit longer because I’m not as familiar with JavaScript, but I got it working.

As a bonus, this makes the tool very portable. Everything is bundled into a single HTML file, so if you download that file, you have the whole tool. If my friend finds this tool useful, she can save the file and keep a local copy of it – she doesn’t have to rely on my website to keep using it.

What should it look like?

My first design was very “engineer brain” – I just put the basic controls on the page. It was fine, but it wasn’t good. That might be okay, because the only person who needs to be able to use this app is my friend – but wouldn’t it be nice if other people could use it too? If they’re going to do that, they need to know what it is – most people aren’t going to read a 2,500 word blog post to understand a tool they’ve never heard of. (Although if you have read this far, I appreciate you!)

I started designing a proper page, including some explanations and descriptions of what the tool is doing. I got something that felt pretty good, including FAQs and acknowledgements, and I added a grey area for the part where you actually upload and download your EPUBs, to draw the user’s eye and make it clear this is the important stuff.

But even with that design, something was missing. I realised I was telling you I’d create covers, but not showing you what they’d look like. Aha! I sat down and made up a bunch of amusing titles for fanfic and fanfic authors, so now you see a sample of the covers before you upload your first EPUB:

This makes it clearer what the app will do, and it was a fun way to wrap up the project.

What did I learn from this project?

Don’t be scared of new file formats

My first instinct was to look for a third-party library that could handle the “complexity” of EPUB files. In hindsight, I’m glad I didn’t find one – it forced me to learn more about how EPUBs work, and I realised I could write my own code using built-in libraries. EPUB files are essentially ZIP files, and I only had basic needs, so I was able to write my own code. Because I didn’t rely on a library, I now know more about EPUBs, I have code that’s simpler and easier for me to understand, and I don’t have a dependency that may cause problems later.

There are definitely some file formats where I need existing libraries (I’m not going to write my own JPEG parser, for example) – but I should be more open to writing my own code, and not jump straight to adding a dependency.

Static websites can handle complex file manipulations

I love static websites and I’ve used them for a lot of tasks, but mostly for read-only display of information – not anything more complex or interactive. But modern JavaScript is very capable, and you can do a lot of things with it. Static pages aren’t just for static data.

One of the first things I made that got popular was find untagged Tumblr posts, which was built as a static website because that’s all I knew how to build at the time. Somewhere in the intervening years, I forgot just how powerful static sites can be. I want to build more tools this way.

Async JavaScript calls require careful handling

The JSZip library I’m using has a lot of async functions, and this is my first time using async JavaScript. I got caught out several times, because I forgot to wait for async calls to finish properly. For example, I’m using canvas.toBlob to render the image, which is asynchronous. I wasn’t waiting for it to finish, so the zip would be repackaged before the cover image was ready to add, and I got an EPUB with a missing image. Oops. I think I’ll always prefer the simplicity of synchronous code, but I’m sure I’ll get better at async JavaScript with practice.
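One way to avoid that particular trap is to make the render step awaitable. canvas.toBlob() reports its result through a callback rather than returning a promise, so a common pattern, sketched here with made-up helper names rather than the tool’s actual code, is to wrap it in a Promise and await it before repackaging the ZIP:

```javascript
// canvas.toBlob() hands its result to a callback, so wrap it in a Promise
// to make it awaitable.
function canvasToBlob(canvas, type = "image/jpeg", quality = 0.9) {
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("toBlob returned null"))),
      type,
      quality
    );
  });
}

// Wait for the cover to finish rendering *before* repackaging the zip;
// otherwise the EPUB goes out with a missing image.
async function repackageWithCover(zip, canvas, coverPath) {
  const coverBlob = await canvasToBlob(canvas);
  zip.file(coverPath, coverBlob);
  return zip.generateAsync({ type: "blob" });
}
```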
Final thoughts

I know my friend will find this helpful, and that feels great. Writing software that’s designed for one person is my favourite software to write. It’s not hyper-scale, it won’t launch the next big startup, and it’s usually not breaking new technical ground – but it is useful. I can see how I’m making somebody’s life better, and isn’t that what computers are for? If other people like it, that’s a nice bonus, but I’m really thinking about that one person. Normally the one person I’m writing software for is me, so it’s extra nice when I can do it for somebody else.

If you want to try this tool yourself, go to alexwlchan.net/my-tools/add-cover-to-ao3-epubs/
If you want to read the code, it’s all on GitHub.

I’ve been doing Dry January this year. One thing I missed was something for apéro hour, a beverage to mark the start of the evening. Something complex and maybe bitter, not like a drink you’d have with lunch. I found some good options.

Ghia sodas are my favorite. Ghia is an NA apéritif based on grape juice but with enough bitterness (gentian) and sourness (yuzu) to be interesting. You can buy a bottle and mix it with soda yourself, but I like the little cans with extra flavoring. The Ginger and the Sumac & Chili are both great.

Another thing I like is low-sugar fancy soda pops. Not diet drinks; they still have a little sugar, but typically 50 calories a can. De La Calle Tepache is my favorite. Fermented pineapple is delicious and they have some fun flavors. Culture Pop is also good.

A friend gave me the Zero book, a drinks cookbook from the fancy restaurant Alinea. This book is a little aspirational, but the recipes are doable; it’s just a lot of labor. Very fancy high-end drink mixing, really beautiful flavor ideas. The only thing I made was their gin substitute (mostly juniper extracted in glycerin) and it was too sweet for me. Need to find the right use for it; a martini definitely ain’t it.

An easier homemade drink is this Nonalcoholic Dirty Lemon Tonic. It’s basically a lemonade heavily flavored with salted preserved lemons, then mixed with tonic. I love the complexity and freshness of this drink and enjoy it on its own merits.

Finally, non-alcoholic beer has gotten a lot better in the last few years thanks to manufacturing innovations. I’ve been enjoying NA Black Butte Porter, Stella Artois 0.0, and Heineken 0.0. They basically all taste just like their alcoholic uncles, no compromise.

One thing to note about non-alcoholic substitutes is that they are not cheap. They’ve become a big high-end business. Expect to pay the same for an NA drink as one with alcohol, even though they aren’t taxed nearly as much.
The first time we had to evacuate Malibu this season was during the Franklin fire in early December. We went to bed with our bags packed, thinking they'd probably get it under control. But by 2am, the roaring blades of fire choppers shaking the house got us up. As we sped down the canyon towards Pacific Coast Highway (PCH), the fire had reached the ridge across from ours, and flames were blazing large out the car windows. It felt like we had left the evacuation a little too late, but they eventually did get Franklin under control before it reached us.

Humans have a strange relationship with risk and disasters. We're so prone to wishful thinking and bad pattern matching. I remember people being shocked when the flames jumped the PCH during the Woolsey fire in 2018. IT HAD NEVER DONE THAT! So several friends of ours had to suddenly escape a nightmare scenario, driving through burning streets, in heavy smoke, with literally their lives on the line. Because the past had failed to predict the future.

I fell into that same trap for a moment with the dramatic proclamations of wind and fire weather in the days leading up to January 7. Warning after warning of "extremely dangerous, life-threatening wind" coming from the City of Malibu, and that overly-bureaucratic-but-still-ominous "Particularly Dangerous Situation" designation. Because, really, how much worse could it be? Turns out, a lot.

It was a little before noon on the 7th when we first saw the big plumes of smoke rise from the Palisades fire. And immediately the pattern matching ran astray. Oh, it's probably just like Franklin. It's not big yet, they'll get it out. They usually do. Well, they didn't.

By the late afternoon, we had once more packed our bags, and by then it was also clear that things actually were different this time. Different worse. Different enough that even Santa Monica didn't feel like it was assured to be safe. So we headed far north, to be sure that we wouldn't have to evacuate again. Turned out to be a good move.

Because by now, into the evening, few people in the connected world hadn't started to see the catastrophic images emerging from the Palisades and Eaton fires. Well over 10,000 houses would ultimately burn. Entire neighborhoods leveled. Pictures that could be mistaken for World War II. Utter and complete destruction.

By the night of the 7th, the fire reached our canyon, and it tore through the chaparral and brush that'd been building since the last big fire that area saw in 1993. Out of some 150 houses in our immediate vicinity, nearly a hundred burned to the ground. Including the first house we moved to in Malibu back in 2009. But thankfully not ours. That's of course a huge relief. This was and is our Malibu Dream House. The site of that gorgeous home office I'm so fond to share views from. Our home.

But a house left standing in a disaster zone is still a disaster. The flames reached all the way up to the base of our construction, incinerated much of our landscaping, and devoured the power poles around it to dysfunction. We have burnt-out buildings every which way the eye looks. The national guard is still stationed at road blocks on the access roads. Utility workers are tearing down the entire power grid to rebuild it from scratch. It's going to be a long time before this is comfortably habitable again.

So we left. That in itself feels like defeat. There's an urge to stay put, and to help, in whatever helpless ways you can.
But with three school-age children who've already missed over a month's worth of learning from power outages, fire threats, actual fires, and now mudslide dangers, it was time to go. None of this came as a surprise, mind you. After Woolsey in 2018, Malibu life always felt like living on borrowed time to us. We knew it, even accepted it. Beautiful enough to be worth the risk, we said.

But even if it wasn't a surprise, it's still a shock. The sheer devastation, especially in the Palisades, went far beyond our normal range of comprehension. Bounded, as it always is, by past experiences.

Thus, we find ourselves back in Copenhagen. A safe haven for calamities of all sorts. We lived here for three years during the pandemic, so it just made sense to use it for refuge once more. The kids' old international school accepted them right back in, and past friendships were quickly rebooted.

I don't know how long it's going to be this time. And that's an odd feeling to have, just as America has been turning a corner, and just as the optimism is back in so many areas. Of the twenty years I've spent in America, this feels like the most exciting time to be part of the exceptionalism that the US of A offers.

And of course we still are. I'll still be in the US all the time on business, racing, and family trips. But it won't be exclusively so for a while, and it won't be from our Malibu Dream House. And that burns.
Thou shalt not suffer a flaky test to live, because it’s annoying, counterproductive, and dangerous: one day it might fail for real, and you won’t notice. Here’s what to do.
The ware for January 2025 is shown below. Thanks to brimdavis for contributing this ware! …back in the day when you would get wares that had “blue wires” in them… One thing I wonder about this ware is…where are the ROMs? Perhaps I’ll find out soon! Happy year of the snake!