In Part 1, I explored the hardware and development environment. In this article, I’ll cover the server-side components as well as coding and launching the first iteration of the site, along with some of the limitations I encountered when programming on such an old system.

Server environment

If you want to run your own TNFS site to host code for your Speccy, you’ll need the server component - TNFSD - which runs on Windows and Unix-like systems. The source for this is available at https://github.com/spectrumero/spectranet/tree/master/tnfs/tnfsd. There are binaries available (including for Windows), but I opted to build from source as I wanted to enable the Usage Log by adding USAGELOG=yes to the make arguments. I can run this binary with a simple systemd unit file on my Linux systems:

[Unit]
Description=TNFSD Service
ConditionPathExists=|/usr/bin
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/tnfsd /data/tnfs -c...
over a year ago


More from markround.com

AmigaGuide Reference Library

As I slowly but surely work towards the next release of my setcmd project for the Amiga (see the 68k branch for the gory details and my total noob-like C flailing around), I’ve made heavy use of documentation in the AmigaGuide format. Despite its age, it’s a great Amiga-native format and there’s a wealth of great information out there for things like the C API, as well as language guides and tutorials for tools like the Installer utility - and the AmigaGuide markup syntax itself. The only snag is, I had to have access to an Amiga (real or emulated), or install one of the various viewer programs on my laptops. Because, like many, I spend a lot of time in a web browser and occasionally want to check something on my mobile phone, this is less than convenient. Fortunately, there’s a great AmigaGuideJS online viewer which renders AmigaGuide format documents using Javascript. I’ve started building up a collection of useful developer guides and other files in my own reference library so that I can access this documentation whenever I’m not at my Amiga or am coding in my “modern” dev environment. It’s really just for my own personal use, but I’ll be adding to it whenever I come across a useful piece of documentation, so I hope it’s of some use to others as well!

And on a related note, I now have a “unified” code-base so that SetCmd now builds and runs on 68k-based OS 3.x systems as well as OS 4.x PPC systems like my X5000. I need to:

- Tidy up my code and fix all the “TODO” stuff
- Update the Installer to run on OS 3.x systems
- Update the documentation
- Build a new package and upload to Aminet/OS4Depot

Hopefully I’ll get that done in the next month or so. With the pressures of work and family life (and my other hobbies), progress has been a lot slower these last few years, but I’m still really enjoying working on Amiga code and it’s great to have a fun personal project that’s there for me whenever I want to hack away at something for the sheer hell of it. I’ve learned a lot along the way and the AmigaOS is still an absolute joy to develop for. I even brought my X5000 to the most recent Kickstart Amiga User Group BBQ/meetup and had a fun day working on the code with fellow Amigans and enjoying some classic gaming & demos - there was also a MorphOS machine there, which I think will be my next target as the codebase is slowly becoming more portable. Just got to find some room in the “retro cave” now… This stuff is addictive :)

14 hours ago 2 votes
Disqus - An Apology

Earlier today, I got an email alerting me to an angrier-than-usual comment on this website. It was a proper keyboard warrior rant accusing me of all sorts of misdeeds revolving around “forcing ads down people’s throats”. I replied saying that there had never been any ads on this site, never will be, and that I detest the enshittification trend of the modern Internet too. I also have found much of today’s web unbearable without tools such as Pi-Hole and a VPN; I use Firefox with adblockers whenever possible and, generally speaking, if a site forces me to disable my ad-blocker I’ll simply stop visiting.

Then I had a sinking feeling. Years ago, when I migrated this site from a PHP codebase to a static site generator, I’d enabled Disqus comments as it (at the time) seemed a reasonable alternative to dealing with all the spam, moderation, flame wars and handling of user data that comes with a comments engine. I’d never really paid that much attention to it as it never got that much use, and I certainly had never noticed any ads or other junk being injected in amongst my content. But then, I do go to somewhat extreme lengths to avoid that sort of crap, and I had a horrible feeling that maybe I’d simply never noticed anything untoward because I’d been blocking it.

So I downloaded Google Chrome (ugh!), browsed to this site without any kind of protection or blocking, and was confronted with an absolute abomination of link-farm “chumbox” adverts littering the bottom half of the page. Even though I’m currently on holiday, I immediately disabled the Disqus integration (yay for GitOps and CI/CD pipelines!) and can only apologise for the eyesore and god-awful mess that I was unwittingly inflicting on people. I’m not sure exactly when Disqus got that bad. It certainly wasn’t when I initially set it up, but I guess this is another example of why we can’t have nice things.

So to my angry anonymous poster: you were right, I apologise (but you were still kind of a dick about it), and holy crap do I despise what we’ve done to the web.

a year ago 35 votes
Amiga Systems Programming in 2023

Discussion on Hacker News
Discussion on lobste.rs

If you ever get a chance to look through the classic Amiga OS source-code still floating around some murky corners of the internet, it is a thing of beauty and astonishing capabilities. It’s an inspirational piece of computing history with unmatched capabilities for the time. Remember, this was all originally on a computer released in the 1980s with 512KB memory, a 7MHz 68000 16-bit CPU, and a single floppy drive with 880KB storage. On these limited specs, AmigaOS provided a pre-emptive multi-tasking operating system, a full set of GUI primitives and built-in “Workbench” interface, expansion card auto-configuration and a fully-featured filesystem with some unique and powerful capabilities. Although to be fair, the AmigaDOS parts do literally come from a different time (and possibly planet) - but more on that later. Oh and of course, there was that amazing chipset that meant even that humble base could do things like this - while PCs of the time were basically office boxes that occasionally bleeped and home computers still loaded games from cassette tape.

There’s understandably a lot of on-line interest in those parts of the Amiga, as they’re the most impressive in an obvious “wow!” way. But while that was what drew me to the Amiga when I was a kid (and the demo/cracking/bbs scene heavily influenced me), I’ve always been more of a systems geek at heart. I’ve always loved building tools and platforms, and have long been fascinated with the world of operating systems. Apart from reading through the source code (where that’s legally available, of course…) I think there’s no better way to explore and understand a system - and the mindset that produced it - than to develop for it.

What follows is a brain-dump of what I’ve learned about developing for the AmigaOS, from classic 68k-powered hardware to modern PowerPC systems like the X5000. I’ll cover development environments, modern workflows like CI builds on containerised infrastructure, distribution of packages and even a look back in time before C existed, thanks to AmigaDOS’s odd heritage.

Table Of Contents

- SetCmd
- SDK Updates
- Editors
- Native hardware
- Modern development
- AmigaDOS is weird
- Distribution
- Archive
- Documentation
- Installer
- File Sites
- External Documentation
- The way forward is back?
- Wrap-up

SetCmd

There are plenty of guides and videos on setting up an old-school game or demo-coding environment, but all of what follows is in the context of developing a systems tool in C, as that’s the language of AmigaOS. I started a real-life project partly to solve a small problem I had (switching between different versions of commands/tools at the Amiga CLI) but mainly to explore and dig deeper into the OS that influenced me so much as a teenager. SetCmd was the result, and is a very simple AmigaOS 4 PowerPC package. I’m working (very slowly) on porting it to run on classic AmigaOS and variants, but it has to be said this is my first time writing C in any meaningful capacity beyond wrestling with pointers at University. The source code is on GitLab if you want to take a look, but bear in mind that despite having owned Amigas since they were released, I’m a total newbie at most of this! I wrote it to have fun, explore the AmigaOS, set up build environments and figure out how to package it up for re-distribution. I have written a bit about my development setup in the past, but things have changed a fair bit since then - so without further ado, here’s my development environment and thoughts in 2023.
SDK Updates

Whilst things do move at a glacial pace in the world of AmigaOS 4/PPC, there have been a few big updates. A-EON’s Enhancer Software has had several releases, each adding new applications and developer APIs. As well as shipping their own versions of key Amiga OS applications and utilities, they now also install several core AmigaDOS command replacements. I tend to skip the installation of these as I’ve encountered a few edge cases where they don’t quite behave like the original OS 4 commands, but from recent discussions online it appears as if they are preparing for their own “clean-room” re-implementation and modernisation of Amiga OS 4 - presumably in order to free themselves from the eternal legal shenanigans with Hyperion et al. I’m not going to get into that raging dumpster fire here, but it’ll be interesting to see what comes of this.

On the Hyperion side, they released a big SDK Update for OS 4 including updated GCC toolchains, cross-compilers, profilers and loads of updated SDKs. Bearing in mind the ancient GCC 4.x toolchain that had been in place for years, it was great to have a more modern environment. On the classic Amiga front, Hyperion have also been pushing ahead with their updated AmigaOS 3.x OS for 68k-powered Amigas. Now on version 3.2.2.1 (I’ve got my boxed set of CD and floppy disks), there have also been several NDK (“Native Developer Kit”) updates providing updated APIs and tools. AmiKit also released a great “all-in-one” environment called DevPack which includes a huge range of languages (C, Assembly, Amos, Lua, Basic…) and NDKs all configured and ready to go. As a quick and easy way of setting up a development environment on classic Amigas, it’s hard to beat and saves a lot of manual downloading, configuring and glue-ing everything together.

Editors

In my last update 3 years ago, I’d more-or-less settled on using a GUI VIM derivative. While I’m still a die-hard VIM user at $DAYJOB, I really appreciate the modern comforts of e.g. VSCode. Thanks to the amazing work of George Sokianos, there is now an OS 4 package of the awesome Lite-XL editor along with a comprehensive set of plugins. Here’s what a hacking session on my X5000 looks like: In that session you can see, alongside Lite-XL, two terminal windows: I’m compiling and running the PowerPC and classic 68k versions of setcmd thanks to the cross-compilers and native “Petunia” JIT 68k emulation built into OS 4. More on that later, but while we’re talking about classic Amigas…

Native hardware

Emulation is fine, but nothing beats running on the actual hardware! In the lead image to this article you can see my treasured Amiga 1200 (with older 8-bit friend in the background running my TNFS site) expanded with an Apollo Vampire accelerator. An Ethernet or Wifi adapter is more or less essential, though, when it comes to transferring data around; fortunately the Vampire card is capable of network connectivity, high-resolution display and other niceties, but can easily be switched back to a more “stock” environment. I do still occasionally use my licensed copy of CubicIDE, but due to the age of this architecture I tend to keep my tools light and have settled on the simple Jano editor or sometimes CygnusEd for old time’s sake. My build toolchain is provided by DevPack and its included VBCC compiler, using the vbcc_target_m68k-amigaos target and a simple makefile.
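The makefile itself appears in the original post as a screenshot, so as a stand-in here’s a minimal sketch of what a VBCC makefile for that target typically looks like - the file names and flags are illustrative guesses, not the author’s actual build:

# Sketch only: vc is VBCC's frontend; +aos68k selects the m68k-amigaos target config
CC     = vc +aos68k
CFLAGS = -c99 -O1

setcmd: setcmd.o
	$(CC) $(CFLAGS) -o setcmd setcmd.o -lamiga -lauto

setcmd.o: setcmd.c
	$(CC) $(CFLAGS) -c setcmd.c -o setcmd.o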
Modern development

I’m (sadly) not always in front of my Amigas, but these days a modern laptop and cloud-native tools offer a lot of flexibility, particularly with the advanced state of emulation. I use VSCode as my editor, and a containerised cross-compiler toolchain built by - again! - George Sokianos to target both 68k and PPC platforms. I can build my project on any system capable of running OCI containers, e.g.

docker run \
  --user $UID:$GID \
  -v ./:/opt/code \
  walkero/docker4amigavbcc:latest-m68k-amd64 \
  make -f makefile.docker

Testing and running the code is made easy by the very advanced state of emulators. On Windows, WinUAE is the gold standard and can emulate everything from an original 1985-vintage A1000 up to modern systems with PowerPC accelerators, graphics cards and other devices. I have it multi-booting into clean Hyperion and Commodore/classic OS environments, with my source code directory shared as a virtual hard-drive: I can compile in seconds with Docker, and then straight away test the resulting binary in my emulated Amiga. Source-code is kept up to date between systems using Git; on the Amiga X5000 I use the port of SimpleGit, which is now bundled with the latest Hyperion SDK under SDK:c/sgit. I haven’t yet found a suitable Git solution for the classic Amigas, so on those I use a makeshift AmigaDOS shell script that uses Backup to copy files over to a network mount in an rsync-like fashion. I also keep meaning to test running AmigaOS 4.1 under QEMU, as support for this has greatly improved and looks a lot simpler than the currently convoluted process of getting “classic OS 4” running on an emulated 68k Amiga with a PPC accelerator configured. But for now, the WinUAE approach is working pretty well. Another advantage of having a containerised build-chain is that, combined with Git and Drone running on my personal Kubernetes clusters, I can build and package my code with a simple git push wherever I am.

AmigaDOS is weird

Although AmigaOS is frequently lauded for its sophistication and elegance, there is a notable “oddness” about the AmigaDOS components which handle storage I/O, devices and filesystems. The original developers of the Amiga had an ambitious DOS system planned, but in the end Commodore had to purchase the Tripos operating system and port parts of it to the Amiga due to deadline challenges. This mismatch is all the more pronounced as Tripos was written in BCPL - which in turn influenced the B programming language, which begat the C we all know and… well, tolerate, in my case. So it really is looking back into computing history, and remnants of this still remain even in the “modern” AmigaOS 4.x and other derivatives. Once you start diving into AmigaDOS code, you end up face-to-face with this legacy and need to convert back and forth between BCPL and C data-types. For example, BCPL strings are not NULL-terminated; instead they have a length in the first byte and then the characters follow. And pointers are similarly alien. This is why my code has stuff like this littered through it:

// Convert the new node to a BPTR
new_node_bptr = MKBADDR(new_node);

// Set the new path
cli->cli_CommandDir = new_node_bptr;

As the NDK include file dos.h explains: “All BCPL data must be long word aligned. BCPL pointers are the long word address (i.e byte address divided by 4 (>>2))”.
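To make that concrete, here’s a small illustrative sketch - my example, not from the original article - of converting a BCPL string to a C string using the length byte described above:

/* Hypothetical helper: copy a BCPL string (BSTR) into a C buffer.
 * A BSTR is a BPTR to a buffer whose first byte is the length. */
#include <exec/types.h>
#include <dos/dos.h>
#include <string.h>

void bstr_to_cstr(BSTR bstr, char *out, size_t outlen)
{
    const UBYTE *b = (const UBYTE *)BADDR(bstr); /* BPTR -> C pointer */
    size_t len = b[0];                           /* first byte holds the length */
    if (len >= outlen)
        len = outlen - 1;
    memcpy(out, b + 1, len);                     /* characters follow the length byte */
    out[len] = '\0';
}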
dos.h also includes helper functions like MKBADDR to help with the conversion, as most DOS system calls use BCPL pointers in arguments:

/* Convert BPTR to typical C pointer */
#define BADDR(x) ((APTR)((ULONG)(x) << 2))

/* Convert address into a BPTR */
#define MKBADDR(x) (((LONG)(x)) >> 2)

All in all, a fascinating look back into an obscure branch of computing history, but it hasn’t furthered my appreciation of pointers any!

Distribution

If you want to distribute your project to a wider audience, there are a few Amiga-specific things you can do that make life much easier for people running AmigaOS. These all make use of some pretty cool bits of Amiga technology and have been widely adopted by native software since their introduction in the early 1990s.

Archive

Just use LHA format archives. It’s the standard compression tool on Amigas and even though there are modern (and technically better) alternatives, .lha files are as ubiquitous as e.g. .zip or .tar.gz packages on other systems and can also be handled by low-spec machines. There are ports of CLI tools and GUIs available on all platforms to handle these archives and while the syntax can be a little different, it’s quite easy to use. See my AmigaDOS script that builds the SetCmd release artifact for a practical example.

Documentation

Along with a basic README.txt, it’s a great practice to distribute more detailed documentation in AmigaGuide format. Another area where the Amiga was way ahead of its time - AmigaGuide is a hyper-text format commonly used for application manuals, although people have even used it to publish disk magazines! Introduced in 1992, the bundled tools on any AmigaOS (or clone/derivative) can read and use AmigaGuide as standard, so you can include formatting, links and other content in your documentation. You can see the AmigaGuide docs I include with SetCmd in the screenshot above, or view the source to see what the syntax looks like. It’s pretty simple to write and is much like any other markdown or formatting code.

You can set basic parameters:

@Width 72
@wordwrap

Documents have “Nodes” which can be linked to, e.g.

@Node About "About SetCMD"
@{"About" Link About}

And formatting is much like HTML with opening and closing tags:

@{b}@{u} Bold and Underlined! @{uu}@{ub}

Installer

A fantastic addition to AmigaOS, the system Installer utility reads a developer-provided script which handles copying files, comparing versions, modifying system scripts and so on, in a standardized fashion. You can pass useful information and configuration which controls the Installer tool through standard Amiga tooltypes, and it uses a sort of LISP-ey syntax which again runs on all Amigas and derivatives. The syntax takes some getting used to, but the best source of documentation is the AmigaGuide documentation found in the Installer dev package. As an example, here’s an excerpt of the block of code which copies the setcmd program file over to a previously created directory:

(copyfiles
    (source "setcmd")
    (dest #dname)
    (prompt ("Copy SetCmd program file?"))
    (confirm "expert")
    (all)
    (help @copyfiles-help)
)

And for running commands, you can use the run command, along with cat (short for “concatenate”) to build up the command string:

(run (cat "C:MakeLink FROM " #dname "/cmds/setcmd/release TO SETCMD:setcmd SOFT"))

I found the best approach was to examine other Installer scripts to get a feel for common practices and idioms.
Here’s my simplistic Install_SetCmd script, and if you want to see something more complex, there’s always the AmigaOS installation scripts, or the Qt installer for OS 4, which taught me a lot.

File Sites

The Amiga doesn’t have a universal package manager, so files are usually downloaded manually and installed from the .lha archives. The go-to place for Amiga software for all systems is AmiNet. It’s the biggest repository of Amiga packages on the internet (and, at one point in the mid-90s, was actually the largest software repository of any platform) and now also supports hosting packages for Amiga OS 4, MorphOS and AROS alongside classic 68k fare. There are also smaller, platform-focused sites, e.g. os4depot.net for OS 4, morphos-storage.net for MorphOS and so on. Getting your package uploaded and accepted into the repository is broadly the same for all of these: you FTP your package up according to the naming standards, and supply a Readme file which provides the required metadata, like this excerpt in AmiNet format:

Short:        Switch between versions of software
Author:       amiga@markround.com (Mark Dastmalchi-Round)
Type:         util/shell
Version:      1.1.0
Architecture: ppc-amigaos >= 4.0.0
Distribution: Aminet

There are some Amiga-native GUI tools that assist with creating these files, but the specs (e.g. for os4depot.net) are pretty straightforward. And here’s the end result - my package available on os4depot.net and AmiNet.

External Documentation

When trying to learn or re-learn everything from C to AmigaDOS scripting, I found a few great resources. However, as with most things in Amiga-land, there’s an extraordinarily high “bus factor” for many websites, and my biggest recommendation is to use native tools or save local copies of anything you find! With that said, here are my essential Amiga bookmarks:

- Autodocs references. There are lots of websites where you can browse the auto-generated docs from the SDK header files, like this, with clickable links to jump between things. If I’m actually at an Amiga though, there are some useful native tools I prefer that can index and search through the local headers. The screenshot above shows the standard “AutoDoc Reader” freeware tool viewing the equivalent of a man page for the AmigaDOS library, alongside the AmigaGuide Installer language reference.
- http://www.pjhutchison.org/tutorial/amiga_c.html - amazing site. This is what inspired me to pick up a compiler again and get to work. There’s a great refresher on the C language itself, and then it dives into Amiga-specific coding with everything from low-level library access, sound and GUI programming and more.
- The Amiga OS Dev wiki is a goldmine, although it can take a little searching to find what you’re after. It’s mostly OS 4-focused, but because all Amiga systems share a common ancestor it’s usually pretty applicable to all platforms. Specific articles that I found useful include the OS 4 Migration Guide, Programming in the Amiga Environment, and Fundamental Types.
- https://www.amiga-news.de is a great news aggregator site for all things Amiga, and also has a bunch of exclusive articles on programming in the AmigaOS environment - like this recent article on GUI programming. Well worth a read - thanks to Daniel Reimann for pointing out the great content to me!
- And lastly, there are great threads I constantly found on amigans.net, which is where a lot of OS 4/“Next Gen” technical discussion happens. For classic systems, I found the Coders discussions on the English Amiga Board an invaluable resource.
The way forward is back?

When I started this project, it was really a way to get acquainted with my new X5000. Since then, I’ve decided to port my codebase back to the classic Amiga, as well as explore porting over to other Amiga-like systems such as MorphOS and AROS. This leads to some choices: from a packaging and distribution point of view, a 68k binary is pretty much the universal standard in Amiga land. It can run natively on classic Amigas, and modern systems like AmigaOS 4.x and MorphOS can run 68k binaries through translation. In a method similar to how Apple has handled the transition between processors in the Mac, it’s a pretty seamless experience, and I run a lot of classic 68k software on my X5000. As long as you aren’t “banging on the metal”, it works really well and integrates smoothly with the rest of the system.

The original 68k AmigaOS from Commodore is also pretty much the standard for source-code compatibility; code targeting this release can be built on most of the derivatives and later systems with very little (if any) modification. On AmigaOS 4, for example, you can simply add -D__USE_INLINE__ to your makefiles and in theory build from a common codebase. If you start the other way, as I did, and write initially targeting AmigaOS 4, it’s harder to port to other systems. For example, I originally followed the AmigaOS 4 programming style which favours prefixing library calls with interface names. This isn’t compatible with any other system, so the easiest way to port this to more Amiga-like platforms is to refactor this code back to the classic style of calling system functions. I do plan on building platform-specific binaries using #defines so I can, for example, use functions like dos.library/AddCmdPathNode on OS 4 that I otherwise have to manually implement, and while a lot of higher-level layers (like e.g. MUI for building graphical applications) are shared across platforms, this is probably the best bet for adding specific features from one platform that aren’t available on others.

Honestly though, at this point if you want to just get started, I’d have to suggest you target classic AmigaOS compatibility and build a 68k binary. I’d personally target AmigaOS 3.x, or 2.1 if you want to support a wider range of truly vintage systems; 1.x is fascinating from a retro-geek perspective but lacks a lot of the nice features that came with later systems. Everything else like the installer, archive format, documentation format and so on is cross-platform anyway and supported from OS 2.1 and up. MorphOS, AROS and OS 4 are really fun systems to explore, and I highly recommend checking them out if this article has whetted your appetite (and you can find a system to run them on!) but classic is the easiest way to get your code out to the wider world and, ironically, provides a better code-base for future porting and native binaries than my “working backwards” approach.

Wrap-up

So that’s about the sum total of what I’ve picked up over the last few years, anyway! I still enjoy working on my Amigas when I get some “hacking on code in the evening” time, and in particular I find AmigaOS 4 on my X5000 a refreshing blend of retro appeal and just about enough modern convenience to use it for development tasks, or even for writing this article itself. My A1200 continues to impress me with how much utility there is in such a small box and is a wonderful distraction from the modern era of bloated systems and applications.
It is perhaps an evolutionary dead-end, but it’s still a lot of fun and is one of the rare occasions these days where I feel actually in control of my computer. Working backwards in time from OS 4.1 to my classic Amigas has also really given me a greater insight and appreciation for what the Amiga engineers managed to pull off back then. If you’re in any way interested in computer history - or simply want to give something truly different a try - you should definitely check out AmigaOS. I hope this quick type BRAIN: > WEB: dump provides you with some good starting points, and maybe gets you coding too!

a year ago 38 votes
Haiku Package Management

Discussion on Hacker News
Discussion on lobste.rs

I’ve long been a die-hard BeOS fan and have been running the open-source recreation Haiku for many years. I think it’s interesting to explore the “alternative OS” world and consider some great ideas that for whatever reason never caught on elsewhere. The way Haiku handles package management and its alternative approach to an “immutable system” is one of those ideas I find really cool. Here’s what it looks like from a desktop user’s perspective - there’s all the usual stuff like an “app store”, package updater, repositories of packages and so on: It’s all there and works well - it’s easily as smooth as any desktop Linux experience. However, it’s the implementation details behind the scenes that make it so interesting to me.

Haiku takes a refreshingly new approach to package management: despite the user experience “feeling” like a traditional package manager - say, something like apt or dnf - it has seamless support for:

- Immutable system directories
- Rollback to previous states
- User-managed packages separated from system packages

And a whole bunch of infrastructure and tooling to support multiple package repositories, building packages from source and more. As a geek, I find it beautifully elegant and an idea that I’d love to see other platforms exploring. Let’s take a closer look, starting with how Haiku handles the split between “system” and “user” directories.

Filesystem layout

Running a df command from the Haiku shell (BASH is the default, but others like ZSH can be easily installed) shows the following:

~> df -h
 Mount             Type      Total     Free      Flags   Device
----------------- --------- --------- --------- ------- ------------------------
/boot             bfs       232.9 GiB 219.6 GiB QAM-P-W /dev/disk/scsi/0/0/0/0
/boot/system      packagefs   4.0 KiB   4.0 KiB QAM-P--
/boot/home/config packagefs   4.0 KiB   4.0 KiB QAM-P--

In Haiku, the root / is a ram-based virtual filesystem set up by the kernel when it’s booted. All other filesystems and devices are mounted under /, so /boot refers to the entire boot volume - and not a boot partition as is the case with e.g. most Linux installs. The /boot/system mountpoint is essentially the system directory which makes up the Haiku OS and installed applications. In the root directory, there are a bunch of symlinks that point there for convenience:

~> ls -l /
total 6
lrwxrwxrwx 1 user root   16 Feb 13 14:18 bin -> /boot/system/bin
drwxr-xr-x 1 user root 2048 Mar 31  2021 boot
drwxr-xr-x 1 user root    0 Feb 13 14:18 dev
lrwxrwxrwx 1 user root   25 Feb 13 14:18 etc -> /boot/system/settings/etc
lrwxrwxrwx 1 user root    5 Feb 13 14:18 Haiku -> /boot
lrwxrwxrwx 1 user root   26 Feb 13 14:18 packages -> /boot/system/package-links
lrwxrwxrwx 1 user root   12 Feb 13 14:18 system -> /boot/system
lrwxrwxrwx 1 user root   22 Feb 13 14:18 tmp -> /boot/system/cache/tmp
lrwxrwxrwx 1 user root   16 Feb 13 14:18 var -> /boot/system/var

So we can access e.g. /boot/system as /system and so on. Note that this mount point is backed by packagefs - a virtual filesystem that presents a merged view of all the packages installed. It’s sort of like an overlay filesystem (commonly used by container runtimes) in the Linux world. This mount-point is read only:

~> touch /system/test
touch: cannot touch '/system/test': Read-only file system

We can, however, write to anything in our home directory, which ensures a clean separation of system and user data/configuration.
NOTE: Due to its BeOS ancestry, Haiku is not currently a multi-user system, so there is only one /boot/home directory and the user is effectively the sole administrator account. As I understand it, the scaffolding is present to support multiple users, but it won’t be a priority until after R1 is released. However, this opens up the later possibility for packages to be installed and configured on a per-user basis.

The Package Daemon and packagefs

There’s a great overview of how all these components fit together in the developer documentation, but here’s my lay-person’s understanding of it… Haiku has many packages available, ranging from system components and development libraries to end-user applications. There are even recent ports of Wine, LibreOffice and KDE Applications. These are available as .hpkg files which are placed in a special packages directory, and the packagefs service mounts the contents into e.g. the /boot/system mountpoint. Unlike traditional package management tools, the contents of the package are not unarchived and copied into the filesystem; when you’re looking at /boot/system, you’re essentially looking at a collection of multiple packages and their contents, all dynamically mounted and made available on-the-fly as a read-only, virtual filesystem. This is why /boot/system is read-only: it’s not a “real” filesystem and only exists as a virtual union of all the different packages that are activated at that point in time.

NOTE: There are some exceptions to this, as some directories such as packages, cache, var etc. are writeable. These are called “shine-through directories”, and they reside on the underlying BFS volume. You can read more about these in the developer documentation. And while we’re geeking out, BeFS itself is definitely also worth investigating!

When packagefs detects that a new .hpkg file has been copied to the packages directory, it performs some checks, such as searching for dependencies or conflicts. If everything is OK it “activates” the package, and the contents are then available.

Installing a package

Here’s an example of installing a package using the CLI pkgman tool, showing what happens behind the scenes. First, let’s install an example package - in this case, something simple like the awesome CLI Pipe Viewer tool. It’s very much like running apt-get, yum or other similar tools on a Linux system:

~> pkgman install pv
Validating checksum for Haiku...done.
100% repochecksum-1 [64 bytes]
Validating checksum for HaikuPorts...done.
The following changes will be made:
  in system:
    install package pv-1.6.6-2 from repository HaikuPorts
100% pv-1.6.6-2-x86_64.hpkg [40.57 KiB]
Validating checksum for https://eu.hpkg.haiku-os.org/haikuports/master/x86_64/current/packages/pv-1.6.6-2-x86_64.hpkg...done.
[system] Applying changes ...
[system] Changes applied. Old activation state backed up in "state_2023-02-13_14:33:27"
[system] Cleaning up ...
[system] Done.

If we now look at /system/packages we can see the downloaded .hpkg file (which can also be inspected with CLI or Desktop tools):

~> ls -l /system/packages/pv-1.6.6-2-x86_64.hpkg
-rw-r--r-- 1 user root 41543 Feb 13 14:33 /system/packages/pv-1.6.6-2-x86_64.hpkg

Sure enough, the package daemon picked it up and mounted its contents so they are available through the merged packagefs under /boot/system.
Here’s where the pv binary now appears installed:

~> ls -l /system/bin/pv
-r-xr-xr-x 1 user root 70136 Aug  6  2018 /system/bin/pv

Uninstalling is as simple as pkgman uninstall pv, which removes the .hpkg file, and the contents then “disappear” from /boot/system.

State and Rollback

What’s really nice about this is that it enables a simple and elegant way of rolling back your system state to a previous set of packages or even OS releases. If you check the last output from the pkgman install pv command above, you’ll see there’s a message saying that an “old activation state” is being backed up to a newly-created directory. The package manager does this at the start of every transaction, and if we check that directory we’ll see a simple plain-text file called activated-packages:

~> head -n5 /system/packages/administrative/state_2023-02-13_14\:33\:27/activated-packages
netcat-1.10-4-x86_64.hpkg
ncurses6-6.3-2-x86_64.hpkg
mpfr3-3.1.6-6-x86_64.hpkg
mesa_swpipe-22.0.5-2-x86_64.hpkg
mesa_devel-22.0.5-2-x86_64.hpkg

This contains a record of the exact package versions that were installed at that time. If any packages were to be upgraded, then the state directory will also contain a set of packages to facilitate roll-back. Here’s what it looks like from the Haiku Desktop: Along with a listing of old packages in a state directory, the currently installed packages are shown on the top left. You can see, for example, that BASH in the current system is at 5.1 but in an old backup state from 2021 it was 5.0. You can also clean these old state directories up according to your needs - such as only keeping the last 30 days of state to preserve disk space.

Tying this all together, the Haiku Boot Loader can make use of activation lists and saved packages to get back to any particular state very easily - all it has to do is pick a backup package activation file, and only activate the packages found in it when booting. You can select a backup state from the “Select Boot Volume” option in the boot loader and your system will boot back to the previous state using the archived packages and activation list. Your virtual /boot/system directory will then be reverted to the desired state - and this all happens on-the-fly at boot time; there is no long roll-back process or destructive operations, and you can reboot into different states at will.

User packages

So far, I’ve just been talking about the packagefs mountpoint at /boot/system, but there’s also another mountpoint shown in my first df -h command which lives under /boot/home/config. Here’s what exists under that directory:

~/config> tree -d /boot/home/config -L 1
/boot/home/config
├── apps
├── cache
├── data
├── non-packaged
├── packages
├── settings
└── var

This is more-or-less a copy of the system directories, but specific to my user account. This means I can use this location to install software and test new packages out without “polluting” the system directories. And when Haiku gets full multi-user support, this also means each user can have their own distinct set of packages available. A couple of points of interest as an aside:

- For non-native software packages (e.g. something installed with ./configure && make install) you can use ~/config/non-packaged. This is somewhat similar to /usr/local on Unix systems.
- The settings directory is where Haiku-packaged software keeps local configuration. To use an analogy again, it’s sort of like $XDG_CONFIG_HOME on other Unix-like systems.
Haiku software tends to make use of this directory: for example, instead of SSH keeping configuration under ~/.ssh, it’s now stored under a directory in ~/config/settings:

~/config> ls -l settings/ssh/
total 32
-rw------- 1 user root 1238 Mar 31  2021 authorized_keys
-rw------- 1 user root  476 Mar 31  2021 config
-rw------- 1 user root 1679 Mar 31  2021 id_rsa
-rw------- 1 user root  410 Mar 31  2021 id_rsa.pub
-rw------- 1 user root 1790 Dec 18  2021 known_hosts

Building your own packages

I’ll finish off by showing an example of building a user package and installing it using Haikuporter and the community HaikuPorts collection. These tools can be used to build and customise a massive amount of software for Haiku, ranging from old BeOS tools to a complete LibreOffice installation.

NOTE: Think of building HaikuPorts from source like the FreeBSD “ports” collection. All the HaikuPorts packages are available through their repository as binaries through pkgman or the HaikuDepot graphical desktop application. As an end-user, you typically do not need to use Haikuporter unless you want to customise a package, create a new one, or submit patches/bug-fixes. I’m just using it as an example and because it’s cool!

After setting up HaikuPorts and Haikuporter by following the instructions, I can now build a package from source. I’ll use the GUI FTP client “FTP Positive” as an example:

~/haikuports/haiku-apps> alias hp="haikuporter -S -j8 --no-source-packages --get-dependencies"
~/haikuports/haiku-apps> hp ftppositive
Checking if any dependency-infos need to be updated ...
Looking for stale dependency-infos ...
----------------------------------------------------------------------
haiku-apps::ftppositive-1.2.2
/boot/home/haikuports/haiku-apps/ftppositive/ftppositive-1.2.2.recipe
----------------------------------------------------------------------
Skipping download of source for 48a5acdfe0981697018abf151a82802f4f3e500e.tar.gz
Validating checksum of 48a5acdfe0981697018abf151a82802f4f3e500e.tar.gz
Unpacking source of 48a5acdfe0981697018abf151a82802f4f3e500e.tar.gz
...
... Build output truncated ...
...
mimesetting files for package ftppositive-1.2.2-7-x86_64.hpkg ...
creating package ftppositive-1.2.2-7-x86_64.hpkg ...
----- Package Info ----------------
header size:                   80
heap size:                 211523
TOC size:                    1110
package attributes size:      695
total size:                211603
-----------------------------------
waiting for build package ftppositive-1.2.2-7 to be deactivated
grabbing ftppositive-1.2.2-7-x86_64.hpkg and moving it to /boot/home/haikuports/packages/ftppositive-1.2.2-7-x86_64.hpkg

The last lines of output show me that a .hpkg file has been created. I can then simply copy the package to my local ~/config/packages directory to “activate” it. Once again, packagefs will check it and then add the contents to the /boot/home/config directory. Here’s what it looks like on the Haiku desktop: Note the top two windows “stuck” together using the built-in stacking and tiling window manager! You can see the package has been activated after copying it into the user’s packages directory. It’s installed the application to /boot/home/config/apps/FtpPositive/ instead of the system directories, and has added a DeskBar menu entry which has also been picked up and “merged” into the global system Applications menu - where it appears alongside all the system-installed packages.

Conclusion

Like the rest of Haiku, package management is a refreshingly different and well thought-out experience.
Given how niche an OS it is, it still surprises me how polished the system is - I have it running natively on an old Lenovo ThinkStation SFF PC and it absolutely flies. Apart from my Amigas it’s easily my favourite “hacking on code in the evening” system and for many tasks is perfectly capable of being a daily-driver.

There’s a fascinating world of alternative OSes out there, many of them following entirely different paradigms than the current mainstream world of Windows/Mac/Linux. I’ve had the good fortune to be exposed to these different ways of thinking all the way back to my school days in the UK, where RISC OS was commonplace in the classroom and Amigas and Ataris ruled the playground. From this once-rich polyculture of competing processor architectures and platforms, the world seemingly consolidated itself around an x86 or ARM processor running something based on Windows or Unix. Which gets the job done, but it’s y’know… a little boring.

There are so many interesting - and some downright crazy - ideas that either failed in the commercial marketplace, were never given a proper chance (yeah, I’m still salty about Commodore killing the Amiga) or only found a hobbyist following. But here’s the thing: just because they never got wide adoption doesn’t mean they were wrong. I think it’s great that people are exploring new ideas, continuing the lineage of legacy systems, and simply creating stuff because they damn well feel like it and it’s fun! There’s lots of really interesting code out there and it makes you wonder what an alternate time-line would look like where Apple did use BeOS as the basis for their next operating system. What if VMS had “won” over UNIX? What if Commodore had known what to do with the Amiga?

Anyway, I hope this has whetted your appetite for all things Alt-OS, and Haiku in particular. If you haven’t yet tried it, there’s detailed instructions on their website, and I highly recommend taking it for a spin. Happy Hacking!

over a year ago 29 votes
A Splinter In Your Mind

Earlier this year, I finally discovered as an adult that I am “on the spectrum” with what used to be called Asperger’s Syndrome. The diagnosis helped make sense of a lot of things and has given me a greater insight into my “way of being in the world”. Whilst there are times I struggle with things that neuro-typical people usually find easy, or I find some situations draining, the condition has also brought me many positives which often get overlooked when talking about Autism Spectrum Disorders. True, it’s made life difficult or painful at times. But now that I’ve learned more about it and have had help along the way, I’ve realised that many of my abilities and passions that I write about on this site also stem from the “unusual” way my mind works. Having fun with music is one of those gifts and it’s also how I can best express myself. I started putting this latest track together as I was processing everything and blew off some steam along the way - it was a great experience and I feel like I ended this project on a very positive note. I guess this is also me going public and being open about having an ASD. There’s still a fair amount of stigma associated with these conditions, but frankly much of our favourite art, the modern world and the Internet as we know it probably wouldn’t exist without all the neuro-diverse folks who made much of it! We’re just wired a little differently - but wouldn’t life be boring if we were all the same? So here’s to all the Aspies of the world! The track is available to stream on YouTube, and all the usual stores.

over a year ago 26 votes

More in programming

That boolean should probably be something else

One of the first types we learn about is the boolean. It's pretty natural to use, because boolean logic underpins much of modern computing. And yet, it's one of the types we should probably be using a lot less of. In almost every single instance when you use a boolean, it should be something else. The trick is figuring out what "something else" is. Doing this is worth the effort. It tells you a lot about your system, and it will improve your design (even if you end up using a boolean).

There are a few possible types that come up often, hiding as booleans. Let's take a look at each of these, as well as the case where using a boolean does make sense. This isn't exhaustive—there are surely other types that can make sense, too.[1]

Datetimes

A lot of boolean data is representing a temporal event having happened. For example, websites often have you confirm your email. This may be stored as a boolean column, is_confirmed, in the database. It makes a lot of sense. But, you're throwing away data: when the confirmation happened. You can instead store when the user confirmed their email in a nullable column. You can still get the same information by checking whether the column is null. But you also get richer data for other purposes. Maybe you find out down the road that there was a bug in your confirmation process. You can use these timestamps to check which users would be affected by that, based on when their confirmation was stored.

This is the one I've seen discussed the most of all these. We run into it with almost every database we design, after all. You can detect it by asking if an action has to occur for the boolean to change values, and if values can only change one time. If you have both of these, then it really looks like it is a datetime being transformed into a boolean. Store the datetime!
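As a minimal sketch of that idea (the User struct and confirmed_at field are illustrative, not from the post), the boolean can always be derived from the richer data:

use std::time::SystemTime;

struct User {
    email: String,
    // None = not yet confirmed; Some(t) = confirmed at time t.
    confirmed_at: Option<SystemTime>,
}

impl User {
    fn is_confirmed(&self) -> bool {
        // the old boolean is just a projection of the datetime
        self.confirmed_at.is_some()
    }
}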
Enums

Much of the remaining boolean data indicates either what type something is, or its status. Is a user an admin or not? Check the is_admin column! Did that job fail? Check the failed column! Is the user allowed to take this action? Return a boolean for that, yes or no! These usually make more sense as an enum.

Consider the admin case: this is really a user role, and you should have an enum for it. If it's a boolean, you're going to eventually need more columns, and you'll keep adding on other statuses. Oh, we had users and admins, but now we also need guest users and we need super-admins. With an enum, you can add those easily.

enum UserRole {
    User,
    Admin,
    Guest,
    SuperAdmin,
}

And then you can usually use your tooling to make sure that all the new cases are covered in your code. With a boolean, you have to add more booleans, and then you have to make sure you find all the places where the old booleans were used and make sure they handle these new cases, too. Enums help you avoid these bugs.

Job status is one that's pretty clearly an enum as well. If you use booleans, you'll have is_failed, is_started, is_queued, and on and on. Or you could just have one single field, status, which is an enum with the various statuses. (Note, though, that you probably do want timestamp fields for each of these events—but you're still best having the status stored explicitly as well.) This begins to resemble a state machine once you store the status, and it means that you can make much cleaner code and analyze things along state transition lines.

And it's not just for storing in a database, either. If you're checking a user's permissions, you often return a boolean for that.

fn check_permissions(user: User) -> bool {
    false // no one is allowed to do anything i guess
}

In this case, true means the user can do it and false means they can't. Usually. I think. But you can really start to have doubts here, and with any boolean, because the application logic meaning of the value cannot be inferred from the type. Instead, this can be represented as an enum, even when there are just two choices.

enum PermissionCheck {
    Allowed,
    NotPermitted { reason: String },
}

As a bonus, though, if you use an enum? You can end up with richer information, like returning a reason for a permission check failing. And you are safe for future expansions of the enum, just like with roles.

You can detect when something should be an enum by a proliferation of booleans which are mutually exclusive or depend on one another. You'll see multiple columns which are all changed at the same time. Or you'll see a boolean which is returned and used for a long time. It's important to use enums here to keep your program maintainable and understandable.

Conditionals

But when should we use a boolean? I've mainly run into one case where it makes sense: when you're (temporarily) storing the result of a conditional expression for evaluation. This is in some ways an optimization, either for the computer (reuse a variable[2]) or for the programmer (make it more comprehensible by giving a name to a big conditional) by storing an intermediate value. Here's a contrived example where a boolean is used as an intermediate value.

fn calculate_user_data(user: User, records: RecordStore) {
    // this would be some nice long conditional,
    // but I don't have one. So variables it is!
    let user_can_do_this: bool = (a && b) && (c || !d);

    if user_can_do_this && records.ready() {
        // do the thing
    } else if user_can_do_this && records.in_progress() {
        // do another thing
    } else {
        // and something else!
    }
}

But even here in this contrived example, some enums would make more sense. I'd keep the boolean, probably, simply to give a name to what we're calculating. But the rest of it should be a match on an enum!

* * *

Sure, not every boolean should go away. There's probably no single rule in software design that is always true. But, we should be paying a lot more attention to booleans. They're sneaky. They feel like they make sense for our data, but they make sense for our logic. The data is usually something different underneath. By storing a boolean as our data, we're coupling that data tightly to our application logic. Instead, we should remain critical and ask what data the boolean depends on, and should we maybe store that instead? It comes easier with practice. Really, all good design does. A little thinking up front saves you a lot of time in the long run.

[1] I know that using an em-dash is treated as a sign of using LLMs. LLMs are never used for my writing. I just really like em-dashes and have a dedicated key for them on one of my keyboard layers.
[2] This one is probably best left to the compiler.

22 hours ago 3 votes
An Analysis of Links From The White House’s “Wire” Website

A little while back I heard about the White House launching their version of a Drudge Report style website called White House Wire. According to Axios, a White House official said the site’s purpose was to serve as “a place for supporters of the president’s agenda to get the real news all in one place”. So a link blog, if you will.

As a self-professed connoisseur of websites and link blogs, this got me thinking: “I wonder what kind of links they’re considering as ‘real news’ and what they’re linking to?” So I decided to do a quick analysis using Quadratic, a programmable spreadsheet where you can write code and return values to a 2d interface of rows and columns.

I wrote some JavaScript to:

- Fetch the HTML page at whitehouse.gov/wire
- Parse it with cheerio
- Select all the external links on the page
- Return a list of links and their headline text
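Here’s a rough sketch of that scraping step (assuming cheerio’s standard API; the selector and structure are illustrative, not the author’s exact code):

import * as cheerio from "cheerio";

// Fetch the page and extract external links with their headline text.
const res = await fetch("https://www.whitehouse.gov/wire/");
const html = await res.text();

const $ = cheerio.load(html);
const links = $("a[href^='http']") // external links only
  .map((_, el) => ({
    url: $(el).attr("href"),
    headline: $(el).text().trim(),
  }))
  .get();
// `links` is now an array of { url, headline } objects to return to the sheet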
The results? Here are the top 10 domains the White House Wire links to (by occurrence), from May 8 to June 24, 2025:

1. youtube.com (133)
2. foxnews.com (72)
3. thepostmillennial.com (67)
4. foxbusiness.com (66)
5. breitbart.com (64)
6. x.com (63)
7. reuters.com (51)
8. truthsocial.com (48)
9. nypost.com (47)
10. dailywire.com (36)

And here’s a word cloud of the most commonly recurring words in the link headlines: “trump” (343), “president” (145), “us” (134), “big” (131), “bill” (127), “beautiful” (113), “trumps” (92), “one” (72), “million” (57), “house” (56).

The data and these graphs are all in my spreadsheet, so I can open it up whenever I want to see the latest data and re-run my script to pull the latest from val.town. In response to the new data that comes in, the spreadsheet automatically parses it, turns it into links, and updates the graphs. Cool!

If you want to check out the spreadsheet — sorry! My API key for val.town is in it (“secrets management” is on the roadmap). But I created a duplicate where I inlined the data from the API (rather than the code which dynamically pulls it), which you can check out here at your convenience.

Implementation of optimized vector of strings in C++ in SumatraPDF

SumatraPDF is a fast, small, open-source PDF reader for Windows, written in C++. This article describes how I implemented the StrVec class for efficiently storing multiple strings.

Much ado about the strings

Strings are among the most used types in most programs. Arrays of strings are also used often. I count ~80 uses of StrVec in SumatraPDF code. This article describes how I implemented an optimized array of strings in SumatraPDF’s C++ code.

No STL for you

Why not use std::vector<std::string>? In SumatraPDF I don’t use STL. I don’t use std::string, I don’t use std::vector. For me it’s a symbol of my individuality, and my belief in personal freedom.

As described here, the minimum size of std::string on 64-bit machines is 32 bytes for msvc / gcc and 24 bytes for clang, with short strings (15 chars for msvc / gcc, 22 chars for clang) stored inline in that header. For longer strings we have more overhead:

- 32 / 24 bytes for the header
- memory allocator overhead: allocator metadata, plus padding due to rounding allocations to at least 16 bytes

There’s also std::vector overhead: for fast appends (push_back()), std::vector implementations over-allocate space.

Longer strings are allocated at random addresses, so they can be spread out in memory. That is bad for cache locality and often causes more slowness than executing lots of instructions.
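The header sizes above are implementation details that are easy to verify on your own toolchain. A quick probe (not from the SumatraPDF codebase, just a sanity check):

```cpp
// Prints the std::string header size and its inline (SSO) capacity.
// Expected: 32 / 15 on msvc and gcc (libstdc++), 24 / 22 on clang (libc++).
#include <stdio.h>
#include <string>

int main() {
    printf("sizeof(std::string) = %zu\n", sizeof(std::string));
    printf("inline capacity     = %zu\n", std::string().capacity());
}
```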
Design and implementation of StrVec

StrVec (vector of strings) solves all of the above:

- per-string overhead of only 8 bytes
- strings are laid out next to each other in memory

High-level design of StrVec:

- backing memory is allocated in singly-linked pages
- similar to std::vector, we start with a small page and increase the size of subsequent pages; this strikes a balance between the speed of accessing a string at a random index and wasted space
- unlike std::vector, we don’t reallocate memory (most of the time), which saves a memory copy when growing the backing space

Here’s all there is to StrVec:

    struct StrVec {
        StrVecPage* first = nullptr;
        int nextPageSize = 256;
        int size = 0;
    };

size is a cached number of strings; it could be calculated by summing the counts in all StrVecPages. nextPageSize is the size of the next StrVecPage. Most array implementations increase the size of the next allocation by 1.4x - 2x. I went with the following progression: 256 bytes, 1k, 4k, 16k, 32k, capped at 64k. I don’t have data behind those numbers; they feel right. A bigger page wastes more space; a smaller page makes random access slower, because to find the N-th string we need to traverse the linked list of StrVecPages. nextPageSize is exposed to allow the caller to optimize use: e.g. if it expects lots of strings, it can set nextPageSize to a large number.

StrVecPage

Most of the implementation is in StrVecPage. The big idea here is:

- we allocate a block of memory
- strings are allocated from the end of the memory block
- at the beginning of the memory block we build an index of strings; for each string we store a u32 size and a u32 offset of the string within the memory block, counting from the beginning of the block

The layout of the memory block is:

    StrVecPage struct
    { size u32; offset u32 }[]
    ... not yet used space
    strings

This is StrVecPage:

    struct StrVecPage {
        struct StrVecPage* next;
        int pageSize;
        int nStrings;
        char* currEnd;
    };

next is for the linked list of pages. Since pages can have various sizes, we need to record pageSize. nStrings is the number of strings in the page, and currEnd points to the end of the free space within the page.

Implementing operations

Appending a string

Appending a string at the end is the most common operation. To append a string:

- we calculate how much memory inside a page it’ll need: str::Len(string) + 1 + sizeof(u32) + sizeof(u32). The +1 is for 0-termination, for compatibility with C APIs that take char*; the two u32s are for size and offset
- if we have enough space in the last page, we add size and offset at the end of the index and copy the string in at the end, i.e. at currEnd - (str::Len(string) + 1)
- if there is not enough space in the last page, we allocate a new page

We can calculate how much space we have left with:

    int indexEntrySize = sizeof(u32) + sizeof(u32); // size + offset
    char* indexEnd = (char*)pageStart + sizeof(StrVecPage) + nStrings * indexEntrySize;
    int nBytesFree = (int)(currEnd - indexEnd);

(A toy end-to-end sketch of this layout appears at the end of this section.)

Removing a string

Removing a string is easy, because it doesn’t require moving memory inside the StrVecPage. We do nStrings-- and shift the index entries of the strings after the removed one. I don’t bother freeing the string memory within a page; it’s possible, but complicated enough that I decided to skip it. You can compact a StrVec to remove all overhead.

If you do not care about preserving the order of strings after removal, I have RemoveAtFast(), which uses a trick: instead of shifting the index entries of all strings after the removed one, it copies the single entry from the end into the slot of the string being removed.

Replacing a string or inserting in the middle

Replacing a string or inserting a string in the middle is more complicated, because there might not be enough space in the page for the string. When there is enough space, it’s as simple as an append. When there is not, I re-use the compacting capability: I compact all existing pages into a single page with extra space for the string, plus some extra space as an optimization for multiple inserts.

Iteration

Random access requires traversing the linked list. I think it’s still fast, because typically there aren’t many pages and we only need to look at a single nStrings value per page. After compaction to a single page, random access is as fast as it could ever be.

The C++ iterator is optimized for sequential access:

    struct iterator {
        const StrVec* v;
        int idx;
        // perf: cache page, idxInPage from prev iteration
        int idxInPage;
        StrVecPage* page;
    };

We cache the current state of iteration as page and idxInPage. To advance to the next string we advance idxInPage; if it exceeds nStrings, we advance to page->next.

Optimized search

Finding a string is as optimized as it can be without a hash table. Typically, to compare char* strings you have to call str::Eq(s, s2) for every string you compare against. That’s a function call, and it has to touch s2’s memory, which is bad for performance because it blows the cache. In StrVec I calculate the length of the string to find once, then traverse the size / offset index. Only when the size matches do I have to compare the actual strings. Most of the time we just look at offset / size entries that are in L1 cache, which is very fast.

Compacting

If you know that you’ll not be adding more strings to a StrVec, you can compact all pages into a single page with no overhead of empty space. It also speeds up random access, because we no longer have multiple pages to traverse to find the item at a given index.

Representing a nullptr char*

Even though I have a string class, I mostly use char* in SumatraPDF code. In that world an empty string and nullptr are 2 different things. To allow storing nullptr strings in a StrVec (and not turning them into empty strings on the way out), I use a trick: a special u32 value, kNullOffset, represents nullptr.
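To make the layout concrete, here is a toy single-page sketch of the idea. It is a simplified illustration rather than the actual SumatraPDF code: PageAlloc / PageAppend / PageGet / PageFind are invented names, there is only one fixed-size page, and growth, removal, and nullptr handling are all skipped:

```cpp
// Index entries (size + offset) grow from the front of the block,
// string bytes grow down from the back, as described above.
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

typedef uint32_t u32;

struct Page {
    int   pageSize;
    int   nStrings;
    char* currEnd;   // end of free space; strings are placed below this
};

struct IndexEntry { u32 size; u32 offset; };

static IndexEntry* PageIndex(Page* p) {
    return (IndexEntry*)((char*)p + sizeof(Page)); // index starts after header
}

Page* PageAlloc(int pageSize) {
    Page* p = (Page*)calloc(1, pageSize);
    p->pageSize = pageSize;
    p->currEnd = (char*)p + pageSize;
    return p;
}

// Returns the stored copy of s, or nullptr if the page is full.
char* PageAppend(Page* p, const char* s) {
    u32 len = (u32)strlen(s);
    char* indexEnd = (char*)PageIndex(p) + (p->nStrings + 1) * sizeof(IndexEntry);
    if (indexEnd + len + 1 > p->currEnd) return nullptr; // not enough room
    p->currEnd -= len + 1;                  // place string at the end
    memcpy(p->currEnd, s, len + 1);         // +1 copies the 0-terminator
    PageIndex(p)[p->nStrings++] = { len, (u32)(p->currEnd - (char*)p) };
    return p->currEnd;
}

char* PageGet(Page* p, int i) {
    return (char*)p + PageIndex(p)[i].offset;
}

// Size-first search: only memcmp when the cached length matches.
int PageFind(Page* p, const char* s) {
    u32 len = (u32)strlen(s);
    for (int i = 0; i < p->nStrings; i++) {
        IndexEntry e = PageIndex(p)[i];
        if (e.size == len && memcmp((char*)p + e.offset, s, len) == 0) return i;
    }
    return -1;
}

int main() {
    Page* p = PageAlloc(256);
    PageAppend(p, "hello");
    PageAppend(p, "world");
    printf("%s %d\n", PageGet(p, 1), PageFind(p, "world")); // world 1
    free(p);
}
```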
StrVec is a string pool allocator

In C++ you have to track the lifetime of each object: you allocate with malloc() or new, and when you no longer need the object, you call free() or delete. However, the lifetimes of allocations are often tied together. For example, in SumatraPDF an opened document is represented by a class, and many allocations done to construct that object last exactly as long as the object.

The idea of a pool allocator is that instead of tracking the lifetime of each allocation, you have a single allocator: you allocate objects with the same lifetime from that allocator and free them all with a single call. StrVec is a string pool allocator: all strings stored in a StrVec have the same lifetime.

Testing

In general I don’t advocate writing a lot of tests. However, low-level, tricky functionality like StrVec deserves decent test coverage, to ensure basic functionality works and to exercise the code for corner cases. I have 360 lines of tests for ~700 lines of implementation.

Potential tweaks and optimizations

When designing and implementing data structures, tradeoffs are aplenty.

Interleaving index and strings

I’m not sure if it would be faster, but instead of storing size and offset at the beginning of the page and strings at the end, we could store size / string pairs sequentially from the beginning. It would remove the need for the u32 offset, but would make random access slower.

Varint encoding of size and offset

Most strings are short, under 127 chars. Most offsets are under 16k. If we stored size and offset as variable-length integers, we would probably bring the average per-string overhead down from 8 bytes to ~4 bytes.
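A minimal sketch of that idea, using standard LEB128-style varints (this is an illustration of the tweak, not code from SumatraPDF):

```cpp
// Values < 128 need 1 byte, values < 16k need 2, so typical string sizes
// and offsets would take 1-2 bytes instead of a fixed 4.
#include <stdint.h>
#include <stdio.h>

int VarintEncode(uint32_t v, uint8_t* out) {
    int n = 0;
    while (v >= 0x80) {
        out[n++] = (uint8_t)((v & 0x7F) | 0x80); // low 7 bits + continuation bit
        v >>= 7;
    }
    out[n++] = (uint8_t)v; // final byte, continuation bit clear
    return n;              // number of bytes written
}

int main() {
    uint8_t buf[5];
    printf("%d %d %d\n",
           VarintEncode(100, buf),     // 1 byte  (typical string size)
           VarintEncode(300, buf),     // 2 bytes (typical offset)
           VarintEncode(40000, buf));  // 3 bytes (offsets above 16k)
}
```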
Implicit size

When strings are stored sequentially, size is implicit as the difference between the offset of a string and the offset of the next string. Not storing size would make insert and set operations more complicated and costly: we would have to compact and arrange strings in order every time.

Storing the index separately

We could store the size / offset index in a separate vector and use pages only to allocate string data. This would simplify insert and set operations. With the current design, if we run out of space inside a page we have to re-arrange memory; when the offset is stored outside the page, it can refer to any page, so insert and set could be as simple as append.

The evolution of StrVec

The design described here is the second implementation of StrVec. The one before it was simply a combination of str::Str (my std::string) for allocating all the strings and Vec<u32> (my std::vector) for storing the offset index. It had some flaws: appending a string could re-allocate memory within str::Str, so the caller couldn’t hold on to the returned char* pointer, because it could be invalidated. As a result, the API was awkward and potentially confusing: I was returning the offset of the string, so the string was str::Str.Data() + offset.

The new StrVec doesn’t re-allocate on Append, only (potentially) on InsertAt and SetAt. The most common case is append-only, which allows the caller to store the returned char* pointers. Before implementing StrVec I used Vec<char*>. Vec is my version of std::vector, and Vec<char*> would just store pointers to individually allocated strings.

Cost vs. benefit

I’m a pragmatist: I want to achieve the most with the least amount of code and the least amount of time and effort. While it might seem that I’m re-implementing things willy-nilly, I’m actually very mindful of the cost of writing code. Writing software is a balance between effort and resulting quality.

One of the biggest reasons SumatraPDF is so popular is that it’s fast and small. That’s an important aspect of software quality. When you double-click a PDF file in Explorer, SumatraPDF starts instantly. You can’t say that about many similar programs, or about other software in general. Keeping SumatraPDF small and fast is an ongoing focus, and it does take effort.

StrVec.cpp is only 705 lines of code. It took me several days to complete: maybe 2 days to write the code, and then some time here and there to fix the bugs. That being said, I didn’t start with this StrVec. For many years I used the obvious Vec<char*>. Then I implemented a somewhat optimized StrVec. And a few years after that I implemented this ultra-optimized version.

References

SumatraPDF is a small, fast, multi-format (PDF/eBook/Comic Book and more), open-source reader for Windows. The implementation described here: StrVec.cpp, StrVec.h, StrVec_ut.cpp. By the time you read this, the implementation could have been improved.

The parental dead end of consent morality

Consent morality is the idea that there are no higher values or virtues than allowing consenting adults to do whatever they please. As long as they're not hurting anyone, it's all good, and whoever might have a problem with that is by definition a bigot.

This was the overriding morality I picked up as a child of the 90s. From TV, movies, music, and popular culture. Fly your freak flag! Whatever feels right is right! It doesn't seem like much has changed since then. What a moral dead end.

I first heard the term consent morality as part of Louise Perry's critique of the sexual revolution. That in the context of hook-up culture, situationships, and falling birthrates, we have to wrestle with the fact that the sexual revolution — and its insistence that, say, a sky-high body count mustn't be taboo — has led society to a screwy dating market in the internet age that few people are actually happy with.

But the application of consent morality that I actually find even more troubling is towards parenthood. As is widely acknowledged now, we're in a bit of a birthrate crisis all over the world. And I think consent morality can help explain part of it.

I was reminded of this when I posted a cute video of a young girl so over-the-moon excited for her dad getting off work, to argue that you'd be crazy to trade that for some nebulous concept of "personal freedom". Predictably, consent morality immediately appeared in the comments: Some people just don't want children and that's TOTALLY OKAY and you're actually bad for suggesting they should!

No. It's the role of a well-functioning culture to guide people towards The Good Life. Not force, but guide. Nobody wants to be convinced by the morality police at the pointy end of a bayonet, but giving up on the whole idea of objective higher values and virtues is a nihilistic and cowardly alternative.

Humans are deeply mimetic creatures. It's imperative that we celebrate what's good, true, and beautiful, such that these ideals become collective markers for morality. Such that they guide behavior.

I don't think we've done a good job of doing that with parenthood in the last thirty-plus years. In fact, I'd argue we've done just about everything to undermine the cultural appeal of the simple yet divine satisfaction of child rearing (and by extension maligned the square family unit with mom, dad, and a few kids). Partly out of a coordinated campaign against the family unit as some sort of trad (possibly fascist!) identity marker in a long-waged culture war, but perhaps just as much out of the banal denigration of how boring and limiting it must be to carry such simple burdens as being a father or a mother in modern society.

It's no wonder that if you incessantly focus on how expensive it is, how little sleep you get, how terrifying the responsibility is, and how much stress is involved with parenthood, it doesn't seem all that appealing!

This is where Jordan Peterson does his best work. In advocating for the deeper meaning of embracing burden and responsibility. In diagnosing that much of our modern malaise does not come from carrying too much, but from carrying too little. That a myopic focus on personal freedom — the nights out, the "me time", the money saved — is a spiritual mirage: you think you want the paradise of nothing ever being asked of you, but it turns out to be the hell of nobody ever needing you.

Whatever the cause, I think part of the cure is for our culture to reembrace the virtue and the value of parenthood without reservation.
To stop centering the margins and their pathologies. To start centering the overwhelming middle, where most people make for good parents, and will come to see that role as the most meaningful part they've played in their time on this planet.

But this requires giving up on consent morality as the only way to find our path to The Good Life. It involves taking a moral stance that some ways of living are better than other ways of living for the broad many. That parenthood is good. That we need more children, not just for the literal survival of civilization, but also for the collective motivation to guard against the bad, the false, and the ugly.

There's more to life than what you feel like doing in the moment. The worst thing in the world is not to have others ask more of you. Giving up on the total freedom of the unmoored life is a small price to pay for finding the deeper meaning in a tethered relationship with continuing a bloodline that's been drawn for hundreds of thousands of years before it came to you.

You're never going to be "ready" before you take the leap. If you keep waiting, you'll wait until the window has closed, and all you see is regret. Summon a bit of bravery, don't overthink it, and do your part for the future of the world. It's 2.1 or bust, baby!
