Discussion on Hacker News Discussion on lobste.rs

I've long been a die-hard BeOS fan and have been running the open-source recreation Haiku for many years. I think it's interesting to explore the "alternative OS" world and consider some great ideas that, for whatever reason, never caught on elsewhere. The way Haiku handles package management and its alternative approach to an "immutable system" is one of those ideas I find really cool. Here's what it looks like from a desktop user's perspective - there's all the usual stuff like an "app store", package updater, repositories of packages and so on. It's all there and works well - it's easily as smooth as any desktop Linux experience. However, it's the implementation details behind the scenes that make it so interesting to me. Haiku takes a refreshingly new approach to package management: despite the user experience "feeling" like a traditional package manager - say, something like apt or dnf - it has seamless support...
over a year ago 43 votes


More from markround.com

AmigaGuide Reference Library

As I slowly but surely work towards the next release of my setcmd project for the Amiga (see the 68k branch for the gory details and my total noob-like C flailing around), I've made heavy use of documentation in the AmigaGuide format. Despite its age, it's a great Amiga-native format, and there's a wealth of great information out there for things like the C API, as well as language guides and tutorials for tools like the Installer utility - and the AmigaGuide markup syntax itself.

The only snag is, I had to have access to an Amiga (real or emulated), or install one of the various viewer programs on my laptops. Because, like many, I spend a lot of time in a web browser and occasionally want to check something on my mobile phone, this was less than convenient. Fortunately, there's a great AmigaGuideJS online viewer which renders AmigaGuide format documents using Javascript. I've started building up a collection of useful developer guides and other files in my own reference library so that I can access this documentation whenever I'm not at my Amiga or am coding in my "modern" dev environment. It's really just for my own personal use, but I'll be adding to it whenever I come across a useful piece of documentation, so I hope it's of some use to others as well!

And on a related note, I now have a "unified" code-base, so SetCmd builds and runs on 68k-based OS 3.x systems as well as OS 4.x PPC systems like my X5000. I need to:

- Tidy up my code and fix all the "TODO" stuff
- Update the Installer to run on OS 3.x systems
- Update the documentation
- Build a new package and upload to Aminet/OS4Depot

Hopefully I'll get that done in the next month or so. With the pressures of work and family life (and my other hobbies), progress has been a lot slower these last few years, but I'm still really enjoying working on Amiga code and it's great to have a fun personal project that's there for me whenever I want to hack away at something for the sheer hell of it. I've learned a lot along the way and the AmigaOS is still an absolute joy to develop for. I even brought my X5000 to the most recent Kickstart Amiga User Group BBQ/meetup and had a fun day working on the code with fellow Amigans and enjoying some classic gaming & demos - there was also a MorphOS machine there, which I think will be my next target as the codebase is slowly becoming more portable. Just got to find some room in the "retro cave" now… This stuff is addictive :)

2 months ago 41 votes
Disqus - An Apology

Earlier today, I got an email alerting me to an angrier-than-usual comment on this website. It was a proper keyboard-warrior rant accusing me of all sorts of misdeeds revolving around "forcing ads down people's throats". I replied saying that there had never been any ads on this site, never will be, and that I detest the enshittification trend of the modern Internet too. I also find much of today's web unbearable without tools such as Pi-Hole and a VPN; I use Firefox with adblockers whenever possible and, generally speaking, if a site forces me to disable my ad-blocker I'll simply stop visiting.

Then I had a sinking feeling. Years ago, when I migrated this site from a PHP codebase to a static site generator, I'd enabled Disqus comments as it (at the time) seemed a reasonable alternative to dealing with all the spam, moderation, flame wars and handling of user data that comes with a comments engine. I'd never really paid that much attention to it as it never got that much use, and I certainly had never noticed any ads or other junk being injected in amongst my content. But then, I go to somewhat extreme lengths to avoid that sort of crap, and I had a horrible feeling that maybe I'd simply never noticed anything untoward because I'd been blocking it.

So I downloaded Google Chrome (ugh!), browsed to this site without any kind of protection or blocking, and was confronted with an absolute abomination of link-farm "chumbox" adverts littering the bottom half of the page. Even though I'm currently on holiday, I immediately disabled the Disqus integration (yay for GitOps and CI/CD pipelines!) and can only apologise for the eyesore and god-awful mess that I was unwittingly inflicting on people. I'm not sure exactly when Disqus got that bad. It certainly wasn't like that when I initially set it up, but I guess this is another example of why we can't have nice things.

So, to my angry anonymous poster: you were right, I apologise (but you were still kind of a dick about it), and holy crap do I despise what we've done to the web.

a year ago 48 votes
Amiga Systems Programming in 2023

Discussion on Hacker News Discussion on lobste.rs

If you ever get a chance to look through the classic Amiga OS source-code still floating around some murky corners of the internet, it is a thing of beauty and astonishing capabilities. It's an inspirational piece of computing history with unmatched capabilities for the time. Remember, this was all originally on a computer released in the 1980s with 512KB of memory, a 7MHz 16-bit 68000 CPU, and a single floppy drive with 880KB of storage. On these limited specs, AmigaOS provided a pre-emptive multi-tasking operating system, a full set of GUI primitives and built-in "Workbench" interface, expansion card auto-configuration and a fully-featured filesystem with some unique and powerful capabilities. Although, to be fair, the AmigaDOS parts do literally come from a different time (and possibly planet) - but more on that later. Oh, and of course, there was that amazing chipset that meant even that humble base could do things like this - while PCs of the time were basically office boxes that occasionally bleeped and home computers still loaded games from cassette tape.

There's understandably a lot of on-line interest in those parts of the Amiga, as they're the most impressive in an obvious "wow!" way. But while that was what drew me to the Amiga when I was a kid (and the demo/cracking/bbs scene heavily influenced me), I've always been more of a systems geek at heart. I've always loved building tools and platforms, and have long been fascinated with the world of operating systems. Apart from reading through the source code (where that's legally available, of course…), I think there's no better way to explore and understand a system - and the mindset that produced it - than to develop for it.

What follows is a brain-dump of what I've learned about developing for the AmigaOS, from classic 68k-powered hardware to modern PowerPC systems like the X5000. I'll cover development environments, modern workflows like CI builds on containerised infrastructure, distribution of packages and even a look back in time before C existed, thanks to AmigaDOS's odd heritage.

Table Of Contents

- SetCmd
- SDK Updates
- Editors
- Native hardware
- Modern development
- AmigaDOS is weird
- Distribution
  - Archive
  - Documentation
  - Installer
  - File Sites
- External Documentation
- The way forward is back?
- Wrap-up

SetCmd

There are plenty of guides and videos on setting up an old-school game or demo-coding environment, but all of what follows is in the context of developing a systems tool in C, as that's the language of AmigaOS. I started a real-life project partly to solve a small problem I had (switching between different versions of commands/tools at the Amiga CLI) but mainly to explore and dig deeper into the OS that influenced me so much as a teenager. SetCmd was the result, and is a very simple AmigaOS 4 PowerPC package. I'm working (very slowly) on porting it to run on classic AmigaOS and variants, but it has to be said this is my first time writing C in any meaningful capacity beyond wrestling with pointers at University. The source code is on GitLab if you want to take a look, but bear in mind that despite having owned Amigas since they were released, I'm a total newbie at most of this! I wrote it to have fun, explore the AmigaOS, set up build environments and figure out how to package it up for re-distribution. I have written a bit about my development setup in the past, but things have changed a fair bit since then - so without further ado, here's my development environment and thoughts in 2023.
SDK Updates

Whilst things do move at a glacial pace in the world of AmigaOS 4/PPC, there have been a few big updates. A-EON's Enhancer Software has had several releases, each adding new applications and developer APIs. As well as shipping their own versions of key Amiga OS applications and utilities, they are also now installing several core AmigaDOS command replacements. I tend to skip the installation of these, as I've encountered a few edge cases where they don't quite behave like the original OS 4 commands, but from recent discussions online it appears as if they are preparing for their own "clean-room" re-implementation and modernisation of Amiga OS 4, presumably in order to free themselves from the eternal legal shenanigans with Hyperion et al. I'm not going to get into that raging dumpster fire here, but it'll be interesting to see what comes of this.

On the Hyperion side, they released a big SDK Update for OS 4, including updated GCC toolchains, cross-compilers, profilers and loads of updated SDKs. Bearing in mind the ancient GCC 4.x toolchain that had been in place for years, it was great to have a more modern environment.

On the classic Amiga front, Hyperion have also been pushing ahead with their updated AmigaOS 3.x for 68k-powered Amigas. Now on version 3.2.2.1 (I've got my boxed set of CD and floppy disks), there have also been several NDK ("Native Developer Kit") updates providing updated APIs and tools. AmiKit also released a great "all-in-one" environment called DevPack, which includes a huge range of languages (C, Assembly, Amos, Lua, Basic…) and NDKs, all configured and ready to go. As a quick and easy way of setting up a development environment on classic Amigas, it's hard to beat, and it saves a lot of manual downloading, configuring and glue-ing everything together.

Editors

In my last update 3 years ago, I'd more-or-less settled on using a GUI VIM derivative. While I'm still a die-hard VIM user at $DAYJOB, I really appreciate the modern comforts of e.g. VSCode. Thanks to the amazing work of George Sokianos, there is now an OS 4 package of the awesome Lite-XL editor, along with a comprehensive set of plugins. Here's what a hacking session on my X5000 looks like: in that session you can see, alongside Lite-XL, two terminal windows where I'm compiling and running the PowerPC and classic 68k versions of setcmd, thanks to the cross-compilers and the native "Petunia" JIT 68k emulation built into OS 4. More on that later, but while we're talking about classic Amigas…

Native hardware

Emulation is fine, but nothing beats running on the actual hardware! In the lead image to this article you can see my treasured Amiga 1200 (with an older 8-bit friend in the background running my TNFS site) expanded with an Apollo Vampire accelerator. An Ethernet or WiFi adapter is more or less essential when it comes to transferring data around, and fortunately the Vampire card is capable of network connectivity, high-resolution display and other niceties, but can easily be switched back to a more "stock" environment. I do still occasionally use my licensed copy of CubicIDE, but due to the age of this architecture I tend to keep my tools light, and have settled on the simple Jano editor, or sometimes CygnusEd for old time's sake. My build toolchain is provided by DevPack and its included VBCC compiler.
I use the vbcc_target_m68k-amigaos target with this makefile to build.

Modern development

I'm (sadly) not always in front of my Amigas, but these days a modern laptop and cloud-native tools offer a lot of flexibility, particularly with the advanced state of emulation. I use VSCode as my editor, and a containerised cross-compiler toolchain built by - again! - George Sokianos to target both 68k and PPC platforms. I can build my project on any system capable of running OCI containers, e.g.

docker run \
  --user $UID:$GID \
  -v ./:/opt/code \
  walkero/docker4amigavbcc:latest-m68k-amd64 \
  make -f makefile.docker

Testing and running the code is made easy by the very advanced state of emulators. On Windows, WinUAE is the gold standard and can emulate everything from an original 1985-vintage A1000 up to modern systems with PowerPC accelerators, graphics cards and other devices. I have it multi-booting into clean Hyperion and Commodore/classic OS environments, with my source code directory shared as a virtual hard-drive. I can compile in seconds with Docker, and then straight away test the resulting binary in my emulated Amiga.

Source-code is kept up to date between systems using Git; on the Amiga X5000 I use the port of SimpleGit, which is now bundled with the latest Hyperion SDK under SDK:c/sgit. I haven't yet found a suitable Git solution for the classic Amigas, so on those I use a makeshift AmigaDOS shell script that uses Backup to copy files over to a network mount in an rsync-like fashion. I also keep meaning to test running AmigaOS 4.1 under QEMU, as support for this has greatly improved and looks a lot simpler than the currently convoluted process of getting "classic OS 4" running on an emulated 68k Amiga with a PPC accelerator configured. But for now, the WinUAE approach is working pretty well. Another advantage of having a containerised build-chain is that, combined with Git and Drone running on my personal Kubernetes clusters, I can build and package my code with a simple git push wherever I am.

AmigaDOS is weird

Although AmigaOS is frequently lauded for its sophistication and elegance, there is a notable "oddness" about the AmigaDOS components which handle storage I/O, devices and filesystems. The original developers of the Amiga had an ambitious DOS system planned, but in the end Commodore had to purchase the Tripos operating system and port parts of it to the Amiga due to deadline challenges. This mismatch is all the more pronounced as Tripos was written in BCPL - which, in turn, influenced the B programming language which begat the C we all know and… well, tolerate, in my case. So it really is looking back into computing history, and remnants of this still remain even in the "modern" AmigaOS 4.x and other derivatives.

Once you start diving into AmigaDOS code, you end up face-to-face with this legacy and need to convert back and forth between BCPL and C data types. For example, BCPL strings are not NUL-terminated; instead, they have a length in the first byte and then the characters follow. And pointers are similarly alien. This is why my code has stuff like this littered through it:

// Convert the new node to a BPTR
new_node_bptr = MKBADDR(new_node);

// Set the new path
cli->cli_CommandDir = new_node_bptr;

As the NDK include file dos.h explains: "All BCPL data must be long word aligned. BCPL pointers are the long word address (i.e. byte address divided by 4 (>>2))".
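To make that conversion concrete, here's a minimal sketch (my own illustration, not code from SetCmd) of copying a BCPL string into an ordinary C string. It assumes the BSTR/BPTR types and the BADDR macro from the NDK's <dos/dos.h>, shown just below:

#include <string.h>
#include <dos/dos.h>   /* BSTR, BPTR and the BADDR/MKBADDR macros */

/* Sketch only: copy a BCPL string into a C buffer. A BSTR is a BPTR
 * to a length-prefixed string: the first byte holds the length and
 * there is no terminator, so we copy that many bytes and add the
 * '\0' ourselves. */
void bstr_to_cstr(BSTR bstr, char *out, size_t outlen)
{
    const unsigned char *src = (const unsigned char *)BADDR(bstr);
    size_t len = src[0];           /* length lives in the first byte */

    if (len >= outlen)             /* clamp to the destination buffer */
        len = outlen - 1;
    memcpy(out, src + 1, len);     /* characters start at offset 1 */
    out[len] = '\0';
}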
dos.h also provides helper macros like BADDR and MKBADDR for these conversions, as most DOS system calls take BCPL pointers in their arguments:

/* Convert BPTR to typical C pointer */
#define BADDR(x) ((APTR)((ULONG)(x) << 2))

/* Convert address into a BPTR */
#define MKBADDR(x) (((LONG)(x)) >> 2)

All in all, a fascinating look back into an obscure branch of computing history, but it hasn't furthered my appreciation of pointers any!

Distribution

If you want to distribute your project to a wider audience, there are a few Amiga-specific things you can do that make life much easier for people running AmigaOS. These all make use of some pretty cool bits of Amiga technology and have been widely adopted by native software since their introduction in the early 1990s.

Archive

Just use LHA format archives. It's the standard compression tool on Amigas and, even though there are modern (and technically better) alternatives, .lha files are as ubiquitous as e.g. .zip or .tar.gz packages on other systems and can also be handled by low-spec machines. There are ports of CLI tools and GUIs available on all platforms to handle these archives and, while the syntax can be a little different, it's quite easy to use. See my AmigaDOS script that builds the SetCmd release artifact for a practical example.

Documentation

Along with a basic README.txt, it's a great practice to distribute more detailed documentation in AmigaGuide format. This is another area where the Amiga was way ahead of its time - AmigaGuide is a hyper-text format commonly used for application manuals, although people have even used it to publish disk magazines! Introduced in 1992, the bundled tools on any AmigaOS (or clone/derivative) can read and use AmigaGuide as standard, so you can include formatting, links and other content in your documentation. You can see the AmigaGuide docs I include with SetCmd in the screenshot above, or view the source to see what the syntax looks like. It's pretty simple to write and is much like any other markdown or formatting code.

You can set basic parameters:

@Width 72
@wordwrap

Documents have "Nodes" which can be linked to, e.g.

@Node About "About SetCMD"
@{"About" Link About}

And formatting is much like HTML, with opening and closing tags:

@{b}@{u} Bold and Underlined! @{uu}@{ub}

Installer

A fantastic addition to AmigaOS, the system Installer utility reads a developer-provided script which handles copying files, comparing versions, modifying system scripts and so on, in a standardized fashion. You can pass useful information and configuration which controls the Installer tool through standard Amiga tooltypes, and it uses a sort of LISP-ey syntax which, again, runs on all Amigas and derivatives. The syntax takes some getting used to, but the best source of documentation is the AmigaGuide documentation found in the Installer dev package.

As an example, here's an excerpt of the block of code which copies the setcmd program file over to a previously created directory:

(copyfiles
  (source "setcmd")
  (dest #dname)
  (prompt ("Copy SetCmd program file?"))
  (confirm "expert")
  (all)
  (help @copyfiles-help)
)

And for running commands, you can use the run command, along with cat (short for "concatenate") to build up the command string:

(run (cat "C:MakeLink FROM " #dname "/cmds/setcmd/release TO SETCMD:setcmd SOFT"))

I found the best approach was to examine other Installer scripts to get a feel for common practices and idioms.
Here's my simplistic Install_SetCmd script, and if you want to see something more complex, there's always the AmigaOS installation scripts, or the Qt installer for OS 4, which taught me a lot.

File Sites

The Amiga doesn't have a universal package manager, so files are usually downloaded manually and installed from the .lha archives. The go-to place for Amiga software for all systems is Aminet. It's the biggest repository of Amiga packages on the internet (and, at one point in the mid-90s, was actually the largest software repository of any platform) and now also supports hosting packages for Amiga OS 4, MorphOS and AROS alongside classic 68k fare. There are also smaller, platform-focused sites for each system, e.g. os4depot.net for OS 4, morphos-storage.net for MorphOS and so on. Getting your package uploaded and accepted into the repository is broadly the same for all of these: you FTP your package up according to the naming standards, and supply a Readme file which provides the required metadata, like this excerpt in Aminet format:

Short: Switch between versions of software
Author: amiga@markround.com (Mark Dastmalchi-Round)
Type: util/shell
Version: 1.1.0
Architecture: ppc-amigaos >= 4.0.0
Distribution: Aminet

There are some Amiga-native GUI tools that assist with creating these files, but the specs (e.g. for os4depot.net) are pretty straightforward. And here's the end result - my package available on os4depot.net and Aminet.

External Documentation

When trying to learn or re-learn everything from C to AmigaDOS scripting, I found a few great resources. However, as with most things in Amiga-land, many websites are only one maintainer away from disappearing, so my biggest recommendation is to use native tools or save local copies of anything you find! With that said, here are my essential Amiga bookmarks:

- Autodocs references. There are lots of websites where you can browse the auto-generated docs from the SDK header files, like this, with clickable links to jump between things. If I'm actually at an Amiga though, there are some useful native tools I prefer that can index and search through the local headers. The screenshot above shows the standard "AutoDoc Reader" freeware tool viewing the equivalent of a man page for the AmigaDOS library, alongside the AmigaGuide Installer language reference.
- http://www.pjhutchison.org/tutorial/amiga_c.html - an amazing site. This is what inspired me to pick up a compiler again and get to work. There's a great refresher on the C language itself, and then it dives into Amiga-specific coding with everything from low-level library access to sound and GUI programming and more.
- The Amiga OS Dev wiki is a goldmine, although it can take a little searching to find what you're after. It's mostly OS 4-focused, but because all Amiga systems share a common ancestor, it's usually pretty applicable to all platforms. Specific articles that I found useful include the OS 4 Migration Guide, Programming in the Amiga Environment, and Fundamental Types.
- https://www.amiga-news.de is a great news aggregator site for all things Amiga, and also has a bunch of exclusive articles on programming in the AmigaOS environment - like this recent article on GUI programming. Well worth a read - thanks to Daniel Reimann for pointing out the great content to me!
- And lastly, there are the great threads I constantly found on amigans.net, which is where a lot of OS 4/"Next Gen" technical discussion happens. For classic systems, I found the Coders discussions on the English Amiga Board an invaluable resource.
The way forward is back?

When I started this project, it was really a way to get acquainted with my new X5000. Since then, I've decided to port my codebase back to the classic Amiga, as well as explore porting over to other Amiga-like systems such as MorphOS and AROS. This leads to some choices.

From a packaging and distribution point of view, a 68k binary is pretty much the universal standard in Amiga land. It can run natively on classic Amigas, and modern systems like AmigaOS 4.x and MorphOS can run 68k binaries through translation. In a method similar to how Apple has handled the transition between processors on the Mac, it's a pretty seamless experience, and I run a lot of classic 68k software on my X5000. As long as you aren't "banging on the metal", it works really well and integrates smoothly with the rest of the system.

The original 68k AmigaOS from Commodore is also pretty much the standard for source-code compatibility; code targeting this release can be built on most of the derivatives and later systems with very little (if any) modification. On AmigaOS 4, for example, you can simply add -D__USE_INLINE__ to your makefiles and in theory build from a common codebase.

If you start the other way, as I did, and write initially targeting AmigaOS 4, it's harder to port to other systems. For example, I originally followed the AmigaOS 4 programming style, which favours prefixing library calls with interface names. This isn't compatible with any other system, so the easiest way to port this to more Amiga-like platforms is to refactor this code back to the classic style of calling system functions. I do plan on building platform-specific binaries using #defines so I can, for example, use functions like dos.library/AddCmdPathNode on OS 4 that I otherwise have to implement manually. And while a lot of higher-level layers (like e.g. MUI for building graphical applications) are shared across platforms, this is probably the best bet for adding specific features from one platform that aren't available on others.
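To make the two calling styles concrete, here's a minimal sketch (my own illustration, not from the SetCmd source) of the same dos.library call in both styles, assuming the usual auto-opened libraries from the linker support code:

#include <proto/dos.h>   /* declares PutStr(); on OS 4, also the IDOS interface */

int main(void)
{
#ifdef __amigaos4__
    /* AmigaOS 4 style: calls hang off interface pointers like IDOS.
     * Building with -D__USE_INLINE__ instead lets the classic style
     * below compile unchanged on OS 4. */
    IDOS->PutStr("Hello from the OS 4 interface style\n");
#else
    /* Classic style: direct library calls, which MorphOS and AROS
     * also understand. */
    PutStr("Hello from the classic style\n");
#endif
    return 0;
}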
Honestly though, at this point, if you just want to get started, I'd suggest you target classic AmigaOS compatibility and build a 68k binary. I'd personally target AmigaOS 3.x, or 2.1 if you want to support a wider range of truly vintage systems; 1.x is fascinating from a retro-geek perspective but lacks a lot of the nice features that came with later systems. Everything else like the installer, archive format, documentation format and so on is cross-platform anyway and supported from OS 2.1 and up. MorphOS, AROS and OS 4 are really fun systems to explore, and I highly recommend checking them out if this article has whetted your appetite (and you can find a system to run them on!), but classic is the easiest way to get your code out to the wider world and, ironically, provides a better code-base for future porting and native binaries than my "working backwards" approach.

Wrap-up

So that's about the sum total of what I've picked up over the last few years, anyway! I still enjoy working on my Amigas when I get some "hacking on code in the evening" time, and in particular I find AmigaOS 4 on my X5000 a refreshing blend of retro appeal and just about enough modern convenience to use it for development tasks, or even for writing this article itself. My A1200 continues to impress me with how much utility there is in such a small box, and it is a wonderful distraction from the modern era of bloated systems and applications. It is perhaps an evolutionary dead-end, but it's still a lot of fun, and it's one of the rare occasions these days where I actually feel in control of my computer. Working backwards in time from OS 4.1 to my classic Amigas has also given me a greater insight into, and appreciation for, what the Amiga engineers managed to pull off back then. If you're in any way interested in computer history - or simply want to give something truly different a try - you should definitely check out AmigaOS. I hope this quick type BRAIN: > WEB: dump provides you with some good starting points, and maybe gets you coding too!

over a year ago 56 votes
A Splinter In Your Mind

Earlier this year, I finally discovered as an adult that I am "on the spectrum" with what used to be called Asperger's Syndrome. The diagnosis helped make sense of a lot of things and has given me a greater insight into my "way of being in the world". Whilst there are times I struggle with things that neuro-typical people usually find easy, or I find some situations draining, the condition has also brought me many positives, which often get overlooked when talking about Autism Spectrum Disorders. True, it's made life difficult or painful at times. But now that I've learned more about it and have had help along the way, I've realised that many of the abilities and passions I write about on this site also stem from the "unusual" way my mind works.

Having fun with music is one of those gifts, and it's also how I can best express myself. I started putting this latest track together as I was processing everything and blew off some steam along the way - it was a great experience and I feel like I ended this project on a very positive note.

I guess this is also me going public and being open about having an ASD. There's still a fair amount of stigma associated with these conditions, but frankly much of our favourite art, the modern world and the Internet as we know it probably wouldn't exist without all the neuro-diverse folks who made much of it! We're just wired a little differently - but wouldn't life be boring if we were all the same? So here's to all the Aspies of the world! The track is available to stream on YouTube, and all the usual stores.

over a year ago 39 votes

More in programming

first-class merges and cover letters

Although it looks really good, I have not yet tried the Jujutsu (jj) version control system, mainly because it's not yet clearly superior to Magit. But I have been following jj discussions with great interest.

One of the things that jj has not yet tackled is how to do better than git refs / branches / tags. As I understand it, jj currently has something like Mercurial bookmarks, which are more like raw git ref plumbing than a high-level porcelain feature. In particular, jj lacks signed or annotated tags, and it doesn't have branch names that always automatically refer to the tip. This is clearly a temporary state of affairs, because jj is still incomplete and under development and these gaps are going to be filled. But the discussions have led me to think about how git's branches are unsatisfactory, and what could be done to improve them.

- branch
- merge
- rebase
- squash
- fork
- cover letters
- previous branch
- workflow
- questions

branch

One of the huge improvements in git compared to Subversion was git's support for merges. Subversion proudly advertised its support for lightweight branches, but a branch is not very useful if you can't merge it: an un-mergeable branch is not a tool you can use to help with work-in-progress development. The point of this anecdote is to illustrate that rather than trying to make branches better, we should try to make merges better, and branches will get better as a consequence.

Let's consider a few common workflows and how git makes them all unsatisfactory in various ways. Skip to cover letters and previous branch below, where I eventually get to the point.

merge

A basic merge workflow is:

- create a feature branch
- hack, hack, review, hack, approve
- merge back to the trunk

The main problem is that when it comes to the merge, there may be conflicts due to concurrent work on the trunk. Git encourages you to resolve conflicts while creating the merge commit, which tends to bypass the normal review process. Git also gives you an ugly, useless canned commit message for merges, which hides what you did to resolve the conflicts.

If the feature branch is a linear record of the work, then it can be cluttered with commits to address comments from reviewers and to fix mistakes. Some people like an accurate record of the history, but others prefer the repository to contain clean logical changes that will make sense in years to come, keeping the clutter in the code review system.

rebase

A rebase-oriented workflow deals with the problems of the merge workflow but introduces new problems. Primarily, rebasing is intended to produce a tidy logical commit history. And when a feature branch is rebased onto the trunk before it is merged, a simple fast-forward check makes it trivial to verify that the merge will be clean (whether it uses a separate merge commit or directly fast-forwards the trunk).

However, it's hard to compare the state of the feature branch before and after the rebase. The current and previous tips of the branch (amongst other clutter) are recorded in the reflog of the person who did the rebase, but they can't share their reflog. A force-push erases the previous branch from the server. Git forges sometimes make it possible to compare a branch before and after a rebase, but it's usually very inconvenient, which makes it hard to see if review comments have been addressed. And a reviewer can't fetch past versions of the branch from the server to review them locally.

You can mitigate these problems by adding commits in --autosquash format, and delaying the rebase until just before merge.
However, that reintroduces the problem of merge conflicts: if the autosquash doesn't apply cleanly, the branch should have another round of review to make sure the conflicts were resolved OK.

squash

When the trunk consists of a sequence of merge commits, the --first-parent log is very uninformative. A common way to make the history of the trunk more informative, and to deal with the problems of cluttered feature branches and poor rebase support, is to squash the feature branch into a single commit on the trunk instead of merging. This encourages merge requests to be roughly the size of one commit, which is arguably a good thing. However, it can be uncomfortably confining for larger features, or cause extra busy-work co-ordinating changes across multiple merge requests. And squashed feature branches have the same merge conflict problem as rebase --autosquash.

fork

Feature branches can't always be short-lived. In the past I have maintained local hacks that were used in production but were not (not yet?) suitable to submit upstream. I have tried keeping a stack of these local patches on a git branch that gets rebased onto each upstream release. With this setup, the problem of reviewing successive versions of a merge request becomes the bigger problem of keeping track of how the stack of patches evolved over longer periods of time.

cover letters

Cover letters are common in the email patch workflow that predates git, and they are supported by git format-patch. GitHub and other forges have a webby version of the cover letter: the message that starts off a pull request or merge request. In git, cover letters are second-class citizens: they aren't stored in the repository. But many of the problems I outlined above have neat solutions if cover letters become first-class citizens, with a Jujutsu twist.

A first-class cover letter starts off as a prototype for a merge request, and becomes the eventual merge commit. Instead of unhelpful auto-generated merge commits, you get helpful and informative messages. No extra work is needed, since we're already writing cover letters. Good merge commit messages make good --first-parent logs.

The cover letter subject line works as a branch name. No more need to invent filename-compatible branch names! Jujutsu doesn't make you name branches, giving them random names instead; it shows the subject line of the topmost commit as a reminder of what the branch is for. If there's an explicit cover letter, its subject line will be a better summary of the branch as a whole. I often find the last commit on a branch is some post-feature cleanup, and that kind of commit has a subject line that is never a good summary of its feature branch.

As a prototype for the merge commit, the cover letter can contain the resolution of all the merge conflicts in a way that can be shared and reviewed. In Jujutsu, where conflicts are first class, the cover letter commit can contain unresolved conflicts: you don't have to clean them up when creating the merge, you can leave that job until later. If you can share a prototype of your merge commit, then it becomes possible for your collaborators to review any merge conflicts and how you resolved them.

To distinguish a cover letter from a merge commit object, a cover letter object has a "target" header, which is a special kind of parent header. A cover letter also has a normal parent commit header that refers to earlier commits in the feature branch. The target is what will become the first parent of the eventual merge commit.
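As a rough mock-up (mine, not a format git defines; the open questions below cover whether this encoding even makes sense), a cover-letter object might render like an ordinary commit object with one extra header. Placeholder names stand in for real hashes, and the trailing notes are annotations rather than part of the object:

tree <tree-id>          <- snapshot, including any conflict resolutions
parent <feature-tip>    <- ordinary parent: the last commit on the branch
target <trunk-tip>      <- proposed header: the merge's future first parent

Subject line that doubles as the branch name

Body of the cover letter: a summary of the branch, notes on how any
conflicts were resolved, and so on.

Once the branch has been shared, the "previous branch" header described next would sit alongside these.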
previous branch

The other ingredient is to add a "previous branch" header, another special kind of parent commit header. The previous branch header refers to an older version of the cover letter and, transitively, to an older version of the whole feature branch. Typically, the previous branch header will match the last shared version of the branch, i.e. the commit hash of the server's copy of the feature branch.

The previous branch header isn't changed during normal work on the feature branch. As the branch is revised and rebased, the commit hash of the cover letter will change fairly frequently. These changes are recorded in git's reflog or jj's oplog, but not in the "previous branch" chain.

You can use the previous branch chain to examine diffs between versions of the feature branch as a whole. If commits have Gerrit-style or jj-style change-IDs, then it's fairly easy to find and compare previous versions of an individual commit. The previous branch header supports interdiff code review, or allows you to retain past iterations of a patch series.

workflow

Here are some sketchy notes on how these features might work in practice.

One way to use cover letters is jj-style, where it's convenient to edit commits that aren't at the tip of a branch, and easy to reshuffle commits so that a branch has a deliberate narrative.

When you create a new feature branch, it starts off as an empty cover letter with both target and parent pointing at the same commit. Alternatively, you might start a branch ad hoc, and later cap it with a cover letter. If this is a small change and rebase + fast-forward is allowed, you can edit the "cover letter" to contain the whole change. Otherwise, you can hack on the branch any which way. Shuffle the commits that should be part of the merge request so that they occur before the cover letter, and edit the cover letter to summarize the preceding commits.

When you first push the branch, there's (still) no need to give it a name: the server can see that this is (probably) going to be a new merge request, because the top commit has a target branch and its change-ID doesn't match an existing merge request. Also when you push, your client automatically creates a new instance of your cover letter, adding a "previous branch" header to indicate that the old version was shared. The commits on the branch that were pushed are now immutable; rebases and edits affect the new version of the branch.

During review there will typically be multiple iterations of the branch to address feedback. The chain of previous branch headers allows reviewers to see how commits were changed to address feedback, interdiff style. The branch can be merged when the target header matches the current trunk and there are no conflicts left to resolve.

When the time comes to merge the branch, there are several options:

- For a merge workflow, the cover letter is used to make a new commit on the trunk, changing the target header into the first parent commit and dropping the previous branch header. Or, if you like to preserve more history, the previous branch chain can be retained.
- Or you can drop the cover letter and fast-forward the branch on to the trunk.
- Or you can squash the branch on to the trunk, using the cover letter as the commit message.

questions

This is a fairly rough idea: I'm sure that some of the details won't work in practice without a lot of careful work on compatibility and deployability.

- Do the new commit headers ("target" and "previous branch") need to be headers?
- What are the compatibility issues with adding new headers that refer to other commits?
- How would a server handle a push of an unnamed branch? How could someone else pull a copy of it?
- How feasible is it to use cover letter subject lines instead of branch names?
- The previous branch header is doing a similar job to a remote tracking branch. Is there an opportunity to simplify how we keep a local cache of the server state?

Despite all that, I think something along these lines could make branches / reviews / reworks / merges less awkward. How you merge should be a matter of your project's preferred style, without interference from technical limitations that force you to trade off one annoyance against another.

There remains a non-technical limitation: I have assumed that contributors are comfortable enough with version control to use a history-editing workflow effectively. I've lost all perspective on how hard this is for a newbie to learn; I expect (or hope?) jj makes it much easier than git rebase.

20 hours ago 5 votes
Performant Full-Disk Encryption on a Raspberry Pi, but Foiled by Twisty UARTs

In my post yesterday (“ARM is great, ARM is terrible (and so is RISC-V)”), I described my desire to find ARM hardware with AES instructions to support full-disk encryption, and the poor state of the OS ecosystem around the newer ARM boards. I was anticipating buying either a newer ARM SBC or an x86 mini … Continue reading Performant Full-Disk Encryption on a Raspberry Pi, but Foiled by Twisty UARTs →

8 hours ago 3 votes
Words are not violence

Debates, at their finest, are about exploring topics together in search of truth. That probably sounds hopelessly idealistic to anyone who's ever perused a comment section on the internet, but ideals are there to remind us of what's possible, to inspire us to reach higher — even if reality falls short.

I've been reaching for those debating ideals for thirty years on the internet. I've argued with tens of thousands of people, first on Usenet, then in blog comments, then Twitter, now X, and also LinkedIn — as well as a million other places that have come and gone. It's mostly been about technology, but occasionally about society and morality too.

There have been plenty of heated moments during those three decades. It doesn't take much for a debate between strangers on the internet to escalate into something far lower than a "search for truth", and I've often felt willing to settle for just a cordial tone! But for the majority of that time, I never felt like things might escalate beyond the keyboards and into the real world.

That was until we had our big blow-up at 37signals back in 2021. I suddenly got to see a different darkness from the most vile corners of the internet. Heard from those who seem to prowl for a mob-sanctioned opportunity to threaten and intimidate those they disagree with. It fundamentally changed me. But I used the experience as a mirror to reflect on the ways my own engagement with the arguments occasionally felt too sharp, too personal. And I've since tried to refocus way more of my efforts on the positive and the productive. I'm by no means perfect, and the internet often tempts the worst in us, but I resist better now than I did then.

What I cannot come to terms with, though, is the modern equation of words with violence. The growing sense of permission that if the disagreement runs deep enough, then violence is a justified answer to settle it. That sounds so obvious that we shouldn't need to state it in a civil society, but clearly it is not. Not even in technology. Not even in programming. There are plenty of factions here who've taken to justifying their violent fantasies by referring to their ideological opponents as "nazis", "fascists", or "racists", and then following that up with a call to "punch a nazi" or worse. When you hear something like that often enough, it's easy to grow glib about it. That it's just a saying. They don't mean it. But I'm afraid many of them really do.

Which brings us to Charlie Kirk. And the technologists who named drinks at their bar after his mortal wound just hours after his death, to name but one of the many morbid celebrations of the famous conservative debater's death. It's sickening. Deeply, profoundly sickening. And my first instinct was exactly what such people would delight in happening: to watch the rest of us recoil, then retract, and perhaps even eject. To leave the internet for a while or forever.

But I can't do that. We shouldn't do that. Instead, we should double down on the opposite. Continue to show up with our ideals held high while we debate strangers in that noble search for truth. Where we share our excitement, our enthusiasm, and our love of technology, country, and humanity.

I think that's what Charlie Kirk did so well. Continued to show up for the debate. Even on hostile territory. Not because he thought he was ever going to convince everyone, but because he knew he'd always reach some with a good argument, a good insight, or at least a different perspective. You could agree or not. Counter or be quiet.
But the earnest exploration of the topics in a live exchange with another human is as fundamental to our civilization as Socrates himself. Don't give up, don't give in. Keep debating.

5 hours ago 2 votes
ARM is great, ARM is terrible (and so is RISC-V)

I’ve long been interested in new and different platforms. I ran Debian on an Alpha back in the late 1990s and was part of the Alpha port team; then I helped bootstrap Debian on amd64. I’ve got somewhere around 8 Raspberry Pi devices in active use right now, and the free NNCPNET Internet email service … Continue reading ARM is great, ARM is terrible (and so is RISC-V) →

yesterday 4 votes
Many Hard Leetcode Problems are Easy Constraint Problems

In my first interview out of college I was asked the change counter problem: Given a set of coin denominations, find the minimum number of coins required to make change for a given number. I.e. for US coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies).

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.

But you only need dynamic programming if you're writing your own algorithm. It's really easy if you throw it into a constraint solver like MiniZinc and call it a day:

int: total;
array[int] of int: values = [10, 9, 1];
array[index_set(values)] of var 0..: coins;
constraint sum (c in index_set(coins)) (coins[c] * values[c]) == total;
solve minimize sum(coins);

You can try this online here. It'll give you a prompt to put in total and then give you successively-better solutions:

coins = [0, 0, 37];
----------
coins = [0, 1, 28];
----------
coins = [0, 2, 19];
----------
coins = [0, 3, 10];
----------
coins = [0, 4, 1];
----------
coins = [1, 3, 0];
----------

Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function subject to constraints. They're hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems.[1] Here I'm using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.

More examples

This was a question in a different interview (which I thankfully passed): Given a list of stock prices through the day, find the maximum profit you can get by buying one stock and selling one stock later.

It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem:

array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8];
var int: buy;
var int: sell;
var int: profit = prices[sell] - prices[buy];
constraint sell > buy;
constraint profit > 0;
solve maximize profit;

Reminder, link to trying it online here.
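For contrast, here's a minimal sketch (mine, in C just to be concrete) of what the "clever" O(n) version looks like: walk the prices once, tracking the cheapest buy price seen so far and the best sell-today profit.

#include <stdio.h>

/* Sketch of the bespoke O(n) approach: one pass, constant space. */
int max_profit(const int *prices, int n)
{
    int min_price = prices[0];   /* cheapest buy point seen so far */
    int best = 0;                /* best profit found so far */
    for (int i = 1; i < n; i++) {
        if (prices[i] - min_price > best)
            best = prices[i] - min_price;   /* selling today beats the record */
        if (prices[i] < min_price)
            min_price = prices[i];          /* new cheapest buy point */
    }
    return best;
}

int main(void)
{
    int prices[] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8};
    /* Prints 8: buy at 1, sell at 9, matching the solver's answer. */
    printf("%d\n", max_profit(prices, (int)(sizeof prices / sizeof *prices)));
    return 0;
}

It works, but as the end of this post shows, every new requirement means re-deriving a trick like this from scratch.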
The "proper" solution is a tricky thing involving tracking lots of bookkeeping states, which you can completely bypass by expressing it as constraints: array[int] of int: numbers = [2,1,5,6,2,3]; var 1..length(numbers): x; var 1..length(numbers): dx; var 1..: y; constraint x + dx <= length(numbers); constraint forall (i in x..(x+dx)) (y <= numbers[i]); var int: area = (dx+1)*y; solve maximize area; output ["(\(x)->\(x+dx))*\(y) = \(area)"] There's even a way to automatically visualize the solution (using vis_geost_2d), but I didn't feel like figuring it out in time for the newsletter. Is this better? Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?" Constraint solvers runtimes are unpredictable and almost always than an ideal bespoke algorithm because they are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they'll do way better than a bad bespoke algorithm, and I'm not experienced enough in handwriting algorithms to consistently beat a solver. The real advantage of solvers, though, is how well they handle new constraints. Take the stock picking problem above. I can write an O(n²) algorithm in a few minutes and the O(n) algorithm if you give me some time to think. Now change the problem to Maximize the profit by buying and selling up to max_sales stocks, but you can only buy or sell one stock at a given time and you can only hold up to max_hold stocks at a time? That's a way harder problem to write even an inefficient algorithm for! While the constraint problem is only a tiny bit more complicated: include "globals.mzn"; int: max_sales = 3; int: max_hold = 2; array[int] of int: prices = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]; array [1..max_sales] of var int: buy; array [1..max_sales] of var int: sell; array [index_set(prices)] of var 0..max_hold: stocks_held; var int: profit = sum(s in 1..max_sales) (prices[sell[s]] - prices[buy[s]]); constraint forall (s in 1..max_sales) (sell[s] > buy[s]); constraint profit > 0; constraint forall(i in index_set(prices)) (stocks_held[i] = (count(s in 1..max_sales) (buy[s] <= i) - count(s in 1..max_sales) (sell[s] <= i))); constraint alldifferent(buy ++ sell); solve maximize profit; output ["buy at \(buy)\n", "sell at \(sell)\n", "for \(profit)"]; Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking. Because my dad will email me if I don't explain this: "leetcode" is slang for "tricky algorithmic interview questions that have little-to-no relevance in the actual job you're interviewing for." It's from leetcode.com. ↩

yesterday 4 votes