Discussion on Hacker News | Discussion on lobste.rs

I've long been a die-hard BeOS fan and have been running the open-source recreation Haiku for many years. I think it's interesting to explore the "alternative OS" world and consider some great ideas that, for whatever reason, never caught on elsewhere. The way Haiku handles package management and its alternative approach to an "immutable system" is one of those ideas I find really cool. Here's what it looks like from a desktop user's perspective - there's all the usual stuff like an "app store", package updater, repositories of packages and so on: it's all there and works well - it's easily as smooth as any desktop Linux experience. However, it's the implementation details behind the scenes that make it so interesting to me. Haiku takes a refreshingly new approach to package management: despite the user experience "feeling" like a traditional package manager - say, something like apt or dnf - it has seamless support...


More from markround.com

Disqus - An Apology

Earlier today, I got an email alerting me to an angrier-than-usual comment on this website. It was a proper keyboard-warrior rant accusing me of all sorts of misdeeds revolving around "forcing ads down people's throats". I replied saying that there had never been any ads on this site, never will be, and that I detest the enshittification trend of the modern Internet too. I also find much of today's web unbearable without tools such as Pi-Hole and a VPN; I use Firefox with adblockers whenever possible and, generally speaking, if a site forces me to disable my ad-blocker I'll simply stop visiting.

Then I had a sinking feeling. Years ago, when I migrated this site from a PHP codebase to a static site generator, I'd enabled Disqus comments, as it (at the time) seemed a reasonable alternative to dealing with all the spam, moderation, flame wars and handling of user data that comes with a comments engine. I'd never really paid much attention to it as it never got much use, and I certainly had never noticed any ads or other junk being injected in amongst my content. But then, I go to somewhat extreme lengths to avoid that sort of crap, and I had a horrible feeling that maybe I'd simply never noticed anything untoward because I'd been blocking it.

So I downloaded Google Chrome (ugh!), browsed to this site without any kind of protection or blocking, and was confronted with an absolute abomination of link-farm "chumbox" adverts littering the bottom half of the page. Even though I'm currently on holiday, I immediately disabled the Disqus integration (yay for GitOps and CI/CD pipelines!) and can only apologise for the eyesore and god-awful mess that I was unwittingly inflicting on people.

I'm not sure exactly when Disqus got that bad. It certainly wasn't like that when I initially set it up, but I guess this is another example of why we can't have nice things. So, to my angry anonymous poster: you were right, I apologise (but you were still kind of a dick about it), and holy crap do I despise what we've done to the web.

Amiga Systems Programming in 2023

Discussion on Hacker News | Discussion on lobste.rs

If you ever get a chance to look through the classic Amiga OS source code still floating around some murky corners of the internet, it is a thing of beauty and astonishing capability. It's an inspirational piece of computing history, unmatched for its time. Remember, this all originally ran on a computer released in the 1980s with 512KB of memory, a 7MHz 68000 16-bit CPU, and a single floppy drive with 880KB of storage. On these limited specs, AmigaOS provided a pre-emptive multi-tasking operating system, a full set of GUI primitives and built-in "Workbench" interface, expansion card auto-configuration, and a fully-featured filesystem with some unique and powerful capabilities. Although, to be fair, the AmigaDOS parts do literally come from a different time (and possibly planet) - but more on that later. Oh, and of course there was that amazing chipset, which meant even that humble base could do things like this - while PCs of the time were basically office boxes that occasionally bleeped, and home computers still loaded games from cassette tape.

There's understandably a lot of online interest in those parts of the Amiga, as they're the most impressive in an obvious "wow!" way. But while that was what drew me to the Amiga when I was a kid (and the demo/cracking/BBS scene heavily influenced me), I've always been more of a systems geek at heart. I've always loved building tools and platforms, and have long been fascinated with the world of operating systems. Apart from reading through the source code (where that's legally available, of course…), I think there's no better way to explore and understand a system - and the mindset that produced it - than to develop for it. What follows is a brain-dump of what I've learned about developing for AmigaOS, from classic 68k-powered hardware to modern PowerPC systems like the X5000. I'll cover development environments, modern workflows like CI builds on containerised infrastructure, distribution of packages, and even a look back in time before C existed, thanks to AmigaDOS's odd heritage.

Table Of Contents

- SetCmd
- SDK Updates
- Editors
- Native hardware
- Modern development
- AmigaDOS is weird
- Distribution
  - Archive
  - Documentation
  - Installer
  - File Sites
- External Documentation
- The way forward is back?
- Wrap-up

SetCmd

There are plenty of guides and videos on setting up an old-school game or demo-coding environment, but all of what follows is in the context of developing a systems tool in C, as that's the language of AmigaOS. I started a real-life project partly to solve a small problem I had (switching between different versions of commands/tools at the Amiga CLI), but mainly to explore and dig deeper into the OS that influenced me so much as a teenager. SetCmd was the result, and is a very simple AmigaOS 4 PowerPC package. I'm working (very slowly) on porting it to run on classic AmigaOS and variants, but it has to be said this is my first time writing C in any meaningful capacity beyond wrestling with pointers at university. The source code is on GitLab if you want to take a look, but bear in mind that despite having owned Amigas since they were released, I'm a total newbie at most of this! I wrote it to have fun, explore the AmigaOS, set up build environments and figure out how to package it up for re-distribution. I have written a bit about my development setup in the past, but things have changed a fair bit since then - so without further ado, here's my development environment and thoughts in 2023.
SDK Updates

Whilst things do move at a glacial pace in the world of AmigaOS 4/PPC, there have been a few big updates. A-EON's Enhancer Software has had several releases, each adding new applications and developer APIs. As well as shipping their own versions of key AmigaOS applications and utilities, they now also install several core AmigaDOS command replacements. I tend to skip the installation of these, as I've encountered a few edge cases where they don't quite behave like the original OS 4 commands, but from recent discussions online it appears as if they are preparing for their own "clean-room" re-implementation and modernisation of AmigaOS 4 - presumably in order to free themselves from the eternal legal shenanigans with Hyperion et al. I'm not going to get into that raging dumpster fire here, but it'll be interesting to see what comes of this.

On the Hyperion side, they released a big SDK update for OS 4, including updated GCC toolchains, cross-compilers, profilers and loads of updated SDKs. Bearing in mind the ancient GCC 4.x toolchain that had been in place for years, it was great to have a more modern environment. On the classic Amiga front, Hyperion have also been pushing ahead with their updated AmigaOS 3.x for 68k-powered Amigas. Now on version 3.2.2.1 (I've got my boxed set of CD and floppy disks), there have also been several NDK ("Native Developer Kit") updates providing updated APIs and tools. AmiKit also released a great "all-in-one" environment called DevPack, which includes a huge range of languages (C, Assembly, Amos, Lua, Basic…) and NDKs, all configured and ready to go. As a quick and easy way of setting up a development environment on classic Amigas, it's hard to beat, and it saves a lot of manual downloading, configuring and gluing everything together.

Editors

In my last update 3 years ago, I'd more or less settled on using a GUI VIM derivative. While I'm still a die-hard VIM user at $DAYJOB, I really appreciate the modern comforts of e.g. VSCode. Thanks to the amazing work of George Sokianos, there is now an OS 4 package of the awesome Lite-XL editor, along with a comprehensive set of plugins. Here's what a hacking session on my X5000 looks like:

In that session you can see two terminal windows alongside Lite-XL: I'm compiling and running the PowerPC and classic 68k versions of SetCmd, thanks to the cross-compilers and native "Petunia" JIT 68k emulation built into OS 4. More on that later, but while we're talking about classic Amigas…

Native hardware

Emulation is fine, but nothing beats running on the actual hardware! In the lead image to this article you can see my treasured Amiga 1200 (with an older 8-bit friend in the background running my TNFS site) expanded with an Apollo Vampire accelerator. An Ethernet or WiFi adapter is more or less essential when it comes to transferring data around, and fortunately the Vampire card is capable of network connectivity, high-resolution display and other niceties, but can easily be switched back to a more "stock" environment. I do still occasionally use my licensed copy of CubicIDE, but due to the age of this architecture I tend to keep my tools light, and have settled on the simple Jano editor, or sometimes CygnusEd for old time's sake. My build toolchain is provided by DevPack and its included VBCC compiler.
I use the vbcc_target_m68k-amigaos target with this makefile to build.

Modern development

I'm (sadly) not always in front of my Amigas, but these days a modern laptop and cloud-native tools offer a lot of flexibility, particularly with the advanced state of emulation. I use VSCode as my editor, and a containerised cross-compiler toolchain built by - again! - George Sokianos to target both 68k and PPC platforms. I can build my project on any system capable of running OCI containers, e.g.:

docker run \
  --user $UID:$GID \
  -v ./:/opt/code \
  walkero/docker4amigavbcc:latest-m68k-amd64 \
  make -f makefile.docker

Testing and running the code is made easy by the very advanced state of emulators. On Windows, WinUAE is the gold standard, and can emulate everything from an original 1985-vintage A1000 up to modern systems with PowerPC accelerators, graphics cards and other devices. I have it multi-booting into clean Hyperion and Commodore/classic OS environments, with my source code directory shared as a virtual hard drive: I can compile in seconds with Docker, and then straight away test the resulting binary in my emulated Amiga.

Source code is kept up to date between systems using Git; on the Amiga X5000 I use the port of SimpleGit, which is now bundled with the latest Hyperion SDK under SDK:c/sgit. I haven't yet found a suitable Git solution for the classic Amigas, so on those I use a makeshift AmigaDOS shell script that uses Backup to copy files over to a network mount in an rsync-like fashion. I also keep meaning to test running AmigaOS 4.1 under QEMU, as support for this has greatly improved, and it looks a lot simpler than the currently convoluted process of getting "classic OS 4" running on an emulated 68k Amiga with a PPC accelerator configured. But for now, the WinUAE approach is working pretty well. Another advantage of having a containerised build-chain is that, combined with Git and Drone running on my personal Kubernetes clusters, I can build and package my code with a simple git push wherever I am.

AmigaDOS is weird

Although AmigaOS is frequently lauded for its sophistication and elegance, there is a notable "oddness" about the AmigaDOS components which handle storage I/O, devices and filesystems. The original developers of the Amiga had an ambitious DOS system planned, but in the end Commodore had to purchase the Tripos operating system and port parts of it to the Amiga due to deadline challenges. This mismatch is all the more pronounced as Tripos was written in BCPL - which in turn influenced the B programming language, which begat the C we all know and… well, tolerate, in my case. So it really is looking back into computing history, and remnants of this still remain even in the "modern" AmigaOS 4.x and other derivatives. Once you start diving into AmigaDOS code, you end up face-to-face with this legacy and need to convert back and forth between BCPL and C data types. For example, BCPL strings are not NULL-terminated; instead, they have a length in the first byte, with the characters following. And pointers are similarly alien. This is why my code has stuff like this littered through it:

// Convert the new node to a BPTR
new_node_bptr = MKBADDR(new_node);

// Set the new path
cli->cli_CommandDir = new_node_bptr;

As the NDK include file dos.h explains: "All BCPL data must be long word aligned. BCPL pointers are the long word address (i.e. byte address divided by 4 (>>2))".
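To make the BCPL string layout concrete, here's a minimal sketch of converting a C string into BCPL form. This is my own illustration rather than code from SetCmd, so the helper name and buffer handling are hypothetical:

#include <string.h>

#include <exec/types.h>   /* UBYTE, LONG */
#include <dos/dos.h>      /* BPTR, MKBADDR */

/* Copy a C string into a caller-supplied buffer in BCPL layout:
 * a length byte first, then the characters, with no NULL
 * terminator. Per dos.h, the buffer must be long-word aligned.
 * Returns the buffer as a BCPL pointer (byte address >> 2). */
BPTR c_to_bcpl_string(const char *src, UBYTE *buf, size_t bufsize)
{
    size_t len = strlen(src);

    if (len > 255 || len + 1 > bufsize) {
        return (BPTR)0;            /* a BCPL length must fit in one byte */
    }

    buf[0] = (UBYTE)len;           /* length prefix instead of terminator */
    memcpy(&buf[1], src, len);     /* characters follow, unterminated */

    return (BPTR)MKBADDR(buf);
}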
The dos.h header also provides helper macros like BADDR and MKBADDR for these conversions, as most DOS system calls take BCPL pointers in their arguments:

/* Convert BPTR to typical C pointer */
#define BADDR(x) ((APTR)((ULONG)(x) << 2))

/* Convert address into a BPTR */
#define MKBADDR(x) (((LONG)(x)) >> 2)

All in all, a fascinating look back into an obscure branch of computing history, but it hasn't furthered my appreciation of pointers any!

Distribution

If you want to distribute your project to a wider audience, there are a few Amiga-specific things you can do that make life much easier for people running AmigaOS. These all make use of some pretty cool bits of Amiga technology, and have been widely adopted by native software since their introduction in the early 1990s.

Archive

Just use LHA format archives. It's the standard compression tool on Amigas, and even though there are modern (and technically better) alternatives, .lha files are as ubiquitous as e.g. .zip or .tar.gz packages on other systems, and can also be handled by low-spec machines. There are ports of CLI tools and GUIs available on all platforms to handle these archives, and while the syntax can be a little different, it's quite easy to use. See my AmigaDOS script that builds the SetCmd release artifact for a practical example.

Documentation

Along with a basic README.txt, it's good practice to distribute more detailed documentation in AmigaGuide format. Another area where the Amiga was way ahead of its time - AmigaGuide is a hyper-text format commonly used for application manuals, although people have even used it to publish disk magazines! Introduced in 1992, the bundled tools on any AmigaOS (or clone/derivative) can read and use AmigaGuide as standard, so you can include formatting, links and other content in your documentation. You can see the AmigaGuide docs I include with SetCmd in the screenshot above, or view the source to see what the syntax looks like. It's pretty simple to write and is much like any other markdown or formatting code.

You can set basic parameters:

@Width 72
@wordwrap

Documents have "Nodes" which can be linked to, e.g.:

@Node About "About SetCMD"
@{"About" Link About}

And formatting is much like HTML, with opening and closing tags:

@{b}@{u} Bold and Underlined! @{uu}@{ub}

Installer

A fantastic addition to AmigaOS, the system installer utility reads a developer-provided script which handles copying files, comparing versions, modifying system scripts and so on, in a standardized fashion. You can pass useful information and configuration which controls the Installer tool through standard Amiga tooltypes, and it uses a sort of LISP-ey syntax which, again, runs on all Amigas and derivatives. The syntax takes some getting used to, but the best source of documentation is the AmigaGuide documentation found in the Installer dev package. As an example, here's an excerpt of the block of code which copies the setcmd program file over to a previously created directory:

(copyfiles
    (source "setcmd")
    (dest #dname)
    (prompt ("Copy SetCmd program file?"))
    (confirm "expert")
    (all)
    (help @copyfiles-help)
)

And for running commands, you can use the run command, along with cat (short for "concatenate") to build up the command string:

(run (cat "C:MakeLink FROM " #dname "/cmds/setcmd/release TO SETCMD:setcmd SOFT"))

I found the best approach was to examine other Installer scripts to get a feel for common practices and idioms.
Here's my simplistic Install_SetCmd script, and if you want to see something more complex, there's always the AmigaOS installation scripts, or the Qt installer for OS 4, which taught me a lot.

File Sites

The Amiga doesn't have a universal package manager, so files are usually downloaded manually and installed from .lha archives. The go-to place for Amiga software for all systems is Aminet. It's the biggest repository of Amiga packages on the internet (and, at one point in the mid-90s, was actually the largest software repository of any platform), and now also hosts packages for AmigaOS 4, MorphOS and AROS alongside classic 68k fare. There are also smaller, platform-focused sites, e.g. os4depot.net for OS 4, morphos-storage.net for MorphOS and so on. Getting your package uploaded and accepted into the repository is broadly the same for all of these: you FTP your package up according to the naming standards, and supply a Readme file which provides the required metadata, like this excerpt in Aminet format:

Short:        Switch between versions of software
Author:       amiga@markround.com (Mark Dastmalchi-Round)
Type:         util/shell
Version:      1.1.0
Architecture: ppc-amigaos >= 4.0.0
Distribution: Aminet

There are some Amiga-native GUI tools that assist with creating these files, but the specs (e.g. for os4depot.net) are pretty straightforward. And here's the end result - my package available on os4depot.net and Aminet.

External Documentation

When trying to learn or re-learn everything from C to AmigaDOS scripting, I found a few great resources. However, as with most things in Amiga-land, there's an extraordinarily high "bus factor" risk for many websites, so my biggest recommendation is to use native tools or save local copies of anything you find! With that said, here are my essential Amiga bookmarks:

- Autodocs references. There are lots of websites where you can browse the auto-generated docs from the SDK header files, like this, with clickable links to jump between things. If I'm actually at an Amiga though, there are some useful native tools I prefer that can index and search through the local headers. The screenshot above shows the standard "AutoDoc Reader" freeware tool viewing the equivalent of a man page for the AmigaDOS library, alongside the AmigaGuide Installer language reference.
- http://www.pjhutchison.org/tutorial/amiga_c.html - an amazing site. This is what inspired me to pick up a compiler again and get to work. There's a great refresher on the C language itself, and then it dives into Amiga-specific coding, with everything from low-level library access to sound and GUI programming and more.
- The Amiga OS Dev wiki is a goldmine, although it can take a little searching to find what you're after. It's mostly OS 4-focused, but because all Amiga systems share a common ancestor, it's usually pretty applicable to all platforms. Specific articles that I found useful include:
  - OS 4 Migration Guide
  - Programming in the Amiga Environment
  - Fundamental Types
- https://www.amiga-news.de is a great news aggregator site for all things Amiga, and also has a bunch of exclusive articles on programming in the AmigaOS environment - like this recent article on GUI programming. Well worth a read - thanks to Daniel Reimann for pointing out the great content to me!
- And lastly, there are the great threads I constantly found on amigans.net, which is where a lot of OS 4/"next-gen" technical discussion happens. For classic systems, I found the Coders discussions on the English Amiga Board an invaluable resource.
The way forward is back?

When I started this project, it was really a way to get acquainted with my new X5000. Since then, I've decided to port my codebase back to the classic Amiga, as well as explore porting over to other Amiga-like systems such as MorphOS and AROS. This leads to some choices. From a packaging and distribution point of view, a 68k binary is pretty much the universal standard in Amiga-land. It can run natively on classic Amigas, and modern systems like AmigaOS 4.x and MorphOS can run 68k binaries through translation. In a method similar to how Apple has handled processor transitions on the Mac, it's a pretty seamless experience, and I run a lot of classic 68k software on my X5000. As long as you aren't "banging on the metal", it works really well and integrates smoothly with the rest of the system.

The original 68k AmigaOS from Commodore is also pretty much the standard for source-code compatibility; code targeting this release can be built on most of the derivatives and later systems with very little (if any) modification. On AmigaOS 4, for example, you can simply add -D__USE_INLINE__ to your makefiles and, in theory, build from a common codebase. If you start the other way, as I did, and initially target AmigaOS 4, it's harder to port to other systems. For example, I originally followed the AmigaOS 4 programming style, which favours prefixing library calls with interface names. This isn't compatible with any other system, so the easiest way to port this to more Amiga-like platforms is to refactor the code back to the classic style of calling system functions. I do plan on building platform-specific binaries using #defines, so that I can, for example, use functions like dos.library/AddCmdPathNode on OS 4 that I otherwise have to implement manually. And while a lot of higher-level layers (like e.g. MUI for building graphical applications) are shared across platforms, this is probably the best bet for adding specific features from one platform that aren't available on others.

Honestly though, at this point, if you want to just get started, I'd suggest you target classic AmigaOS compatibility and build a 68k binary. I'd personally target AmigaOS 3.x, or 2.1 if you want to support a wider range of truly vintage systems; 1.x is fascinating from a retro-geek perspective, but lacks a lot of the nice features that came with later systems. Everything else - the installer, archive format, documentation format and so on - is cross-platform anyway, and supported from OS 2.1 and up. MorphOS, AROS and OS 4 are really fun systems to explore, and I highly recommend checking them out if this article has whetted your appetite (and you can find a system to run them on!), but classic is the easiest way to get your code out to the wider world, and ironically provides a better code-base for future porting and native binaries than my "working backwards" approach.

Wrap-up

So that's about the sum total of what I've picked up over the last few years, anyway! I still enjoy working on my Amigas when I get some "hacking on code in the evening" time, and in particular I find AmigaOS 4 on my X5000 a refreshing blend of retro appeal and just about enough modern convenience to use it for development tasks, or even for writing this article itself. My A1200 continues to impress me with how much utility there is in such a small box, and is a wonderful distraction from the modern era of bloated systems and applications.
It is perhaps an evolutionary dead-end, but it's still a lot of fun, and is one of the rare occasions these days where I actually feel in control of my computer. Working backwards in time from OS 4.1 to my classic Amigas has also given me a greater insight into - and appreciation for - what the Amiga engineers managed to pull off back then. If you're in any way interested in computer history - or simply want to give something truly different a try - you should definitely check out AmigaOS. I hope this quick "type BRAIN: > WEB:" dump provides you with some good starting points, and maybe gets you coding too!

A Splinter In Your Mind

Earlier this year, I finally discovered as an adult that I am "on the spectrum" with what used to be called Asperger's Syndrome. The diagnosis helped make sense of a lot of things and has given me a greater insight into my "way of being in the world". Whilst there are times I struggle with things that neurotypical people usually find easy, or find some situations draining, the condition has also brought me many positives, which often get overlooked when talking about Autism Spectrum Disorders. True, it's made life difficult or painful at times. But now that I've learned more about it and have had help along the way, I've realised that many of the abilities and passions I write about on this site also stem from the "unusual" way my mind works. Having fun with music is one of those gifts, and it's also how I can best express myself. I started putting this latest track together as I was processing everything, and blew off some steam along the way - it was a great experience, and I feel like I ended this project on a very positive note.

I guess this is also me going public and being open about having an ASD. There's still a fair amount of stigma associated with these conditions, but frankly, much of our favourite art, the modern world and the Internet as we know it probably wouldn't exist without all the neurodiverse folks who made much of it! We're just wired a little differently - but wouldn't life be boring if we were all the same? So here's to all the Aspies of the world! The track is available to stream on YouTube, and in all the usual stores.

DevOps for the Sinclair Spectrum - Part 4

In Part 3 I covered the backend server processes and protocols, CI/CD pipelines and unit tests I used to build the TNFS site. In this (much shorter) part, I'd like to take a step back from the hardcore geekery and wrap up with my thoughts on the whole thing.

Other Sites

But before that, I'm going to explore a little part of the rest of the TNFS universe. After all, this project is intended to build a community site, and the Speccy has one of the friendliest retro computing communities out there. My site isn't the only one - there's a whole network of TNFS servers on the public internet, and the protocol has also been adopted for 8-bit Atari systems. There's currently no central directory as such (although work is underway to create an index system using DNS TXT records), but there's a forum thread that gets regular updates, and I have a links section on my site's main menu. To give you an idea of some of the great content others have built, here's a quick overview, with screenshots, of some of my favourite Speccy TNFS sites…

Spectranet-related

vexed4.alioth.net

A very useful site to have bookmarked in one of the Spectranet's slots. Hosted by the developer of the Spectranet's firmware, it has some nice lists of classic and modern (2000s-era) games and demos, and also some internet utilities, including IRC and Twitter gateways. The first option in the menu is an online firmware update utility for Spectranet cards. The updater doesn't work on an emulated Spectranet but works great on the real hardware - this is one of the first sites you should visit!

The weird and wonderful

tnfs.bytedelight.com

This site takes up the default slot in Spectranet cards purchased from ByteDelight.com and serves as a demonstration. When you first configure your card, it'll boot straight into this site, which displays a snazzy "SkyNet" intro sequence informing you that your Speccy has been taken over, and all your base are belong to them.

zx.zapto.org

This wonderful fever-dream of a site was produced by "p13z", a regular on the Spectrum Computing Forums. It's an impressive demonstration of what can be achieved using plain Sinclair BASIC - it's sort of an interactive adventure where you start off by driving away from a police car against a neat parallax-scrolling background. You then end up wandering around various locations, including a car-park frequented by "doggers" (and a telephone box which lets you connect to other TNFS sites), some standing stones where you can sample some "banging mushrooms", and other weird delights. Utterly bonkers in a classic Speccy kinda way…

Gateways

zx.desertkun.in

Home of the Channels project, which provides a ZX Spectrum browser for forums and imageboards. If you boot into this site, set zx.desertkun.in as your proxy in the opening screen, and you can connect to internet forums and message boards, including the Spectrum Computing Forums! There's a Docker image and source code available, so you can run and customise things on your own systems. This uses a similar approach to my site, where the heavy lifting (in this case, TLS processing and connecting to/parsing the data from websites) is shifted to a more capable modern environment. Awesome work!

irata.online

Now this one is a proper rabbit hole you could spend days exploring. It's a fascinating part of computer history I had never encountered before: a visionary system that hosted some of the earliest ever online message boards, and apparently inspired the creation of Castle Wolfenstein and Lotus Notes!
Their website explains it: "IRATA.ONLINE is provided for the benefit of retro-computing users to have a place to socialize, and develop interesting multi-user, interactive, and graphical games and social applications. It descends from the historical PLATO system, a massive time-sharing system that lasted from 1962 until NovaNET was closed in 2015." The TNFS site hosts a terminal application that connects to the PLATO system (other retro systems are also supported), and you can even browse it through the web at https://js.irata.online/. Amazing stuff, and well worth some time checking out.

nihirash.net

This TNFS site hosts the uGophy Gopher client for the Spectrum. This lets you browse "gopherspace" from your Speccy - try out gopher.floodgap.com as a starting point. You can use a web browser to access the HTTP proxy at https://gopher.floodgap.com/gopher/ to get a useful list of links to other Gopher sites still in operation, including the Veronica-2 search engine.

zxnet.co.uk

As well as a couple of "experiments" (a multi-player tank game and a music jukebox), the zxnet.co.uk TNFS site hosts a copy of snapCterm (see below) and an IRC gateway. The IRC gateway allows you to connect to any public IRC server network and chat with other users - try libera.chat to get started, and use their excellent online help guides to find some interesting channels to join.

snapCterm

snapCterm is a terminal emulator that provides the basics of an ANSI 80-column terminal for the plus machines. It supports ASCII characters as well as ANSI escape and colour codes. This means you can use it to connect to Telnet servers - e.g. a Linux system running telnetd - which makes it possible to access the many old-school BBS systems still running. This is how we used to "social network", kids, before this new-fangled web thing took over… There's a great curated list of systems to connect to at https://www.telnetbbsguide.com/, including all those "elite" sites that used to be advertised all over Amiga demo/warez scene productions. You can see in the screenshot a connection starting to The Sanctuary BBS, running on AmiExpress (still being updated!), which used to be Fairlight's World Headquarters back in the day. snapCterm is available on my TNFS site (2nd menu page, option 3) as well as on sites like zxnet.co.uk.

File sites

tnfs.millhill.org

This is an archive of the zx.kupo.be TNFS site, which at one point hosted one of the largest collections of games and files for the Speccy. Sadly, Adam Colley, who ran the site, passed away in 2020. Kupo was one of the first sites I connected to when I first got my Spectranet card, and looking through the code gave me the inspiration to start my own site. His work is archived here by someone who was in close contact with him before he passed, and it's fitting that his site lives on and will be remembered by the community.

retrojen.org

As well as a nice little "blizzard" intro, retrojen.org hosts around 500 classic Spectrum scene demo tapes. Use the QAOP keys to move around the menu system, ENTER to select a title, 1 to jump to the first page and F to find a production by its release date. A great little archive of some old gems! Thanks for the recent post on my message wall, too :)

szeliga.zapto.org

This site hosts a curated collection of modern (mid-2000s onwards) games and titles released for the Speccy. There are some great games that showcase the best of the current Speccy scene.
Many of these titles are also available on my site, but it's great to have them all arranged in curated date order like this. I highly recommend trying out 2019's "Yazzie", a fantastic platformer/puzzle game that is ridiculously addictive.

Wrap Up

Since the site went online in January 2021, it's grown to nearly 10k lines of code, handled over 11,500 connections and gathered a small community of users chatting, playing games and leaving messages. It's connected me to a wonderful (if slightly crazy) network of people who care about this little squidgey-keyed black box from the 80s as much as I do. It's been a real pleasure seeing the first users come to the site, and I have lots of plans still for the future! Just a few of the bits and pieces in various Git branches right now:

- Proper pagination indication on all menus ("page x of y" type stuff)
- Message board improvements (big speed boost and longer retention)
- Reply to comments - this opens up the possibility of a proper message board system!
- More files to upload and more curated lists

I'm open to suggestions! If you have any ideas or fancy writing a text article for the site, let me know… In the words of one of the messages from a user: "It's mad being online with a 40yr old computer designed around a tape recorder." Perhaps it's even crazier to drag the whole shebang into the modern age and cobble together infra-as-code pipelines, unit testing and gratuitous amounts of Kubernetes around it all. I did use a lot of my current-day practical knowledge during this project, but the most enjoyable parts were when I was doing stuff like studying the decades-old intricacies of the VAL$ function. Sometimes, introducing constraints like an ancient 8-bit processor with only 48KB of usable RAM can really force you to think "outside the box" and produce the most creative hacks. Plus, in this day and age, it's a lot of fun to do something just for the sheer hell of it! I'm also definitely going to add Sinclair BASIC to my "skills" on LinkedIn now ;)

Before I close, I'd just like to add a note of thanks to everyone who's used my site, suggested features, left messages or helped me out with my many technical queries on forums. It's been an absolute blast, and I look forward to the next 40 years of Spectrum hacking!

-Mark, February 2022.


More in programming

Why did Stripe build Sorbet? (~2017).

Many hypergrowth companies of the 2010s battled increasing complexity in their codebase by decomposing their monoliths. Stripe was somewhat of an exception, largely delaying decomposition until it had grown beyond three thousand engineers and had accumulated a decade of development in its core Ruby monolith. Even now, significant portions of their product are maintained in the monolithic repository, and it's safe to say this was only possible because of Sorbet's impact. Sorbet is a custom static type checker for Ruby that was initially designed and implemented by Stripe engineers on their Product Infrastructure team. Stripe's Product Infrastructure had similar goals to other companies' Developer Experience or Developer Productivity teams, but it focused on improving productivity through changes in the internal architecture of the codebase itself, rather than relying solely on external tooling or processes. This strategy explains why Stripe chose to delay decomposition for so long, and how the Product Infrastructure team invested in developer productivity to deal with the challenges of a large Ruby codebase managed by a large software engineering team with low average tenure caused by rapid hiring.

Before wrapping this introduction, I want to explicitly acknowledge that this strategy was spearheaded by Stripe's Product Infrastructure team, not by me. Although I ultimately became responsible for that team, I can't take credit for this strategy's thinking. Rather, I was initially skeptical, preferring an incremental migration to an existing strongly-typed programming language - either Java for library coverage or Golang for Stripe's existing familiarity. Despite my initial doubts, the Sorbet project eventually won me over with its indisputable results.

This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

The Product Infrastructure team is investing in Stripe's developer experience by:

- Every six months, Product Infrastructure will select its three highest-priority areas to focus on, and invest a significant majority of its energy into those. We will provide minimal support for other areas.
- We commit to refreshing our priorities every half after running the developer productivity survey. We will further share our results, and priorities, in each Quarterly Business Review.

Our three highest-priority areas for this half are:

1. Add static typing to the highest-value portions of our Ruby codebase, such that we can run the type checker locally and on the test machines to identify errors more quickly.
2. Support selective test execution such that engineers can quickly determine and run the most appropriate tests on their machine, rather than delaying until tests run on the build server.
3. Instrument test failures such that we have better data to prioritize future efforts.

Static typing is not a typical solution to developer productivity, so it requires some explanation when we say this is our highest-priority area for investment.
Doubly so when we acknowledge that it will take us 12-24 months of much of the team's time to get our type checker to an effective place.

Our type checker, which we plan to name Sorbet, will allow us to continue developing within our existing Ruby codebase. It will further allow our product engineers to remain focused on developing new functionality, rather than migrating existing functionality to new services or programming languages. Instead, our Product Infrastructure team will centrally absorb both the development of the type checker and the initial rollout to our codebase. It's possible for Product Infrastructure to take on both, despite its fixed size: we'll rely on a hybrid approach of deep-dives to add typing to particularly complex areas, and scripts to rewrite our code's Abstract Syntax Trees (ASTs) for less complex portions. In the relatively unlikely event that this approach fails, the cost to Stripe is of a small, known size: approximately six months of half the Product Infrastructure team, which is what we anticipate requiring to determine if this approach is viable.

Based on our knowledge of Facebook's Hack project, we believe we can build a static type checker that runs locally and significantly faster than our test suite. It's hard to make a precise guess now, but we think less than 30 seconds to type-check our entire codebase, despite it being quite large. This will allow for a highly productive local development experience, even if we are not able to speed up local testing. Even if we do speed up local testing, typing would help us eliminate a category of error that testing has been unable to: unexpected types passed across code paths that have been tested for expected scenarios, but not for entirely unexpected ones.

Once the type checker has been validated, we can incrementally prioritize adding typing to the highest-value places across the codebase. We do not need to wholly type our codebase before we can start getting meaningful value. In support of these static typing efforts, we will advocate for product engineers at Stripe to begin development using the Command Query Responsibility Segregation (CQRS) design pattern, which we believe will provide high-leverage interfaces for incrementally introducing static typing into our codebase.

Selective test execution will allow developers to quickly run appropriate tests locally. This will allow engineers to stay in a tight local development loop, speeding up development of high-quality code. Given that our codebase is not currently statically typed, inferring which tests to run is rather challenging. With our very high test coverage, and the fact that all tests will still be run before deployment to the production environment, we believe that we can rely on statistically inferring which tests are likely to fail when a given file is modified.

Instrumenting test failures is our third, and lowest-priority, project for this half. Our focus this half is purely on annotating errors for which we have high conviction about their source, whether infrastructure or test issues.

For escalations and issues, reach out in the #product-infra channel.

Diagnose

In 2017, Stripe is a company of about 1,000 people, including 400 software engineers. We aim to grow our organization by about 70% year-over-year to meet increasing demand for a broader product portfolio, and to scale our existing products and infrastructure to accommodate user growth.
As our production stability has improved over the past several years, we have now turned our focus towards improving developer productivity. Our current diagnosis of our developer productivity is:

- We primarily fund developer productivity for our Ruby-authoring software engineers via our Product Infrastructure team. The Ruby-focused portion of that team has about ten engineers on it today, and is unlikely to significantly grow in the future. (If we do expand, we are likely to staff non-Ruby ecosystems like Scala or Golang.)
- We have two primary mechanisms for understanding our engineers' developer experience. The first is standard productivity metrics around deploy time, deploy stability, test coverage, test time, test flakiness, and so on. The second is a twice-annual developer productivity survey.
- Looking at our productivity metrics, our test coverage remains extremely high, with coverage above 99% of lines, but tests are quite slow to run locally. They run quickly in our infrastructure because they are multiplexed across a large fleet of test runners.
- Tests have become slow enough to run locally that an increasing number of developers run an overly narrow subset of tests, or skip running tests entirely until after pushing their changes. They instead rely on our test servers to run against their pull request's branch, which works well enough, but significantly slows down developer iteration time because the merge, build, and test cycle takes twenty to thirty minutes to complete. By the time their build-test cycle completes, they've lost their focus and may take several hours to return to addressing the results.
- There is significant disagreement about whether tests are becoming flakier due to test infrastructure issues, or due to quality issues in the tests themselves. At this point, there is no trustworthy dataset that allows us to attribute between those two causes.
- Feedback from the twice-annual developer productivity survey supports the above diagnosis, and adds some additional nuance. Most concerning, although long-tenured Stripe engineers find themselves highly productive in our codebase, we increasingly hear in the survey that newly hired engineers with long tenures at other companies find themselves unproductive in our codebase. Specifically, they find it very difficult to determine how to safely make changes in our codebase.
- Our product codebase is entirely implemented in a single Ruby monolith. There is one narrow exception, a Golang service handling payment tokenization, which we consider out of scope for two reasons. First, it is kept intentionally narrow in order to absorb our SOC1 compliance obligations. Second, developers in that environment have not raised concerns about their productivity.
- Our data infrastructure is implemented in Scala. While these developers have concerns - primarily slow build times - they manage their build and deployment infrastructure independently, and the group remains relatively small.
- Ruby is not a highly performant programming language, but we've found it sufficiently efficient for our needs. Similarly, other languages are more cost-efficient from a compute-resources perspective, but a significant majority of our spend is on real-time storage and batch computation. For these reasons alone, we would not consider replacing Ruby as our core programming language.
- Our Product Infrastructure team is about ten engineers, supporting about 250 product engineers. We anticipate this group growing modestly over time, but certainly sublinearly to the overall growth of product engineers.
- Developers working in Golang and Scala routinely ask for more centralized support, but it's challenging to prioritize those requests, as we're forced to consider the return on improving the experience for 240 product engineers working in Ruby vs 10 in Golang or 40 data engineers in Scala. If we introduced more programming languages, this prioritization problem would become increasingly difficult, and we are already failing to adequately support these additional languages.

The new Framework 13 HX370

The new AMD HX370 option in the Framework 13 is a good step forward in performance for developers. It runs our HEY test suite in 2m7s, compared to 2m43s for the 7840U (and 2m49s for an M4 Pro!). It's also about 20% faster in most single-core tasks than the 7840U. But is that enough to warrant the jump in price? AMD's latest, best chips have suddenly gotten pretty expensive. The F13 w/ HX370 now costs $1,992 with 32GB RAM / 1TB - almost the same as an M4 Pro MBP14 w/ 24GB / 1TB ($2,199). I'd pick the Framework any day for its better keyboard, 3:2 matte screen, repairability, and superb Linux compatibility, but it won't be because the top option is "cheaper" any more.

Of course, you could also just go with the budget 6-core Ryzen AI 5 340 in the same spec for $1,362. I'm sure that's a great machine too. But maybe the sweet spot is actually the Ryzen AI 7 350. It "only" has 8 cores (vs 12 on the 370), but four of those are performance cores - the same as the 370. And it's $300 cheaper, so ~$1,600 gets you out the door. I haven't actually tried the 350, though, so that's just speculation. I've been running the 370 for the last few months.

Whichever chip you choose, the rest of the Framework 13 package is as good as it ever was. This remains my favorite laptop of at least the last decade. I've been running one for over a year now, and combined with Omakub + Neovim, it's the first machine in forever where I've actually enjoyed programming on a 13" screen. The 3:2 aspect ratio, combined with Linux's superb multiple desktops that switch with 0ms lag and no animations, means I barely miss the trusted 6K Apple XDR screen when working away from the desk.

The HX370 gives me about 6 hours of battery life in mixed use. About the same as the old 7840U. Though if all I'm doing is writing, I can squeeze that to 8-10 hours. That's good enough for me, but not as good as a Qualcomm machine or an Apple M-chip machine. For some people, those extra hours really make the difference.

What does make a difference, of course, is Linux. I've written repeatedly about how much of a joy it's been to rediscover Linux on the desktop, and it's a joy that keeps on giving. For web work, it's so good. And for any work that requires even a minimum of Docker, it's so fast (as the HEY suite run time attests). Apple still has a strong hardware game, but their software story is falling apart. I haven't heard many people sing the praises of new iOS or macOS releases in a long while. It seems like without an asshole in charge, both have moved towards more bloat, more ads, more gimmicks, more control. Linux is an incredible antidote to this nonsense these days. It's also just fun!

Seeing AMD catch up in outright performance, if not efficiency, has been a delight. Watching Framework perfect their 13" laptop while remaining 100% backwards compatible in terms of upgrades with the first versions is heartwarming. And getting to test the new Framework Desktop in advance of its Q3 release has only affirmed my commitment to both. With the new HX370, it's in my opinion the best Linux laptop you can buy today, which by extension makes it the best web developer laptop too. The top spec might have gotten a bit pricey, but there are options all along the budget spectrum, which retain all the key ingredients anyway. Hard to go wrong.

Forza Framework!

Beyond `None`: actionable error messages for `keyring.get_password()`

I'm a big fan of keyring, a Python module made by Jason R. Coombs for storing secrets in the system keyring. It works on multiple operating systems, and it knows what password store to use for each of them. For example, if you're using macOS it puts secrets in the Keychain, but if you're on Windows it uses Credential Locker.

The keyring module is a safe and portable way to store passwords, more secure than using a plaintext config file or an environment variable. The same code will work on different platforms, because keyring handles the hard work of choosing which password store to use.

It has a straightforward API: the keyring.set_password and keyring.get_password functions will handle a lot of use cases.

>>> import keyring
>>> keyring.set_password("xkcd", "alexwlchan", "correct-horse-battery-staple")
>>> keyring.get_password("xkcd", "alexwlchan")
"correct-horse-battery-staple"

Although this API is simple, it's not perfect - I have some frustrations with the get_password function. In a lot of my projects, I'm now using a small function that wraps get_password.

What do I find frustrating about keyring.get_password?

If you look up a password that isn't in the system keyring, get_password returns None rather than throwing an exception:

>>> print(keyring.get_password("xkcd", "the_invisible_man"))
None

I can see why this makes sense for the library overall - a non-existent password is very normal, and not exceptional behaviour - but in my projects, None is rarely a usable value.

I normally use keyring to retrieve secrets that I need to access protected resources - for example, an API key to call an API that requires authentication. If I can't get the right secrets, I know I can't continue. Indeed, continuing often leads to more confusing errors when some other function unexpectedly gets None, rather than a string.

For a while, I wrapped get_password in a function that would throw an exception if it couldn't find the password:

def get_required_password(service_name: str, username: str) -> str:
    """
    Get password from the specified service.

    If a matching password is not found in the system keyring,
    this function will throw an exception.
    """
    password = keyring.get_password(service_name, username)

    if password is None:
        raise RuntimeError(f"Could not retrieve password {(service_name, username)}")

    return password

When I use this function, my code will fail as soon as it fails to retrieve a password, rather than when it tries to use None as the password. This worked well enough for my personal projects, but it wasn't a great fit for shared projects. I could make sense of the error, but not everyone could do the same.

What's that password meant to be?

A good error message explains what's gone wrong, and gives the reader clear steps for fixing the issue. The error message above is only doing half the job. It tells you what's gone wrong (it couldn't get the password), but it doesn't tell you how to fix it.

As I started using this snippet in codebases that I work on with other developers, I got questions when other people hit this error. They could guess that they needed to set a password, but the error message doesn't explain how, or what password they should be setting. For example, is this a secret they should pick themselves? Is it a password in our shared password vault? Or do they need an API key for a third-party service? If so, where do they find it?

I still think my initial error was an improvement over letting None be used in the rest of the codebase, but I realised I could go further.
This is my extended wrapper:

def get_required_password(service_name: str, username: str, explanation: str) -> str:
    """
    Get password from the specified service.

    If a matching password is not found in the system keyring,
    this function will throw an exception and explain to the user
    how to set the required password.
    """
    password = keyring.get_password(service_name, username)

    if password is None:
        raise RuntimeError(
            "Unable to retrieve required password from the system keyring!\n"
            "\n"
            "You need to:\n"
            "\n"
            f"1/ Get the password. Here's how: {explanation}\n"
            "\n"
            "2/ Save the new password in the system keyring:\n"
            "\n"
            f"    keyring set {service_name} {username}\n"
        )

    return password

The explanation argument allows me to explain what the password is for to a future reader, and what value it should have. That information can often be found in a code comment or in documentation, but putting it in an error message makes it more visible. Here's one example:

get_required_password(
    "flask_app",
    "secret_key",
    explanation=(
        "Pick a random value, e.g. with\n"
        "\n"
        "    python3 -c 'import secrets; print(secrets.token_hex())'\n"
        "\n"
        "This password is used to securely sign the Flask session cookie. "
        "See https://flask.palletsprojects.com/en/stable/config/#SECRET_KEY"
    ),
)

If you call this function and there's no keyring entry for flask_app/secret_key, you get the following error:

Unable to retrieve required password from the system keyring!

You need to:

1/ Get the password. Here's how: Pick a random value, e.g. with

    python3 -c 'import secrets; print(secrets.token_hex())'

This password is used to securely sign the Flask session cookie. See https://flask.palletsprojects.com/en/stable/config/#SECRET_KEY

2/ Save the new password in the system keyring:

    keyring set flask_app secret_key

It's longer, but this error message is far more informative. It tells you what's wrong, how to save a password, and what the password should be.

This is based on a real example where the previous error message led to a misunderstanding. A co-worker saw a missing password called "secret key" and thought it referred to a secret key for calling an API, and didn't realise it was actually for signing Flask session cookies. Now I can write a more informative error message, I can prevent that misunderstanding happening again. (We also renamed the secret, for additional clarity.)

It takes time to write this explanation, which will only ever be seen by a handful of people, but I think it's important. If somebody sees it at all, it'll be when they're setting up the project for the first time. I want that setup process to be smooth and straightforward.

I don't use this wrapper in all my code, particularly small or throwaway toys that won't last long enough for this to be an issue. But in larger codebases that will be used by other developers, and which I expect to last a long time, I use it extensively. Writing a good explanation now can avoid frustration later.

[If the formatting of this post looks odd in your feed reader, visit the original article]

Kagi Assistant is now available to all users!

At Kagi, our mission is simple: to humanise the web.

The Halting Problem is a terrible example of NP-Harder

Short one this time, because I have a lot going on this week.

In computational complexity, NP is the class of all decision problems (yes/no) where a potential proof (or "witness") for "yes" can be verified in polynomial time. For example, "does this set of numbers have a subset that sums to zero" is in NP. If the answer is "yes", you can prove it by presenting such a subset. We would then verify the witness by 1) checking that all the numbers are present in the set (~linear time) and 2) adding up all the numbers (also linear). (A minimal verifier sketch appears at the end of this post.) NP-complete is the class of "hardest possible" NP problems; subset sum is NP-complete. NP-hard is the set of all problems at least as hard as NP-complete. Notably, NP-hard is not a subset of NP, as it contains problems that are harder than NP-complete. A natural question to ask is "like what?" And the canonical example of "NP-harder" is the halting problem (HALT): does program P halt on input C? As the argument goes, it's undecidable, so obviously not in NP.

I think this is a bad example for two reasons:

1. All NP requires is that witnesses for "yes" can be verified in polynomial time. It does not require anything for the "no" case! And even though HALT is undecidable, there is a decidable way to verify a "yes": let the witness be "it halts in N steps", then run the program for that many steps and see if it halted by then. To prove HALT is not in NP, you have to show that this verification process grows faster than polynomially. It does (as busy beaver is uncomputable), but this all makes the example needlessly confusing.[1]

2. "What's bigger than a dog? THE MOON."

Really, (2) bothers me a lot more than (1), because it's just so inelegant. It suggests that NP-complete is the upper bound of "solvable" problems, and after that you're in full-on undecidability. I'd rather show intuitive problems that are harder than NP, but not that much harder.

But in looking for a "slightly harder" problem, I ran into an, ah, problem. It seems like the next-hardest class would be EXPTIME, except we don't know for sure that NP != EXPTIME. We know for sure that NP != NEXPTIME, but NEXPTIME doesn't have any intuitive, easily explainable problems. Most "definitely harder than NP" problems require a nontrivial background in theoretical computer science or mathematics to understand.

There is one problem, though, that I find easily explainable. Place a token at the bottom-left corner of a grid that extends infinitely up and right; call that point (0, 0). You're given a list of valid displacement moves for the token, like (+1, +0), (-20, +13), (-5, -6), etc., and a target point like (700, 1). You may make any sequence of moves in any order, as long as no move ever puts the token off the grid. Does any sequence of moves bring you to the target?

This is PSPACE-complete, I think, which still isn't proven to be harder than NP-complete (though it's widely believed). But what if you increase the number of dimensions of the grid? Past a certain number of dimensions, the problem jumps to being EXPSPACE-complete, then TOWER-complete (grows tetrationally), and then it keeps going. Some readers might recognize this as looking a lot like the Ackermann function, and in fact this problem is ACKERMANN-complete on the number of available dimensions. A friend wrote a Quanta article about the whole mess; you should read it.

This problem is ludicrously bigger than NP ("Chicago" instead of "The Moon"), but at least it's clearly decidable, easily explainable, and definitely not in NP.
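As promised above, here's the polynomial-time verification for subset sum made concrete - a minimal sketch in C, with the names being my own invention:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Check a subset-sum witness in polynomial time: every witness
 * element must appear in the instance (respecting multiplicity),
 * and the witness must sum to zero. Membership checking is
 * O(n * k); summing is O(k). */
static bool verify_witness(const long *set, size_t n,
                           const long *witness, size_t k)
{
    if (k == 0 || k > n)
        return false;                 /* the empty subset doesn't count */

    bool used[n];                     /* C99 VLA: tracks multiplicity */
    memset(used, 0, sizeof used);

    long sum = 0;
    for (size_t i = 0; i < k; i++) {
        bool found = false;
        for (size_t j = 0; j < n; j++) {
            if (!used[j] && set[j] == witness[i]) {
                used[j] = true;       /* consume one matching element */
                found = true;
                break;
            }
        }
        if (!found)
            return false;             /* witness uses a number not in the set */
        sum += witness[i];
    }
    return sum == 0;                  /* the actual "sums to zero" check */
}

int main(void)
{
    const long set[]     = { 3, -7, 1, 6, 42 };
    const long witness[] = { -7, 1, 6 };

    printf("%s\n", verify_witness(set, 5, witness, 3)
                       ? "valid witness" : "invalid witness");
    return 0;
}

The point of the sketch: the verifier runs in time polynomial in the input size, and never needs to find the subset itself - that asymmetry is the whole trick behind NP.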
It's less confusing if you're taught the alternate (and original!) definition of NP, "the class of problems solvable in polynomial time by a nondeterministic Turing machine". Then HALT can't be in NP because otherwise runtime would be bounded by an exponential function. ↩
