Discussion on Hacker News · Discussion on lobste.rs

I've long been a die-hard BeOS fan and have been running the open-source recreation Haiku for many years. I think it's interesting to explore the "alternative OS" world and consider some great ideas that, for whatever reason, never caught on elsewhere. The way Haiku handles package management and its alternative approach to an "immutable system" is one of those ideas I find really cool. Here's what it looks like from a desktop user's perspective - there's all the usual stuff like an "app store", package updater, repositories of packages and so on: it's all there and works well - it's easily as smooth as any desktop Linux experience. However, it's the implementation details behind the scenes that make it so interesting to me. Haiku takes a refreshingly new approach to package management: despite the user experience "feeling" like a traditional package manager - say, something like apt or dnf - it has seamless support...


More from markround.com

Disqus - An Apology

Earlier today, I got an email alerting me to an angrier-than-usual comment on this website. It was a proper keyboard-warrior rant accusing me of all sorts of misdeeds revolving around "forcing ads down people's throats". I replied saying that there had never been any ads on this site, never will be, and that I detest the enshittification trend of the modern Internet too. I also find much of today's web unbearable without tools such as Pi-Hole and a VPN; I use Firefox with adblockers whenever possible and, generally speaking, if a site forces me to disable my ad-blocker I'll simply stop visiting.

Then I had a sinking feeling. Years ago, when I migrated this site from a PHP codebase to a static site generator, I'd enabled Disqus comments as it (at the time) seemed a reasonable alternative to dealing with all the spam, moderation, flame wars and handling of user data that comes with a comments engine. I'd never really paid that much attention to it as it never got much use, and I certainly had never noticed any ads or other junk being injected in amongst my content. But then, I go to somewhat extreme lengths to avoid that sort of crap, and I had a horrible feeling that maybe I'd simply never noticed anything untoward because I'd been blocking it.

So I downloaded Google Chrome (ugh!), browsed to this site without any kind of protection or blocking, and was confronted with an absolute abomination of link-farm "chumbox" adverts littering the bottom half of the page. Even though I'm currently on holiday, I immediately disabled the Disqus integration (yay for GitOps and CI/CD pipelines!) and can only apologise for the eyesore and god-awful mess that I was unwittingly inflicting on people. I'm not sure exactly when Disqus got that bad. It certainly wasn't like that when I initially set it up, but I guess this is another example of why we can't have nice things.

So, to my angry anonymous poster: you were right, I apologise (but you were still kind of a dick about it), and holy crap do I despise what we've done to the web.

Amiga Systems Programming in 2023

Discussion on Hacker News · Discussion on lobste.rs

If you ever get a chance to look through the classic Amiga OS source code still floating around some murky corners of the internet, it is a thing of beauty: an inspirational piece of computing history, with capabilities unmatched for its time. Remember, this all originally ran on a computer released in the 1980s with 512KB of memory, a 7MHz 68000 16-bit CPU, and a single floppy drive with 880KB of storage. On these limited specs, AmigaOS provided a pre-emptive multi-tasking operating system, a full set of GUI primitives and built-in "Workbench" interface, expansion card auto-configuration and a fully-featured filesystem with some unique and powerful capabilities. Although, to be fair, the AmigaDOS parts do literally come from a different time (and possibly planet) - but more on that later. Oh, and of course there was that amazing chipset, which meant even that humble base could do things like this - while PCs of the time were basically office boxes that occasionally bleeped, and home computers still loaded games from cassette tape.

There's understandably a lot of online interest in those parts of the Amiga, as they're the most impressive in an obvious "wow!" way. But while that was what drew me to the Amiga when I was a kid (and the demo/cracking/BBS scene heavily influenced me), I've always been more of a systems geek at heart. I've always loved building tools and platforms, and have long been fascinated with the world of operating systems. Apart from reading through the source code (where that's legally available, of course…), I think there's no better way to explore and understand a system - and the mindset that produced it - than to develop for it. What follows is a brain-dump of what I've learned about developing for AmigaOS, from classic 68k-powered hardware to modern PowerPC systems like the X5000. I'll cover development environments, modern workflows like CI builds on containerised infrastructure, distribution of packages, and even a look back in time before C existed, thanks to AmigaDOS's odd heritage.

Table Of Contents
- SetCmd
- SDK Updates
- Editors
- Native hardware
- Modern development
- AmigaDOS is weird
- Distribution
  - Archive
  - Documentation
  - Installer
  - File Sites
- External Documentation
- The way forward is back?
- Wrap-up

SetCmd

There are plenty of guides and videos on setting up an old-school game or demo-coding environment, but all of what follows is in the context of developing a systems tool in C, as that's the language of AmigaOS. I started a real-life project partly to solve a small problem I had (switching between different versions of commands/tools at the Amiga CLI) but mainly to explore and dig deeper into the OS that influenced me so much as a teenager. SetCmd was the result, and is a very simple AmigaOS 4 PowerPC package. I'm working (very slowly) on porting it to run on classic AmigaOS and variants, but it has to be said this is my first time writing C in any meaningful capacity beyond wrestling with pointers at university. The source code is on GitLab if you want to take a look, but bear in mind that despite having owned Amigas since they were released, I'm a total newbie at most of this! I wrote it to have fun, explore the AmigaOS, set up build environments and figure out how to package it up for re-distribution. I have written a bit about my development setup in the past, but things have changed a fair bit since then - so without further ado, here's my development environment and thoughts in 2023.
SDK Updates

Whilst things do move at a glacial pace in the world of AmigaOS 4/PPC, there have been a few big updates. A-EON's Enhancer Software has had several releases, each adding new applications and developer APIs. As well as shipping their own versions of key Amiga OS applications and utilities, they now also install several core AmigaDOS command replacements. I tend to skip the installation of these, as I've encountered a few edge cases where they don't quite behave like the original OS 4 commands, but from recent discussions online it appears as if they are preparing for their own "clean-room" re-implementation and modernisation of Amiga OS 4, presumably in order to free themselves from the eternal legal shenanigans with Hyperion et al. I'm not going to get into that raging dumpster fire here, but it'll be interesting to see what comes of this. On the Hyperion side, they released a big SDK update for OS 4, including updated GCC toolchains, cross-compilers, profilers and loads of updated SDKs. Bearing in mind the ancient GCC 4.x toolchain that had been in place for years, it was great to have a more modern environment.

On the classic Amiga front, Hyperion have also been pushing ahead with their updated AmigaOS 3.x for 68k-powered Amigas. Now on version 3.2.2.1 (I've got my boxed set of CD and floppy disks), there have also been several NDK ("Native Developer Kit") updates providing updated APIs and tools. AmiKit also released a great "all-in-one" environment called DevPack, which includes a huge range of languages (C, Assembly, Amos, Lua, Basic…) and NDKs, all configured and ready to go. As a quick and easy way of setting up a development environment on classic Amigas, it's hard to beat, and it saves a lot of manual downloading, configuring and glueing everything together.

Editors

In my last update 3 years ago, I'd more-or-less settled on using a GUI VIM derivative. While I'm still a die-hard VIM user at $DAYJOB, I really appreciate the modern comforts of e.g. VSCode. Thanks to the amazing work of George Sokianos, there is now an OS 4 package of the awesome Lite-XL editor, along with a comprehensive set of plugins. Here's what a hacking session on my X5000 looks like: in that session you can see, alongside Lite-XL, two terminal windows: I'm compiling and running the PowerPC and classic 68k versions of setcmd, thanks to the cross-compilers and the native "Petunia" JIT 68k emulation built into OS 4. More on that later, but while we're talking about classic Amigas…

Native hardware

Emulation is fine, but nothing beats running on the actual hardware! In the lead image to this article you can see my treasured Amiga 1200 (with an older 8-bit friend in the background running my TNFS site), expanded with an Apollo Vampire accelerator. An Ethernet or WiFi adapter is more or less essential when it comes to transferring data around; fortunately the Vampire card is capable of network connectivity, high-resolution display and other niceties, but can easily be switched back to a more "stock" environment. I do still occasionally use my licensed copy of CubicIDE, but due to the age of this architecture I tend to keep my tools light, and have settled on the simple Jano editor or sometimes CygnusEd for old time's sake. My build toolchain is provided by DevPack and its included VBCC compiler. I use the vbcc_target_m68k-amigaos target with a short makefile to build.
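The actual makefile is only a few lines; as an illustrative sketch (placeholder paths and flags rather than the real file), a minimal vbcc makefile for the 68k target looks something like this:

```make
# Illustrative sketch only - not the project's real makefile, and the NDK
# include path is a placeholder. vbcc's vc frontend selects the classic
# AmigaOS target via the +aos68k config.
CC     = vc +aos68k
CFLAGS = -I$(NDK)/include_h

setcmd: setcmd.c
	$(CC) $(CFLAGS) -o setcmd setcmd.c
```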
Modern development

I'm (sadly) not always in front of my Amigas, but these days a modern laptop and cloud-native tools offer a lot of flexibility, particularly with the advanced state of emulation. I use VSCode as my editor, and a containerised cross-compiler toolchain built by - again! - George Sokianos to target both 68k and PPC platforms. I can build my project on any system capable of running OCI containers, e.g.

```shell
docker run \
  --user $UID:$GID \
  -v ./:/opt/code \
  walkero/docker4amigavbcc:latest-m68k-amd64 \
  make -f makefile.docker
```

Testing and running the code is made easy by the very advanced state of emulators. On Windows, WinUAE is the gold standard and can emulate everything from an original 1985-vintage A1000 up to modern systems with PowerPC accelerators, graphics cards and other devices. I have it multi-booting into clean Hyperion and Commodore/classic OS environments, with my source code directory shared as a virtual hard drive: I can compile in seconds with Docker, and then straight away test the resulting binary in my emulated Amiga.

Source code is kept up to date between systems using Git; on the Amiga X5000 I use the port of SimpleGit, which is now bundled with the latest Hyperion SDK under SDK:c/sgit. I haven't yet found a suitable Git solution for the classic Amigas, so on those I use a makeshift AmigaDOS shell script that uses Backup to copy files over to a network mount in an rsync-like fashion. I also keep meaning to test running AmigaOS 4.1 under QEMU, as support for this has greatly improved and it looks a lot simpler than the currently convoluted process of getting "classic OS 4" running on an emulated 68k Amiga with a PPC accelerator configured. But for now, the WinUAE approach is working pretty well. Another advantage of having a containerised build-chain is that, combined with Git and Drone running on my personal Kubernetes clusters, I can build and package my code with a simple git push wherever I am:

AmigaDOS is weird

Although AmigaOS is frequently lauded for its sophistication and elegance, there is a notable "oddness" about the AmigaDOS components which handle storage I/O, devices and filesystems. The original developers of the Amiga had an ambitious DOS system planned, but in the end Commodore had to purchase the Tripos operating system and port parts of it to the Amiga due to deadline challenges. This mismatch is all the more pronounced as Tripos was written in BCPL - which in turn influenced the B programming language, which begat the C we all know and… well, tolerate, in my case. So it really is a look back into computing history, and remnants of it remain even in the "modern" AmigaOS 4.x and other derivatives. Once you start diving into AmigaDOS code, you end up face-to-face with this legacy and need to convert back and forth between BCPL and C data types. For example, BCPL strings are not NULL-terminated; instead, they have a length in the first byte, with the characters following. And pointers are similarly alien. This is why my code has stuff like this littered through it:

```c
// Convert the new node to a BPTR
new_node_bptr = MKBADDR(new_node);

// Set the new path
cli->cli_CommandDir = new_node_bptr;
```

As the NDK include file dos.h explains: "All BCPL data must be long word aligned. BCPL pointers are the long word address (i.e. byte address divided by 4 (>>2))".
It also includes helper macros like MKBADDR for the conversion, as most DOS system calls use BCPL pointers in arguments:

```c
/* Convert BPTR to typical C pointer */
#define BADDR(x) ((APTR)((ULONG)(x) << 2))

/* Convert address into a BPTR */
#define MKBADDR(x) (((LONG)(x)) >> 2)
```

All in all, a fascinating look back into an obscure branch of computing history, but it hasn't furthered my appreciation of pointers any!

Distribution

If you want to distribute your project to a wider audience, there are a few Amiga-specific things you can do that make life much easier for people running AmigaOS. These all make use of some pretty cool bits of Amiga technology, and have been widely adopted by native software since their introduction in the early 1990s.

Archive

Just use LHA format archives. It's the standard compression tool on Amigas, and even though there are modern (and technically better) alternatives, .lha files are as ubiquitous as e.g. .zip or .tar.gz packages on other systems, and can also be handled by low-spec machines. There are ports of CLI tools and GUIs available on all platforms to handle these archives, and while the syntax can be a little different, it's quite easy to use. See my AmigaDOS script that builds the SetCmd release artifact for a practical example.

Documentation

Along with a basic README.txt, it's great practice to distribute more detailed documentation in AmigaGuide format. Another area where the Amiga was way ahead of its time - AmigaGuide is a hypertext format commonly used for application manuals, although people have even used it to publish disk magazines! Introduced in 1992, the bundled tools on any AmigaOS (or clone/derivative) can read and use AmigaGuide as standard, so you can include formatting, links and other content in your documentation. You can see the AmigaGuide docs I include with SetCmd in the screenshot above, or view the source to see what the syntax looks like. It's pretty simple to write, and is much like any other markdown or formatting code. You can set basic parameters:

```
@Width 72
@wordwrap
```

Documents have "Nodes" which can be linked to, e.g.

```
@Node About "About SetCMD"
@{"About" Link About}
```

And formatting is much like HTML, with opening and closing tags:

```
@{b}@{u} Bold and Underlined! @{uu}@{ub}
```

Installer

A fantastic addition to AmigaOS, the system installer utility reads a developer-provided script which handles copying files, comparing versions, modifying system scripts and so on, in a standardized fashion. You can pass useful information and configuration which controls the Installer tool through standard Amiga tooltypes, and it uses a sort of LISP-ey syntax which, again, runs on all Amigas and derivatives. The syntax takes some getting used to, but the best source of documentation is the AmigaGuide documentation found in the Installer dev package. As an example, here's an excerpt of the block of code which copies the setcmd program file over to a previously created directory:

```lisp
(copyfiles
    (source "setcmd")
    (dest #dname)
    (prompt ("Copy SetCmd program file?"))
    (confirm "expert")
    (all)
    (help @copyfiles-help)
)
```

And for running commands, you can use the run command, along with cat (short for "concatenate") to build up the command string:

```lisp
(run (cat "C:MakeLink FROM " #dname "/cmds/setcmd/release TO SETCMD:setcmd SOFT"))
```

I found the best approach was to examine other Installer scripts to get a feel for common practices and idioms.
Here's my simplistic Install_SetCmd script, and if you want to see something more complex, there's always the AmigaOS installation scripts, or the Qt installer for OS 4, which taught me a lot.

File Sites

The Amiga doesn't have a universal package manager, so files are usually downloaded manually and installed from .lha archives. The go-to place for Amiga software for all systems is Aminet. It's the biggest repository of Amiga packages on the internet (and, at one point in the mid-90s, actually the largest software repository of any platform), and it now also hosts packages for Amiga OS 4, MorphOS and AROS alongside the classic 68k fare. There are also smaller, platform-focused sites, e.g. os4depot.net for OS 4, morphos-storage.net for MorphOS and so on. Getting your package uploaded and accepted into the repository is broadly the same for all of these: you FTP your package up according to the naming standards, and supply a readme file which provides the required metadata, like this excerpt in Aminet format:

```
Short:        Switch between versions of software
Author:       amiga@markround.com (Mark Dastmalchi-Round)
Type:         util/shell
Version:      1.1.0
Architecture: ppc-amigaos >= 4.0.0
Distribution: Aminet
```

There are some Amiga-native GUI tools that assist with creating these files, but the specs (e.g. for os4depot.net) are pretty straightforward. And here's the end result - my package available on os4depot.net and Aminet.

External Documentation

When trying to learn or re-learn everything from C to AmigaDOS scripting, I found a few great resources. However, as with most things in Amiga-land, many websites have a "bus factor" of one, so my biggest recommendation is to use native tools or save local copies of anything you find! With that said, here are my essential Amiga bookmarks:

- Autodocs references. There are lots of websites where you can browse the auto-generated docs from the SDK header files, like this, with clickable links to jump between things. If I'm actually at an Amiga though, there are some useful native tools I prefer that can index and search through the local headers. The screenshot above shows the standard "AutoDoc Reader" freeware tool viewing the equivalent of a man page for the AmigaDOS library, alongside the AmigaGuide Installer language reference.
- http://www.pjhutchison.org/tutorial/amiga_c.html - an amazing site. This is what inspired me to pick up a compiler again and get to work. There's a great refresher on the C language itself, and then it dives into Amiga-specific coding, with everything from low-level library access to sound and GUI programming and more.
- The Amiga OS Dev wiki is a goldmine, although it can take a little searching to find what you're after. It's mostly OS 4-focused, but because all Amiga systems share a common ancestor, it's usually pretty applicable to all platforms. Specific articles that I found useful include: OS 4 Migration Guide, Programming in the Amiga Environment, and Fundamental Types.
- https://www.amiga-news.de is a great news aggregator site for all things Amiga, and also has a bunch of exclusive articles on programming in the AmigaOS environment - like this recent article on GUI programming. Well worth a read - thanks to Daniel Reimann for pointing out the great content to me!
- And lastly, there are the great threads I constantly found on amigans.net, which is where a lot of OS 4/"next-gen" technical discussion happens. For classic systems, I found the Coders discussions on the English Amiga Board an invaluable resource.
The way forward is back?

When I started this project, it was really a way to get acquainted with my new X5000. Since then, I've decided to port my codebase back to the classic Amiga, as well as explore porting over to other Amiga-like systems such as MorphOS and AROS. This leads to some choices.

From a packaging and distribution point of view, a 68k binary is pretty much the universal standard in Amiga land. It can run natively on classic Amigas, and modern systems like AmigaOS 4.x and MorphOS can run 68k binaries through translation. In a manner similar to how Apple has handled processor transitions on the Mac, it's a pretty seamless experience, and I run a lot of classic 68k software on my X5000. As long as you aren't "banging on the metal", it works really well and integrates smoothly with the rest of the system.

The original 68k AmigaOS from Commodore is also pretty much the standard for source-code compatibility; code targeting this release can be built on most of the derivatives and later systems with very little (if any) modification. On AmigaOS 4, for example, you can simply add -D__USE_INLINE__ to your makefiles and, in theory, build from a common codebase. If you start the other way, as I did, and write initially targeting AmigaOS 4, it's harder to port to other systems. For example, I originally followed the AmigaOS 4 programming style, which favours prefixing library calls with interface names. This isn't compatible with any other system, so the easiest way to port this to more Amiga-like platforms is to refactor the code back to the classic style of calling system functions. I do plan on building platform-specific binaries using #defines, so I can, for example, use functions like dos.library/AddCmdPathNode on OS 4 that I otherwise have to implement manually. While a lot of higher-level layers (like e.g. MUI for building graphical applications) are shared across platforms, this is probably the best bet for adding specific features from one platform that aren't available on others.

Honestly though, at this point, if you want to just get started I'd suggest you target classic AmigaOS compatibility and build a 68k binary. I'd personally target AmigaOS 3.x, or 2.1 if you want to support a wider range of truly vintage systems; 1.x is fascinating from a retro-geek perspective, but lacks a lot of the nice features that came with later systems. Everything else - the installer, archive format, documentation format and so on - is cross-platform anyway, and supported from OS 2.1 and up. MorphOS, AROS and OS 4 are really fun systems to explore, and I highly recommend checking them out if this article has whetted your appetite (and you can find a system to run them on!), but classic is the easiest way to get your code out to the wider world, and ironically provides a better code-base for future porting and native binaries than my "working backwards" approach.

Wrap-up

So that's about the sum total of what I've picked up over the last few years, anyway! I still enjoy working on my Amigas when I get some "hacking on code in the evening" time, and in particular I find AmigaOS 4 on my X5000 a refreshing blend of retro appeal and just about enough modern convenience to use it for development tasks, or even for writing this article itself. My A1200 continues to impress me with how much utility there is in such a small box, and it is a wonderful distraction from the modern era of bloated systems and applications.
It is perhaps an evolutionary dead-end, but it's still a lot of fun, and it's one of the rare occasions these days when I feel actually in control of my computer. Working backwards in time from OS 4.1 to my classic Amigas has also given me a greater insight into, and appreciation for, what the Amiga engineers managed to pull off back then. If you're in any way interested in computer history - or simply want to give something truly different a try - you should definitely check out AmigaOS. I hope this quick "type BRAIN: > WEB:" dump provides you with some good starting points, and maybe gets you coding too!

A Splinter In Your Mind

Earlier this year, I finally discovered as an adult that I am "on the spectrum", with what used to be called Asperger's Syndrome. The diagnosis helped make sense of a lot of things and has given me a greater insight into my "way of being in the world". Whilst there are times I struggle with things that neuro-typical people usually find easy, or find some situations draining, the condition has also brought me many positives, which often get overlooked when talking about Autism Spectrum Disorders. True, it's made life difficult or painful at times. But now that I've learned more about it and have had help along the way, I've realised that many of the abilities and passions I write about on this site also stem from the "unusual" way my mind works.

Having fun with music is one of those gifts, and it's also how I can best express myself. I started putting this latest track together as I was processing everything, and blew off some steam along the way - it was a great experience, and I feel like I ended this project on a very positive note. I guess this is also me going public and being open about having an ASD. There's still a fair amount of stigma associated with these conditions, but frankly, much of our favourite art, the modern world and the Internet as we know it probably wouldn't exist without all the neuro-diverse folks who made much of it! We're just wired a little differently - but wouldn't life be boring if we were all the same? So here's to all the Aspies of the world! The track is available to stream on YouTube, and in all the usual stores.

DevOps for the Sinclair Spectrum - Part 4

In Part 3 I covered the backend server processes and protocols, CI/CD pipelines and unit tests I used to build the TNFS site. In this (much shorter) part, I'd like to take a step back from the hardcore geekery and wrap up with my thoughts on the whole thing.

Other Sites

But before that, I'm going to explore a little of the rest of the TNFS universe. After all, this project is intended to build a community site, and the Speccy has one of the friendliest retro computing communities out there. My site isn't the only one - there's a whole network of TNFS servers on the public internet, and the protocol has also been adopted for 8-bit Atari systems. There's currently no central directory as such (although work is underway to create an index system using DNS TXT records), but there's a forum thread that gets regular updates, and I have a links section on my site's main menu. To give you an idea of some of the great content others have built, here's a quick overview, with screenshots, of some of my favourite Speccy TNFS sites…

Spectranet-related

vexed4.alioth.net - A very useful site to have bookmarked in one of the Spectranet's slots. Hosted by the developer of the Spectranet's firmware, it has some nice lists of classic and modern (2000s-era) games and demos, plus some internet utilities including IRC and Twitter gateways. The first option in the menu is an online firmware update utility for Spectranet cards. The updater doesn't work on an emulated Spectranet, but works great on the real hardware - this is one of the first sites you should visit!

The weird and wonderful

tnfs.bytedelight.com - This site takes up the default slot in Spectranet cards purchased from ByteDelight.com and serves as a demonstration. When you first configure your card, it'll boot straight into this site, which displays a snazzy "SkyNet" intro sequence informing you that your Speccy has been taken over, and all your base are belong to them.

zx.zapto.org - This wonderful fever-dream of a site was produced by "p13z", a regular on the Spectrum Computing Forums. It's an impressive demonstration of what can be achieved using plain Sinclair BASIC - it's sort of an interactive adventure where you start off by driving away from a police car against a neat parallax-scrolling background. You then end up wandering around various locations, including a car park frequented by "doggers" (and a telephone box which lets you connect to other TNFS sites), some standing stones where you can sample some "banging mushrooms", and other weird delights. Utterly bonkers in a classic Speccy kinda way…

Gateways

zx.desertkun.in - Home of the Channels project, which provides a ZX Spectrum browser for forums and imageboards. If you boot into this site, set zx.desertkun.in as your proxy in the opening screen, and you can connect to internet forums and message boards including the Spectrum Computing Forums! There's a Docker image and source code available, so you can run and customise things on your own systems. This uses a similar approach to my site, where the heavy lifting (in this case, TLS processing and connecting to/parsing the data from websites) is shifted to a more capable modern environment. Awesome work!

irata.online - Now this one is a proper rabbit hole you could spend days exploring. It's a fascinating part of computer history I had never encountered before: a visionary system that hosted some of the earliest-ever online message boards, and apparently inspired the creation of Castle Wolfenstein and Lotus Notes!
Their website explains it: "IRATA.ONLINE is provided for the benefit of retro-computing users to have a place to socialize, and develop interesting multi-user, interactive, and graphical games and social applications. It descends from the historical PLATO system, a massive time-sharing system that lasted from 1962 until NovaNET was closed in 2015." The TNFS site hosts a terminal application that connects to the PLATO system (other retro systems are also supported), and you can even browse it through the web at https://js.irata.online/. Amazing stuff, and well worth some time checking out.

nihirash.net - This TNFS site hosts the uGophy Gopher client for the Spectrum. This lets you browse "gopherspace" from your Speccy - try out gopher.floodgap.com as a starting point. You can use a web browser to access the HTTP proxy at https://gopher.floodgap.com/gopher/ to get a useful list of links to other Gopher sites still in operation, including the Veronica-2 search engine.

zxnet.co.uk - As well as a couple of "experiments" (a multi-player tank game and a music jukebox), the zxnet.co.uk TNFS site hosts a copy of snapCterm (see below) and an IRC gateway. The IRC gateway allows you to connect to any public IRC server network and chat with other users - try libera.chat to get started, and use their excellent online help guides to find some interesting channels to join.

snapCterm

snapCterm is a terminal emulator that provides the basics of an ANSI 80-column terminal for the plus machines. It supports ASCII characters as well as ANSI escape and colour codes. This means you can use it to connect to Telnet servers - e.g. a Linux system running telnetd - which makes it possible to access the many old-school BBS systems still running. This is how we used to "social network", kids, before this new-fangled web thing took over… There's a great curated list of systems to connect to at https://www.telnetbbsguide.com/, including all those "elite" sites that used to be advertised all over Amiga demo/warez scene productions. You can see in the screenshot a connection starting to The Sanctuary BBS, running on AmiExpress (still being updated!), which used to be Fairlight's World Headquarters back in the day. snapCterm is available on my TNFS site (2nd menu page, option 3) as well as on sites like zxnet.co.uk.

File sites

tnfs.millhill.org - This is an archive of the zx.kupo.be TNFS site, which at one point hosted one of the largest collections of games and files for the Speccy. Sadly, Adam Colley, who ran the site, passed away in 2020. Kupo was one of the first sites I connected to when I got my Spectranet card, and looking through the code gave me the inspiration to start my own site. His work is archived here by someone who was in close contact with him before he passed, and it's fitting that his site lives on and will be remembered by the community.

retrojen.org - As well as a nice little "blizzard" intro, retrojen.org hosts around 500 classic Spectrum scene demo tapes. Use the QAOP keys to move around the menu system, ENTER to select a title, 1 to jump to the first page and F to find a production by its release date. A great little archive of some old gems! Thanks for the recent post on my message wall, too :)

szeliga.zapto.org - This site hosts a curated collection of modern (mid-2000s onwards) games and titles released for the Speccy. There are some great games that showcase the best of the current Speccy scene.
Many of these titles are also available on my site, but it's great to have them all arranged in a curated date order like this. I highly recommend trying out 2019's "Yazzie", a fantastic platformer/puzzle game that is ridiculously addictive.

Wrap Up

Since the site went online in January 2021, it's grown to nearly 10k lines of code, handled over 11,500 connections, and has gathered a small community of users chatting, playing games and leaving messages. It's connected me to a wonderful (if slightly crazy) network of people who care about this little squidgey-keyed black box from the 80s as much as I do. It's been a real pleasure seeing the first users come to the site, and I have lots of plans still for the future! Just a few of the bits and pieces in various Git branches right now:

- Proper pagination indication on all menus ("page x of y" type stuff)
- Message board improvements (big speed boost and longer retention)
- Replies to comments - this opens up the possibility of a proper message board system!
- More files to upload, and more curated lists

I'm open to suggestions! If you have any ideas, or fancy writing a text article for the site, let me know… In the words of one of the messages from a user: "It's mad being online with a 40yr old computer designed around a tape recorder." Perhaps it's even crazier to drag the whole shebang into the modern age and cobble together infra-as-code pipelines, unit testing and gratuitous amounts of Kubernetes around it all. I did use a lot of my current-day practical knowledge during this project, but the most enjoyable parts were when I was doing stuff like studying the decades-old intricacies of the VAL$ function. Sometimes, introducing constraints like an ancient 8-bit processor with only 48KB of usable RAM can really force you to think "outside the box" and produce the most creative hacks. Plus, in this day and age, it's a lot of fun to do something just for the sheer hell of it! I'm also definitely going to add Sinclair BASIC to my "skills" on LinkedIn now ;)

Before I close, I'd just like to add a note of thanks to everyone who's used my site, suggested features, left messages or helped me out with my many technical queries on forums. It's been an absolute blast, and I look forward to the next 40 years of Spectrum hacking! -Mark, February 2022.


More in programming

How should Stripe deprecate APIs? (~2016)

While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe's most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability. This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn't just a technical design quirk; it's a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe's business success. This is an exploratory, draft chapter for a book on engineering strategy that I'm brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore. More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

- Design for long API lifetime. APIs are not inherently durable. Instead, we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn't use this API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you've developed? At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early-adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

- All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For examples of prior art, review the api-review archive for prior requests and the feedback they received. All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review; complex or controversial proposals will require live discussion to ensure API Review members have sufficient context before making a decision.

- We never deprecate APIs without an unavoidable requirement to do so. Even if it's technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration. If such a change were to be approved as an exception to this policy, it must first be approved by the API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

- When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support those workflows rather than extending /v1/charges to add subscriptions support. With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe's Strong Customer Authentication requirements. Even in that case, the charge API continued to work as it did previously, albeit only for non-European Union payments.

- We manage this policy's implied technical debt via an API translation layer. We release changed APIs as versions, tracked in our API version changelog. However, we only maintain one implementation internally, which is the implementation of the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn't eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by enabling us to maintain just a single, modern implementation internally. All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions.
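Stripe's actual internals aren't public, so what follows is only a minimal sketch of that version-transformation pattern, with hypothetical version dates, field names and helpers. The core idea: the single modern implementation is downgraded on the fly for clients pinned to older versions.

```python
# Minimal sketch of a version-translation layer (hypothetical names and
# dates; not Stripe's real internals). One modern implementation exists,
# and per-version transforms downgrade its output for older pinned clients.

def flatten_source(resp):
    # Hypothetical transform: suppose versions before 2015-02-18 exposed a
    # flat `card` field instead of the newer `source` object.
    resp = dict(resp)
    resp["card"] = resp.pop("source")["card"]
    return resp

# (cutoff_version, downgrade): clients pinned before the cutoff get the downgrade.
TRANSFORMS = [
    ("2015-02-18", flatten_source),
]

def latest_charge(amount):
    # The only implementation actually maintained: the newest response shape.
    return {"id": "ch_123", "amount": amount, "source": {"card": "4242"}}

def charge_for(amount, pinned_version):
    resp = latest_charge(amount)
    # Walk transforms from newest to oldest until we reach the pinned version.
    for cutoff, downgrade in reversed(TRANSFORMS):
        if pinned_version < cutoff:  # ISO dates compare correctly as strings
            resp = downgrade(resp)
    return resp

assert "card" in charge_for(100, "2014-08-20")    # old pin: old shape
assert "source" in charge_for(100, "2016-07-06")  # new pin: modern shape
```

Under this pattern, each new API version only adds one more transform to the chain, rather than a parallel implementation.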
- In the future, SDKs may allow us to soften this policy. While a significant number of our customers have direct integrations with our APIs, that number has dropped significantly over time. Instead, most new integrations are performed via one of our official API SDKs. We believe that in the future, it may be possible for us to make more backwards-incompatible changes, because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

- If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers - or a larger enterprise involving numerous stakeholders - handling external API changes can be particularly challenging. Even if this is only marginally true, we've modeled the impact of minimizing API changes on long-term revenue growth, and it has a significant impact, unlocking our ability to benefit from other churn-reduction work.

- While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the migration overhead even if they wanted to change providers. Without an API change forcing them to modify their integration, we believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.

- We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can't assume that companies that make API changes are ill-informed. Rather, it appears that they experience a meaningful technical-debt tradeoff between the API provider and API consumers, and aren't willing to consistently absorb that technical debt internally.

- Future compliance or security requirements - along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI - may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.

Requirements change until they don't

Recently I got a question on formal methods[1]: how does it help to mathematically model systems when the system requirements are constantly changing? It doesn't make sense to spend a lot of time proving a design works, and then deliver the product and find out it's not at all what the client needs. As the saying goes, the hard part is "building the right thing", not "building the thing right".

One possible response: "why write tests?" You shouldn't write tests, especially lots of unit tests ahead of time, if you might just throw them all away when the requirements change. This is a bad response because we all know the difference between writing tests and formal methods: testing is easy and FM is hard. Testing requires low cost for moderate correctness; FM requires high(ish) cost for high correctness. And when requirements are constantly changing, "high(ish) cost" isn't affordable and "high correctness" isn't worthwhile, because a kinda-okay solution that solves a customer's problem is infinitely better than a solid solution that doesn't.

But eventually you get something that solves the problem, and what then? Most of us don't work for Google; we can't axe features and products on a whim. If the client is happy with your solution, you are expected to support it. It should work when your customers run into new edge cases, or migrate all their computers to the next OS version, or expand into a market with shoddy internet. It should work when 10x as many customers are using 10x as many features. It should work when you add new features that come into conflict. And just as importantly, it should never stop solving their problem. Canonical example: your feature involves processing requested tasks synchronously. At scale, this doesn't work, so to improve latency you make it asynchronous. Now it's eventually consistent, but your customers were depending on it being always consistent. Now it no longer does what they need, and has stopped solving their problems.

Every successful requirement met spawns a new requirement: "keep this working". That requirement is permanent, or close enough to decide our long-term strategy. It takes active investment to keep a feature behaving the same as the world around it changes. (Is this all a pretentious way of saying "software maintenance is hard"? Maybe!)

Phase changes

In physics there's a concept of a phase transition. To raise the temperature of a gram of liquid water by 1°C, you have to add 4.184 joules of energy.[2] This continues until you raise it to 100°C, then it stops. After you've added two thousand joules to that gram, it suddenly turns into steam. The energy of the system changes continuously, but the form, or phase, changes discretely. Software isn't physics, but the idea works as a metaphor. A certain architecture handles a certain level of load, and past that you need a new architecture. Or a bunch of similar features are independently hardcoded until the system becomes too messy to understand, so you remodel the internals into something unified and extendable. Etc, etc, etc. It doesn't have to be a totally discrete phase transition, but there's definitely a "before" and "after" in the system's form.

Phase changes tend to lead to more intricacy/complexity in the system, meaning it's likely that a phase change will introduce new bugs into existing behaviors. Take the synchronous vs asynchronous case. A very simple toy model of synchronous updates would be Set(key, val), which updates db[key] to val.[3] A model of asynchronous updates would be AsyncSet(key, val, priority), which adds a (key, val, priority, server_time()) tuple to a tasks set; another process then asynchronously pulls a tuple (ordered by highest priority, then earliest time) and calls Set(key, val).
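To make the two models concrete, here's a small executable sketch (plain Python as an illustration, with made-up helper names; the footnote below notes the models translate directly to TLA+):

```python
import heapq
import itertools
import time

db = {}
tasks = []               # pending work: (priority, server_time, seq, key, val)
seq = itertools.count()  # tie-breaker so heapq never compares key/val

def set_sync(key, val):
    # Synchronous model: Set(key, val) updates db[key] immediately.
    db[key] = val

def async_set(key, val, priority):
    # Lower number = higher priority: heapq pops the smallest tuple first,
    # with ties on priority broken by the earliest server time.
    heapq.heappush(tasks, (priority, time.time(), next(seq), key, val))

def worker_step():
    # The asynchronous process: pull the highest-priority, earliest task
    # and apply it with the synchronous primitive.
    if tasks:
        _, _, _, key, val = heapq.heappop(tasks)
        set_sync(key, val)

async_set("greeting", "hello", priority=1)
# Reading db["greeting"] here would fail: the write hasn't been applied yet.
worker_step()
assert db["greeting"] == "hello"
```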
Here are some properties the client may need preserved as a requirement:

- If AsyncSet(key, val, _, _) is called, then eventually db[key] = val (possibly violated if higher-priority tasks keep coming in).
- If someone calls AsyncSet(key1, val1, low) and then AsyncSet(key2, val2, low), they should see the first update and then the second (linearizability; possibly violated if the requests go to different servers with different clock times).
- If someone calls AsyncSet(key, val, _) and immediately reads db[key], they should get val (obviously violated, though the client may accept a slightly weaker property).

If the new system doesn't satisfy an existing customer requirement, it's prudent to fix the bug before releasing the new system. The customer doesn't notice or care that your system underwent a phase change. They'll just see that one day your product solves their problems, and the next day it suddenly doesn't.

This is one of the most common applications of formal methods. Both of those systems, and every one of those properties, is formally specifiable in a specification language. We can then automatically check that the new system satisfies the existing properties, and from there do things like automatically generate test suites. This does take a lot of work, so if your requirements are constantly changing, FM may not be worth the investment. But eventually requirements stop changing, and then you're stuck with them forever. That's where models shine.

1. As always, I'm using formal methods to mean the subdiscipline of formal specification of designs, leaving out the formal verification of code. Mostly because "formal specification" is really awkward to say.
2. Also called a "calorie". The US "dietary Calorie" is actually a kilocalorie.
3. This is all directly translatable to a TLA+ specification; I'm just describing it in English to avoid paying the syntax tax.

We'll always need junior programmers

We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks, we'll be able to invite a group of finalists to the next phase. This highlights the folly of thinking that what it'll take to land a job like this is some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration, like a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied! It sounds self-evident, maybe, but I see questions time and again about it, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance. I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply retarded. However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.

Brian Regan Helped Me Understand My Aversion to Job Titles

I like the job title "Design Engineer". When required to label myself, I feel partial to that term (I should, I've written about it enough). Lately I've felt like the term is becoming more mainstream which, don't get me wrong, is a good thing. I appreciate the diversification of job titles, especially ones that look to stand in the middle between two binaries. But - and I admit this is a me issue - once a title starts becoming mainstream, I want to use it less and less. I was never totally sure why I felt this way. Shouldn't I be happy a title I prefer is gaining acceptance and understanding? Do I just want to rebel against being labeled? Why do I feel this way?

These were the thoughts simmering in the back of my head when I came across an interview with the comedian Brian Regan, where he talks about his own penchant for not wanting to be easily defined:

"I've tried over the years to write away from how people are starting to define me. As soon as I start feeling like people are saying 'this is what you do' then I would be like 'Alright, I don't want to be just that. I want to be more interesting. I want to have more perspectives.' [For example] I used to crouch around on stage all the time and people would go 'Oh, he's the guy who crouches around back and forth.' And I'm like, 'I'll show them, I will stand erect! Now what are you going to say?' And then they would go 'You're the guy who always feels stupid.' So I started [doing other things]."

He continues, wondering aloud whether this aversion to being easily defined has actually hurt his career in terms of commercial growth:

"I never wanted to be something you could easily define. I think, in some ways, that it's held me back. I have a nice following, but I'm not huge. There are people who are huge, who are great, and deserve to be huge. I've never had that and sometimes I wonder, 'Well maybe it's because I purposely don't want to be a particular thing you can advertise or push.'"

That struck a chord with me. It puts into words my current feelings towards the job title "Design Engineer" - or any job title, for that matter. Seven or so years ago, I would've enthusiastically said, "I'm a Design Engineer!" To which many folks would've said, "What's that?" But today I hesitate. If I say "I'm a Design Engineer" there are fewer follow-up questions. Nowadays that title elicits fewer questions and more (presumed) certainty. I think I enjoy a title that elicits a "What's that?" response, which allows me to explain myself in more than two or three words, without being put in a box. But once a title becomes mainstream, once people begin to assume they know what it means, I don't like it anymore (speaking for myself, personally). As Brian says, I like to be difficult to define. I want to have more perspectives. I like a title that befuddles, that doesn't provide a presumed sense of certainty about who I am and what I do. And I get it, that runs counter to the very purpose of a job title, which is why I don't think it's good for your career to have the attitude I do, lol.

I think my own career evolution has gone something like what Brian describes:

Them: "Oh, you're a Designer? So you make mock-ups in Photoshop and somebody else implements them."
Me: "I'll show them, I'll implement them myself! Now what are you gonna do?"
Them: "Oh, so you're a Design Engineer? You design and build user interfaces on the front-end."
Me: "I'll show them, I'll write a Node server and set up a database that powers my designs and interactions on the front-end. Now what are they gonna do?"
Them: "Oh, well, I'm not sure we have a term for that yet. Maybe Full-stack Design Engineer?"
Me: "Oh yeah? I'll frame up a user problem, interface with stakeholders, explore the solution space with static designs and prototypes, implement a high-fidelity solution, and then be involved in testing, measuring, and refining said solution. What are you gonna call that?"

[As you can see, I have some personal issues I need to work through…]

As Brian says, I want to be more interesting. I want to have more perspectives. I want to be something that's not so easily definable, something you can't sum up in two or three words. I've felt this tension my whole career making stuff for the web. I think it has led me to work on smaller teams, where boundaries are much more permeable and crossing them is encouraged rather than discouraged. All that said, I get it. I get why titles are useful in certain contexts (corporate hierarchies, recruiting, etc.) where you're trying to take something as complicated and nuanced as individual human beings and reduce them to labels that can be categorized in a database. I find myself avoiding those contexts where so much emphasis is placed on the usefulness of those labels. "I've never wanted to be something you could easily define" stands at odds with the corporate attitude of, "Here's the job req. for the role (i.e. cog) we're looking for."

Bike Brooklyn! zine

I've been biking in Brooklyn for a few years now! It's hard for me to believe it, but I'm now one of the people other bicyclists ask questions of. I decided to make a zine that answers the most common of those questions: Bike Brooklyn! is a zine that touches on everything I wish I knew when I started biking in Brooklyn. A lot of this information can be found in other resources, but I wanted to collect it in one place. I hope to update this zine as we get significantly more safe bike infrastructure in Brooklyn and as laws change to make streets safer for bicyclists (and everyone) over time, but it's still important to note that each release will reflect a specific snapshot in time of bicycling in Brooklyn.

All text and illustrations in the zine are my own. Thank you to Matt Denys, Geoffrey Thomas, Alex Morano, Saskia Haegens, Vishnu Reddy, Ben Turndorf, Thomas Nayem-Huzij, and Ryan Christman for suggestions for content and help with proofreading. This zine is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, so you can copy and distribute this zine for noncommercial purposes in unadapted form as long as you give credit to me.

Check out the Bike Brooklyn! zine on the web, or download PDFs to read digitally or print here!
