A long time ago, there was a small Estonian website called “Mängukoobas” (literally “game cave” in Estonian). It started out as a place for people to share links to browser games, mostly built with Flash or Shockwave. It had a decent moderation system, randomized treasure chests that could appear on any part of the website, and a lot more.1 What it also had was a basic filtering system. As a good chunk of the audience was children (myself included), there was a need to filter out all the naughty Estonian words, such as “kurat”, “türa”, “lits” and many more colorful ones. The filtering was very basic, however, and some took it upon themselves to demonstrate how flawed the system was by intentionally using phrases like “politsei”, which is Estonian for “police”. It would end up being filtered to “po****ei” as it also contained the word “lits”, which translates to “slut”2. Of course, you could easily overcome the filter by using a healthy dose of period characters,...
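For illustration, a filter like that is nothing more than blind substring masking with no notion of word boundaries, which is exactly what bites “politsei”. A minimal sketch (mine, with an abbreviated and ASCII-fied word list):

```perl
#!/usr/bin/perl
# A naive profanity filter of the kind described: blind substring
# replacement with no word-boundary awareness.
use strict;
use warnings;

my @banned = qw(kurat lits);        # abbreviated word list
my $text   = "politsei";            # Estonian for "police"

for my $word (@banned) {
    # mask every occurrence, even inside innocent words
    $text =~ s/\Q$word\E/'*' x length($word)/gie;
}
print "$text\n";                    # prints "po****ei"
```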



More from ./techtipsy

Those cheap AliExpress 18650 Li-ion cell power bank enclosures suck, actually

I had a few old ThinkPad batteries lying around. They were big, bulky and not holding much of a charge. Inside them were standard 18650 Li-ion battery cells. I have two TOMO M4 power banks around, and they are fantastic for reusing these old 18650 cells. You can even mix and match cells without a worry because they are individually managed, meaning that differences in charge levels and voltages between cells are not a concern. Unfortunately, the TOMO M4 lacks modern features, such as USB-C ports and USB-C PD output at higher voltages and currents, which makes it less useful and convenient in 2025, and I haven't found any newer designs from them that are just as cool.

I still wanted to reuse those 18650 cells, so I went to AliExpress and bought some 18650 battery enclosures for testing. One holds 8 cells, another one 10 cells, and the largest one fits 20 cells. Unfortunately, they all suck and are likely a huge fire hazard in the wrong hands.

For the 8-cell variant, I used newly bought 18650 Li-ion cells charged up to the same level. This battery enclosure worked quite well, until it didn't: for whatever reason, the enclosure could not charge itself and other devices at the same time.

With the 10-cell variant, I used two different batches of used 18650 Li-ion cells from old ThinkPad batteries, charging them up first. That one worked fine, until it also failed in weird ways. It got quite hot during charge/discharge cycles, and eventually the segment display that's responsible for showing the charge level stopped showing certain segments. At that point I lost trust in that enclosure, too.

I had the most fun with the 20-cell battery enclosure. My first fuck-up involved using two old battery cells with different charge levels, which resulted in some magic smoke coming out of the PCB of the enclosure itself. Somehow that didn't break the battery bank enclosure, so I crammed 20 charged-up, mixed, used 18650 Li-ion cells into it and started charging and discharging it. The batteries got quite hot, likely around 50-70°C based on the temperature readings of my hand.1 At that point I realized I was playing with fire and stopped.

The USB-C PD behaviour was different on every power bank. Some were fine with powering a ThinkPad laptop with the appropriate cable, some were flaky about negotiating the power levels, and some were just useless with certain cable or device combinations.

These battery banks rely on a very simple arrangement: the 18650 Li-ion cells are connected in parallel, and the resulting 3.7-4.2V is then boosted up to the appropriate voltage on the control board. This carries risks: if you insert two or more Li-ion cells with different voltages, one will start charging the others to bring the cells to the same voltage, and that can become uncontrolled and result in a cell overheating and/or exploding (see the back-of-the-envelope sketch at the end of this post). It's also a horrible idea to mix and match used cells of different capacities and wear levels, as they will charge and discharge at different rates.

In my experience, a cheap DIY power bank enclosure also carries the risk of attracting attention at an airport security check. After learning how bad these can be, that is an entirely justified suspicion.

I ended up throwing all the battery bank enclosures out; the hardware failures and issues made me too concerned about one of these starting a fire. I like controlled fires, but uncontrolled ones are really not my cup of tea.
If you know of an 18650 Li-ion cell battery bank enclosure that works like the TOMO M4 but has modern features (USB-C port, USB-PD, can charge laptops etc.), then please do reach out to me as I'd love to test one out. You can find the contact details below the post.

1. 50-55°C feels very hot to the touch, so that's a good rule of thumb (no pun intended) for determining the minimum temperature of a hot surface by hand. Disclaimer: not physics advice.
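For a rough sense of why mismatched parallel cells are dangerous, here's the promised back-of-the-envelope sketch; the internal resistance figure is my assumption (it varies a lot with cell age and model), not a measured value:

```perl
#!/usr/bin/perl
# Two Li-ion cells wired directly in parallel: the voltage difference
# drives an equalization current limited only by the cells' internal
# resistances. All values below are illustrative assumptions.
use strict;
use warnings;

my $v_full = 4.2;     # fully charged cell (V)
my $v_low  = 3.2;     # deeply discharged cell (V)
my $r_int  = 0.05;    # assumed internal resistance per cell (ohms)

my $current = ($v_full - $v_low) / (2 * $r_int);
printf "equalization current: %.0f A\n", $current;              # 10 A
printf "heat dissipated in each cell: %.1f W\n", $current**2 * $r_int;
```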

How to run Uptime Kuma in Docker in an IPv6-only environment

I use Uptime Kuma to check the availability of a few services that I run, with the most important one being my blog. It's really nice.

Today I wanted to set it up on a different machine to help troubleshoot and confirm some latency issues that I've observed, and for that purpose I picked the cheapest ARM-based Hetzner Cloud VM hosted in Helsinki, Finland. Hetzner provides a public IPv6 address for free, but you have to pay extra for an IPv4 address. I didn't want to do that out of principle, so I went ahead and copied my Docker Compose definition over to the new server.

For some reason, Uptime Kuma would start up on the new IPv6-only VM, but it was unsuccessful in making requests to my services, which support both IPv4 and IPv6. The requests would time out and show up as "Pending" in the UI, and the service logs complained about not being able to deliver e-mails about the failures. I confirmed IPv6 connectivity within the container by running docker exec -it uptime-kuma bash and trying a few curl and ping commands with IPv6 flags; those had no issues. When I added a public IPv4 address to the container, everything started working again.

I fixed the issue by explicitly disabling the IPv4 network in the Docker Compose service definition, and that did the trick: Uptime Kuma made successful requests towards my services. It seems that the service defaults to IPv4 because the internal Docker network gives it an IPv4 network to work with, and that causes issues when your machine doesn't have any IPv4 network or public IPv4 address associated with it. Here's an example Docker Compose file:

```yaml
name: uptime-kuma

services:
  uptime-kuma:
    container_name: uptime-kuma
    networks:
      - uptime-kuma
    ports:
      - "3001:3001"
    volumes:
      - /path/to/your/storage:/app/data
    image: docker.io/louislam/uptime-kuma
    restart: always

networks:
  uptime-kuma:
    enable_ipv6: true
    enable_ipv4: false
```

That's it! If you're interested in different ways to set up IPv6 networking in Docker, check out this overview that I wrote a while ago.

3D printing is pretty darn cool, actually

I love 3D printing. Out of all the tech hype cycles and trends over the last decade, this one is genuinely useful. There's simply something magical about being able to design or download a model from the internet, send it to a machine, and after a few hours get an actual physical object in return! I don't own a 3D printer myself, but I've had access to people who are happy to help out by printing something for me. So far I've printed the following useful things:

- a Makita vacuum cleaner holder
- a dual vertical laptop stand: it's such a simple and cheap design, and yet it works incredibly well if you add some rubberized material to the bottom and the inside of the laptop holder
- a dual HDD adapter for a Zimaboard
- a stand for the Steam Deck
- a carrying case insert for the Steam Deck
- a case for the Orange Pi Zero

There's so much more I'd want to print, like various battery holders, controller stands, and IKEA SKÅDIS mounts. There's also the option of downloading and printing a whole PC case, which is incredibly tempting. Will I finally be able to build the perfect home server according to my very specific requirements? Probably not, given how often my preferences change, but it would be incredibly cool!

And yet I don't own a 3D printer. The main obstacle for me is time: I feel that in order to be successful with a 3D printer, I'll need to at the very least learn the basics of filaments, their properties, what parameters to configure and how, how to maintain a 3D printer, how to fix one when it breaks, how to diagnose misalignment issues, and so on. I'll also need space for one; extruding hot melted plastic seems like the sort of thing I'd want to host in a proper workshop with actual ventilation. It's a whole-ass hobby, not a half-ass one.

Durability can be problematic with 3D prints, even in my limited experience. For example, I tried positioning the Makita vacuum cleaner holder differently, but ended up putting too much strain on the design, which eventually led to it completely failing. In other cases, filaments like PLA aren't suitable for parts attached to warm or hot computer components; they will warp like crazy.

I appreciate the hell out of anyone who shares their designs with the world, and especially those who allow remixing or customizing them. There are fantastic designs and ideas out there on sites like Printables, and the creativity on display warms my heart.

PSA: part of your Kagi subscription fee goes to a Russian company (Yandex)

Today I learned that Kagi uses Yandex as part of its search infrastructure, making up about 2% of their costs, and their CEO has confirmed that they do not plan to change that. To quote:

Yandex represents about 2% of our total costs and is only one of dozens of sources we use. To put this in perspective: removing any single source would degrade search quality for all users while having minimal economic impact on any particular region. The world doesn't need another politicized search engine. It needs one that works exceptionally well, regardless of the political climate. That's what we're building.

That is unfortunate, as I found Kagi to be a good product with an interesting take on combining LLMs with search, but I cannot in good conscience continue to support it while they unapologetically finance a major company with ties to the Russian government, the same country that has been actively waging a war against Ukraine, a European country, for over 11 years, committing countless war crimes against civilians and military personnel in the process.

Kagi has the freedom to decide how they build the best search engine, and I have the freedom to use something else. Please send all your whataboutisms to /dev/null.


More in technology

Reverse-engineering Roadsearch Plus, or, roadgeeking with an 8-bit CPU

My other irredeemably nerdy habit is roadgeeking: exploring and mapping highways both old and new. It turns out that 8-bit roadgeeking on ordinary home computers was absolutely possible. For computers of this class, devising an optimal highway route becomes an exercise not only in encoding sufficient map data onto a floppy disk, but also in performing efficient graph traversal with limited hardware. Today we'll explore Roadsearch-Plus, one of the (if not the) earliest such software packages — primarily on the Commodore 64, but originating on the Apple II — and at the end "drive" all the way from southern California to British Columbia along US Highway 395, my first long haul expedition, but as it was in 1985. Buckle up while we crack the program's runtime library, extract its database, and (working code included) dive deeply into the quickest ways to go from A to B using a contemporary home computer.

Although this article assumes a little bit of familiarity with the United States highway system, I'll provide a 30-second version. The top-tier national highway network is the 1956 Eisenhower Interstate System (abbreviated I-, such as I-95), named for president Dwight D. Eisenhower who promulgated it, signed with red, white and blue shields. Nearly all of its alignments, which is to say the physical roads composing it, are grade-separated full freeway. It has come to eclipse the 1926 United States Numbered Highway System (abbreviated US, such as US 395), a nationally-numbered grid system of highways maintained by the states, albeit frequently with federal funding. Signed using a horned white shield, these roads vary from two-lane highway all the way to full freeway and may be multiplexed (i.e., multiply signed) with other US highways or Interstates in many areas. While they are no longer the highest class of U.S. national road, they nevertheless remain very important for regional links, especially in those areas that Interstates don't service. States and counties maintain their own locally allocated highway systems in parallel. Here is a glossary of these and other roadgeek terms.

Geographic information systems (GIS) started appearing in the 1960s, after Waldo Tobler's 1959 "Automation and Cartography" paper about his experience with the military Semi-Automatic Ground Environment system. SAGE OA-1008 displays relied on map transparencies developed manually but printed with computers like the IBM 704. Initially such systems contained only geographic features like terrain and coastlines and specific points of interest, but support for highways as a layer or integral component was rapidly implemented for land use applications, and such support became part of most, if not all, mainframe and minicomputer GIS systems by the late 1970s. However, these systems generally only handled highways as one of many resource or entity types; rarely were there specific means of using them for navigational purposes.

A notable early attempt at in-car navigation was the 1981 Honda Electro-Gyrocator. Because Global Positioning System (GPS) satellite data was not then available for civilian applications, and other radio navigation systems like LORAN were hard to receive reliably in canyons or tunnels, it relied on its own internal gas gyroscope to detect rotation and movement, aided by a servo in the car's transmission. The Electro-Gyrocator used a Texas Instruments TMS9980 (a derivative of the 16-bit TMS9900 in the TI-99/4A but with an 8-bit data bus) as its CPU and a sidecar TMS9901 for I/O.
It had 10K of ROM, 1K of SRAM and 16K of DRAM, hopeless for storing map data of any consequence, so the actual maps were transparencies too; the 9980 integrated sensor data provided by the 9901 from the gyroscope and servo to plot the car's course on a small 6" CRT behind the map overlay. The user was expected to set the starting location on the map before driving, and there was likewise no provision for routing. It was only made available that year for ¥300,000 (about US$2900 in 2025 dollars at current exchange rates) on the JDM Honda Accord and Honda Vigor, and it was discontinued by 1982.

There were also a few early roadgeek-oriented computer and console games, which I distinguish from more typical circuit or cross-country racers by an attempt to base them on real (not fictional) roads with actual geography. One I remember vividly was Imagic's Truckin', a Mattel Intellivision-exclusive game from 1983 which we played on our Tandyvision One. It predates Juggernaut for the ZX Spectrum by at least two years, and Juggernaut uses a completely fictitious game world instead. Besides very terse prompts and a heavily compressed internal routing table, the game makes all highway numbers unique and has no multiplexes or three-digit auxiliary Interstates, and while you can drive into Canada you can't drive into Mexico (admittedly it was pre-NAFTA). Additionally, for gameplay reasons every highway junction is a named "city," introducing irregularities like Interstate 70 ending in Provo, UT when it really ends about 130 miles south, or Interstate 15 ending in El Cajon, CA, and many cities and various two-digit primary Interstates are simply not included (e.g., I-12, I-19, I-30, I-85, I-93, etc.). As a result of these constraints, among other inaccuracies Interstate 90 terminates in Madison, WI at I-94 (not Boston), I-94 terminates in Milwaukee at I-55 (not Port Huron, MI), and I-40 is extended west from its true terminus in Barstow, CA along real-world California State Highway 58 to intersect I-5 "in Bakersfield" (real Interstate 5 is around twenty miles away). Still, it contains an extensive network of real highways with their lengths and control cities, managing it all on an early console platform with limited memory, while simultaneously supporting full 1983-level gameplay.

The 8-bit home computer was ascendant during the same period, and at least one company perceived a market for computing routes for vacations and business trips. I can't find any references to an earlier software package of this kind for this class of computer, at least in the United States, so we'll call it the first. The author's name turns up in The Software Writer's Marketplace from 1984. If this is the same person, he later appears in a 2001 document as the Ground and Mission Systems Manager at the NASA Goddard Space Flight Center in Greenbelt, Maryland, about a half hour's drive away. Columbia Software does not appear in any magazines prior to 1982, nor does it appear after 1985, though no business record under that name is known to either the state of Maryland or the state of Delaware.

The Apple II disk contains files named T%(), N%() and B%(), which sound like Applesoft BASIC integer arrays. (The file MPH just contains the MPG and MPH targets in text.) In fact, there are actually two compiled programs present, ROADSEARCH (the main executable) and DBA (the distance editor), and using these "variable" files allows both programs to keep the memory image of the arrays, no doubt the representation of its map database, consistent between them.
You can start either compiled program by BRUNning them, as HELLO does. We can also make a fairly educated guess about which compiler was used: there were only a few practical choices in 1981-82 when it was probably written, and most of them can be immediately eliminated by what they (don't) support or what they (don't) generate. For example, the program being almost certainly Applesoft BASIC obviously eliminates any Integer BASIC compiler. It also can't be On-Line Systems' Expediter II, because that generates "Applesoft BASIC" programs with a single CALL statement instead of binaries that are BRUN, and it probably isn't Southwestern Data Systems' Speed Star because of the apparent length of the program and that particular compiler's limited capacity. That leaves the two major compilers of this era: Microsoft's TASC ("The Applesoft Compiler"), which was famously compiled with itself, and Hayden Book Company's Applesoft Compiler. TASC uses a separate runtime that must be BLOADed before BRUNning the main executable, which HELLO doesn't do. Hayden's compiler is thus the most likely tool, and this theory is buoyed by the fact that this compiler does support variable sharing between modules.

If we run strings on the DOS 3.3 .dsk image, and account for the fact that the disk interleave will not necessarily put lines in sequential order, we can pick out the strings of the program as well as a number of DOS 3.3 file manager commands, sometimes in pieces, such as B%(),A30733,L5448 which is probably part of a BSAVE command, or BLOAD B%(),A30733. The disk also has examples of both kinds of DOS 3.3 text files: sequentially accessed (such as OPEN T%(), READ T%() and CLOSE T%()), but also the less commonly encountered random access kind (with explicitly specified record numbers and lengths, such as OPEN ROADS,L12 and OPEN ROADMAP CITIES,L19, then READ ROADS,R and READ ROADMAP CITIES,R, which would be followed by the record number). For these random access files, given a record length and record number, the DOS track-and-sector list is walked to where that record would be, and only the necessary sector(s) are read to construct and return the record. (For example, record 100 of a file with 19-byte records starts at byte 1900, which falls within the file's eighth 256-byte sector.) We can see the contents with a quick Perl one-liner to strip off the high bit (reproduced approximately below), feeding the result to strings. Again the order is affected by the disk interleave, but the file is stored alphabetically (we'll extract this file properly in a moment). Another interesting string I found this way was "TIABSRAB WS AIBMULOC", which is COLUMBIA SW BARSBAIT backwards. Perhaps someone can explain this reference.

In a hex editor the city database shows a regularly repeating 19-byte record format for the names. Remember that the characters are stored with the high bit set. This is not a very efficient means of storage, especially considering DOS 3.3 only had 124K free per disk side (after DOS itself and filesystem overhead), but it would be a lot easier for an Applesoft BASIC program to handle, since the record lookup work could be shunted off to DOS 3.3 and performed quickly. Also, while you can list the cities and junctions from the menu, they are not indexed by state, only by first letter.

I dumped ROADMAP CITIES to the emulator's virtual printer. We know its record length because it was stored as a string in the compiled BASIC text we scanned previously. That, in turn, gives us a text file we can read on the Mac side. It is, as expected, 406 lines long. The same process gets us the ROADS list, which is 171 lines long.
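That quick one-liner didn't survive in this copy of the article, but it was presumably something along these lines (my approximation, not the author's original; the file name is hypothetical):

```
% perl -pe 'tr/\x80-\xff/\x00-\x7f/' roadmap.cities.raw | strings
```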
Keep in mind that's just a list of the names of the roads; it does not contain information on the actual highway segments between points. The T%() file, meanwhile, turns out to be a first-letter index: for each letter index 1-26, the value is the first record number of the cities starting with that letter. As we saw previously, the cities are already sorted on disk to facilitate this. There are no cities starting with Z, so that letter just points to the end of the file (non-existent record 407) and terminates the index. That still leaves B%() and N%(), but we'll solve that problem a little later on.

A 1985 update added more cities and updated routing information. However, this release appears to be specific to the Commodore port — if there was a 1985 map update for the Apple II, it has yet to surface — and I can find no subsequent release of Roadsearch after 1985 on any platform. Both C64 versions came in the same Roadsearch and Roadsearch-Plus variants for the same prices, so the iteration we'll explore here is Roadsearch-Plus with the presumed final 1985 database. For purposes of emulation we'll use VICE's SuperCPU spin with the 65816 enabled, as some number crunching is involved, but I also tested it on my real Commodore SX-64 and 128DCR for veracity. (Running xscpu64 in warp mode will absolutely tear through any routing task, but I strongly recommend setting location 650 to 64 in the VICE monitor to disable key repeat.) It's worth noting that the various circulating 1541 disk images of the 1984 and 1985 editions were modified by their previous owners to add entries, though we'll point out how these changes can be detected and reversed. For much of this article I'll be running a 1985 disk that I manually cleared in a hex editor to what I believe was its original contents.

The disk contains the main program (ROADSEARCH+), the editor (IDBA) and two RELative files for the CITIES and ROADS. RELative files are functionally equivalent to DOS 3.3 random access files, being a record-based file format with a fixed size and an index (made up of "side sectors"). They are an uncommon sight in commercial software due to their idiosyncrasies and a few outright bugs, and they don't work well for binary data or support variable record sizes, which is why Berkeley Softworks came up with VLIR for GEOS instead. On the other hand, they do just fine for text strings, and the lookup can be very fast.

Dumping the raw sectors of the cities file from the .d64 disk image, the directory entry indicates a file with 31-byte records, the first side sector at track 19 sector 10 and the first data sector at track 19 sector 0. Other than the obvious typo in Abilene, TX, it is the same basic format as the Apple version and also sorted, ending each string with a carriage return. As what I assume is a means of preventing its contents from being trivially dumped, a naive read of each record won't yield anything useful, because every record starts with an $ff and a whole bunch of nulls, which Commodore DOS interprets as the end. The actual string doesn't start until offset 12. The same basic idea holds for the roads file: the same offset trick is used, but here the records are 24 bytes since route names are shorter. Again, this isn't a particularly efficient storage mechanism, but we have over 165K available on a formatted disk, and RELative file access to any arbitrary record is quite quick. Despite the presence of side sectors, the actual records of a RELative file are still stored sequentially on disk with the usual forward track and sector pointers. As such, we don't need to grab the side sectors to simply extract its contents.
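To make that record layout concrete, here is a minimal stripper sketch (mine, not either of the author's scripts mentioned below): given the raw fixed-length records of a RELative file on standard input, it skips the $ff/null padding and takes the text from offset 12 up to the carriage return.

```perl
#!/usr/bin/perl
# Strip Roadsearch RELative file records to plain text: pass 31 for
# CITIES or 24 for ROADS as the record length.
use strict;
use warnings;

my $reclen = shift(@ARGV) || 31;
local $/ = \$reclen;                 # read fixed-size chunks
while (my $rec = <STDIN>) {
    last if length($rec) < $reclen;  # ignore a short tail
    my $text = substr($rec, 12);     # string starts at offset 12
    $text =~ s/\r.*//s;              # carriage return ends the record
    print "$text\n" if length($text);
}
```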
For some period of time the c1541 tool from the VICE suite would not copy REL files, and this was only recently fixed, so here is a Perl script I threw together to iterate over a D64 disk image and transfer a file to standard output, either by name or by a specified starting track and sector. Because this yanks the files "raw," we then need to strip them down; feed the extracted REL records to something like the stripper shown above and you'll get a text file. You'll notice that both the cities and roads lists are, or at least start out, sorted. All C64 Roadsearch images I've seen circulating so far have been altered by their previous owners to add their own local roads and cities, and their changes can be easily distinguished at the point where the lists abruptly go out of alphabetical order.

There is also a pair of SEQuential files, one named MPH, serving the same function as before of storing the preferred speed and fuel economy, and a second one simply named M. This file contains four ASCII numbers, one obviously recognizable as our serial number, though this is only one of the two places the program checks. The others are the number of cities (406, or 487 in the 1985 version), the number of roads (215 in 1985), and a third we don't yet know the purpose of. You can confirm the first two by checking them against the number of lines in the files you extracted. What we don't see are files for the arrays we spotted on the Apple disk. The only file left big enough to account for those is MATS. To figure out how that works, we should start digging into the program.

The first thing the main program loads is RTL-64, a runtime library and the telltale sign of a program compiled with DTL BASIC. DTL BASIC, formally DTL-BASIC 64 Jetpack, became extremely popular with developers not just due to its good performance and compatibility, but also because it required no royalties to sell programs compiled with it as long as credit was given. An optional "protector" version can obfuscate the program and/or require a hardware dongle, though this version is rarer due to its expense (and fortunately was not used here). The runtime library slots into the RAM under the BASIC ROM, so there is no obvious loss of free memory. DTL stood for "Drive Technology Ltd." and was written by David Hughes in the UK, first for the PET; the compiler is notable for being compiled with itself, like Microsoft TASC, and using the same "RTL" runtime library and protection system (and, obnoxiously, a dongle) as the object code it generates. The 64 tape version is substantially less capable than the disk one.

DTL BASIC compiles to its own bespoke P-code which is executed by the RTL. It achieves its speed through a greater degree of precalculation and preparsing (e.g., pre-resolving values and line numbers, removal of comments, etc.), a custom garbage collection routine, and also, where possible, the use of true signed 16-bit integer math. This is a substantial speed-up over most Microsoft-derived BASICs, in which Microsoft Binary Format floating point is the native system for all calculations, to the point where integer variables must first be converted to floating point, the computation performed, and the result converted back. Ordinarily this double conversion would make integer variables useful only for smaller arrays in memory, because it makes them slower than regular floating point variables. However, DTL BASIC does perform true integer math without conversion, first for all variables explicitly declared as integer, and even autoconverting other variables at compile time with a directive (pragma).
Although some private work on a decompiler reportedly exists, to the best of my knowledge it remains unfinished and unavailable. Interestingly, what I presume is the earlier 1984 disk image has some ghost directory entries, two each for the source BASIC programs and their "symbol tables" used by ERROR LOCATE for runtime errors. Sadly these source files were overwritten and cannot be resurrected, and even the ghost entries were purged on the second 1984 image. However, two aspects of the compiler make it possible to recover at least a portion of the original program text by hand. The first is that not all statements of the original program are fully converted to P-code: in particular, READ/DATA, OPEN, INPUT/INPUT#, DIM, PRINT/PRINT# and possibly others are preserved nearly in their original BASIC form, including literal strings and, most usefully, variable names. For example, if we pull the compiled ROADSEARCH+ off the disk image and run it through strings, we see text not unlike a BASIC program.

DTL-compiled programs always start with a SYS 2073 to jump into a small machine-code subroutine linked into the object. This section of code loads the RTL (the filename follows) and has other minor housekeeping functions, including one we'll explain shortly when we break into the program while it's running. It resides here so that it can bank BASIC ROM in or out as necessary without crashing the computer. Following that is an incomplete machine language subroutine in a consolidated DATA statement. Disassembly of the fragment shows it's clearly intended to run from $c000, but at least part of it is missing. Of the portion we can see here, however, there are calls for what appears to be a Kernal load, so if we drop a breakpoint on $ffba in VICE (e.g., the Kernal SETLFS routine) we can run the 6502's call stack back to see the full routine and the filename (at $c0e1) it's trying to access: MATS. It loads it straight into memory in the middle of the available BASIC range. After that, we now see the arrays we saw in the Apple II version, but more importantly they're clearly part of DIM statements, so we can also see their dimensions. We already knew T%() was likely to be only 26 (27 counting the zero index) integers long from dumping the Apple version's contents, but the N%() array has up to 757 entries of five fields each, and B%() is even bigger, with 1081 records of four fields each. This is obviously where our map data is stored, and MATS is the file it seems to be loading to populate them.

This brings us to the second aspect of DTL Jetpack that helps to partially decipher the program: to facilitate some reuse of the BASIC ROM, the generated code will still create and maintain BASIC-compatible variables which we can locate in RAM. So that we know what we're dealing with, we need to figure out some way to stop the program while it's running to examine its state. Naturally the compiler offers a means to defeat this. Here is the small machine-code stub at the front of the compiled program:

```
; warm start (2061)
.C:080d  20 52 08   JSR $0852
.C:0810  4C 2B A9   JMP $A92B
; not called?
.C:0813  20 52 08   JSR $0852
.C:0816  4C 11 A9   JMP $A911
; cold start (2073)
; load the RTL if flag not set, then start P-code execution
.C:0819  20 52 08   JSR $0852
.C:081c  AD FF 02   LDA $02FF
.C:081f  C9 64      CMP #$64
.C:0821  F0 17      BEQ $083A
.C:0823  A9 3F      LDA #$3F
.C:0825  85 BB      STA $BB
.C:0827  A9 08      LDA #$08
.C:0829  85 BC      STA $BC
.C:082b  A9 06      LDA #$06
.C:082d  85 B7      STA $B7
.C:082f  A9 00      LDA #$00
.C:0831  85 B9      STA $B9
.C:0833  A2 00      LDX #$00
.C:0835  A0 A0      LDY #$A0
.C:0837  20 D5 FF   JSR $FFD5
.C:083a  68         PLA
.C:083b  68         PLA
.C:083c  4C 48 A0   JMP $A048
.C:083f  .asc "RTL-64"
; seems to get called on a crash or error
.C:0845  20 52 08   JSR $0852
.C:0848  20 2F A0   JSR $A02F   ; jumps to $a948
.C:084b  20 5F 08   JSR $085F
.C:084e  60         RTS
; dummy STOP routine
.C:084f  A2 01      LDX #$01
.C:0851  60         RTS
; bank out BASIC ROM
.C:0852  A9 03      LDA #$03
.C:0854  05 00      ORA $00
.C:0856  85 00      STA $00
.C:0858  A9 FE      LDA #$FE
.C:085a  25 01      AND $01
.C:085c  85 01      STA $01
.C:085e  60         RTS
; bank in BASIC ROM
.C:085f  48         PHA
.C:0860  A9 03      LDA #$03
.C:0862  05 00      ORA $00
.C:0864  85 00      STA $00
.C:0866  A5 01      LDA $01
.C:0868  09 01      ORA #$01
.C:086a  85 01      STA $01
.C:086c  68         PLA
.C:086d  60         RTS
; execute next statement (using BASIC ROM)
.C:086e  20 5F 08   JSR $085F
.C:0871  20 ED A7   JSR $A7ED
.C:0874  20 52 08   JSR $0852
.C:0877  60         RTS
```

The compiler provides DS and ES directives which disable and enable RUN/STOP respectively. If we scan the RTL for code that modifies $0328 and $0329, where the STOP routine is vectored, we find the relevant segment: $f6ed is the normal value for the STOP vector, so we can assume that the routine at $b7c9 (which calls the routine at $b7da to set it) enables RUN/STOP, and thus the routine at $b7e5 disables it. Both routines twiddle a byte at $a822. By default the byte at $a822 is $ea, a 6502 NOP. This falls through to checking $91 for the state of the STOP key at the last time the keyboard matrix was scanned and branching accordingly. When the STOP routine is revectored, the byte at $a822 is changed at the same time to $60 RTS so that the STOP key check is never performed. (The RTL further achieves additional speed here by only doing this check on NEXT and IF statements, even when the check is enabled.) The simplest way to deal with this is to alter RTL-64 with a hex editor and turn everything from $b7e5 to $b7ec inclusive into NOPs. This turns the DS directive into a no-op as well, and now we can break out of the program by mashing RUN/STOP, though we'll do this after MATS is loaded. Parenthetically, instead of using CONT, the compiled program can be continued with SYS 2061 rather than SYS 2073.

We'll start with T%(), since we've assumed it has the same purpose in the Commodore port, and it does (once again there are no cities starting with Z, so the final letter points to non-existent record 488, beyond Yuma, AZ as record 487). We then use the BASIC routine at $b08b (SYS 45195) to look up the address of the first variable in the array — note no comma between the SYS and the variable reference — which is deposited as a 16-bit address in location $47 (71). In this case the pointer is to $6b8a (27530), the actual zeroth data element, so if we rewind a few bytes we'll also get the array descriptor as well as its entire contents:

```
>C:6b83  d4 80 3d 00 01 00 1b 00  01 00 01 00 0f 00 2d 00
>C:6b93  52 00 60 00 69 00 7c 00  8b 00 97 00 ed 00 f6 00
>C:6ba3  fe 01 16 01 36 01 43 01  4e 01 62 01 63 01 70 01
>C:6bb3  a3 01 b6 01 c8 01 cf 01  e4 01 e4 01 e8
```

Note that the array contents are stored big-endian: the dimensions themselves (a single big-endian short specified as 27, i.e. 26 plus the zeroth element), and then each value. In multidimensional arrays, each dimension is run out fully before moving to the next. We can easily pick out the values we saw for T%() in the above dump, but more importantly we now have a long, unambiguous 61-byte key we can search for. The entire sequence shows up in MATS, demonstrating that the file is in fact nothing more than an in-place memory dump of the program arrays' contents.
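Since MATS turns out to be a straight dump of the arrays, walking it is mostly a matter of hopping from descriptor to descriptor. Here is a sketch of the idea (mine, simplified from what the author's script described next must do; it assumes a PRG-style two-byte load address at the front):

```perl
#!/usr/bin/perl
# Walk BASIC integer array descriptors in a MATS-style memory dump:
# 2-byte name (high bits set), 2-byte little-endian block length,
# dimension count, big-endian dimension sizes, then big-endian values.
use strict;
use warnings;

open(my $fh, '<:raw', 'MATS') || die("MATS: $!\n");
my $mem = do { local $/; <$fh> };
close($fh);

my $p = 2;                            # skip the assumed load address
while ($p + 5 <= length($mem)) {
    my ($n1, $n2, $len, $nd) = unpack('C C v C', substr($mem, $p, 5));
    last if (!$len);
    my $name = chr($n1 & 0x7f);
    # dimensions already include the zeroth element (e.g. 27 for T%)
    my @dims = unpack("n$nd", substr($mem, $p + 5, 2 * $nd));
    my $total = 1; $total *= $_ for (@dims);
    my @v = unpack('n*', substr($mem, $p + 5 + 2 * $nd, 2 * $total));
    printf("%s%%(): dims %s, %d values\n", $name, "@dims", scalar(@v));
    $p += $len;    # block length spans descriptor plus data (61 for T%)
}
```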
Rather than manually dump the other two arrays from BASIC, we can simply walk MATS and pull the array values directly. This Perl script walks a memory dump of arrays (currently only two-dimensional integer arrays are implemented, because that's all we need for this project). It skips the starting address and then writes out tab-separated values into files named by the arrays it finds. Unlike their storage in memory, where values go (0,0), (1,0), (2,0) ... (0,1), (1,1) and so on, this pivots the arrays horizontally so you get (0,0), (0,1), (0,2), etc., grouped into lines as "rows."

B%() is the easiest to grok. Ignoring index 0, here is an extract from the file. B%() is referenced directly in one PRINT statement when displaying outgoing roads from a particular city. Let's take city record number 1, which is the misspelled (but only in the 1985 version) ABILINE TX (that is, Abilene, Texas). The develop-a-route feature will list all connected cities; for Abilene, there are five in the database. We know from the PRINTs that B%(I,3) represents the mile length of the current connecting segment indexed by I (here 39), which would make B%() the array containing the connecting roads between cities and junctions. We also know it gets R$ and C$ from the RELative files ROADS and CITIES on each row. Helpfully, it tells us that FORT WORTH TX is city number 115 (it is), which we can see is B%(I,1). B%(I,2) is 1, which must be us, leaving B%(I,0) as the route number, which must be the record number for Interstate 20 in ROADS. If we grab line 39 from out-B% (the value of index I), we do indeed see these same values. However, we can now do the search ourselves by just hunting for any record containing city 1 in columns 1 or 2 of our dumped array file (^I is a TAB character), which shows the correspondence more plainly. So much for B%().

N%(), on the other hand, is a little harder to crack. Other than its apparent DIM statement at the beginning (as shown above), it is never displayed directly to the user and never appears again in plaintext in the compiled object. Looking at the first few entries of the array: the first record (index 0) is the second place where the serial number is stored, though only the map editor references it. Not counting index 0, the file has exactly one record for every city or junction, so it must correspond to them somehow. Otherwise, column 3 (except for index 0) is always the unusual value 32,766, near the positive maximum for a 16-bit integer variable, and columns 4 and 5 are always zero (note from the future: this is not always true for routes which are added in the editor). Additionally, columns 1 and 2 have an odd statistical distribution, where the first is always between 4499 and 8915 and the second between 11839 and 21522. There are no negative values anywhere in the file, and the first three columns are never zero (other than index 0). Whatever they are, they are certainly not random. The meaning of the values in this array managed to evade my understanding for a while, so in the meantime I turned to figuring out how Roadsearch does its routing.
The theorists reading this have already internalized this part as an exercise in graph traversal, specifically finding the shortest path. Abstractly, the cities and junctions can be considered the nodes or vertices of a graph. These nodes are enumerated by name in CITIES, with additional, currently opaque, metadata in N%(). The nodes are then connected with edges, which are weighted by their mile length. These edges are the highway alignments listed with termini and length in B%(), and their names in ROADS. The program appears to universally treat all edges as bi-directional, going both to and from a destination, which also makes the graph generally undirected. Because of the way the database (and indeed the American highway network itself) is constructed, every node is eventually reachable from any other node.

For a first analysis I presented Roadsearch with a drive I know well, having done it as part of longer trips many times in both directions: Bishop, California (city number 31) to Pendleton, Oregon (city number 338). This run can be traveled on a single highway number, namely US Highway 395, and it is also the shortest route. I have not indicated any roads that should be removed, so it will use everything in its database. The answer came back in five seconds. No, warp mode was not on. Take note of the progress display showing miles traveled: this is the first milepoint that appears. If we compare the edges (highway alignments) directly leaving from Bishop and Pendleton, an edge with a weight (length) of 198 miles only shows up leaving Pendleton, suggesting the program works the routing backwards.

The classic approach to this problem is Edsger Dijkstra's algorithm, conceived in 1956 and published in 1959, an independent reimplementation of Vojtěch Jarník's 1930 minimum spanning tree algorithm that was also separately rediscovered and published by Robert C. Prim in 1957. In broad strokes, the algorithm works by building a tree out of the available nodes, putting them all into a queue. It keeps an array of costs for each node, initially set to an "infinite" value for all nodes except the first node to be examined, traditionally the start point. It then repeatedly iterates over the queue: of all current nodes in the queue, the lowest cost node is pulled out (so, first out, this will be the start point) and its edges are examined, selecting and marking any edges where the sum of the current node's cost and the cost of the edge (the mile length) is less than the current cost of the node it connects to (which, first out, will be "infinite"). The nodes connected by these marked edges take the current node as their parent, constructing the tree, and store the new lower cost value. Once the queue is empty, the algorithm halts and the tree is walked backwards from the target to the start point, accumulating the optimal route using the parent pointers.

This Perl script accepts two city numbers and will generate the optimal route between them using the Roadsearch database with Dijkstra's algorithm. It expects cities, roads (both converted to text, not the raw RELative files) and out-B% to be in the same directory. We build arrays of @cities and @roads, and turn B%() into @edges and @a (for arcs) for expedience. We then walk down the nodes and build the tree in @s, noting which arc/edge was used in @prou so that we can look it up later, and then at the end walk the parent pointers back and do all the dereferencing to generate a human-readable route. As a check we also dump @s, which indexes @edges.
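The route-dij script itself isn't reproduced in this copy of the article, but a minimal sketch of the same approach (my reconstruction, reusing the @a adjacency shape from the A-star listing below, not the author's code) looks like this:

```perl
#!/usr/bin/perl
# Minimal Dijkstra over the Roadsearch database: @$adj holds, per city,
# entries of [ road, other_city, miles, edge_index ] built from out-B%.
use strict;
use warnings;

sub dijkstra {
    my ($adj, $ncities, $start, $goal) = @_;
    my @dist = (99999) x ($ncities + 1);   # the "infinite" starting cost
    my (@parent, @via);                    # tree pointers, edges taken
    my %queue = map { $_ => 1 } (1 .. $ncities);
    $dist[$start] = 0;

    while (%queue) {
        # pull the cheapest queued node; this linear scan on every pass
        # is exactly the step that would crawl on a ~1MHz 6502
        my ($cur) = sort { $dist[$a] <=> $dist[$b] } keys(%queue);
        delete $queue{$cur};
        foreach my $e (@{ $adj->[$cur] || [] }) {
            my (undef, $other, $miles, $ei) = @$e;
            if ($dist[$cur] + $miles < $dist[$other]) {
                $dist[$other]   = $dist[$cur] + $miles;
                $parent[$other] = $cur;
                $via[$other]    = $ei;
            }
        }
    }
    # walk the parent pointers back from the goal to recover the route
    my @route;
    for (my $c = $goal; defined($parent[$c]); $c = $parent[$c]) {
        unshift(@route, $via[$c]);
    }
    return ($dist[$goal], \@route);
}
```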
Here's what it computes for Bishop to Pendleton, forward and reverse: the same answer both ways. Dijkstra's algorithm further gives us a clue about one of the columns in N%() — the one which is invariably 32766. We already know from the plaintext portion of the compiled program that there are no other arrays, so the routing must be built in-place in the arrays that are present. We need a starting "infinite" value for each node within the range of a 16-bit signed integer, and no optimal path in continental North America would ever exceed that mileage, so this is the "infinite" value used to build the route into N%(). (It gets reset by reloading MATS if you select the option to start over after route generation.)

That said, it's almost certain the program is not using an implementation of Dijkstra's algorithm. The queue starts out with all (in this case 487) nodes in it, making finding the lowest cost node in the queue on each iteration quite expensive on a little ~1.02MHz 6502. The usual solution in later implementations is a min-priority queue to speed up the search, but we already know there are no other arrays in memory to construct a faster heap or tree, so any efficiency gained would likely be unimpressive. Furthermore, it's not clear the author would have known about this approach (the earliest work on it dates to the late 1970s, such as Johnson [1977]), and even if he did, it still doesn't explain the meaning of the first two columns in N%(), which aren't even used by this algorithm. After all, they're not there for nothing.

An alternative appeared in the mid-1960s with Doran and Michie's Graph Traverser, which used an evaluation function E(x) specific to the problem being examined to walk from a source to a target node, terminating when the target was reached. In the process it created a set of partial trees from the source to the target, from which the lowest cumulative value of E(x) would ultimately be selected, starting with individual nodes where E(x) to some next immediate destination was locally smaller. Additionally, since it was no longer required to construct a complete tree touching every possible node, the initial set of nodes considered could simply be the starting point. How the evaluation function was to be implemented was not specified, but a simple straight-line distance to the goal node would have been one logical means. Bertram Raphael, working with Peter Hart and Nils Nilsson at SRI, instead suggested that the function to be minimized be the sum of the distance travelled so far and the evaluation function, which was reworked as a heuristic. The heuristic thus provided a hint to the algorithm, ideally attracting it to the solution sooner by considering a smaller set of nodes. If the heuristic function was properly admissible — what Hart defined as never overstating the actual cost to get to the target — then this new algorithm could be proven to always find the lowest-cost path and to greatly reduce comparisons if the solution converged quickly. Considering Graph Traverser as algorithm "A," this more informed, goal-directed algorithm was dubbed "A*" (say it "A-star"). If Roadsearch really is using A-star, at this point we don't know what the heuristic function is. (Note from the future: stay tuned.) Fortunately, a heuristic function that returns zero for all values is absolutely admissible, because it will never overstate the actual cost.
Although as a degenerate case doing so effectively becomes Dijkstra's algorithm, we'll still observe the runtime benefit of not having an initial queue potentially containing all possible nodes, and of being able to terminate early when the target is reached (which is always possible with Roadsearch's database). To make the number of nodes more manageable for study, since we will also want to observe how the program behaves for comparison, we'll now consider a smaller routing problem that will be easier to reason about.

```perl
#!/usr/bin/perl

die("usage: $0 point_no_1 point_no_2\n") if (scalar(@ARGV) != 2);
$p1 = $ARGV[0];
$p2 = $ARGV[1];
die("wherever you go there you are\n") if ($p1 == $p2);

open(K, "cities") || open(K, "cities.txt") || die("cities: $!\n");
@cities = ( undef );
while(<K>) {
	$ln++;
	if ($ln==$p1) { $v1=$p1; print; }
	if ($ln==$p2) { $v2=$p2; print; }
	chomp;
	push(@cities, $_);

	# default gscore and fscore
	$gscore[$ln] = 99999;
	$fscore[$ln] = 99999;
}
die("both cities must be valid\n") if (!$v1 || !$v2);
close(K);

open(R, "roads") || open(R, "roads.txt") || die("roads: $!\n");
open(B, "out-B%") || die("out-B%: $!\n");
@roads = ( undef );
while(<R>) { chomp; push(@roads, $_); }
close(R);

$ee = 0;
while(<B>) {
	chomp;
	($rn, $c1, $c2, $d) = split(/\t/, $_);
	$rn += 0; $c1 += 0; $c2 += 0; $d += 0;
	next if (!$d || !$c1 || !$c2 || !$rn);
	push(@edges, [ $rn, $c1, $c2, $d ]);
	push(@{ $a[$c1] }, [ $rn, $c2, $d, $ee ]);
	push(@{ $a[$c2] }, [ $rn, $c1, $d, $ee++ ]);
}
close(B);

@camefrom = ();
@openset = ( $v1 );
$gscore[$v1] = 0;
$fscore[$v1] = 0; # heuristic of distance is 0 for the start

while(scalar(@openset)) {
	@openset = sort { $fscore[$a] <=> $fscore[$b] } @openset;
	print join(", ", @openset), "\n";
	$current = shift(@openset);
	last if ($current == $v2);

	foreach $n (@{ $a[$current] }) {
		$ni = $n->[1];
		$tgscore = $gscore[$current] + $n->[2];
		if ($tgscore < $gscore[$ni]) {
			$camefrom[$ni] = $current;
			$routefrom[$ni] = $n->[3];
			$gscore[$ni] = $tgscore;
			$fscore[$ni] = $tgscore + 0; # "heuristic"
			unless (scalar(grep { $_ == $ni } @openset)) {
				push(@openset, $ni);
			}
		}
	}
}

@s = ( );
while(defined($camefrom[$current])) {
	$route = $routefrom[$current];
	$current = $camefrom[$current];
	unshift(@s, $route);
}
print join(' - ', @s), "\n";

$miles = 0;
foreach(@s) {
	print $roads[$edges[$_]->[0]], "($edges[$_]->[0]) - ",
		$cities[$edges[$_]->[2]], "($edges[$_]->[2]) - ",
		$cities[$edges[$_]->[1]], "($edges[$_]->[1]) ",
		$edges[$_]->[3], " miles\n";
	$miles += $edges[$_]->[3];
}
print "total $miles miles\n";
```

This A-star implementation tracks two scores for each node, f(x) (realized here as @fscore) and g(x) (@gscore). The G-score for a given node is the currently known cost of the cheapest path from the start to that node, which we build from the mile length of each edge. The node's F-score is its G-score plus the value of the heuristic function for that node, representing our best guess as to how cheap the overall path could be if the path from start to finish goes through it. In this case the F-score and G-score will be identical, because the heuristic function in this implementation always equals zero. Also, because we're interested in knowing how many fewer nodes we've considered, we dump the open set on every iteration.
```
% route-dij 375 311
NEEDLES CA
SAN BERNARDINO CA
237 - 236 - 63 - 719
I 15 - SAN BERNARDINO CA - US395/I15 CA 27 miles
I 15 - US395/I15 CA - BARSTOW CA 44 miles
I 40 - BARSTOW CA - I40/US95 CA 134 miles
I 40 - I40/US95 CA - NEEDLES CA 12 miles
total 217 miles

% route-astar 375 311
NEEDLES CA
SAN BERNARDINO CA
375
443, 335, 274, 376
335, 274, 442, 18, 376
274, 442, 18, 376, 34
442, 18, 376, 171, 380, 34
18, 376, 171, 380, 15, 34, 31
376, 171, 380, 15, 34, 169, 262, 31
424, 171, 380, 15, 34, 169, 262, 31, 487
171, 380, 15, 34, 169, 262, 31, 487
380, 15, 34, 169, 275, 262, 31, 487
15, 34, 169, 275, 262, 31, 379, 487
34, 45, 169, 275, 262, 31, 379, 487
45, 169, 275, 262, 31, 379, 311, 487, 342
169, 275, 262, 31, 379, 311, 119, 487, 342
275, 311, 262, 31, 379, 119, 487, 342
311, 262, 31, 379, 336, 119, 487, 170, 342
237 - 236 - 63 - 719
I 15 (31) - SAN BERNARDINO CA (375) - US395/I15 CA (443) 27 miles
I 15 (31) - US395/I15 CA (443) - BARSTOW CA (18) 44 miles
I 40 (60) - BARSTOW CA (18) - I40/US95 CA (169) 134 miles
I 40 (60) - I40/US95 CA (169) - NEEDLES CA (311) 12 miles
total 217 miles

% route-astar 311 375
NEEDLES CA
SAN BERNARDINO CA
311
169, 251, 34
251, 34, 262, 18
168, 34, 262, 18
34, 262, 18, 475, 110
262, 18, 475, 110, 335, 342
454, 18, 475, 110, 335, 342, 427
18, 475, 110, 335, 342, 53, 427, 446
442, 443, 475, 110, 335, 342, 53, 427, 446
443, 475, 110, 335, 342, 15, 53, 427, 31, 446
475, 110, 375, 335, 342, 15, 53, 427, 31, 446
110, 375, 335, 342, 15, 53, 427, 31, 446
375, 335, 342, 15, 53, 427, 31, 446, 124, 247
719 - 63 - 236 - 237
I 40 (60) - I40/US95 CA (169) - NEEDLES CA (311) 12 miles
I 40 (60) - BARSTOW CA (18) - I40/US95 CA (169) 134 miles
I 15 (31) - US395/I15 CA (443) - BARSTOW CA (18) 44 miles
I 15 (31) - SAN BERNARDINO CA (375) - US395/I15 CA (443) 27 miles
total 217 miles
```

To see how the real program populates N%(), we'll instruct VICE to trap each integer array write to the range covered by MATS. (Trapping reads would also be handy, but we'd go mad with the amount of data that generates.) We can get the value being written from $64/$65 (the program uses BASIC's floating point accumulator #1 as temporary space), based on the store code in the RTL. On each store we'll duly log the value in $64/$65 (remember it's big-endian) and the address it's being stored to. I wrote a one-off script to turn this stream of writes into array offsets, so we can understand how they relate to N%(), and then look them up in the tables so that we know which node and edge is under consideration (a sketch of the idea follows below).

Remember, Roadsearch works this problem backwards, starting with Needles. From the above you can see where the program marks the order of each node and the accumulated mileage. Our running totals, most likely the F-score and G-score, for a given node x are in N%(x,1) and N%(x,2), the length of the candidate edge is in N%(x,1), the optimal edge for the node is in N%(x,4), and the iteration it was marked in is recorded in N%(x,3). (We count an iteration as any loop in which a candidate node is marked, which in this simple example will occur on every run through the open set.) N%(x,0) also looks like a distance, but it doesn't correlate to a highway distance. To construct the itinerary at the end, it starts with San Bernardino and then repeatedly walks the selected edge to the next node until it reaches Needles.
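The one-off script isn't shown in this copy either; its core is just address arithmetic, something like this sketch (mine; the base address is illustrative, not N%()'s real location):

```perl
#!/usr/bin/perl
# Translate a trapped store address inside an integer array's data back
# into (row, column), given that the first index runs out fastest.
use strict;
use warnings;

my $base  = 0x6bc0;   # hypothetical address of element (0,0) of N%()
my $nrows = 758;      # 757 entries plus the zeroth element
my $addr  = hex(shift(@ARGV) || die("usage: $0 hex_address\n"));

my $idx = ($addr - $base) >> 1;              # two bytes per element
printf("N%%(%d,%d)\n", $idx % $nrows, int($idx / $nrows));
```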
It's painfully obvious that, compared to our models, Roadsearch is considering a much smaller number of nodes (18, 34, 169, 251, 262, 311, 375, 442 and 443), counting the five in the optimal solution, and it duly converges on the optimal routing in just four iterations compared to twelve in our best case. I did look at its read pattern and found that N%(x,0) and N%(x,1) lit up a lot. These values are clearly important to computing whatever heuristic it's using, so I pulled out a few from across North America. I stared at them for a while until it dawned on me what the numbers are. Do you see what I saw? Here, let me plot these locations on a Mercator projection for you (using the United States' territorial boundaries): they're coordinates. The first column increases going north, and the second column increases going west. Indeed, in the very paper written by Hart et al., they suggest that the straight-line distance from the current node to the goal would make a dandy heuristic, and now we can compute it!

The thing we have to watch for here is that the scales of the mileage and the coordinates are not identical, and if we use a simple distance calculation we'll end up clobbering the cumulative mileage with it — which would not only make it non-admissible as a heuristic, but also give us the wrong answer. To avoid that, we'll compute a fudge factor to yield miles from "map units" and keep the heuristic function at the same scale. Let's take San Diego, California to Bangor, Maine as a reference standard, for which computing the geodesic distance using WGS 84 yields 2,696.83 miles from city centre to city centre as the crow flies. If we compute the straight-line distance between them using the coordinates above, we get 8703.63 "map units," which is a ratio of 3.23:1 (a sketch of the resulting h(x) appears at the end of this section).

We now have an h(x) that, for a given node x, returns its straight-line distance from the target node, scaled down to miles. Let's try it out. Remembering that Roadsearch works it backwards, we converge on the same solution, examining the same nodes in the same number of iterations (four). Further proof comes from dumping the program's array writes for the opposite direction: that routing proceeds in six iterations, just as ours does, and once again the nodes we end up considering in our new A-star model are the same. It also explains N%(x,0) — this is the straight-line distance and thus our heuristic, calculated (it's backwards) as the distance to the "start." For example, Palm Springs is indeed roughly 135 miles from Needles as the crow flies, again depending on your exact termini, whereas the western US 95/Interstate 40 junction is only about 11 miles away.

It should also be obvious that this "fudge" divisor has a direct effect on the efficiency of the routine. While we're purportedly using it as a means to scale down the heuristic, doing so is actually just a backhanded way of deciding how strongly we want the heuristic weighted. However, we can't really appreciate its magnitude in a problem space this small, so now we'll throw it a big one: drive from San Diego (376) to Bangor (17). (I did myself drive from Bangor to San Diego in 2006, but via Georgia to visit relatives.) This route requires a lot more computation and will also generate multiple useless cycles in which no node is sufficiently profitable, so I added code to our heuristic A-star router to explicitly count iterations only when a candidate node is marked. With our computed initial fudge factor of 3.23, we get this (again, worked backwards). And now for the real thing: it was quick.
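Before comparing the two runs in detail, here is the promised sketch of the scaled heuristic (mine; @north and @west stand in for per-city values pulled from N%(x,0) and N%(x,1)):

```perl
#!/usr/bin/perl
# Straight-line heuristic in N%() "map units", divided by the fudge
# factor calibrated above so it stays on the same scale as the mileage.
use strict;
use warnings;

our (@north, @west);       # per-city coordinates loaded from N%()
my $FUDGE = 3.23;          # map units per mile (San Diego-Bangor)

sub h {
    my ($x, $goal) = @_;
    my $dn = $north[$x] - $north[$goal];
    my $dw = $west[$x]  - $west[$goal];
    return sqrt($dn * $dn + $dw * $dw) / $FUDGE;
}

# in the A-star loop this replaces the "+ 0" placeholder:
#   $fscore[$ni] = $tgscore + h($ni, $v2);
```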
Both our simulation and the real program agree on the optimal route, as demonstrated by the cumulative mileage. We might also expect to match the iteration counts recorded in N%(x,3). Interestingly, we do not: our simulation converged on the optimal route in 328 iterations, but the Commodore got it in 307. However, if we tune our simulation's fudge factor to 3.03, we also get it in 307.

Is there an optimal fudge divisor? We know the optimal route, so we'll run a whole bunch of simulations over the entire interval, rejecting ones that get the wrong answer and noting the iterations required by the right ones. In fact, we can do this generally for any routing by using the longhand Dijkstra method to get the correct answer and then running a whole bunch of tweaked A-stars against its routing. In our simulation, stepping the fudge divisor over the interval from 0 inclusive to 4 by 0.001 increments, I ran six long haul drives in total: San Diego (376) to Bangor (17) and back, Bellingham, WA (24) to Miami, FL (291) and back, and then San Francisco, CA (377) to Washington, DC (465) and back. I then plotted them all out as curves, along with the h(x)=0 heuristic from above, which is always accurate and the least goal-directed (and consequently requires the most iterations).

First off, there is no single fudge divisor that always yields the optimal route for all of our test cases. Notice how much of the graph yields absolutely wrong answers (so nothing is plotted), and even in the interval between around 2.2 and 2.8 or so not all of the curves are valid. They all become valid around 3, which I enlarged in the inset on the left, and each one's iteration count slowly rises after that, with the zero heuristic as more or less the upper asymptote. Except for the purple curve, however, 3 is not generally their lower bound. Second, there is no single fudge divisor that always corresponds to the program's iteration count, which is 311 (17 to 376), 307 (376 to 17), 183 (291 to 24), 264 (24 to 291), 261 (377 to 465) and 220 (465 to 377). With the value of 3.03 from above, however, our simulation generally does better than the real thing, which is both gratifying for the purposes of pathfinding and frustrating for the purposes of modeling.

Roads can also be removed from consideration, which the program appears to implement by giving the affected edge an enormous mile length (B%(x,3)). That ensures it will always have an insuperably high cost, yet be unlikely to overflow if it's involved in any calculations, and thus it will never be pulled from the open set for evaluation. The in-memory database is always reloaded from disk after the route is disposed of.

As our final stop in this article, nearly as long as some of the drives we've analysed, we'll do my first 2005 long haul expedition along US Highway 395. This is a useful way to show how to add your own roads and what the program does with that, since not all of its segments are in the database.

To restore a tampered disk image first, you can reset the counts in M, which will then cause the program to ignore any extraneous entries. Style points are added if you also clear the records in the RELative files and turn them into nulls, though this isn't required. For the 1985 release, set M to 20 34 38 37 20 0D 20 37 37 35 20 0D 20 32 31 35 20 0D 20 39 39 39 39 20 0D, which is (in PETSCII) 487 cities, 775 road segments, 215 road names and serial number 9999, or as you like. (Note this will not change the serial number in the editor, but with this patch you're only using the main program in any case.) However, if you try to use a disk modified this way to add routes or cities, it will correctly recognize the new ones as new, but erroneously find their old links in the arrays.
When the program is finished with a node, it appears to write a large sentinel value into its cost entry (B%(x,3)). That assures it will always have an insuperably high cost, yet be unlikely to overflow if it's involved in any calculations, and thus it will never be pulled from the open set for evaluation. The in-memory database is always reloaded from disk after the route is disposed of.

As our final stop in this article, nearly as long as some of the drives we've analysed, we'll do my first 2005 long haul expedition along US Highway 395. This is a useful way to show how to add your own roads and what the program does with that, since not all of its segments are in the database.

If you want to start from a cleaner database, you can reset the counts stored in the file M, which will then cause the program to ignore any extraneous ones. Style points are added if you clear the records in the RELative files and turn them into nulls, though this isn't required. For the 1985 release, set M to 20 34 38 37 20 0D 20 37 37 35 20 0D 20 32 31 35 20 0D 20 39 39 39 39 20 0D, which is (in PETSCII) 487 cities, 775 road segments, 215 road names and serial number 9999, or as you like. (Note this will not change the serial number in the editor, but with this patch you're only using the main program in any case.) However, if you try to use a disk modified this way to add routes or cities, it will correctly recognize the new ones as new but erroneously find their old links in the arrays. That will not only get you unexpected residual road links to any new cities, but the links' presence can also confuse the program to such an extent that it will end up in an infinite loop.

In this case you'll need to delete these hanging links by patching MATS to null out the entries in B%() and N%() referring to city numbers greater than 487 and road numbers greater than 215 (remember that there are three arrays in that file and that multi-dimensional arrays run out each dimension before going to the next, meaning you can't just truncate it and stuff nulls on the end; see the sketch below). This is simple to understand in concept but a little tedious to do, so I have left it as an exercise for the reader. Should I come up with a clean MATS myself, I will put it up somewhere if there is interest; for now we'll do our work on a well-loved copy, which means that the city numbers you see here may not necessarily be the ones you get if you're following along at home.
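To make the layout pitfall concrete, here's a tiny illustration (my own sketch; the dimension is a placeholder, not the program's actual DIM), showing why an added city's entries are scattered through the file rather than sitting at the end:

```python
# Illustrative only: why truncating MATS can't work. Commodore BASIC
# stores A%(city, col) with the first subscript varying fastest, so the
# flat offset of an element is city + col * (MAX_CITY + 1): all of
# column 0 comes first, then all of column 1, and so on.
MAX_CITY = 511   # hypothetical DIM size, not the program's real one

def flat_index(city, col):
    """Flat element offset of A%(city, col) in BASIC storage order."""
    return city + col * (MAX_CITY + 1)

# The entries for an added city (say, number 488) land once per column,
# scattered through the array; nulling it out means patching each spot:
print([flat_index(488, col) for col in range(6)])
# -> [488, 1000, 1512, 2024, 2536, 3048]
```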
From there it proceeded north to the Columbia River gorge and intersected US 730 further west, then went east with it as before to US 12. This intermediate state is shown on this 1976 Donnelley atlas page, with the relevant portion highlighted. It was not until around 1986-7, when Interstate 82 was completed, that US 395 was moved to I-82 as its present-day routing, leaving only a tiny residual overlap with US 730 east of Umatilla, Oregon. The problem is that none of these locations have waypoints we can easily connect to.

Notably, the program doesn't write M or MATS back to disk until this point, so that if the process gets aborted in the middle, the database remains internally consistent except for some harmless unreferenced RELative file records that can be overwritten later.

Recall that the program uses the coordinates in N%() as its heuristic, but we were never asked for anything like latitude or longitude. How, then, can it get this information for the new cities we just created? At this point I dumped the RELative files and the new MATS to find out, and it so happens that the new city gets the same coordinates as the one it was connected to: both Wallula Junction and Walla Walla get coordinates 8062 and 20802, which are Walla Walla's coordinates. As we connect a couple more new cities together along US 12, they also get the same coordinates, up to Yakima, which retains its own. If we had extended from both sides into the middle, it would probably have made the database slightly more accurate at the cost of some inconvenient dancing around. This would be a good idea if you needed to reconstruct or chop up a particularly long segment, for example.

I had also made an earlier call-forward that cities added from the editor sometimes don't have columns 4 or 5 set to zero. I can see this for the entries the previous owner entered, but all of mine ended up zero, so I'm not sure under what conditions that occurred. It doesn't seem to matter to the program in any case.

The fact that the .d64 disk images of this program in circulation all show new routes had been added suggests they got non-trivial usage by their owners, which really speaks to the strengths of the program and what it could accomplish. While it's hardly a sexy interface, it's functional and straightforward, and clearly a significant amount of time was spent getting the map data in and tuning up the routing algorithm so that it could perform decently on an entry-level home computer. Unfortunately, I suspect it didn't sell a lot of copies: there are few advertisements for Columbia Software in any magazine of the time, and no evidence that the Commodore version sold any better than the Apple II release did.

That's a shame, because I think its technical achievements merited a better market reception then, and it certainly deserves historical recognition today. We take things like Google Maps almost for granted, but a program that could route a drive across a highway map on your 1985 Apple or Commodore would truly have been fascinating. As for me, I look forward to another long haul road trip in the future, maybe US 95 or US 101, or possibly reacquainting myself with US 6, which I drove in 2006 from Bishop to Cape Cod, Massachusetts. Regardless of where we end up going, though, this time we won't be taking the SX-64 with us: my wife has insisted on getting her seat back.

4 hours ago 2 votes
Meet the teens behind RedSnapper: a smart Arduino-powered prosthetic arm

An affordable, AI-assisted, wearable robotic arm? That’s not science fiction – it’s RedSnapper, an open-source prosthesis built by PAC Tech, a team of high school students from Istituto Maria Immacolata in Gorgonzola, Italy. Powered entirely by Arduino boards, their project won the national “Robot Arm Makers” title at the 2025 RomeCup – and we think […]

3 days ago 6 votes
Vote for the August 2025 + Post Topic

Let's get this party started.

4 days ago 9 votes
Power Switch and Battery Replacement of an SR620 Universal Time Interval Counter

Introduction

A little over a year ago, I found a Stanford Research Systems SR620 universal time interval counter at the Silicon Valley Electronics Flea Market. It had a big sticker “Passes Self-Test” and “Tested 3/9/24” (the day before the flea market) on it, so I took the gamble and spent an ungodly $400 on it.1 Luckily, it did work fine, initially at least, but I soon discovered that it sometimes got into some weird behavior after pressing the power-on switch.

The SR620

The SR620 was designed sometime in the mid-1980s. Mine has a rev C PCB with a date of July 1988, 37 years old! The manual lists 1989, 2006, 2019 and 2025 revisions. I don’t know if there were any major changes along the way, but I doubt it. It’s still for sale on the SRS website, starting at $5150. The specifications are still pretty decent, especially for a hobbyist:

- 25 ps single-shot time resolution
- 1.3 GHz frequency range
- 11-digit resolution over a 1 s measurement interval

The SR620 is not perfect: one notable issue is its thermal design. It simply doesn’t have enough ventilation holes, the heat-generating power regulators are located close to the high precision time-to-analog converters, and the temperature sensor for the fan is inexplicably placed right next to the fan, which is not close at all to the power regulators. The Signal Path has an SR620 repair video that talks about this.

Repairing the SR620

You can see the power-on behavior in the video below. Of note is that lightly touching the power button changes the behavior and sometimes makes it get all the way through the power-on sequence. This made me hopeful that the switch itself was bad, something that should be easy to fix. Unlike my still broken SRS DG535, another flea market buy with the most cursed assembly, the SR620 is a dream to work on: 4 side screws are all it takes to remove the top of the case and have access to all the components from the top. Another 4 screws to remove the bottom panel and you have access to the solder side of the PCB. You can desolder components without lifting the PCB out of the enclosure.

As on my HP 5370A, the power switch of the SR620 selects between power-on and standby mode; the SR620 enables the 15V rail at all times to keep a local TCXO or OCXO warmed up. The power switch is located at the right of the front panel. It has 2 black and 2 red wires. When the unit is powered on, the 2 black wires and the 2 red wires are connected to each other. To make sure that the switch itself was the problem, I soldered the wires together to create a permanent connection. After this, the SR620 worked totally fine! Let’s replace the switch.

Unscrew 4 more screws and pull the knobs off the 3 front potentiometers and power switch to get rid of the front panel. A handful of additional screws to remove the front PCB from the chassis, and you have access to the switch. The switch is an ITT Schadow NE15 T70. Unsurprisingly, these are not produced anymore, but you can still find them on eBay. I paid $7.50 plus shipping; the price increased to $9.50 immediately after that. According to this EEVblog forum post, this switch on Digikey is a suitable replacement, but I didn’t try it. The old switch (bottom) has 6 contact points versus only 4 on the new one (top), but that wasn’t an issue since only 4 were used.
Both switches also have a metal screw plate, but they were oriented differently. However, you can easily reconfigure the screw plate by straightening 4 metal prongs. If you buy the new switch from Digikey and it doesn’t come with the metal screw plate, you should be able to transplant the plate from the broken switch to the new one just the same. To get the switch through the narrow hole of the case, you need to cut off the pins on one side of the switch and bend the contact points a bit. After soldering the wires back in place, the SR620 powered on reliably. Switch replacement completed!

Replacing the Backup 3V Lithium Battery

The SR620 has a simple microcontroller system consisting of a Z8800 CPU, 64 KB of EPROM and 32 KB of SRAM. In addition to program data, the SRAM also contains calibration data and settings, kept alive by a 3V lithium backup battery. I previously replaced one such battery in my HP 3478A multimeter. These batteries last almost forever, but mine had a 1987 date code, and 38 years is really pushing things, so I replaced it with this new one from Digikey. The 1987 version of this battery had 1 pin on each side; on the new ones, the + side has 2 pins, so you need to cut one of those pins off and install the battery slightly crooked back onto the PCB.

When you first power up the SR620 after replacing the battery, you might see “Test Error 3” on the display. According to the manual: Test error 3 is usually “self-healing”. The instrument settings will be returned to their default values and factory calibration data will be recalled from ROM. Test Error 3 will recur if the Lithium battery or RAM is defective.

After power cycling the device again, the test error was gone and everything worked, but with a precision that was slightly lower than before: before the battery replacement, when feeding the 10 MHz output reference clock into channel A and measuring frequency with a 1 s gate time, I’d get a read-out of 10,000,000.000N Hz. In other words: around milli-Hz accuracy. After the replacement, the accuracy was about an order of magnitude worse. That’s just not acceptable! The reason for this loss in accuracy is that the auto-calibration parameters were lost. Luckily, this is easy to fix.

Switching to an External Reference Clock

My SR620 has the cheaper TCXO option, which gives frequency measurement results that are about one order of magnitude less accurate than using an external OCXO-based reference clock, so I always switch to an external reference clock. The SR620 doesn’t do that automatically; you need to manually change it in the settings, as follows:

SET -> “ctrl cal out scn”
SEL -> “ctrl cal out scn”
SET -> “auto cal”
SET -> “cloc source int”
Scale Down arrow -> “cloc source rear”
SET -> “cloc Fr 10000000”
SET

If you have a 5 MHz reference clock, use the down or up arrow to switch between 10000000 and 5000000.

Running Auto-Calibration

You can rerun auto-calibration manually from the front panel, without opening up the device, with this sequence:

SET -> “ctrl cal out scn”
SEL -> “ctrl cal out scn”
SET -> “auto cal”
START

The auto-calibration takes around 2 minutes. Only run it once the device has been running for a while, to make sure all components have warmed up and are at a stable temperature. The manual recommends a 30 minute warmup time. After doing auto-calibration, feeding the reference clock back into channel A and measuring frequency with a 1 s gate time gave me a result that oscillated around 10 MHz, with the mHz digits always 000 or 999.2 It’s possible to fine-tune the SR620 beyond the auto-calibration settings.
One reason why one might want to do this is to correct for drift of the internal oscillator. To enable this kind of tuning, you need to move a jumper inside the case. The time-nuts email list has a couple of discussions about this; here is one such post. Page 69 of the SR620 manual has detailed calibration instructions.

Oscilloscope Display Mode

When the 16 7-segment LEDs on the front panel are just not enough, the SR620 has this interesting way of (ab)using an oscilloscope as a general display: it uses XY mode to paint the data. I had tried this mode in the past with my Siglent digital oscilloscope, but the result was unreadable: for this kind of rendering, having a CRT beam that lights up all the phosphor from one point to the next is a feature, not a bug. This time, I tried it with an old-school analog oscilloscope.3 The result is much better on the analog scope, but still very hard to read. When you really need all the data you can get from the SR620, just use the GPIB or RS232 interface.

References

- The Signal Path - TNP #41 - Stanford Research SR620 Universal Time Interval Counter Teardown, Repair & Experiments
- Some calibration info about the SR620: Fast High Precision Set-up of SR 620 Counter. The rest of that page has a bunch of other interesting SR620-related comments.
- Time-Nuts topics: the SR620 is mentioned in tons of threads on the time-nuts email list. Here are just a few interesting posts. This post talks about some thermal design mistakes in the SR620, e.g. the linear regulators and heat sink being placed right next to the TCXO; it also talks about the location of the thermistor inside the fan path, resulting in unstable behavior, which is something Shahriar of The Signal Path fixed by moving the thermistor. This comment mentions that while the TCXO stays powered on in standby, the DAC that sets the control voltage does not, which results in an additional settling time after powering up; the general recommendation is to use an external 10 MHz clock reference. This comment talks about the warm-up time needed depending on the desired accuracy, and also has some graphs.

Footnotes

1. This time, the gamble paid off, and the going rate of a good second-hand SR620 is quite a bit higher. But I don’t think I’ll ever do this again!
2. In other words, when fed with the same 10 MHz as the reference clock, the display always shows a number that is either 10,000,000.000x or 9,999,999.999x.
3. I find it amazing that this scope was calibrated as recently as April 2023.

5 days ago 12 votes
Comics from March 1978 Issue of Creative Computing

Old Timey fun

6 days ago 15 votes