Contents: Introduction · Reserving a fixed IP address for the logic analyzer · Assign the chosen IP address to the logic analyzer · Allow the logic analyzer to access your Ubuntu X server · Configure the X-window settings on the logic analyzer · Install and declare the HP logic analyzer font files · Connect your logic analyzer to your Linux PC · Troubleshooting

Introduction

Earlier this year, I bought a pristine HP 1670G logic analyzer at the Silicon Valley Electronics Flea Market. Produced sometime in early 2000, it's new enough to have an Agilent sticker. Unlike most other HP logic analyzers, a 1670G doesn't have expansion cards: what you see is what you get. But 136 channels that can be sampled at 500 MHz is plenty, and my machine has the 2-channel oscilloscope option; a bandwidth of 500 MHz and a 2 GSa/s sampling rate are very nice specs. The 1670G is one of the last HP models that runs on a custom operating system (later models switched to HP/UX), but it already has some advanced features, such as being to...
a year ago


More from Electronics etc…

Power Switch and Battery Replacement of an SR620 Universal Time Interval Counter

Contents: Introduction · The SR620 · Repairing the SR620 · Replacing the Backup 3V Lithium Battery · Switching to an External Reference Clock · Running Auto-Calibration · Oscilloscope Display Mode · References · Footnotes

Introduction

A little over a year ago, I found a Stanford Research Systems SR620 universal time interval counter at the Silicon Valley Electronics Flea Market. It had a big "Passes Self-Test" sticker and a "Tested 3/9/24" sticker (the day before the flea market) on it, so I took the gamble and spent an ungodly $400 [1] on it. Luckily, it did work fine, initially at least, but I soon discovered that it sometimes got into some weird behavior after pressing the power-on switch.

The SR620

The SR620 was designed sometime in the mid-1980s. Mine has a rev C PCB with a date of July 1988, 37 years old! The manual lists 1989, 2006, 2019 and 2025 revisions. I don't know if there were any major changes along the way, but I doubt it. It's still for sale on the SRS website, starting at $5150. The specifications are still pretty decent, especially for a hobbyist:

- 25 ps single-shot time resolution
- 1.3 GHz frequency range
- 11-digit resolution over a 1 s measurement interval

The SR620 is not perfect. One notable issue is its thermal design: it simply doesn't have enough ventilation holes, the heat-generating power regulators are located close to the high-precision time-to-analog converters, and the temperature sensor for the fan is inexplicably placed right next to the fan, which is not close at all to the power regulators. The Signal Path has an SR620 repair video that talks about this.

Repairing the SR620

You can see the power-on behavior in the video below. Of note is that lightly touching the power button changes the behavior and sometimes makes it get all the way through the power-on sequence. This made me hopeful that the switch itself was bad, something that should be easy to fix.

Unlike my still broken SRS DG535, another flea market buy with the most cursed assembly, the SR620 is a dream to work on: 4 side screws are all it takes to remove the top of the case and have access to all the components from the top. Another 4 screws to remove the bottom panel and you have access to the solder side of the PCB. You can desolder components without lifting the PCB out of the enclosure.

Like in my HP 5370A, the power switch of the SR620 selects between power-on and standby mode: the SR620 keeps the 15V rail enabled at all times to keep a local TCXO or OCXO warmed up. The power switch is located at the right of the front panel. It has 2 black and 2 red wires. When the unit is powered on, the 2 black wires and the 2 red wires are connected to each other. To make sure that the switch itself was the problem, I soldered the wires together to create a permanent connection. After this, the SR620 worked totally fine! Let's replace the switch.

Unscrew 4 more screws and pull the knobs off the 3 front potentiometers and the power switch to get rid of the front panel. A handful of additional screws to remove the front PCB from the chassis, and you have access to the switch.

The switch is an ITT Schadow NE15 T70. Unsurprisingly, these are not produced anymore, but you can still find them on eBay. I paid $7.50 + shipping; the price increased to $9.50 immediately after that. According to this EEVblog forum post, this switch on Digikey is a suitable replacement, but I didn't try it. The old switch (bottom) has 6 contact points vs only 4 on the new one (top), but that wasn't an issue since only 4 were used.
Both switches also have a metal screw plate, but they were oriented differently. However, you can easily reconfigure the screw plate by straightening 4 metal prongs. If you buy the new switch from Digikey and it doesn't come with the metal screw plate, you should be able to transplant the plate from the broken switch to the new one just the same. To get the switch through the narrow hole of the case, you need to cut off the pins on one side of the switch and bend the contact points a bit. After soldering the wires back in place, the SR620 powered on reliably. Switch replacement completed!

Replacing the Backup 3V Lithium Battery

The SR620 has a simple microcontroller system consisting of a Z8800 CPU, 64 KB of EPROM and 32 KB of SRAM. In addition to program data, the SRAM also contains calibration data and settings, kept alive by a backup 3V lithium battery; I replaced one such battery in my HP 3478A multimeter before. These batteries last almost forever, but mine had a 1987 date code, and 38 years is really pushing things, so I replaced it with a new one from Digikey. The 1987 version of this battery had 1 pin on each side; on the new ones, the + side has 2 pins, so you need to cut one of those pins off and install the battery slightly crooked back onto the PCB.

When you first power up the SR620 after replacing the battery, you might see "Test Error 3" on the display. According to the manual:

Test error 3 is usually "self-healing". The instrument settings will be returned to their default values and factory calibration data will be recalled from ROM. Test Error 3 will recur if the Lithium battery or RAM is defective.

After power cycling the device again, the test error was gone and everything worked, but with a precision that was slightly lower than before. Before the battery replacement, when feeding the 10 MHz output reference clock into channel A and measuring frequency with a 1 s gate time, I'd get a read-out of 10,000,000.000N Hz. In other words: around a milli-Hz accuracy. After the replacement, the accuracy was about an order of magnitude worse. That's just not acceptable! The reason for this loss in accuracy is that the auto-calibration parameters were lost. Luckily, this is easy to fix.
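For context, that milli-Hz read-out corresponds to a relative accuracy of

\[
\frac{\Delta f}{f} = \frac{10^{-3}\,\mathrm{Hz}}{10^{7}\,\mathrm{Hz}} = 10^{-10},
\]

so losing the auto-calibration data degraded the instrument from roughly \(10^{-10}\) to roughly \(10^{-9}\).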
Switching to an External Reference Clock

My SR620 has the cheaper TCXO option, which gives frequency measurement results that are about one order of magnitude less accurate than using an external OCXO-based reference clock, so I always switch to an external reference clock. The SR620 doesn't do that automatically; you need to manually change it in the settings, as follows:

- SET -> "ctrl cal out scn"
- SEL -> "ctrl cal out scn"
- SET -> "auto cal"
- SET -> "cloc source int"
- Scale Down arrow -> "cloc source rear"
- SET -> "cloc Fr 10000000"
- SET

If you have a 5 MHz reference clock, use the down or up arrow to switch between 10000000 and 5000000.

Running Auto-Calibration

You can rerun auto-calibration manually from the front panel, without opening up the device, with this sequence:

- SET -> "ctrl cal out scn"
- SEL -> "ctrl cal out scn"
- SET -> "auto cal"
- START

The auto-calibration takes around 2 minutes. Only run it once the device has been running for a while, to make sure all components have warmed up and are at a stable temperature. The manual recommends a 30 minute warm-up time. After doing auto-calibration, feeding the reference clock back into channel A and measuring frequency with a 1 s gate time gave me a result that oscillated around 10 MHz, with the mHz digits always 000 or 999. [2]

It's possible to fine-tune the SR620 beyond the auto-calibration settings. One reason why one might want to do this is to correct for drift of the internal oscillator. To enable this kind of tuning, you need to move a jumper inside the case. The time-nuts email list has a couple of discussions about this; here is one such post. Page 69 of the SR620 manual has detailed calibration instructions.

Oscilloscope Display Mode

When the 16 7-segment LEDs on the front panel are just not enough, the SR620 has an interesting way of (ab)using an oscilloscope as a general display: it uses XY mode to paint the data. I had tried this mode in the past with my Siglent digital oscilloscope, but the result was unreadable: for this kind of rendering, having a CRT beam that lights up all the phosphor from one point to the next is a feature, not a bug. This time, I tried it with an old-school analog oscilloscope. [3]

The result is much better on the analog scope, but still very hard to read. When you really need all the data you can get from the SR620, just use the GPIB or RS232 interface.

References

- The Signal Path - TNP #41 - Stanford Research SR620 Universal Time Interval Counter Teardown, Repair & Experiments
- Some calibration info about the SR620: Fast High Precision Set-up of SR 620 Counter. The rest of that page has a bunch of other interesting SR620-related comments.
- Time-Nuts topics: the SR620 is mentioned in tons of threads on the time-nuts email list. Here are just a few interesting posts:
  - This post talks about some thermal design mistakes in the SR620, e.g. the linear regulators and heat sink are placed right next to the TCXO. It also talks about the location of the thermistor inside the fan path, resulting in unstable behavior. This is something Shahriar of The Signal Path fixed by moving the thermistor.
  - This comment mentions that while the TCXO stays powered on in standby, the DAC that sets the control voltage does not, which results in an additional settling time after powering up. The general recommendation is to use an external 10 MHz clock reference.
  - This comment talks about the warm-up time needed depending on the desired accuracy. It also has some graphs.

Footnotes

1. This time, the gamble paid off, and the going rate of a good second-hand SR620 is quite a bit higher. But I don't think I'll ever do this again! ↩
2. In other words, when fed with the same 10 MHz as the reference clock, the display always shows a number that is either 10,000,000.000x or 9,999,999.99x. ↩
3. I find it amazing that this scope was calibrated as recently as April 2023. ↩

a week ago 18 votes
Repairing an HP 5370A Time Interval Counter

Contents: Introduction · Inside the HP 5370A · High Stability Reference Clock with an HP 10811-60111 OCXO · RIFA Capacitors in the Corcom F2058 Power Entry Module? · 15V Rail Issues · Power Supply Architecture · Fault Isolation - It's the Reference Frequency Buffer PCB! · The Reference Frequency Buffer Board · Fixing the Internal Reference Clock · Fixing the External Reference Clock · Future Work · Footnotes

Introduction

I bought an HP 5370A time interval counter at the Silicon Valley Electronics Flea Market for a cheap $40. The 5370A is a pretty popular device among time nuts: it has a precision of 20 ps for single-shot time interval measurements, amazing for a device that was released in 1978, and even compared to contemporary time interval counters that's still decent performance. The 74LS chips in mine have a 1981 date code, which makes the unit a whopping 44 years old. But after I plugged it in and pressed the power button, smoke and a horrible smell came out after a few minutes. I had just purchased myself hours of entertainment!

Inside the HP 5370A

It's trivial to open the 5370A: remove the 4 feet in the back by removing the Phillips screws inside them, then remove a screw to release the top or bottom cover.

Once inside, you can see an extremely modular build: the center consists of a motherboard with 10 plug-in PCBs, 4 on the left for an embedded computer that's based on an MC6800 CPU, 6 on the right for the time acquisition. The top has plug-in PCBs as well, with the power supply on the left and reference clock circuitry on the right. My unit uses the well-known HP 10811-60111 high-stability OCXO as 10 MHz clock reference. The bottom doesn't have plug-in PCBs; it has PCBs for the trigger logic and the front panel. This kind of modular build probably adds significant cost, but it's a dream for servicing and tracking down faults. To make things even easier, the vertical PCBs have a plastic ring or levers to pull them out of their slot! There are also plenty of generously sized test pins and some status LEDs.

High Stability Reference Clock with an HP 10811-60111 OCXO

Since the unit has the high stability option, I now have yet another piece of test equipment with an HP 10811-60111. OCXOs are supposed to be powered on at all times: environmental changes tend to stress them out and result in a deviation of their clock speed, which is why there's a "24 hour warm-up" sticker on top of the case. It can indeed take a while for an OCXO to relax and settle back into its normal behavior, though 24 hours seems a bit excessive. The 5370A has a separate always-on power supply just for the oven of the OCXO, to keep the crystal at a constant temperature even when the power switch on the front is not in the ON position. Luckily, the fan is powered off when the front switch is set to stand-by. [1]

In the image above, from top to bottom, you can see:

- the main power supply control PCB
- the HP 10811-60111 OCXO, with the main power relay to the right of it
- the OCXO oven power supply
- the reference frequency buffer PCB

These are the items that will play the biggest role during the repair.

RIFA Capacitors in the Corcom F2058 Power Entry Module?
Spoiler: probably not…

After plugging in the 5370A the first time, magic smoke came out of it, along with a pretty disgusting chemical smell, one that I already knew from some work that I did on my HP 8656A RF signal generator. I unplugged the power, opened up the case, and looked for burnt components, but couldn't find any. After a while, I decided to power the unit back on and… nothing. No smoke, no additional foul smell, but also no display.

One common failure mode of test equipment from way back when is RIFA capacitors that sit right next to the mains power input, before any kind of power switch. Their primary function is to filter out high frequency noise coming from the device and reduce EMI. RIFAs have a well-known tendency to crack over time and eventually catch fire. A couple of years ago, I replaced the RIFA capacitors of my HP 3457A, and the general advice is to inspect all old equipment for these gold-colored capacitors. However, no such discrete capacitors could be found here. But that doesn't mean they are not there: like a lot of older HP test equipment, the 5370A uses a Corcom F2058 line power module that has capacitors embedded inside.

Below is the schematic of the Corcom F2058 (HP part number 0960-0443), with the capacitors marked in red. You can also see a fuse F1, a transformer and, on the right, a selector that can be used to configure the device for 100V, 115V/120V, 220V and 230V/240V operation.

There was a bad smell lingering around the Corcom module, so I removed it to check it out. There are metal clips on the left and right side that you need to push in to get the module out. It takes a bit of wiggling, but it works out eventually. Once removed, however, the Corcom didn't really have a strong smell at all. I couldn't find any strong evidence online that these modules have RIFAs inside them, so for now, my conclusion is that they don't have them and that there's no need to replace them.

Module replacement

In the unlikely case that you want to replace the Corcom module, you can use this $20 AC power entry module from Mouser. One reason why you might want to do this is that the new module has a built-in power switch: if you use an external 10 MHz clock reference instead of the 10811 OCXO, there's really no need to keep the 5370A connected to the mains all the time. There are two caveats, however:

- While it has the same dimensions as the Corcom F2058, the power terminals are located at the very back, not in an indented space. This is not a problem for the 5370A, which still has enough room for both, but it doesn't work for most other HP devices that don't have an oversized case. You can see that in the picture below.
- Unlike the Corcom F2058, the replacement only feeds through the line, neutral and ground that's fed into it. You'd have to choose one configuration, 120V in my case, and wire a bunch of wires together to drive the transformer correctly. If you do this wrong, the input voltage to the power regulator will either be too low, and it won't work, or too high, and you might blow up the power regulation transistors. It's not super complicated, but you need to know what you're doing.

15V Rail Issues

After powering the unit back up, it still didn't work, but thanks to the 4 power rail status LEDs, it was immediately obvious that the +15V power rail had issues. A close-by PCB is the reference frequency buffer PCB.
It has a "10 MHz present" status LED that didn't light up either, suggesting an issue with the 10811 OCXO, but I soon figured out that this status LED relies on the presence of the 15V rail.

Power Supply Architecture

The 5370A was first released in 1978, decades before HP decided to stop including detailed schematics in their service manuals. Until Keysight, the Company Formerly Known as HP, decides to change its name again, you can download the operating and service manual here. If you need a higher quality scan, you can also purchase the manual for $10 from ArtekManuals. [2] The diagrams below were copied from the Keysight version.

The power supply architecture is straightforward: the line transformer has 5 separate windings, 4 for the main power supply and 1 for the always-on OCXO power supply. A relay is used to disconnect the 4 unregulated DC rails from the power regulators when the front power button is in the stand-by position, but the diode rectification bridge and the gigantic smoothing capacitors are located before the relay. [3]

For each of the 4 main power rails, a discrete linear voltage regulator is built around a power transistor, an LM307AN opamp, a smaller transistor for over-current protection, and a fuse. The 4 regulators share a 10V voltage reference. The opamps and the voltage reference are powered by a simple +16.2V power rail built out of a resistor and a Zener diode.

The power regulators for the +5V and -5.2V rails have a current sense resistor of 0.07 Ohm; the sense resistors for the +15V and -15V rails have a value of 0.4 Ohm. When the voltage across these resistors exceeds the 0.7V base-emitter potential of the bipolar transistors across them, the transistors start to conduct and pull down the base-emitter voltage of the power transistor, thus shutting it off. In the red rectangle of the schematic above, the +15V power transistor is on the right, the current control transistor on the left, and current sense resistor R4 is right next to the +15V label. Using the values of 0.4 Ohm, 0.07 Ohm and 0.7V, we can estimate that the power regulators enter current control (and reduce the output voltage) when the current exceeds 10A for the +5/-5.2V rails and about 1.75A for the +15/-15V rails (see the worked numbers below). This more or less matches the value of the fuses, which are rated at 7A and 1.5A respectively.

Power loss in these high-current linear regulators is significant, and the heat sinks in the back become pretty hot. Some people have installed an external fan to cool it down a bit.
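The worked numbers, straight from the 0.7 V base-emitter drop and the two sense resistor values:

\[
I_{\mathrm{limit}} = \frac{V_{BE}}{R_{\mathrm{sense}}}: \qquad
\frac{0.7\,\mathrm{V}}{0.07\,\Omega} = 10\,\mathrm{A}, \qquad
\frac{0.7\,\mathrm{V}}{0.4\,\Omega} = 1.75\,\mathrm{A}
\]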
Fault Isolation - It's the Reference Frequency Buffer PCB!

I measured a voltage of 8V instead of 15V. I would have preferred to measure no voltage at all, because a lower-than-expected voltage suggests that the power regulator is in current control instead of voltage control mode. In other words: there's a short somewhere which results in a current that exceeds what's expected under normal working conditions. Such a short can be located anywhere. But this is where the modular design of the 5370A shines: you can unplug all the PCBs, check the 15V rail, and if it's fine, add back PCBs until it's dead again. And, indeed, with all the PCBs removed, the 15V rail worked fine. I first added the CPU related PCBs, then the time acquisition PCBs, and the 15V stayed healthy. But after plugging in the reference frequency buffer PCB, the 15V LED went off and I measured 8V again. Of all the PCBs, this one is the easiest one to understand.

The Reference Frequency Buffer Board

The reference frequency buffer board has the following functionality:

- Convert the internally generated 10 MHz frequency to emitter-coupled logic (ECL) signaling. The 5370A came either with the OCXO or with a lower performance crystal oscillator; these cheaper units were usually deployed in labs that already had an external reference clock network.
- Receive an external reference clock of 5 MHz or 10 MHz, multiply it by 2 in the case of 5 MHz, and apply a 10 MHz filter. Convert to ECL as well.
- Select between the internal and external clock to create the final reference clock.
- Send the final reference clock out as ECL (time measurement logic), TTL (CPU) and sine wave (reference-out connector on the back panel).

During PCB swapping, the front-panel display had remained off even when all CPU boards were plugged in. Unlike later HP test equipment such as the HP 5334A universal counter, the CPU clock of the 5370A is derived from the 10 MHz clock that comes out of this reference frequency buffer PCB, [4] so if this board is broken, nothing works.

When we zoom down from block diagram to the schematic, we get this:

Leaving aside the debug process for a moment, I thought the 5 MHz/10 MHz to 10 MHz circuit was intriguing. I assumed that it worked by creating some second harmonic and filtering out the base frequency, and that's kind of how it works. There are 3 LC tanks with an inductance of 1 uH and a capacitance of 250 pF, good for a natural resonance frequency of \(f = \frac{1}{2 \pi \sqrt{ L C }}\) = 10.066 MHz. The first 2 LC tanks are each part of a class C amplifier. The 3rd LC tank is an additional filter. The incoming 5 MHz or 10 MHz signal periodically inserts a bit of energy into the LC tank and nudges it to be in sync with it. This circuit deserves a blog post on its own.
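As a quick sanity check of that resonance number, using the component values from the schematic:

```python
import math

L = 1e-6     # tank inductance: 1 uH
C = 250e-12  # tank capacitance: 250 pF

# natural resonance frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))
f = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"{f / 1e6:.3f} MHz")  # -> 10.066 MHz
```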
Fixing the Internal Reference Clock

When you take a closer look at the schematic, there are 2 points that you can take advantage of:

- The only part on the path from the internal clock input to the various internal outputs that depends on the 15V rail is the ECL to TTL conversion circuit. And that part of the 15V rail is only connected to 3 kOhm resistor R4.
- Immediately after the connector, the 15V first goes through an L/C/R/C circuit.

In the process of debugging, I noticed the following: the arrow points to capacitor C17, which looks suspiciously black. I found the magic smoke generator. This was the plan of attack:

- Replace C17 with a new 10 uF capacitor.
- Remove resistor R16 to decouple the internal 15V rail from the external one.
- Disconnect the top side of R4 from the internal 15V rail and wire it up straight to the connector 15V rail.

It's an ugly bodge, but after these 3 fixes, I had a nice 10 MHz ECL clock signal on the output clock test pin. The 5370A was alive and working fine!

Fixing the External Reference Clock

I usually connect my test equipment to my GT300 frequency standard, so I really wanted to fix that part of the board as well. This took way longer than it should have… I started by replacing the burnt capacitor with a 10 uF electrolytic capacitor and reinstalling R16. That didn't go well: this time, the resistor went up in smoke. My theory is that, with shorted capacitor C17 removed, there was still another short, and now the current path had to go through this resistor. Before burning up, this 10 Ohm resistor measured only 4 Ohms. I then removed the board and created a stand-alone setup to debug the board in isolation.

With that burnt-up R16 removed again, 15V applied to the internal 15V rail and a 10 MHz signal at the external input, the full circuit was working fine. I removed capacitor C16 and checked it with an LCR tester: the values were nicely in spec. Unable to find any real issues, I finally put in a new 10 Ohm resistor, put in a new 10 uF capacitor for C16 as well, plugged in the board and… now the external clock input was working fine too?! So the board is fixed now and I can use both the internal and external clock, but I still don't know why R16 burnt up after the first capacitor was replaced.

Future Work

The HP 5370A is working very well now. Once I have another Digikey order going out, I want to replace the 2 electrolytic capacitors that I used for the repair with 2 tantalum ones. I can't find it back, but on the time-nuts email list, 2 easy modifications were suggested:

- Drill a hole through the case right above the HP 10811-60111 to have access to the frequency adjust screw. An OCXO is supposed to be immune to external temperature variations, but when you're measuring picoseconds, a difference in ambient temperature can still have a minor impact. With this hole, you can keep the case closed while calibrating the internal oscillator.
- Disconnect the "10 MHz present" status LED on the reference clock buffer PCB. Apparently, this circuit creates some frequency spurs that can introduce some additional jitter on the reference clock.

If you're really hard-core: replace the entire CPU system with a modern CPU board. More than 10 years ago, the HP5370 Processor Replacement Project reverse engineered the entire embedded software stack and created a PCB based on a Beagle board with new firmware. PCBs are not available anymore, but one could easily have a new one made for much cheaper than what it would have cost back then.

Footnotes

1. My HP 8656A RF signal generator has an OCXO as well. But the fan keeps running even when it's in stand-by mode, and the default fan is very loud too! ↩
2. Don't expect to be able to cut-and-paste text from the ArtekManuals scans, because they have some obnoxious rights management that prevents this. ↩
3. Each smoothing capacitor has a bleeding resistor in parallel to discharge the capacitors when the power cable is unplugged. But these resistors will leak power even when the unit is switched off. Energy Star regulations clearly weren't a thing back in 1978. ↩
4. The CPU runs at 1.25 MHz, the 10 MHz divided by 8. ↩

2 weeks ago 80 votes
Brightness and Contrast Adjustment of Tektronix TDS 500/600/700 Oscilloscopes

Contents: Introduction · Finding the Display Tuning Potentiometers · The Result · Hardcopy Preview Mode

Introduction

Less than a week after finishing my TDS 684B analog memory blog post, a TDS 684C landed on my lab bench with a very dim CRT. If you follow the lives of the 3-digit TDS oscilloscope series, you probably know that this is normally a bit of a death sentence for the CRT: after years of use, the cathode ray loses its strength and there's nothing you can do about it other than replace the CRT with an LCD screen. I was totally ready to go that route, and if I ever need to do it, here are 3 possible LCD upgrade options that I list for later reference:

- The most common one is to buy a $350 Newscope-T1 LCD display kit by SimmConn Labs.
- A cheaper hobbyist alternative is to hack something together with a VGA to LVDS interface board and some generic LCD panel, as described in this build report. He uses a VGA LCD Controller Board KYV-N2 V2 with a 7" A070SN02 LCD panel. As I write this, the cost is $75, but I assume this used to be a lot cheaper before tariffs were in place.
- If you really want to go hard-core, you could make your own interface board with an FPGA that snoops the RAMDAC digital signals and converts them to LVDS, just like the Newscope-T1. There is a whole thread about this on the EEVblog forum.

But this blog post is not about installing an LCD panel! Before going that route, you should try to increase the brightness of the CRT by turning a potentiometer on the display board. It sounds like an obvious thing to try, but I didn't find a lot of references to it online. And in my case, it just worked.

Finding the Display Tuning Potentiometers

In the Display Assembly Adjustment section of chapter 5 of the TDS 500D, TDS 600C, TDS 700D and TDS 714L Service Manual, page 5-23, you'll find the instructions on how to change rotation, brightness and contrast. It says to remove the cabinet and then turn some potentiometers, but I just couldn't find them! They're supposed to be next to the fan, somewhere around there. Well, I couldn't see any. It was only the next day, when I was ready to take the whole thing apart, that I noticed these dust-covered holes. A few minutes and a vacuum cleaning operation later, 5 glorious potentiometers revealed themselves. From left to right:

- horizontal position
- rotation
- vertical position
- brightness
- contrast

Rotate the last 2 at will and, if you're lucky, your dim CRT will look brand new again. It did for me!

The Result

The weird colors in the picture above are a photography artifact caused by Tektronix NuColor display technology: it uses a monochrome CRT with an R/G/B shutter in front of it. You can read more about it in this Hackaday article. In real life, the image looks perfectly fine!

Hardcopy Preview Mode

If dialing up the brightness doesn't work and you don't want to spend money on an LCD upgrade, there is the option of switching the display to Hardcopy mode, like this:

[Display] -> [Settings <Color>] -> [Palette] -> [Hardcopy preview]

Instead of black, you will now get a white background. It made the scope usable before I made the brightness adjustment.

3 months ago 43 votes
A Tektronix TDS 684B Oscilloscope Uses CCD Analog Memory

Contents: Introduction · The TDS600 Series · The Acquisition Board · Measuring Along the Signal Path · A Closer Look at the Noise Issue · Conclusion

Introduction

I have a Tektronix TDS 684B oscilloscope that I bought cheaply at an auction. It has 4 channels, 1 GHz of bandwidth and a sample rate of 5 Gsps. Those are respectable numbers even by today's standards, and they're the main reason why I have it: compared to modern oscilloscopes, the other features aren't nearly as impressive. It can only record 15k samples per channel at a time, for example. But at least the sample rate doesn't go down when you increase the number of recording channels: it's 5 Gsps even when all 4 channels are enabled. I've always wondered how Tektronix managed to reach such high specifications back in the nineties, so in this blog post I take a quick look at the internals, figure out how it works, and do some measurements along the signal path.

The TDS600 Series

The first oscilloscopes of the TDS600 series were introduced around 1993; the last one, the TDS694C, was released in 2002. The TDS684 version was from sometime in 1995. The ICs in my TDS684B have date codes from as early as the first half of 1997. The main characteristic of these scopes was their extreme sample rate for that era, going from 2 Gsps for the TDS620, TDS640 and TDS644, over 5 Gsps for the TDS654, TDS680 and TDS684, to 10 Gsps for the TDS694C, which was developed under the Screamer code name.

The oscilloscopes have 2 main boards: the acquisition board contains all the parts from the analog input down to the sample memory, as well as some triggering logic; a very busy CPU board does the rest. 2 flat cables and a PCB connect the 2 boards. The interconnect PCB traces go to the memory on the acquisition board, so it's safe to assume that this interface is used for high-speed waveform data transfer while the flat cables are for lower speed configuration and status traffic. If you ever remove the interconnection PCB, make sure to put it back with the same orientation. It will fit just fine when rotated 180 degrees, but the scope won't work anymore!

The Acquisition Board

The TDS 684B has 4 identical channels that can easily be identified. There are 6 major components in the path from input to memory:

- Analog front-end: hidden under a shielding cover, but you'd expect to find a bunch of relays there to switch between different configurations: AC/DC, 1 Meg/50 Ohm termination, … I didn't open it because it requires disassembling pretty much the whole scope.
- Signal conditioner IC(?): this is the device with the glued-on heatsink. I left it in place because there's no metal attachment latch, and reattaching it would be a pain. Since the acquisition board has a bunch of custom ICs already, chances are this one is custom as well, so knowing the exact part number wouldn't add a lot of extra info. We can see one differential pair going from the analog front-end into this IC, and a second one going from this IC to the next one, an ADG286D.
- National Semi ADG286D mystery chip: another custom chip with unknown functionality.
- Motorola MC10319DW 8-bit 25 MHz A/D converter: finally, an off-the-shelf device! But why is it only rated for 25 MHz?
- National Semi ADG303, a custom memory controller chip: it receives the four 8-bit lanes from the four ADCs on one side and connects to four SRAMs on the other.
- 4 Alliance AS7C256-15JC SRAMs: each memory has a capacity of 32 KB and a 15 ns access time, which allows for a maximum clock of 66 MHz.
The TDS 684B supports waveform traces of 15k points, so they either only use half of the available capacity or they use some kind of double-buffering scheme. There are four unpopulated memory footprints. In one of my TDS 420A blog posts, I extend the waveform memory by soldering in extra SRAM chips. I'm not aware of a TDS 684B option for additional memory, so I'm not optimistic about the ability to expand its memory. There's also no such grayed-out option in the acquisition menu.

When googling for "ADG286D", I got my answer when I stumbled on this comment on BlueSky, which speculates that it's an analog memory, probably some kind of CCD FIFO: analog values are captured at a rate of up to 5 GHz and then shifted out at a much lower speed and fed into the ADC. I later found a few other comments that confirm this theory.

Measuring Along the Signal Path

Let's verify this by measuring a few signals on the board with a different scope. The ADC input pins are large enough to attach a Tektronix logic analyzer probe.

ADC sampling the signal

With a 1 MHz signal and using a 100 Msps sample rate, the input to the ADC is clearly chopped into discrete samples, with a new sample every 120 ns. We can discern a sine wave in the samples, but there's a lot of noise on the signal too. Meanwhile, the TDS684B CRT shows a nice and clean 1 MHz signal. I haven't been able to figure out how that's possible. For some reason, simply touching the clock pin of the ADC with a 1 MOhm oscilloscope probe adds a massive amount of noise to the input signal, but it shows the clock nicely: the ADC clock matches the 120 ns sample spacing, it's indeed 8.33 MHz.

Acquisition refresh rate

The scope only records in bursts. When recording 500, 1000 or 2500 sample points at 100 Msps, it records a new burst every 14 ms, or 70 Hz. When recording 5000 points, the refresh rate drops to 53 Hz. For 15000 points, it drops even lower, to 30 Hz.

Sampling burst duration

The duration of a sampling burst is always 2 ms, irrespective of the sample rate of the oscilloscope or the number of points acquired! The combination of a 2 ms burst and an 8 MHz sample clock results in 16k samples. So the scope always acquires what's probably the full contents of the CCD FIFO and throws a large part away when a lower sample length is selected.
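The burst arithmetic is easy to verify; here's a quick back-of-the-envelope sketch using the measured numbers from above:

```python
# Back-of-the-envelope check of the CCD FIFO size, using the measured values:
# a 2 ms burst, clocked out at one sample every 120 ns (~8.33 MHz).
burst_duration = 2e-3        # s, measured burst length
adc_clock = 1 / 120e-9       # Hz, measured sample spacing -> 8.33 MHz

samples_per_burst = burst_duration * adc_clock
print(f"{samples_per_burst:.0f} samples")  # ~16667 -> consistent with a 16k FIFO
```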
Here's the 1 MHz signal sampled at 100 Msps, and the same signal sampled at 5 Gsps. It looks like the signal doesn't scan out of the CCD memory in the order it was received, hence the signal discontinuity in the middle.

Sampling a 1 GHz signal

I increased the input signal from 1 MHz to 1 GHz. Looking at the ADC input at 5 Gsps, with a little bit of effort you can once again imagine a sine wave in those samples. There's a periodicity of 5 samples, as one would expect for a 1 GHz to 5 Gsps ratio. The sample rate is still 8.3 MHz.

Sampling a 200 MHz signal

I also applied a 200 MHz input signal. The period is now ~22 samples, as expected. 200 MHz is low enough to measure with my 350 MHz bandwidth Siglent oscilloscope. To confirm that the ADG286D chip contains the CCD memory, I measured the signal on one of the differential pins going into that chip, and there it is: a nice 200 MHz signal.

A Closer Look at the Noise Issue

After initially publishing this blog post, I had a discussion on Discord about the noise issue, which made me do a couple more measurements.

Input connected to ground

Here's what the ADC input looks like when the input of the scope is connected to ground. 2 major observations:

- there's a certain amount of repetitiveness to it.
- there are major voltage spikes in between each repetition. They are very faint on the scope shot.

The spikes are still hard to see, so I added the arrows, but look how the sample pattern repeats after each spike! The time delay between each spike is ~23.6 us. With a sample every 120 ns, that converts into a repetitive pattern of ~195 samples. I don't know why a pattern of 195 samples exists, but it's clear that each of those 195 locations has a fixed voltage offset. If the scope measures those offsets during calibration, it can subtract them after each measurement and get a clean signal out.

50 kHz square wave

Next I applied a 50 kHz square wave to the input. This frequency was chosen so that, for the selected sample rate, a single period would cover the 15000 sampling points. 2 more observations:

- the micro-repetitiveness is still there, irrespective of the voltage offset due to the input signal. That means that subtracting the noise should be fine for different voltage inputs.
- we don't see a clean square wave outline. It looks like there's some kind of address interleaving going on.

50 kHz sawtooth wave

We can see the interleaving even better when applying a sawtooth waveform that covers one burst: instead of a clean break from high to low somewhere in the middle, there is a transition period where you get both high and low values. This confirms that some kind of interleaving is happening.

Conclusion

The TDS684B captures input signals at high speed in an analog memory and digitizes them at 8 MHz. The single-ended input to the ADC is noisy, yet the signal looks clean when displayed on the CRT of the scope, likely because the noise pattern is repetitive and predictable. In addition to noise, there's also an interleaving pattern during the read-out of the analog FIFO contents. The number of samples digitized is always the same, irrespective of the settings in the horizontal acquisition menu. (Not written by ChatGPT, I just like to use bullet points…)

3 months ago 38 votes
Rohde & Schwarz AMIQ Modulation Generator - Teardown and Analog Deep Dive

Contents: Introduction · The Rohde & Schwarz AMIQ Modulation Generator · WinIQSim Software · Inside the AMIQ · The Signal Generation PCB · Analog Signal Generation Architecture: Fixed vs Variable DAC Clock · Internal Reference Clock Generation · DAC Clock Synthesizer · I/Q Output Skew Tuning · Variable Gain Amplifier · Internal Diagnostics · Efficient Distribution of Configuration Signals · Conclusion · References

Introduction

Every few months, a local company auctions off all kinds of lab, production and test equipment. I shouldn't be subscribed to their email list, but I am, and that's one way I end up with more stuff that I don't really need. During a recent auction, I got my hands on a Rohde & Schwarz AMIQ, an I/Q modulation generator, for a grand total of $45. Add to that another 30% for the auction fee and taxes, and you're still paying much less than what others would pay for a round of golf. But instead of one morning of fun, this thing has the potential to keep me busy for many weekends, so what a deal!

A few days after "winning" the auction, I drove to a dark dungeon of a warehouse in San Jose to pick up the loot. The AMIQ has a power on/off button and 3 LEDs, and that's it in terms of user interface. There are no dials, there's no display. So without any other options, I simply powered it up, and I was immediately greeted by the ominous clicking of a hard drive. I was right: this thing would keep me entertained for at least a little bit!

It took a significant amount of effort to restore the machine to its working state. I'll write about that in future blog posts, but let's start with an overview of the functionality and a teardown of the R&S AMIQ, and then deep dive into some of its analog circuits. AMIQ prices on eBay vary wildly, from $129 to $2600 at the time of writing this. Even if you get one of the higher priced ones, you should expect to get a unit that's close to failing due to leaking capacitors and a flaky hard drive!

The Rohde & Schwarz AMIQ Modulation Generator

Reduced to an elevator sales pitch, the AMIQ is a 2-channel arbitrary waveform generator (AWG) with a deep sample buffer. That's it! It has a streaming buffer that feeds samples to two 14-bit DACs at a sample rate of up to 105 MHz. The two output channels, I and Q, will typically contain quadrature modulation signals that are sent to an RF vector signal generator, such as a Rohde & Schwarz SMIQ, for the actual high-frequency modulation. In a typical setup, the AMIQ generates the baseband modulated signal and the SMIQ shifts that baseband signal to an RF frequency.

Since the AMIQ has no user interface, the waveform data must be provided by an external device. This could be a PC that runs the R&S WinIQSim software, or even the SMIQ itself, because it has the ability to control an AMIQ. You can also create your own waveforms and upload them via floppy disk, GPIB or an RS-232 interface using SCPI control commands. Figure 4-1 of the AMIQ Operating Manual has a simplified block diagram.
It is pretty straightforward and somewhat similar to the one of my HP 33120A function generator. On the left are 2 blocks that are shared: a clock synthesizer and the waveform memory. And then for each channel:

- a 14-bit D/A converter
- analog filters
- an output section with amplifier/attenuator
- a differential analog output driver (AMIQ-B2 option)

The major blocks are surrounded by a large number of DACs that are used to control everything from the tuning input of the local 10 MHz clock oscillator, over the gain and offset of the output signals and the clock skew between the I and Q signals, to much more. You could do some of this with a modern SDR setup, but the specifications of the AMIQ are dialed up a notch. Both channels are completely symmetrical, to avoid modulation errors at the source. If there's a need to compensate for small delay differences in, for example, the external cables, you can do so by changing the skew between the clocks of the output DACs with a precision of 10 ps. Similarly, the DAC sample frequency can be programmed with 32-bit precision.

In addition to the main I/Q outputs at the front, there are a bunch of secondary inputs and outputs:

- 10 MHz reference input and output
- sample clock trigger input
- marker output
- external filter loopback output and input
- bit error rate measurement (BER) connector (AMIQ-B1 option)
- parallel interface with the digital value of the samples that are sent to the DAC (front panel, AMIQ-B3 option)

Above those speciality inputs and outputs are the obligatory GPIB interface and a bunch of generic connectors that look suspiciously like the ones you'd find on an early-2000s PC: PS/2 keyboard and mouse, parallel port, RS232, and USB.

There are 3 different AMIQ versions:

- 1110.2003.02: 4M samples
- 1110.2003.03: 4M samples
- 1110.2003.04: 16M samples

Mine is an AMIQ-04.

WinIQSim Software

While it is possible to control the AMIQ over RS-232 or GPIB with your own software, you'd have a hard time matching the features of WinIQSim, an R&S Windows application that supports the AMIQ. With WinIQSim, you can select one of the popular communication protocols from the late nineties and early 2000s, fill in digital framing data, apply all kinds of distortions and interferences, compute the I and Q waveforms, and send them to the AMIQ. Some of the supported formats include CDMA2000, 802.11a WLAN, TD-SCDMA and more. But you don't have to use these official protocols: WinIQSim supports any kind of FSK, QPSK, QAM or other common modulation method.

You need a license for some of the communication protocols. The license is linked to the AMIQ device, not the PC, but the license check is pretty naive, and while I haven't tried it… yet, the EEVblog forum has discussions about how to enable features yourself. My device only came with a license for IS-95 CDMA.

Inside the AMIQ

It's trivial to open up an AMIQ: after removing the 4 feet in the back with a regular Phillips screwdriver, you can simply slide off the outer case. It has 2 major subsystems: the top contains all the components of a standard PC, and the bottom has the signal generation PCB. The PCB looks incredibly clean and well laid out, and I love how they printed the names of the different sections on the gray metal shielding plates. We'll leave the PC system for a future blog post and focus on the signal generation PCB.

The Signal Generation PCB

Let's remove the shielding plates to see what's underneath.
I had to drill out one of the screws that attach the plates because the head was stripped. (Did somebody before me already try to repair it?) The bottom left and right sections are perfectly symmetrical, as one would expect for a device that has the ability to tune skew mismatches with a 10 ps precision.

Rohde & Schwarz recently made the terrible decision to lock all their software and manuals behind an approval-only corporate log-in wall, but luckily some of the most important AMIQ assets can be found online elsewhere, including the operating manual and a service manual that contains the full schematics! Let's dig a bit deeper into the various aspects of the design. In what follows, I'll be focusing primarily on the analog aspects of the design. This is a very personal choice: not that the digital sections aren't important, it's just that, as a digital design engineer, they're not particularly interesting to me. By studying the analog sections, I hope to stumble into circuits that I didn't really know a lot about before.

Fantastic Schematics

Before digging in for real, a word about the schematics: they are fantastic. Each sub-system has a block diagram that is already pretty detailed, with signal names that match the schematics and test points, often with annotations to indicate the voltage or frequency range. Here's the block diagram of the reference and DAC clock generation section, for example (schematic page 5). Signals that come from or go to other pages are fully referenced. Look at the SYN_OUT_CLK signal below (schematic page 10): the signal is also used on page 6, coordinates 1D and 7B, and page 22, coordinate 8A. How cool is that?

Signal path test points

One of the awesome features of the PCB is the generous number of test points. We're not just talking PCB test points against which you can hold your oscilloscope probe, or even header pins, though there are plenty of those too, but full-on SMB connectors. In addition to these SMB connectors, there are also plenty of jumpers that can be used to interrupt the default signal flow and insert your own test signal instead.

Analog Signal Generation Architecture: Fixed vs Variable DAC Clock

In the HP 33120A, the DAC has a fixed 40 MHz clock. There's a 16 kB waveform RAM that contains, say, one quarter period of a 100 Hz sine. If you want to send out a sine wave of 200 Hz, instead of sequentially stepping through all the addresses of the waveform RAM, you just skip every other address (see the sketch below). One of the benefits of this kind of generation scheme is that you can make do with a fixed-frequency analog anti-aliasing filter: the Nyquist frequency is always the same, after all. A major disadvantage, however, is that even if the output signal has a bandwidth of only 1 MHz, you still need to feed the DAC at the fixed clock rate. You could insert a digital upsampling filter between the waveform memory and the DAC, but that requires significant mathematical DSP firepower, or you'd have to increase the depth of the waveform memory.

For an arbitrary waveform generator, it makes more sense to run the DAC at whichever clock speed is sufficient to meet the Nyquist requirement of the desired signal, and to provide a number of different filtering options. The AMIQ has 4 such options: no filter, a 25 MHz filter, a 2.5 MHz filter, or a loopback through an external filter.
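Here's a minimal sketch of that fixed-clock, address-skipping scheme. It's a simplification: the table holds one full sine period instead of the 33120A's quarter period, and the 40 MHz clock and 16k table size are just the numbers mentioned above.

```python
import math

DAC_CLOCK = 40e6                 # fixed DAC clock, as in the HP 33120A
TABLE_LEN = 16384                # waveform RAM size (one full sine period here)
TABLE = [math.sin(2 * math.pi * i / TABLE_LEN) for i in range(TABLE_LEN)]

def dac_samples(stride, n):
    """Step through the table with a programmable address stride: stride 1 gives
    the base frequency, stride 2 doubles it, etc. The DAC clock never changes."""
    addr, out = 0, []
    for _ in range(n):
        out.append(TABLE[addr])
        addr = (addr + stride) % TABLE_LEN
    return out

base_freq = DAC_CLOCK / TABLE_LEN            # ~2441 Hz for these assumed values
samples = dac_samples(2, 100)                # stride 2 -> double the base frequency
print(f"{base_freq:.0f} Hz base, {2 * base_freq:.0f} Hz with stride 2")
```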
The DAC sample clock range is huge, from 10 Hz all the way to 105 MHz, though specifications are only guaranteed up to 100 MHz. According to the data sheet, the clock frequency can be set with a precision of 10^-7.

Internal Reference Clock Generation

Like all professional test and measurement equipment, the AMIQ uses a 10 MHz reference clock that can come from outside or that can be generated locally with a 10 MHz crystal. It's common for high-end equipment to have an oven-controlled crystal oscillator (OCXO), but the AMIQ has a lower spec'ed temperature-compensated one (TCXO), a Milliren Technologies 453-0210. If we look at the larger reference clock generation block diagram (schematic page 5), we can see something slightly unusual: instead of selecting between the internal TCXO output and the external reference clock input, the internal reference clock always comes from the TCXO (green). When the internal clock is selected, the TCXO output frequency can be tuned with an analog signal that comes from the VTCXO TUNE DAC (blue), but when the external reference input is active, the TCXO is phase locked to the external clock. You can see the phase comparator and low pass filter in red.

The reason for using a PLL with the internal TCXO as the voltage controlled oscillator is probably to ensure that the generated reference clock has the phase noise of the TCXO while tracking the frequency of the external reference clock: a PLL acts as a low-pass filter on the reference clock and as a high-pass filter on the VCO. If the TCXO has better high-frequency phase noise than the external reference clock, that makes sense. This is really out of my wheelhouse, so take all of this with a grain of salt… SYN_REF is the output of the internal reference clock generation unit.

DAC Clock Synthesizer

The clock synthesizer creates a highly programmable DAC clock from the internal reference clock SYN_REF from the previous section. It should come as no surprise that this clock is generated by a PLL as well (schematic page 5). There are two speciality components in the clock generation path:

- a Mini-Circuits JTOS-200 VCO
- an Analog Devices AD9850 DDS synthesizer

The VCO has an operating frequency between 100 and 200 MHz. I don't know enough about VCOs to give meaningful commentary about the specifications, but based on comparisons with similar components on Digikey, such as this FMVC11009, it's safe to assume that it's an expensive and high quality component.

The AD9850 is located in the feedback path of the PLL, where it acts as a feedback divider with a precision of 32 bits. The signal flow inside the AD9850 is interesting. It works as follows:

- each clock cycle, the phase of a numerically controlled oscillator (NCO) accumulates with the programmable 32-bit increment.
- the upper bits of the phase accumulator serve as the address of a sine waveform table.
- a 10-bit DAC converts the digital sine wave to analog.
- the analog signal is sent through an external low-pass filter.
- the output of the low-pass filter goes back into the AD9850 and through a comparator to generate a digital output clock.

This thing has its own programmable signal generator! But why the roundtrip from digital to analog back to digital? In theory, the MSB of the NCO could be used as the output clock of the clock generator. The problem is that in such a configuration, the length of each clock period toggles between N and N+1 clock cycles, with a ratio such that the average clock period ends up at the desired value.
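Here's a minimal NCO sketch that shows that N/N+1 toggling; the 100 MHz clock and ~10.7 MHz output frequency are assumed example values, not numbers from the AMIQ schematics.

```python
# Minimal NCO sketch: a 32-bit phase accumulator clocked at 100 MHz (assumed),
# programmed for a ~10.7 MHz output (assumed).
f_clk, f_out = 100e6, 10.7e6
inc = round(f_out / f_clk * 2**32)        # 32-bit frequency tuning word

phase, prev_msb, last_edge = 0, 0, 0
periods = []
for n in range(1, 300):
    phase = (phase + inc) & 0xFFFFFFFF    # accumulator wraps at 2^32
    msb = phase >> 31                     # MSB as the raw output "clock"
    if msb and not prev_msb:              # rising edge of the MSB
        periods.append(n - last_edge)
        last_edge = n
    prev_msb = msb

# Periods toggle between N and N+1 input cycles (here 9 and 10), averaging out
# to f_clk/f_out ~ 9.35. This is the jitter that the analog roundtrip removes.
print(periods[1:20])
```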
But this creates major spurs in the frequency spectrum of the generated clock. When fed into the phase comparator of a PLL, these frequency spurs can show up as jitter at the output of the PLL, and thus in the spectrum of the generated I and Q signals. By converting the signal to analog and using a low pass filter, the spurs can be filtered away. The combination of a low pass filter and a comparator acts as an interpolator: the edges of the generated clock fall somewhere in between N and N+1 (schematic page 21).

The steepness of the low pass filter depends on the ratio between the input and the output clock: the lower the ratio, the steeper the filter. There are a bunch of binary clock dividers that make it hard to know the exact ratio, but if the output of the VCO is 100 MHz and the input 10 MHz or less, there is a 10:1 ratio. The AMIQ has a 7th order elliptical low pass filter. I ran a quick simulation in LTspice to check the behavior (dds_filter.asc source file). The filter has a cut-off frequency of around 13 MHz. In modern fractional PLLs, instead of using a regular DAC, one could use a high-order sigma-delta unit to create a pulse-density modulated output with the noise pushed to higher frequencies, and a low pass filter that can be less aggressive. There's plenty of literature online about DDS clock generators, some of which I've listed in the references at the bottom, but the datasheet of the AD9850 itself is a good start.

I/Q Output Skew Tuning

Earlier I mentioned the ability to tune the skew between the I and the Q output to compensate for a difference in cable length when connecting the AMIQ to an RF signal generator. While the digital waveform fetching circuit works on one clock, the two clocks that go to the DACs are the ones that can be shifted (schematic page 10).

Starting with input signal DAC_CLK, the goal is to create I_DAC_CLK and Q_DAC_CLK, where one can be moved up to 1 ns ahead of or behind the other. Signals can be delayed by sending them through an R/C combo. Since we need a variable delay, we need a way to change the value of either the R or the C. A varactor or varicap diode is exactly that kind of device: its capacitance changes depending on the reverse bias voltage across the diode. Static input signal SKEW_TUNE comes from a DAC. It goes to the cathode of one varicap and the anode of the other. When its voltage increases, the capacitances of those 2 diodes move in opposite directions, and so do the R/C delays along the I and Q clock paths. A BB147 varicap has a capacitance that varies between 2 pF and 112 pF depending on the bias voltage. Capacitors C168 and C619 prevent the relatively high bias voltages from reaching the digital signal path.

Does the circuit work? Absolutely! The scope photos below show the impact of the skewing circuit when dialed to the maximum in both directions: with a 2 ns/div horizontal scale, you can see a skew of ~1 ns either way.

Variable Gain Amplifier

The analog signal path starts at the DAC and then goes through the anti-aliasing filters, an output section that does amplification and attenuation, and finally the output connector board. It is common for signal generators to have a fixed-gain amplification section and then some selectable fixed attenuation stages: amplifiers with a variable gain and low distortion are hard to design.
If you need a signal amplitude that can't be achieved with one of the fixed attenuation settings, one solution is to multiply the signal before it enters the DAC to reduce the output amplitude, though that's at the expense of losing some of the dynamic range of the DAC. This is not the case for the AMIQ: while it can use a signal path that only uses fixed amplification and attenuation stages, it also offers the intriguing option to send the analog signal through an analog multiplier stage.

If we zoom down from the system diagram to the block diagram, we can see how the multiplier/variable attenuator sits between the filter and the output amplifier, with 2 control input signals: AMPL_CNTRL and OFFSET. This circuit exists twice, of course: once for the I and once for the Q channel (schematic page 3). Let's check out the details!

The heavy lifting of the variable gain amplifier is performed by an Analog Devices AD835 250 MHz, Voltage Output, 4-Quadrant Multiplier, a part from their Analog Multipliers & Dividers catalog (schematic page 14). It's a tiny 8-pin device that calculates W = (X1-X2) * (Y1-Y2) + Z. In low volume, it will set you back $32 on Digikey. In addition to the multiplication, you can set a fixed output gain by feeding output W back through a resistive divider into Z. For inputs less than 10 MHz, it has a typical harmonic distortion of -70 dB. Analog Devices datasheets usually have a Theory of Operation section that explains, well, the underlying theory of operation. The AD835 has such a section as well, but it doesn't go any further than stating that the multiplier is based on a classic form, having a translinear core, supported by three (X, Y, and Z) linearized voltage-to-current converters, and the load driving output amplifier. I have no clue how the thing works!

In the case of the AMIQ, X2 and Y2 are strapped to ground, and there's no feedback from W back into Z either, which reduces the functionality to: W = FILTER_OUT * AMPL_VAR + OFFSET. AMPL_VAR and OFFSET are static analog values that are each created by a 12-bit DAC8143, just like many other analog configuration signals. It's almost a shame that this powerful little device is asked to perform such a basic operation. While researching the AD835, somebody pointed out the AD8310, another interesting speciality chip from Analog Devices. It's a DC to 440 MHz, 95 dB logarithmic amplifier; a converter from linear to logarithmic scale, basically. Discovering little gems like this is why I love studying schematics of complex devices.
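A toy model of that reduced transfer function, just to show how the 12-bit control DACs quantize the gain and offset. The unipolar 0-to-1 DAC scaling is an assumption for illustration; the real AMPL_VAR and OFFSET voltage ranges aren't given here.

```python
# Toy model of the AMIQ variable-gain stage: W = FILTER_OUT * AMPL_VAR + OFFSET.
# The 0..1 unipolar DAC scaling is an assumption, not taken from the schematics.
def variable_gain(filter_out: float, ampl_code: int, offset_code: int) -> float:
    ampl_var = ampl_code / 4095    # 12-bit DAC8143 -> 4096 discrete gain steps
    offset = offset_code / 4095
    return filter_out * ampl_var + offset

print(variable_gain(1.0, 2048, 0))  # ~0.5: half gain, no offset
```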
Here’s a good example of that: Schematic page 14 (Click to enlarge) I_OUT_DIAG and Q_OUT_DIAG are the outputs after the attenuator that will eventually end up at the connectors. The circuit in the top red square is a signal peak detector, similar to what you’d find in the rectifier of a power supply. It allows the AMIQ to track the output level of signals with a frequency that is way higher than the sample rate of the ADC. The circuit in the red rectangle below performs a digital XOR on the analog I and the Q signals and then sends it through a simple R/C low pass filter. I think that it allows the AMIQ to check that the phase difference between the I and Q channel is sensible when applying a test signal during power-on self-test. Efficient Distribution of Configuration Signals The AMIQ signal board has hundreds of configuration bits: there are the obvious enable/disable or selection bits, such as those to select between different output filters, but the lion’s share are used to set the output value of many 12-bit DACs. Instead of using parallel busses that fan out from an FPGA, the AMIQ has a serial configuration scan chain. Discrete 8-bit 74HCT4094 shift-and-store registers are located all over the design for the digital configuration bits. The DAC8134 devices have their own built-in shift register and are part of the same scan chain. Schematic page 19 (Click to enlarge) The schematic above is an example of that. The red scan chain data input goes through the VTCXO_TUNE DAC, then the 74HCT4094 after which it exist to some other page of the schematics. Conclusion Around the late nineties, test equipment companies started to stop adding schematics to their service manuals, but the R&S AMIQ is a nice exception to that. And while the device already has a bunch of FPGAs, most of the components are off-the-shelf single-function components. Thanks to that, the AMIQ is an excellent candidate for a deep dive: all the information is there, you need to just spend a bit of effort to go through things. I had a ton of fun figuring out how things worked. References Rohde & Schwarz documents R&S - IQ Modulation Generator AMIQ R&S - AMIQ Datasheet R&S - AMIQ Operating Manual R&S - AMIQ Service Manual with schematic Various application notes R&S - Floppy Disk Control of the I/Q Modulation Generator AMIQ R&S - Software WinIQSIM for Calculating I/Q Signals for Modulation Generator R&S AMIQ R&S - Creating Test Signals for Bluetooth with AMIQ / WinIQSIM and SMIQ R&S - WCDMA Signal Generator Solutions R&S - Golden devices: ideal path or detour? R&S - Demonstration of BER Test with AMIQ controlled by WinIQSIM Other AMIQ content on the web zw-ix has a blog - Getting an Rohde Schwarz AMIQ up and running zw-ix has a blog - Connecting a Rohde Schwarz AMIQ to a SMIQ04 Bosco tweets DDS Clock Synthesis MT-085 Tutorial: Fundamentals of Direct Digital Synthesis (DDS) How to Predict the Frequency and Magnitude of the Primary Phase Truncation Spur in the Output Spectrum of a Direct Digital Synthesizer (DDS) Related content The Signal Path - Teardown, Repair & Analysis of a Rohde & Schwarz AFQ100A I/Q (ARB) Modulation Generator
Efficient Distribution of Configuration Signals

The AMIQ signal board has hundreds of configuration bits: there are the obvious enable/disable or selection bits, such as those to select between different output filters, but the lion's share are used to set the output value of many 12-bit DACs. Instead of using parallel busses that fan out from an FPGA, the AMIQ has a serial configuration scan chain. Discrete 8-bit 74HCT4094 shift-and-store registers are located all over the design for the digital configuration bits. The DAC8143 devices have their own built-in shift register and are part of the same scan chain.

Schematic page 19 (Click to enlarge)

The schematic above is an example of that. The red scan chain data input goes through the VTCXO_TUNE DAC, then the 74HCT4094, after which it exits to some other page of the schematics.
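To make the scan chain idea concrete, here's a toy model of daisy-chained shift-and-store registers in Python. It mimics the 74HCT4094 behavior (shift on clock, copy to outputs on strobe) but is my own illustration, not a simulation of the actual AMIQ chain:

class ShiftStoreRegister:
    """Toy model of an 8-bit shift-and-store register like the 74HCT4094."""
    def __init__(self):
        self.shift = [0] * 8  # shift register stages
        self.store = [0] * 8  # output latches

    def clock(self, data_in):
        # Shift one bit in; the bit falling off the end feeds the next device.
        data_out = self.shift.pop()
        self.shift.insert(0, data_in)
        return data_out

    def strobe(self):
        # Copy the shift stages to the output latches.
        self.store = list(self.shift)

# Three chained devices: clock a 24-bit configuration word through the chain,
# then strobe once so all outputs update simultaneously.
chain = [ShiftStoreRegister() for _ in range(3)]
for bit in [int(b) for b in "101100111000101011110000"]:
    for dev in chain:
        bit = dev.clock(bit)
for dev in chain:
    dev.strobe()
print([dev.store for dev in chain])

The appeal of this scheme is wiring: one data line and a couple of control lines reach every corner of the board, no matter how many configuration bits get added.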
Conclusion

Around the late nineties, test equipment companies stopped including schematics in their service manuals, but the R&S AMIQ is a nice exception to that. And while the device already has a bunch of FPGAs, most of the components are off-the-shelf single-function components. Thanks to that, the AMIQ is an excellent candidate for a deep dive: all the information is there, you just need to spend a bit of effort to go through it. I had a ton of fun figuring out how things worked.

References

Rohde & Schwarz documents:
R&S - IQ Modulation Generator AMIQ
R&S - AMIQ Datasheet
R&S - AMIQ Operating Manual
R&S - AMIQ Service Manual with schematic

Various application notes:
R&S - Floppy Disk Control of the I/Q Modulation Generator AMIQ
R&S - Software WinIQSIM for Calculating I/Q Signals for Modulation Generator R&S AMIQ
R&S - Creating Test Signals for Bluetooth with AMIQ / WinIQSIM and SMIQ
R&S - WCDMA Signal Generator Solutions
R&S - Golden devices: ideal path or detour?
R&S - Demonstration of BER Test with AMIQ controlled by WinIQSIM

Other AMIQ content on the web:
zw-ix has a blog - Getting an Rohde Schwarz AMIQ up and running
zw-ix has a blog - Connecting a Rohde Schwarz AMIQ to a SMIQ04
Bosco tweets

DDS Clock Synthesis:
MT-085 Tutorial: Fundamentals of Direct Digital Synthesis (DDS)
How to Predict the Frequency and Magnitude of the Primary Phase Truncation Spur in the Output Spectrum of a Direct Digital Synthesizer (DDS)

Related content:
The Signal Path - Teardown, Repair & Analysis of a Rohde & Schwarz AFQ100A I/Q (ARB) Modulation Generator

More in technology

ArtSci MagiCalc

The Visual Difference. See for yourself.

MIT’s Octopus is a delightful environmental sensor for urban deployment

There are a lot of very good reasons to monitor local environmental conditions, especially as they pertain to climate change research and various ecological protection initiatives. But doing so in urban locales is a challenge, as simply mounting a device can be difficult. That's why a team from MIT's Senseable City Lab built Octopus, which […]

2025-08-25 teletext in north america

I have an ongoing fascination with "interactive TV": a series of efforts, starting in the 1990s and continuing today, to drag the humble living room television into the world of the computer. One of the big appeals of interactive TV was adoption: the average household had a TV long before the average household had a computer. So, it seems like interactive TV services should have proliferated before personal computers, at least following the logic that many in the industry did at the time. This wasn't untrue! In the UK, for example, Ceefax was a widespread success by the 1980s. In general, TV-based teletext systems were pretty common in Europe. In North America, they never had much of an impact---but not for lack of trying. In fact, there were multiple competing efforts at teletext in the US and Canada, and it may very well have been the sheer number of independent efforts that sunk the whole idea.

But let's start at the beginning. The BBC went live with Ceefax in 1974, the culmination of years of prototype development and test broadcasts over the BBC network. Ceefax was quickly joined by other teletext standards in Europe, and the concept enjoyed a high level of adoption. This must have caught the attention of many in the television industry on this side of the ocean, but it was Bonneville International that first bit [1]. Its premier holding, KSL-TV of Salt Lake City, had an influence larger than its name suggests: KSL was carried by an extensive repeater network and reached a large portion of the population throughout the Mountain States. Because of the wide reach of KSL and the even wider reach of the religion that relied on Bonneville for communications, Bonneville was also an early innovator in satellite distribution of television and data. These were ingredients that made for a promising teletext network, one that could quickly reach a large audience and expand to broader television networks through satellite distribution.

KSL applied to the FCC for an experimental license to broadcast teletext in addition to its television signal, and received it in June of 1978. I am finding some confusion in the historical record over whether KSL adopted the BBC's Ceefax protocol or the competing ORACLE, used in the UK by the independent broadcasters. A 1982 paper on KSL's experiment confusingly says they used "the British CEEFAX/Oracle," but then in the next sentence the author gives the first years of service for Ceefax and ORACLE the wrong way around, so I think it's safe to say that they were just generally confused. I think I know the reason why: in the late '70s, the British broadcasters were developing something called World System Teletext (WST), a new common standard based on aspects of both Ceefax and ORACLE. Although WST wasn't quite final in 1978, I believe that what KSL adopted was actually a draft of WST.

That actually hints at an interesting detail which becomes important to these proposals: in Europe, where teletext thrived, there were usually not very many TV channels. The US's highly competitive media landscape led to a proliferation of different TV networks, and local operations in addition. It was a far cry from the UK, for example, where 1982 saw the introduction of a fourth channel called, well, Channel 4. By contrast, Salt Lake City viewers with cable were picking from over a dozen channels in 1982, and that wasn't an especially crowded media market.
This difference in the industry, between a few major nationwide channels and a longer list of often local ones, has widespread ramifications on how UK and US television technology evolved. One of them is that, in the UK, space in the VBI to transmit data became a hotly contested commodity. By the '80s, obtaining a line of the VBI on any UK network to use for your new datacasting scheme involved a bidding war with your potential competitors, not unlike the way spectrum was allocated in the US. Teletext schemes were made and broken by the outcomes of these auctions. Over here, there was a long list of television channels, and on most of them only a single line of the VBI was in use for data (line 21 for closed captions). You might think this would create fertile ground for VBI-based services, but it also posed a challenge: the market was extensively fractured. You could not win a BBC or IBA VBI allocation and then have nationwide coverage; you would have to negotiate such a deal with a long list of TV stations and then likely provide your own infrastructure for injecting the signal. In short, this seems to be one of the main reasons for the huge difference in teletext adoption between Europe and North America: throughout Europe, broadcasting tended to be quite centralized, which made it difficult to get your foot in the door but very easy to reach a large customer base once you had. In the US, it was easier to get started, but you had to fight for each market area. "Critical mass" was very hard to achieve [2].

Back at KSL, $40,000 (~$200,000 today) bought a General Automation computer and Tektronix NTSC signal generator that made up the broadcast system. The computer could manage as many as 800 pages of 20x32 teletext, but KSL launched with 120. Texas Instruments assisted KSL in modifying thirty television sets with a new decoder board and a wired remote control for page selection. This setup, very similar to teletext sets in Europe, nearly doubled the price of the TV set. This likely would have become a problem later on, but for the pilot stage, KSL provided the modified sets gratis to their 30 test households.

One of the selling points of teletext in Europe was its ability to provide real-time data. Things like sports scores and stock quotations could be quickly updated in teletext, and news headlines could make it to teletext before the next TV news broadcast. Of course, collecting all that data and preparing it as teletext pages required either a substantial investment in automation or a staff of typists. At the pilot stage, KSL opted for neither, so much of the information that KSL provided was out-of-date. It was very much a prototype. Over time, KSL invested more in the system. In 1979, for example, KSL partnered with the National Weather Service to bring real-time weather updates to teletext---all automatically via the NWS's computerized system called AFOS.

At that time, KSL was still operating under an experimental license, one that didn't allow them to onboard customers beyond their 30-set test market. The goal was to demonstrate the technology and its compatibility with the broader ecosystem. In 1980, the FCC granted a similar experimental license to CBS-affiliated KMOX in St. Louis, who started a similar pilot effort using a French system called Antiope. Over the following few years, the FCC allowed expansion of this test to other CBS affiliates including KNXT in Los Angeles.
To emphasize the educational and practical value of teletext (and no doubt attract another funding source), CBS partnered with Los Angeles PBS affiliate KCET, who carried their own teletext programming with a characteristic slant towards enrichment. Meanwhile, in Chicago, station WFLD introduced a teletext service called Keyfax, built on Ceefax technology as a joint venture with Honeywell and telecom company Centel.

Despite the lack of consumer availability, teletext was becoming a crowded field---and for the sake of narrative simplicity I am leaving out a whole set of other North American ventures right now. In 1983, there were at least a half dozen stations broadcasting teletext based on British or French technology, and yet, there were zero teletext decoders on the US market. Besides their use of an experimental license, the teletext pilot projects were constrained by the need for largely custom prototype decoders integrated into customers' television sets. Broadcast executives promised the price could come down to $25, but the modifications actually available continued to cost in the hundreds. The director of public affairs at KSL, asked about this odd conundrum of a nearly five-year-old service that you could not buy, pointed out that electronics manufacturers were hesitant to mass produce an inexpensive teletext decoder as long as it was unclear which of several standards would prevail. The reason that no one used teletext, then, was in part the sheer number of different teletext efforts underway. And, of course, things were looking pretty evenly split: CBS had fully endorsed the French-derived system, and was a major nationwide TV network. But most non-network stations with teletext projects had gone the British route. In terms of broadcast channels, it was looking about 50/50.

Further complicating things, teletext proper was not the only contender. There was also videotex. The terminology has become somewhat confused, but I will stick to the nomenclature used in the 1980s: teletext services used a continuous one-way broadcast of every page, and decoders simply displayed the requested page when it came around in the loop. Videotex systems were two-way, with the customer using a phone line to request a specific page which was then sent on-demand. Videotex systems tended to operate over telephone lines rather than television cable, but were frequently integrated into television sets. Videotex is not as well remembered as teletext because it was a massive commercial failure, with the very notable exception of the French Minitel. But in the '80s they didn't know that yet, and the UK had its own videotex venture called Prestel. Prestel had the backing of the Post Office, because they ran the telephones and thus stood to make a lot of money off of it. For the exact same reason, US telephone company GTE bought the rights to the system in the US.

Videotex is significantly closer to "the internet" in its concept than teletext, and GTE was entering a competitive market. By 1981, Radio Shack had already been selling a videotex terminal for several years, a machine originally developed as the "AgVision" for use with an experimental Kentucky agricultural videotex service and then offered nationwide. This creates an amusing irony: teletext services existed but it was very difficult to obtain a decoder to use them. Radio Shack was selling a videotex client nationwide, but what service would you use it with?
In practice, the "TRS-80 Videotex" as the AgVision came to be known was used mostly as a client for CompuServe and Dow Jones. Neither of these were actually videotex services, using neither the videotex UX model nor the videotex-specific features of the machine. The TRS-80 Videotex was reduced to just a slightly weird terminal with a telephone modem, and never sold well until Radio Shack beefed it up into a complete microcomputer and relaunched it as the TRS-80 Color Computer. Radio Shack also sold a backend videotex system, and apparently some newspapers bought it in an effort to launch a "digital edition." The only one to achieve long-term success seems to have been StarText, a service of the Fort Worth Star-Telegram. It was popular enough to be remembered by many from the Fort Worth area, but there was little national impact. It was clearly not enough to float sales of the TRS-80 Videotex and the whole thing has been forgotten. Well, with such a promising market, GTE brought its US Prestel service to market in 1982. As the TRS-80 dropped its Videotex ambitions, Zenith launched a US television set with a built-in Prestel client. Prestel wasn't the only videotex operation, and GTE wasn't the only company marketing videotex in the US. If the British Post Office and GTE thought they could make money off of something, you know AT&T was somewhere around. They were, and in classic AT&T fashion. During the 1970s, the Canadian Communications Research Center developed a vector-based drawing system. Ontario manufacturer Norpak developed a consumer terminal that could request full-color pages from this system using a videotex-like protocol. Based on the model of Ceefax, the CRC designed a system called Telidon that worked over television (in a more teletext-like fashion) or phone lines (like videotex), with the capability of publishing far more detailed graphics than the simple box drawings of teletext. Telidon had several cool aspects, like the use of a pretty complicated vector-drawing terminal and a flexible protocol designed for interoperability between different communications media. That's the kind of thing AT&T loved, so they joined the effort. With CRC, AT&T developed NABTS, the North American Broadcast Teletext Specification---based on Telidon and intended for one-way broadcast over TV networks. NABTS was complex and expensive compared to Ceefax/ORACLE/WST based systems. A review of KSL's pilot notes how the $40,000 budget for their origination system compared to the cost quoted by AT&T for an NABTS headend: as much as $2 million. While KSL's estimates of $25 for a teletext decoder had not been achieved, the prototypes were still running cheaper than NABTS clients that ran into the hundreds. Still, the graphical capabilities of NABTS were immediately impressive compared to text-only services. Besides, the extensibility of NABTS onto telephone systems, where pages could be delivered on-demand, made it capable of far larger databases. When KSL first introduced teletext, they spoke of a scheme where a customer could call a phone number and, via DTMF menus, request an "extended" page beyond the 120 normally transmitted. They could then request that page on their teletext decoder and, at the end of the normal 120 page loop, it would be sent just for them. I'm not sure if that was ever implemented or just a concept. In any case, videotex systems could function this way natively, with pages requested and retrieved entirely by telephone modem, or using hybrid approaches. 
NABTS won the support of NBC, who launched a pilot NABTS service (confusingly called NBC Teletext) in 1981 and went into full service in 1983. CBS wasn't going to be left behind, and trialed and launched NABTS (as CBS ExtraVision) at the same time. That was an ignominious end for CBS's actual teletext pilot, which quietly shut down without ever having gone into full service. ExtraVision and NBC Teletext are probably the first US interactive TV services that consumers could actually buy and use.

Teletext was not dead, though. In 1982, Cincinnati station WKRC ran test broadcasts for a WST-based teletext service called Electra. WKRC's parent company, Taft, partnered with Zenith to develop a real US-market consumer WST decoder for use with the Electra service. In 1983, the same year that ExtraVision and NBC Teletext went live, Zenith teletext decoders appeared on the shelves of Cincinnati stores. They were plug-in modules for recent Zenith televisions, meaning that customers would likely also need to buy a whole new TV to use the service... but it was the only option, and seems to have remained that way for the life of US teletext. I believe that Taft's Electra was the first teletext service to achieve a regular broadcast license.

Through the mid 1980s, Electra would expand to more television stations, reaching similar penetration to the videotex services. In 1982, KeyFax (remember KeyFax? it was the one on WFLD in Chicago) had made the pivot from teletext to videotex as well, adopting the Prestel-derived technology from GTE. In 1984, KeyFax gave up on their broadcast television component and became a telephone modem service only. Electra jumped on the now-free VBI lines of WFLD and launched in Chicago. WTBS in Atlanta carried Electra, and then in the biggest expansion of teletext, Electra appeared on SPN---a satellite network that would later become CNBC. While major networks, and major companies like GTE and AT&T, pushed for the videotex NABTS, teletext continued to have its supporters among independent stations. Los Angeles's KTTV started its own teletext service in 1984, which combined locally-developed pages with national news syndicated from Electra. This seemed like the start of a promising model for teletext across independent stations, but it wasn't often repeated. Oh, and KSL? At some point, uncertain to me but before 1984, they switched to NABTS.

Let's stop for a moment and recap the situation. Between about 1978 and 1984, over a dozen major US television stations launched interactive TV offerings using four major protocols that fell into two general categories. One of those categories was one-way over television, while the other was two-way over telephone or one-way over television, with some operators offering both. Several TV stations switched between types. The largest telcos and TV networks favored one option, but it was significantly more expensive than the other, leading smaller operators to choose differently. The hardware situation was surprisingly straightforward in that, within teletext and videotex, consumers only had one option and it was very expensive.

Oh, and that's just the technical aspects. The business arrangements could get even stranger. Teletext services were generally free, but videotex services often charged a service fee. This was universally true for videotex services offered over telephone and often, but not always, true for videotex services over cable. Were the videotex services over cable even videotex?
doesn't that contradict the definition I gave earlier? is that why NBC called their videotex service teletext? And isn't videotex over telephone barely differentiated from computer-based services like CompuServe and The Source that were gaining traction at the same time?

I think this all explains the failure of interactive TV in the 1980s. As you've seen, it's not that no one tried. It's that everyone tried, and they were all tripping over each other the entire time. Even in Canada, where the government had sponsored development of the Telidon system ground-up to be a nationwide standard, the influence of US teletext services created similar confusion. For consumers, there were so many options that they didn't know what to buy, and besides, the price of the hardware was difficult to justify with the few stations that offered teletext. The fact that teletext had been hyped as the "next big thing" by newspapers since 1978, and only reached the market in 1983 as a shambled mess, surely did little for consumer confidence.

You might wonder: where was the FCC during this whole thing? In the US, we do not have a state broadcaster, but we do have state regulation of broadcast media that is really quite strict as to content and form. During the late '70s, under those first experimental licenses, the general perception seemed to be that the FCC was waiting for broadcasters to evaluate the different options before selecting a nationwide standard. Given that the FCC had previously dictated standards for television receivers, it didn't seem like that far of a stretch to think that a national-standard teletext decoder might become mandatory equipment on new televisions.

Well, it was political. The long, odd experimental period from 1978 to 1983 was basically a result of FCC indecision. The commission wasn't prepared to approve anything as a national standard, but the lack of approval meant that broadcasters weren't really allowed to use anything outside of limited experimental programs. One assumes that they were being aggressively lobbied by every side of the debate, which no doubt factored into the FCC's 1981 decision that teletext content would be unregulated, and 1982 statements from commissioners suggesting that the FCC would not, in fact, adopt any technical standards for teletext.

There is another factor wrapped up in this whole story, another tumultuous effort to deliver text over television: closed captioning. PBS introduced closed captioning in 1980, transmitting text over line 21 of the VBI for decoding by a set-top box. There are meaningful technical similarities between closed captioning and teletext, to the extent that the two became competitors. Some broadcasters that added NABTS dropped closed captioning because of incompatibility between the equipment in use. This doesn't seem to have been a real technical constraint, and was perhaps more likely cover for a cost-savings decision, but it generated considerable controversy that led to the National Association for the Deaf organizing for closed captioning and against teletext.

The topic of closed captioning continued to haunt interactive TV. TV networks tended to view teletext or videotex as the obvious replacements for line 21 closed captioning, due to their more sophisticated technical features. Of course, the problems that limited interactive TV adoption in general, high cost and fragmentation, made it unappealing to the deaf.
Closed captioning had only just barely become well-standardized in the mid-1980s and its users were not keen to give it up for another decade of frustration. While some deaf groups did support NABTS, the industry still set up a conflict between closed captioning and interactive TV that must have contributed to the FCC's cold feet.

In April of 1983, at the dawn of US broadcast teletext, the FCC voted 6-1 to allow television networks and equipment manufacturers to support any teletext or videotex protocol of their choice. At the same time, they declined to require cable networks to carry teletext content from broadcast television stations, making it more difficult for any TV network to achieve widespread adoption [3]. The FCC adopted what was often termed a "market-based" solution to the question of interactive TV.

The market would not provide that solution. It had already failed. In November of 1983, Time ended their teletext service. That's right, Time used to have a TV network and it used to have teletext; it was actually one of the first on the market. It was also the first to fall, but they had company. CBS and NBC had significantly scaled back their NABTS programs, which were failing to make any money because of the lack of hardware that could decode the service. On the WST side of the industry, Taft reported poor adoption of Electra and Zenith reported that they had sold very few decoders, so few that they were considering ending the product line. Taft was having a hard time anyway, going through a rough reorganization in 1986 that seems to have eliminated most of the budget for Electra. Electra actually seems to have still been operational in 1992, an impressive lifespan, but it says something about the level of adoption that we have to speculate as to the time of death. Interactive TV services had so little adoption that they ended unnoticed, and by 1990, almost none remained.

Conflict with closed captioning still haunted teletext. There had been some efforts towards integrating teletext decoders into TV sets, by Zenith for example, but in 1990 line 21 closed caption decoding became mandatory. The added cost of a closed captioning decoder, and the similarity to teletext, seems to have been enough for the manufacturing industry to decide that teletext had lost the fight. Few, possibly no, teletext decoders were commercially available after that date.

In Canada, Telidon met a similar fate. Most Telidon services were gone by 1986, and it seems likely that none were ever profitable. On the other hand, the government-sponsored, open-standards nature of Telidon meant that it and descendants like NABTS saw a number of enduring niche uses. Environment Canada distributed weather data via a dedicated Telidon network, and Transport Canada installed Telidon terminals in airports to distribute real-time advisories. Overall, the Telidon project is widely considered a failure, but it has had enduring impact. The original vector drawing language, the idea that had started the whole thing, came to be known as NAPLPS, the North American Presentation Layer Protocol Syntax. NAPLPS had some conceptual similarities to HTML, as Telidon's concept of interlinking did to the World Wide Web. That similarity wasn't just theoretical: Prodigy, the second largest information service after CompuServe and first to introduce a GUI, ran on NAPLPS. Prodigy is now viewed as an important precursor to the internet, but seen in a different light, it was just another videotex---but one that actually found success.
I know that there are entire branches of North American teletext and videotex and interactive TV services that I did not address in this article, and I've become confused enough in the timeline and details that I'm sure at least one thing above is outright wrong. But that kind of makes the point, doesn't it? The thing about teletext here is that we tried, we really tried, but we badly fumbled it. Even if the internet hadn't happened, I'm skeptical that interactive television efforts would have gotten anywhere without a complete fresh start. And the internet did happen, so abruptly that it nearly killed the whole concept while television carriers were still tossing it around.

Nearly killed... but not quite. Even at the beginning of the internet age, televisions were still more widespread than computers. In fact, from a TV point of view, wasn't the internet a tremendous opportunity? Internet technology and more compact computers could enable more sophisticated interactive television services at lower prices. At least, that's what a lot of people thought. I've written before about Cablesoft (https://computer.rip/2024-11-23-cablesoft.html) and it is just one small part of an entire 1990s renaissance of interactive TV. There's a few major 1980s-era services that I didn't get to here either. Stick around and you'll hear more.

You know what's sort of funny? Remember the AgVision, the first form of the TRS-80? It was built as a client for AGTEXT, a joint project of Kentucky Educational Television (who carried it on the VBI of their television network) and the Kentucky College of Agriculture. At some point, AGTEXT switched over to the line 21 closed captioning protocol and operated until 1998. It was almost the first teletext service, and it may very well have been the last.

[1] There's this weird thing going on where I keep tangentially bringing up Bonnevilles. I think it's just a coincidence of what order I picked topics off of my list but maybe it reflects some underlying truth about the way my brain works. This Bonneville, Bonneville International, is a subsidiary of the LDS Church that owns television and radio stations. It is unrelated, except by being indirectly named after the same person, to the Bonneville Power Administration that operated an early large-area microwave communications network.

[2] There were of course large TV networks in the US, and they will factor into the story later, but they still relied on a network of independent but affiliated stations to reach their actual audience---which meant a degree of technical inconsistency that made it hard to roll out nationwide VBI services. Providing another hint at how centralization vs. decentralization affected these datacasting services, adoption of new datacasting technologies in the US has often been highest among PBS and NPR affiliates, our closest equivalents to something like the BBC or ITV.

[3] The regulatory relationship between broadcast TV stations, cable network TV stations, and cable carriers is a complex one. The FCC's role in refereeing the competition between these different parts of the television industry, which are all generally trying to kill each other off, has led to many odd details of US television regulation and some of the everyday weirdness of the American TV experience.
It's also another area where the US television industry stands in contrast to the European television industry, where state-owned or state-chartered broadcasting meant that the slate of channels available to a consumer was generally the same regardless of how they physically received them. Not so in the US! This whole thing will probably get its own article one day.

Reverse-engineering Roadsearch Plus, or, roadgeeking with an 8-bit CPU

My other irredeemably nerdy habit is roadgeeking, exploring and mapping highways both old and new, and it turns out that 8-bit roadgeeking on ordinary home computers was absolutely possible. For computers of this class, devising an optimal highway route becomes an exercise not only in how to encode sufficient map data to a floppy disk, but also performing efficient graph traversal with limited hardware. Today we'll explore Roadsearch-Plus, one of the (if not the) earliest such software packages — primarily on the Commodore 64, but originating on the Apple II — and at the end "drive" all the way from southern California to British Columbia along US Highway 395, my first long haul expedition, but as it was in 1985. Buckle up while we crack the program's runtime library, extract its database, and (working code included) dive deeply into the quickest ways to go from A to B using a contemporary home computer.

Although this article assumes a little bit of familiarity with the United States highway system, I'll provide a 30-second version. The top-tier national highway network is the 1956 Eisenhower Interstate System (abbreviated I-, such as I-95), named for president Dwight D. Eisenhower who promulgated it, signed with red, white and blue shields. Nearly all of its alignments, which is to say the physical roads composing it, are grade-separated full freeway. It has come to eclipse the 1926 United States Numbered Highway System (abbreviated US, such as US 395), a nationally-numbered grid system of highways maintained by the states, albeit frequently with federal funding. Signed using a horned white shield, these roads vary from two-lane highway all the way to full freeway and may be multiplexed (i.e., multiply signed) with other US highways or Interstates in many areas. While they are no longer the highest class of U.S. national road, they nevertheless remain very important for regional links especially in those areas that Interstates don't service. States and counties maintain their own locally allocated highway systems in parallel. Here is a glossary of these and other roadgeek terms.

Geographic information systems (GIS) started appearing in the 1960s, after Waldo Tobler's 1959 "Automation and Cartography" paper about his experience with the military Semi-Automatic Ground Environment system. SAGE OA-1008 displays relied on map transparencies developed manually but printed with computers like the IBM 704. Initially such systems contained only geographic features like terrain and coastlines and specific points of interest, but support for highways as a layer or integral component was rapidly implemented for land use applications, and such support became part of most, if not all, mainframe and minicomputer GIS systems by the late 1970s. However, these systems generally only handled highways as one of many resource or entity types; rarely were there specific means of using them for navigational purposes.

In 1981, Honda introduced the Electro-Gyrocator. Because Global Positioning System (GPS) satellite data was not then available for civilian applications and other radio navigational systems like LORAN were hard to reliably receive in canyons or tunnels, it relied on its own internal gas gyroscope to detect rotation and movement, aided by a servo in the car's transmission. The Electro-Gyrocator used a Texas Instruments TMS9980 (a derivative of the 16-bit TMS9900 in the TI-99/4A but with an 8-bit data bus) as its CPU and a sidecar TMS9901 for I/O.
It had 10K of ROM, 1K of SRAM and 16K of DRAM, hopeless for storing map data of any consequence, so the actual maps were transparencies too; the 9980 integrated sensor data provided by the 9901 from the gyroscope and servo to plot the car's course on a small 6" CRT behind the map overlay. The user was expected to set the starting location on the map before driving and there was likewise no provision for routing. It was only made available that year for ¥300,000 (about US$2900 in 2025 dollars at current exchange rates) on the JDM Honda Accord and Honda Vigor, and discontinued by 1982.

There were also a few early roadgeek-oriented computer and console games, which I distinguish from more typical circuit or cross-country racers by an attempt to base them on real (not fictional) roads with actual geography. One I remember vividly was Imagic's Truckin', a Mattel Intellivision-exclusive game from 1983 which we played on our Tandyvision One. Truckin' predates Juggernaut for the ZX Spectrum by at least two years, and Juggernaut uses a completely fictitious game world instead. Besides very terse prompts and a heavily compressed internal routing table, the game makes all highway numbers unique and has no multiplexes or three-digit auxiliary Interstates, and while you can drive into Canada you can't drive into Mexico (admittedly it was pre-NAFTA). Additionally, for gameplay reasons every highway junction is a named "city," introducing irregularities like Interstate 70 ending in Provo, UT when it really ends about 130 miles south, or Interstate 15 ending in El Cajon, CA, and many cities and various two-digit primary Interstates are simply not included (e.g., I-12, I-19, I-30, I-85, I-93, etc.). As a result of these constraints, among other inaccuracies Interstate 90 terminates in Madison, WI at I-94 (not Boston), I-94 terminates in Milwaukee at I-55 (not Port Huron, MI), and I-40 is extended west from its true terminus in Barstow, CA along real-world California State Highway 58 to intersect I-5 "in Bakersfield" (real Interstate 5 is around twenty miles away). Still, it contains an extensive network of real highways with their lengths and control cities, managing it all on an early console platform with limited memory, while simultaneously supporting full 1983-level gameplay.

The 8-bit home computer was ascendant during the same time and at least one company perceived a market for computer-computed routes for vacations and business trips. I can't find any references to an earlier software package of this kind for this class of computer, at least in the United States, so we'll call it the first. The only trace I can find of Columbia Software's proprietor is an entry in The Software Writer's Marketplace from 1984. If this is the same person, he later appears in a 2001 document as the Ground and Mission Systems Manager at the NASA Goddard Space Flight Center in Greenbelt, Maryland, about a half hour's drive away. Columbia Software does not appear in any magazines prior to 1982 nor does it appear after 1985, though no business record under that name is known to either the state of Maryland or the state of Delaware.

The Apple II disk's catalog includes files named T%(), N%() and B%(), which sound like Applesoft BASIC integer arrays. (The file MPH just contains the MPG and MPH targets in text.) In fact, there are actually two compiled programs present, ROADSEARCH (the main executable) and DBA (the distance editor), and using these "variable" files allows both programs to keep the memory image of the arrays, no doubt the representation of its map database, consistent between them.
You can start either compiled program by BRUNning them, as HELLO does. Exactly which compiler we can also make a fairly educated guess about: there were only a few practical choices in 1981-82 when it was probably written, and most of them we can immediately eliminate due to what they (don't) support or what they (don't) generate. For example, the program being almost certainly Applesoft BASIC obviously eliminates any Integer BASIC compiler. It also can't be On-Line Systems' Expediter II because that generates "Applesoft BASIC" programs with a single CALL statement instead of binaries that are BRUN, and it probably isn't Southwestern Data Systems' Speed Star because of the apparent length of the program and that particular compiler's limited capacity. That leaves the two major compilers of this era, Microsoft's TASC ("The Applesoft Compiler"), which was famously compiled with itself, and Hayden Book Company's Applesoft Compiler. TASC uses a separate runtime that must be BLOADed before BRUNning the main executable, which HELLO doesn't do. Hayden's compiler is thus the most likely tool, and this theory is buoyed by the fact that this compiler does support variable sharing between modules.

If we run strings on the DOS 3.3 .dsk image, accounting for the fact that the disk interleave will not necessarily put lines in sequential order, we can pick out the strings of the program as well as a number of DOS 3.3 file manager commands, sometimes in pieces, such as B%(),A30733,L5448 which is probably part of a BSAVE command, or BLOAD B%(),A30733. The disk also has examples of both kinds of DOS 3.3 text files, both sequentially accessed (such as OPEN T%(), READ T%() and CLOSE T%()) but also the less commonly encountered random access (with explicitly specified record numbers and lengths such as OPEN ROADS,L12 and OPEN ROADMAP CITIES,L19, then READ ROADS,R and READ ROADMAP CITIES,R which would be followed by the record number). For these random access files, given a record length and record number, the DOS track-and-sector list is walked to where that record would be and only the necessary sector(s) are read to construct and return the record. We can see the contents with a quick Perl one-liner to strip off the high bit, feeding the result to strings. Again note that the order is affected by the disk interleave, but the file is stored alphabetically (we'll extract this file properly in a moment). Another interesting string I found this way was "TIABSRAB WS AIBMULOC" which is COLUMBIA SW BARSBAIT backwards. Perhaps someone can explain this reference.

In a hex editor, the city database shows a regularly repeating 19-byte record format for the names. Remember that the characters are stored with the high bit set. This is not a very efficient means of storage especially considering DOS 3.3 only had 124K free per disk side (after DOS itself and filesystem overhead), but it would be a lot easier for an Applesoft BASIC program to handle since the record lookup work could be shunted off to DOS 3.3 and performed quickly. Also, while you can list the cities and junctions from the menu, they are not indexed by state, only by first letter.

To extract the list, we can dump ROADMAP CITIES to the emulator's virtual printer. We know its record length because it was stored as a string in the compiled BASIC text we scanned previously. That, in turn, gives us a text file we can read on the Mac side. It is, as expected, 406 lines long. The same process gets us the ROADS list, which is 171 lines long.
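If you're working from a raw dump of the file instead of the virtual printer, the extraction boils down to clearing the high bit and carving fixed-length records. Here's a rough Python equivalent of the Perl one-liner described above; the input filename is hypothetical.

# Strip the Apple II high bit and carve out fixed-length records.
RECORD_LEN = 19  # 19-byte records in the city database

with open("ROADMAP_CITIES.bin", "rb") as f:  # hypothetical raw dump of the file
    data = bytes(b & 0x7f for b in f.read())  # clear the high bit

for i in range(0, len(data) - RECORD_LEN + 1, RECORD_LEN):
    record = data[i:i + RECORD_LEN]
    # Records are ASCII names padded out to the record length.
    print(record.decode("ascii", "replace").rstrip("\x00\r "))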
Keep in mind that's just a list of the names of the roads; it does not contain information on the actual highway segments between points.

The T%() array turns out to be a first-letter index: for each letter index 1-26, the value is the first record number for all cities starting with that letter. As we saw previously, the cities are already sorted on disk to facilitate this. There are no cities starting with Z, so that letter just goes to the end of the file (non-existent record 407) and terminates. That still leaves B%() and N%(), but we'll solve that problem a little later on.

The Commodore 64 port arrived in 1984, followed by a 1985 release with an expanded database and updated routing information. However, this release appears to be specific to the Commodore port — if there were a 1985 map update for the Apple II, it has yet to surface — and I can find no subsequent release of Roadsearch after 1985 on any platform. Both C64 versions came in the same Roadsearch and Roadsearch-Plus variants for the same prices, so the iteration we'll explore here is Roadsearch-Plus with the presumed final 1985 database. For purposes of emulation we'll use VICE's SuperCPU spin with the 65816 enabled as some number crunching is involved, but I also tested it on my real Commodore SX-64 and 128DCR for veracity. (Running xscpu64 in warp mode will absolutely tear through any routing task, but I strongly recommend setting location 650 to 64 in the VICE monitor to disable key repeat.) It's worth noting that the various circulating 1541 disk images of the 1984 and 1985 editions were modified to add entries by their previous owners, though we'll point out how these can be detected and reversed. For much of this article I'll be running a 1985 disk that I manually cleared in a hex editor to what I believe was its original contents.

The disk contains the main program (ROADSEARCH+), the editor (IDBA) and two RELative files for the CITIES and ROADS. RELative files are functionally equivalent to DOS 3.3 random access files, being a record-based file format with a fixed size and an index (made up of "side sectors"). They are an uncommon sight in commercial software due to their idiosyncrasies and a few outright bugs, and they don't work well for binary data or support variable record sizes, which is why Berkeley Softworks came up with VLIR for GEOS instead. On the other hand, they do just fine for text strings and the lookup can be very fast.

Dumping the raw sectors of the cities file from the .d64 disk image, the directory entry indicates a file with 31-byte records, the first side sector at track 19 sector 10 and the first data sector at track 19 sector 0. Other than the obvious typo in Abilene, TX, it is the same basic format as the Apple version and also sorted, ending each string with a carriage return. As what I assume is a method of preventing its contents from being trivially dumped, a naive read of each record won't yield anything useful, because every record starts with an $ff and a whole bunch of nulls, which Commodore DOS interprets as the end. The actual string doesn't start until offset 12. The same basic idea holds for the roads file: the same offset trick is used, but here the records are 24 bytes since route names are shorter. Again, this isn't a particularly efficient storage mechanism, but we have over 165K available on a formatted disk, and RELative file access to any arbitrary record is quite quick. Despite the presence of side sectors, the actual records of a RELative file are still sequentially stored on disk with the usual forward track and sector pointers. As such, we don't need to grab the side sectors to simply extract its contents.
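Once the raw record bytes are off the disk (more on that in a moment), decoding them is trivial. A sketch in Python under the assumptions just described — 31-byte records, text starting at offset 12, carriage-return terminated — with a hypothetical input filename:

RECORD_LEN = 31     # CITIES; the ROADS file uses 24-byte records
STRING_OFFSET = 12  # records begin with $ff and nulls; the text starts here

with open("cities.raw", "rb") as f:  # hypothetical raw dump of the REL records
    data = f.read()

for i in range(0, len(data) - RECORD_LEN + 1, RECORD_LEN):
    rec = data[i:i + RECORD_LEN]
    name = rec[STRING_OFFSET:].split(b"\r")[0]  # strings end with a CR
    if name:
        print(name.decode("ascii", "replace"))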
For some period of time the c1541 tool from the VICE suite would not copy REL files and this was only recently fixed, so here is a Perl script I threw together to iterate over a D64 disk image and transfer a file to standard output, either by name or if you specify a starting track and sector. Because this yanks the files "raw," we will then need to strip them down. Feed the extracted REL records to a second script and you'll get a text file.

You'll notice that both the cities and roads lists are, or at least start out, sorted. All C64 Roadsearch images I've seen circulating so far have been altered by their previous owners to add their own local roads and cities, and their changes can be easily distinguished at the point where the lists abruptly go out of alphabetical order.

There is also a pair of SEQuential files, one named MPH, serving the same function as before of storing the preferred speed and fuel economy, and a second one simply named M. This file contains four ASCII numbers, one obviously recognizable as our serial number, though this is only one of the two places the program checks. The others are the number of cities (406, or 487 in the 1985 version), number of roads (215 in 1985), and a third we don't yet know the purpose of. You can confirm the first two by checking them against the number of lines in the files you extracted. What we don't see are files for the arrays we spotted on the Apple disk. The only file left big enough to account for those is MATS. To figure out how that works, we should start digging into the program.

The disk also holds RTL-64, a runtime library and the telltale sign of a program compiled with DTL BASIC. DTL BASIC, formally DTL-BASIC 64 Jetpack, became extremely popular with developers not just due to its good performance and compatibility but also because it required no royalties to sell programs compiled with it as long as credit was given. An optional "protector" version can obfuscate the program and/or require a hardware dongle, though this version is rarer due to its expense (and fortunately was not used here). The runtime library slots into the RAM under the BASIC ROM, so there is no obvious loss of free memory. DTL stood for "Drive Technology Ltd." and was written by David Hughes in the UK, first for the PET; the compiler is notable for being compiled with itself, like Microsoft TASC, and using the same "RTL" runtime library and protection system (and, obnoxiously, a dongle) as the object code it generates. The 64 tape version is substantially less capable than the disk one.

DTL BASIC compiles to its own bespoke P-code which is executed by the RTL. It achieves its speed through a greater degree of precalculation and preparsing (e.g., pre-resolving values and line numbers, removal of comments, etc.), a custom garbage collection routine, and also, where possible, the use of true signed 16-bit integer math. This is a substantial speed-up over most Microsoft-derived BASICs, in which Microsoft Binary Format floating point is the native system for all calculations, to the point where integer variables must first be converted to floating point, the computation performed, and then converted back. Ordinarily this double conversion would make integer variables useful only for smaller arrays in memory, because it makes them slower than regular floating point variables. However, DTL BASIC does perform true integer math without conversion, first for all variables explicitly declared as integer, and even autoconverting other variables at compile time with a directive (pragma).
Although some private work on a decompiler reportedly exists, to the best of my knowledge that decompiler remains unfinished and unavailable. Interestingly, what I presume is the earlier 1984 disk image has some ghost directory entries, two each for the source BASIC programs and their "symbol tables" used by ERROR LOCATE for runtime errors. Sadly these source files were overwritten and cannot be resurrected, and even the ghost entries were purged on the second 1984 image. However, two aspects of the compiler make it possible to recover at least a portion of the original program text by hand.

The first is that not all statements of the original program are fully converted to P-code: in particular, READ/DATA, OPEN, INPUT/INPUT#, DIM, PRINT/PRINT# and possibly others are preserved nearly in their original BASIC form, including literal strings and most usefully variable names. For example, if we pull the compiled ROADSEARCH+ off the disk image and run it through strings, we see text not unlike a BASIC program.

DTL-compiled programs always start with a SYS 2073 to jump into a small machine-code subroutine linked into the object. This section of code loads the RTL (the filename follows) and has other minor housekeeping functions, including one we'll explain shortly when we break into the program while it's running. It resides here so that it can bank BASIC ROM in or out as necessary without crashing the computer. Following that is an incomplete machine language subroutine in a consolidated DATA statement. Disassembly of the fragment shows it's clearly intended to run from $c000, but at least part of it is missing. Of the portion we can see here, however, there are calls for what appears to be a Kernal load, so if we drop a breakpoint on $ffba in VICE (the Kernal SETLFS routine) we can run the 6502's call stack back to see the full routine and the filename (at $c0e1) it's trying to access: MATS. It loads it straight into memory in the middle of the available BASIC range.

After that, we now see the arrays we saw in the Apple II version, but more importantly they're clearly part of DIM statements, so we can also see their dimensions. We already knew T%() was likely to be only 26 (27 counting the zero index) integers long from dumping the Apple version's contents, but the N%() array has up to 757 entries of five fields each, and B%() is even bigger, with 1081 records of four fields each. This is obviously where our map data is stored, and MATS is the file it seems to be loading to populate them.

This brings us to the second aspect of DTL Jetpack that helps to partially decipher the program: to facilitate some reuse of the BASIC ROM, the generated code still creates and maintains BASIC-compatible variables which we can locate in RAM. So that we know what we're dealing with, we need to figure out some way to stop the program while it's running to examine its state. Naturally, the compiler offers a means to defeat exactly that.
; warm start (2061)
.C:080d 20 52 08 JSR $0852
.C:0810 4C 2B A9 JMP $A92B
; not called?
.C:0813 20 52 08 JSR $0852
.C:0816 4C 11 A9 JMP $A911
; cold start (2073)
; load the RTL if flag not set, then start P-code execution
.C:0819 20 52 08 JSR $0852
.C:081c AD FF 02 LDA $02FF
.C:081f C9 64 CMP #$64
.C:0821 F0 17 BEQ $083A
.C:0823 A9 3F LDA #$3F
.C:0825 85 BB STA $BB
.C:0827 A9 08 LDA #$08
.C:0829 85 BC STA $BC
.C:082b A9 06 LDA #$06
.C:082d 85 B7 STA $B7
.C:082f A9 00 LDA #$00
.C:0831 85 B9 STA $B9
.C:0833 A2 00 LDX #$00
.C:0835 A0 A0 LDY #$A0
.C:0837 20 D5 FF JSR $FFD5
.C:083a 68 PLA
.C:083b 68 PLA
.C:083c 4C 48 A0 JMP $A048
.C:083f .asc "RTL-64"
; seems to get called on a crash or error
.C:0845 20 52 08 JSR $0852
.C:0848 20 2F A0 JSR $A02F ; jumps to $a948
.C:084b 20 5F 08 JSR $085F
.C:084e 60 RTS
; dummy STOP routine
.C:084f A2 01 LDX #$01
.C:0851 60 RTS
; bank out BASIC ROM
.C:0852 A9 03 LDA #$03
.C:0854 05 00 ORA $00
.C:0856 85 00 STA $00
.C:0858 A9 FE LDA #$FE
.C:085a 25 01 AND $01
.C:085c 85 01 STA $01
.C:085e 60 RTS
; bank in BASIC ROM
.C:085f 48 PHA
.C:0860 A9 03 LDA #$03
.C:0862 05 00 ORA $00
.C:0864 85 00 STA $00
.C:0866 A5 01 LDA $01
.C:0868 09 01 ORA #$01
.C:086a 85 01 STA $01
.C:086c 68 PLA
.C:086d 60 RTS
; execute next statement (using BASIC ROM)
.C:086e 20 5F 08 JSR $085F
.C:0871 20 ED A7 JSR $A7ED
.C:0874 20 52 08 JSR $0852
.C:0877 60 RTS

The compiler provides DS and ES directives which disable and enable RUN/STOP respectively. If we scan the RTL for code that modifies $0328 and $0329, where the STOP routine is vectored, we find the segment responsible. $f6ed is the normal value for the STOP vector, so we can assume that the routine at $b7c9 (which calls the routine at $b7da to set it) enables RUN/STOP, and thus the routine at $b7e5 disables it. Both routines also twiddle a byte at $a822. By default the byte at $a822 is $ea, a 6502 NOP. This falls through to checking $91 for the state of the STOP key at the last time the keyboard matrix was scanned and branching accordingly. When the STOP routine is revectored, at the same time the byte at $a822 is changed to $60 RTS so that the STOP key check is never performed. (The RTL further achieves additional speed here by only doing this check on NEXT and IF statements even if the check is enabled.)

The simplest way to deal with this is to alter RTL-64 with a hex editor and turn everything from $b7e5 to $b7ec inclusive into NOPs. This turns the DS directive into a no-op as well, and now we can break out of the program by mashing RUN/STOP, though we'll do this after MATS is loaded. Parenthetically, CONT won't resume the compiled program; use SYS 2061 (the warm start) rather than SYS 2073.
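Here's that patch expressed as a few lines of Python, under the assumption that RTL-64 has been extracted as a standard PRG file whose two-byte header carries its $a000 load address (the filename is hypothetical):

# Patch the RUN/STOP-disable routine in RTL-64 to NOPs.
LOAD_ADDR = 0xA000           # RTL-64 loads into the RAM under BASIC ROM
START, END = 0xB7E5, 0xB7EC  # range to fill, inclusive

with open("rtl-64.prg", "rb") as f:  # hypothetical extracted PRG file
    data = bytearray(f.read())

# The first two bytes of a PRG file are the load address, so
# file offset = memory address - load address + 2.
for addr in range(START, END + 1):
    data[addr - LOAD_ADDR + 2] = 0xEA  # 6502 NOP

with open("rtl-64-patched.prg", "wb") as f:
    f.write(data)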
26 plus the zeroth element), and then each value. In multidimensional arrays, each dimension is run out fully before moving to the next. We can easily pick out the values we saw for T%() in the above dump, but more importantly we now have a long unambiguous 61-byte key we can search for. The entire sequence shows up in MATS, demonstrating that the file is in fact nothing more than an in-place memory dump of the program arrays' contents. Rather than manually dump the other two arrays from BASIC, we can simply walk MATS and pull the array values directly. This Perl script walks a memory dump of arrays (currently only two-dimensional integer arrays are implemented because that's all we need for this project). It skips the starting address and then writes out tab-separated values into files named by the arrays it finds. Unlike their storage in memory, where values go (0,0), (1,0), (2,0) ... (0,1), (1,1) and so on, this pivots the arrays horizontally so you get (0,0), (0,1), (0,2), etc., grouped into lines as "rows." B%() is the easiest to grok. Ignoring index 0, here is an extract from the file: B%() is referenced directly in one PRINT statement when displaying outgoing roads from a particular city: Let's take city record number 1, which is the misspelled (but only in the 1985 version) ABILINE TX (that is, Abilene, Texas). The develop-a-route feature will list all connected cities. For Abilene, there are five in the database. PRINTed. PRINTs above that B%(I,3) represents the mile length of the current connecting segment indexed by I (here 39), which would make B%() the array containing the connecting roads between cities and junctions. We also know it gets R$ and C$ from the RELative files ROADS and CITIES on each row. Helpfully, it tells us that FORT WORTH TX is city number 115 (it is), which we can see is B%(I,1). B%(I,2) is 1, which must be us, leaving B%(I,0) as the route number which must be the record number for Interstate 20 in ROADS. If we grab line 39 from out-B% (the value of index I), we do indeed see these same values. However, we can now do the search ourselves by just hunting for any record containing city 1 in columns 1 or 2 of our dumped array file (^I is a TAB character): Or, to show the correspondence more plainly: B%(). N%(), on the other hand, is a little harder to crack. Other than its apparent DIM statement at the beginning (as shown above), it is never displayed directly to the user and never appears again in plaintext in the compiled object. Here are the first few entries of the array: The first record (index 0) is the second place where the serial number is stored, though only the map editor references it. Not counting index 0 the file has exactly one record for every city or junction, so it must correspond to them somehow. Otherwise, column 3 (except for index 0) is always an unusual value 32,766, near the positive maximum for a 16-bit integer variable, and columns 4 and 5 are always zero (note from the future: this is not always true for routes which are added in the editor). Additionally, columns 1 and 2 have an odd statistical distribution where the first is always between 4499 and 8915 and the second between 11839 and 21522. There are no negative values anywhere in the file, and the first three columns are never zero (other than index 0). Whatever they are, they are certainly not random. The meaning of the values in this array managed to successfully evade my understanding for awhile, so in the meantime I turned to figuring out how Roadsearch does its routing. 
The theorists reading this have already internalized this part as an exercise in graph traversal, specifically finding the shortest path. Abstractly, the cities and junctions can be considered the nodes or vertices of a graph. These nodes are enumerated by name in CITIES with additional, currently opaque, metadata in N%(). The nodes are then connected with edges, which are weighted by their mile length; these edges are the highway alignments listed with termini and length in B%() and named in ROADS. The program appears to treat all edges as bidirectional, going both to and from a destination, which also makes the graph undirected. Because of the way the database (and indeed the nature of the American highway network) is constructed, all nodes are eventually reachable from any other node.

For a first analysis I presented Roadsearch with a drive I know well, having done it as part of longer trips many times in both directions: Bishop, California (city number 31) to Pendleton, Oregon (city number 338). This run can be traveled on a single highway number, namely US Highway 395, and it is also the shortest route. I have not indicated any roads that should be removed, so it will use everything in its database.

five seconds. No, warp mode was not on. Take note of the progress display showing miles traveled. This is the first milepoint that appears. If we compare the edges (highway alignments) directly leaving Bishop and Pendleton, an edge with a weight (length) of 198 miles only shows up leaving Pendleton, suggesting the program works the routing backwards:

The classic approach here is Edsger Dijkstra's algorithm, conceived in 1956 and published in 1959, an independent reimplementation of Vojtěch Jarník's 1930 minimum spanning tree algorithm that was also separately rediscovered and published by Robert C. Prim in 1957. In broad strokes, the algorithm works by building a tree out of the available nodes, putting them all into a queue. It keeps an array of costs for each node, initially set to an "infinite" value for all nodes except for the first node to be examined, traditionally the start point. It then repeatedly iterates over the queue: of all current nodes in the queue, the lowest cost node is pulled out (so, first out, this will be the start point) and its edges are examined, selecting and marking any edges where the sum of the current node's cost and the cost of the edge (the mile length) is less than the current cost of the node it connects to (which, first out, will be "infinite"). The nodes connected by these marked edges take the current node as their parent, constructing the tree, and store the new lower cost value. Once the queue is empty, the algorithm halts and the tree is walked backwards from the target to the start point, accumulating the optimal route using the parent pointers.

This Perl script accepts two city numbers and will generate the optimal route between them using the Roadsearch database with Dijkstra's algorithm. It expects cities, roads (both converted to text, not the raw RELative files) and out-B% to be in the same directory. We build arrays of @cities and @roads, and turn B%() into @edges and @a (for arcs) for expedience. We then walk down the nodes and build the tree in @s, noting which arc/edge was used in @prou so that we can look it up later; at the end we walk the parent pointers back and do all the dereferencing to generate a human-readable route. As a check we also dump @s, which indexes @edges.
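A minimal reconstruction of that router under the description above might look like this; the file loading mirrors the A-star listing further down, and any name the description doesn't supply (such as @parent) is my own:

#!/usr/bin/perl
# sketch: Dijkstra over the Roadsearch database, per the description above

($v1, $v2) = @ARGV;
open(K, "cities") || open(K, "cities.txt") || die("cities: $!\n");
@cities = ( undef );
while(<K>) {
	$ln++;
	print if ($ln == $v1 || $ln == $v2);
	chomp;
	push(@cities, $_);
}
close(K);

open(R, "roads") || open(R, "roads.txt") || die("roads: $!\n");
@roads = ( undef );
while(<R>) { chomp; push(@roads, $_); }
close(R);

open(B, "out-B%") || die("out-B%: $!\n");
$ee = 0;
while(<B>) {
	chomp;
	($rn, $c1, $c2, $d) = split(/\t/, $_);
	next if (!$rn || !$c1 || !$c2 || !$d);
	push(@edges, [ $rn, $c1, $c2, $d ]);
	push(@{ $a[$c1] }, [ $rn, $c2, $d, $ee ]);
	push(@{ $a[$c2] }, [ $rn, $c1, $d, $ee++ ]);
}
close(B);

# classic Dijkstra: every node starts in the queue at "infinite" cost
@queue = (1..$#cities);
@cost = map { 99999 } (0..$#cities);
$cost[$v1] = 0;

while (scalar(@queue)) {
	# pull the lowest-cost node (the O(n) scan that hurts on a 6502)
	@queue = sort { $cost[$a] <=> $cost[$b] } @queue;
	$current = shift(@queue);
	foreach $n (@{ $a[$current] }) {
		$ni = $n->[1];
		$nc = $cost[$current] + $n->[2];
		if ($nc < $cost[$ni]) {     # shorter path to $ni found
			$cost[$ni] = $nc;
			$parent[$ni] = $current;
			$prou[$ni] = $n->[3];   # remember which edge got us here
		}
	}
}

# walk the parent pointers back from the target to recover the route
@s = ( );
$current = $v2;
while (defined($parent[$current])) {
	unshift(@s, $prou[$current]);
	$current = $parent[$current];
}
print join(' - ', @s), "\n";

$miles = 0;
foreach(@s) {
	print $roads[$edges[$_]->[0]], " - ",
		$cities[$edges[$_]->[2]], " - ",
		$cities[$edges[$_]->[1]], " ",
		$edges[$_]->[3], " miles\n";
	$miles += $edges[$_]->[3];
}
print "total $miles miles\n";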
Here's what it computes for Bishop to Pendleton, forward and reverse:

This is the same answer, and Dijkstra's algorithm further gives us a clue about one of the columns in N%() — the one which is invariably 32766. We already know from the plaintext portion of the compiled program that there are no other arrays, so the routing must be built in-place in the arrays that are present. We need a starting "infinite" value for each node within the range of a 16-bit signed integer, and no optimal path in continental North America would ever exceed that mileage, so this is the "infinite" value used to build the route into N%(). (It gets reset by reloading MATS if you select the option to start over after route generation.)

That said, it's almost certain that the program is not using an implementation of Dijkstra's algorithm. The queue starts out with all (in this case 487) nodes in it, making finding the lowest cost node in the queue on each iteration quite expensive on a little ~1.02MHz 6502. The usual solution in later implementations is a min-priority queue to speed up the search, but we already know there are no other arrays in memory with which to construct a faster heap or tree, so any efficiency gained would likely be unimpressive. Furthermore, it's not clear the author would have known about this approach (the earliest work on it dates to the late 1970s, such as Johnson [1977]), and even if he did, it still doesn't explain the meaning of the first two columns in N%(), which aren't even used by this algorithm. After all, they're not there for nothing.

A better fit is the Graph Traverser of Doran and Michie, which instead minimized an evaluation function E(x) specific to the problem being examined to walk from a source to a target node, terminating when the target was reached. In the process it created a set of partial trees from the source to the target from which the lowest cumulative value of E(x) would ultimately be selected, starting with individual nodes where E(x) to some next immediate destination was locally smaller. Additionally, since it was no longer required to construct a complete tree touching every possible node, the initial set of nodes considered could simply be the starting point. How the evaluation function was to be implemented was not specified, but a simple straight-line distance to the goal node would have been one logical means. Raphael instead suggested that the function to be minimized be the sum of the distance travelled from the start and the evaluation function, which was reworked as a heuristic. The heuristic thus provided a hint to the algorithm, ideally attracting it to the solution sooner by considering a smaller set of nodes. If the heuristic function was properly admissible — what Hart defined as never overstating the actual cost to get to the target — then this new algorithm could be proven to always find the lowest-cost path and greatly reduce comparisons if the solution converged quickly. Considering Graph Traverser as algorithm "A," this more informed goal-directed algorithm was dubbed "A*" (say it A-star).

If Roadsearch really is using A-star, at this point we don't know what the heuristic function is. (Note from the future: stay tuned.) Fortunately, a heuristic function that returns zero for all values is always admissible because it will never overstate the actual cost.
Although as a degenerate case doing so effectively turns A-star back into Dijkstra's algorithm, we'll still observe the runtime benefit of not having an initial queue potentially containing all possible nodes, and of being able to terminate early when the target is reached (which is always possible with Roadsearch's database). To make the number of nodes more manageable for study, since we will also want to observe how the program behaves for comparison, we'll now consider a smaller routing problem that will be easier to reason about.

#!/usr/bin/perl

die("usage: $0 point_no_1 point_no_2\n") if (scalar(@ARGV) != 2);
$p1 = $ARGV[0];
$p2 = $ARGV[1];
die("wherever you go there you are\n") if ($p1 == $p2);

open(K, "cities") || open(K, "cities.txt") || die("cities: $!\n");
@cities = ( undef );
while(<K>) {
	$ln++;
	if ($ln==$p1) { $v1=$p1; print; }
	if ($ln==$p2) { $v2=$p2; print; }
	chomp;
	push(@cities, $_);

	# default gscore and fscore
	$gscore[$ln] = 99999;
	$fscore[$ln] = 99999;
}
die("both cities must be valid\n") if (!$v1 || !$v2);
close(K);

open(R, "roads") || open(R, "roads.txt") || die("roads: $!\n");
open(B, "out-B%") || die("out-B%: $!\n");
@roads = ( undef );
while(<R>) { chomp; push(@roads, $_); }
close(R);

$ee = 0;
while(<B>) {
	chomp;
	($rn, $c1, $c2, $d) = split(/\t/, $_);
	$rn += 0; $c1 += 0; $c2 += 0; $d += 0;
	next if (!$d || !$c1 || !$c2 || !$rn);
	push(@edges, [ $rn, $c1, $c2, $d ]);
	push(@{ $a[$c1] }, [ $rn, $c2, $d, $ee ]);
	push(@{ $a[$c2] }, [ $rn, $c1, $d, $ee++ ]);
}
close(B);

@camefrom = ();
@openset = ( $v1 );
$gscore[$v1] = 0;
$fscore[$v1] = 0; # heuristic of distance is 0 for the start

while(scalar(@openset)) {
	@openset = sort { $fscore[$a] <=> $fscore[$b] } @openset;
	print join(", ", @openset), "\n";
	$current = shift(@openset);
	last if ($current == $v2);

	foreach $n (@{ $a[$current] }) {
		$ni = $n->[1];
		$tgscore = $gscore[$current] + $n->[2];
		if ($tgscore < $gscore[$ni]) {
			$camefrom[$ni] = $current;
			$routefrom[$ni] = $n->[3];
			$gscore[$ni] = $tgscore;
			$fscore[$ni] = $tgscore + 0; # "heuristic"
			unless (scalar(grep { $_ == $ni } @openset)) {
				push(@openset, $ni);
			}
		}
	}
}

@s = ( );
while(defined($camefrom[$current])) {
	$route = $routefrom[$current];
	$current = $camefrom[$current];
	unshift(@s, $route);
}
print join(' - ', @s), "\n";

$miles = 0;
foreach(@s) {
	print $roads[$edges[$_]->[0]], "($edges[$_]->[0]) - ",
		$cities[$edges[$_]->[2]], "($edges[$_]->[2]) - ",
		$cities[$edges[$_]->[1]], "($edges[$_]->[1]) ",
		$edges[$_]->[3], " miles\n";
	$miles += $edges[$_]->[3];
}
print "total $miles miles\n";

A-star maintains two scores for each node: f(x) (realized here as @fscore) and g(x) (@gscore). The G-score for a given node is the currently known cost of the cheapest path from the start to that node, which we build from the mile length of each edge. The node's F-score is its G-score plus the value of the heuristic function for that node, representing our best guess as to how cheap the overall path could be if the path from start to finish goes through it. In this case, the F-score and G-score will be identical because the heuristic function in this implementation always equals zero. Also, because we're interested in knowing how many fewer nodes we've considered, we dump the open set on every iteration.
% route-dij 375 311
NEEDLES CA
SAN BERNARDINO CA
237 - 236 - 63 - 719
I 15 - SAN BERNARDINO CA - US395/I15 CA 27 miles
I 15 - US395/I15 CA - BARSTOW CA 44 miles
I 40 - BARSTOW CA - I40/US95 CA 134 miles
I 40 - I40/US95 CA - NEEDLES CA 12 miles
total 217 miles

% route-astar 375 311
NEEDLES CA
SAN BERNARDINO CA
375
443, 335, 274, 376
335, 274, 442, 18, 376
274, 442, 18, 376, 34
442, 18, 376, 171, 380, 34
18, 376, 171, 380, 15, 34, 31
376, 171, 380, 15, 34, 169, 262, 31
424, 171, 380, 15, 34, 169, 262, 31, 487
171, 380, 15, 34, 169, 262, 31, 487
380, 15, 34, 169, 275, 262, 31, 487
15, 34, 169, 275, 262, 31, 379, 487
34, 45, 169, 275, 262, 31, 379, 487
45, 169, 275, 262, 31, 379, 311, 487, 342
169, 275, 262, 31, 379, 311, 119, 487, 342
275, 311, 262, 31, 379, 119, 487, 342
311, 262, 31, 379, 336, 119, 487, 170, 342
237 - 236 - 63 - 719
I 15 (31) - SAN BERNARDINO CA (375) - US395/I15 CA (443) 27 miles
I 15 (31) - US395/I15 CA (443) - BARSTOW CA (18) 44 miles
I 40 (60) - BARSTOW CA (18) - I40/US95 CA (169) 134 miles
I 40 (60) - I40/US95 CA (169) - NEEDLES CA (311) 12 miles
total 217 miles

% route-astar 311 375
NEEDLES CA
SAN BERNARDINO CA
311
169, 251, 34
251, 34, 262, 18
168, 34, 262, 18
34, 262, 18, 475, 110
262, 18, 475, 110, 335, 342
454, 18, 475, 110, 335, 342, 427
18, 475, 110, 335, 342, 53, 427, 446
442, 443, 475, 110, 335, 342, 53, 427, 446
443, 475, 110, 335, 342, 15, 53, 427, 31, 446
475, 110, 375, 335, 342, 15, 53, 427, 31, 446
110, 375, 335, 342, 15, 53, 427, 31, 446
375, 335, 342, 15, 53, 427, 31, 446, 124, 247
719 - 63 - 236 - 237
I 40 (60) - I40/US95 CA (169) - NEEDLES CA (311) 12 miles
I 40 (60) - BARSTOW CA (18) - I40/US95 CA (169) 134 miles
I 15 (31) - US395/I15 CA (443) - BARSTOW CA (18) 44 miles
I 15 (31) - SAN BERNARDINO CA (375) - US395/I15 CA (443) 27 miles
total 217 miles

To see what the program itself does with N%(), we'll instruct VICE to trap each integer array write to the range covered by MATS. (Trapping reads would also be handy, but we'd go mad with the amount of data this generates.) We can get the value being written from $64/$65 (using the BASIC #1 floating point accumulator as temporary space) based on this code in the RTL:

On each store we'll duly log the value in $64/$65 (remember it's big-endian) and the address it's being stored to. I wrote a one-off script to turn this string of writes into array offsets so we can understand how they relate to N%(), and then look them up in the tables so that we know which node and edge is under consideration.

Remember, Roadsearch works this problem backwards starting with Needles. From the above you can see where the program marks the order of each node and the accumulated mileage. Our running totals, most likely the F-score and G-score, for a given node x are in N%(x,1) and N%(x,2), the length of the candidate edge is in N%(x,1), the optimal edge for the node is in N%(x,4), and the iteration it was marked in is recorded in N%(x,3). (We count an iteration as any loop in which a candidate node is marked, which in this simple example will occur on every run through the open set.) N%(x,0) also looks like a distance, but it doesn't correlate to a highway distance. To construct the itinerary at the end, it starts with San Bernardino and then repeatedly walks the selected edge to the next node until it reaches Needles.
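Such a converter could be as simple as the following sketch, assuming each logged line is an "address value" pair in hex with the value already assembled from $64/$65; the address of N%(0,0) and the array's DIMmed row count are placeholders here, not the real figures, and depend on where your MATS loads:

#!/usr/bin/perl
# hypothetical one-off: convert logged N%() stores into subscripts

$base = 0x7000;        # assumption: address of N%(0,0) in this dump
$d1 = 488;             # assumption: 487 nodes plus the zeroth element

while (<>) {
	chomp;
	($addr, $value) = map { hex($_) } split(/\s+/, $_);
	$w = ($addr - $base) >> 1;      # word offset into the array data
	next if ($w < 0);
	# first subscript varies fastest: word offset w = row + col*d1
	$row = $w % $d1;
	$col = int($w / $d1);
	$value -= 65536 if ($value > 32767);
	print "N%($row,$col) = $value\n";
}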
It's painfully obvious that compared to our models Roadsearch is considering a much smaller number of nodes (18, 34, 169, 251, 262, 311, 375, 442 and 443), counting the five in the optimal solution, and it duly converges on the optimal routing in just four iterations compared to twelve in our best case. I did look at its read pattern and found that N%(x,0) and N%(x,1) lit up a lot. These values are clearly important to computing whatever heuristic it's using, so I pulled out a few from across North America. I stared at it for awhile until it dawned on me what the numbers are. Do you see what I saw? Here, let me plot these locations on a Mercator projection for you (using the United States' territorial boundaries):

They're coordinates. The first column increases going north, and the second column increases going west. Indeed, in the very paper written by Hart et al., they suggest that the straight-line distance from the current node to the goal would make a dandy heuristic, and now we can compute it! The thing we have to watch for here is that the scales of the mileage and the coordinates are not identical, and if we use a simple distance calculation we'll end up clobbering the cumulative mileage with it — which would not only make it non-admissible as a heuristic, but also give us the wrong answer. To avoid that, we'll compute a fudge factor to yield miles from "map units" and keep the heuristic function at the same scale. Let's take San Diego, California to Bangor, Maine as a reference standard, for which computing the geodesic distance using WGS 84 yields 2,696.83 miles from city centre to city centre as the crow flies. If we compute the straight-line distance between them using the coordinates above, we get 8703.63 "map units," which is a ratio of 3.23:1. To wit:

We now have implemented an h(x) that for a given node x returns its straight-line distance from the target node. Let's try it out. Remembering that Roadsearch works it backwards, we converge on the same solution examining the same nodes in the same number of iterations (four). Further proof is if we dump the program's array writes for the opposite direction:

This routing proceeds in six iterations, just as ours does, and once again the nodes we end up considering in our new A-star model are also the same. It also explains N%(x,0) — this is the straight-line distance and thus our heuristic, calculated (it's backwards) as the distance to the "start." For example, Palm Springs is indeed roughly 135 miles from Needles as the crow flies, again depending on your exact termini, whereas the western US 95/Interstate 40 junction is only about 11 miles away.

It should also be obvious that this "fudge" divisor has a direct effect on the efficiency of the routine. While we're purportedly using it as a means to scale down the heuristic, doing so is actually just a backhanded way of deciding how strongly we want the heuristic weighted. However, we can't really appreciate its magnitude in a problem space this small, so now we'll throw it a big one: drive from San Diego (376) to Bangor (17). (I did myself drive from Bangor to San Diego in 2006, but via Georgia to visit relatives.) This route requires a lot more computation and will also generate multiple useless cycles in which no node is sufficiently profitable, so I added code to our heuristic A-star router to explicitly count iterations only when a candidate node is marked. With our computed initial fudge factor of 3.23, we get this (again, worked backwards):

And now for the real thing. It was quick.
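For reference, in our model the swap amounts to replacing the zero heuristic in the A-star listing with a scaled straight-line distance. A sketch, where @ny and @nx are assumed to hold each node's two N%() coordinate columns (these arrays and their names are mine, loaded separately from out-N%), and $v2 is the target as before:

# straight-line heuristic, scaled from "map units" down to miles
$fudge = 3.23;     # map units per mile, from the San Diego-Bangor baseline
sub h {
	my ($node) = @_;
	my $dy = $ny[$node] - $ny[$v2];
	my $dx = $nx[$node] - $nx[$v2];
	return sqrt($dy * $dy + $dx * $dx) / $fudge;
}

# and in the inner loop, replacing the zero heuristic:
# $fscore[$ni] = $tgscore + h($ni);

Raising the divisor weakens the heuristic back toward Dijkstra's behaviour; lowering it makes the search greedier at the risk of overestimating and losing admissibility.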
Both our simulation and the real program agree on the optimal route, as demonstrated by the cumulative mileage. We would also hope to match the iteration counts the program records in N%(x,3). Interestingly, we do not: our simulation converged on the optimal route in 328 iterations, but the Commodore got it in 307. However, if we tune our simulation's fudge factor to 3.03, we also get it in 307.

Is there an optimal fudge divisor? We know the optimal route, so we'll run a whole bunch of simulations over the entire interval, rejecting ones that get the wrong answer and noting the iterations that were required for the right ones. In fact, we can do this generally for any routing by using the longhand Dijkstra method to get the correct answer first and then comparing a battery of tweaked A-stars against its routing. Stepping the fudge divisor over the interval from 0 inclusive to 4 in 0.001 increments, I ran six long haul drives in total: San Diego (376) to Bangor (17) and back, Bellingham, WA (24) to Miami, FL (291) and back, and then San Francisco, CA (377) to Washington, DC (465) and back. I then plotted them all out as curves, along with the h(x)=0 heuristic from above, which is always accurate and the least goal-directed (and consequently requires the most iterations).

First off, there is no single fudge divisor that always yields the optimal route for all of our test cases. Notice how much of the graph yields absolutely wrong answers (so nothing is plotted), and even in the interval between around 2.2 and 2.8 or so not all of the curves are valid. They all become valid around 3, which I enlarged in the inset on the left, and each one's iteration count slowly rises after that, with the zero heuristic as more or less the upper asymptote. Except for the purple curve, however, 3 is not generally their lower bound. Second, there is no single fudge divisor that always corresponds to the program's iteration count, which is 311 (17 to 376), 307 (376 to 17), 183 (291 to 24), 264 (24 to 291), 261 (377 to 465) and 220 (465 to 377). With the value of 3.03 from above, however, our simulation generally does better than the real thing, which is both gratifying for the purposes of pathfinding and frustrating for the purposes of modeling.

Incidentally, when you tell the program to avoid a particular road, it appears to simply write a huge mileage into that road's length entry (B%(x,3)). That assures it will always have an insuperably high cost, yet be unlikely to overflow if it's involved in any calculations, and thus it will never be pulled from the open set for evaluation. The in-memory database is always reloaded from disk after the route is disposed of.

As our final stop in this article nearly as long as some of the drives we've analysed, we'll do my first long haul expedition from 2005, along US Highway 395. This is a useful way to show how to add your own roads and what the program does with that, since not all of its segments are in the database.

If you've bulked up your database and want to return to the stock one, it should suffice to restore the original counts in M, which will then cause the program to ignore any extraneous ones. Style points are added if you clear the records in the RELative files and turn them into nulls, though this isn't required. For the 1985 release, set M to 20 34 38 37 20 0D 20 37 37 35 20 0D 20 32 31 35 20 0D 20 39 39 39 39 20 0D, which is (in PETSCII) 487 cities, 775 road segments, 215 road names and serial number 9999, or as you like. (Note this will not change the serial number in the editor, but with this patch you're only using the main program in any case.) However, if you try to use a disk modified this way to add routes or cities, it will correctly recognize the new ones as new but erroneously find their old links in the arrays.
That will not only get you unexpected residual road links to any new cities, but the links' presence can also confuse the program to such an extent that it ends up in an infinite loop. In this case you'll need to delete these hanging links by patching MATS to null out the entries in B%() and N%() referring to city numbers greater than 487 and road numbers greater than 215 (remember that there are three arrays in that file and that multi-dimensional arrays run out each dimension before going to the next, meaning you can't just truncate it and stuff nulls on the end). This is simple to understand in concept but a little tedious to do, so I have left it as an exercise for the reader. Should I come up with a clean MATS myself, I will put it up somewhere if there is interest; for now we'll do our work on a well-loved copy, which means the city numbers you see here may not necessarily be the ones you get if you're following along at home.

From there it proceeded north to the Columbia River gorge and intersected US 730 further west, then went east with it as before to US 12. This intermediate state is shown on this 1976 Donnelley atlas page, with the relevant portion highlighted. It was not until around 1986-87, when Interstate 82 was completed, that US 395 was moved onto I-82 as its present-day routing, leaving only a tiny residual overlap with US 730 east of Umatilla, Oregon. The problem is that none of these locations have waypoints we can easily connect to.

Notably, the program does not write M or MATS to disk until this point, so that if the process gets aborted in the middle, the database remains internally consistent except for some harmless unreferenced RELative file records that can be overwritten later.

We know the program uses the coordinates in N%() as its heuristic, but we were never asked for anything like latitude or longitude. How, then, can it get this information for the new cities we just created? At this point I dumped the RELative files and the new MATS to find out, and it so happens that a new city gets the same coordinates as the one it was connected to: both Wallula Junction and Walla Walla get coordinates 8062 and 20802, which are Walla Walla's coordinates. As we are connecting a couple of new cities together along US 12, they also get the same coordinates, up to Yakima, which retains its own. If we had extended from both sides into the middle, the database would probably have been slightly more accurate at the cost of some inconvenient dancing around; that would be a good idea if you needed to reconstruct or chop up a particularly long segment, for example.

I had also made an earlier call-forward that cities added from the editor sometimes don't have columns 4 or 5 set to zero. I can see this for the entries the previous owner entered, but all of mine ended up zero, so I'm not sure under what conditions that occurred. It doesn't seem to matter to the program in any case.

The fact that surviving .d64 disk images for this program all show new routes had been added suggests they got non-trivial usage by their owners, which really speaks to the strengths of the program and what it could accomplish. While it's hardly a sexy interface, it's functional and straightforward, and clearly a significant amount of time was spent getting the map data in and tuning up the routing algorithm so that it could perform decently on an entry-level home computer. Unfortunately, I suspect it didn't sell a lot of copies, because there are few advertisements for Columbia Software in any magazine of the time and no evidence that the Commodore version sold any better than the Apple II release did.
That's a shame, because I think its technical achievements merited a better market reception then, and it certainly deserves historical recognition today. We take things like Google Maps almost for granted, but a program that could drive a highway map on your 1985 Apple or Commodore truly would have been fascinating. As for me, I look forward to another long haul road trip in the future, maybe US 95 or US 101, or possibly reacquainting myself with US 6, which I drove in 2006 from Bishop to Cape Cod, Massachusetts. Regardless of where we end up going, though, this time we won't be taking the SX-64 with us — my wife has insisted on getting her seat back.
