Introduction

A little over a year ago, I found a Stanford Research Systems SR620 universal time interval counter at the Silicon Valley Electronics Flea Market. It had a big sticker “Passes Self-Test” and “Tested 3/9/24” (the day before the flea market) on it, so I took the gamble and spent an ungodly $400 [1] on it. Luckily, it did work fine, initially at least, but I soon discovered that it sometimes got into some weird behavior after pressing the power-on switch.

The SR620

The SR620 was designed sometime in the mid-1980s. Mine has a rev C PCB with a date of July 1988, 37 years old! The manual lists 1989, 2006, 2019 and 2025 revisions. I don’t know if there were any major changes along the way, but I doubt it. It’s still for sale on the SRS website, starting at $5150. The specifications are still pretty decent, especially for a hobbyist:

25 ps single shot time resolution
1.3 GHz frequency range
11-digit resolution over a 1 s measurement interval

The SR620 is not perfect. One notable issue is its thermal design: it simply doesn’t have enough ventilation holes, the heat-generating power regulators are located close to the high-precision time-to-analog converters, and the temperature sensor for the fan is inexplicably placed right next to the fan, which is not close at all to the power regulators. The Signal Path has an SR620 repair video that talks about this.

Repairing the SR620

You can see the power-on behavior in the video below:

Of note is that lightly touching the power button changes the behavior and sometimes makes it get all the way through the power-on sequence. This made me hopeful that the switch itself was bad, something that should be easy to fix.
Unlike my still broken SRS DG535, another flea market buy with the most cursed assembly, the SR620 is a dream to work on: 4 side screws are all it takes to remove the top of the case and get access to all the components from the top. Another 4 screws to remove the bottom panel and you have access to the solder side of the PCB. You can desolder components without lifting the PCB out of the enclosure.

Like my HP 5370A, the power switch of the SR620 selects between power-on and standby mode. The SR620 enables the 15V rail at all times to keep a local TCXO or OCXO warmed up. The power switch is located at the right of the front panel. It has 2 black and 2 red wires. When the unit is powered on, the 2 black wires and the 2 red wires are connected to each other. To make sure that the switch itself was the problem, I soldered the wires together to create a permanent connection:

After this, the SR620 worked totally fine! Let’s replace the switch.

Unscrew 4 more screws and pull the knobs off the 3 front potentiometers and the power switch to get rid of the front panel:

A handful of additional screws to remove the front PCB from the chassis, and you have access to the switch:

The switch is an ITT Schadow NE15 T70. Unsurprisingly, these are not produced anymore, but you can still find them on eBay. I paid $7.50 + shipping; the price increased to $9.50 immediately after that. According to this EEVblog forum post, this switch on Digikey is a suitable replacement, but I didn’t try it. The old switch (bottom) has 6 contact points vs only 4 on the new one (top), but that wasn’t an issue since only 4 were used. Both switches also have a metal screw plate, but they were oriented differently. However, you can easily reconfigure the screw plate by straightening 4 metal prongs. If you buy the new switch from Digikey and it doesn’t come with the metal screw plate, you should be able to transplant the plate from the broken switch to the new one just the same.
To get the switch through the narrow hole of the case, you need to cut off the pins on one side of the switch and bend the contact points a bit. After soldering the wires back in place, the SR620 powered on reliably. Switch replacement completed!

Replacing the Backup 3V Lithium Battery

The SR620 has a simple microcontroller system consisting of a Z8800 CPU, 64 KB of EPROM and 32 KB of SRAM. In addition to program data, the SRAM also contains calibration data and settings. A backup 3V lithium battery keeps the SRAM contents alive when the unit is unplugged; I previously replaced one such battery in my HP 3478A multimeter. These batteries last almost forever, but mine had a 1987 date code and 38 years is really pushing things, so I replaced it with this new one from Digikey. The 1987 version of this battery had 1 pin on each side; on the new ones, the + side has 2 pins, so you need to cut one of those pins off and install the battery slightly crooked back onto the PCB.

When you first power up the SR620 after replacing the battery, you might see “Test Error 3” on the display. According to the manual:

Test error 3 is usually “self-healing”. The instrument settings will be returned to their default values and factory calibration data will be recalled from ROM. Test Error 3 will recur if the Lithium battery or RAM is defective.

After power cycling the device again, the test error was gone and everything worked, but with a precision that was slightly lower than before: before the battery replacement, when feeding the 10 MHz output reference clock into channel A and measuring frequency with a 1 s gate time, I’d get a read-out of 10,000,000.000N Hz. In other words: around a milli-Hz accuracy. After the replacement, the accuracy was about an order of magnitude worse. That’s just not acceptable!

The reason for this loss in accuracy is that the auto-calibration parameters were lost. Luckily, this is easy to fix.
Switching to an External Reference Clock

My SR620 has the cheaper TCXO option, which gives frequency measurement results that are about one order of magnitude less accurate than using an external OCXO-based reference clock. So I always switch to an external reference clock. The SR620 doesn’t do that automatically; you need to manually change it in the settings, as follows:

SET -> “ctrl cal out scn”
SEL -> “ctrl cal out scn”
SET -> “auto cal”
SET -> “cloc source int”
Scale Down arrow -> “cloc source rear”
SET -> “cloc Fr 10000000”
SET

If you have a 5 MHz reference clock, use the down or up arrow to switch between 10000000 and 5000000.

Running Auto-Calibration

You can rerun auto-calibration manually from the front panel, without opening up the device, with this sequence:

SET -> “ctrl cal out scn”
SEL -> “ctrl cal out scn”
SET -> “auto cal”
START

The auto-calibration takes around 2 minutes. Only run it once the device has been running for a while, to make sure all components have warmed up and are at a stable temperature. The manual recommends a 30 minute warm-up time. After doing auto-calibration, feeding back the reference clock into channel A and measuring frequency with a 1 s gate time gave me a result that oscillated around 10 MHz, with the mHz digits always 000 or 999. [2]

It’s possible to fine-tune the SR620 beyond the auto-calibration settings. One reason why one might want to do this is to correct for drift of the internal oscillator. To enable this kind of tuning, you need to move a jumper inside the case. The time-nuts email list has a couple of discussions about this; here is one such post. Page 69 of the SR620 manual has detailed calibration instructions.

Oscilloscope Display Mode

When the 16 7-segment LEDs on the front panel are just not enough, the SR620 has this interesting way of (ab)using an oscilloscope as a general display: it uses XY mode to paint the data.
I had tried this mode in the past with my Siglent digital oscilloscope, but the result was unreadable: for this kind of rendering, having a CRT beam that lights up all the phosphor from one point to the next is a feature, not a bug. This time, I tried it with an old-school analog oscilloscope [3]:

(Click to enlarge)

The result is much better on the analog scope, but still very hard to read. When you really need all the data you can get from the SR620, just use the GPIB or RS232 interface.

References

The Signal Path - TNP #41 - Stanford Research SR620 Universal Time Interval Counter Teardown, Repair & Experiments

Some calibration info about the SR620: Fast High Precision Set-up of SR 620 Counter. The rest of this page has a bunch of other interesting SR620-related comments.

Time-Nuts topics

The SR620 is mentioned in tons of threads on the time-nuts email list. Here are just a few interesting posts:

This post talks about some thermal design mistakes in the SR620. E.g. the linear regulators and heat sink are placed right next to the TCXO. It also talks about the location of the thermistor inside the fan path, resulting in unstable behavior. This is something Shahriar of The Signal Path fixed by moving the thermistor.

This comment mentions that while the TCXO stays powered on in standby, the DAC that sets the control voltage does not, which results in an additional settling time after powering up. The general recommendation is to use an external 10 MHz clock reference.

This comment talks about the warm-up time needed depending on the desired accuracy. It also has some graphs.

Footnotes

[1] This time, the gamble paid off, and the going rate of a good second-hand SR620 is quite a bit higher. But I don’t think I’ll ever do this again!

[2] In other words, when fed with the same 10 MHz as the reference clock, the display always shows a number that is either 10,000,000,000x or 9,999,999,xx.

[3] I find it amazing that this scope was calibrated as recently as April 2023.
Introduction

I bought an HP 5370A time interval counter at the Silicon Valley Electronics Flea Market for a cheap $40. The 5370A is a pretty popular device among time nuts: it has a precision of 20 ps for single-shot time interval measurements, amazing for a device that was released in 1978, and even compared to contemporary time interval counters it’s still decent performance. The 74LS chips in mine have a 1981 date code, which makes the unit a whopping 44 years old. But after I plugged it in and pressed the power button, smoke and a horrible smell came out after a few minutes. I had just purchased myself hours of entertainment!

Inside the HP 5370A

It’s trivial to open the 5370A:

remove the 4 feet in the back by removing the Philips screws inside them.
remove a screw to release the top or bottom cover

(Click to enlarge)

Once inside, you can see an extremely modular build: the center consists of a motherboard with 10 plug-in PCBs, 4 on the left for an embedded computer that’s based on an MC6800 CPU, 6 on the right for the time acquisition. The top has plug-in PCBs as well, with the power supply on the left and reference clock circuitry on the right. My unit uses the well-known HP 10811-60111 high-stability OCXO as 10 MHz clock reference. The bottom doesn’t have plug-in PCBs.
It has PCBs for trigger logic and the front panel. This kind of modular build probably adds significant cost, but it’s a dream for servicing and tracking down faults. To make things even easier, the vertical PCBs have a plastic ring or levers to pull them out of their slot! There are also plenty of generously sized test pins and some status LEDs.

High Stability Reference Clock with an HP 10811-60111 OCXO

Since the unit has the high stability option, I now have yet another piece of test equipment with an HP 10811-60111. OCXOs are supposed to be powered on at all times: environmental changes tend to stress them out and result in a deviation of their clock speed, which is why there’s a “24 hour warm-up” sticker on top of the case. It can indeed take a while for an OCXO to relax and settle back into its normal behavior, though 24 hours seems a bit excessive. The 5370A has a separate always-on power supply just for the oven of the OCXO, to keep the crystal at a constant temperature even when the power switch on the front is not in the ON position. Luckily, the fan is powered off when the front switch is set to stand-by. [1]

In the image above, from top to bottom, you can see:

the main power supply control PCB
the HP 10811-60111 OCXO. To the right of it is the main power relay.
the OCXO oven power supply
the reference frequency buffer PCB

These are the items that will play the biggest role during the repair.

RIFA Capacitors in the Corcom F2058 Power Entry Module?

Spoiler: probably not…

After plugging in the 5370A the first time, magic smoke came out of it along with a pretty disgusting chemical smell, one that I already knew from some work that I did on my HP 8656A RF signal generator. I unplugged the power, opened up the case, and looked for burnt components but couldn’t find any. After a while, I decided to power the unit back on and… nothing. No smoke, no additional foul smell, but also no display.
One common failure mode of test equipment from way back when is RIFA capacitors that sit right next to the mains power input, before any kind of power switch. Their primary function is to filter out high-frequency noise coming from the device and reduce EMI. RIFAs have a well-known tendency to crack over time and eventually catch fire. A couple of years ago, I replaced the RIFA capacitors of my HP 3457A, but the general advice is to inspect all old equipment for these gold-colored capacitors. However, no such discrete capacitors could be found. But that doesn’t mean they are not there: like a lot of older HP test equipment, the 5370A uses a Corcom F2058 line power module that has capacitors embedded inside.

Below is the schematic of the Corcom F2058 (HP part number 0960-0443). The capacitors are marked in red. You can also see a fuse F1, a transformer and, on the right, a selector that can be used to configure the device for 100V, 115V/120V, 220V and 230V/240V operation.

(Click to enlarge)

There was a bad smell lingering around the Corcom module, so I removed it to check it out. There are metal clips on the left and right side that you need to push in to get the module out. It takes a bit of wiggling, but it works out eventually. Once removed, however, the Corcom didn’t really have a strong smell at all. I couldn’t find any strong evidence online that these modules have RIFAs inside them, so for now, my conclusion is that they don’t have them and that there’s no need to replace them.

Module replacement

In the unlikely case that you want to replace the Corcom module, you can use this $20 AC Power Entry Module from Mouser. One reason why you might want to do this is that the new module has a built-in power switch. If you use an external 10 MHz clock reference instead of the 10811 OCXO, then there’s really no need to keep the 5370A connected to the mains all the time.
There are two caveats, however:

While it has the same dimensions as the Corcom F2058, the power terminals are located at the very back, not in an indented space. This is not a problem for the 5370A, which still has enough room for both, but it doesn’t work for most other HP devices that don’t have an oversized case. You can see that in the picture below.

Unlike the Corcom F2058, the replacement only feeds through the line, neutral and ground that are fed into it. You’d have to choose one configuration, 120V in my case, and wire a bunch of wires together to drive the transformer correctly. If you do this wrong, the input voltage to the power regulator will either be too low, and it won’t work, or too high, and you might blow up the power regulation transistors. It’s not super complicated, but you need to know what you’re doing.

15V Rail Issues

After powering the unit back up, it still didn’t work, but thanks to the 4 power rail status LEDs, it was immediately obvious that the +15V power rail had issues. A close-by PCB is the reference frequency buffer PCB. It has a “10 MHz present” status LED that didn’t light up either, suggesting an issue with the 10811 OCXO, but I soon figured out that this status LED relies on the presence of the 15V rail.

Power Supply Architecture

The 5370A was first released in 1978, decades before HP decided to stop including detailed schematics in their service manuals. Until Keysight, the Company Formerly Known as HP, decides to change its name again, you can download the operating and service manual here. If you need a higher quality scan, you can also purchase the manual for $10 from ArtekManuals [2]. The diagrams below were copied from the Keysight version.

The power supply architecture is straightforward: the line transformer has 5 separate windings, 4 for the main power supply and 1 for the always-on OCXO power supply.
A relay is used to disconnect the 4 unregulated DC rails from the power regulators when the front power button is in the stand-by position, but the diode rectification bridge and the gigantic smoothing capacitors are located before the relay. [3]

For each of the 4 main power rails, a discrete linear voltage regulator is built around a power transistor, an LM307AN opamp, a smaller transistor for over-current protection, and a fuse. The 4 regulators share a 10V voltage reference. The opamps and the voltage reference are powered by a simple +16.2V power rail built out of a resistor and a Zener diode.

(Click to enlarge)

The power regulators for the +5V and -5.2V rails have a current sense resistor of 0.07 Ohm. The sense resistors for the +15V and -15V rails have a value of 0.4 Ohm. When the voltage across these resistors exceeds the 0.7V base-emitter potential of the bipolar transistors across them, the transistors start to conduct and pull down the base-emitter voltage of the power transistor, thus shutting it off. In the red rectangle of the schematic above, the +15V power transistor is on the right, the current control transistor on the left, and current sense resistor R4 is right next to the +15V label. Using the values of 0.4 Ohm, 0.07 Ohm and 0.7V, we can estimate that the power regulators enter current control (and reduce the output voltage) when the current exceeds 10 A for the +5/-5.2V rails and 1.75 A for the +15/-15V rails. This more or less matches the value of the fuses, which are rated at 7 A and 1.5 A respectively.

Power loss in these high-current linear regulators is significant and the heat sinks in the back become pretty hot. Some people have installed an external fan to cool things down a bit.

Fault Isolation - It’s the Reference Frequency Buffer PCB!

I measured a voltage of 8V instead of 15V.
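That low reading already hints at a regulator stuck in current limit. The current-limit estimate from the sense-resistor values quoted earlier can be reproduced in a few lines; this is just a sanity-check sketch using the 0.7 V base-emitter turn-on as a rule-of-thumb value:

```python
# Rule-of-thumb estimate of where each discrete regulator enters
# current limiting: the protection transistor turns on once the drop
# across the sense resistor reaches its ~0.7 V base-emitter threshold.
V_BE = 0.7  # volts, approximate silicon BJT turn-on voltage

sense_resistors = {
    "+5V / -5.2V rails": 0.07,  # ohms
    "+15V / -15V rails": 0.40,  # ohms
}

for rail, r_sense in sense_resistors.items():
    i_limit = V_BE / r_sense
    print(f"{rail}: current limit ~{i_limit:.2f} A")
# +5V / -5.2V rails: current limit ~10.00 A
# +15V / -15V rails: current limit ~1.75 A
```

Those limits sit comfortably above the 7 A and 1.5 A fuse ratings, which is what you want: the fuse blows before the pass transistor cooks.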
I would have preferred to measure no voltage at all, because a lower than expected voltage suggests that the power regulator is in current control instead of voltage control mode. In other words: there’s a short somewhere which results in a current that exceeds what’s expected under normal working conditions. Such a short can be located anywhere, but this is where the modular design of the 5370A shines: you can unplug all the PCBs, check the 15V rail, and if it’s fine, add back PCBs until it’s dead again. And, indeed, with all the PCBs removed, the 15V rail worked fine. I first added the CPU-related PCBs, then the time acquisition PCBs, and the 15V stayed healthy. But after plugging in the reference frequency buffer PCB, the 15V LED went off and I measured 8V again. Of all the PCBs, this one is the easiest one to understand.

The Reference Frequency Buffer Board

The reference frequency buffer board has the following functionality:

Convert the internally generated 10 MHz frequency to emitter-coupled logic (ECL) signaling. The 5370A came either with the OCXO or with a lower performance crystal oscillator. These cheaper units were usually deployed in labs that already had an external reference clock network.
Receive an external reference clock of 5 MHz or 10 MHz, multiply by 2 in the case of 5 MHz, and apply a 10 MHz filter. Convert to ECL as well.
Select between the internal and external clock to create the final reference clock.
Send the final reference clock out as ECL (time measurement logic), TTL (CPU) and sine wave (reference-out connector on the back panel).

During PCB swapping, the front-panel display had remained off when all CPU boards were plugged in. Unlike later HP test equipment like the HP 5334A universal counter, the CPU clock of the 5370A is derived from the 10 MHz clock that comes out of this reference frequency buffer PCB [4], so if this board is broken, nothing works.
When we zoom down from the block diagram to the schematic, we get this:

(Click to enlarge)

Leaving aside the debug process for a moment, I thought the 5 MHz/10 MHz to 10 MHz circuit was intriguing. I assumed that it worked by creating some second harmonic and filtering out the base frequency, and that’s kind of how it works. There are 3 LC tanks with an inductance of 1 uH and a capacitance of 250 pF, good for a natural resonance frequency of \(f = \frac{1}{2 \pi \sqrt{ L C }}\) = 10.066 MHz. The first 2 LC tanks are each part of a class C amplifier. The 3rd LC tank is an additional filter. The incoming 5 MHz or 10 MHz signal periodically inserts a bit of energy into the LC tank and nudges it into sync. This circuit deserves a blog post of its own.

Fixing the Internal Reference Clock

When you take a closer look at the schematic, there are 2 points that you can take advantage of:

The only part on the path from the internal clock input to the various internal outputs that depends on the 15V rail is the ECL to TTL conversion circuit. And that part of the 15V rail is only connected to 3k Ohm resistor R4.
Immediately after the connector, 15V first goes through an L/C/R/C circuit.

In the process of debugging, I noticed the following:

The arrow points to capacitor C17, which looks suspiciously black. I found the magic smoke generator. This was the plan of attack:

Replace C17 with a new 10uF capacitor.
Remove resistor R16 to decouple the internal 15V rail from the external one.
Disconnect the top side of R4 from the internal 15V rail and wire it up straight to the connector 15V rail.

It’s an ugly bodge, but after these 3 fixes, I had a nice 10 MHz ECL clock signal on the output clock test pin. The 5370A was alive and working fine!

Fixing the External Reference Clock

I usually connect my test equipment to my GT300 frequency standard, so I really wanted to fix that part of the board as well.
This took way longer than it should have… I started by replacing the burnt capacitor with a 10uF electrolytic capacitor and reinstalling R16. That didn’t go well: this time, the resistor went up in smoke. My theory is that, with shorted capacitor C17 removed, there was still another short and now the current path had to go through this resistor. Before burning up, this 10 Ohm resistor measured only 4 Ohms. I then removed the board and created a stand-alone setup to debug it in isolation. With the burnt-up R16 removed again, 15V applied to the internal 15V rail and a 10 MHz signal at the external input, the full circuit was working fine. I removed capacitor C16 and checked it with an LCR tester; the values were nicely in spec. Unable to find any real issues, I finally put in a new 10 Ohm resistor, put in a new 10uF capacitor for C16 as well, plugged in the board and… now the external clock input was working fine too?! So the board is fixed now and I can use both the internal and external clock, but I still don’t know why R16 burnt up after the first capacitor was replaced.

Future work

The HP 5370A is working very well now. Once I have another Digikey order going out, I want to replace the 2 electrolytic capacitors that I used for the repair with tantalum ones.

I can’t find the link anymore, but on the time-nuts email list, 2 easy modifications were suggested:

Drill a hole through the case right above the HP 10811-60111 to have access to the frequency adjust screw. An OCXO is supposed to be immune to external temperature variations, but when you’re measuring picoseconds, a difference in ambient temperature can still have a minor impact. With this hole, you can keep the case closed while calibrating the internal oscillator.
Disconnect the “10 MHz present” status LED on the reference clock buffer PCB. Apparently, this circuit creates some frequency spurs that can introduce additional jitter on the reference clock.
If you’re really hardcore: replace the entire CPU system with a modern CPU board. More than 10 years ago, the HP5370 Processor Replacement Project reverse engineered the entire embedded software stack and created a PCB, based on a Beagle board, with new firmware. PCBs are not available anymore, but one could easily have a new one made for much cheaper than what it would have cost back then.

Footnotes

[1] My HP 8656A RF signal generator has an OCXO as well. But the fan keeps running even when it’s in stand-by mode, and the default fan is very loud too!

[2] Don’t expect to be able to cut-and-paste text from the ArtekManuals scans, because they have some obnoxious rights management that prevents this.

[3] Each smoothing capacitor has a bleeding resistor in parallel to discharge the capacitors when the power cable is unplugged. But these resistors will leak power even when the unit is switched off. Energy Star regulations clearly weren’t a thing back in 1978.

[4] The CPU runs at 1.25 MHz, the 10 MHz divided by 8.
Introduction

Less than a week after finishing my TDS 684B analog memory blog post, a TDS 684C landed on my lab bench with a very dim CRT. If you follow the lives of the 3-digit TDS oscilloscope series, you probably know that this is normally a bit of a death sentence for the CRT: after years of use, the cathode ray loses its strength and there’s nothing you can do about it other than replace the CRT with an LCD screen. I was totally ready to go that route, and if I ever need to do it, here are 3 possible LCD upgrade options that I list for later reference:

The most common one is to buy a $350 Newscope-T1 LCD display kit by SimmConn Labs.
A cheaper hobbyist alternative is to hack something together with a VGA to LVDS interface board and some generic LCD panel, as described in this build report. He uses a VGA LCD Controller Board KYV-N2 V2 with a 7” A070SN02 LCD panel. As I write this, the cost is $75, but I assume this used to be a lot cheaper before tariffs were in place.
If you really want to go hard-core, you could make your own interface board with an FPGA that snoops the RAMDAC digital signals and converts them to LVDS, just like the Newscope-T1. There is a whole thread about this on the EEVblog forum.

But this blog post is not about installing an LCD panel! Before going that route, you should try to increase the brightness of the CRT by turning a potentiometer on the display board. It sounds like an obvious thing to try, but I didn’t find a lot of references to it online. And in my case, it just worked.

Finding the Display Tuning Potentiometers

In the Display Assembly Adjustment section of chapter 5 of the TDS 500D, TDS 600C, TDS 700D and TDS 714L Service Manual, page 5-23, you’ll find the instructions on how to change rotation, brightness and contrast. It says to remove the cabinet and then turn some potentiometers, but I just couldn’t find them! They’re supposed to be next to the fan.
Somewhere around there:

Well, I couldn’t see any. It was only the next day, when I was ready to take the whole thing apart, that I noticed these dust-covered holes:

A few minutes and a vacuum cleaning operation later reveal 5 glorious potentiometers. From left to right:

horizontal position
rotation
vertical position
brightness
contrast

Rotate the last 2 at will and if you’re lucky, your dim CRT will look brand new again. It did for me!

The Result

The weird colors in the picture above are a photography artifact caused by Tektronix NuColor display technology: it uses a monochrome CRT with an R/G/B shutter in front of it. You can read more about it in this Hackaday article. In real life, the image looks perfectly fine!

Hardcopy Preview Mode

If dialing up the brightness doesn’t work and you don’t want to spend money on an LCD upgrade, there is the option of switching the display to Hardcopy mode, like this:

[Display] -> [Settings &lt;Color&gt;] -> [Palette] -> [Hardcopy preview]

Instead of a black background, you will now get a white one. It made the scope usable before I made the brightness adjustment.
Introduction

I have a Tektronix TDS 684B oscilloscope that I bought cheaply at an auction. It has 4 channels, 1 GHz of BW and a sample rate of 5 Gsps. Those are respectable numbers even by today’s standards. It’s also the main reason why I have it: compared to modern oscilloscopes, the other features aren’t nearly as impressive. It can only record 15k samples per channel at a time, for example. But at least the sample rate doesn’t go down when you increase the number of recording channels: it’s 5 Gsps even when all 4 channels are enabled. I’ve always wondered how Tektronix managed to reach such high specifications back in the nineties, so in this blog post I take a quick look at the internals, figure out how it works, and do some measurements along the signal path.

The TDS600 Series

The first oscilloscopes of the TDS600 series were introduced around 1993. The last one, the TDS694C, was released in 2002. The TDS684 version was from sometime in 1995. The ICs on my TDS684C have date codes from as early as the first half of 1997. The main characteristic of these scopes was their extreme sample rate for that era, going from 2 Gsps for the TDS620, TDS640 and TDS644, over 5 Gsps for the TDS654, TDS680 and TDS684, to 10 Gsps for the TDS694C, which was developed under the Screamer code name.

The oscilloscopes have 2 main boards:

the acquisition board, which contains all the parts from the analog input down to the sample memory as well as some triggering logic. (Click to enlarge)
a very busy CPU board, which does the rest. (Click to enlarge)

2 flat cables and a PCB connect the 2 boards. The interconnect PCB traces go to the memory on the acquisition board. It’s safe to assume that this interface is used for high-speed waveform data transfer while the flat cables are for lower speed configuration and status traffic.
If you ever remove the interconnection PCB, make sure to put it back with the same orientation. It will fit just fine when rotated 180 degrees, but the scope won’t work anymore!

The Acquisition Board

The TDS 684B has 4 identical channels that can easily be identified. (Click to enlarge)

There are 6 major components in the path from input to memory:

Analog front-end: hidden under a shielding cover, but you’d expect to find a bunch of relays there to switch between different configurations: AC/DC, 1 Meg/50 Ohm termination, … I didn’t open it because it requires disassembling pretty much the whole scope.
Signal conditioner IC(?): this is the device with the glued-on heatsink. I left it in place because there’s no metal attachment latch, and reattaching it would be a pain. Since the acquisition board has a bunch of custom ICs already, chances are this one is custom as well, so knowing the exact part number wouldn’t add a lot of extra info. We can see one differential pair going from the analog front-end into this IC and a second one going from this IC to the next one, an ADG286D.
National Semi ADG286D mystery chip: another custom chip with unknown functionality.
Motorola MC10319DW 8-bit 25 MHz A/D converter: finally, an off-the-shelf device! But why is it only rated for 25 MHz?
National Semi ADG303, a custom memory controller chip: it receives the four 8-bit lanes from the four ADCs on one side and connects to four SRAMs on the other.
4 Alliance AS7C256-15JC SRAMs: each memory has a capacity of 32 KB and a 15 ns access time, which allows for a maximum clock of 66 MHz. The TDS 684B supports waveform traces of 15k points, so they either only use half of the available capacity or they use some kind of double-buffering scheme.

There are four unpopulated memory footprints. In one of my TDS 420A blog posts, I extend the waveform memory by soldering in extra SRAM chips. I’m not aware of a TDS 684B option for additional memory, so I’m not optimistic about the ability to expand its memory.
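The SRAM numbers are easy to double-check. A quick sketch of the arithmetic, assuming one byte per 8-bit sample and one memory access per clock cycle:

```python
# Acquisition memory arithmetic for the four AS7C256-15JC SRAMs.
chip_capacity = 32 * 1024   # bytes; one 8-bit ADC sample per byte
access_time = 15e-9         # seconds

# Upper bound on a simple synchronous clock: one access per cycle.
max_clock_mhz = 1 / access_time / 1e6
print(f"max clock ~{max_clock_mhz:.1f} MHz")   # ~66.7 MHz

# A full 15k-point trace fills less than half of one chip, which is
# what leaves room for a double-buffering scheme.
trace_points = 15_000
print(f"utilization: {trace_points / chip_capacity:.0%}")  # ~46%
```

So a 15k trace plus a second in-flight buffer would just fit in 32 KB, consistent with either of the two explanations above.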
There's also no such grayed-out option in the acquisition menu.

When googling for "ADG286D", I got my answer when I stumbled on this comment on BlueSky, which speculates that it's an analog memory, probably some kind of CCD FIFO: analog values are captured at a rate of up to 5 GHz and then shifted out at a much lower speed and fed into the ADC. I later found a few other comments that confirm this theory.

Measuring Along the Signal Path

Let's verify this by measuring a few signals on the board with a different scope. The ADC input pins are large enough to attach a Tektronix logic analyzer probe.

ADC sampling the signal

With a 1 MHz input signal and a 100 Msps sample rate, the input to the ADC looks like this:

The input to the ADC is clearly chopped into discrete samples, with a new sample every 120 ns. We can discern a sine wave in the samples, but there's a lot of noise on the signal too. Meanwhile, the TDS 684B CRT shows a nice and clean 1 MHz signal. I haven't been able to figure out how that's possible.

For some reason, simply touching the clock pin of the ADC with a 1 MOhm oscilloscope probe adds a massive amount of noise to the input signal, but it shows the clock nicely:

The ADC clock matches the 120 ns sample period of the input signal: it's indeed 8.33 MHz.

Acquisition refresh rate

The scope only records in bursts. When recording 500, 1000 or 2500 sample points at 100 Msps, it records a new burst every 14 ms, for a refresh rate of 70 Hz. When recording 5000 points, the refresh rate drops to 53 Hz. For 15000 points, it drops even lower, to 30 Hz.

Sampling burst duration

The duration of a sampling burst is always 2 ms, irrespective of the sample rate of the oscilloscope or the number of points acquired! The combination of a 2 ms burst and an 8 MHz sample clock results in 16k samples. So the scope always acquires what's probably the full contents of the CCD FIFO and throws a large part away when a lower sample length is selected.
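A quick sanity check of the numbers above, using only the measured values from this section (the 2 ms burst, the 120 ns sample period and the 70 Hz refresh rate):

```python
# Back-of-the-envelope check of the measured burst timing.
BURST_S = 2e-3            # duration of one sampling burst
SAMPLE_PERIOD_S = 120e-9  # one ADC sample every 120 ns

readout_clock_hz = 1 / SAMPLE_PERIOD_S
samples_per_burst = BURST_S / SAMPLE_PERIOD_S

print(f"readout clock: {readout_clock_hz / 1e6:.2f} MHz")  # ~8.33 MHz
print(f"samples per burst: {samples_per_burst:.0f}")       # ~16667, i.e. ~16k

# At a 70 Hz refresh rate, the scope spends only 2 ms of every 14 ms sampling:
duty_cycle = BURST_S * 70
print(f"sampling duty cycle at 70 Hz: {duty_cycle:.0%}")
```

The numbers are consistent: a 2 ms burst clocked out at 8.33 MHz indeed holds roughly 16k samples, which matches the idea that the full CCD FIFO is digitized every time.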
Here's the 1 MHz signal sampled at 100 Msps:

And here's the same signal sampled at 5 Gsps:

It looks like the signal doesn't scan out of the CCD memory in the order it was received, hence the signal discontinuity in the middle.

Sampling a 1 GHz signal

I increased the input signal from 1 MHz to 1 GHz. Here's the ADC input at 5 Gsps:

With a little bit of effort, you can once again imagine a sine wave in those samples. There's a periodicity of 5 samples, as one would expect for a 1 GHz to 5 Gsps ratio. The readout sample rate is still 8.3 MHz.

Sampling a 200 MHz signal

I also applied a 200 MHz input signal. The period is now ~25 samples, as expected for a 200 MHz to 5 Gsps ratio.

200 MHz is low enough to measure with my 350 MHz bandwidth Siglent oscilloscope. To confirm that the ADG286D chip contains the CCD memory, I measured the signal on one of the differential pins going into that chip. And here it is, a nice 200 MHz signal:

A Closer Look at the Noise Issue

After initially publishing this blog post, I had a discussion on Discord about the noise issue, which prompted me to do a couple more measurements.

Input connected to ground

Here's what the ADC input looks like when the input of the scope is connected to ground:

2 major observations:

- there's a certain amount of repetitiveness to it.
- there are major voltage spikes in between each repetition. They are very faint on the scope shot.

Let's zoom in on that:

The spikes are still hard to see, so I added the arrows, but look how the sample pattern repeats after each spike! The time delay between spikes is ~23.6 us. With a 120 ns sample period, that converts into a repeating pattern of ~195 samples. I don't know why a pattern of 195 samples exists, but it's clear that each of those 195 locations has a fixed voltage offset. If the scope measures those offsets during calibration, it can subtract them after each measurement and get a clean signal out.

50 kHz square wave

Next, I applied a 50 kHz square wave to the input.
This frequency was chosen so that, at the selected sample rate, a single period covers the 15000 sampling points.

2 more observations:

- the micro-repetitiveness is still there, irrespective of the voltage offset due to the input signal. That means that subtracting the noise should work for different input voltages.
- we don't see a clean square wave outline. It looks like there's some kind of address interleaving going on.

50 kHz sawtooth wave

We can see the interleaving even better when applying a sawtooth waveform that covers one burst:

Instead of a clean break from high to low somewhere in the middle, there is a transition period where you get both high and low values. This confirms that some kind of interleaving is happening.

Conclusion

- The TDS 684B captures input signals at high speed in an analog memory and digitizes them at 8 MHz.
- The single-ended input to the ADC is noisy, yet the signal looks clean when displayed on the CRT of the scope, likely because the noise pattern is repetitive and predictable.
- In addition to the noise, there's also an interleaving pattern during the read-out of the analog FIFO contents.
- The number of samples digitized is always the same, irrespective of the settings in the horizontal acquisition menu.

(Not written by ChatGPT, I just like to use bullet points…)
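The per-location offset correction hypothesized in the noise section can be illustrated with a short sketch. To be clear, this is purely my speculation about how such a correction could work, not the scope's actual firmware; the function names and the assumption of a strictly periodic 195-sample offset pattern are mine.

```python
# Hypothetical illustration of the calibration idea: if each of the ~195
# CCD read-out locations has a fixed voltage offset, then averaging many
# grounded-input bursts yields an offset table that can be subtracted
# from every later acquisition.

PATTERN_LEN = 195  # repeating pattern length measured in the post

def measure_offsets(grounded_bursts):
    """Average grounded-input bursts per read-out location (calibration)."""
    sums = [0.0] * PATTERN_LEN
    counts = [0] * PATTERN_LEN
    for burst in grounded_bursts:
        for i, sample in enumerate(burst):
            sums[i % PATTERN_LEN] += sample
            counts[i % PATTERN_LEN] += 1
    return [s / c for s, c in zip(sums, counts)]

def correct(burst, offsets):
    """Subtract the per-location offset from a raw acquisition."""
    return [s - offsets[i % PATTERN_LEN] for i, s in enumerate(burst)]
```

Because the offsets don't depend on the input voltage (as the square wave measurement suggests), a single grounded-input calibration would clean up acquisitions at any signal level.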