Introduction The DSLogic U3Pro16 In the Box Probe Cables and Clips The Controller Hardware The Input Circuit Impact of Input Circuit on Circuit Under Test Additional IOs: External Clock, Trigger In, Trigger Out Software: From Saleae Logic to PulseView to DSView Installing DSView on a Linux Machine DSView UI Streaming Data to the Host vs Local Storage in DRAM Triggers Conclusion References Footnotes

Introduction

The year was 2020 and offices all over the world shut down. A house remodel had just started, so my office moved from a comfortably air-conditioned corporate building to a very messy garage. Since I’m in the business of developing and debugging hardware, a few pieces of equipment came along for the ride, including a Saleae Logic Pro 16. While I had the unit for work stuff, I may once in a while have used it for some hobby-related activities too. There’s no way around it: Saleae makes some of the best USB logic analyzers around. Plenty of competitors have matched or surpassed their...
2 days ago


More from Electronics etc…

HP Laptop 17 RAM Upgrade

Introduction Selecting the RAM Opening up Replacing the RAM Reassembly References

Introduction

I do virtually all of my hobby and home computing on Linux and MacOS: the MacOS stuff on a laptop and almost all Linux work on a desktop PC. The desktop PC has Windows installed as well, but it’s too much of a hassle to reboot, so it never gets used in practice. Recently, I’ve been working on a project that requires a lot of Spice simulations. NGspice works fine under Linux, but it doesn’t come standard with a GUI and, more importantly, the simulations often refuse to converge once your design becomes a little bit bigger. Tired of fighting against the tool, I switched to LTspice from Analog Devices. It’s free to use and while it supports Windows and MacOS in theory, the Mac version is many years behind the Windows one and nearly unusable. After dual-booting into Windows too many times, a Best Buy deal appeared on my BlueSky timeline for an HP laptop for just $330. The specs were pretty decent too:

AMD Ryzen 5 7000
17.3” 1080p screen
512GB SSD
8 GB RAM
Full size keyboard
Windows 11

Someone at the HP marketing department spent long hours to come up with a suitable name and settled on “HP Laptop 17”. I generally don’t pay attention to what’s available on the PC laptop market, but it’s hard to really go wrong for this price, so I took the plunge. Worst case, I’d return it. We’re now 8 weeks later and the laptop is still firmly in my possession. In fact, I’ve used it way more than I thought I would. I haven’t noticed any performance issues, the screen is pretty good, the SSD is larger than what I need for this limited use case, and, surprisingly, the trackpad is better than on any Windows laptop that I’ve ever used, though that’s not a high bar. It doesn’t come close to MacBook quality, but palm rejection is solid and it’s seriously good at moving the mouse around in CAD applications. The two worst parts are the plasticky keyboard and the 8GB of RAM.
I can honestly not quantify whether or not it has a practical impact, but I decided to upgrade it anyway. In this blog post, I go through the steps of doing this upgrade. Important: there’s a good chance that you will damage your laptop when trying this upgrade and almost certainly void your warranty. Do this at your own risk!

Selecting the RAM

The laptop wasn’t designed to be upgradable and thus you can’t find any official resources about it. And with such a generic name, there are guaranteed to be multiple hardware versions of the same product. To have reasonable confidence that you’re buying the correct RAM, check out the full product name first. You can find it on the bottom: mine is an HP Laptop 17-cp3005dx. There’s some conflicting information about being able to upgrade the thing. The BestBuy Q&A page says: “The HP 17.3” Laptop Model 17-cp3005dx RAM and Storage are soldered to the motherboard, and are not upgradeable on this model.” This is flat out wrong for my device. After a bit of Googling around, I learned that it has a single 8GB DDR4 SODIMM 260-pin RAM stick, but that the motherboard has 2 RAM slots and that it can support up to 2x32GB. I bought a kit with Crucial 2x16GB 3200MHz SODIMMs from Amazon. As I write this, the price is $44.

Opening up

Removing the screws: this is the easy part. There are 10 screws at the bottom, 6 of which are hidden underneath the 2 rubber anti-slip strips. It’s easy to peel these strips loose. It’s also easy to put them back without losing the stickiness.

Removing the bottom cover: the bottom cover is held back by those annoying plastic tabs. If you have a plastic spudger or prying tool, now is the time to use it. I didn’t, so I used a small screwdriver instead. Chances are high that you’ll leave some tiny scuff marks on the plastic casing. I found it easiest to open the top lid a bit, place the laptop on its side, and start on the left and right side of the keyboard.
After that, it’s a matter of working your way down the long sides at the front and back of the laptop. There are power and USB connectors that are right against the side of the bottom panel, so be careful not to poke the spudger or screwdriver inside the case. It’s a bit of a jarring process, going back and forth and making steady progress. In addition to all the clips around the border of the bottom panel, there are also a few in the center that latch on to the side of the battery. But after enough wiggling and creaking sounds, the panel should come loose.

Replacing the RAM

As expected, there are 2 SODIMM slots, one of which is populated with a 3200MHz 8GB RAM stick. At the bottom right of the image below, you can also see the SSD slot. If you don’t enjoy the process of opening up the laptop and want to upgrade to a larger drive as well, now would be the time for that. New RAM in place! It’s always a good idea to test the surgery before reassembly: success!

Reassembly

Reassembly of the laptop is much easier than taking it apart. Everything simply clicks together. The only minor surprise was that both anti-slip strips became a little bit longer…

References

Memory Upgrade for HP 17-cp3005dx Laptop
Upgrading Newer HP 17.3” Laptop With New RAM And M.2 NVMe SSD (different model with an Intel CPU, but the case is the same)

a month ago
Symbolic Reference and Hardware Models in Python

The Traditional Hardware Design and Verification Flow An Image Downscaler as Example Design The Reference Model The Micro-Architecture Model Comparing the results Conversion to Hardware Combining symbolic models with random input generation Specification changes Things to experiment with… Symbolic models are best for block or sub-block level modelling References Conclusion

The Traditional Hardware Design and Verification Flow

In a professional FPGA or ASIC development flow, multiple models are tested against each other to ensure that the final design behaves the way it should. Common models are:

A behavioral model that describes the functionality at the highest level. These models can be implemented in Matlab, Python, C++ etc. and are usually completely hardware architecture agnostic. They are often not bit accurate in their calculated results, for example because they use floating point numbers instead of the fixed point numbers that are more commonly used by the hardware. A good example is the floating point C model that I used to develop my Racing the Beam Ray Tracer, though in this case, the model later transitioned into a hybrid reference/architectural model.

An architectural transaction accurate model. An architectural model is already aware of how the hardware is split into major functional groups and models the interfaces between these functional groups in a bit-accurate and transaction-accurate way at the interface level. It doesn’t have a concept of timing in the form of clock cycles.

A source hardware model. This model is the source from which the actual hardware is generated. Traditionally, and still in most cases, this is a synthesizable RTL model written in Verilog or VHDL, but high-level synthesis (HLS) is getting some traction as well. In the case of RTL, this model is cycle accurate. In the case of HLS, it still won’t be.
The difference between an HLS C++ model1 and an architectural C++ model is in the way it is coded: HLS code needs to obey coding style restrictions that would otherwise prevent the HLS tool from converting the code to RTL. The HLS model is usually also split up into much smaller units that interact with each other.

An RTL model. The Verilog or VHDL model of the design. This can be the same as the source hardware model, or it can be generated from HLS.

A gate-level model. The RTL model synthesized into a gate-level netlist.

During the design process, different models are compared against each other. Their outputs should be the same… to a certain extent, since it’s not possible to guarantee identical results between floating point and fixed point models. One thing that is constant among these models is that they get fed with, operate on, and output actual data values. Let’s use the example of a video pipeline. The input of the hardware block might be raw pixels from a video sensor, the processing could be some filtering algorithm to reduce noise, and the outputs are processed pixels. To verify the design, the various models are fed with a combination of generic images, ‘interesting’ images that are expected to hit certain use cases, images with just random pixels, or directed tests that explicitly try to trigger corner cases. When there is a mismatch between different models, the fun part begins: figuring out the root cause. For complex algorithms that have a lot of internal state, an error may have happened thousands of transactions before it manifests itself at the output. Tracking down such an issue can be a gigantic pain. For many hardware units, the hard part of the design is not the math, but getting the right data to the math units at the right time, by making sure that the values are written, read, and discarded from internal RAMs and FIFOs in the right order.
Even with a detailed micro-architectural specification, a major part of the code may consist of using just the correct address calculation or multiplexer input under various conditions. For these kinds of units, I use a different kind of model: instead of passing around and operating on data values through the various stages of the pipeline or algorithm, I carry around where the data is coming from. This is not so easy to do in C++ or RTL, but it’s trivial in Python. For lack of a better name, I call these symbolic models. There are thus two additional models in my arsenal of tools:

a reference symbolic model
a hardware symbolic model

These models are both written in Python and their outputs are compared against each other. In this blog post, I’ll go through an example case where I use such a model.

An Image Downscaler as Example Design

Let’s design a hardware module that is easy enough to not spend too much time on it for a blog post, but complex enough to illustrate the benefits of a symbolic model: an image downscaler. The core functionality is the following:

it accepts a monochrome image with a maximum resolution of 7680x4320.
it downscales the input image with a fixed ratio of 2 in both directions.
it uses a 3x3 tap 2D filter kernel for downscaling.

The figure below shows how an image with a 12x8 resolution gets filtered and downsampled into a 6x4 resolution image. Each square represents an input pixel, each hatched square an output pixel, and the arrows show how input pixels contribute to the input of the 3x3 filter for the output pixel. For pixels that lie against the top or left border, the top and left pixels are repeated upward and leftward so that the same 3x3 filter can be used.
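To make the “carry around where the data comes from” idea concrete, here is a minimal sketch of my own (not code from the article’s repo; all names are made up for illustration) contrasting a traditional data-value model with a symbolic one:

```python
from collections import namedtuple

Pixel = namedtuple("Pixel", ["x", "y"])

# Traditional model: operate on actual data values.
def filter_3x3_values(pixels):
    # e.g. a box filter; where the inputs came from is lost after the sum
    return sum(pixels) // len(pixels)

# Symbolic model: the "result" is simply the list of contributing
# input coordinates, so full provenance is retained.
def filter_3x3_symbolic(coords):
    return list(coords)

values = [10, 12, 11, 9, 10, 12, 11, 10, 9]
coords = [Pixel(x, y) for y in range(3) for x in range(3)]

print(filter_3x3_values(values))    # a single number: origins are gone
print(filter_3x3_symbolic(coords))  # every contributing coordinate survives
```

When two symbolic models disagree, the mismatching coordinate immediately tells you which input was routed to the wrong place, instead of leaving you to reverse-engineer a wrong sum.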
If this downscaler is part of a streaming display pipeline that eventually sends pixels to a monitor, there is not a lot of flexibility: pixels arrive in a left-to-right and top-to-bottom scan order and you need 2 line stores (memories) because there are 3 vertical filter taps. Due to the 2:1 scaling ratio, the line stores can be half the horizontal resolution, but for an 8K resolution that’s still 7680/2 ~ 4KB of RAM just for line buffering. In the real world, you’d have to multiply that by 3 to support RGB instead of monochrome. And since we need to read and write from this RAM every clock cycle, there’s no chance of off-loading this storage to cheaper memory such as external DRAM. However, we’re lucky: the downscaler is part of a video decoder pipeline and those typically work with super blocks of 32x32 or 64x64 pixels that are scanned left-to-right and top-to-bottom. Within each super block, pixels are grouped in tiles of 4x4 pixels that are scanned the same way. In other words, there are 3 levels of left-to-right, top-to-bottom scan operations:

the pixels inside each 4x4 pixel tile
the pixel tiles inside each super block
the super blocks inside each picture

The output has the same organization of pixels, 4x4 pixel blocks and super blocks, but due to the 2:1 downsampling in both directions, the size of a super block is 32x32 instead of 64x64 pixels. There are two major advantages to having the data flow organized this way:

the downscaler can operate on one super block at a time instead of the full image. For pixels inside the super block, that reduces the size of the active input image width from 7680 to just 64 pixels.
as long as the filter kernel is less than 5 pixels high, only 1 line store is needed. The line store contains a partial sum of multiple lines of pixels.
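The three nested scan levels above can be sketched as a small generator. This is my own illustration, not code from the repo; the 64x64 super block and 4x4 tile sizes are taken from the text:

```python
SB_SIZE = 64    # super block width/height in pixels (from the text)
TILE_SIZE = 4   # tile width/height in pixels

def tile_scan_order(image_width, image_height):
    """Yield the (x, y) of the top-left pixel of each 4x4 input tile,
    in the order the downscaler consumes them: super blocks in scan
    order, then tiles in scan order within each super block."""
    for sb_y in range(0, image_height, SB_SIZE):
        for sb_x in range(0, image_width, SB_SIZE):
            for tile_y in range(sb_y, sb_y + SB_SIZE, TILE_SIZE):
                for tile_x in range(sb_x, sb_x + SB_SIZE, TILE_SIZE):
                    yield (tile_x, tile_y)

order = list(tile_scan_order(128, 64))
print(order[:3])    # [(0, 0), (4, 0), (8, 0)]: first tiles of the first super block
print(order[256])   # (64, 0): first tile of the second super block
```

Note how tile (64, 0) is visited only after all 256 tiles of the first super block, even though it is on the same pixel row as tile (0, 0); that reordering is exactly what creates the data management complexity discussed below.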
While the line store still needs to cover the full picture width when moving from one row of super blocks to the one below it, the bandwidth that is required to access the line store is but a fraction of the one before: 1/64th to be exact. That opens up the opportunity to stream line store data in and out of external DRAM instead of keeping it in expensive on-chip RAMs. But it’s not all roses! There are some negative consequences as well:

pixels from the super block above the current one must be fetched from DMA and stored in a local memory
pixels at the bottom of the current super block must be sent to DMA
the right-most column of pixels from the current super block is used in the next super block when doing the 3x3 filter operation
4x4 size input tiles get downsampled to 2x2 size output tiles, but they must be sent out again as 4x4 tiles. This requires some kind of pixel tile merging operation.

While the RAM area savings are totally worth it, all this adds a significant amount of data management complexity. This is the kind of problem where a symbolic micro-architecture model shines.

The Reference Model

When modeling transformations that work at the picture level, it’s convenient to assume that there are no memory size constraints and that you can access all pixels at all times, no matter where they are located in the image. You don’t have to worry about how much RAM this would take on silicon: it’s up to the designer of the micro-architecture to figure out how to create an area efficient implementation. This usually results in a huge simplification of the reference model, which is good because, as the source of truth, you want to avoid any bugs in it. For our downscaler, the reference model creates an array of output pixels where each output pixel contains the coordinates of all the input pixels that are required to calculate its value.
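A minimal runnable version of such a reference model might look like this. It is my own reconstruction, not the code from the linked repo; the `max(0, ...)` clamping implements the repetition of top and left border pixels described earlier:

```python
from collections import namedtuple

Pixel = namedtuple("Pixel", ["x", "y"])

def build_reference(output_width, output_height):
    """For each output pixel, record the coordinates of the 9 input
    pixels feeding its 3x3 filter (2:1 downscaling in both directions,
    top/left borders clamped)."""
    ref_output_pixels = {}
    for y in range(output_height):
        for x in range(output_width):
            coords = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # The center filter tap sits at input (2x, 2y).
                    coords.append(Pixel(max(0, 2 * x + dx),
                                        max(0, 2 * y + dy)))
            ref_output_pixels[(x, y)] = coords
    return ref_output_pixels

ref = build_reference(9, 8)
print(ref[(0, 0)])
```

Running this reproduces the kind of entries shown below, e.g. output pixel (0,0) maps to input coordinates where (0,0) appears four times because of the border clamping.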
The pseudo code is something like this:

for y in range(OUTPUT_HEIGHT):
    for x in range(OUTPUT_WIDTH):
        get coordinates of input pixels for filter
        store coordinates at (x,y) of the output image

The reference model Python code is not much more complicated. You can find the code here. Instead of a 2-dimensional array, it uses a dictionary with the output pixel coordinates as key. This is a personal preference: I think ref_output_pixels[(x,y)] looks cleaner than ref_output_pixels[y][x] or ref_output_pixels[x][y]. When the reference model data creation is complete, the ref_output_pixels array contains values like this:

(0,0) => [ Pixel(x=0, y=0), Pixel(x=0, y=0), Pixel(x=1, y=0),
           Pixel(x=0, y=0), Pixel(x=0, y=0), Pixel(x=1, y=0),
           Pixel(x=0, y=1), Pixel(x=0, y=1), Pixel(x=1, y=1) ]
(1,0) => [ Pixel(x=1, y=0), Pixel(x=2, y=0), Pixel(x=3, y=0),
           Pixel(x=1, y=0), Pixel(x=2, y=0), Pixel(x=3, y=0),
           Pixel(x=1, y=1), Pixel(x=2, y=1), Pixel(x=3, y=1) ]
...
(8,7) => [ Pixel(x=15, y=13), Pixel(x=16, y=13), Pixel(x=17, y=13),
           Pixel(x=15, y=14), Pixel(x=16, y=14), Pixel(x=17, y=14),
           Pixel(x=15, y=15), Pixel(x=16, y=15), Pixel(x=17, y=15) ]
...

The reference value of each output pixel is a list of input pixels that are needed to calculate its value. I do not care about the actual value of the pixels or the mathematical operation that is applied to these inputs.

The Micro-Architecture Model

The source code of the hardware symbolic model can be found here. It has the following main data buffers and FIFOs:

an input stream, generated by gen_input_stream, of 4x4 pixel tiles that are sent super block by super block and then tile by tile.
an output stream of 4x4 pixel tiles with the downsampled image.
a DMA FIFO, modelled with a simple Python list, in which the bottom pixels of a super block are stored and later fetched when the super block of the next row needs the neighboring pixels above.
buffers with above and left neighboring pixels that cover the width and height of a super block.
an output merge FIFO that is used to group a set of 4 2x2 downsampled pixel blocks into a 4x4 tile of pixels

The model loops through the super blocks in scan order and then the tiles in scan order, and for each 4x4 tile it calculates a 2x2 output tile:

for sy in range(nr of vertical super blocks):
    for sx in range(nr of horizontal super blocks):
        for tile_y in range(nr of vertical tiles in a super block):
            for tile_x in range(nr of horizontal tiles in a super block):
                fetch 4x4 tile with input pixels
                calculate 2x2 output pixels
                merge 2x2 output pixels into 4x4 output tiles
                data management

When we look at the inputs that are required to calculate the 4 output pixels for each tile of 16 input pixels, we get the following: in addition to pixels from the tile itself, some output pixels also need input values from the above, above-left and/or left neighbors. When switching from one super block to the next, buffers must be updated with neighboring pixels for the whole width and height of the super block. But instead of storing the values of individual pixels, we can store intermediate sums to reduce the number of values. At first sight, it looks like this reduces the number of values in the above and left neighbor buffers by half, but that’s only true for the above buffer. While the left neighbors can be reduced by half for the current tile, the bottom left pixel value is still needed to calculate the above value for the 4x4 tiles of the next row. So the size of the left buffer is not 1/2 of the size of the super block but 3/4. In the left figure above, the red rectangles contain the components needed for the top-left output pixel, the green for the top-right output pixel etc. The right figure shows the partial sums that must be calculated for the left and above neighbors for future 4x4 tiles. These red, green, blue and purple rectangles have directly corresponding sections in the code.
p00 = ( tile_above_pixels[0], tile_left_pixels[0],
        input_tile[0], input_tile[1],
        input_tile[4], input_tile[5] )

p10 = ( tile_above_pixels[1],
        input_tile[ 1], input_tile[ 2], input_tile[ 3],
        input_tile[ 5], input_tile[ 6], input_tile[ 7] )

p01 = ( tile_left_pixels[1],
        input_tile[ 4], input_tile[ 5],
        input_tile[ 8], input_tile[ 9],
        input_tile[12], input_tile[13] )

p11 = ( input_tile[ 5], input_tile[ 6], input_tile[ 7],
        input_tile[ 9], input_tile[10], input_tile[11],
        input_tile[13], input_tile[14], input_tile[15] )

For each tile, there’s quite a bit of bookkeeping of context values, and reading and writing to buffers, to keep everything going.

Comparing the results

In traditional models, as soon as intermediate values are calculated, the original values can be dropped. In the case of our example, with a filter where all coefficients are 1, the above and left intermediate values of the top-left output pixel are summed and stored as 18 and 11, and the original values of (3,9,6) and (5,6) aren’t needed anymore. This, and the fact that multiple inputs might have the same numerical value, is what makes traditional models often hard to debug. This is not the case for symbolic models, where all input values, the input pixel coordinates, are carried along until the end. In our model, the intermediate results are not removed from the final result. Here is the output result for output pixel (12,10):

...
( # Above neighbor intermediate sum
  (Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19)),
  # Left neighbor intermediate sum
  (Pixel(x=23, y=20), Pixel(x=23, y=21)),
  # Values from the current tile
  Pixel(x=24, y=20), Pixel(x=25, y=20),
  Pixel(x=24, y=21), Pixel(x=25, y=21)
),
...

Keeping the intermediate results makes it easier to debug, but to compare against the reference model, the data with nested lists must be flattened into this:

...
(
  Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19),
  Pixel(x=23, y=20), Pixel(x=23, y=21),
  Pixel(x=24, y=20), Pixel(x=25, y=20),
  Pixel(x=24, y=21), Pixel(x=25, y=21)
),
...

But even that is not enough to compare: the reference value has the 3x3 input values in a scan order that was destroyed by using intermediate values, so there’s a final sorting step to restore the scan order:

...
(
  Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19),
  Pixel(x=23, y=20), Pixel(x=24, y=20), Pixel(x=25, y=20),
  Pixel(x=23, y=21), Pixel(x=24, y=21), Pixel(x=25, y=21)
)

Finally, we can go through all the output tiles of the hardware model and compare them against the tiles of the reference model. If all goes well, the script should give the following:

> ./downscaler.py
PASS!

Any kind of bug will result in an error message like this one:

> ./downscaler.py
MISMATCH! sb(1,0) tile(0,0) (0,0) 1
ref: [Pixel(x=7, y=0), Pixel(x=7, y=0), Pixel(x=8, y=0), Pixel(x=8, y=0), Pixel(x=9, y=0), Pixel(x=9, y=0), Pixel(x=7, y=1), Pixel(x=8, y=1), Pixel(x=9, y=1)]
hw: [Pixel(x=7, y=0), Pixel(x=8, y=0), Pixel(x=8, y=0), Pixel(x=9, y=0), Pixel(x=9, y=0), Pixel(x=7, y=1), Pixel(x=8, y=1), Pixel(x=9, y=1), Pixel(x=7, y=4)]

Conversion to Hardware

The difficulty of converting the Python micro-architectural model to a hardware implementation model depends on the abstraction level of the hardware implementation language. When using C++ and HLS, the effort can be trivial: some of my blocks have a thousand or more lines of Python that can be converted entirely to C++ pretty much line by line. It can take a few weeks to develop and debug the Python model, yet getting the C++ model running only takes a day or two. If the Python model is fully debugged, the only issues encountered are typos made during the conversion and signal precision mistakes. The story is different when using RTL: with HLS, the synthesis-to-Verilog step will convert for loops to FSMs and take care of pipelining.
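The flatten-and-sort step that makes hardware results comparable against the reference can be sketched as follows. This is my own reconstruction (not the repo's code); `Pixel` is assumed to be a namedtuple with `x` and `y` fields, and nested intermediate sums are assumed to be plain tuples of Pixels:

```python
from collections import namedtuple

Pixel = namedtuple("Pixel", ["x", "y"])

def flatten(result):
    """Recursively expand nested intermediate sums into a flat Pixel list."""
    flat = []
    for item in result:
        if isinstance(item, Pixel):
            flat.append(item)
        else:  # a nested intermediate sum (tuple of Pixels)
            flat.extend(flatten(item))
    return flat

def scan_order(pixels):
    """Restore left-to-right, top-to-bottom scan order for comparison."""
    return sorted(pixels, key=lambda p: (p.y, p.x))

# The hardware result for output pixel (12,10) from the text:
hw_result = (
    (Pixel(23, 19), Pixel(24, 19), Pixel(25, 19)),  # above intermediate sum
    (Pixel(23, 20), Pixel(23, 21)),                 # left intermediate sum
    Pixel(24, 20), Pixel(25, 20),                   # current tile
    Pixel(24, 21), Pixel(25, 21),
)

print(scan_order(flatten(hw_result)))
```

After flattening and sorting, the 9 coordinates come out in the same scan order as the reference entry, so a simple equality check suffices for the comparison loop.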
When writing RTL directly, that task falls on you. You could change the Python model and switch to FSMs there to make that step a bit easier. Either way, having flushed out all the data management will allow you to focus on just the RTL specific tasks while being confident that the core architecture is correct.

Combining symbolic models with random input generation

The downscaler example is a fairly trivial unit with a predictable input data stream and a simple algorithm. In a video encoder or decoder, instead of a scan-order stream of 4x4 tiles, the input is often a hierarchical coding tree with variable size coding units that are scanned in quad-tree depth-first order. Dealing with this kind of data stream kicks up the complexity a whole lot. For designs like this, the combination of a symbolic model and a random coding tree generator is a super power that will hit corner case bugs with an efficiency that puts regular models to shame.

Specification changes

The benefits of symbolic models don’t stop with quickly finding corner case bugs. I’ve run into a number of cases where the design requirements weren’t fully understood at the time of implementation and incorrect behavior was discovered long after the hardware model was implemented. By that time, some of the implementation subtleties may have already been forgotten. It’s scary to make changes to a hardware design that has complex data management when corner case bugs take thousands of regression simulations to uncover. If the symbolic model is the initial source of truth, this is usually just not an issue: exhaustive tests can often be run in seconds and once the changes pass there, you have confidence that the corresponding changes in the hardware model source code are sound.

Things to experiment with…

Generating hardware stimuli: I haven’t explored this yet, but it is possible to use a symbolic model to generate stimuli for the hardware model.
All it takes is to replace the symbolic input values (pixel coordinates) with the actual pixel values at those locations and to perform the right mathematical operations on them.

A joint symbolic/hardware model: having a Python symbolic model and a C++ HLS hardware model isn’t a huge deal, but there’s still the effort of converting one into the other. There is a way to have a unified symbolic/hardware model: switch the data type of the input and output values from one that contains symbolic values to one that contains the real values. If C++ is your HLS language, then this requires writing the symbolic model in C++ instead of Python. You’d trade off the rapid iteration time and conciseness of Python against having only a single code base.

Symbolic models are best for block or sub-block level modelling

Since symbolic models carry along the full history of calculated values, they aren’t very practical when modelling multiple larger blocks together: hierarchical lists with tens or more input values create information overload. For this reason, I use symbolic models at the individual block level, or sometimes even the sub-block level when dealing with particularly tricky data management cases. My symbolic model script might contain multiple disjoint models that each implement a sub-block of the same major block, without interacting with each other.

References

symbolic_model repo on GitHub

Conclusion

Symbolic models in Python have been a major factor in boosting my design productivity and increasing my confidence in a micro-architectural implementation. If you need to architect and implement a hardware block with some tricky data management, give them a try, and let me know how it went!

Not all HLS code is written in C++. There are other languages as well. ↩

3 months ago
Making Screenshots of Test Equipment Old and New

Introduction Screenshot Capturing Interfaces Hardware and Software Tools Capturing GPIB data in Talk Only mode TDS 540 Oscilloscope - GPIB - PCL Output HP 54542A Oscilloscope - Parallel Port - PCL or HPGL Output HP Infiniium 54825A Oscilloscope - Parallel Port - Encapsulated Postscript TDS 684B - Parallel Port - PCX Color Output Advantest R3273 Spectrum Analyzer - Parallel Port - PCL Output HP 8753C Vector Network Analyzer - GPIB - HP 8753 Companion Siglent SDS 2304X Oscilloscope - USB Drive, Ethernet or USB

Introduction

Last year, I created Fake Parallel Printer, a tool to capture the output of the parallel printer port of old-ish test equipment so that it can be converted into screenshots for blog posts etc. It’s definitely a niche tool, but of all the projects that I’ve done, it’s the one that has seen the most use. One issue is that converting the captured raw printing data to a bitmap requires recipes that may need quite a bit of tuning. Some output uses HP PCL, other output is Encapsulated Postscript (EPS), and if you’re lucky the output is a standard bitmap format like PCX. In this blog post, I describe the procedures that I use to create screenshots of the test equipment that I personally own, so that I don’t need to figure them out again when I use a device a year later. That doesn’t make it all that useful for others, but somebody may benefit from it when googling for it… As always, I’m using Linux, so the software used below reflects that.

Screenshot Capturing Interfaces

Here are some common ways to transfer screenshots from test equipment to your PC:

USB flash drive: usually the least painful by far, but it only works on modern equipment.
USB cable: requires some effort to set the right udev driver permissions and a script that sends commands that are device specific. But it generally works fine.
Ethernet: still requires slightly modern equipment, and there’s often some configuration pain involved.
RS-232 serial: reliable, but often slow.
Floppy disk: I have a bunch of test equipment with a floppy drive and I also have a USB floppy drive for my PC. However, the drives on all this equipment are broken, in the sense that they can’t correctly write data to a disk. There must be some kind of contamination going on when a floppy drive head isn’t used for decades.
GPIB: requires an expensive interface dongle and I’ve yet to figure out how to make it work for all equipment. Below, I was able to make it work for a TDS 540 oscilloscope, but not for an HP 54542A oscilloscope, for example.
Parallel printer port: available on a lot of old equipment, but it normally can’t be captured by a PC unless you use Fake Parallel Printer. We’re now more than a year later, and I use it all the time. I find it to be the easiest to use of all the printer interfaces.

Hardware and Software Tools

GPIB to USB Dongle

If you want to print to GPIB, you’ll need a PC to GPIB interface. These days, the cheapest and most common are GPIB to USB dongles. I’ve written about those here and here. The biggest take-away is that they’re expensive (>$100 second hand) and hard to configure when using Linux. And as mentioned above, I have only had limited success using them in printer mode.

ImageMagick

ImageMagick is the Swiss Army knife of bitmap file processing. It has a million features, but I primarily use it for file format conversion and image cropping. I doubt that there’s any major Linux distribution that doesn’t have it as a standard package…

sudo apt install imagemagick

GhostPCL

GhostPCL is used to decode PCL files. On many old machines, these files are created when printing to a Thinkjet, Deskjet or Laserjet. Installation:

Download the GhostPCL/GhostPDL source code.
Compile:

cd ~/tools
tar xfv ~/Downloads/ghostpdl-10.03.0.tar.gz
cd ghostpdl-10.03.0/
./configure --prefix=/opt/ghostpdl
make -j$(nproc)
export PATH="/opt/ghostpdl/bin:$PATH"

Install:

sudo make install

A whole bunch of tools will now be available in /opt/ghostpdl/bin, including gs (Ghostscript) and gpcl6.

hp2xx

hp2xx converts HPGL files, originally intended for HP plotters, to bitmaps, EPS etc. It’s available as a standard package for Ubuntu:

sudo apt install hp2xx

Inkscape

Inkscape is a full-featured vector drawing app, but it can also be used as a command line tool to convert vector content to bitmaps. I use it to convert Encapsulated Postscript (EPS) files to bitmaps. Like other well known tools, installation on Ubuntu is simple:

sudo apt install inkscape

HP 8753C Companion

This tool is specific to HP 8753 vector network analyzers. It captures HPGL plotter commands, extracts the data, recreates what’s displayed on the screen, and allows you to interact with it. It’s available on GitHub.

Capturing GPIB data in Talk Only mode

Some devices will only print to GPIB in Talk Only mode, or sometimes it’s just easier to use them that way. When the device is in Talk Only mode, the PC GPIB controller becomes a Listen Only device, a passive observer that doesn’t initiate commands but just receives data. I wrote the following script to record the printing data and save it to a file:

gpib_talk_to_file.py:

#! /usr/bin/env python3
import sys
import pyvisa

gpib_addr = int(sys.argv[1])
output_filename = sys.argv[2]

rm = pyvisa.ResourceManager()
inst = rm.open_resource(f'GPIB::{gpib_addr}')

try:
    # Read data from the device
    data = inst.read_raw()
    with open(output_filename, 'wb') as file:
        file.write(data)
except pyvisa.VisaIOError as e:
    print(f"Error: {e}")

PyVISA is a universal library to talk to test equipment. I wrote about it here.
It will quickly time out when no data arrives in Talk Only mode, but since all data transfers happen with a valid-ready protocol, you can avoid time-out issues by pressing the hardcopy or print button on your oscilloscope first, and only then launching the script above. This will work as long as the printing device doesn't go silent in the middle of printing a page.

TDS 540 Oscilloscope - GPIB - PCL Output
My old TDS 540 oscilloscope doesn't have a printer port, so I had to make do with GPIB. Unlike later versions of the TDS series, it also doesn't have the ability to export bitmaps directly, but it has outputs for:

Thinkjet, Deskjet, and Laserjet in PCL format
Epson in ESC/P format
Interleaf format
EPS Image format
HPGL plotter format

The TDS 540 has a screen resolution of 640x480. I found the Thinkjet output format, with a DPI of 75x75, easiest to deal with. The device adds a margin of 20 pixels to the left, and 47 pixels at the top, but those can be removed with ImageMagick. With a GPIB address of 11, the overall recipe looks like this:

# Capture the PCL data
gpib_talk_to_file.py 11 tds540.thinkjet.pcl
# Convert PCL to png
gpcl6 -dNOPAUSE -sOutputFile=tds540.png -sDEVICE=png256 -g680x574 -r75x75 tds540.thinkjet.pcl
# Remove the margins and crop the image to 640x480
convert tds540.png -crop 640x480+20+47 tds540.crop.png

The end result looks like this:

HP 54542A Oscilloscope - Parallel Port - PCL or HPGL Output
This oscilloscope was a ridiculous $20 bargain at the Silicon Valley Electronics Flea Market and it's the one I love working with the most: the user interface is just so smooth and intuitive. Like all other old oscilloscopes, the biggest thing going against it is the amount of desk space it requires. It has a GPIB, RS-232, and Centronics parallel port, and all 3 can be used for printing.
I tried to get printing to GPIB to work but wasn't successful: I'm able to talk to the device and send commands like "*IDN?" and get a reply just fine, but the GPIB script that works fine with the TDS 540 always times out eventually. I switched to my always reliable Fake Parallel Printer and that worked fine. There's also the option to use the serial cable.

The printer settings menu can be accessed by pressing the Utility button and then the top soft-button with the name "HPIB/RS232/CENT CENTRONICS". You have the following options:

ThinkJet
DeskJet75dpi, DeskJet100dpi, DeskJet150dpi, DeskJet300dpi
LaserJet
PaintJet
Plotter

Unlike with the TDS 540, I wasn't able to get the ThinkJet option to convert into anything, but the DeskJet75dpi option worked fine with this recipe:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f hp54542a_ -s deskjet.pcl -v
gpcl6 -dNOPAUSE -sOutputFile=hp54542a.png -sDEVICE=png256 -g680x700 -r75x75 hp54542a_0.deskjet.pcl
convert hp54542a.png -crop 640x388+19+96 hp54542a.crop.png

The 54542A doesn't just print out the contents of the screen, it also prints the date and adds the settings for the channels that are enabled, trigger options etc. The size of these additional values depends on how many channels and other parameters are enabled.

When you select PaintJet or Plotter as output device, you have the option to select different colors for regular channels, math channels, graticule, markers etc. So it is possible to create nice color screenshots from this scope, even if the CRT is monochrome. I tried the PaintJet option, and while gpcl6 was able to extract an image, the output was much worse than the DeskJet option. I had more success using the Plotter option. It prints out a file in HPGL format that can be converted to a bitmap with hp2xx.
The following recipe worked for me:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f hp54542a_ -s plotter.hpgl -v
hp2xx -m png -a 1.4 --width 250 --height 250 -c 12345671 -p 11111111 hp54542a_0.plotter.hpgl

I'm not smitten with the way it looks, but if you want color, this is your best option. The command line options of hp2xx are not intuitive. Maybe it's possible to get this to look a bit better with some other options.

Click to enlarge

HP Infiniium 54825A Oscilloscope - Parallel Port - Encapsulated Postscript
This indefinite-loaner-from-a-friend oscilloscope has a small PC in it that runs an old version of Windows. It can be connected to Ethernet, but I've never done that: capturing parallel printer traffic is just too convenient. On this oscilloscope, I found that printing things out as Encapsulated Postscript was the best option. I then use Inkscape to convert the screenshot to PNG:

./fake_printer.py --port=/dev/ttyACM0 -t 2 -v --prefix=hp_osc_ -s eps
inkscape -f ./hp_osc_0.eps -w 1580 -y=255 -e hp_osc_0.png
convert hp_osc_0.png -crop 1294x971+142+80 hp_osc_0_cropped.png

Ignore the part circled in red, that was added in post for an earlier article:

Click to enlarge

TDS 684B - Parallel Port - PCX Color Output
I replaced my TDS 540 oscilloscope with a TDS 684B. On the outside they look identical. They also have the same core user interface, but the 684B has a color screen, a bandwidth of 1GHz, and a sample rate of 5 Gsps.

Print formats
The 684B also has a lot more output options:

Thinkjet, Deskjet, DeskjetC (color), Laserjet output in PCL format
Epson in ESC/P format
DPU thermal printer
PC Paintbrush mono and color in PCX file format
TIFF file format
BMP mono and color format
RLE color format
EPS mono and color printer format
EPS mono and color plotter format
Interleaf .img format
HPGL color plot

Phew. Like the HP 54542A, my unit has GPIB, parallel port, and serial port. It can also write out the files to a floppy drive. So which one to use?
BMP is an obvious choice and supported natively by all modern PCs. The only issue is that it gets written out without any compression, so it takes over 130 seconds to capture with fake printer. PCX is a very old bitmap file format (I used it way back in 1988 on my first Intel 8088 PC), but it compresses with run-length encoding, which works great on oscilloscope screenshots. It only takes 22 seconds to print. I tried the TIFF option and was happy to see that it only took 17 seconds, but the output was monochrome. So for color bitmap files, PCX is the way to go. The recipe:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f tds684_ -s pcx -v
convert tds684_0.pcx tds684.png

The screenshot above uses the Normal color setting. The scope also has a Bold color setting:

There's a Hardcopy option as well:

It's a matter of personal taste, but my preference is the Normal option.

Advantest R3273 Spectrum Analyzer - Parallel Port - PCL Output
Next up is my Advantest R3273 spectrum analyzer. It has a printer port, a separate parallel port that I don't know the purpose of, a serial port, a GPIB port, and a floppy drive that refuses to work. However, in the menus I can only configure prints to go to the floppy or to the parallel port, so fake parallel printer is what I'm using. The print configuration menu can be reached by pressing: [Config] -> [Copy Config] -> [Printer]:

The R3273 supports a bunch of formats, but I had the hardest time getting it to create a color bitmap. After a lot of trial and error, I ended up with this:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f r3273_ -s pcl -v
gpcl6 -dNOPAUSE -sOutputFile=r3273_tmp.png -sDEVICE=png256 -g4000x4000 -r600x600 r3273_0.pcl
convert r3273_tmp.png -filter point -resize 1000 r3273_filt.png
rm r3273_tmp.png
convert r3273_filt.png -crop 640x480+315+94 r3273.png
rm r3273_filt.png

The conversion loses something in the process.
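The -filter point -resize 1000 step in the recipe above downsamples 4-to-1 by picking a single source pixel per output pixel instead of averaging neighboring pixels. Here's a minimal plain-Python sketch of that kind of point sampling; the pixel values and phase offsets are made up for illustration and aren't taken from an actual R3273 hardcopy:

```python
def point_downsample(pixels, factor=4, phase=0):
    """Downsample a 2D list of pixels by keeping exactly one pixel
    per factor x factor block, at offset `phase` within each block."""
    return [row[phase::factor] for row in pixels[phase::factor]]

# An 8x8 crop of a hypothetical 4x4 dither pattern: 0 = black, 255 = white,
# with one black pixel in the top-left corner of each 4x4 block.
dither = [[0 if (x % 4 == 0 and y % 4 == 0) else 255 for x in range(8)]
          for y in range(8)]

# Phase 0 hits the black pixel of every block: the area comes out solid black.
assert point_downsample(dither, 4, 0) == [[0, 0], [0, 0]]
# Phase 1 misses it: the same area comes out solid white.
assert point_downsample(dither, 4, 1) == [[255, 255], [255, 255]]
```

The crop offsets in the recipe play the role of the phase here: they shift the sampling grid so the pixels you care about are the ones that survive.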
The R3273 hardcopy mimics the shades of depressed buttons with a 4x4 pixel dither pattern: If you use a 4x4 pixel box filter and downsample by a factor of 4, this dither pattern converts to a nice uniform gray, but the actual spectrum data gets filtered down as well: With the recipe above, I'm using 4x4-to-1 pixel point-sampling instead, with a phase that is chosen just right so that the black pixels of the dither pattern get picked. The highlighted buttons are now solid black and everything looks good.

HP 8753C Vector Network Analyzer - GPIB - HP 8753 Companion
My HP 8753C VNA only has a GPIB interface, so there's not a lot of choice there. I'm using HP 8753 Companion. It can be used for much more than just grabbing screenshots: you can save the measured data to a file, upload calibration kit data and so on. It's great! You can render the screenshot the way it was plotted by the HP 8753C, like this:

Click to enlarge

Or you can display it in a high resolution mode, like this:

Click to enlarge

The default color settings for the HPGL plot aren't ideal, but everything is configurable. If you don't have one, the existence of HP 8753 Companion alone is a good reason to buy a USB-to-GPIB dongle.

Click to enlarge

Siglent SDS 2304X Oscilloscope - USB Drive, Ethernet or USB
My Siglent SDS 2304X was my first oscilloscope. It was designed 20 years later than all the other stuff, with a modern UI and modern interfaces such as USB and Ethernet. There is no GPIB, parallel or RS-232 serial port to be found. I don't love the scope. The UI can become slow when you're displaying a bunch of data on the screen, and selecting anything from a menu with a detentless rotary knob can be the most infuriating experience. But it's my daily driver because it's not a boat anchor: even on my messy desk, I can usually create room to put it down without too much effort.
You'd think that I use USB or Ethernet to grab screenshots, but most of the time I just use a USB stick and shuttle it back and forth between the scope and the PC. That's because setting up the connection is always a bit of a pain. However, if you insist, you can set things up this way:

Ethernet
To configure Ethernet, you need to go to [Utility] -> [Next Page] -> [I/O] -> [LAN]. Unlike my HP 1670G logic analyzer, the Siglent supports DHCP, but while writing this blog post, the scope refused to grab an IP address on my network. No amount of rebooting, disabling and re-enabling DHCP helped. I have gotten it to work in the past, but today it just wasn't happening. You'll probably understand why using a zero-configuration USB stick becomes an attractive alternative.

USB
If you want to use USB, you need an old relic of a USB-B cable. It shows up like this:

sudo dmesg -w
[314170.674538] usb 1-7.1: new full-speed USB device number 11 using xhci_hcd
[314170.856450] usb 1-7.1: not running at top speed; connect to a high speed hub
[314170.892455] usb 1-7.1: New USB device found, idVendor=f4ec, idProduct=ee3a, bcdDevice= 2.00
[314170.892464] usb 1-7.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[314170.892469] usb 1-7.1: Product: SDS2304X
[314170.892473] usb 1-7.1: Manufacturer: Siglent Technologies Co,. Ltd.
[314170.892476] usb 1-7.1: SerialNumber: SDS2XJBD1R2754

Note 3 key parameters:

USB vendor ID: f4ec
USB product ID: ee3a
Product serial number: SDS2XJBD1R2754

Set up udev rules so that you can access this device over USB without requiring root permission by creating an /etc/udev/rules.d/99-usbtmc.rules file and adding the following line:

SUBSYSTEM=="usb", ATTR{idVendor}=="f4ec", ATTR{idProduct}=="ee3a", MODE="0666"

You should obviously replace the vendor ID and product ID with the ones for your device.
Make the new udev rules active:

sudo udevadm control --reload-rules
sudo udevadm trigger

You can now download screenshots with the following script:

siglent_screenshot_usb.py: (Click to download)

#!/usr/bin/env python3
import argparse
import io

import pyvisa
from PIL import Image

def screendump(filename):
    rm = pyvisa.ResourceManager('')
    # Siglent SDS2304X
    scope = rm.open_resource('USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR')
    scope.read_termination = None
    scope.write('SCDP')
    data = scope.read_raw(2000000)
    image = Image.open(io.BytesIO(data))
    image.save(filename)
    scope.close()
    rm.close()

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Grab a screenshot from a Siglent DSO.')
    parser.add_argument('--output', '-o', dest='filename', required=True,
                        help='the output filename')
    args = parser.parse_args()
    screendump(args.filename)

Once again, take note of this line:

scope = rm.open_resource('USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR')

and don't forget to replace 0xF4EC, 0xEE3A, and SDS2XJBD1R2754 with the correct USB vendor ID, product ID and serial number. Call the script like this:

./siglent_screenshot_usb.py -o siglent_screenshot.png

If all goes well, you'll get something like this:
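The VISA resource string is just the three dmesg values pasted into a fixed template (the leading USB0 is the interface index, which is typically 0 for a single directly attached instrument). A small helper, purely illustrative and not part of the script above, makes the mapping explicit:

```python
def visa_usb_resource(vendor_id: int, product_id: int, serial: str) -> str:
    """Build a pyvisa USBTMC resource string from the USB vendor ID,
    product ID and serial number reported by dmesg."""
    return f'USB0::0x{vendor_id:04X}::0x{product_id:04X}::{serial}::INSTR'

# The values reported by dmesg for the Siglent SDS2304X:
res = visa_usb_resource(0xF4EC, 0xEE3A, 'SDS2XJBD1R2754')
assert res == 'USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR'
```

If you'd rather not assemble the string by hand, pyvisa's ResourceManager.list_resources() returns the resource strings of all instruments it can see, which is an easy way to double-check.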

A Hardware Interposer to Fix the Symmetricom SyncServer S200 GPS Week Number Rollover Problem

Introduction
IMPORTANT: Use the Right GPS Antenna!
The Problem: SyncServer Refuses to Lock to GPS
The GPS Week Number Rollover Issue
Making the Furuno GT-8031 Work Again
How It Works
Build Instructions
Power Supply Recapping
The Future: A Software-Only Solution
The Result
References
Footnotes

Introduction
In my earlier blog post, I wrote about how to set up a SyncServer S200 as a regular NTP server, and how to install the backside BNC connectors to bring out the 10 MHz and 1PPS outputs. The ultimate goal is to use the SyncServer as a lab timing reference, but at the end of that blog post, it's clear that using NTP alone is not good enough to get a precise 10 MHz clock: the output frequency was off by almost 100Hz! To get a more accurate output clock, you need to synchronize the SyncServer to the GPS system so that it becomes a GPS disciplined oscillator (GPSDO) and a stratum 1 time keeping device. The S200 has a GPS antenna input and a GPS receiver module inside, so in theory this should be a matter of connecting the right GPS antenna. But in practice it wasn't simple at all, because the GPS module in the SyncServer S200 is so old that it suffers from the so-called Week Number Roll-Over (WNRO) problem. In this blog post, I'll discuss what the WNRO problem is all about and show my custom hardware solution that fixed the problem.

IMPORTANT: Use the Right GPS Antenna!
Let me once again point out the importance of using the right GPS antenna, to avoid damaging it permanently due to over-voltage. GPS antennas have active elements that amplify the received signal right at the point of reception before sending it down the cable to the GPS unit. Most antennas need 3.3V or 5V that is supplied through the GPS antenna connector, but the Symmetricom S200 supplies 12V! Make sure you are using a 12V GPS antenna! Check out my earlier blog post for more information.
The Problem: SyncServer Refuses to Lock to GPS
When you connect a GPS antenna to a SyncServer in its original configuration, everything seems to go fine initially. The front panel reports the antenna connection as "Good", a few minutes later the number of satellites detected goes up, and the right location gets reported. But the most important "Status" field remains stuck in the "Unlocked" state, which means that the SyncServer refuses to lock its internal clock to the GPS unit. This issue has been discussed to death in a number of EEVblog forum threads, but the conclusion is always the same: the Furuno GT-8031 GPS module suffers from the GPS Week Number Roll-Over (WNRO) issue and nothing can be done about it other than replacing the GPS module with an after-market replacement.

The GPS Week Number Rollover Issue
The original GPS system used a 10-bit number to count the number of weeks that have elapsed since January 6, 1980. Every 19.7 years, this number rolls over from 1023 back to 0. The first rollover happened on August 21, 1999, the second on April 6, 2019, and the next one will be on November 20, 2038. Check out this US Naval Observatory presentation for some more information. GPS module manufacturers have dealt with the issue by using a dynamic base year or variable pivot year. Let's say that a device is designed at the start of 2013, during week 697 of the 19.7-year epoch that started in 1999. The device then assumes that all week numbers higher than 697 are for the years 2013 to 2019, and that numbers from 0 to 697 are for the years 2019 and later. Such a device will work fine for the 19.7 years from 2013 until 2032. And with just a few bits of non-volatile storage, it is even possible to make a GPS unit robust against this kind of rollover forever: if the last date seen by the GPS unit was in 2019 and it suddenly sees a date of 1999, it can infer that there was a rollover and record in the storage that the next GPS epoch has started.
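The pivot scheme described above is just modular arithmetic: pick the smallest absolute week number that matches the received 10-bit value and is not older than the module's base date. Here's a minimal sketch in Python; the function is my own illustration, not Furuno code, with only the GT-8031's February 2, 2003 pivot date taken from this article:

```python
from datetime import date, timedelta

GPS_EPOCH = date(1980, 1, 6)  # start of GPS week 0

def rollover_corrected_date(raw_week: int, base_date: date) -> date:
    """Map a 10-bit GPS week number to a date, assuming the true date
    is not older than base_date (the module's pivot date)."""
    base_week = (base_date - GPS_EPOCH).days // 7
    # Smallest absolute week >= base_week that equals raw_week mod 1024.
    abs_week = raw_week + 1024 * ((base_week - raw_week + 1023) // 1024)
    return GPS_EPOCH + timedelta(weeks=abs_week)

gt8031_base = date(2003, 2, 2)   # the GT-8031's pivot date
base_week = (gt8031_base - GPS_EPOCH).days // 7

# The 1024-week window starting at the pivot ends on September 18, 2022,
# which matches the GT-8031 rollover date.
assert GPS_EPOCH + timedelta(weeks=base_week + 1024) == date(2022, 9, 18)

# A raw week number from after the April 2019 rollover still maps to the
# correct date, because it falls inside the module's window.
assert rollover_corrected_date(39, gt8031_base) == date(2020, 1, 5)
```

The same arithmetic also shows why the module breaks down past September 2022: any later date falls outside the 1024-week window, so its raw week number wraps back to the start of the window.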
Unfortunately, many modules don't do that. The only way to fix the issue is either to update the module firmware or to have some external device tell the GPS module about the current GPS epoch. Many SyncServer S2xx devices shipped with a Motorola M12 compatible module that is based on a Furuno GT-8031, which has a starting date of February 2, 2003 and rolled over on September 18, 2022. You can send a command to the module that hints at the current date and that fixes the issue, but there is no SyncServer S200 firmware that supports that. Check out this Furuno technical document for more rollover details. The same document also tells us how to adjust the rollover date. It depends on the protocol that is supported by the module. For a GT-8031, you need to use the ZDA command; other modules require the @@Gb or the TIME command. If you want to run a SyncServer for its intended purpose, a time server, it is of course important that you get the correct date. But if you don't care about the date, because the primary purpose is to use it as a GPSDO, then the 1PPS output from the GPS module should be sufficient to drive a PLL that locks the internal 10 MHz oscillator to the GPS system. This 1PPS signal is still present on the GT-8031 of my unit, and I verified with an oscilloscope that it matches the 1PPS output of my TM4313 GPSDO both in frequency and in phase as soon as it sees a couple of satellites. But there is something in the SyncServer firmware that depends on more than just the 1PPS signal, because my S200 refuses to enter "GPS Locked" mode, and the 10 MHz oscillator stays in free-running mode at the miserable frequency of roughly 9,999,993 Hz.

Making the Furuno GT-8031 Work Again
There are aftermarket replacement modules out there with a rollover date that is far into the future, but they are priced pretty high.
There's the iLotus IL-GPS-0030-B module, which goes for around $100 on AliExpress, but that one has a rollover date in August 2024, and other modules go as high as $240. The reason for these high prices is that these modules don't use a $5 location GPS chip but specialized ones that are designed for accurate time keeping, such as the u-blox NEO/LEA series. Instead of solving the problem with money, I wondered if it was possible to make the GT-8031 send the right date to the S200 with a hardware interposer that sits between the module and the motherboard. There were 2 options:

1. Intercept the date sent from the GPS module, correct it, and transmit it to the motherboard. I tried that, but didn't get it to work.
2. Send a configuration command to the GPS module to set the right date. This method was suggested by Alex Forencich on the time-nuts mailing list. He implemented it by patching the firmware of a microcontroller on his SyncServer S350.

His solution might eventually be the best one, since it doesn't require extra hardware, but by the time he posted his message, my interposer was already up and running on my desk. It took 2 PCB spins, but I eventually came up with the following solution:

Click to enlarge

In the picture above, you see the GT-8031 plugged into my interposer, which is in turn plugged into the motherboard. The interposer itself looks like this:

The design is straightforward: an RP2040-Zero, a smaller variant of the Raspberry Pi Pico, puts itself between the serial TX and RX wires that normally go between the module and the motherboard. It's up to the software that runs on the RP2040 to determine what to do with the data streams that run over those wires. There are a few other connectors: the one at the bottom right is for observing the signals with a logic analyzer. There are also 2 connectors for power. When finally installed, the interposer gets powered with a 5V supply that's available on a pin that is conveniently located right behind the GPS module.
In the picture above, the red wire provides the 5V; the ground is connected through the screws that hold the board in place. The total cost of the board is as follows:

PCB: $2 for 5 PCBs + $1.50 shipping = $3.50
RP2040-Zero: $9 on Amazon
2 5x2 connectors: $5 on Mouser + $5 shipping = $10
Total: $22.50

The full project details can be found in my gps_interposer GitHub repo.

How It Works
To arrive at a working solution, I recorded all the transactions on the serial port RX and TX and ran them through a decoder script to convert them into readable GPS messages. Here are the messages that are exchanged between the motherboard and the GPS module after powering up the unit:

>>> @@Cf - set to defaults command: [], [37], [13, 10] - 7
>>> @@Gf - time raim alarm message: [0, 10], [43], [13, 10] - 9
>>> @@Aw - time correction select: [1], [55], [13, 10] - 8
>>> @@Bp - request utc/ionospheric data: [0], [50], [13, 10] - 8
>>> @@Ge - time raim select message: [1], [35], [13, 10] - 8
>>> @@Gd - position control message: [3], [32], [13, 10] - 8
<<< @@Aw - time correction select: [1], [55], [13, 10] - 8
    time_mode:UTC
<<< @@Co - utc/ionospheric data input: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [44], [13, 10] - 29
    alpha0:0, alpha1:0, alhpa2:0, alpha3:0
    beta0:0, beta1:0, alhpa2:0, beta3:0
    A0:0, A1:0
    delta_tls:0, tot:0, WNt:0, WNlsf:0, DN:0, delta_Tlsf:0

The messages above can be seen in the blue and green rectangles of this logic analyzer screenshot:

Click to enlarge

Earlier, we saw that the GT-8031 module itself needs the ZDA command to set the time. This is an NMEA command. The messages above are Motorola M12 commands, however. On the M12 module that contains the GT-8031, there is also a TI M430F147 microcontroller that takes care of the conversion between Motorola and NMEA commands. Note how the messages that arrive at the interposer immediately get forwarded to the other side.
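The Motorola binary messages have a simple framing: two '@' characters, a two-letter command ID, the data bytes, a one-byte checksum, and CR LF. As far as I can tell from the decoded dump above, the checksum (the first bracketed value) is simply the XOR of the command and data bytes. A sketch of a checker; the function is my own, but the expected values are taken straight from the dump:

```python
from functools import reduce

def m12_checksum(command: str, data: bytes) -> int:
    """XOR of the two command characters and the data bytes of a
    Motorola M12 '@@' message (the byte just before the CR LF trailer)."""
    return reduce(lambda acc, b: acc ^ b, command.encode('ascii') + data, 0)

# Cross-check against the checksums in the decoded dump:
assert m12_checksum('Cf', b'') == 37            # @@Cf - set to defaults
assert m12_checksum('Gf', bytes([0, 10])) == 43  # @@Gf - time raim alarm
assert m12_checksum('Aw', bytes([1])) == 55      # @@Aw - time correction select
```

The same function can of course be used to generate the checksum when composing a message, which is what an interposer injecting its own commands has to do.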
But there is one transaction, marked in red and generated by the interposer itself, that sends the @@Gb command. When the GPS module is not yet locked to a satellite, this command sends an initial estimate of the current date and time. The M12 User Guide has the following to say about this command:

The interposer sends a hinted date of May 4, 2024. When the GPS module receives the date from its first satellite, it corrects the date and time to the right value, but it uses the initial estimated date to correct for the week number rollover! I initially placed the @@Gb command right after the @@Cf command that resets the module to default values, but that didn't work. The solution was to send it after the initial burst of configuration commands. With this fix in place, it still takes almost 15 minutes before the S200 enters GPS lock. You can see this on the logic analyzer by an increased rate of serial port traffic:

Click to enlarge

There's also the immense satisfaction of seeing the GPS status change to "Locked":

Build Instructions
Disclaimer: it's easy to make mistakes and permanently damage your SyncServer. Build this at your own risk!

All design assets for this project can be found in my gps_interposer project on GitHub. There is a KiCAD project with schematic, PCB design and gerber files. A PDF with the schematics can be found here. The design consists of a few connectors, some resistors and the RP2040-Zero. If you just want the Gerbers, you can use gps_interposer_v2.zip. The interposer is connected to the motherboard and to the M12 module with 2x5 pin 0.050" connectors, one male and one female. But because it was difficult to come up with the right height for the male connector, I chose to use 2 female connectors, available here on Mouser, with a loose male-to-male connector between them to make the connection.
The trickiest part of the build is soldering the 2x5 pin connector that connects to the motherboard: it needs to be positioned reasonably well, otherwise the alignment of the screw holes will be out of whack. I applied some solder paste on the solder pads, carefully placed the connector on top of the paste, and used a soldering iron to melt the paste without moving the connector. It wasn't too bad.

Under normal circumstances, the board is powered by the 5V rail that's present right next to the GPS module. However, when you plug a USB-C cable into the RP2040-Zero, you need to make sure to remove that connection, because otherwise you'll short out the 5V rail of the S200! The board has 2 connectors for the 5V: one at the front next to the USB-C connector and one at the back, right above the 5V of the motherboard. The goal was to plug in the power there, but it turned out to be easier to use a patch wire between the motherboard and the front power connector. You'll need to program the RP2040-Zero with the firmware, of course. You can find it in the sw/rp2040_v2 directory of my gps_interposer GitHub repo.

Power Supply Recapping
If you plan to deploy the SyncServer in always-on mode, you should seriously consider replacing the capacitors in the power supply unit: they are known to be leaky, and if things really go wrong they can be a fire hazard. The power supply board can easily be removed with a few screws. In my version, the capacitors were held in place with some hard sticky goo, so it takes some effort to remove them.

The Future: A Software-Only Solution
My solution uses a cheap piece of custom hardware. Alex's solution currently requires patching some microcontroller firmware and then flashing this firmware with a $30 programming dongle. So both solutions require some hardware. A software-only solution should be possible though: the microcontroller gets reprogrammed during an official SyncServer software update procedure.
It should be possible to insert the patched microcontroller firmware into an existing software update package and then do an upgrade. Since the software update package is little more than a tar archive of the Linux disk image that runs on the embedded PC, it shouldn't be very hard to make this happen, but right now this doesn't exist, and I'm happy with what I have.

The Result
The video below shows the 10 MHz output of the S200 being measured by a frequency counter that uses a calibrated stand-alone frequency standard as reference clock. The stability of the S200 seems slightly worse than that of my TM4313 GPSDO, but it's close. Right now, I don't have the knowledge yet to measure and quantify these differences in a scientifically acceptable way.

References
Microsemi SyncServer S200 datasheet
Microsemi SyncServer S200, S250, S250i User Guide
EEVblog - Symmetricom S200 Teardown/upgrade to S250
EEVblog - Symmetricom SyncServer S350 + Furuno GT-8031 timing GPS + GPS week rollover
EEVblog - SyncServer S200 GPS lock question
Furuno GPS/GNSS Receiver GPS Week Number Rollover

Footnotes



A tricky Commodore PET repair: tracking down 6 1/2 bad chips

In 1977, Commodore released the PET computer, a quirky home computer that combined the processor, a tiny keyboard, a cassette drive for storage, and a trapezoidal screen in a metal unit. The Commodore PET, the Apple II, and Radio Shack's TRS-80 started the home computer market with ready-to-run computers, systems that in retrospect were called the 1977 Trinity. I did much of my early programming on the PET, so when someone offered me a non-working PET a few years ago, I took it for nostalgic reasons. You'd think that a home computer would be easy to repair, but it turned out to be a challenge.1 The chips in early PETs are notorious for failures and, sure enough, we found multiple bad chips. Moreover, these RAM and ROM chips were special designs that are mostly unobtainable now. In this post, I'll summarize how we repaired the system, in case it helps anyone else.

When I first powered up the computer, I was greeted with a display full of random characters. This was actually reassuring since it showed that most of the computer was working: not just the monitor, but the video RAM, character ROM, system clock, and power supply were all operational.

The Commodore PET started up, but the screen was full of garbage.

With an oscilloscope, I examined signals on the system bus and found that the clock, address, and data lines were full of activity, so the 6502 CPU seemed to be operating. However, some of the data lines had three voltage levels, as shown below. This was clearly not good, and suggested that a chip on the bus was messing up the data signals.

The scope shows three voltage levels on the data bus.

Some helpful sites online7 suggested that if a PET gets stuck before clearing the screen, the most likely cause is a failure of a system ROM chip.
Fortunately, Marc has a Retro Chip Tester, a cool device designed to test vintage ICs: not just 7400-series logic, but also vintage RAMs and ROMs. Moreover, the tester knows the correct ROM contents for a ton of old computers, so it can tell if a PET ROM has the right contents. The Retro Chip Tester showed that two of the PET's seven ROM chips had failed. These chips are MOS Technology MPS6540s, a 2K×8 ROM with a weird design that is incompatible with standard ROMs. Fortunately, several people make adapter boards that let you substitute a standard 2716 EPROM, so I ordered two adapter boards, assembled them, and Marc programmed the 2716 EPROMs from online data files. The 2716 EPROM requires a bit more voltage to program than Marc's programmer supported, but the chips seemed to have the right contents (foreshadowing).

The PET opened, showing the motherboard.

The PET's case swings open with an arm at the left to hold it open like a car hood. The first two rows of chips at the front of the motherboard are the RAM chips. Behind the RAM are the seven ROM chips; two have been replaced by the ROM adapter boards. The 6502 processor is the large black chip behind the ROMs, toward the right.

With the adapter boards in place, I powered on the PET with great expectations of success, but it failed in precisely the same way as before, failing to clear the garbage off the screen. Marc decided it was time to use his Agilent 1670G logic analyzer to find out what was going on. (Dating back to 1999, this logic analyzer is modern by Marc's standards.) He wired up the logic analyzer to the 6502 chip, as shown below, so we could track the address bus, data bus, and the read/write signal. Meanwhile, I disassembled the ROM contents using Ghidra, so I could interpret the logic analyzer output against the assembly code. (Ghidra is a program for reverse-engineering software that was developed by the NSA, strangely enough.)

Marc wired up the logic analyzer to the 6502 chip.
The logic analyzer provided a trace of every memory access from the 6502 processor, showing what it was executing. Everything went well for a while after the system was turned on: the processor jumped to the reset vector location, did a bit of initialization, tested the memory, but then everything went haywire. I noticed that the memory test failed on the first byte. Then the software tried to get more storage by garbage collecting the BASIC program and variables. Since there wasn't any storage at all, this didn't go well and the system hung before reaching the code that clears the screen. We tested the memory chips, using the Retro Chip Tester again, and found three bad chips. Like the ROM chips, the RAM chips are unusual: MOS Technology 6550 static RAM chips, 1K×4. By removing the bad chips and shuffling the good chips around, we reduced the 8K PET to a 6K PET. This time, the system booted, although there was a mysterious 2×2 checkerboard symbol near the middle of the screen (foreshadowing). I typed in a simple program to print "HELLO", but the results were very strange: four floating-point numbers, followed by a hang.

This program didn't work the way I expected.

This behavior was very puzzling. I could successfully enter a program into the computer, which exercises a lot of the system code. (It's not like a terminal, where echoing text is trivial; the PET does a lot of processing behind the scenes to parse a BASIC program as it is entered.) However, the output of the program was completely wrong, printing floating-point numbers instead of a string. We also encountered an intermittent problem: after turning the computer on, the boot message would sometimes be complete gibberish, as shown below. Instead of the "*** COMMODORE BASIC ***" banner, random characters and graphics would appear.

The garbled boot message.

How could the computer be operating well for the most part, yet also completely wrong? We went back to the logic analyzer to find out.
I figured that the gibberish boot message would probably be the easiest thing to track down, since that happens early in the boot process. Looking at the code, I discovered that after the software tests the memory, it converts the memory size to an ASCII string using a moderately complicated algorithm.² Then it writes the system boot message and the memory size to the screen.

The PET uses a subroutine to write text to the screen. A pointer to the text message is held in memory locations 0071 and 0072. The assembly code below stores the pointer (in the X and Y registers) into these memory locations. (This Ghidra output shows the address, the instruction bytes, and the symbolic assembler instructions.)

For the code above, you'd expect the processor to read the instruction bytes 86 and 71, and then write to address 0071. Next it should read the bytes 84 and 72 and write to address 0072. However, the logic analyzer output below showed that something slightly different happened. The processor fetched instruction bytes 86 and 71 from addresses D5AE and D5AF, then wrote 00 to address 0071, as expected. Next, it fetched instruction bytes 84 and 72 as expected, but wrote 01 to address 007A, not 0072!

  007A  01  0    (a write of 01 to address 007A)

This was a smoking gun. The processor had messed up and there was a one-bit error in the address. Maybe the 6502 processor issued a bad signal or maybe something else was causing problems on the bus. The consequence of this error was that the string pointer referenced random memory rather than the desired boot message, so random characters were written to the screen.

Next, I investigated why the screen had a mysterious checkerboard character. I wrote a program to scan the logic analyzer output to extract all the writes to screen memory. Most of the screen operations made sense—clearing the screen at startup and then writing the boot message—but I found one unexpected write to the screen.
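Returning to the smoking gun for a moment: the faulty pointer write went to 007A instead of 0072, and those two addresses differ in exactly one bit. A quick Python check (my illustration, not part of the original debugging) confirms it:

```python
# Intended and observed write addresses from the logic analyzer trace.
intended, observed = 0x0072, 0x007A
diff = intended ^ observed           # XOR isolates the differing bits
assert diff & (diff - 1) == 0        # a power of two means exactly one bit flipped
print(f"bit {diff.bit_length() - 1} flipped")  # bit 3
```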
In the assembly code below, the Y register should be written to zero-page address 5e, and the X register should be written to the address 66, some locations used by the BASIC interpreter. However, the logic analyzer output below showed a problem. The first line should fetch the opcode 84 from address d3c8, but the processor received the opcode 8c from the ROM, the instruction to write to a 16-bit address. The result was that instead of writing to a zero-page address, the 6502 fetched another byte to write to a 16-bit address. Specifically, it grabbed the STX instruction (86) and used that as part of the address, writing FF (a checkerboard character) to screen memory at 865E³ instead of to the BASIC data structure at 005E. Moreover, the STX instruction wasn't executed, since it was consumed as an address. Thus, not only did a stray character get written to the screen, but data structures in memory didn't get updated. It's not surprising that the BASIC interpreter went out of control when it tried to run the program.

  186600  D3C8  8C  1
  186601  D3C9  5E  1
  186602  D3CA  86  1
  186603  865E  FF  0

We concluded that a ROM was providing the wrong byte (8C) at address D3C8. This ROM turned out to be one of our replacements; the under-powered EPROM programmer had resulted in a flaky byte. Marc re-programmed the EPROM with a more powerful programmer. The system booted, but with much less RAM than expected. It turned out that another RAM chip had failed. Finally, we got the PET to run. I typed in a simple program to generate an animated graphical pattern, a program I remembered from when I was about 13⁴, and generated this output:

Finally, the PET worked and displayed some graphics. Imagine this pattern constantly changing.

In retrospect, I should have tested all the RAM and ROM chips at the start, and we probably could have found the faults without the logic analyzer.
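The way one flaky byte produced the address 865E falls out of the 6502's little-endian operand order; a small sketch (my illustration of the decoding, not the PET's ROM code):

```python
# Bytes as fetched from the flaky ROM: 8C 5E 86.
# The intended sequence was 84 5E (STY zero page) followed by opcode 86 (STX).
# The misread 8C opcode (STY absolute) takes a 16-bit operand, low byte first,
# so the 6502 consumed the STX opcode byte as the address high byte.
stream = [0x8C, 0x5E, 0x86]
addr = stream[1] | (stream[2] << 8)
print(hex(addr))  # 0x865e
```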
However, the logic analyzer gave me an excuse to learn more about Ghidra and the PET's assembly code, so it all worked out in the end.

In the end, the PET had 6 bad chips: two ROMs and four RAMs. The 6502 processor itself turned out to be fine.⁵ The photo below shows the 6 bad chips on top of the PET's tiny keyboard. On the top of each key, you can see the quirky graphical character set known as PETSCII.⁶ As for the title, I'm counting the badly-programmed ROM as half a bad chip, since the chip itself wasn't bad; it had just been programmed incorrectly.

The bad chips sitting on top of the keyboard.

Follow me on Bluesky (@righto.com) or RSS for updates. (I'm no longer on Twitter.) Thanks to Mike Naberezny for providing the PET. Thanks to TubeTime, Mike Stewart, and especially CuriousMarc for help with the repairs. Some useful PET troubleshooting links are in the footnotes.⁷

Footnotes and references

So why did I suddenly decide to restore a PET that had been sitting in my garage since 2017? Well, CNN was filming an interview with Bill Gates and they wanted background footage of the 1970s-era computers that ran the Microsoft BASIC that Bill Gates wrote. Spoiler: I didn't get my computer working in time for CNN, but Marc found some other computers. ↩

Converting a number to an ASCII string is somewhat complicated on the 6502. You can't quickly divide by 10 for the decimal conversion, since the processor doesn't have a divide instruction. Instead, the PET's conversion routine has hard-coded four-byte constants: -100000000, 10000000, -1000000, 100000, -10000, 1000, -100, 10, and -1. The routine repeatedly adds the first constant (i.e. subtracting 100000000) until the result is negative. Then it repeatedly adds the second constant until the result is positive, and so forth. The number of steps gives each decimal digit (after adjustment). The same algorithm is used with the base-60 constants: -2160000, 216000, -36000, 3600, -600, and 60.
This converts the uptime count into hours, minutes, and seconds for the TIME$ variable. (The PET's basic time count is the "jiffy", 1/60th of a second.) ↩

Technically, the address 865E is not part of screen memory, which is 1000 characters starting at address 0x8000. However, the PET uses some shortcuts in address decoding, so 865E ends up the same as 825E, referencing the 7th character of the 16th line. ↩

Here's the source code for my demo program, which I remembered from my teenage programming. It simply displays blocks (black, white, or gray) with 8-fold symmetry, writing directly to screen memory with POKE statements. (It turns out that almost anything looks good with 8-fold symmetry.) The cryptic heart in the first PRINT statement is the clear-screen character.

My program to display some graphics. ↩

I suspected a problem with the 6502 processor because the logic analyzer showed that the 6502 read an instruction correctly but then accessed the wrong address. Eric provided a replacement 6502 chip, but swapping the processor had no effect. However, reprogramming the ROM fixed both problems. Our theory is that the signal on the bus either had a timing problem or a voltage problem, causing the logic analyzer to show the correct value but the 6502 to read the wrong value. Probably the ROM had a weakly-programmed bit, causing the ROM's output for that bit either to sit at an intermediate voltage or to take too long to settle to the correct voltage. The moral is that you can't always trust the logic analyzer if there are analog faults. ↩

The PETSCII graphics characters are now in Unicode in the Symbols for Legacy Computing block. ↩

The PET troubleshooting site was very helpful. The Commodore PET's Microsoft BASIC source code is here, mostly uncommented. I mapped many of the labels in the source code to the assembly code produced by Ghidra to understand the logic analyzer traces. The ROM images are here. Schematics of the PET are here. ↩
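The divide-free conversion described in footnote 2 can be sketched in Python. This is a re-implementation of the algorithm as described, not the PET's ROM code; the constant list follows the footnote:

```python
# Alternating-sign constants, descending powers of ten, per the footnote.
CONSTANTS = [-100_000_000, 10_000_000, -1_000_000, 100_000,
             -10_000, 1_000, -100, 10, -1]

def to_decimal(n: int) -> str:
    """Convert n (0 <= n < 10**9) to a decimal string by repeated addition."""
    digits = []
    for c in CONSTANTS:
        count = 0
        if c < 0:
            while n >= 0:        # add the negative constant until we overshoot
                n += c
                count += 1
            digits.append(count - 1)   # overshot by one step
        else:
            while n < 0:         # add the positive constant until we recover
                n += c
                count += 1
            digits.append(10 - count)  # the adjustment the footnote mentions
    return "".join(map(str, digits)).lstrip("0") or "0"

print(to_decimal(8192))  # 8192
```

The same loop with the base-60 constants yields the digit pairs of HHMMSS for TIME$.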

Humanities Crash Course Week 15: Boethius

In week 15 of the humanities crash course, we started making our way out of classical antiquity and into the Middle Ages. The reading for this week was Boethius's The Consolation of Philosophy, a book perhaps second only to the Bible in influencing Medieval thinking. I used the beautiful edition from Standard Ebooks.

Readings

Boethius was a philosopher, senator, and Christian born shortly after the fall of the Western Roman Empire. After a long, fruitful, and respectable life, he fell out of favor with the Ostrogothic king Theodoric and was imprisoned and executed without a trial. He wrote The Consolation while awaiting execution.

Boethius imagines being visited in prison by a mysterious woman, Lady Philosophy, who helps him put his situation in perspective. He bemoans his luck. Lady Philosophy explains that he can't expect to have good fortune without bad fortune. She evokes the popular image of the Wheel of Fortune, whose turns sometimes bring benefits and sometimes curses. She argues that rather than focusing on fortune, Boethius should focus on the highest good: happiness. She identifies true happiness with God, who transcends worldly goods and standards.

They then discuss free will — does it exist? Lady Philosophy argues that it does and that it doesn't conflict with God's eternal knowledge, since God exists outside of time. And how does one square God's goodness with the presence of evil in the world? Lady Philosophy redefines power and punishment, arguing that the wicked are punished by their evil deeds: what may seem to us like a blessing may actually be a curse. God transcends human categories, including being in time. We can't know God's mind with our limited capabilities — an answer that echoes the Book of Job.

Audiovisual

Music: classical works related to death: Schubert's String Quartet No. 14 and Mozart's Requiem. I hadn't heard the Schubert quartet before; reading about it before listening helped me contextualize the music.
I first heard Mozart's Requiem in one of my favorite movies, Miloš Forman's AMADEUS. It's long been one of my favorite pieces of classical music. A fascinating discovery: while revisiting this piece in Apple's Classical Music app, I learned that the app presents in-line annotations for some popular pieces as the music plays. Listening while reading these notes helped me understand this work better. It's a great example of how digital media can aid understandability.

Art: Hieronymus Bosch, Albrecht Dürer, and Pieter Bruegel the Elder. I knew all three's work, but was more familiar with Bosch and Dürer than with Bruegel. These videos helped:

Cinema: among films possibly related to Boethius, Perplexity recommended Fred Zinnemann's A MAN FOR ALL SEASONS (1966), which won six Academy Awards including best picture. It's a biopic of Sir Thomas More (1478–1535). While well-shot, well-scripted, and well-acted, I found it uneven — but relevant.

Reflections

I can see why Perplexity would suggest pairing this movie with this week's reading. Both Boethius and More were upstanding and influential members of society unfairly imprisoned and executed for crossing their despotic rulers (Theodoric and Henry VIII, respectively).

The Consolation of Philosophy had parallels with the Book of Job: both grapple with God's agency in a world where evil exists. Job's answer is that we're incapable of comprehending the mind of God. Boethius refines the argument by proposing that God exists outside of time entirely, viewing all events in a single, eternal act of knowing.

While less philosophically abstract, the movie casts these themes in a more urgent light. More's crime is being principled and refusing to allow pressure from an authoritarian regime to compromise his integrity. At one point, he says:

I believe, when statesmen forsake their own private conscience for the sake of their public duties… they lead their country by a short route to chaos.
Would that more people in leadership today had More's integrity. That said, learning about the film's historical context makes me think it paints him as more saintly than he likely was. Still, it offers a powerful portrayal of a man willing to pay the ultimate price for staying true to his beliefs.

Notes on Note-taking

ChatGPT failed me for the first time in the course. As I've done throughout, I asked the LLM for summaries and explanations as I read. I soon realized ChatGPT was giving me information for a different chapter than the one I was reading. The problem was with the book's structure. The Consolation is divided into five books; each includes a prose chapter followed by a verse poem. ChatGPT was likely trained on a version that numbered these sections differently than the one I was reading.

It took considerable back and forth to get the LLM on track. At least it suggested useful steps to do so. Specifically, it asked me to copy the beginning sentence of each chapter so it could orient itself. After three or so chapters of this, it started providing accurate responses.

The lesson: as good as LLMs are, we can't take their responses at face value. In a context like this — i.e., using it to learn about books I'm reading — it helps keep me on my toes, which helps me retain more of what I'm reading. But I'm wary of using AI for subjects where I have less competency (e.g., medical advice).

Also new this week: I've started capturing Obsidian notes for the movies I'm watching. I created a new template based on the one I use for literature notes, replacing the metadata fields for the author and publisher with director and studio respectively.

Up Next

Gioia recommends Sun Tzu and Lao Tzu. I've read both a couple of times; I'll only revisit The Art of War at this time. (I read Ursula Le Guin's translation of the Tao Te Ching last year, so I'll skip it to make space for other stuff.) Again, there's a YouTube playlist for the videos I'm sharing here.
I’m also sharing these posts via Substack if you’d like to subscribe and comment. See you next week!
