Introduction What was the SyncServer S200 Supposed to be Used For? IMPORTANT: Use the Right GPS Antenna! The SyncServer S200 Outside and Inside Installing the Missing BNC Connectors Setting Up a SyncServer S200 from Scratch Opening up the SyncServer S200 The SyncServer File System on the Flash Card Cloning the CompactFlash Card Reset to Default Settings Setting the IP Address Accessing the Web Interface DNS Configuration External NTP Server Configuration Testing your NTP server A Complicated Power Hungry Clock with a Terrible 10MHz Output Coming Up: Make the S200 Lock to GPS References Footnotes Introduction The Silicon Valley Electronics Flea Market runs every month from March to September and then goes on a hiatus for 6 months, so when the new March episode hits, there is a ton of pent-up demand… and supply: inevitably, some spouse has reached their limit during spring cleaning and wants a few of those boat anchors gone. You do not want to miss the first flea market of the year,...
8 months ago


More from Electronics etc…

HP Laptop 17 RAM Upgrade

Introduction Selecting the RAM Opening up Replacing the RAM Reassembly References Introduction I do virtually all of my hobby and home computing on Linux and MacOS. The MacOS stuff happens on a laptop and almost all Linux work on a desktop PC. The desktop PC has Windows installed on it as well, but it’s too much of a hassle to reboot so it never gets used in practice. Recently, I’ve been working on a project that requires a lot of Spice simulations. NGspice works fine under Linux, but it doesn’t come standard with a GUI and, more importantly, the simulations often refuse to converge once your design becomes a little bit bigger. Tired of fighting against the tool, I switched to LTspice from Analog Devices. It’s free to use and while it supports Windows and MacOS in theory, the Mac version is many years behind the Windows one and nearly unusable. After dual-booting into Windows too many times, a Best Buy deal appeared on my BlueSky timeline for an HP laptop for just $330. The specs were pretty decent too: AMD Ryzen 5 7000 17.3” 1080p screen 512GB SSD 8 GB RAM Full size keyboard Windows 11 Someone at the HP marketing department spent long hours to come up with a suitable name and settled on “HP Laptop 17”. I generally don’t pay attention to what’s available on the PC laptop market, but it’s hard to really go wrong for this price so I took the plunge. Worst case, I’d return it. We’re now 8 weeks later and the laptop is still firmly in my possession. In fact, I’ve used it way more than I thought I would. I haven’t noticed any performance issues, the screen is pretty good, the SSD is larger than what I need for the limited use case, and, surprisingly, the trackpad is better than that of any Windows laptop I’ve ever used, though that’s not a high bar. It doesn’t come close to MacBook quality, but palm rejection is solid and it’s seriously good at moving the mouse around in CAD applications. The two worst parts are the plasticky keyboard and the 8GB of RAM. I honestly can’t quantify whether or not it has a practical impact, but I decided to upgrade it anyway. In this blog post, I go through the steps of doing this upgrade. Important: there’s a good chance that you will damage your laptop when trying this upgrade and almost certainly void your warranty. Do this at your own risk! Selecting the RAM The laptop wasn’t designed to be upgradable and thus you can’t find any official resources about it. And with such a generic name, there’s guaranteed to be multiple hardware versions of the same product. To have reasonable confidence that you’re buying the correct RAM, check out the full product name first. You can find it on the bottom: Mine is an HP Laptop 17-cp3005dx. There’s some conflicting information about being able to upgrade the thing. The BestBuy Q&A page says: The HP 17.3” Laptop Model 17-cp3005dx RAM and Storage are soldered to the motherboard, and are not upgradeable on this model. This is flat out wrong for my device. After a bit of Googling around, I learned that it has a single 8GB DDR4 SODIMM 260-pin RAM stick but that the motherboard has 2 RAM slots and that it can support up to 2x32GB. I bought a kit with Crucial 2x16GB 3200MHz SODIMMs from Amazon. As I write this, the price is $44. Opening up Removing the screws This is the easy part. There are 10 screws at the bottom, 6 of which are hidden underneath the 2 rubber anti-slip strips. It’s easy to peel these strips loose. It’s also easy to put them back without losing the stickiness. 
Removing the bottom cover The bottom cover is held back by those annoying plastic tabs. If you have a plastic spudger or prying tool, now is the time to use it. I didn’t, so I used a small screwdriver instead. Chances are high that you’ll leave some tiny scuff marks on the plastic casing. I found it easiest to open the top lid a bit, place the laptop on its side, and start on the left and right side of the keyboard. After that, it’s a matter of working your way down the long sides at the front and back of the laptop. There are power and USB connectors that are right against the side of the bottom panel, so be careful not to poke inside the case with the spudger or screwdriver. It’s a bit of a jarring process, going back and forth and making steady progress. In addition to all the clips around the border of the bottom panel, there are also a few in the center that latch on to the side of the battery. But after enough wiggling and creaking sounds, the panel should come loose. Replacing the RAM As expected, there are 2 SODIMM slots, one of which is populated with a 3200MHz 8GB RAM stick. At the bottom right of the image below, you can also see the SSD slot. If you don’t enjoy the process of opening up the laptop and want to upgrade to a larger drive as well, now would be the time for that. New RAM in place! It’s always a good idea to test the surgery before reassembly: Success! Reassembly Reassembly of the laptop is much easier than taking it apart. Everything simply clicks together. The only minor surprise was that both anti-slip strips became a little bit longer… References Memory Upgrade for HP 17-cp3005dx Laptop Upgrading Newer HP 17.3” Laptop With New RAM And M.2 NVMe SSD Different model with Intel CPU but the case is the same.

3 weeks ago 16 votes
Symbolic Reference and Hardware Models in Python

The Traditional Hardware Design and Verification Flow An Image Downscaler as Example Design The Reference Model The Micro-Architecture Model Comparing the results Conversion to Hardware Combining symbolic models with random input generation Specification changes Things to experiment with… Symbolic models are best for block or sub-block level modelling References Conclusion The Traditional Hardware Design and Verification Flow In a professional FPGA or ASIC development flow, multiple models are tested against each other to ensure that the final design behaves the way it should. Common models are: a behavioral model that describes the functionality at the highest level These models can be implemented in Matlab, Python, C++ etc. and are usually completely hardware architecture agnostic. They are often not bit accurate in their calculated results, for example because they use floating point numbers instead of the fixed point numbers that are more commonly used by the hardware. A good example is the floating point C model that I used to develop my Racing the Beam Ray Tracer, though in this case, the model later transitioned into a hybrid reference/architectural model. an architectural transaction accurate model An architectural model is already aware of how the hardware is split into major functional groups and models the interfaces between these functional groups in a bit-accurate and transaction-accurate way at the interface level. It doesn’t have a concept of timing in the form of clock cycles. source hardware model This model is the source from which the actual hardware is generated. Traditionally, and still in most cases, this is a synthesizable RTL model written in Verilog or VHDL, but high-level synthesis (HLS) is getting some traction as well. In the case of RTL, this model is cycle accurate. In the case of HLS, it still won’t be. The difference between an HLS C++ model1 and an architectural C++ model is in the way it is coded: HLS code needs to obey coding style restrictions that would otherwise prevent the HLS tool from converting the code to RTL. The HLS model is usually also split up into many more, smaller units that interact with each other. RTL model The Verilog or VHDL model of the design. This can be the same as the source hardware model or it can be generated from HLS. Gate-level model The RTL model synthesized into a gate-level netlist. During the design process, different models are compared against each other. Their outputs should be the same… to a certain extent, since it’s not possible to guarantee identical results between floating point and fixed point models. One thing that is constant among these models is that they get fed with, operate on, and output actual data values. Let’s use the example of a video pipeline. The input of the hardware block might be raw pixels from a video sensor, the processing could be some filtering algorithm to reduce noise, and the outputs are processed pixels. To verify the design, the various models are fed with a combination of generic images, ‘interesting’ images that are expected to hit certain use cases, images with just random pixels, or directed tests that explicitly try to trigger corner cases. When there is a mismatch between different models, the fun part begins: figuring out the root cause. For complex algorithms that have a lot of internal state, an error may have happened thousands of transactions before it manifests itself at the output. Tracking down such an issue can be a gigantic pain. 
For many hardware units, the hard part of the design is not the math, but getting the right data to the math units at the right time, by making sure that the values are written, read, and discarded from internal RAMs and FIFOs in the right order. Even with a detailed micro-architectural specification, a major part of the code may consist of using just the correct address calculation or multiplexer input under various conditions. For these kinds of units, I use a different kind of model: instead of passing around and operating on data values through the various stages of the pipeline or algorithm, I carry around where the data is coming from. This is not so easy to do in C++ or RTL, but it’s trivial in Python. For lack of a better name, I call these symbolic models. There are thus two additional models in my arsenal of tools: a reference symbolic model a hardware symbolic model These models are both written in Python and their outputs are compared against each other. In this blog post, I’ll go through an example case where I use such a model. An Image Downscaler as Example Design Let’s design a hardware module that is easy enough to not spend too much time on it for a blog post, but complex enough to illustrate the benefits of a symbolic model: an image downscaler. The core functionality is the following: it accepts a monochrome image with a maximum resolution of 7680x4320. it downscales the input image with a fixed ratio of 2 in both directions. it uses a 3x3 tap 2D filter kernel for downscaling. The figure below shows how an image with a 12x8 resolution gets filtered and downsampled into a 6x4 resolution image. Each square represents an input pixel, each hatched square an output pixel, and the arrows show how input pixels contribute to the input of the 3x3 filter for the output pixel. For pixels that lie against the top or left border, the top and left pixels are repeated upward and leftward so that the same 3x3 filter can be used. If this downscaler is part of a streaming display pipeline that eventually sends pixels to a monitor, there is not a lot of flexibility: pixels arrive in a left to right and top to bottom scan order and you need 2 line stores (memories) because there are 3 vertical filter taps. Due to the 2:1 scaling ratio, the line stores can be half the horizontal resolution, but for an 8K resolution that’s still 7680/2 ~ 4KB of RAM just for line buffering. In the real world, you’d have to multiply that by 3 to support RGB instead of monochrome. And since we need to read and write from this RAM every clock cycle, there’s no chance of off-loading this storage to cheaper memory such as external DRAM. However, we’re lucky: the downscaler is part of a video decoder pipeline and those typically work with super blocks of 32x32 or 64x64 pixels that are scanned left-to-right and top-to-bottom. Within each super block, pixels are grouped in tiles of 4x4 pixels that are scanned the same way. In other words, there are 3 levels of left-to-right, top-to-bottom scan operations: the pixels inside each 4x4 pixel tile the pixel tiles inside each super block the super blocks inside each picture The output has the same organization of pixels, 4x4 pixel blocks and super blocks, but due to the 2:1 downsampling in both directions, the size of a super block is 32x32 instead of 64x64 pixels. 
There are two major advantages to having the data flow organized this way: the downscaler can operate on one super block at a time instead of the full image For pixels inside the super block, that reduces the size of the active input image width from 7680 to just 64 pixels. as long as the filter kernel is less than 5 pixels high, only 1 line store is needed The line store contains a partial sum of multiple lines of pixels. While the line store still needs to cover the full picture width when moving from one row of super blocks to the one below it, the bandwidth that is required to access the line store is but a fraction of the one before: 1/64th to be exact. That opens up the opportunity to stream line store data in and out of external DRAM instead of keeping it in expensive on-chip RAMs. But it’s not all roses! There are some negative consequences as well: pixels from the super block above the current one must be fetched from DMA and stored in a local memory pixels at the bottom of the current super block must be sent to DMA the right-most column of pixels from the current super block is used in the next super block when doing the 3x3 filter operation 4x4 size input tiles get downsampled to 2x2 size output tiles, but they must be sent out again as 4x4 tiles. This requires some kind of pixel tile merging operation. While the RAM area savings are totally worth it, all this adds a significant amount of data management complexity. This is the kind of problem where a symbolic micro-architecture model shines. The Reference Model When modeling transformations that work at the picture level, it’s convenient to assume that there are no memory size constraints and that you can access all pixels at all times no matter where they are located in the image. You don’t have to worry about how much RAM this would take on silicon: it’s up to the designer of the micro-architecture to figure out how to create an area efficient implementation. This usually results in a huge simplification of the reference model, which is good because as the source of truth you want to avoid any bugs in it. For our downscaler, the reference model creates an array of output pixels where each output pixel contains the coordinates of all the input pixels that are required to calculate its value. The pseudo code is something like this:

for y in range(OUTPUT_HEIGHT):
    for x in range(OUTPUT_WIDTH):
        get coordinates of input pixels for filter
        store coordinates at (x,y) of the output image

The reference model python code is not much more complicated. You can find the code here. Instead of a 2-dimensional array, it uses a dictionary with the output pixel coordinates as key. This is a personal preference: I think ref_output_pixels[(x,y)] looks cleaner than ref_output_pixels[y][x] or ref_output_pixels[x][y]. When the reference model data creation is complete, the ref_output_pixels array contains values like this:

(0,0) => [ Pixel(x=0, y=0), Pixel(x=0, y=0), Pixel(x=1, y=0), Pixel(x=0, y=0), Pixel(x=0, y=0), Pixel(x=1, y=0), Pixel(x=0, y=1), Pixel(x=0, y=1), Pixel(x=1, y=1) ]
(1,0) => [ Pixel(x=1, y=0), Pixel(x=2, y=0), Pixel(x=3, y=0), Pixel(x=1, y=0), Pixel(x=2, y=0), Pixel(x=3, y=0), Pixel(x=1, y=1), Pixel(x=2, y=1), Pixel(x=3, y=1) ]
...
(8,7) => [ Pixel(x=15, y=13), Pixel(x=16, y=13), Pixel(x=17, y=13), Pixel(x=15, y=14), Pixel(x=16, y=14), Pixel(x=17, y=14), Pixel(x=15, y=15), Pixel(x=16, y=15), Pixel(x=17, y=15) ]
...

The reference value of each output pixel is a list of input pixels that are needed to calculate its value. 
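As a rough illustration of that idea (this is not the actual repo code; the Pixel namedtuple and the clamp-based border replication are assumptions inferred from the example values above), the whole reference model fits in a few lines of Python:

from collections import namedtuple

Pixel = namedtuple('Pixel', ['x', 'y'])

OUTPUT_WIDTH, OUTPUT_HEIGHT = 6, 4       # the 12x8 -> 6x4 example above

ref_output_pixels = {}
for y in range(OUTPUT_HEIGHT):
    for x in range(OUTPUT_WIDTH):
        coords = []
        for dy in (-1, 0, 1):            # 3x3 kernel centered on input (2x, 2y)
            for dx in (-1, 0, 1):
                # clamping at 0 replicates the top and left border pixels
                coords.append(Pixel(max(2*x + dx, 0), max(2*y + dy, 0)))
        ref_output_pixels[(x, y)] = coords

Running this for output pixel (0,0) reproduces the first entry shown above.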
I do not care about the actual value of the pixels or the mathematical operation that is applied on these inputs. The Micro-Architecture Model The source code of the hardware symbolic model can be found here. It has the following main data buffers and FIFOs: an input stream, generated by gen_input_stream, of 4x4 pixel tiles that are sent super block by super block and then tile by tile. an output stream of 4x4 pixel tiles with the downsampled image. a DMA FIFO, modelled with a simple Python list, in which the bottom pixels of a super block are stored and later fetched when the super block of the next row needs the neighboring pixels above. buffers with above and left neighboring pixels that cover the width and height of a super block. an output merge FIFO that is used to group a set of 4 2x2 downsampled pixel blocks into a 4x4 tile of pixels The model loops through the super blocks in scan order and then the tiles in scan order, and for each 4x4 tile it calculates a 2x2 output tile.

for sy in range(nr of vertical super blocks):
    for sx in range(nr of horizontal super blocks):
        for tile_y in range(nr of vertical tiles in a superblock):
            for tile_x in range(nr of horizontal tiles in a superblock):
                fetch 4x4 tile with input pixels
                calculate 2x2 output pixels
                merge 2x2 output pixels into 4x4 output tiles
                data management

When we look at the inputs that are required to calculate the 4 output pixels for each tile of 16 input pixels, we get the following: In addition to pixels from the tile itself, some output pixels also need input values from above, above-left and/or left neighbors. When switching from one super block to the next, buffers must be updated with neighboring pixels for the whole width and height of the super block. But instead of storing the values of individual pixels, we can store intermediate sums to reduce the number of values: At first sight, it looks like this reduces the number of values in the above and left neighbors buffers by half, but that’s only true for the above buffer. While the left neighbors can be reduced by half for the current tile, the bottom left pixel value is still needed to calculate the above value for the 4x4 tiles of the next row. So the size of the left buffer is not 1/2 of the size of the super block but 3/4. In the left figure above, the red rectangles contain the components needed for the top-left output pixel, the green for the top-right output pixel etc. The right figure shows the partial sums that must be calculated for the left and above neighbors for future 4x4 tiles. These red, green, blue and purple rectangles have directly corresponding sections in the code.

p00 = ( tile_above_pixels[0], tile_left_pixels[0],
        input_tile[0], input_tile[1],
        input_tile[4], input_tile[5] )
p10 = ( tile_above_pixels[1],
        input_tile[ 1], input_tile[ 2], input_tile[ 3],
        input_tile[ 5], input_tile[ 6], input_tile[ 7] )
p01 = ( tile_left_pixels[1],
        input_tile[ 4], input_tile[ 5],
        input_tile[ 8], input_tile[ 9],
        input_tile[12], input_tile[13] )
p11 = ( input_tile[ 5], input_tile[ 6], input_tile[ 7],
        input_tile[ 9], input_tile[10], input_tile[11],
        input_tile[13], input_tile[14], input_tile[15] )

For each tile, there’s quite a bit of bookkeeping of context values, reading and writing to buffers to keep everything going. Comparing the results In traditional models, as soon as intermediate values are calculated, the original values can be dropped. 
In the case of our example, with a filter where all coefficients are 1, the above and left intermediate values of the top-left output pixel are summed and stored as 18 and 11, and the original values of (3,9,6) and (5,6) aren’t needed anymore. This, and the fact that multiple inputs might have the same numerical value, is what makes traditional models often hard to debug. This is not the case for symbolic models where all input values, the input pixel coordinates, are carried along until the end. In our model, the intermediate results are not removed from the final result. Here is the output result for output pixel (12,10):

...
( # Above neighbor intermediate sum
  (Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19)),
  # Left neighbor intermediate sum
  (Pixel(x=23, y=20), Pixel(x=23, y=21)),
  # Values from the current tile
  Pixel(x=24, y=20), Pixel(x=25, y=20), Pixel(x=24, y=21), Pixel(x=25, y=21) ),
...

Keeping the intermediate results makes it easier to debug, but to compare against the reference model, the data with nested lists must be flattened into this:

...
( Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19), Pixel(x=23, y=20), Pixel(x=23, y=21), Pixel(x=24, y=20), Pixel(x=25, y=20), Pixel(x=24, y=21), Pixel(x=25, y=21) ),
...

But even that is not enough to compare: the reference value has the 3x3 input values in scan order, an order that was destroyed on the hardware side by the use of intermediate values, so there’s a final sorting step to restore the scan order:

...
( Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19), Pixel(x=23, y=20), Pixel(x=24, y=20), Pixel(x=25, y=20), Pixel(x=23, y=21), Pixel(x=24, y=21), Pixel(x=25, y=21) )

Finally, we can go through all the output tiles of the hardware model and compare them against the tiles of the reference model. If all goes well, the script should give the following:

> ./downscaler.py
PASS!

Any kind of bug will result in an error message like this one:

> ./downscaler.py
MISMATCH! sb(1,0) tile(0,0) (0,0) 1
ref: [Pixel(x=7, y=0), Pixel(x=7, y=0), Pixel(x=8, y=0), Pixel(x=8, y=0), Pixel(x=9, y=0), Pixel(x=9, y=0), Pixel(x=7, y=1), Pixel(x=8, y=1), Pixel(x=9, y=1)]
hw: [Pixel(x=7, y=0), Pixel(x=8, y=0), Pixel(x=8, y=0), Pixel(x=9, y=0), Pixel(x=9, y=0), Pixel(x=7, y=1), Pixel(x=8, y=1), Pixel(x=9, y=1), Pixel(x=7, y=4)]

Conversion to Hardware The difficulty of converting the Python micro-architectural model to a hardware implementation model depends on the abstraction level of the hardware implementation language. When using C++ and HLS, the effort can be trivial: some of my blocks have a thousand or more lines of Python that can be converted entirely to C++ pretty much line by line. It can take a few weeks to develop and debug the Python model, yet getting the C++ model running only takes a day or two. If the Python model is fully debugged, the only issues encountered are typos made during the conversion and signal precision mistakes. The story is different when using RTL: with HLS, the synthesis to Verilog will convert for loops to FSMs and take care of pipelining. When writing RTL directly, that task falls on you. You could change the Python model and switch to FSMs there to make that step a bit easier. Either way, having flushed out all the data management will allow you to focus on just the RTL specific tasks while being confident that the core architecture is correct. Combining symbolic models with random input generation The downscaler example is a fairly trivial unit with a predictable input data stream and a simple algorithm. 
In a video encoder or decoder, instead of a scan-order stream of 4x4 tiles, the input is often a hierarchical coding tree with variable size coding units that are scanned in quad-tree depth-first order. Dealing with this kind of data stream kicks up the complexity a whole lot. For designs like this, the combination of a symbolic model and a random coding tree generator is a superpower that will hit corner case bugs with an efficiency that puts regular models to shame. Specification changes The benefits of symbolic models don’t stop with quickly finding corner case bugs. I’ve run into a number of cases where the design requirements weren’t fully understood at the time of implementation and incorrect behavior was discovered long after the hardware model was implemented. By that time, some of the implementation subtleties may have already been forgotten. It’s scary to make changes to a hardware design with complex data management when corner case bugs take thousands of regression simulations to uncover. If the symbolic model is the initial source of truth, this is usually just not an issue: exhaustive tests can often be run in seconds and once the changes pass there, you have confidence that the corresponding changes in the hardware model source code are sound. Things to experiment with… Generating hardware stimuli I haven’t explored this yet, but it is possible to use a symbolic model to generate stimuli for the hardware model. All it takes is to replace the symbolic input values (pixel coordinates) by the actual pixel values at that location and to perform the right mathematical operations on them. A joint symbolic/hardware model Having a Python symbolic model and a C++ HLS hardware model isn’t a huge deal, but there’s still the effort of converting one into the other. There is a way to have a unified symbolic/hardware model by switching the data type of the input and output values from one that contains symbolic values to one that contains the real values. If C++ is your HLS language, then this requires writing the symbolic model in C++ instead of Python. You’d trade off the rapid iteration time and conciseness of Python against having only a single code base. Symbolic models are best for block or sub-block level modelling Since symbolic models carry along the full history of calculated values, they aren’t very practical when modelling multiple larger blocks together: hierarchical lists with tens or more input values create information overload. For this reason, I use symbolic models at the individual block level or sometimes even the sub-block level when dealing with particularly tricky data management cases. My symbolic model script might contain multiple disjoint models that each implement a sub-block of the same major block, without interacting with each other. References symbolic_model repo on GitHub Conclusion Symbolic models in Python have been a major factor in boosting my design productivity and increasing my confidence in a micro-architectural implementation. If you need to architect and implement a hardware block with some tricky data management, give them a try, and let me know how it went! Not all HLS code is written with C++. There are other languages as well. ↩

3 months ago 49 votes
Making Screenshots of Test Equipment Old and New

Introduction Screenshot Capturing Interfaces Hardware and Software Tools Capturing GPIB data in Talk Only mode TDS 540 Oscilloscope - GPIB - PCL Output HP 54542A Oscilloscope - Parallel Port - PCL or HPGL Output HP Infiniium 54825A Oscilloscope - Parallel Port - Encapsulated Postscript TDS 684B - Parallel Port - PCX Color Output Advantest R3273 Spectrum Analyzer - Parallel Port - PCL Output HP 8753C Vector Network Analyzer - GPIB - HP 8753 Companion Siglent SDS 2304X Oscilloscope - USB Drive, Ethernet or USB Introduction Last year, I created Fake Parallel Printer, a tool to capture the output of the parallel printer port of old-ish test equipment so that it can be converted into screenshots for blog posts etc. It’s definitely a niche tool, but of all the projects that I’ve done, it’s the one that has seen the most use. One issue is that converting the captured raw printing data to a bitmap requires recipes that may need quite a bit of tuning. Some output uses HP PCL, other output is Encapsulated PostScript (EPS), and if you’re lucky the output is a standard bitmap format like PCX. In this blog post, I describe the procedures that I use to create screenshots of the test equipment that I personally own, so that I don’t need to figure them out again when I use the device a year later. That doesn’t make it all that useful for others, but somebody may benefit from it when googling for it… As always, I’m using Linux, so the software used below reflects that. Screenshot Capturing Interfaces Here are some common ways to transfer screenshots from test equipment to your PC: USB flash drive Usually the least painful by far, but it only works on modern equipment. USB cable Requires some effort to set the right udev driver permissions and a script that sends commands that are device specific. But it generally works fine. Ethernet Still requires slightly modern equipment, and there’s often some configuration pain involved. RS-232 serial Reliable, but often slow. Floppy disk I have a bunch of test equipment with a floppy drive and I also have a USB floppy drive for my PC. However, the drives on all this equipment are broken, in the sense that they can’t correctly write data to a disk. There must be some kind of contamination going on when a floppy drive head isn’t used for decades. GPIB Requires an expensive interface dongle and I’ve yet to figure out how to make it work for all equipment. Below, I was able to make it work for a TDS 540 oscilloscope, but not for an HP 54532A oscilloscope, for example. Parallel printer port Available on a lot of old equipment, but it normally can’t be captured by a PC unless you use Fake Parallel Printer. We’re now more than a year later, and I use it all the time. I find it to be the easiest to use of all the printer interfaces. Hardware and Software Tools GPIB to USB Dongle If you want to print to GPIB, you’ll need a PC to GPIB interface. These days, the cheapest and most common are GPIB to USB dongles. I’ve written about those here and here. The biggest take-away is that they’re expensive (>$100 second hand) and hard to configure when using Linux. And as mentioned above, I have only had limited success using them in printer mode. ImageMagick ImageMagick is the swiss army knife of bitmap file processing. It has a million features, but I primarily use it for file format conversion and image cropping. I doubt that there’s any major Linux distribution that doesn’t have it as a standard package…

sudo apt install imagemagick

GhostPCL GhostPCL is used to decode PCL files. 
On many old machines, these files are created when printing to Thinkjet, Deskjet or Laserjet. Installation: Download the GhostPCL/GhostPDL source code. Compile:

cd ~/tools
tar xfv ~/Downloads/ghostpdl-10.03.0.tar.gz
cd ghostpdl-10.03.0/
./configure --prefix=/opt/ghostpdl
make -j$(nproc)
export PATH="/opt/ghostpdl/bin:$PATH"

Install:

sudo make install

A whole bunch of tools will now be available in /opt/ghostpdl/bin, including gs (Ghostscript) and gpcl6. hp2xx hp2xx converts HPGL files, originally intended for HP plotters, to bitmaps, EPS etc. It’s available as a standard package for Ubuntu:

sudo apt install hp2xx

Inkscape Inkscape is a full-featured vector drawing app, but it can also be used as a command line tool to convert vector content to bitmaps. I use it to convert Encapsulated PostScript (EPS) files to bitmaps. Like other well known tools, installation on Ubuntu is simple:

sudo apt install inkscape

HP 8753C Companion This tool is specific to HP 8753 vector network analyzers. It captures HPGL plotter commands, extracts the data, recreates what’s displayed on the screen, and allows you to interact with it. It’s available on GitHub. Capturing GPIB data in Talk Only mode Some devices will only print to GPIB in Talk Only mode, or sometimes it’s just easier to use them that way. When the device is in Talk Only mode, the PC GPIB controller becomes a Listen Only device, a passive observer that doesn’t initiate commands but just receives data. I wrote the following script to record the printing data and save it to a file: gpib_talk_to_file.py:

#! /usr/bin/env python3
import sys
import pyvisa

gpib_addr = int(sys.argv[1])
output_filename = sys.argv[2]

rm = pyvisa.ResourceManager()
inst = rm.open_resource(f'GPIB::{gpib_addr}')

try:
    # Read data from the device
    data = inst.read_raw()
    with open(output_filename, 'wb') as file:
        file.write(data)
except pyvisa.VisaIOError as e:
    print(f"Error: {e}")

PyVISA is a universal library to talk to test equipment. I wrote about it here. It will quickly time out when no data arrives in Talk Only mode, but since all data transfers happen with a valid-ready protocol, you can avoid time-out issues by pressing the hardcopy or print button on your oscilloscope first, and only then launching the script above. This will work as long as the printing device doesn’t go silent while in the middle of printing a page. TDS 540 Oscilloscope - GPIB - PCL Output My old TDS 540 oscilloscope doesn’t have a printer port, so I had to make do with GPIB. Unlike later versions of the TDS series, it also doesn’t have the ability to export bitmaps directly, but it has outputs for: Thinkjet, Deskjet, and Laserjet in PCL format Epson in ESC/P format Interleaf format EPS Image format HPGL plotter format The TDS 540 has a screen resolution of 640x480. I found the Thinkjet output format, with a DPI of 75x75, easiest to deal with. The device adds a margin of 20 pixels to the left, and 47 pixels at the top, but those can be removed with ImageMagick. 
With a GPIB address of 11, the overall recipe looks like this:

# Capture the PCL data
gpib_talk_to_file.py 11 tds540.thinkjet.pcl
# Convert PCL to png
gpcl6 -dNOPAUSE -sOutputFile=tds540.png -sDEVICE=png256 -g680x574 -r75x75 tds540.thinkjet.pcl
# Remove the margins and crop the image to 640x480
convert tds540.png -crop 640x480+20+47 tds540.crop.png

The end result looks like this: HP 54542A Oscilloscope - Parallel Port - PCL or HPGL Output This oscilloscope was a ridiculous $20 bargain at the Silicon Valley Electronics Flea Market and it’s the one I love working with the most: the user interface is just so smooth and intuitive. Like all other old oscilloscopes, the biggest thing going against it is the amount of desk space it requires. It has a GPIB, RS-232, and Centronics parallel port, and all 3 can be used for printing. I tried to get printing to GPIB to work but wasn’t successful: I’m able to talk to the device and send commands like “*IDN?” and get a reply just fine, but the GPIB script that works fine with the TDS 540 always times out eventually. I switched to my always reliable Fake Parallel Printer and that worked fine. There’s also the option to use the serial cable. The printer settings menu can be accessed by pressing the Utility button and then the top soft-button with the name “HPIB/RS232/CENT CENTRONICS”. You have the following options: ThinkJet DeskJet75dpi, DeskJet100dpi, DeskJet150dpi, DeskJet300dpi LaserJet PaintJet Plotter Unlike with the TDS 540, I wasn’t able to get the ThinkJet option to convert into anything, but the DeskJet75dpi option worked fine with this recipe:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f hp54542a_ -s deskjet.pcl -v
gpcl6 -dNOPAUSE -sOutputFile=hp54542a.png -sDEVICE=png256 -g680x700 -r75x75 hp54542a_0.deskjet.pcl
convert hp54542a.png -crop 640x388+19+96 hp54542a.crop.png

The 54542A doesn’t just print out the contents of the screen, it also prints the date and adds the settings for the channels that are enabled, trigger options etc. The size of these additional values depends on how many channels and other parameters are enabled. When you select PaintJet or Plotter as output device, you have the option to select different colors for regular channels, math channels, graticule, markers etc. So it is possible to create nice color screenshots from this scope, even if the CRT is monochrome. I tried the PaintJet option, and while gpcl6 was able to extract an image, the output was much worse than the DeskJet option. I had more success using the Plotter option. It prints out a file in HPGL format that can be converted to a bitmap with hp2xx. The following recipe worked for me:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f hp54542a_ -s plotter.hpgl -v
hp2xx -m png -a 1.4 --width 250 --height 250 -c 12345671 -p 11111111 hp54542a_0.plotter.hpgl

I’m not smitten with the way it looks, but if you want color, this is your best option. The command line options of hp2xx are not intuitive. Maybe it’s possible to get this to look a bit better with some other options. HP Infiniium 54825A Oscilloscope - Parallel Port - Encapsulated Postscript This indefinite-loaner-from-a-friend oscilloscope has a small PC in it that runs an old version of Windows. It can be connected to Ethernet, but I’ve never done that: capturing parallel printer traffic is just too convenient. On this oscilloscope, I found that printing things out as Encapsulated Postscript was the best option. 
I then use inkscape to convert the screenshot to PNG:

./fake_printer.py --port=/dev/ttyACM0 -t 2 -v --prefix=hp_osc_ -s eps
inkscape -f ./hp_osc_0.eps -w 1580 -y=255 -e hp_osc_0.png
convert hp_osc_0.png -crop 1294x971+142+80 hp_osc_0_cropped.png

Ignore the part circled in red, that was added in post for an earlier article: TDS 684B - Parallel Port - PCX Color Output I replaced my TDS 540 oscilloscope with a TDS 684B. On the outside they look identical. They also have the same core user interface, but the 684B has a color screen, a bandwidth of 1GHz, and a sample rate of 5 Gsps. Print formats The 684B also has a lot more output options: Thinkjet, Deskjet, DeskjetC (color), Laserjet output in PCL format Epson in ESC/P format DPU thermal printer PC Paintbrush mono and color in PCX file format TIFF file format BMP mono and color format RLE color format EPS mono and color printer format EPS mono and color plotter format Interleaf .img format HPGL color plot Phew. Like the HP 54542A, my unit has GPIB, parallel port, and serial port. It can also write out the files to a floppy drive. So which one to use? BMP is an obvious choice and supported natively by all modern PCs. The only issue is that it gets written out without any compression, so it takes over 130 seconds to capture with fake printer. PCX is a very old bitmap file format, I used it way back in 1988 on my first Intel 8088 PC, but it compresses with run length encoding which works great on oscilloscope screenshots. It only takes 22 seconds to print. I tried the TIFF option and was happy to see that it only took 17 seconds, but the output was monochrome. So for color bitmap files, PCX is the way to go. The recipe:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f tds684_ -s pcx -v
convert tds684_0.pcx tds684.png

The screenshot above uses the Normal color setting. The scope also has a Bold color setting: There’s a Hardcopy option as well: It’s a matter of personal taste, but my preference is the Normal option. Advantest R3273 Spectrum Analyzer - Parallel Port - PCL Output Next up is my Advantest R3273 spectrum analyzer. It has a printer port, a separate parallel port that I don’t know the purpose of, a serial port, a GPIB port, and a floppy drive that refuses to work. However, in the menus I can only configure prints to go to floppy or to the parallel port, so fake parallel printer is what I’m using. The print configuration menu can be reached by pressing: [Config] - [Copy Config] -> [Printer]: The R3273 supports a bunch of formats, but I had the hardest time getting it to create a color bitmap. After a lot of trial and error, I ended up with this:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f r3273_ -s pcl -v
gpcl6 -dNOPAUSE -sOutputFile=r3273_tmp.png -sDEVICE=png256 -g4000x4000 -r600x600 r3273_0.pcl
convert r3273_tmp.png -filter point -resize 1000 r3273_filt.png
rm r3273_tmp.png
convert r3273_filt.png -crop 640x480+315+94 r3273.png
rm r3273_filt.png

The conversion loses something in the process. The R3273 hardcopy mimics the shades of depressed buttons with a 4x4 pixel dither pattern: If you use a 4x4 pixel box filter and downsample by a factor of 4, this dither pattern converts to a nice uniform gray, but the actual spectrum data gets filtered down as well: With the recipe above, I’m using 4x4 to 1 pixel point-sampling instead, with a phase that is chosen just right so that the black pixels of the dither pattern get picked. 
The highlighted buttons are now solid black and everything looks good. HP 8753C Vector Network Analyzer - GPIB - HP 8753 Companion My HP 8753C VNA only has a GPIB interface, so there’s not a lot of choice there. I’m using HP 8753 Companion. It can be used for much more than just grabbing screenshots: you can save the measured data to a file, upload calibration kit data and so on. It’s great! You can render the screenshot the way it was plotted by the HP 8753C, like this: Or you can display it in a high resolution mode, like this: Default color settings for the HPGL plot aren’t ideal, but everything is configurable. If you don’t have one, the existence of HP 8753 Companion alone is a good reason to buy a USB-to-GPIB dongle. Siglent SDS 2304X Oscilloscope - USB Drive, Ethernet or USB My Siglent SDS 2304X was my first oscilloscope. It was designed 20 years later than all the other stuff, with a modern UI and modern interfaces such as USB and Ethernet. There is no GPIB, parallel or RS-232 serial port to be found. I don’t love the scope. The UI can become slow when you’re displaying a bunch of data on the screen, and selecting anything from a menu with a detentless rotary knob can be the most infuriating experience. But it’s my daily driver because it’s not a boat anchor: even on my messy desk, I can usually create room to put it down without too much effort. You’d think that I use USB or Ethernet to grab screenshots, but most of the time I just use a USB stick and shuttle it back and forth between the scope and the PC. That’s because setting up the connection is always a bit of a pain. However, if you insist, you can set things up this way: Ethernet To configure Ethernet, you need to go to [Utility] -> [Next Page] -> [I/O] -> [LAN]. Unlike my HP 1670G logic analyzer, the Siglent supports DHCP, but when writing this blog post, the scope refused to grab an IP address on my network. No amount of rebooting, disabling and re-enabling DHCP helped. I have gotten it to work in the past, but today it just wasn’t happening. You’ll probably understand why using a zero-configuration USB stick becomes an attractive alternative. USB If you want to use USB, you need an old relic of a USB-B cable. It shows up like this:

sudo dmesg -w
[314170.674538] usb 1-7.1: new full-speed USB device number 11 using xhci_hcd
[314170.856450] usb 1-7.1: not running at top speed; connect to a high speed hub
[314170.892455] usb 1-7.1: New USB device found, idVendor=f4ec, idProduct=ee3a, bcdDevice= 2.00
[314170.892464] usb 1-7.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[314170.892469] usb 1-7.1: Product: SDS2304X
[314170.892473] usb 1-7.1: Manufacturer: Siglent Technologies Co,. Ltd.
[314170.892476] usb 1-7.1: SerialNumber: SDS2XJBD1R2754

Note 3 key parameters: USB vendor ID: f4ec USB product ID: ee3a Product serial number: SDS2XJBD1R2754 Set udev rules so that you can access this device over USB without requiring root permission by creating an /etc/udev/rules.d/99-usbtmc.rules file and adding the following line:

SUBSYSTEM=="usb", ATTR{idVendor}=="f4ec", ATTR{idProduct}=="ee3a", MODE="0666"

You should obviously replace the vendor ID and product ID with the ones of your device. 
Make the new udev rules active:

sudo udevadm control --reload-rules
sudo udevadm trigger

You can now download screenshots with the following script: siglent_screenshot_usb.py:

#!/usr/bin/env python3
import argparse
import io

import pyvisa
from pyvisa.constants import StatusCode
from PIL import Image

def screendump(filename):
    rm = pyvisa.ResourceManager('')
    # Siglent SDS2304X
    scope = rm.open_resource('USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR')
    scope.read_termination = None
    scope.write('SCDP')
    data = scope.read_raw(2000000)
    image = Image.open(io.BytesIO(data))
    image.save(filename)
    scope.close()
    rm.close()

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Grab a screenshot from a Siglent DSO.')
    parser.add_argument('--output', '-o', dest='filename', required=True,
        help='the output filename')
    args = parser.parse_args()
    screendump(args.filename)

Once again, take note of this line:

scope = rm.open_resource('USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR')

and don’t forget to replace 0xF4EC, 0xEE3A, and SDS2XJBD1R2754 by the correct USB vendor ID, product ID and serial number. Call the script like this:

./siglent_screenshot_usb.py -o siglent_screenshot.png

If all goes well, you’ll get something like this:

4 months ago 57 votes
A Hardware Interposer to Fix the Symmetricom SyncServer S200 GPS Week Number Rollover Problem

Introduction IMPORTANT: Use the Right GPS Antenna! The Problem: SyncServer Refuses to Lock to GPS The GPS Week Number Rollover Issue Making the Furuno GT-8031 Work Again How It Works Build Instructions Power Supply Recapping The Future: A Software-Only Solution The Result References Footnotes Introduction In my earlier blog post, I wrote about how to set up a SyncServer S200 as a regular NTP server, and how to install the backside BNC connectors to bring out the 10 MHz and 1PPS outputs. The ultimate goal is to use the SyncServer as a lab timing reference, but at the end of that blog post, it’s clear that using NTP alone is not good enough to get a precise 10 MHz clock: the output frequency was off by almost 100Hz! To get a more accurate output clock, you need to synchronize the SyncServer to the GPS system so that it becomes a GPS disciplined oscillator (GPSDO) and a stratum 1 time keeping device. The S200 has a GPS antenna input and a GPS receiver module inside, so in theory this should be a matter of connecting the right GPS antenna. But in practice it wasn’t simple at all, because the GPS module in the SyncServer S200 is so old that it suffers from the so-called Week Number Roll-Over (WNRO) problem. In this blog post, I’ll discuss what the WNRO problem is all about and show my custom hardware solution that fixed the problem. IMPORTANT: Use the Right GPS Antenna! Let me once again point out the importance of using the right GPS antenna to avoid damaging it permanently due to over-voltage. GPS antennas have active elements that amplify the received signal right at the point of reception before sending it down the cable to the GPS unit. Most antennas need 3.3V or 5V that is supplied through the GPS antenna connector, but the Symmetricom S200 supplies 12V! Make sure you are using a 12V GPS antenna! Check out my earlier blog post for more information. The Problem: SyncServer Refuses to Lock to GPS When you connect a GPS antenna to a SyncServer in its original configuration, everything seems to go fine initially. The front panel reports the antenna connection as “Good”, a few minutes later the number of satellites detected goes up, and the right location gets reported. But the most important “Status” field remains stuck in the “Unlocked” state, which means that the SyncServer refuses to lock its internal clock to the GPS unit. This issue has been discussed to death in a number of EEVblog forum threads, but the conclusion is always the same: the Furuno GT-8031 GPS module suffers from the GPS Week Number Roll-Over (WNRO) issue and nothing can be done about it other than replacing the GPS module with an after-market one. The GPS Week Number Rollover Issue The original GPS system used a 10-bit number to count the number of weeks that have elapsed since January 6, 1980. Every 19.7 years, this number rolls over from 1023 back to 0. The first rollover happened on August 21, 1999, the second on April 6, 2019, and the next one will be on November 20, 2038. Check out this US Naval Observatory presentation for some more information. GPS module manufacturers have dealt with the issue by using a dynamic base year or variable pivot year. Let’s say that a device is designed at the start of 2013, during week 697 of the 19.7 year epoch that started in 1999. The device then assumes that all week numbers higher than 697 are for the years 2013 to 2019, and that numbers from 0 to 697 are for the years 2019 and later. Such a device will work fine for the 19.7 years from 2013 until 2032. 
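To make the pivot mechanism concrete, here is a small, purely illustrative Python sketch of such a scheme (the constants match the 2013 example above; this is not code from any GPS module):

from datetime import date, timedelta

GPS_EPOCH = date(1980, 1, 6)
PIVOT_WEEK = 1024 + 697          # design-time pivot: week 697 of the epoch that started in 1999

def resolve_week(week10):
    # week10 is the raw 10-bit week number reported by the receiver (0..1023)
    week = (PIVOT_WEEK // 1024) * 1024 + week10
    if week < PIVOT_WEEK:        # smaller numbers belong to the next 19.7 year epoch
        week += 1024
    return GPS_EPOCH + timedelta(weeks=week)

print(resolve_week(700))   # shortly after the pivot -> 2013-01-20
print(resolve_week(100))   # a small week number -> resolved to 2021, not 2001

The pivot is baked in at design time, which is exactly why such a module works for one 19.7 year window and then gets the date wrong again.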
And with just a few bits of non-volatile storage it is even possible to make a GPS unit robust against this kind of rollover forever: if the last date seen by the GPS unit was in 2019 and suddenly it sees a date of 1999, it can infer that there was a rollover and record in the storage that the next GPS epoch has started. Unfortunately many modules don’t do that. The only way to fix the issue is to either update the module firmware or for some external device to tell the GPS module about the current GPS epoch. Many SyncServer S2xx devices shipped with a Motorola M12 compatible module that is based on a Furuno GT-8031, which has a starting date of February 2, 2003 and rolled over on September 18, 2022. You can send a command to the module that hints at the current date and that fixes the issue, but there is no SyncServer S200 firmware that supports that. Check out this Furuno technical document for more rollover details. The same document also tells us how to adjust the rollover date. It depends on the protocol that is supported by the module. For a GT-8031, you need to use the ZDA command; other modules require the @@Gb or the TIME command. If you want to run a SyncServer for its intended purpose, a time server, it is of course important that you get the correct date. But if you don’t care about the date because the primary purpose is to use it as a GPSDO, then the 1PPS output from the GPS module should be sufficient to drive a PLL that locks the internal 10 MHz oscillator to the GPS system. This 1PPS signal is still present on the GT-8031 of my unit and I verified with an oscilloscope that it matches the 1PPS output of my TM4313 GPSDO both in frequency and in phase as soon as it sees a couple of satellites. But there is something in the SyncServer firmware that depends on more than just the 1PPS signal, because my S200 refuses to enter into “GPS Locked” mode, and the 10 MHz oscillator stays in free-running mode at the miserable frequency of roughly 9,999,993 Hz. Making the Furuno GT-8031 Work Again There are aftermarket replacement modules out there with a rollover date that is far into the future, but they are priced pretty high. There’s the iLotus IL-GPS-0030-B Module which goes for around $100 on AliExpress, but that one has a rollover date in August 2024, and other modules go as high as $240. The reason for these high prices is that these modules don’t use a $5 location GPS chip but specialized ones that are designed for accurate time keeping, such as the u-blox NEO/LEA series. Instead of solving the problem with money, I wondered if it was possible to make the GT-8031 send the right date to the S200 with a hardware interposer that sits between the module and the motherboard. There were 2 options: intercept the date sent from the GPS module, correct it, and transmit it to the motherboard I tried that, but didn’t get it to work. send a configuration command to the GPS module to set the right date This method was suggested by Alex Forencich on the time-nuts mailing list. He implemented it by patching the firmware of a microcontroller on his SyncServer S350. His solution might eventually be the best one, since it doesn’t require extra hardware, but by the time he posted his message, my interposer was already up and running on my desk. It took 2 PCB spins, but I eventually came up with the following solution: In the picture above, you see the GT-8031 plugged into my interposer, which is in turn plugged into the motherboard. 
The interposer itself looks like this: The design is straightforward: an RP2040-Zero, a smaller variant of the Raspberry Pi Pico, puts itself in between the serial TX and RX wires that normally go between the module and the motherboard. It’s up to the software that runs on the RP2040 to determine what to do with the data streams that run over those wires. There are a few other connectors: the one at the bottom right is for observing the signals with a logic analyzer. There are also 2 connectors for power. When finally installed, the interposer gets powered with a 5V supply that’s available on a pin that is conveniently located right behind the GPS module. In the picture above, the red wire provides the 5V; the ground is connected through the screws that hold the board in place. The total cost of the board is as follows: PCB: $2 for 5 PCBs + $1.50 shipping = $3.50 RP2040-Zero: $9 on Amazon 2 5x2 connectors: $5 on Mouser + $5 shipping = $10 Total: $22.50 The full project details can be found in my gps_interposer GitHub repo. How It Works To arrive at a working solution, I recorded all the transactions on the serial port RX and TX and ran them through a decoder script to convert them into readable GPS messages. Here are the messages that are exchanged with the motherboard after powering up the unit:

>>> @@Cf - set to defaults command: [], [37], [13, 10] - 7
>>> @@Gf - time raim alarm message: [0, 10], [43], [13, 10] - 9
>>> @@Aw - time correction select: [1], [55], [13, 10] - 8
>>> @@Bp - request utc/ionospheric data: [0], [50], [13, 10] - 8
>>> @@Ge - time raim select message: [1], [35], [13, 10] - 8
>>> @@Gd - position control message: [3], [32], [13, 10] - 8
<<< @@Aw - time correction select: [1], [55], [13, 10] - 8 time_mode:UTC
<<< @@Co - utc/ionospheric data input: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [44], [13, 10] - 29 alpha0:0, alpha1:0, alhpa2:0, alpha3:0 beta0:0, beta1:0, alhpa2:0, beta3:0 A0:0, A1:0 delta_tls:0, tot:0, WNt:0, WNlsf:0, DN:0, delta_Tlsf:0

The messages above can be seen in the blue and green rectangles of this logic analyzer screenshot: Earlier, we saw that the GT-8031 module itself needs the ZDA command to set the time. This is an NMEA command. The messages above are Motorola M12 commands, however. On the M12 module that contains the GT-8031, there is also a TI M430F147 microcontroller that takes care of the conversion between Motorola and NMEA commands. Note how the messages that arrive at the interposer immediately get forwarded to the other side. But there is one transaction marked in red, generated by the interposer itself, that sends the @@Gb command. When the GPS module is not yet locked to a satellite, this command sends an initial estimate of the current date and time. The M12 User Guide has the following to say about this command: The interposer sends a hinted date of May 4, 2024. When the GPS module receives the date from its first satellite, it corrects the date and time to the right value, but it uses the initial estimated date to correct for the week number rollover! I initially placed the @@Gb command right after the @@Cf command that resets the module to default values, but that didn’t work. The solution was to send it after the initial burst of configuration commands. With this fix in place, it still takes almost 15 minutes before the S200 enters into GPS lock. 
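To make that pass-through-and-inject behavior a bit more tangible, here is a rough, hypothetical MicroPython-style sketch of the idea. The real firmware lives in the repo’s sw/rp2040_v2 directory; the pin assignments, baud rate, command-counting heuristic and the exact @@Gb payload (which needs a checksum) are all assumptions and are deliberately left out or simplified here:

from machine import UART

uart_mb  = UART(0, 9600)    # motherboard side (assumed UART and baud rate)
uart_gps = UART(1, 9600)    # M12 / GT-8031 side (assumed UART and baud rate)

GB_DATE_HINT = b'...'       # pre-built @@Gb message hinting a recent date (payload omitted)

config_cmds_seen = 0
hint_sent = False

while True:
    # forward motherboard -> GPS module unchanged, counting configuration commands
    data = uart_mb.read()
    if data:
        uart_gps.write(data)
        config_cmds_seen += data.count(b'@@')

    # after the initial burst of configuration commands, inject the date hint once
    if not hint_sent and config_cmds_seen >= 6:
        uart_gps.write(GB_DATE_HINT)
        hint_sent = True

    # forward GPS module -> motherboard unchanged
    data = uart_gps.read()
    if data:
        uart_mb.write(data)

The key point is that the interposer never modifies the traffic in either direction; it only adds one extra message, at a moment late enough for the M12 module to accept it.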
You can see the S200 entering GPS lock on the logic analyzer by an increased rate of serial port traffic: There’s also the immense satisfaction of seeing the GPS status change to “Locked”: Build Instructions Disclaimer: it’s easy to make mistakes and permanently damage your SyncServer. Build this at your own risk! All design assets for this project can be found in my gps_interposer project on GitHub. There is a KiCAD project with schematic, PCB design and gerber files. A PDF with the schematics can be found here. The design consists of a few connectors, some resistors and the RP2040-Zero. If you just want the Gerbers, you can use gps_interposer_v2.zip. The interposer is connected to the motherboard and to the M12 module with 2x5 pin 0.050” connectors, one male and one female. But because it was difficult to come up with the right height for the male connector, I chose to use 2 female connectors, available here on Mouser, and used a loose male-to-male connector between them to make the connection. The trickiest part of the build is soldering the 2x5 pin connector that connects to the motherboard: it needs to be reasonably well positioned, otherwise the alignment of the screw holes will be out of whack. I applied some solder paste on the solder pads, carefully placed the connector on top of the paste, and used a soldering iron to melt the paste without moving the connector. It wasn’t too bad. Under normal circumstances, the board is powered by the 5V rail that’s present right next to the GPS module. However, when you plug the USB-C cable into the RP2040-Zero, you need to make sure to remove that connection because you’ll short out the 5V rail of the S200 otherwise! The board has 2 connectors for the 5V: at the front next to the USB-C connector and in the back right above the 5V of the motherboard. The goal was to plug in the power there, but it turned out to be easier to use a patch wire between the motherboard and the front power connector. You’ll need to program the RP2040-Zero with the firmware, of course. You can find it in the sw/rp2040_v2 directory of my gps_interposer GitHub repo. Power Supply Recapping If you plan to deploy the SyncServer in always-on mode, you should seriously consider replacing the capacitors in the power supply unit: they are known to be leaky and if things really go wrong they can be a fire hazard. The power supply board can easily be removed with a few screws. In my version, the capacitors were held in place with some hard sticky goo, so it takes some effort to remove them. The Future: A Software-Only Solution My solution uses a cheap piece of custom hardware. Alex’s solution currently requires patching some microcontroller firmware and then flashing this firmware with a $30 programming dongle. So both solutions require some hardware. A software-only solution should be possible though: the microcontroller gets reprogrammed during an official SyncServer software update procedure. It should be possible to insert the patched microcontroller firmware into an existing software update package and then do an upgrade. Since the software update package is little more than a tar archive of the Linux disk image that runs on the embedded PC, it shouldn’t be very hard to make this happen, but right now this doesn’t exist, and I’m happy with what I have. The Result The video below shows the 10 MHz output of the S200 being measured by a frequency counter that uses a calibrated stand-alone frequency standard as reference clock. 
From those counter readings, the stability of the S200 seems slightly worse than that of my TM4313 GPSDO, but it's close. Right now, I don't yet have the knowledge to measure and quantify these differences in a scientifically rigorous way.

References

Microsemi SyncServer S200 datasheet
Microsemi SyncServer S200, S250, S250i User Guide
EEVblog - Symmetricom S200 Teardown/upgrade to S250
EEVblog - Symmetricom SyncServer S350 + Furuno GT-8031 timing GPS + GPS week rollover
EEVblog - SyncServer S200 GPS lock question
Furuno GPS/GNSS Receiver GPS Week Number Rollover

Footnotes
