The Traditional Hardware Design and Verification Flow

In a professional FPGA or ASIC development flow, multiple models are tested against each other to ensure that the final design behaves the way it should. Common models are:

a behavioral model that describes the functionality at the highest level
These models can be implemented in Matlab, Python, C++ etc. and are usually completely hardware architecture agnostic. They are often not bit accurate in their calculated results, for example because they use floating point numbers instead of the fixed point numbers that are more commonly used by the hardware. A good example is the floating point C model that I used to develop my Racing the Beam Ray Tracer, though in this case the model later transitioned into a hybrid reference/architectural model.

an architectural transaction accurate model
An architectural model is already aware of how the hardware is split into major functional groups and models the interfaces between these functional groups in a bit-accurate and transaction-accurate way at the interface level. It doesn't have a concept of timing in the form of clock cycles.

source hardware model
This model is the source from which the actual hardware is generated. Traditionally, and still in most cases, this is a synthesizable RTL model written in Verilog or VHDL, but high-level synthesis (HLS) is getting some traction as well. In the case of RTL, this model is cycle accurate. In the case of HLS, it still won't be. The difference between an HLS C++ model [1] and an architectural C++ model is in the way it is coded: HLS code needs to obey coding style restrictions that would otherwise prevent the HLS tool from converting the code to RTL. The HLS model is usually also split up into many smaller units that interact with each other.

RTL model
The Verilog or VHDL model of the design. This can be the same as the source hardware model or it can be generated from HLS.

Gate-level model
The RTL model synthesized into a gate-level netlist.

During the design process, different models are compared against each other. Their outputs should be the same… to a certain extent, since it's not possible to guarantee identical results between floating point and fixed point models. One thing that is constant among these models is that they get fed with, operate on, and output actual data values.

Let's use the example of a video pipeline. The input of the hardware block might be raw pixels from a video sensor, the processing could be some filtering algorithm to reduce noise, and the outputs are processed pixels. To verify the design, the various models are fed with a combination of generic images, 'interesting' images that are expected to hit certain use cases, images with just random pixels, or directed tests that explicitly try to trigger corner cases. When there is a mismatch between different models, the fun part begins: figuring out the root cause. For complex algorithms that have a lot of internal state, an error may have happened thousands of transactions before it manifests itself at the output. Tracking down such an issue can be a gigantic pain.
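To make that comparison step concrete, here is a minimal sketch of such a checker, assuming both models dump their output as a flat stream of pixel values. The names and the tolerance are made up for the example and are not taken from any real project:

# Hypothetical checker: compare a floating point reference stream against a
# fixed point hardware model stream and report the first mismatching transaction.
TOLERANCE = 1.0 / 256     # assumed precision for an 8-bit fractional fixed point pipeline

def compare_streams(ref_pixels, hw_pixels, tolerance=TOLERANCE):
    for idx, (ref, hw) in enumerate(zip(ref_pixels, hw_pixels)):
        if abs(ref - hw) > tolerance:
            print(f"first mismatch at transaction {idx}: ref={ref} hw={hw}")
            return False
    return True

The index of the first mismatch is often all you get: the root cause may be buried thousands of transactions earlier.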
For many hardware units, the hard part of the design is not the math, but getting the right data to the math units at the right time, by making sure that the values are written, read, and discarded from internal RAMs and FIFOs in the right order. Even with a detailed micro-architectural specification, a major part of the code may consist of using just the correct address calculation or multiplexer input under various conditions. For these kinds of units, I use a different kind of model: instead of passing around and operating on data values through the various stages of the pipeline or algorithm, I carry around where the data is coming from. This is not so easy to do in C++ or RTL, but it's trivial in Python. For lack of a better name, I call these symbolic models. There are thus two additional models in my arsenal of tools:

a reference symbolic model
a hardware symbolic model

These models are both written in Python and their outputs are compared against each other. In this blog post, I'll go through an example case where I use such a model.

An Image Downscaler as Example Design

Let's design a hardware module that is easy enough to not spend too much time on it for a blog post, but complex enough to illustrate the benefits of a symbolic model: an image downscaler. The core functionality is the following:

it accepts a monochrome image with a maximum resolution of 7680x4320.
it downscales the input image with a fixed ratio of 2 in both directions.
it uses a 3x3 tap 2D filter kernel for downscaling.

The figure below shows how an image with a 12x8 resolution gets filtered and downsampled into a 6x4 resolution image. Each square represents an input pixel, each hatched square an output pixel, and the arrows show how input pixels contribute to the input of the 3x3 filter for the output pixel. For pixels that lie against the top or left border, the top and left pixels are repeated upward and leftward so that the same 3x3 filter can be used.

If this downscaler is part of a streaming display pipeline that eventually sends pixels to a monitor, there is not a lot of flexibility: pixels arrive in a left to right and top to bottom scan order and you need 2 line stores (memories) because there are 3 vertical filter taps. Due to the 2:1 scaling ratio, the line stores can be half the horizontal resolution, but for an 8K resolution that's still 7680/2 ~ 4KB of RAM just for line buffering. In the real world, you'd have to multiply that by 3 to support RGB instead of monochrome. And since we need to read and write from this RAM every clock cycle, there's no chance of off-loading this storage to cheaper memory such as external DRAM.

However, we're lucky: the downscaler is part of a video decoder pipeline and those typically work with super blocks of 32x32 or 64x64 pixels that are scanned left-to-right and top-to-bottom. Within each super block, pixels are grouped in tiles of 4x4 pixels that are scanned the same way. In other words, there are 3 levels of left-to-right, top-to-bottom scan operations:

the pixels inside each 4x4 pixel tile
the pixel tiles inside each super block
the super blocks inside each picture

The output has the same organization of pixels, 4x4 pixel blocks and super blocks, but due to the 2:1 downsampling in both directions, the size of a super block is 32x32 instead of 64x64 pixels.
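As a small illustration, here is what that three-level scan order looks like in Python, using a Pixel namedtuple as the symbolic value: the stream carries coordinates instead of pixel values. The constants and names below are illustrative and not copied from the actual model:

from collections import namedtuple

Pixel = namedtuple('Pixel', ['x', 'y'])

SB_SIZE   = 64    # super block size in pixels, input side
TILE_SIZE = 4     # tile size in pixels

def gen_input_stream(width, height):
    # super blocks inside the picture, in scan order
    for sb_y in range(0, height, SB_SIZE):
        for sb_x in range(0, width, SB_SIZE):
            # tiles inside a super block, in scan order
            for tile_y in range(0, SB_SIZE, TILE_SIZE):
                for tile_x in range(0, SB_SIZE, TILE_SIZE):
                    # pixels inside a 4x4 tile, in scan order
                    yield [Pixel(sb_x + tile_x + x, sb_y + tile_y + y)
                           for y in range(TILE_SIZE)
                           for x in range(TILE_SIZE)]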
There are two major advantages to having the data flow organized this way:

the downscaler can operate on one super block at a time instead of the full image
For pixels inside the super block, that reduces the size of the active input image width from 7680 to just 64 pixels.

as long as the filter kernel is less than 5 pixels high, only 1 line store is needed
The line store contains a partial sum of multiple lines of pixels. While the line store still needs to cover the full picture width when moving from one row of super blocks to the one below it, the bandwidth that is required to access the line store is but a fraction of the one before: 1/64th to be exact. That opens up the opportunity to stream line store data in and out of external DRAM instead of keeping it in expensive on-chip RAMs.

But it's not all roses! There are some negative consequences as well:

pixels from the super block above the current one must be fetched from DMA and stored in a local memory
pixels at the bottom of the current super block must be sent to DMA
the right-most column of pixels from the current super block is used in the next super block when doing the 3x3 filter operation
4x4 size input tiles get downsampled to 2x2 size output tiles, but they must be sent out again as 4x4 tiles. This requires some kind of pixel tile merging operation.

While the RAM area savings are totally worth it, all this adds a significant amount of data management complexity. This is the kind of problem where a symbolic micro-architecture model shines.

The Reference Model

When modeling transformations that work at the picture level, it's convenient to assume that there are no memory size constraints and that you can access all pixels at all times no matter where they are located in the image. You don't have to worry about how much RAM this would take on silicon: it's up to the designer of the micro-architecture to figure out how to create an area efficient implementation. This usually results in a huge simplification of the reference model, which is good because as the source of truth you want to avoid any bugs in it.

For our downscaler, the reference model creates an array of output pixels where each output pixel contains the coordinates of all the input pixels that are required to calculate its value. The pseudo code is something like this:

for y in range(OUTPUT_HEIGHT):
    for x in range(OUTPUT_WIDTH):
        get coordinates of input pixels for filter
        store coordinates at (x,y) of the output image

The reference model Python code is not much more complicated. You can find the code here. Instead of a 2-dimensional array, it uses a dictionary with the output pixel coordinates as key. This is a personal preference: I think ref_output_pixels[(x,y)] looks cleaner than ref_output_pixels[y][x] or ref_output_pixels[x][y]. When the reference model data creation is complete, the ref_output_pixels dictionary contains values like this:

(0,0) => [ Pixel(x=0, y=0), Pixel(x=0, y=0), Pixel(x=1, y=0), Pixel(x=0, y=0), Pixel(x=0, y=0), Pixel(x=1, y=0), Pixel(x=0, y=1), Pixel(x=0, y=1), Pixel(x=1, y=1) ]
(1,0) => [ Pixel(x=1, y=0), Pixel(x=2, y=0), Pixel(x=3, y=0), Pixel(x=1, y=0), Pixel(x=2, y=0), Pixel(x=3, y=0), Pixel(x=1, y=1), Pixel(x=2, y=1), Pixel(x=3, y=1) ]
...
(8,7) => [ Pixel(x=15, y=13), Pixel(x=16, y=13), Pixel(x=17, y=13), Pixel(x=15, y=14), Pixel(x=16, y=14), Pixel(x=17, y=14), Pixel(x=15, y=15), Pixel(x=16, y=15), Pixel(x=17, y=15) ]
...

The reference value of each output pixel is a list of input pixels that are needed to calculate its value.
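Here is a condensed Python sketch of that reference model. It follows the conventions of the example values above: output pixel (x, y) uses the 3x3 window of input pixels centered on input coordinate (2x, 2y), with coordinates clamped at the top and left borders. The small picture size is only there to keep the example self-contained:

from collections import namedtuple

Pixel = namedtuple('Pixel', ['x', 'y'])

INPUT_WIDTH, INPUT_HEIGHT = 18, 16                   # small example picture
OUTPUT_WIDTH, OUTPUT_HEIGHT = INPUT_WIDTH // 2, INPUT_HEIGHT // 2

ref_output_pixels = {}
for y in range(OUTPUT_HEIGHT):
    for x in range(OUTPUT_WIDTH):
        coords = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                coords.append(Pixel(x=max(2 * x + dx, 0),    # repeat the left column at the border
                                    y=max(2 * y + dy, 0)))   # repeat the top row at the border
        ref_output_pixels[(x, y)] = coords

With these sizes, ref_output_pixels[(8, 7)] contains the nine pixels at x=15..17, y=13..15, matching the example values above.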
I do not care about the actual value of the pixels or the mathematical operation that is applied to these inputs.

The Micro-Architecture Model

The source code of the hardware symbolic model can be found here. It has the following main data buffers and FIFOs:

an input stream, generated by gen_input_stream, of 4x4 pixel tiles that are sent super block by super block and then tile by tile.
an output stream of 4x4 pixel tiles with the downsampled image.
a DMA FIFO, modelled with a simple Python list, in which the bottom pixels of a super block are stored and later fetched when the super block of the next row needs the neighboring pixels above.
buffers with above and left neighboring pixels that cover the width and height of a super block.
an output merge FIFO that is used to group a set of 4 2x2 blocks of downsampled pixels into a 4x4 tile of pixels.

The model loops through the super blocks in scan order and then the tiles in scan order, and for each 4x4 tile it calculates a 2x2 output tile.

for sy in range(nr of vertical super blocks):
    for sx in range(nr of horizontal super blocks):
        for tile_y in range(nr of vertical tiles in a super block):
            for tile_x in range(nr of horizontal tiles in a super block):
                fetch 4x4 tile with input pixels
                calculate 2x2 output pixels
                merge 2x2 output pixels into 4x4 output tiles
                data management

When we look at the inputs that are required to calculate the 4 output pixels for each tile of 16 input pixels, we get the following: in addition to pixels from the tile itself, some output pixels also need input values from above, above-left and/or left neighbors. When switching from one super block to the next, buffers must be updated with neighboring pixels for the whole width and height of the super block. But instead of storing the values of individual pixels, we can store intermediate sums to reduce the number of values. At first sight, it looks like this reduces the number of values in the above and left neighbor buffers by half, but that's only true for the above buffer. While the left neighbors can be reduced by half for the current tile, the bottom left pixel value is still needed to calculate the above value for the 4x4 tiles of the next row. So the size of the left buffer is not 1/2 of the size of the super block but 3/4.

In the left figure above, the red rectangles contain the components needed for the top-left output pixel, the green for the top-right output pixel etc. The right figure shows the partial sums that must be calculated for the left and above neighbors for future 4x4 tiles. These red, green, blue and purple rectangles have directly corresponding sections in the code.

p00 = ( tile_above_pixels[0], tile_left_pixels[0],
        input_tile[0], input_tile[1],
        input_tile[4], input_tile[5] )
p10 = ( tile_above_pixels[1],
        input_tile[ 1], input_tile[ 2], input_tile[ 3],
        input_tile[ 5], input_tile[ 6], input_tile[ 7] )
p01 = ( tile_left_pixels[1],
        input_tile[ 4], input_tile[ 5],
        input_tile[ 8], input_tile[ 9],
        input_tile[12], input_tile[13] )
p11 = ( input_tile[ 5], input_tile[ 6], input_tile[ 7],
        input_tile[ 9], input_tile[10], input_tile[11],
        input_tile[13], input_tile[14], input_tile[15] )

For each tile, there's quite a bit of bookkeeping of context values, and reading and writing to buffers, to keep everything going.

Comparing the results

In traditional models, as soon as intermediate values are calculated, the original values can be dropped.
In the case of our example, with a filter where all coefficients are 1, the above and left intermediate values of the top-left output pixel are summed and stored as 18 and 11, and the original values of (3,9,6) and (5,6) aren't needed anymore. This, and the fact that multiple inputs might have the same numerical value, is what often makes traditional models hard to debug. This is not the case for symbolic models, where all input values, the input pixel coordinates, are carried along until the end. In our model, the intermediate results are not removed from the final result. Here is the output result for output pixel (12,10):

...
(
    # Above neighbor intermediate sum
    (Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19)),
    # Left neighbor intermediate sum
    (Pixel(x=23, y=20), Pixel(x=23, y=21)),
    # Values from the current tile
    Pixel(x=24, y=20), Pixel(x=25, y=20),
    Pixel(x=24, y=21), Pixel(x=25, y=21)
),
...

Keeping the intermediate results makes it easier to debug, but to compare against the reference model, the data with nested lists must be flattened into this:

...
(
    Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19),
    Pixel(x=23, y=20), Pixel(x=23, y=21),
    Pixel(x=24, y=20), Pixel(x=25, y=20),
    Pixel(x=24, y=21), Pixel(x=25, y=21)
),
...

But even that is not enough to compare: the reference value stores the 3x3 input values in a scan order that was destroyed by using intermediate values, so there's a final sorting step to restore the scan order:

...
(
    Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19),
    Pixel(x=23, y=20), Pixel(x=24, y=20), Pixel(x=25, y=20),
    Pixel(x=23, y=21), Pixel(x=24, y=21), Pixel(x=25, y=21)
)

Finally, we can go through all the output tiles of the hardware model and compare them against the tiles of the reference model. If all goes well, the script should give the following:

> ./downscaler.py
PASS!

Any kind of bug will result in an error message like this one:

> ./downscaler.py
MISMATCH! sb(1,0) tile(0,0) (0,0) 1
ref: [Pixel(x=7, y=0), Pixel(x=7, y=0), Pixel(x=8, y=0), Pixel(x=8, y=0), Pixel(x=9, y=0), Pixel(x=9, y=0), Pixel(x=7, y=1), Pixel(x=8, y=1), Pixel(x=9, y=1)]
hw:  [Pixel(x=7, y=0), Pixel(x=8, y=0), Pixel(x=8, y=0), Pixel(x=9, y=0), Pixel(x=9, y=0), Pixel(x=7, y=1), Pixel(x=8, y=1), Pixel(x=9, y=1), Pixel(x=7, y=4)]

Conversion to Hardware

The difficulty of converting the Python micro-architectural model to a hardware implementation model depends on the abstraction level of the hardware implementation language. When using C++ and HLS, the effort can be trivial: some of my blocks have a thousand or more lines of Python that can be converted to C++ pretty much line by line. It can take a few weeks to develop and debug the Python model, yet getting the C++ model running only takes a day or two. If the Python model is fully debugged, the only issues encountered are typos made during the conversion and signal precision mistakes.

The story is different when using RTL: with HLS, the synthesis-to-Verilog step converts for loops to FSMs and takes care of pipelining. When writing RTL directly, that task falls on you. You could change the Python model and switch to FSMs there to make that step a bit easier. Either way, having flushed out all the data management issues will allow you to focus on just the RTL specific tasks while being confident that the core architecture is correct.

Combining symbolic models with random input generation

The downscaler example is a fairly trivial unit with a predictable input data stream and a simple algorithm.
In a video encoder or decoder, instead of a scan-order stream of 4x4 tiles, the input is often a hierarchical coding tree with variable size coding units that are scanned in quad-tree depth-first order. Dealing with this kind of data stream kicks up the complexity a whole lot. For designs like this, the combination of a symbolic model and a random coding tree generator is a super power that will hit corner case bugs with an efficiency that puts regular models to shame.

Specification changes

The benefits of symbolic models don't stop with quickly finding corner case bugs. I've run into a number of cases where the design requirements weren't fully understood at the time of implementation and incorrect behavior was discovered long after the hardware model was implemented. By that time, some of the implementation subtleties may have already been forgotten. It's scary to make changes to a hardware design that has complex data management when corner case bugs take thousands of regression simulations to uncover. If the symbolic model is the initial source of truth, this is usually just not an issue: exhaustive tests can often be run in seconds, and once the changes pass there, you have confidence that the corresponding changes in the hardware model source code are sound.

Things to experiment with…

Generating hardware stimuli
I haven't explored this yet, but it is possible to use a symbolic model to generate stimuli for the hardware model. All it takes is to replace the symbolic input values (pixel coordinates) with the actual pixel values at those locations and perform the right mathematical operations on the symbolic values.

A joint symbolic/hardware model
Having a Python symbolic model and a C++ HLS hardware model isn't a huge deal, but there's still the effort of converting one into the other. There is a way to have a unified symbolic/hardware model by switching the data type of the input and output values from one that contains symbolic values to one that contains the real values. If C++ is your HLS language, then this requires writing the symbolic model in C++ instead of Python. You'd trade off the rapid iteration time and conciseness of Python against having only a single code base.

Symbolic models are best for block or sub-block level modelling

Since symbolic models carry along the full history of calculated values, they aren't very practical when modelling multiple larger blocks together: hierarchical lists with tens or more input values create information overload. For this reason, I use symbolic models at the individual block level, or sometimes even the sub-block level when dealing with particularly tricky data management cases. My symbolic model script might contain multiple disjoint models that each implement a sub-block of the same major block, without interacting with each other.

References

symbolic_model repo on GitHub

Conclusion

Symbolic models in Python have been a major factor in boosting my design productivity and increasing my confidence in a micro-architectural implementation. If you need to architect and implement a hardware block with some tricky data management, give them a try, and let me know how it went!

[1] Not all HLS code is written in C++. There are other languages as well.
Introduction

Last year, I created Fake Parallel Printer, a tool to capture the output of the parallel printer port of old-ish test equipment so that it can be converted into screenshots for blog posts etc. It's definitely a niche tool, but of all the projects that I've done, it's the one that has seen the most use. One issue is that converting the captured raw printing data to a bitmap requires recipes that may need quite a bit of tuning. Some output uses HP PCL, other output is Encapsulated Postscript (EPS), and if you're lucky the output is a standard bitmap format like PCX. In this blog post, I describe the procedures that I use to create screenshots of the test equipment that I personally own, so that I don't need to figure it out again when I use the device a year later. That doesn't make it all that useful for others, but somebody may benefit from it when googling for it… As always, I'm using Linux, so the software used below reflects that.

Screenshot Capturing Interfaces

Here are some common ways to transfer screenshots from test equipment to your PC:

USB flash drive
Usually the least painful by far, but it only works on modern equipment.

USB cable
Requires some effort to set the right udev driver permissions and a script that sends commands that are device specific. But it generally works fine.

Ethernet
Still requires slightly modern equipment, and there's often some configuration pain involved.

RS-232 serial
Reliable, but often slow.

Floppy disk
I have a bunch of test equipment with a floppy drive and I also have a USB floppy drive for my PC. However, the drives on all this equipment are broken, in the sense that they can't correctly write data to a disk. There must be some kind of contamination going on when a floppy drive head isn't used for decades.

GPIB
Requires an expensive interface dongle and I've yet to figure out how to make it work for all equipment. Below, I was able to make it work for a TDS 540 oscilloscope, but not for an HP 54542A oscilloscope, for example.

Parallel printer port
Available on a lot of old equipment, but it normally can't be captured by a PC unless you use Fake Parallel Printer. We're now more than a year later, and I use it all the time. I find it to be the easiest to use of all the printer interfaces.

Hardware and Software Tools

GPIB to USB Dongle
If you want to print to GPIB, you'll need a PC to GPIB interface. These days, the cheapest and most common are GPIB to USB dongles. I've written about those here and here. The biggest take-away is that they're expensive (>$100 second hand) and hard to configure when using Linux. And as mentioned above, I have only had limited success using them in printer mode.

ImageMagick
ImageMagick is the swiss army knife of bitmap file processing. It has a million features, but I primarily use it for file format conversion and image cropping. I doubt that there's any major Linux distribution that doesn't have it as a standard package…

sudo apt install imagemagick

GhostPCL
GhostPCL is used to decode PCL files.
On many old machines, these files are created when printing to a Thinkjet, Deskjet or Laserjet.

Installation: download the GhostPCL/GhostPDL source code, then compile:

cd ~/tools
tar xfv ~/Downloads/ghostpdl-10.03.0.tar.gz
cd ghostpdl-10.03.0/
./configure --prefix=/opt/ghostpdl
make -j$(nproc)
export PATH="/opt/ghostpdl/bin:$PATH"

Install:

sudo make install

A whole bunch of tools will now be available in /opt/ghostpdl/bin, including gs (Ghostscript) and gpcl6.

hp2xx
hp2xx converts HPGL files, originally intended for HP plotters, to bitmaps, EPS etc. It's available as a standard package for Ubuntu:

sudo apt install hp2xx

Inkscape
Inkscape is a full-featured vector drawing app, but it can also be used as a command line tool to convert vector content to bitmaps. I use it to convert Encapsulated Postscript (EPS) files to bitmaps. Like other well known tools, installation on Ubuntu is simple:

sudo apt install inkscape

HP 8753C Companion
This tool is specific to HP 8753 vector network analyzers. It captures HPGL plotter commands, extracts the data, recreates what's displayed on the screen, and allows you to interact with it. It's available on GitHub.

Capturing GPIB data in Talk Only mode

Some devices will only print to GPIB in Talk Only mode, or sometimes it's just easier to use them that way. When the device is in Talk Only mode, the PC GPIB controller becomes a Listen Only device, a passive observer that doesn't initiate commands but just receives data. I wrote the following script to record the printing data and save it to a file:

gpib_talk_to_file.py:

#! /usr/bin/env python3
import sys
import pyvisa

gpib_addr = int(sys.argv[1])
output_filename = sys.argv[2]

rm = pyvisa.ResourceManager()
inst = rm.open_resource(f'GPIB::{gpib_addr}')

try:
    # Read data from the device
    data = inst.read_raw()
    with open(output_filename, 'wb') as file:
        file.write(data)
except pyvisa.VisaIOError as e:
    print(f"Error: {e}")

Pyvisa is a universal library to talk to test equipment. I wrote about it here. It will quickly time out when no data arrives in Talk Only mode, but since all data transfers happen with a valid-ready protocol, you can avoid time-out issues by pressing the hardcopy or print button on your oscilloscope first, and only then launching the script above. This will work as long as the printing device doesn't go silent in the middle of printing a page.

TDS 540 Oscilloscope - GPIB - PCL Output

My old TDS 540 oscilloscope doesn't have a printer port, so I had to make do with GPIB. Unlike later versions of the TDS series, it also doesn't have the ability to export bitmaps directly, but it has outputs for:

Thinkjet, Deskjet, and Laserjet in PCL format
Epson in ESC/P format
Interleaf format
EPS Image format
HPGL plotter format

The TDS 540 has a screen resolution of 640x480. I found the Thinkjet output format, with a DPI of 75x75, the easiest to deal with. The device adds a margin of 20 pixels to the left and 47 pixels at the top, but those can be removed with ImageMagick.
With a GPIB address of 11, the overall recipe looks like this:

# Capture the PCL data
gpib_talk_to_file.py 11 tds540.thinkjet.pcl
# Convert PCL to png
gpcl6 -dNOPAUSE -sOutputFile=tds540.png -sDEVICE=png256 -g680x574 -r75x75 tds540.thinkjet.pcl
# Remove the margins and crop the image to 640x480
convert tds540.png -crop 640x480+20+47 tds540.crop.png

The end result looks like this:

HP 54542A Oscilloscope - Parallel Port - PCL or HPGL Output

This oscilloscope was a ridiculous $20 bargain at the Silicon Valley Electronics Flea Market and it's the one I love working with the most: the user interface is just so smooth and intuitive. Like all other old oscilloscopes, the biggest thing going against it is the amount of desk space it requires. It has a GPIB, RS-232, and Centronics parallel port, and all 3 can be used for printing. I tried to get printing to GPIB to work but wasn't successful: I'm able to talk to the device and send commands like "*IDN?" and get a reply just fine, but the GPIB script that works fine with the TDS 540 always times out eventually. I switched to my always reliable Fake Parallel Printer and that worked fine. There's also the option to use the serial cable.

The printer settings menu can be accessed by pressing the Utility button and then the top soft-button with the name "HPIB/RS232/CENT CENTRONICS". You have the following options:

ThinkJet
DeskJet75dpi, DeskJet100dpi, DeskJet150dpi, DeskJet300dpi
LaserJet
PaintJet
Plotter

Unlike with the TDS 540, I wasn't able to get the ThinkJet option to convert into anything, but the DeskJet75dpi option worked fine with this recipe:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f hp54542a_ -s deskjet.pcl -v
gpcl6 -dNOPAUSE -sOutputFile=hp54542a.png -sDEVICE=png256 -g680x700 -r75x75 hp54542a_0.deskjet.pcl
convert hp54542a.png -crop 640x388+19+96 hp54542a.crop.png

The 54542A doesn't just print out the contents of the screen, it also prints the date and adds the settings for the channels that are enabled, trigger options etc. The size of these additional values depends on how many channels and other parameters are enabled.

When you select PaintJet or Plotter as output device, you have the option to select different colors for regular channels, math channels, graticule, markers etc. So it is possible to create nice color screenshots from this scope, even if the CRT is monochrome. I tried the PaintJet option, and while gpcl6 was able to extract an image, the output was much worse than the DeskJet option. I had more success using the Plotter option. It prints out a file in HPGL format that can be converted to a bitmap with hp2xx. The following recipe worked for me:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f hp54542a_ -s plotter.hpgl -v
hp2xx -m png -a 1.4 --width 250 --height 250 -c 12345671 -p 11111111 hp54542a_0.plotter.hpgl

I'm not smitten with the way it looks, but if you want color, this is your best option. The command line options of hp2xx are not intuitive. Maybe it's possible to get this to look a bit better with some other options.

HP Infiniium 54825A Oscilloscope - Parallel Port - Encapsulated Postscript

This indefinite-loaner-from-a-friend oscilloscope has a small PC in it that runs an old version of Windows. It can be connected to Ethernet, but I've never done that: capturing parallel printer traffic is just too convenient. On this oscilloscope, I found that printing things out as Encapsulated Postscript was the best option.
I then use inkscape to convert the screenshot to PNG:

./fake_printer.py --port=/dev/ttyACM0 -t 2 -v --prefix=hp_osc_ -s eps
inkscape -f ./hp_osc_0.eps -w 1580 -y=255 -e hp_osc_0.png
convert hp_osc_0.png -crop 1294x971+142+80 hp_osc_0_cropped.png

Ignore the part circled in red, that was added in post for an earlier article.

TDS 684B - Parallel Port - PCX Color Output

I replaced my TDS 540 oscilloscope with a TDS 684B. On the outside they look identical. They also have the same core user interface, but the 684B has a color screen, a bandwidth of 1GHz, and a sample rate of 5 Gsps.

Print formats

The 684B also has a lot more output options:

Thinkjet, Deskjet, DeskjetC (color), Laserjet output in PCL format
Epson in ESC/P format
DPU thermal printer
PC Paintbrush mono and color in PCX file format
TIFF file format
BMP mono and color format
RLE color format
EPS mono and color printer format
EPS mono and color plotter format
Interleaf .img format
HPGL color plot

Phew. Like the HP 54542A, my unit has GPIB, parallel port, and serial port. It can also write out the files to a floppy drive. So which one to use? BMP is an obvious choice and supported natively by all modern PCs. The only issue is that it gets written out without any compression, so it takes over 130 seconds to capture with fake printer. PCX is a very old bitmap file format, I used it way back in 1988 on my first Intel 8088 PC, but it compresses with run length encoding, which works great on oscilloscope screenshots. It only takes 22 seconds to print. I tried the TIFF option and was happy to see that it only took 17 seconds, but the output was monochrome. So for color bitmap files, PCX is the way to go. The recipe:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f tds684_ -s pcx -v
convert tds684_0.pcx tds684.png

The screenshot above uses the Normal color setting. The scope also has a Bold color setting, and there's a Hardcopy option as well. It's a matter of personal taste, but my preference is the Normal option.

Advantest R3273 Spectrum Analyzer - Parallel Port - PCL Output

Next up is my Advantest R3273 spectrum analyzer. It has a printer port, a separate parallel port that I don't know what it's used for, a serial port, a GPIB port, and a floppy drive that refuses to work. However, in the menus I can only configure prints to go to the floppy or to the parallel port, so fake parallel printer is what I'm using.

The print configuration menu can be reached by pressing: [Config] -> [Copy Config] -> [Printer].

The R3273 supports a bunch of formats, but I had the hardest time getting it to create a color bitmap. After a lot of trial and error, I ended up with this:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f r3273_ -s pcl -v
gpcl6 -dNOPAUSE -sOutputFile=r3273_tmp.png -sDEVICE=png256 -g4000x4000 -r600x600 r3273_0.pcl
convert r3273_tmp.png -filter point -resize 1000 r3273_filt.png
rm r3273_tmp.png
convert r3273_filt.png -crop 640x480+315+94 r3273.png
rm r3273_filt.png

The conversion loses something in the process. The R3273 hardcopy mimics the shades of depressed buttons with a 4x4 pixel dither pattern. If you use a 4x4 pixel box filter and downsample by a factor of 4, this dither pattern converts to a nice uniform gray, but the actual spectrum data gets filtered down as well. With the recipe above, I'm using 4x4-to-1 pixel point-sampling instead, with a phase that is chosen just right so that the black pixels of the dither pattern get picked.
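For what it's worth, the same point-sampling trick can also be done in a few lines of Python instead of ImageMagick. This is just an illustrative sketch, assuming numpy and Pillow are installed; the phase values are assumptions that you'd tune until the black dither pixels are the ones that get picked:

import numpy as np
from PIL import Image

# load the 4000x4000 gpcl6 render as a grayscale array
img = np.array(Image.open("r3273_tmp.png").convert("L"))
phase_y, phase_x = 3, 3     # assumed phase within each 4x4 block
# keep one pixel out of every 4x4 block, at the chosen phase
Image.fromarray(img[phase_y::4, phase_x::4]).save("r3273_point.png")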
The highlighted buttons are now solid black and everything looks good.

HP 8753C Vector Network Analyzer - GPIB - HP 8753 Companion

My HP 8753C VNA only has a GPIB interface, so there's not a lot of choice there. I'm using HP 8753 Companion. It can be used for much more than just grabbing screenshots: you can save the measured data to a file, upload calibration kit data and so on. It's great! You can render the screenshot the way it was plotted by the HP 8753C, or you can display it in a high resolution mode. The default color settings for the HPGL plot aren't ideal, but everything is configurable. If you don't have one, the existence of HP 8753 Companion alone is a good reason to buy a USB-to-GPIB dongle.

Siglent SDS 2304X Oscilloscope - USB Drive, Ethernet or USB

My Siglent SDS 2304X was my first oscilloscope. It was designed 20 years later than all the other stuff, with a modern UI and modern interfaces such as USB and Ethernet. There is no GPIB, parallel or RS-232 serial port to be found. I don't love the scope. The UI can become slow when you're displaying a bunch of data on the screen, and selecting anything from a menu with a detentless rotary knob can be the most infuriating experience. But it's my daily driver because it's not a boat anchor: even on my messy desk, I can usually create room to put it down without too much effort. You'd think that I use USB or Ethernet to grab screenshots, but most of the time I just use a USB stick and shuttle it back and forth between the scope and the PC. That's because setting up the connection is always a bit of a pain. However, if you insist, you can set things up this way:

Ethernet

To configure Ethernet, you need to go to [Utility] -> [Next Page] -> [I/O] -> [LAN]. Unlike my HP 1670G logic analyzer, the Siglent supports DHCP, but when writing this blog post, the scope refused to grab an IP address on my network. No amount of rebooting, disabling and re-enabling DHCP helped. I have gotten it to work in the past, but today it just wasn't happening. You'll probably understand why using a zero-configuration USB stick becomes an attractive alternative.

USB

If you want to use USB, you need an old relic of a USB-B cable. It shows up like this:

sudo dmesg -w
[314170.674538] usb 1-7.1: new full-speed USB device number 11 using xhci_hcd
[314170.856450] usb 1-7.1: not running at top speed; connect to a high speed hub
[314170.892455] usb 1-7.1: New USB device found, idVendor=f4ec, idProduct=ee3a, bcdDevice= 2.00
[314170.892464] usb 1-7.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[314170.892469] usb 1-7.1: Product: SDS2304X
[314170.892473] usb 1-7.1: Manufacturer: Siglent Technologies Co,. Ltd.
[314170.892476] usb 1-7.1: SerialNumber: SDS2XJBD1R2754

Note 3 key parameters:

USB vendor ID: f4ec
USB product ID: ee3a
Product serial number: SDS2XJBD1R2754

Set udev rules so that you can access this device over USB without requiring root permission by creating an /etc/udev/rules.d/99-usbtmc.rules file and adding the following line:

SUBSYSTEM=="usb", ATTR{idVendor}=="f4ec", ATTR{idProduct}=="ee3a", MODE="0666"

You should obviously replace the vendor ID and product ID with the ones of your device.
Make the new udev rules active:

sudo udevadm control --reload-rules
sudo udevadm trigger

You can now download screenshots with the following script:

siglent_screenshot_usb.py:

#!/usr/bin/env python3
import argparse
import io

import pyvisa
from pyvisa.constants import StatusCode
from PIL import Image

def screendump(filename):
    rm = pyvisa.ResourceManager('')
    # Siglent SDS2304X
    scope = rm.open_resource('USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR')
    scope.read_termination = None
    scope.write('SCDP')
    data = scope.read_raw(2000000)
    image = Image.open(io.BytesIO(data))
    image.save(filename)
    scope.close()
    rm.close()

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Grab a screenshot from a Siglent DSO.')
    parser.add_argument('--output', '-o', dest='filename', required=True,
                        help='the output filename')
    args = parser.parse_args()
    screendump(args.filename)

Once again, take note of this line:

scope = rm.open_resource('USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR')

and don't forget to replace 0xF4EC, 0xEE3A, and SDS2XJBD1R2754 with the correct USB vendor ID, product ID and serial number. Call the script like this:

./siglent_screenshot_usb.py -o siglent_screenshot.png

If all goes well, you'll get something like this:
Introduction

The Silicon Valley Electronics Flea Market runs every month from March to September and then goes on a hiatus for 6 months, so when the new March episode hits, there is a ton of pent-up demand… and supply: inevitably, some spouse has reached their limit during spring cleaning and wants a few of those boat anchors gone. You do not want to miss the first flea market of the year, and you better come early, think 6:30am, because the good stuff goes fast. But success is not guaranteed: I saw a broken-but-probably-repairable HP 58503A GPS Time and Frequency Reference getting sold right in front of me for just $40. And while I was able to pick up a very low end HP signal generator and a Fluke multimeter for $10, at 8:30am I was on my way back to the car, unsatisfied.

Until that guy and his wife, late arrivals, started unloading stuff from their trunk onto a blanket. There it was, right in front of me, a Stanford Research Systems SR620 universal counter. I tried to haggle on the SR620 but was met with a "You know what you're doing and what this is worth." Let's just say that I paid the listed price, which was still a crazy good deal. I even had some cash left in my pocket. Which is great, because right next to the SR620 sat a pristine looking Symmetricom SyncServer S200 with accessories for the ridiculously low price of $60. I've never left the flea market more excited.

I didn't really know what a network time server does, but since it said GPS time server, I hoped that it could work as a GPSDO with a 10MHz reference clock and a 1 pulse-per-second (1PPS) synchronization output. But even if not, I was sure that I'd learn something new, and worst case I'd reuse the beautiful rackmount case for some future project. It turns out that out of the box a SyncServer S200 cannot be used as a GPSDO, but its close sibling, the S250, can do it just fine, and it's straightforward to do the conversion. A bigger problem was an issue with the GPS module due to the week number rollover (WNRO) problem.

In this and upcoming blog posts, I go through the steps required to bring a mostly useless S200 back to life, how to add the connectors for 10MHz and 1PPS output, which allows it to do double duty as a GPSDO, and how to convert it into a full S250. Some of what is described here is based on discussions on the EEVblog forum, such as this one. I hope you'll find it useful.

What was the SyncServer S200 Supposed to be Used For?

Symmetricom was acquired by Microsemi and the SyncServer S200 is now obsolete, but Microsemi still has a data sheet on their website. Here's what it says in the first paragraph:

The SyncServer S200 GPS Network Time Server synchronizes clocks on servers for large or expanding IT enterprises and for the ever-demanding high-bandwidth Next Generation Network.
Accurately synchronized clocks are critical for network log file accuracy, security, billing systems, electronic transactions, database integrity, VoIP, and many other essential applications.

There's a lot more, but one of the key aspects of a time server like this is that it uses the Network Time Protocol (NTP), which…

… is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks.

The SyncServer S200 is a stratum 1 level device. The stratum level indicates the hierarchical distance to an authoritative atomic reference clock. While it does not act as a stratum 0 oracle of truth, it can derive the time directly from a stratum 0 device, in this case the GPS system. When the GPS signal disappears, a SyncServer can fall back to stratum 2 mode if it can retrieve the time from other stratum 1 devices (e.g. other NTP-capable time servers on the network), or it can switch to a hold-over mode where it lets an internal oscillator run untethered, without being locked to the GPS system.

The S200 has 3 oscillator options:

a TCXO with a drift of 21ms per day
an OCXO with a drift of 1ms per day
a rubidium atomic clock with a drift of only 25us per day

Mine has the OCXO option. It's clear that the primary use case of the S200 is not to act as a lab clock or frequency reference, but as something that belongs in a router cabinet.

IMPORTANT: Use the Right GPS Antenna!

Before continuing, let's interrupt with a short but important service message: GPS antennas have active elements that amplify the received signal right at the point of reception, before sending it down the cable to the GPS unit. Most antennas need 3.3V or 5V that is supplied through the GPS antenna connector, but the Symmetricom S200 supplies 12V! Make sure you are using a 12V GPS antenna! Check out my earlier blog post for more information.

The SyncServer S200 Outside and Inside

The front panel has 2 USB type-A ports, an RS-232 console interface, a vacuum fluorescent display (VFD), and a bunch of buttons. VFDs have a tendency to degrade over time, but mine is in perfect shape. In the back, we can see a power switch for the 100-240VAC mains voltage [1], a BNC connector for the GPS antenna, a Sysplex Timer-Out interface, and 3 LAN ports.

Let's see what's inside. Under the heatsink with the blue heat transfer pad sits an off-the-shelf embedded PC motherboard with a 500MHz AMD Geode x86 CPU and 256MB of DRAM. Not all SyncServer S200 models use the same motherboard; some of them have a VIA Eden ESP 400MHz processor. Using the front panel RS-232 serial port, I recorded this boot.log file that contains some low level details of the system. The PC is running MontaVista Linux, a commercial Linux distribution for embedded devices. At the bottom left sits a little PCB for the USB and RS-232 ports. There are also two cables for the VFD and the buttons. On the right side of the main PCB we can see a Vectron OCXO, the same brand as the huge OCXO that you can find in the GT300 frequency standard of my home lab. It creates the 10MHz stable clock that's used for time keeping. A 512MB CompactFlash card contains the operating system. There's still plenty of room for a rubidium atomic clock module. It should be possible to upgrade my S200 with a rubidium standard, but I didn't try that. At the top left sits a Furuno GT-8031 GPS module that is compatible with Motorola OnCore M12 modules.
There are a bunch of connectors and, very important, 3 unpopulated connector footprints. When looking at the board from a different angle, we can also see 6 covered up holes that line up with those 3 unpopulated footprints. Let's cut to the chase and show the backside of the next model in the SyncServer product line, the S250: those 6 holes are used for the input and output of the following 3 signals:

10MHz
1PPS
IRIG

We are primarily interested in the 10MHz and 1PPS outputs, of course. The BNC connectors may be unpopulated, but the driving circuit is not. When you probe the hole for the 10MHz output, you get this sorry excuse of a sine wave. There's also a 50% duty cycle 1PPS signal present on the other BNC footprint.

Installing the Missing BNC Connectors

The first step is to install the missing BNC connectors. They are expensive Amphenol RF 031-6577 connectors; I got 3 of them for $17.23 a piece from Mouser. Chances are that you'll never use the IRIG port, so you can make do with only 2 connectors to save yourself some money. You need to remove the whole main PCB from the case, which is a matter of removing all the screws and some easy to disconnect cables and then taking it out. The PCB rests on a bunch of spacers for the screws. When you slide the PCB out, be very careful not to scrape the bottom of the PCB against these spacers!

Next up is opening up the covered holes. The plastic that covers these holes is very sturdy. It took quite a bit of time to open them up with a box cutter. The holes are not perfectly round: they have a flat section at the top. Open up the holes as much as possible because the BNC connectors will have to go through them during reassembly and you want this to go smoothly without putting strain on the PCB. The end result isn't commercial product quality, but it's good enough for something that will stay hidden at the back of the instrument. Installing the BNC connectors is easy: plug them in and solder them down… Due to the imperfectly cut BNC holes, installing the main PCB back into the chassis will be harder than removing it, so, again, be careful about those spacers at the bottom of the case to prevent damaging the PCB! The end result looks clean enough.

Setting Up a SyncServer S200 from Scratch

My first goal was to set up the SyncServer so that it'd be synchronized to the GPS time and display the right time on the display. That should be a pretty simple thing to do, but it took many hours before I got there. There were 4 reasons for that:

due to the WNRO issue, the SyncServer didn't want to lock onto the GPS time.
the default IP addresses for the Symmetricom NTP servers aren't operational anymore. But that wasn't clear in the status screen.
when I tried alternative NTP servers, they couldn't be found because I hadn't configured any DNS servers.
configuring the SyncServer to get its IP address using DHCP is a bit of a nightmare.

My focus was first on the GPS part and then the NTP part, but here I'll do things in the opposite order so that you get somewhere without the kind of hardware hacking that I ended up doing.

Opening up the SyncServer S200

Just remove the 4 black screws at the top of the case and lift off the top cover. I wish other equipment was just as easy.

The SyncServer File System on the Flash Card

The S200 uses a 512MB flash card to store the OS. I wanted to have a look at the contents and make a copy as well, so that I wouldn't have to worry about making crucial mistakes. I bought a CompactFlash card reader on Amazon for $8.
After plugging in the drive:

$ df
/dev/sdb7     97854      2  92720   1% /media/tom/_tmparea
/dev/sdb2      8571    442   7683   6% /media/tom/_persist
/dev/sdb5    173489  69985  94256  43% /media/tom/_fsroot1
/dev/sdb6    173489  69986  94255  43% /media/tom/_fsroot2

In my case, the drive is mounted on /dev/sdb but this will probably be different for you. The flash card has 4 different partitions. I did some digging to understand how the system works.

OS partitions _fsroot1 and _fsroot2

The _fsroot1 and _fsroot2 partitions contain copies of the OS itself. Before making any changes on my system, I checked the chronosver file that resides in the root directory of each partition:

cat _fsroot1/chronosver
#
# SyncServer version
#
# $Date: 2010-12-08-093436 $
#
# The format is Major.Minor
Version=1.26
#
# For internal identification
#
Revision=$ProjectRevision: Last Checkpoint: 1.666.10.3420934 $

and:

cat _fsroot2/chronosver
#
# SyncServer version
#
# $Date: 2010-12-08-093436 $
#
# The format is Major.Minor
Version=1.26
#
# For internal identification
#
Revision=$ProjectRevision: Last Checkpoint: 1.666.10.3420934 $

The versions are identical and indicate that version 1.26 of the system is installed. Later versions 1.30 and 1.36 are available. When you install those, you'll see that only 1 of the partitions gets updated, so what's happening is that one _fsroot partition gets updated and the system boots the newest version. Except when doing an OS upgrade, the _fsroot partitions on the flash card are always mounted as read-only, even when making system configuration changes.

Persistent configuration partition _persist

The _persist partition contains a tar file of the /var and /etc directories with configuration data. When you make changes through the web interface, the changes end up here.

ll _persist/
total 445
drwxr-xr-x  2 root root   1024 Jan  2  2006 ./
drwxr-x---+ 7 root root   4096 Jun 23 17:40 ../
-rw-r--r--  1 root root   2059 Dec  8  2010 downgradelist
-rw-r--r--  1 root root     50 Jan  1  2006 persist-1.2.md5
-rw-r--r--  1 root root 440320 Jan  1  2006 persist-1.2.tar
-rw-r--r--  1 root root   2062 Dec  8  2010 upgradelist

The _fsroot partitions contain /etc and /var directories as well, but when mounted on the real system, the contents of these directories come from the tar file. However, when you reset the system back to default values, it restores back to the values in the _fsroot partition.

System upgrade staging area _tmparea

The _tmparea partition is used as a staging area when upgrading the system. When you use the web interface to upload a new version, the file gets stored here before one of the _fsroot directories gets overwritten.

Cloning the CompactFlash Card

You don't really need a second flash card: a full disk copy of the contents of the original card can be saved to your PC and restored from there, but I found it useful to have a second one to swap back and forth between different flash cards while experimenting. There is plenty of chatter in the EEVblog forum about which type of flash card does or doesn't work. It must be a CompactFlash card with fixed disk PIO support instead of removable DMA support, and it's also good to use one that is an industrial type, because those have long-life SLC-type flash memory chips that allow more read-write operations and a higher temperature range, but that's where the consensus ends. Some people aren't even able to make a 512MB card work. Others claim that their 512MB card worked, but that larger capacity ones didn't. I bought this 512MB card and it worked fine, and this 1GB one worked fine too.
I ran into none of the issues that some other people seem to have. I wonder if it has to do with the kind of embedded PC motherboard that my system is using: remember that there are different versions out there. Windows people should use something like HDD Raw Copy Tool to make a copy, but it's just as easy to use the standard Linux disk utilities.

Copy the whole flash card contents to a file on your Linux machine like this:

$ sudo dd if=/dev/sdb of=flash_contents_orig.img bs=1M
[sudo] password for tom:
488+1 records in
488+1 records out
512483328 bytes (512 MB, 489 MiB) copied, 40.6726 s, 12.6 MB/s

If you have a second flash card, copying the original contents to that one works the same way:

$ sudo dd if=flash_contents_orig.img of=/dev/sdb bs=1M && sync
488+1 records in
488+1 records out
512483328 bytes (512 MB, 489 MiB) copied, 0.143977 s, 3.6 GB/s

Notice the sync command at the end: if you don't add it, you'll get your command line back right away while the copy operation is still going on in the background. I didn't want to accidentally unplug the thing before it was done. Unmount the drive after copying:

sudo umount /dev/sdb

You're all set now to plug the old or the new CF card back into the S200.

Reset to Default Settings

Chances are that your SyncServer was used before and that it didn't come in its default state. That can be a problem, for example when the passwords are different from the default one. The procedure to reset the device back to factory settings is described on page 120 of the user manual. After opening the device, set jumper JP4 on the motherboard. It's located at the bottom right of the motherboard, next to the CR2032 lithium battery. The jumper pins have a pitch that is smaller than the usual 0.1", so you can't use a standard jumper. I used logic analyzer grabbers to make the connection. Install the jumper connection, power up the device, wait for 100 seconds, power the device back off, and remove the connection. When powered up, there won't be any messages or visual indications to tell you that defaults have been restored, you just have to wait long enough.

Setting the IP Address

Configuration of the SyncServer happens through a web interface, so the next step is to set up a network connection. The user guide recommends assigning static IP addresses instead of DHCP because NTP associations and authentication may rely on static network addresses. This was good advice, because I was never able to get the SyncServer to work with DHCP… While there are 3 LAN ports, only port 1 can be used for web management. You'll need to set the IP address, the gateway address mask, and the gateway IP address through the front panel. I used the following settings, because that's how my home network is configured:

IP address: 192.168.1.201
Gateway address mask: 255.255.255.0
Gateway IP address: 192.168.1.1

Values for your network may be different. If your home network uses DHCP for all other devices, it's best to tell your router to exclude the chosen static IP address from the list of dynamically assignable IP addresses. I explain in my HP 1670G blog post how to do this with an Asus router. Once LAN port 1 has been enabled and configured, plug in a network cable. The network LED on the front panel of the device should turn green.
If all went well, you can now ping the SyncServer from your host machine:

ping 192.168.1.201
PING 192.168.1.201 (192.168.1.201): 56 data bytes
64 bytes from 192.168.1.201: icmp_seq=0 ttl=64 time=2.172 ms
64 bytes from 192.168.1.201: icmp_seq=1 ttl=64 time=3.221 ms
64 bytes from 192.168.1.201: icmp_seq=2 ttl=64 time=2.195 ms
...

Some DHCP notes

If, against the general advice, you still decide to use DHCP, here are some notes:

when you switch LAN port 1 to DHCP through the front panel for initial configuration, the web interface won't work. Your first configuration round must happen with a static IP address. It's during that round that you must tell the device that further web configuration is allowed over a DHCP enabled LAN port 1.
when DHCP is enabled and a network cable is not plugged in when powering up, the SyncServer bootup time increases by several minutes. This is because the SyncServer waits for IP address assignment until a certain time-out value is reached.
DHCP IP address assignment only happens at initial bootup. If you plug in the network cable when the device is already up and running, no IP address will be assigned.

Accessing the Web Interface

Going forward, most web interface screenshots will show a header that says "SyncServer S250" instead of "S200". Modding the device to an S250 is something that will be discussed in a future blog post. With your browser, you should now be able to access the web interface by going to the IP address that you assigned. When using default settings, the following credentials are active:

Username: admin
Password: symmetricom

DNS Configuration

Now is the time to assign DNS servers. This step is crucial if you want your SyncServer to work with external NTP servers, since most of them want you to use a hostname instead of an IP address. Using the web interface, go to "NETWORK", then the "Ethernet" tab, and then "Add a DNS Server". I'm using Comcast/Xfinity, which has a DNS server at address 75.75.75.75, so that's what I used. You'll need to find the appropriate DNS server for your case, or you can use Google's Public DNS service at address 8.8.8.8. If you set up the DNS server correctly, you should now be able to ping public Internet servers with the "Ping" tab. For the last 35 years, I've used "www.yahoo.com" as my ping testing address.

External NTP Server Configuration

The default settings of my SyncServer make it use static IP addresses 69.25.96.11, 69.25.96.12, and 69.25.96.14 for external NTP requests. Symmetricom used to have NTP servers there, but they are not operational anymore. You must replace these static IP addresses with other NTP servers. A popular option is ntp.org, which offers a free NTP server service. Another alternative is the NTP servers from NIST, the National Institute of Standards and Technology. Or do like I did, and use both! For ntp.org, use the following addresses: 0.pool.ntp.org, 1.pool.ntp.org, 2.pool.ntp.org, or 0.us.pool.ntp.org, 1.us.pool.ntp.org, 2.us.pool.ntp.org if you want to force using a server that is located in the US. For NIST, I used time.nist.gov.

Here is how you fill in the panel to add a server, and here's how it looks after all servers have been added. Click "Restart" when all servers have been added, and check out "NTP - Sysinfo" to verify that the SyncServer can successfully talk to the external servers. The "St/Poll" column shows the stratum level and polling interval of the servers. In the example above, 4 NTP servers with stratum levels 1 and 2 are being polled.
The stratum level will vary because NTP.org rotates through a pool of servers with different levels. If there’s a configuration issue, you’ll see a stratum level of 16. Testing your NTP server After setting up the external NTP servers, your SyncServer can now itself be used as an NTP server. Using Linux, you can query the date from the SyncServer with the ntpdate tool as follows: ntpdate -q 192.168.1.201 server 192.168.1.201, stratum 16, offset -0.019111, delay 0.02856 4 Jun 23:20:32 ntpdate[230482]: no server suitable for synchronization found Initially, the SyncServer may report itself as an unsynchronized stratum 16 device because NTP server synchronization can take a bit of time, but after around 15 minutes, you’ll get this: ntpdate -q 192.168.1.201 server 192.168.1.201, stratum 2, offset -0.034768, delay 0.02989 4 Jun 23:23:21 ntpdate[230491]: adjust time server 192.168.1.201 offset -0.034768 sec The Sync status LED and web page indicator are now yellow instead of red: you now have something that should be good enough for time serving needs that don’t require nanosecond-level accuracy. Don’t worry about the NTP LED not being constantly green: it only lights up when the SyncServer polls the external server, and that only happens every so many seconds. On the front panel and on the web interface under “STATUS -> Timing”, you’ll see that the current sync source is NTP and that the hardware clock status is “Locked”. On an S200, you will only see the “GPS Input Status - Unlocked”; there won’t be any IRIG, 1 PPS, or 10 MHz input status. Those are S250 features that will be unlocked in a later blog post. A Complicated Power Hungry Clock with a Terrible 10MHz Output All that’s left now to display the correct local time is to set the right time zone. You can do this under “Timing” -> “Time Zone”. Once you’ve done that, you can use your SyncServer as a slightly overcomplicated living room clock: It’s power hungry too: mine pulls roughly 19W from the power socket. But what about the 10MHz output? After booting up the S200, when the device is not yet locked to NTP, the Vector OCXO is free-running at a frequency of 9,999,993 Hz, an error of 7 Hz2, which makes it unusable for the lab. One would expect this error to go down when the device is locked to NTP (a bad time reference is better than no reference at all), but that’s not the case: as soon as the device locked to NTP, the 10 MHz output settled at 10,000,095 Hz. The error increased by an order of magnitude, from 7 Hz (0.7 ppm) to 95 Hz (9.5 ppm)! However, since NTP keeps disciplining the clock, the frequency should average out to 10 MHz over time, and indeed, the behavior was bimodal: after around 20 minutes, the 10 MHz output frequency switched to 9,999,001 Hz. That doesn’t quite average to 10 MHz, but that’s because the frequency counter was connected to a non-disciplined reference clock. For the SyncServer to be usable as a lab timing reference, it clearly needs that GPS input. It will also be interesting to check the hold-over behavior of the OCXO after it has first locked to the GPS system. Coming Up: Make the S200 Lock to GPS In the next blog post, I’ll describe how I was able to make the S200 lock onto GPS satellites, which makes it a stratum 1 device. This took a lot of work, including the design of an interposer PCB to work around the GPS week number rollover (WNRO) issue. Stay tuned! References Microsemi SyncServer S200 datasheet Microsemi SyncServer S200, S250, S250i User Guide Footnotes There are also versions for telecom applications that are powered with 40-60 VDC. ↩ This number will be different for your unit. ↩
TLDR: It’s a useless technology demo. Introduction Rules of Engagement Test Ride 1: from Kings Beach to Truckee (11 miles) Test Ride 2: I-80 from Truckee to Blue Canyon (36 miles) Test Ride 3: from West-Valley College to I-85 Entrance (1 mile) Conclusion Introduction In the past months, Tesla has been offering a free, one-month trial of their full self-driving (FSD) system to all current owners. It looks like they are rolling this out in stages, because I only got mine a few days ago. I’ve had a Model Y for more than 3 years now, well before Elon revealed himself as the kind of person he really is, and I’ve been happy with it. The odometer is now well above 50,000 miles, and a significant part of those miles was spent on I-80 while driving between the SF South Bay and the Lake Tahoe area. For long distance interstate driving, autopilot (in other words: lane centering and adaptive cruise control) has been amazing. I use it all the time, and have little to complain about. Early on, I had one case where the autopilot started slowing down for no good reason, but since I distrust these kinds of systems and since phantom braking has been reported quite a bit in the press, I try to pay attention to what the car is doing at all times. I immediately pressed the accelerator, and that was that. I don’t know how prevalent phantom braking really is. One time is still too many, and disconcerting. It doesn’t help that you can’t anticipate it; there must be a bunch of different factors that trigger it: version of the car, weather, light conditions, etc. All I can say, after so many miles, is that autopilot has been amazing for me. When I buy my next car, my requirements will be simple: I want an EV, an extensive charger network along I-80, and autosteer that’s at least as good as what I have today. Let’s hope there’ll be decent Tesla alternatives by then. But let’s get to FSD. During his last financial conference call, Musk claimed that he wants to focus more on robo-taxis. With no driver in the car at all, such a system had better be pretty much flawless. But with the YouTube videos that are out there, often posted by Tesla fans, showing the system making ridiculous errors, I highly doubt that the system is close to ready. I would never pay for FSD: not only do I not trust it, I also don’t really see the point. But with a free trial, I couldn’t resist checking it out. Here are my impressions. Rules of Engagement During 3 short tests, I watched FSD the way a helicopter parent watches a toddler who’s first exploring the world: allow it to do what it wants to do, but intervene the moment you feel things are not going the way you like. It’s common on social media to see comments like this: “if the driver had waited a bit more, FSD would still have corrected itself.” I’m having none of that. The moment I sense it’s on its way to do something, anything, wrong, I intervene. While it’s possible that benign cases are dinged as interventions, I don’t think anything I describe below can be considered as such. They were real mistakes that should never have happened. Test Ride 1: from Kings Beach to Truckee (11 miles) I first switched on FSD for an 11 mile drive from a mountain biking trailhead in Kings Beach to the I-80 entrance in Truckee, with a stop at a gas station to get some snacks. Click to open in Google Maps This is not a complicated task. Other than the gas stop, it’s just driving straight along state route 267 with 3 traffic lights. What could possibly go wrong?
Well, FSD managed to make 2 mistakes. Mistake 1: selecting the wrong exit lane For the first mistake, instead of turning right at the gas station, it decided to prepare to exit one street early, switched on its indicator, and started moving to the right exit lane. Note that there is no way to get to the gas station through that first exit. If I hadn’t immediately interrupted that maneuver (see Rules of Engagement), I assume it would have corrected itself eventually and gone back onto the main lane. But if I had been the driver behind, I’d have questioned the antics of the driver in front of me. FSD managed to screw up its very first maneuver. Not a good look. Mistake 2: selecting the right turn lane when going straight The second mistake happened less than a mile later. At the intersection with Old Brockway Rd, the car was supposed to continue straight. There are 3 lanes at the traffic light: left, middle, and right, and only the middle lane can be used to go straight. For whatever reason, FSD initiated a move to the right lane. Another case where I’m sure it would have corrected itself eventually, but it’s clear that the system had no clue about the traffic situation in front of it. While both cases are not life-or-death situations, it’s truly impressive that FSD managed to make 2 easily avoidable mistakes within my first 10 miles of using it! Test Ride 2: I-80 from Truckee to Blue Canyon (36 miles) For the second test, my wife reluctantly gave me permission to try out FSD for interstate driving, which should be its best-case scenario. Click to open in Google Maps It’s a bit disconcerting to see the car make a decision to change lanes to pass someone, but I guess that’s something you’ll get used to. What was baffling, though, was the way in which it behaved worse than autopilot. There were two nearly identical cases where the 2-lane road was dead straight, with excellent lane markings, and with cars to the right of me, yet FSD made nervous left-right oscillation-like corrections that I have never experienced before in autopilot mode. It was not a case of FSD wanting to change lanes; no right indicator was ever switched on. The first time, my wife questioned what was going on. The second time, on a section just past the Whitmore Caltrans station near Alta, she ordered me to switch off FSD. In the past 3 years, she never once asked me to switch off autopilot. One would think that autopilot and FSD have the same core lane tracking algorithms, but one way or another the experience was radically different. I switched back to autopilot. The remaining 3 hours were uneventful. Test Ride 3: from West-Valley College to I-85 Entrance (1 mile) The final test happened yesterday, while driving home from the Silicon Valley Electronics Flea Market. These are always held on a Sunday, start very early at 6am, and I’m usually out before 9am, so there’s almost nobody on the road. Click to open in Google Maps FSD managed to turn right out of the parking lot just fine and get past the first traffic light. The second traffic light has a don’t-turn-on-red sign. The light was red, the Tesla came to a full stop, and then it pressed the accelerator to move on while the light was still red. (According to my colleague, police often lie in wait at this location to catch violators.) By now I fully expected it to make that mistake, so I was ready to press the brake. Conclusion The way it currently behaves, FSD is a system that can’t be trusted to make the right decisions.
It makes the most basic mistakes and it makes many of them. Without FSD, you pay attention to the road and everything else is within your control. With FSD, you still need to pay attention, but now there’s the additional cognitive load of monitoring an unpredictable system over which you don’t have direct control. Forget about just being focused: you need to be hyper-focused, and you need to pay $99 per month or a one-time fee of $12,000 for the privilege. With the limited functionality of autopilot, you hit the sweet spot: adaptive cruise control and lane centering work reliably, and you don’t need to worry about any other mischief. Maybe one day I’ll be able to drive to Lake Tahoe by typing in the address, sitting back, and taking a nap or playing on my phone. Until then, it’s just a fancy technology demo with little practical value.
What is the origin of the word "mainframe", referring to a large, complex computer? Most sources agree that the term is related to the frames that held early computers, but the details are vague.1 It turns out that the history is more interesting and complicated than you'd expect. Based on my research, the earliest computer to use the term "main frame" was the IBM 701 computer (1952), which consisted of boxes called "frames." The 701 system consisted of two power frames, a power distribution frame, an electrostatic storage frame, a drum frame, tape frames, and most importantly a main frame. The IBM 701's main frame is shown in the documentation below.2 This diagram shows how the IBM 701 mainframe swings open for access to the circuitry. From "Type 701 EDPM [Electronic Data Processing Machine] Installation Manual", IBM. From Computer History Museum archives. The meaning of "mainframe" has evolved, shifting from being a part of a computer to being a type of computer. For decades, "mainframe" referred to the physical box of the computer; unlike modern usage, this "mainframe" could be a minicomputer or even microcomputer. Simultaneously, "mainframe" was a synonym for "central processing unit." In the 1970s, the modern meaning started to develop—a large, powerful computer for transaction processing or business applications—but it took decades for this meaning to replace the earlier ones. In this article, I'll examine the history of these shifting meanings in detail. Early computers and the origin of "main frame" Early computers used a variety of mounting and packaging techniques including panels, cabinets, racks, and bays.3 This packaging made it very difficult to install or move a computer, often requiring cranes or the removal of walls.4 To avoid these problems, the designers of the IBM 701 computer came up with an innovative packaging technique. This computer was constructed as individual units that would pass through a standard doorway, would fit on a standard elevator, and could be transported with normal trucking or aircraft facilities.7 These units were built from a metal frame with covers attached, so each unit was called a frame. The frames were named according to their function, such as the power frames and the tape frame. Naturally, the main part of the computer was called the main frame. An IBM 701 system at General Motors. On the left: tape drives in front of power frames. Back: drum unit/frame, control panel and electronic analytical control unit (main frame), electrostatic storage unit/frame (with circular storage CRTs). Right: printer, card punch. Photo from BRL Report, thanks to Ed Thelen. The IBM 701's internal documentation used "main frame" frequently to indicate the main box of the computer, alongside "power frame", "core frame", and so forth. 
For instance, each component in the schematics was labeled with its location in the computer, "MF" for the main frame.6 Externally, however, IBM documentation described the parts of the 701 computer as units rather than frames.5 The term "main frame" was used by a few other computers in the 1950s.8 For instance, the JOHNNIAC Progress Report (August 8, 1952) mentions that "the main frame for the JOHNNIAC is ready to receive registers" and they could test the arithmetic unit "in the JOHNNIAC main frame in October."10 An article on the RAND Computer in 1953 stated that "The main frame is completed and partially wired." The main body of a computer called ERMA is labeled "main frame" in the 1955 Proceedings of the Eastern Computer Conference.9 Operator at console of IBM 701. The main frame is on the left with the cover removed. The console is in the center. The power frame (with gauges) is on the right. Photo from NOAA. The progression of the word "main frame" can be seen in reports from the Ballistics Research Lab (BRL) that list almost all the computers in the United States. In the 1955 BRL report, most computers were built from cabinets or racks; the phrase "main frame" was only used with the IBM 650, 701, and 704. By 1961, the BRL report shows "main frame" appearing in descriptions of the IBM 702, 705, 709, and 650 RAMAC, as well as the Univac FILE 0, FILE I, RCA 501, READIX, and Teleregister Telefile. This shows that the use of "main frame" was increasing, but still mostly an IBM term. The physical box of a minicomputer or microcomputer In modern usage, mainframes are distinct from minicomputers or microcomputers. But until the 1980s, the word "mainframe" could also mean the main physical part of a minicomputer or microcomputer. For instance, a "minicomputer mainframe" was not a powerful minicomputer, but simply the main part of a minicomputer.13 For example, the PDP-11 is an iconic minicomputer, but DEC discussed its "mainframe."14 Similarly, the desktop-sized HP 2115A and Varian Data 620i computers also had mainframes.15 As late as 1981, the book Mini and Microcomputers mentioned "a minicomputer mainframe." "Mainframes for Hobbyists" on the front cover of Radio-Electronics, Feb 1978. Even microcomputers had a mainframe: the cover of Radio Electronics (1978, above) stated, "Own your own Personal Computer: Mainframes for Hobbyists", using the definition below. An article "Introduction to Personal Computers" in Radio Electronics (Mar 1979) uses a similar meaning: "The first choice you will have to make is the mainframe or actual enclosure that the computer will sit in." The popular hobbyist magazine BYTE also used "mainframe" to describe a microprocessor's box in the 1970s and early 1980s16. BYTE sometimes used the word "mainframe" both to describe a large IBM computer and to describe a home computer box in the same issue, illustrating that the two distinct meanings coexisted. Definition from Radio-Electronics: main-frame n: COMPUTER; esp: a cabinet housing the computer itself as distinguished from peripheral devices connected with it: a cabinet containing a motherboard and power supply intended to house the CPU, memory, I/O ports, etc., that comprise the computer itself. Main frame synonymous with CPU Words often change meaning through metonymy, where a word takes on the meaning of something closely associated with the original meaning.
Through this process, "main frame" shifted from the physical frame (as a box) to the functional contents of the frame, specifically the central processing unit.17 The earliest instance that I could find of the "main frame" being equated with the central processing unit was in 1955. Survey of Data Processors stated: "The central processing unit is known by other names; the arithmetic and ligical [sic] unit, the main frame, the computer, etc. but we shall refer to it, usually, as the central processing unit." A similar definition appeared in Radio Electronics (June 1957, p37): "These arithmetic operations are performed in what is called the arithmetic unit of the machine, also sometimes referred to as the 'main frame.'" The US Department of Agriculture's Glossary of ADP Terminology (1960) uses the definition: "MAIN FRAME - The central processor of the computer system. It contains the main memory, arithmetic unit and special register groups." I'll mention that "special register groups" is nonsense that was repeated for years.18 This definition was reused and extended in the government's Automatic Data Processing Glossary, published in 1962 "for use as an authoritative reference by all officials and employees of the executive branch of the Government" (below). This definition was reused in many other places, notably the Oxford English Dictionary.19 Definition from Bureau of the Budget: frame, main, (1) the central processor of the computer system. It contains the main storage, arithmetic unit and special register groups. Synonymous with (CPU) and (central processing unit). (2) All that portion of a computer exclusive of the input, output, peripheral and in some instances, storage units. By the early 1980s, defining a mainframe as the CPU had become obsolete. IBM stated that "mainframe" was a deprecated term for "processing unit" in the Vocabulary for Data Processing, Telecommunications, and Office Systems (1981); the American National Dictionary for Information Processing Systems (1982) was similar. Computers and Business Information Processing (1983) bluntly stated: "According to the official definition, 'mainframe' and 'CPU' are synonyms. Nobody uses the word mainframe that way." Equating the mainframe and the CPU led to a semantic conflict in the 1970s, when the CPU became a microprocessor chip rather than a large box. For the most part, this was resolved by breaking apart the definitions of "mainframe" and "CPU", with the mainframe being the computer or class of computers, while the CPU became the processor chip. However, some non-American usages resolved the conflict by using "CPU" to refer to the box/case/tower of a PC. (See discussion [here](https://news.ycombinator.com/item?id=21336515) and [here](https://superuser.com/questions/1198006/is-it-correct-to-say-that-main-memory-ram-is-a-part-of-cpu).) Mainframe vs. peripherals Rather than defining the mainframe as the CPU, some dictionaries defined the mainframe in opposition to the "peripherals", the computer's I/O devices.
The two definitions are essentially the same, but have a different focus.20 One example is the IFIP-ICC Vocabulary of Information Processing (1966) which defined "central processor" and "main frame" as "that part of an automatic data processing system which is not considered as peripheral equipment." Computer Dictionary (1982) had the definition "main frame—The fundamental portion of a computer, i.e. the portion that contains the CPU and control elements of a computer system, as contrasted with peripheral or remote devices usually of an input-output or memory nature." One reason for this definition was that computer usage was billed for mainframe time, while other tasks such as printing results could save money by taking place directly on the peripherals without using the mainframe itself.21 A second reason was that the mainframe vs. peripheral split mirrored the composition of the computer industry, especially in the late 1960s and 1970s. Computer systems were built by a handful of companies, led by IBM. Compatible I/O devices and memory were built by many other companies that could sell them at a lower cost than IBM.22 Publications about the computer industry needed convenient terms to describe these two industry sectors, and they often used "mainframe manufacturers" and "peripheral manufacturers." Main Frame or Mainframe? An interesting linguistic shift is from "main frame" as two independent words to a compound word: either hyphenated "main-frame" or the single word "mainframe." This indicates the change from "main frame" being a type of frame to "mainframe" being a new concept. The earliest instance of hyphenated "main-frame" that I found was from 1959 in IBM Information Retrieval Systems Conference. "Mainframe" as a single, non-hyphenated word appears the same year in Datamation, mentioning the mainframe of the NEAC2201 computer. In 1962, the IBM 7090 Installation Instructions refer to a "Mainframe Diag[nostic] and Reliability Program." (Curiously, the document also uses "main frame" as two words in several places.) The 1962 book Information Retrieval Management discusses how much computer time document queries can take: "A run of 100 or more machine questions may require two to five minutes of mainframe time." This shows that by 1962, "main frame" had semantically shifted to a new word, "mainframe." The rise of the minicomputer and how the "mainframe" became a class of computers So far, I've shown how "mainframe" started as a physical frame in the computer, and then was generalized to describe the CPU. But how did "mainframe" change from being part of a computer to being a class of computers? This was a gradual process, largely happening in the mid-1970s as the rise of the minicomputer and microcomputer created a need for a word to describe large computers. Although microcomputers, minicomputers, and mainframes are now viewed as distinct categories, this was not the case at first. For instance, a 1966 computer buyer's guide lumps together computers ranging from desk-sized to 70,000 square feet.23 Around 1968, however, the term "minicomputer" was created to describe small computers.
The story is that the head of DEC in England created the term, inspired by the miniskirt and the Mini Minor car.24 While minicomputers had a specific name, larger computers did not.25 Gradually in the 1970s "mainframe" came to be a separate category, distinct from "minicomputer."2627 An early example is Datamation (1970), describing systems of various sizes: "mainframe, minicomputer, data logger, converters, readers and sorters, terminals." The influential business report EDP first split mainframes from minicomputers in 1972.28 The line between minicomputers and mainframes was controversial, with articles such as Distinction Helpful for Minis, Mainframes and Micro, Mini, or Mainframe? Confusion persists (1981) attempting to clarify the issue.29 With the development of the microprocessor, computers became categorized as mainframes, minicomputers or microcomputers. For instance, a 1975 Computerworld article discussed how the minicomputer competes against the microcomputer and mainframes. Adam Osborne's An Introduction to Microcomputers (1977) described computers as divided into mainframes, minicomputers, and microcomputers by price, power, and size. He pointed out the large overlap between categories and avoided specific definitions, stating that "A minicomputer is a minicomputer, and a mainframe is a mainframe, because that is what the manufacturer calls it."32 In the late 1980s, computer industry dictionaries started defining a mainframe as a large computer, often explicitly contrasted with a minicomputer or microcomputer. By 1990, they mentioned the networked aspects of mainframes.33 IBM embraces the mainframe label Even though IBM is almost synonymous with "mainframe" now, IBM avoided marketing use of the word for many years, preferring terms such as "general-purpose computer."35 IBM's book Planning a Computer System (1962) repeatedly referred to "general-purpose computers" and "large-scale computers", but never used the word "mainframe."34 The announcement of the revolutionary System/360 (1964) didn't use the word "mainframe"; it was called a general-purpose computer system. The announcement of the System/370 (1970) discussed "medium- and large-scale systems." The System/32 introduction (1977) said, "System/32 is a general purpose computer..." The 1982 announcement of the 3804, IBM's most powerful computer at the time, called it a "large scale processor" not a mainframe. IBM started using "mainframe" as a marketing term in the mid-1980s. For example, the 3270 PC Guide (1986) refers to "IBM mainframe computers." An IBM 9370 Information System brochure (c. 1986) says the system was "designed to provide mainframe power." IBM's brochure for the 3090 processor (1987) called them "advanced general-purpose computers" but also mentioned "mainframe computers." A System 390 brochure (c. 1990) discussed "entry into the mainframe class." The 1990 announcement of the ES/9000 called them "the most powerful mainframe systems the company has ever offered." The IBM System/390: "The excellent balance between price and performance makes entry into the mainframe class an attractive proposition." IBM System/390 Brochure By 2000, IBM had enthusiastically adopted the mainframe label: the z900 announcement used the word "mainframe" six times, calling it the "reinvented mainframe." In 2003, IBM announced "The Mainframe Charter", describing IBM's "mainframe values" and "mainframe strategy." Now, IBM has retroactively applied the name "mainframe" to their large computers going back to 1959 (link), (link). 
Mainframes and the general public While "mainframe" was a relatively obscure computer term for many years, it became widespread in the 1980s. The Google Ngram graph below shows the popularity of "microcomputer", "minicomputer", and "mainframe" in books.36 The terms became popular during the late 1970s and 1980s. The popularity of "minicomputer" and "microcomputer" roughly mirrored the development of these classes of computers. Unexpectedly, even though mainframes were the earliest computers, the term "mainframe" peaked later than the other types of computers. N-gram graph from Google Books Ngram Viewer. Dictionary definitions I studied many old dictionaries to see when the word "mainframe" showed up and how they defined it. To summarize, "mainframe" started to appear in dictionaries in the late 1970s, first defining the mainframe in opposition to peripherals or as the CPU. In the 1980s, the definition gradually changed to the modern definition, with a mainframe distinguished as a large, fast, and often centralized system. These definitions were roughly a decade behind industry usage, which switched to the modern meaning in the 1970s. The word didn't appear in older dictionaries, such as the Random House College Dictionary (1968) and Merriam-Webster (1974). The earliest definition I could find was in the supplement to Webster's International Dictionary (1976): "a computer and esp. the computer itself and its cabinet as distinguished from peripheral devices connected with it." Similar definitions appeared in Webster's New Collegiate Dictionary (1976, 1980). A CPU-based definition appeared in Random House College Dictionary (1980): "the device within a computer which contains the central control and arithmetic units, responsible for the essential control and computational functions. Also called central processing unit." The Random House Dictionary (1978, 1988 printing) was similar. The American Heritage Dictionary (1982, 1985) combined the CPU and peripheral approaches: "mainframe. The central processing unit of a computer exclusive of peripheral and remote devices." The modern definition as a large computer appeared alongside the old definition in Webster's Ninth New Collegiate Dictionary (1983): "mainframe (1964): a computer with its cabinet and internal circuits; also: a large fast computer that can handle multiple tasks concurrently." Only the modern definition appears in The New Merriam-Webster Dictionary (1989): "large fast computer", while Webster's Unabridged Dictionary of the English Language (1989) had: "mainframe. a large high-speed computer with greater storage capacity than a minicomputer, often serving as the central unit in a system of smaller computers. [MAIN + FRAME]." Random House Webster's College Dictionary (1991) and Random House College Dictionary (2001) had similar definitions. The Oxford English Dictionary is the principal historical dictionary, so it is interesting to see its view. The 1989 OED gave historical definitions as well as defining mainframe as "any large or general-purpose computer, esp. one supporting numerous peripherals or subordinate computers." It has seven historical examples from 1964 to 1984; the earliest is the 1964 Honeywell Glossary. It quotes a 1970 Dictionary of Computers as saying that the word "Originally implied the main framework of a central processing unit on which the arithmetic unit and associated logic circuits were mounted, but now used colloquially to refer to the central processor itself."
The OED also cited a Hewlett-Packard ad from 1974 that used the word "mainframe", but I consider this a mistake as the usage is completely different.15 Encyclopedias A look at encyclopedias shows that the word "mainframe" started appearing in discussions of computers in the early 1980s, later than in dictionaries. At the beginning of the 1980s, many encyclopedias focused on large computers, without using the word "mainframe", for instance, The Concise Encyclopedia of the Sciences (1980) and World Book (1980). The word "mainframe" started to appear in supplements such as Britannica Book of the Year (1980) and World Book Year Book (1981), at the same time as they started discussing microcomputers. Soon encyclopedias were using the word "mainframe", for example, Funk & Wagnalls Encyclopedia (1983), Encyclopedia Americana (1983), and World Book (1984). By 1986, even the Doubleday Children's Almanac showed a "mainframe computer." Newspapers I examined old newspapers to track the usage of the word "mainframe." The graph below shows the usage of "mainframe" in newspapers. The curve shows a rise in popularity through the 1980s and a steep drop in the late 1990s. The newspaper graph roughly matches the book graph above, although newspapers show a much steeper drop in the late 1990s. Perhaps mainframes aren't in the news anymore, but people still write books about them. Newspaper usage of "mainframe." Graph from newspapers.com from 1975 to 2010 shows usage started growing in 1978, picked up in 1984, and peaked in 1989 and 1997, with a large drop in 2001 and after (y2k?). The first newspaper appearances were in classified ads seeking employees, for instance, a 1960 ad in the San Francisco Examiner for people "to monitor and control main-frame operations of electronic computers...and to operate peripheral equipment..." and a (sexist) 1966 ad in the Philadelphia Inquirer for "men with Digital Computer Bkgrnd [sic] (Peripheral or Mainframe)."37 By 1970, "mainframe" started to appear in news articles, for example, "The computer can't work without the mainframe unit." By 1971, the usage increased with phrases such as "mainframe central processor" and "'main-frame' computer manufacturers". 1972 had usages such as "the mainframe or central processing unit is the heart of any computer, and does all the calculations". A 1975 article explained "'Mainframe' is the industry's word for the computer itself, as opposed to associated items such as printers, which are referred to as 'peripherals.'" By 1980, minicomputers and microcomputers were appearing: "All hardware categories-mainframes, minicomputers, microcomputers, and terminals" and "The mainframe and the minis are interconnected." By 1985, the mainframe was a type of computer, not just the CPU: "These days it's tough to even define 'mainframe'. One definition is that it has for its electronic brain a central processor unit (CPU) that can handle at least 32 bits of information at once. ... A better distinction is that mainframes have numerous processors so they can work on several jobs at once." Articles also discussed "the micro's challenge to the mainframe" and asked, "buy a mainframe, rather than a mini?" 
By 1990, descriptions of mainframes became florid: "huge machines laboring away in glass-walled rooms", "the big burner which carries the whole computing load for an organization", "behemoth data crunchers", "the room-size machines that dominated computing until the 1980s", "the giant workhorses that form the nucleus of many data-processing centers", "But it is not raw central-processing-power that makes a mainframe a mainframe. Mainframe computers command their much higher prices because they have much more sophisticated input/output systems." Conclusion After extensive searches through archival documents, I found usages of the term "main frame" dating back to 1952, much earlier than previously reported. In particular, the introduction of frames to package the IBM 701 computer led to the use of the word "main frame" for that computer and later ones. The term went through various shades of meaning and remained fairly obscure for many years. In the mid-1970s, the term started describing a large computer, essentially its modern meaning. In the 1980s, the term escaped the computer industry and appeared in dictionaries, encyclopedias, and newspapers. After peaking in the 1990s, the term declined in usage (tracking the decline in mainframe computers), but the term and the mainframe computer both survive. Two factors drove the popularity of the word "mainframe" in the 1980s with its current meaning of a large computer. First, the terms "microcomputer" and "minicomputer" led to linguistic pressure for a parallel term for large computers. For instance, the business press needed a word to describe IBM and other large computer manufacturers. While "server" is the modern term, "mainframe" easily filled the role back then and was nicely alliterative with "microcomputer" and "minicomputer."38 Second, up until the 1980s, the prototype meaning for "computer" was a large mainframe, typically IBM.39 But as millions of home computers were sold in the early 1980s, the prototypical "computer" shifted to smaller machines. This left a need for a term for large computers, and "mainframe" filled that need. In other words, if you were talking about a large computer in the 1970s, you could say "computer" and people would assume you meant a mainframe. But if you said "computer" in the 1980s, you needed to clarify if it was a large computer. The word "mainframe" is almost 75 years old and both the computer and the word have gone through extensive changes in this time. The "death of the mainframe" has been proclaimed for well over 30 years but mainframes are still hanging on. Who knows what meaning "mainframe" will have in another 75 years? Follow me on Bluesky (@righto.com) or RSS. (I'm no longer on Twitter.) Thanks to the Computer History Museum and archivist Sara Lott for access to many documents. Notes and References The Computer History Museum states: "Why are they called “Mainframes”? Nobody knows for sure. There was no mainframe “inventor” who coined the term. Probably “main frame” originally referred to the frames (designed for telephone switches) holding processor circuits and main memory, separate from racks or cabinets holding other components. Over time, main frame became mainframe and came to mean 'big computer.'" (Based on my research, I don't think telephone switches have any connection to computer mainframes.) Several sources explain that the mainframe is named after the frame used to construct the computer. 
The Jargon File has a long discussion, stating that the term "originally referring to the cabinet containing the central processor unit or ‘main frame’." Ken Uston's Illustrated Guide to the IBM PC (1984) has the definition "MAIN FRAME A large, high-capacity computer, so named because the CPU of this kind of computer used to be mounted on a frame." IBM states that mainframe "Originally referred to the central processing unit of a large computer, which occupied the largest or central frame (rack)." The Microsoft Computer Dictionary (2002) states that the name mainframe "is derived from 'main frame', the cabinet originally used to house the processing unit of such computers." Some discussions of the origin of the word "mainframe" are here, here, here, here, and here. The phrase "main frame" in non-computer contexts has a very old but irrelevant history, describing many things that have a frame. For example, it appears in thousands of patents from the 1800s, including drills, saws, a meat cutter, a cider mill, printing presses, and corn planters. This shows that it was natural to use the phrase "main frame" when describing something constructed from frames. Telephony uses a Main distribution frame or "main frame" for wiring, going back to 1902. Some people claim that the computer use of "mainframe" is related to the telephony use, but I don't think they are related. In particular, a telephone main distribution frame looks nothing like a computer mainframe. Moreover, the computer use and the telephony use developed separately; if the computer use started in, say, Bell Labs, a connection would be more plausible. IBM patents with "main frame" include a scale (1922), a card sorter (1927), a card duplicator (1929), and a card-based accounting machine (1930). IBM's incidental uses of "main frame" are probably unrelated to modern usage, but they are a reminder that punch card data processing started decades before the modern computer. ↩ It is unclear why the IBM 701 installation manual is dated August 27, 1952 but the drawing is dated 1953. I assume the drawing was updated after the manual was originally produced. ↩ This footnote will survey the construction techniques of some early computers; the key point is that building a computer on frames was not an obvious technique. ENIAC (1945), the famous early vacuum tube computer, was constructed from 40 panels forming three walls filling a room (ref, ref). EDVAC (1949) was built from large cabinets or panels (ref) while ORDVAC and CLADIC (1949) were built on racks (ref). One of the first commercial computers, UNIVAC 1 (1951), had a "Central Computer" organized as bays, divided into three sections, with tube "chassis" plugged in (ref ). The Raytheon computer (1951) and Moore School Automatic Computer (1952) (ref) were built from racks. The MONROBOT VI (1955) was described as constructed from the "conventional rack-panel-cabinet form" (ref). ↩ The size and construction of early computers often made it difficult to install or move them. The early computer ENIAC required 9 months to move from Philadelphia to the Aberdeen Proving Ground. For this move, the wall of the Moore School in Philadelphia had to be partially demolished so ENIAC's main panels could be removed. In 1959, moving the SWAC computer required disassembly of the computer and removing one wall of the building (ref). When moving the early computer JOHNNIAC to a different site, the builders discovered the computer was too big for the elevator. 
They had to raise the computer up the elevator shaft without the elevator (ref). This illustrates the benefits of building a computer from moveable frames. ↩ The IBM 701's main frame was called the Electronic Analytical Control Unit in external documentation. ↩ The 701 installation manual (1952) has a frame arrangement diagram showing the dimensions of the various frames, along with a drawing of the main frame, and power usage of the various frames. Service documentation (1953) refers to "main frame adjustments" (page 74). The 700 Series Data Processing Systems Component Circuits document (1955-1959) lists various types of frames in its abbreviation list (below) Abbreviations used in IBM drawings include MF for main frame. Also note CF for core frame, and DF for drum frame, From 700 Series Data Processing Systems Component Circuits (1955-1959). When repairing an IBM 701, it was important to know which frame held which components, so "main frame" appeared throughout the engineering documents. For instance, in the schematics, each module was labeled with its location; "MF" stands for "main frame." Detail of a 701 schematic diagram. "MF" stands for "main frame." This diagram shows part of a pluggable tube module (type 2891) in mainframe panel 3 (MF3) section J, column 29. The blocks shown are an AND gate, OR gate, and Cathode Follower (buffer). From System Drawings 1.04.1. The "main frame" terminology was used in discussions with customers. For example, notes from a meeting with IBM (April 8, 1952) mention "E. S. [Electrostatic] Memory 15 feet from main frame" and list "main frame" as one of the seven items obtained for the $15,000/month rental cost. ↩ For more information on how the IBM 701 was designed to fit on elevators and through doorways, see Building IBM: Shaping an Industry and Technology page 170, The Interface: IBM and the Transformation of Corporate Design page 69. This is also mentioned in "Engineering Description of the IBM Type 701 Computer", Proceedings of the IRE Oct 1953, page 1285. ↩ Many early systems used "central computer" to describe the main part of the computer, perhaps more commonly than "main frame." An early example is the "central computer" of the Elecom 125 (1954). The Digital Computer Newsletter (Apr 1955) used "central computer" several times to describe the processor of SEAC. The 1961 BRL report shows "central computer" being used by Univac II, Univac 1107, Univac File 0, DYSEAC and RCA Series 300. The MIT TX-2 Technical Manual (1961) uses "central computer" very frequently. The NAREC glossary (1962) defined "central computer. That part of a computer housed in the main frame." ↩ This footnote lists some other early computers that used the term "main frame." The October 1956 Digital Computer Newsletter mentions the "main frame" of the IBM NORC. Digital Computer Newsletter (Jan 1959) discusses using a RAMAC disk drive to reduce "main frame processing time." This document also mentions the IBM 709 "main frame." The IBM 704 documentation (1958) says "Each DC voltage is distributed to the main frame..." (IBM 736 reference manual) and "Check the air filters in each main frame unit and replace when dirty." (704 Central Processing Unit). The July 1962 Digital Computer Newsletter discusses the LEO III computer: "It has been built on the modular principle with the main frame, individual blocks of storage, and input and output channels all physically separate." 
The article also mentions that the new computer is more compact with "a reduction of two cabinets for housing the main frame." The IBM 7040 (1964) and IBM 7090 (1962) were constructed from multiple frames, including the processing unit called the "main frame."11 Machines in IBM's System/360 line (1964) were built from frames; some models had a main frame, power frame, wall frame, and so forth, while other models simply numbered the frames sequentially.12 ↩ The 1952 JOHNNIAC progress report is quoted in The History of the JOHNNIAC. This memorandum was dated August 8, 1952, so it is the earliest citation that I found. The June 1953 memorandum also used the term, stating, "The main frame is complete." ↩ A detailed description of IBM's frame-based computer packaging is in Standard Module System Component Circuits pages 6-9. This describes the SMS-based packaging used in the IBM 709x computers, the IBM 1401, and related systems as of 1960. ↩ IBM System/360 computers could have many frames, so they were usually given sequential numbers. The Model 85, for instance, had 12 frames for the processor and four megabytes of memory in 18 frames (at over 1000 pounds each). Some of the frames had descriptive names, though. The Model 40 had a main frame (CPU main frame, CPU frame), a main storage logic frame, a power supply frame, and a wall frame. The Model 50 had a CPU frame, power frame, and main storage frame. The Model 75 had a main frame (consisting of multiple physical frames), storage frames, channel frames, central processing frames, and a maintenance console frame. The compact Model 30 consisted of a single frame, so the documentation refers to the "frame", not the "main frame." For more information on frames in the System/360, see 360 Physical Planning. The Architecture of the IBM System/360 paper refers to the "main-frame hardware." ↩ A few more examples that discuss the minicomputer's mainframe, its physical box: A 1970 article discusses the mainframe of a minicomputer (as opposed to the peripherals) and contrasts minicomputers with large scale computers. A 1971 article on minicomputers discusses "minicomputer mainframes." Computerworld (Jan 28, 1970, p59) discusses minicomputer purchases: "The actual mainframe is not the major cost of the system to the user." Modern Data (1973) mentions minicomputer mainframes several times. ↩ DEC documents refer to the PDP-11 minicomputer as a mainframe. The PDP-11 Conventions manual (1970) defined: "Processor: A unit of a computing system that includes the circuits controlling the interpretation and execution of instructions. The processor does not include the Unibus, core memory, interface, or peripheral devices. The term 'main frame' is sometimes used but this term refers to all components (processor, memory, power supply) in the basic mounting box." In 1976, DEC published the PDP-11 Mainframe Troubleshooting Guide. The PDP-11 mainframe is also mentioned in Computerworld (1977). ↩ Test equipment manufacturers started using the term "main frame" (and later "mainframe") around 1962, to describe an oscilloscope or other test equipment that would accept plug-in modules. I suspect this is related to the use of "mainframe" to describe a computer's box, but it could be independent. Hewlett-Packard even used the term to describe a solderless breadboard, the 5035 Logic Lab. The Oxford English Dictionary (1989) used HP's 1974 ad for the Logic Lab as its earliest citation of mainframe as a single word. 
It appears that the OED confused this use of "mainframe" with the computer use. 1974 Sci. Amer. Apr. 79: "The laboratory station mainframe has the essentials built-in (power supply, logic state indicators and programmers, and pulse sources to provide active stimulus for the student's circuits)." Is this a mainframe? The HP 5035A Logic Lab was a power supply and support circuitry for a solderless breadboard. HP's ads referred to this as a "laboratory station mainframe." ↩↩ In the 1980s, the use of "mainframe" to describe the box holding a microcomputer started to conflict with "mainframe" as a large computer. For example, Radio Electronics (October 1982) started using the short-lived term "micro-mainframe" instead of "mainframe" for a microcomputer's enclosure. By 1985, Byte magazine had largely switched to the modern usage of "mainframe." But even as late as 1987, a review of the Apple IIGS described one of the system's components as the '"mainframe" (i.e. the actual system box)'. ↩ Definitions of "central processing unit" disagreed as to whether storage was part of the CPU, part of the main frame, or something separate. This was largely a consequence of the physical construction of early computers. Smaller computers had memory in the same frame as the processor, while larger computers often had separate storage frames for memory. Other computers had some memory with the processor and some external. Thus, the "main frame" might or might not contain memory, and this ambiguity carried over to definitions of CPU. (In modern usage, the CPU consists of the arithmetic/logic unit (ALU) and control circuitry, but excludes memory.) ↩ Many definitions of mainframe or CPU mention "special register groups", an obscure feature specific to the Honeywell 800 computer (1959). (Processors have registers, special registers are common, and some processors have register groups, but only the Honeywell 800 had "special register groups.") However, computer dictionaries kept using this phrase for decades, even though it doesn't make sense for other computers. I wrote a blog post about special register groups here. ↩ This footnote provides more examples of "mainframe" being defined as the CPU. The Data Processing Equipment Encyclopedia (1961) had a similar definition: "Main Frame: The main part of the computer, i.e. the arithmetic or logic unit; the central processing unit." The 1967 IBM 360 operator's guide defined: "The main frame - the central processing unit and main storage." The Department of the Navy's ADP Glossary (1970): "Central processing unit: A unit of a computer that includes the circuits controlling the interpretation and execution of instructions. Synonymous with main frame." This was a popular definition, originally from the ISO, used by IBM (1979) among others. Funk & Wagnalls Dictionary of Data Processing Terms (1970) defined: "main frame: The basic or essential portion of an assembly of hardware, in particular, the central processing unit of a computer." The American National Standard Vocabulary for Information Processing (1970) defined: "central processing unit: A unit of a computer that includes the circuits controlling the interpretation and execution of instructions. Synonymous with main frame." ↩ Both the mainframe vs. peripheral definition and the mainframe as CPU definition made it unclear exactly what components of the computer were included in the mainframe.
It's clear that the arithmetic-logic unit and the processor control circuitry were included, while I/O devices were excluded, but some components such as memory were in a gray area. It's also unclear if the power supply and I/O interfaces (channels) are part of the mainframe. These distinctions were ignored in almost all of the uses of "mainframe" that I saw. An unusual definition in a Goddard Space Center document (1965, below) partitioned equipment into the "main frame" (the electronic equipment), "peripheral equipment" (electromechanical components such as the printer and tape), and "middle ground equipment" (the I/O interfaces). The "middle ground" terminology here appears to be unique. Also note that computers are partitioned into "super speed", "large-scale", "medium-scale", and "small-scale." Definitions from Automatic Data Processing Equipment, Goddard Space Center, 1965. "Main frame" was defined as "The central processing unit of a system including the hi-speed core storage memory bank. (This is the electronic element.)" ↩ This footnote gives some examples of using peripherals to save the cost of mainframe time. IBM 650 documentation (1956) describes how "Data written on tape by the 650 can be processed by the main frame of the 700 series systems." Univac II Marketing Material (1957) discusses various ways of reducing "main frame time" by, for instance, printing from tape off-line. The USAF Guide for auditing automatic data processing systems (1961) discusses how these "off line" operations make the most efficient use of "the more expensive main frame time." ↩ Peripheral manufacturers were companies that built tape drives, printers, and other devices that could be connected to a mainframe built by IBM or another company. The basis for the peripheral industry was antitrust action against IBM that led to the 1956 Consent Decree. Among other things, the consent decree forced IBM to provide reasonable patent licensing, which allowed other firms to build "plug-compatible" peripherals. The introduction of the System/360 in 1964 produced a large market for peripherals and IBM's large profit margins left plenty of room for other companies. ↩ Computers and Automation, March 1965, categorized computers into five classes, from "Teeny systems" (such as the IBM 360/20) renting for $2000/month, through Small, Medium, and Large systems, up to "Family or Economy Size Systems" (such as the IBM 360/92) renting for $75,000 per month. ↩ The term "minicomputer" was supposedly invented by John Leng, head of DEC's England operations. In the 1960s, he sent back a sales report: "Here is the latest minicomputer activity in the land of miniskirts as I drive around in my Mini Minor", which led to the term becoming popular at DEC. This story is described in The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation (1988). I'd trust the story more if I could find a reference that wasn't 20 years after the fact. ↩ For instance, Computers and Automation (1971) discussed the role of the minicomputer as compared to "larger computers." A 1975 minicomputer report compared minicomputers to their "general-purpose cousins." ↩ This footnote provides more on the split between minicomputers and mainframes. In 1971, Modern Data Products, Systems, Services contained "... will offer mainframe, minicomputer, and peripheral manufacturers a design, manufacturing, and production facility ...." Standard & Poor's Industry Surveys (1972) mentions "mainframes, minicomputers, and IBM-compatible peripherals."
Computerworld (1975) refers to "mainframe and minicomputer systems manufacturers." The 1974 textbook "Information Systems: Technology, Economics, Applications" couldn't decide if mainframes were a part of the computer or a type of computer separate from minicomputers, saying: "Computer mainframes include the CPU and main memory, and in some usages of the term, the controllers, channels, and secondary storage and I/O devices such as tape drives, disks, terminals, card readers, printers, and so forth. However, the equipment for storage and I/O are usually called peripheral devices. Computer mainframes are usually thought of as medium to large scale, rather than mini-computers." Studying U.S. Industrial Outlook reports provides another perspective over time. U.S. Industrial Outlook 1969 divides computers into small, medium-size, and large-scale. Mainframe manufacturers are in opposition to peripheral manufacturers. The same mainframe vs. peripherals opposition appears in U.S. Industrial Outlook 1970 and U.S. Industrial Outlook 1971. The 1971 report also discusses minicomputer manufacturers entering the "maxicomputer market."30 1973 mentions "large computers, minicomputers, and peripherals." U.S. Industrial Outlook 1976 states, "The distinction between mainframe computers, minis, micros, and also accounting machines and calculators should merge into a spectrum." By 1977, the market was separated into "general purpose mainframe computers", "minicomputers and small business computers" and "microprocessors." Family Computing Magazine (1984) had a "Dictionary of Computer Terms Made Simple." It explained that "A Digital computer is either a "mainframe", a "mini", or a "micro." Forty years ago, large mainframes were the only size that a computer could be. They are still the largest size, and can handle more than 100,000,000 instructions per second. PER SECOND! [...] Mainframes are also called general-purpose computers." ↩ In 1974, Congress held antitrust hearings into IBM. The thousand-page report provides a detailed snapshot of the meanings of "mainframe" at the time. For instance, a market analysis report from IDC illustrates the difficulty of defining mainframes and minicomputers in this era (p4952). The "Mainframe Manufacturers" section splits the market into "general-purpose computers" and "dedicated application computers" including "all the so-called minicomputers." Although this section discusses minicomputers, the emphasis is on the manufacturers of traditional mainframes. A second "Plug-Compatible Manufacturers" section discusses companies that manufactured only peripherals. But there's also a separate "Minicomputers" section that focuses on minicomputers (along with microcomputers "which are simply microprocessor-based minicomputers"). My interpretation of this report is the terminology is in the process of moving from "mainframe vs. peripheral" to "mainframe vs. minicomputer." The statement from Research Shareholders Management (p5416) on the other hand discusses IBM and the five other mainframe companies; they classify minicomputer manufacturers separately. (p5425) p5426 mentions "mainframes, small business computers, industrial minicomputers, terminals, communications equipment, and minicomputers." Economist Ralph Miller mentions the central processing unit "(the so-called 'mainframe')" (p5621) and then contrasts independent peripheral manufacturers with mainframe manufacturers (p5622). 
The Computer Industry Alliance refers to mainframes and peripherals in multiple places, and "shifting the location of a controller from peripheral to mainframe", as well as "the central processing unit (mainframe)" (p5099). On page 5290, "IBM on trial: Monopoly tends to corrupt", from Harper's (May 1974), mentions peripherals compatible with "IBM mainframe units—or, as they are called, central processing computers." ↩ The influential business newsletter EDP provides an interesting view on the struggle to separate the minicomputer market from larger computers. Through 1968, they included minicomputers in the "general-purpose computer" category. But in 1969, they split "general-purpose computers" into "Group A, General Purpose Digital Computers" and "Group B, Dedicated Application Digital Computers." These categories roughly corresponded to larger computers and minicomputers, on the (dubious) assumption that minicomputers were used for a "dedicated application." The important thing to note is that in 1969 they did not use the term "mainframe" for the first category, even though with the modern definition it's the obvious term to use. At the time, EDP used "mainframe manufacturer" or "mainframer"31 to refer to companies that manufactured computers (including minicomputers), as opposed to manufacturers of peripherals. In 1972, EDP first mentioned mainframes and minicomputers as distinct types. In 1973, "microcomputer" was added to the categories. As the 1970s progressed, the separation between minicomputers and mainframes became common. However, the transition was not completely smooth; 1973 included a reference to "mainframe shipments (including minicomputers)." To be specific, the EDP Industry Report (Nov. 28, 1969) gave the following definitions of the two groups of computers: Group A—General Purpose Digital Computers: These comprise the bulk of the computers that have been listed in the Census previously. They are character or byte oriented except in the case of the large-scale scientific machines, which have 36, 48, or 60-bit words. The predominant portion (60% to 80%) of these computers is rented, usually for $2,000 a month or more. Higher level languages such as Fortran, Cobol, or PL/1 are the primary means by which users program these computers. Group B—Dedicated Application Digital Computers: This group of computers includes the "mini's" (purchase price below $25,000), the "midi's" ($25,000 to $50,000), and certain larger systems usually designed or used for one dedicated application such as process control, data acquisition, etc. The characteristics of this group are that the computers are usually word oriented (8, 12, 16, or 24-bits per word), the predominant number (70% to 100%) are purchased, and assembly language (at times Fortran) is the predominant means of programming. This type of computer is often sold to an original equipment manufacturer (OEM) for further system integration and resale to the final user. These definitions strike me as rather arbitrary. ↩ In 1981, Computerworld had articles trying to clarify the distinctions between microcomputers, minicomputers, superminicomputers, and mainframes, as the systems started to overlap. One article, Distinction Helpful for Minis, Mainframes, said that minicomputers were generally interactive, while mainframes made good batch machines and network hosts. Microcomputers had up to 512 KB of memory; minis were 16-bit machines with 512 KB to 4 MB of memory, costing up to $100,000.
Superminis were 16- to 32-bit machines with 4 MB to 8 MB of memory, costing up to $200,000 but with less memory bandwidth than mainframes. Finally, mainframes were 32-bit machines with more than 8 MB of memory, costing over $200,000. Another article, Micro, Mini, or Mainframe? Confusion persists, described a microcomputer as using an 8-bit architecture and having fewer peripherals, while a minicomputer had a 16-bit architecture and 48 KB to 1 MB of memory. ↩ The miniskirt in the mid-1960s was shortly followed by the midiskirt and maxiskirt. These terms led to the parallel construction of the terms minicomputer, midicomputer, and maxicomputer. The New York Times had a long article Maxi Computers Face Mini Conflict (April 5, 1970) explicitly making the parallel: "Mini vs. Maxi, the reigning issue in the glamorous world of fashion, is strangely enough also a major point of contention in the definitely unsexy realm of computers." Although midicomputer and maxicomputer terminology didn't catch on the way minicomputer did, they still had significant use (example, midicomputer examples, maxicomputer examples). The miniskirt/minicomputer parallel was done with varying degrees of sexism. One example is Electronic Design News (1969): "A minicomputer. Like the miniskirt, the small general-purpose computer presents the same basic commodity in a more appealing way." ↩ Linguistically, one indication that a new word has become integrated into the language is when it can be extended to form additional new words. One example is the formation of "mainframers", referring to companies that build mainframes. This word was moderately popular in the 1970s to 1990s. It was even used by the Department of Justice in their 1975 action against IBM, where they described the companies in the systems market as the "mainframe companies" or "mainframers." The word is still used today, but usually refers to people with mainframe skills. Other linguistic extensions of "mainframe" include mainframing, unmainframe, mainframed, nonmainframe, and postmainframe. ↩ More examples of the split between microcomputers and mainframes: Softwide Magazine (1978) describes "BASIC versions for micro, mini and mainframe computers." MSC, a disk system manufacturer, had drives "used with many microcomputer, minicomputer, and mainframe processor types" (1980). ↩ Some examples of computer dictionaries referring to mainframes as a size category: Illustrated Dictionary of Microcomputer Terminology (1978) defines "mainframe" as "(1) The heart of a computer system, which includes the CPU and ALU. (2) A large computer, as opposed to a mini or micro." A Dictionary of Minicomputing and Microcomputing (1982) includes the definition of "mainframe" as "A high-speed computer that is larger, faster, and more expensive than the high-end minicomputers. The boundary between a small mainframe and a large mini is fuzzy indeed." The National Bureau of Standards Future Information Technology (1984) defined: "Mainframe is a term used to designate a medium and large scale CPU." The New American Computer Dictionary (1985) defined "mainframe" as "(1) Specifically, the rack(s) holding the central processing unit and the memory of a large computer. (2) More generally, any large computer. 'We have two mainframes and several minis.'" The 1990 ANSI Dictionary for Information Systems (ANSI X3.172-1990) defined: mainframe. A large computer, usually one to which other computers are connected in order to share its resources and computing power.
Microsoft Press Computer Dictionary (1991) defined "mainframe computer" as "A high-level computer designed for the most intensive computational tasks. Mainframe computers are often shared by multiple users connected to the computer via terminals." ISO 2382 (1993) defines a mainframe as "a computer, usually in a computer center, with extensive capabilities and resources to which other computers may be connected so that they can share facilities." The Microsoft Computer Dictionary (2002) had an amusingly critical definition of mainframe: "A type of large computer system (in the past often water-cooled), the primary data processing resource for many large businesses and organizations. Some mainframe operating systems and solutions are over 40 years old and have the capacity to store year values only as two digits." ↩ IBM's book Planning a Computer System (1962) describes how the Stretch computer's circuitry was assembled into frames, with the CPU consisting of 18 frames. The picture below shows how a "frame" was, in fact, constructed from a metal frame. In the Stretch computer, the circuitry (left) could be rolled out of the frame (right). ↩ The term "general-purpose computer" is probably worthy of investigation since it was used in a variety of ways. It is one of those phrases that seems obvious until you think about it more closely. On the one hand, a computer such as the Apollo Guidance Computer can be considered general purpose because it runs a variety of programs, even though the computer was designed for one specific mission. On the other hand, minicomputers were often contrasted with "general-purpose computers" because customers would buy a minicomputer for a specific application, unlike a mainframe, which would be used for a variety of applications. ↩ The n-gram graph is from the Google Books Ngram Viewer. The curves on the graph should be taken with a grain of salt. First, the usage of words in published books is likely to lag behind "real world" usage. Second, the number of usages in the data set is small, especially at the beginning. Nonetheless, the n-gram graph generally agrees with what I've seen looking at documents directly. ↩ More examples of "mainframe" in want ads: A 1966 ad from Western Union in The Arizona Republic looking for experience "in a systems engineering capacity dealing with both mainframe and peripherals." A 1968 ad in The Minneapolis Star for an engineer with knowledge of "mainframe and peripheral hardware." A 1968 ad from SDS in The Los Angeles Times for an engineer to design "circuits for computer mainframes and peripheral equipment." A 1968 ad in Fort Lauderdale News for "Computer mainframe and peripheral logic design." A 1972 ad in The Los Angeles Times saying "Mainframe or peripheral [experience] highly desired." In most of these ads, the mainframe was in contrast to the peripherals. ↩ A related factor is the development of remote connections from a microcomputer to a mainframe in the 1980s. This led to the need for a word to describe the remote computer, rather than saying "I connected my home computer to the other computer." See the many books and articles on connecting "micro to mainframe." ↩ To see how the prototypical meaning of "computer" changed in the 1980s, I examined the "Computer" article in encyclopedias from that time. The 1980 Concise Encyclopedia of the Sciences discusses a large system with punched-card input. In 1980, the World Book article focused on mainframe systems, starting with a photo of an IBM System/360 Model 40 mainframe.
But in the 1981 supplement and the 1984 encyclopedia, the World Book article opened with a handheld computer game, a desktop computer, and a "large-scale computer." The article described microcomputers, minicomputers, and mainframes. Funk & Wagnalls Encyclopedia (1983) was in the middle of the transition; the article focused on large computers and had photos of IBM machines, but mentioned that future growth was expected in microcomputers. By 1994, the World Book article's main focus was the personal computer, although the mainframe still had a few paragraphs and a photo. This is evidence that the prototypical meaning of "computer" underwent a dramatic shift in the early 1980s from a mainframe to a balance between small and large computers, and then to the personal computer. ↩
The tragedy in Washington D.C. this week was a horrible and shocking incident. There should and will be an investigation into what went wrong here, but every politician and official who spoke at the White House today explicitly blamed DEI programs for this crash. The message may as well
Guillermo posted this recently: What you name your product matters more than people give it credit. It's your first and most universal UI to the world. Designing a good name requires multi-dimensional thinking and is full of edge cases, much like designing software. I first will give credit where credit is due: I spent the first few years thinking "vercel" was phonetically interchangeable with "volcel" and therefore fairly irredeemable as a name, but I've since come around to the name a bit as being (and I do not mean this snarkily or negatively!) generically futuristic, like the name of an amoral corporation in a Philip K. Dick novel. A few folks ask every year where the name for Buttondown came from. The answer is unexciting: its killer feature was Markdown support, so I was trying to find a useful way to play off of that. "Buttondown" evokes, at least for me, the scent and touch of a well-worn OCBD, and that kind of timeless bourgeois aesthetic was what I was going for with the general branding. It was, in retrospect, a good-but-not-great name with two flaws. First, it's a common term: setting Google Alerts (et al) for "buttondown" meant a lot of menswear stuff and not a lot of email stuff. Second, because it's a common term, the .com was an expensive purchase (see Notes on buttondown.com for more on that). We will probably never change the name. It's hard for me to imagine the ROI on a total rebrand like that ever justifying its own cost, and I have a soft spot for it even after all of these years. But all of this is to say: I don't know of any projects that have failed or succeeded because of a name. I would just try to avoid any obvious issues, and follow Seth's advice from 2003.
Mark your calendars for March 21-22, 2025, as we come together for a special Arduino Day to celebrate our 20th anniversary! This free, online event is open to everyone, everywhere. Two decades of creativity and community Over the past 20 years, we have evolved from a simple open-source hardware platform into a global community with […]
Disruptive technologies call for rethinking product design. We must question assumptions about underlying infrastructure and mental models while acknowledging that neither changes overnight. For example, self-driving cars don't need steering wheels. Users direct AI-driven vehicles by giving them a destination address. Keyboards and microphones are better controls for this use case than steering wheels and pedals. But people expect cars to have steering wheels and pedals. Without them, they feel a loss of control – especially if they don't fully trust the new technology. It's not just control. The entire experience can – and perhaps must – change as a result. In a self-driving car, passengers needn't all face forward. Freed from road duties, they can focus on work or leisure during the drive. As a result, designers can rethink the cabin experience from scratch. Such changes don't happen overnight. People are used to having agency. They expect to actively sit behind the wheel with everyone facing forward. It'll take time for people to cede control and relax. Moreover, current infrastructure is designed around these assumptions. For example, road signs point toward oncoming traffic because that's where drivers can see them. Roads transited by robots don't need signals at all. But it's going to be a while before roads are used exclusively by AI-driven vehicles. Human drivers will share roads with them for some time, and humans need signs. The presence of robots might even call for new signaling. It's a liminal situation that a) doesn't yet accommodate the full potential of the new reality while b) still tries to accommodate previous ways of being. The result is awkward "neither fish nor fowl" experiments. My favorite example is a late 19th century product called Horsey Horseless. Patent diagram of Horsey Horseless (1899) via Wikimedia Yes, it's a vehicle with a wooden horse head grafted on front. When I first saw this abomination (in a presentation by my friend Andrew Hinton), I assumed it was meant to appeal to early adopters who couldn't let go of the idea of driving behind a horse. But there was a deeper logic here. At the time, cars shared roads with horse-drawn vehicles. Horsey Horseless was meant to keep motorcars from freaking out the horses. Whether it worked or not doesn't matter. The important thing to note is that people were grappling with the implications of the new technology on the product typology given the existing context. We're in that situation now. Horsey Horseless is a metaphor for an approach to product evolution after the introduction of a disruptive new technology. To wit, designers seek to align the new technology with existing infrastructure and mental models by "grafting a horse." Consider how many current products are "adding AI" by including a button that opens a chatbox alongside familiar UI. Here's Gmail: Gmail's Gemini AI panel. In this case, the email client UI is a sort of horse's head that lets us use the new technology without disrupting our workflows. It's a temporary hack. New products will appear that rethink use cases from the new technology's unique capabilities. Why have a chat panel on an email client when AI can obviate the need for email altogether? Today, email is assumed infrastructure. Other products expect users to have an email address and a client app to access it. That might not always stand. Eventually, such awkward compromises will go away. But it takes time. We're entering that liminal period now.
It’s exciting – even if it produces weird chimeras for a while.