Introduction The SR620 Repairing the SR620 Replacing the Backup 3V Lithium Battery Switching to an External Reference Clock Running Auto-Calibration Oscilloscope Display Mode References Footnotes Introduction A little over a year ago, I found a Stanford Research Systems SR620 universal time interval counter at the Silicon Valley Electronics Flea Market. It had a big sticker “Passes Self-Test” and “Tested 3/9/24” (the day before the flea market) on it, so I took the gamble and spent an ungodly $4001 on it. Luckily, it did work fine, initially at least, but I soon discovered that it sometimes got into some weird behavior after pressing the power-on switch. The SR620 The SR620 was designed sometime in the mid-1980s. Mine has a rev C PCB with a date of July 1988, 37 years old! The manual lists 1989, 2006, 2019 and 2025 revisions. I don’t know if there were any major changes along the way, but I doubt it. It’s still for sale on the SRS website, starting at $5150. The specifications are still pretty decent, especially for a hobbyist: 25 ps single shot time resolution 1.3 GHz frequency range 11-digit resolution over a 1 s measurement interval The SR620 is not perfect; one notable issue is its thermal design. It simply doesn’t have enough ventilation holes, the heat-generating power regulators are located close to the high precision time-to-analog converters, and the temperature sensor for the fan is inexplicably placed right next to the fan, which is not close at all to the power regulators. The Signal Path has an SR620 repair video that talks about this. Repairing the SR620 You can see the power-on behavior in the video below: Of note is that lightly touching the power button changes the behavior and sometimes makes it get all the way through the power-on sequence. This made me hopeful that the switch itself was bad, something that should be easy to fix. 
Unlike my still broken SRS DG535, another flea market buy with the most cursed assembly, the SR620 is a dream to work on: 4 side screws are all it takes to remove the top of the case and have access to all the components from the top. Another 4 screws to remove the bottom panel and you have access to the solder side of the PCB. You can desolder components without lifting the PCB out of the enclosure. Like on my HP 5370A, the power switch of the SR620 selects between power on and standby mode. The SR620 enables the 15V rail at all times to keep a local TCXO or OCXO warmed up. The power switch is located at the right of the front panel. It has 2 black and 2 red wires. When the unit is powered on, the 2 black wires and the 2 red wires are connected to each other. To make sure that the switch itself was the problem, I soldered the wires together to create a permanent connection: After this, the SR620 worked totally fine! Let’s replace the switch. Unscrew 4 more screws and pull the knobs off the 3 front potentiometers and the power switch to get rid of the front panel: A handful of additional screws to remove the front PCB from the chassis, and you have access to the switch: The switch is an ITT Schadow NE15 T70. Unsurprisingly, these are not produced anymore, but you can still find them on eBay. I paid $7.50 + shipping; the price increased to $9.50 immediately after that. According to this EEVblog forum post, this switch on Digikey is a suitable replacement, but I didn’t try it. The old switch (bottom) has 6 contact points vs only 4 on the new one (top), but that wasn’t an issue since only 4 were used. Both switches also have a metal screw plate, but they were oriented differently. However, you can easily reconfigure the screw plate by straightening 4 metal prongs. If you buy the new switch from Digikey and it doesn’t come with the metal screw plate, you should be able to transplant the plate from the broken switch to the new one just the same. 
To get the switch through the narrow hole of the case, you need to cut off the pins on one side of the switch and bend the contact points a bit. After soldering the wires back in place, the SR620 powered on reliably. Switch replacement completed! Replacing the Backup 3V Lithium Battery The SR620 has a simple microcontroller system consisting of a Z8800 CPU, 64 KB of EPROM and a 32 KB SRAM. In addition to program data, the SRAM also contains calibration and settings data, kept alive by a 3V lithium backup battery when the unit is powered off. I previously replaced one such battery in my HP 3478A multimeter. These batteries last almost forever, but mine had a 1987 date code and 38 years is really pushing things, so I replaced it with this new one from Digikey. The 1987 version of this battery had 1 pin on each side; on the new ones, the + side has 2 pins, so you need to cut one of those pins and install the battery slightly crooked back onto the PCB. When you first power up the SR620 after replacing the battery, you might see “Test Error 3” on the display. According to the manual: Test error 3 is usually “self-healing”. The instrument settings will be returned to their default values and factory calibration data will be recalled from ROM. Test Error 3 will recur if the Lithium battery or RAM is defective. After power cycling the device again, the test error was gone and everything worked, but with a precision that was slightly lower than before: before the battery replacement, when feeding the 10 MHz output reference clock into channel A and measuring frequency with a 1 s gate time, I’d get a read-out of 10,000,000.000N Hz. In other words: around a milli-Hz accuracy. After the replacement, the accuracy was about an order of magnitude worse. That’s just not acceptable! The reason for this loss in accuracy is that the auto-calibration parameters were lost. Luckily, this is easy to fix. 
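To put those read-outs in perspective, here's the arithmetic as a quick Python sketch (the ~1 mHz and ~10 mHz error figures are the ones from my measurements above):

```python
# Fractional frequency error of the SR620 read-out, before and after
# the battery swap (error figures from the measurements above).
f_ref = 10e6  # 10 MHz reference fed back into channel A

def fractional_error(err_hz: float) -> float:
    """Frequency error as a dimensionless fraction of the reference."""
    return err_hz / f_ref

before = fractional_error(1e-3)  # ~1 mHz off before the battery swap
after = fractional_error(1e-2)   # an order of magnitude worse after

print(f"before: {before:.1e}, after: {after:.1e}")
```

So the lost calibration data degraded the instrument from a 1e-10 to roughly a 1e-9 fractional error, which is exactly the kind of difference auto-calibration is supposed to take care of.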
Switching to an External Reference Clock My SR620 has the cheaper TCXO option, which gives frequency measurement results that are about one order of magnitude less accurate than using an external OCXO based reference clock. So I always switch to an external reference clock. The SR620 doesn’t do that automatically; you need to manually change it in the settings, as follows: SET -> “ctrl cal out scn” SEL -> “ctrl cal out scn” SET -> “auto cal” SET -> “cloc source int” Scale Down arrow -> “cloc source rear” SET -> “cloc Fr 10000000” SET If you have a 5 MHz reference clock, use the down or up arrow to switch between 10000000 and 5000000. Running Auto-Calibration You can rerun auto-calibration manually from the front panel without opening up the device with this sequence: SET -> “ctrl cal out scn” SEL -> “ctrl cal out scn” SET -> “auto cal” START The auto-calibration will take around 2 minutes. Only run it once the device has been running for a while to make sure all components have warmed up and are at a stable temperature. The manual recommends a 30 minute warmup time. After doing auto-calibration, feeding back the reference clock into channel A and measuring frequency with a 1 s gate time gave me a result that oscillated around 10 MHz, with the mHz digits always 000 or 999.2 It’s possible to fine-tune the SR620 beyond the auto-calibration settings. One reason why one might want to do this is to correct for drift of the internal oscillator. To enable this kind of tuning, you need to move a jumper inside the case. The time-nuts email list has a couple of discussions about this; here is one such post. Page 69 of the SR620 manual has detailed calibration instructions. Oscilloscope Display Mode When the 16 7-segment LEDs on the front panel are just not enough, the SR620 has this interesting way of (ab)using an oscilloscope as a general display: it uses XY mode to paint the data. 
I had tried this mode in the past with my Siglent digital oscilloscope, but the result was unreadable: for this kind of rendering, having a CRT beam that lights up all the phosphor from one point to the next is a feature, not a bug. This time, I tried it with an old school analog oscilloscope3: (Click to enlarge) The result is much better on the analog scope, but still very hard to read. When you really need all the data you can get from the SR620, just use the GPIB or RS232 interface. References The Signal Path - TNP #41 - Stanford Research SR620 Universal Time Interval Counter Teardown, Repair & Experiments Some calibration info about the SR620 Fast High Precision Set-up of SR 620 Counter The rest of this page has a bunch of other interesting SR620 related comments. Time-Nuts topics The SR620 is mentioned in tons of threads on the time-nuts email list. Here are just a few interesting posts: This post talks about some thermal design mistakes in the SR620. E.g. the linear regulators and heat sink are placed right next to the TCXO. It also talks about the location of the thermistor inside the fan path, resulting in unstable behavior. This is something Shahriar of The Signal Path fixed by moving the thermistor. This comment mentions that while the TCXO stays powered on in standby, the DAC that sets the control voltage does not, which results in an additional settling time after powering up. The general recommendation is to use an external 10 MHz clock reference. This comment talks about the warm-up time needed depending on the desired accuracy. It also has some graphs. Footnotes This time, the gamble paid off, and the going rate of a good second hand SR620 is quite a bit higher. But I don’t think I’ll ever do this again! ↩ In other words, when fed with the same 10 MHz as the reference clock, the display always shows a number that is either 10,000,000,000x or 9,999,999,xx. ↩ I find it amazing that this scope was calibrated as recently as April 2023. ↩
Introduction Finding the Display Tuning Potentiometers The Result Hardcopy Preview Mode Introduction Less than a week after finishing my TDS 684B analog memory blog post, a TDS 684C landed on my lab bench with a very dim CRT. If you follow the lives of the 3-digit TDS oscilloscope series, you probably know that this is normally a bit of a death sentence for the CRT: after years of use, the cathode ray loses its strength and there’s nothing you can do about it other than replace the CRT with an LCD screen. I was totally ready to go that route, and if I ever need to do it, here are 3 possible LCD upgrade options that I list for later reference: The most common one is to buy a $350 Newscope-T1 LCD display kit by SimmConn Labs. A cheaper hobbyist alternative is to hack something together with a VGA to LVDS interface board and some generic LCD panel, as described in this build report. He uses a VGA LCD Controller Board KYV-N2 V2 with a 7” A070SN02 LCD panel. As I write this, the cost is $75, but I assume this used to be a lot cheaper before tariffs were in place. If you really want to go hard-core, you could make your own interface board with an FPGA that snoops the RAMDAC digital signals and converts them to LVDS, just like the Newscope-T1. There is a whole thread about this on the EEVblog forum. But this blog post is not about installing an LCD panel! Before going that route, you should try to increase the brightness of the CRT by turning a potentiometer on the display board. It sounds like an obvious thing to try, but I didn’t find a lot of references to it online. And in my case, it just worked. Finding the Display Tuning Potentiometers In the Display Assembly Adjustment section of chapter 5 of the TDS 500D, TDS 600C, TDS 700D and TDS 714L Service Manual, page 5-23, you’ll find the instructions on how to change rotation, brightness and contrast. It says to remove the cabinet and then turn some potentiometers, but I just couldn’t find them! They’re supposed to be next to the fan. 
Somewhere around there: Well, I couldn’t see any. It was only the next day, when I was ready to take the whole thing apart, that I noticed these dust covered holes: A few minutes and a vacuum cleaning operation later revealed 5 glorious potentiometers: From left to right: horizontal position rotation vertical position brightness contrast Rotate the last 2 at will and if you’re lucky, your dim CRT will look brand new again. It did for me! The Result The weird colors in the picture above are a photography artifact that’s caused by the Tektronix NuColor display technology: it uses a monochrome CRT with an R/G/B shutter in front of it. You can read more about it in this Hackaday article. In real life, the image looks perfectly fine! Hardcopy Preview Mode If dialing up the brightness doesn’t work and you don’t want to spend money on an LCD upgrade, there is the option of switching the display to Hardcopy mode, like this: [Display] -> [Settings <Color>] -> [Palette] -> [Hardcopy preview] Instead of a black background, you will now get a white one. It made the scope usable before I made the brightness adjustment.
Introduction The TDS600 Series The Acquisition Board Measuring Along the Signal Path A Closer Look at the Noise Issue Conclusion Introduction I have a Tektronix TDS 684B oscilloscope that I bought cheaply at an auction. It has 4 channels, 1 GHz of BW and a sample rate of 5 Gsps. Those are respectable numbers even by today’s standards. It’s also the main reason why I have it: compared to modern oscilloscopes, the other features aren’t nearly as impressive. It can only record 15k samples per channel at a time, for example. But at least the sample rate doesn’t go down when you increase the number of recording channels: it’s 5 Gsps even when all 4 channels are enabled. I’ve always wondered how Tektronix managed to reach such high specifications back in the nineties, so in this blog post I take a quick look at the internals, figure out how it works, and do some measurements along the signal path. The TDS600 Series The first oscilloscopes of the TDS600 series were introduced around 1993. The last one, the TDS694C, was released in 2002. The TDS684 version was from sometime in 1995. The ICs on my TDS 684B have date codes from as early as the first half of 1997. The main characteristic of these scopes was their extreme sample rate for that era, going from 2 Gsps for the TDS620, TDS640 and TDS644, over 5 Gsps for the TDS654, TDS680 and TDS684, to 10 Gsps for the TDS694C, which was developed under the Screamer code name. The oscilloscopes have 2 main boards: the acquisition board contains all the parts from the analog input down to the sample memory as well as some triggering logic. (Click to enlarge) a very busy CPU board does the rest. (Click to enlarge) 2 flat cables and a PCB connect the 2 boards. The interconnect PCB traces go to the memory on the acquisition board. It’s safe to assume that this interface is used for high-speed waveform data transfer while the flat cables are for lower speed configuration and status traffic. 
If you ever remove the interconnection PCB, make sure to put it back with the same orientation. It will fit just fine when rotated 180 degrees but the scope won’t work anymore! The Acquisition Board The TDS 684B has 4 identical channels that can easily be identified. (Click to enlarge) There are 6 major components in the path from input to memory: Analog front-end Hidden under a shielding cover, but you’d expect to find a bunch of relays there to switch between different configurations: AC/DC, 1Meg/50 Ohm termination, … I didn’t open it because it requires disassembling pretty much the whole scope. Signal Conditioner IC(?) This is the device with the glued-on heatsink. I left it in place because there’s no metal attachment latch. Reattaching it would be a pain. Since the acquisition board has a bunch of custom ICs already, chances are this one is custom as well, so knowing the exact part number wouldn’t add a lot of extra info. We can see one differential pair going from the analog front-end into this IC and a second one going from this IC to the next one, an ADG286D. National Semi ADG286D Mystery Chip Another custom chip with unknown functionality. Motorola MC10319DW 8-bit 25 MHz A/D Converter Finally, an off-the-shelf device! But why is it only rated for 25MHz? National Semi ADG303 - A Custom Memory Controller Chip It receives the four 8-bit lanes from the four ADCs on one side and connects to four SRAMs on the other. 4 Alliance AS7C256-15JC SRAMs Each memory has a capacity of 32KB and a 15ns access time, which allows for a maximum clock of 66 MHz. The TDS 684B supports waveform traces of 15k points, so they either only use half of the available capacity or they use some kind of double-buffering scheme. There are four unpopulated memory footprints. In one of my TDS 420A blog posts, I extend the waveform memory by soldering in extra SRAM chips. I’m not aware of a TDS 684B option for additional memory, so I’m not optimistic about the ability to expand its memory. 
There’s also no such grayed-out option in the acquisition menu. When googling for “ADG286D”, I got my answer when I stumbled on this comment on BlueSky, which speculates that it’s an analog memory, probably some kind of CCD FIFO. Analog values are captured at a rate of up to 5 GHz and then shifted out at a much lower speed and fed into the ADC. I later found a few other comments that confirm this theory. Measuring Along the Signal Path Let’s verify this by measuring a few signals on the board with a different scope. The ADC input pins are large enough to attach a Tektronix logic analyzer probe: ADC sampling the signal With a 1 MHz signal and using a 100 Msps sample rate, the input to the ADC looks like this: The input to the ADC is clearly chopped into discrete samples, with a new sample every 120 ns. We can discern a sine wave in the samples, but there’s a lot of noise on the signal too. Meanwhile the TDS 684B CRT shows a nice and clean 1 MHz signal. I haven’t been able to figure out how that’s possible. For some reason, simply touching the clock pin of the ADC with a 1 MOhm oscilloscope probe adds a massive amount of noise to the input signal, but it shows the clock nicely: The ADC clock matches the sample period seen at the input. It’s indeed 8.33 MHz. Acquisition refresh rate The scope only records in bursts. When recording 500, 1000 or 2500 sample points at 100 Msps, it records a new burst every 14 ms, or 70 Hz. When recording 5000 points, the refresh rate drops to 53 Hz. For 15000 points, it drops even lower, to 30 Hz: Sampling burst duration The duration of a sampling burst is always 2 ms, irrespective of the sample rate of the oscilloscope or the number of points acquired! The combination of a 2 ms burst and an 8 MHz sample clock results in 16k samples. So the scope always acquires what’s probably the full contents of the CCD FIFO and throws a large part away when a lower sample length is selected. 
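The numbers from these measurements are easy to cross-check with a quick sketch:

```python
# Cross-checking the burst measurements above: a 2 ms burst clocked
# at one sample every 120 ns works out to ~16k samples, presumably
# the full contents of the CCD FIFO.
burst_s = 2e-3            # measured burst duration
sample_period_s = 120e-9  # measured ADC sample period (~8.33 MHz)
samples_per_burst = burst_s / sample_period_s
assert 16_000 < samples_per_burst < 17_000

# The refresh rate lines up too: one 2 ms burst every 14 ms is ~70 Hz,
# with the remaining ~12 ms presumably spent on readout and processing.
refresh_hz = 1 / 14e-3
assert 70 <= refresh_hz <= 72
```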
Here’s the 1 MHz signal sampled at 100 Msps: And here’s the same signal sampled at 5 Gsps: It looks like the signal doesn’t scan out of the CCD memory in the order it was received, hence the signal discontinuity in the middle. Sampling a 1 GHz signal I increased the input signal from 1 MHz to 1 GHz. Here’s the ADC input at 5 Gsps: With a little bit of effort, you can once again imagine a sine wave in those samples. There’s a periodicity of 5 samples, as one would expect for a 1 GHz to 5 Gsps ratio. The sample rate is still 8.3 MHz. Sampling a 200 MHz signal I also applied a 200 MHz input signal. The period is now ~22 samples, as expected. 200 MHz is low enough to measure with my 350 MHz bandwidth Siglent oscilloscope. To confirm that the ADG286D chip contains the CCD memory, I measured the signal on one of the differential pins going into that chip: And here it is, a nice 200 MHz signal: A Closer Look at the Noise Issue After initially publishing this blog post, I had a discussion on Discord about the noise issue which made me do a couple more measurements. Input connected to ground Here’s what the ADC input looks like when the input of the scope is connected to ground: 2 major observations: there’s a certain amount of repetitiveness to it. there are these major voltage spikes in between each repetition. They are very faint on the scope shot. Let’s zoom in on that: The spikes are still hard to see so I added the arrows, but look how the sample pattern repeats after each spike! The time delay between each spike is ~23.6 us. With a sample period of 120 ns, that converts to a repetitive pattern of ~195 samples. I don’t know why a pattern of 195 samples exists, but it’s clear that each of those 195 locations has a fixed voltage offset. If the scope measures those offsets during calibration, it can subtract them after measurement and get a clean signal out. 50 kHz square wave Next I applied a 50 kHz square wave to the input. 
This frequency was chosen so that, for the selected sample rate, a single period would cover the 15000 sampling points. 2 more observations: the micro-repetitiveness is still there, irrespective of the voltage offset due to the input signal. That means that subtracting the noise should be fine for different voltage inputs. We don’t see a clean square wave outline. It looks like there’s some kind of address interleaving going on. 50 kHz sawtooth wave We can see the interleaving even better when applying a sawtooth waveform that covers one burst: Instead of a clean break from high-to-low somewhere in the middle, there is a transition period where you get both high and low values. This confirms that some kind of interleaving is happening. Conclusion The TDS684B captures input signals at high speed in an analog memory and digitizes them at 8 MHz. The single-ended input to the ADC is noisy, yet the signal looks clean when displayed on the CRT of the scope, likely because the noise pattern is repetitive and predictable. In addition to noise, there’s also an interleaving pattern during the read-out of the analog FIFO contents. The number of samples digitized is always the same, irrespective of the settings in the horizontal acquisition menu. (Not written by ChatGPT, I just like to use bullet points…)
Introduction The Rohde & Schwarz AMIQ Modulation Generator WinIQSim Software Inside the AMIQ The Signal Generation PCB Analog Signal Generation Architecture: Fixed vs Variable DAC Clock Internal Reference Clock Generation DAC Clock Synthesizer I/Q Output Skew Tuning Variable Gain Amplifier Internal Diagnostics Efficient Distribution of Configuration Signals Conclusion References Introduction Every few months, a local company auctions off all kinds of lab, production and test equipment. I shouldn’t be subscribed to their email list but I am, and that’s one way I end up with more stuff that I don’t really need. During a recent auction, I got my hands on a Rohde & Schwarz AMIQ, an I/Q modulation generator, for a grand total of $45. Add to that another 30% for the auction fee and taxes, and you’re still paying much less than what others would pay for a round of golf. But instead of one morning of fun, this thing has the potential to keep me busy for many weekends, so what a deal! (Click to enlarge) A few days after “winning” the auction, I drove to a dark dungeon of a warehouse in San Jose to pick up the loot. The AMIQ has a power on/off button and 3 LEDs, and that’s it in terms of user interface. There are no dials, there’s no display. So without any other options, I simply powered it up and I was immediately greeted by the ominous clicking of a hard drive. I was right: this thing would keep me entertained for at least a little bit! (Click to enlarge) It took a significant amount of effort to restore the machine to its working state. I’ll write about that in future blog posts, but let’s start with an overview of the functionality and a teardown of the R&S AMIQ and then deep dive into some of its analog circuits. AMIQ prices on eBay vary wildly, from $129 to $2600 at the time of writing this. Even if you get one of the higher priced ones, you should expect to get a unit that’s close to failing due to leaking capacitors and a flaky hard drive! 
The Rohde & Schwarz AMIQ Modulation Generator Reduced to an elevator sales pitch, the AMIQ is a 2-channel arbitrary waveform generator (AWG) with a deep sample buffer. That’s it! It has a streaming buffer that feeds samples to 2 14-bit DACs at a sample rate of up to 105 MHz. The two output channels, I and Q, will typically contain quadrature modulation signals that are sent to an RF vector signal generator such as a Rohde & Schwarz SMIQ for the actual high-frequency modulation. In a typical setup, the AMIQ is used to generate the baseband modulated signal and the SMIQ shifts the baseband signal to an RF frequency. Since the AMIQ has no user interface, the waveform data must be provided by an external device. This could be a PC that runs the R&S WinIQSim software or even the SMIQ itself, because it has the ability to control an AMIQ. You can also create your own waveforms and upload them via floppy disk, GPIB or an RS-232 interface using SCPI control commands. Figure 4-1 of the AMIQ Operating Manual has a simplified block diagram. It is pretty straightforward and somewhat similar to the one of my HP 33120A function generator: (Click to enlarge) On the left are 2 blocks that are shared: a clock synthesizer waveform memory And then for each channel: 14-bit D/A converter analog filters output section with amplifier/attenuator differential analog output driver (AMIQ-B2 option) The major blocks are surrounded by a large number of DACs that are used to control everything from the tuning input of the local 10 MHz clock oscillator, gain and offset of the output signals, clock skew between the I and Q signals, and much more. You could do some of this with a modern SDR setup, but the specifications of the AMIQ units are dialed up a notch. Both channels are completely symmetrical to avoid modulation errors at the source. 
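To give an idea of the kind of data you'd prepare for such an upload, here's a minimal sketch that builds a QPSK baseband I/Q pair scaled for a 14-bit DAC. The constellation, oversampling factor and scaling are my own illustrative choices; the actual upload format and SCPI commands are documented in the operating manual.

```python
import numpy as np

# Illustrative sketch: a QPSK baseband I/Q waveform with rectangular
# pulses, of the kind an AWG like the AMIQ would play out.
rng = np.random.default_rng(0)
symbols = rng.integers(0, 4, size=256)      # 2 bits per symbol
phases = np.pi / 4 + symbols * (np.pi / 2)  # 4 constellation points
oversample = 8                              # samples per symbol
i = np.repeat(np.cos(phases), oversample)
q = np.repeat(np.sin(phases), oversample)

# Scale to a signed 14-bit DAC range (-8192..8191).
full_scale = 2**13 - 1
i_dac = np.round(i * full_scale).astype(np.int16)
q_dac = np.round(q * full_scale).astype(np.int16)
```

In a real setup, WinIQSim would additionally apply pulse shaping filters to keep the spectrum contained; rectangular pulses are just the simplest thing that shows the structure of the data.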
If there’s a need to compensate for small delay differences in, for example, the external cables, you can correct for that by changing the skew between the clocks of the output DACs with a precision of 10 ps. Similarly, the DAC sample frequency can be programmed with 32-bit precision. (Click to enlarge) In addition to the main I/Q outputs at the front, there are a bunch of secondary input and output signals: 10 MHz reference input and output sample clock trigger input marker output external filter loopback output and input bit error rate measurement (BER) connector (AMIQ-B1 option) parallel interface with the digital value of the samples that are sent to the DAC (front panel, AMIQ-B3 option) (Click to enlarge) Above those specialty inputs and outputs are the obligatory GPIB interface and a bunch of generic connectors that look suspiciously like the ones you’d find on a turn-of-the-century PC: PS/2 keyboard and mouse Parallel port RS232 USB There are 3 different AMIQ versions: 1110.2003.02: 4M samples 1110.2003.03: 4M samples 1110.2003.04: 16M samples Mine is an AMIQ-04. WinIQSim Software While it is possible to control the AMIQ over RS-232 or GPIB with your own software, you’d have a hard time matching the features of WinIQSim, an R&S Windows application that supports the AMIQ. (Click to enlarge) With WinIQSim, you can select one of the popular communication protocols from the late nineties and early 2000s, fill in digital framing data, apply all kinds of distortions and interferences, compute the I and Q waveforms and send them to the AMIQ. Some of the supported formats include CDMA2000, 802.11a WLAN, TD-SCDMA and more. But you don’t have to use these official protocols: WinIQSim supports any kind of FSK, QPSK, QAM or other common modulation method. You need a license for some of the communication protocols. 
The license is linked to the AMIQ device, not the PC, but the license check is pretty naive, and while I haven’t tried it… yet, the EEVblog forum has discussions about how to enable features yourself. My device only came with a license for IS-95 CDMA. Inside the AMIQ It’s trivial to open up an AMIQ: after removing the 4 feet in the back with a regular Philips screwdriver, you can simply slide off the outer case. It has 2 major subsystems: the top contains all the components of a standard PC (Click to enlarge) the bottom has a signal generation PCB (Click to enlarge) The PCB looks incredibly clean and well laid out, and I love how they printed the names of the different sections on the metal gray shielding plates. We’ll leave the PC system for a future blog post and focus on the signal generation PCB. The Signal Generation PCB Let’s remove the shielding plates to see what’s underneath. I had to drill out one of the screws that attach the plates because the head was stripped. (Did somebody before me already try to repair it?) (Click to enlarge) The left and right sections of the bottom half are perfectly symmetrical, as one would expect for a device that has the ability to tune skew mismatches with a 10 ps precision. Annotated, it looks like this: (Click to enlarge) Rohde & Schwarz recently made the terrible decision to lock all their software and manuals behind an approval-only corporate log-in wall, but luckily some of the most important AMIQ assets can be found online elsewhere, including the operating manual and a service manual that contains the full schematics! Let’s dig a bit deeper into the various aspects of the design. In what follows, I’ll be focusing primarily on the analog aspects of the design. This is a very personal choice: not that the digital sections aren’t important, it’s just that, as a digital design engineer, they’re not particularly interesting to me. By studying the analog sections, I hope to stumble into circuits that I didn’t really know a lot about before. 
Fantastic Schematics Before digging in for real, a word about the schematics: they are fantastic. Each sub-system has a block diagram that is already pretty detailed, with signal names that match the schematics and test points, often with annotations to indicate the voltage or frequency range. Here’s the block diagram of the reference and DAC clock generation section, for example. Schematic page 5 (Click to enlarge) Signals that come from or go to other pages are fully referenced. Look at the SYN_OUT_CLK signal below: Schematic page 10 (Click to enlarge) The signal is also used on page 6, coordinates 1D and 7B, and page 22, coordinate 8A. How cool is that? Signal path test points One of the awesome features of the PCB is the generous amount of test points. We’re not just talking PCB test points against which you can hold your oscilloscope probe, or even header pins, though there are plenty of those too, but full-on SMB connectors. In addition to these SMB connectors, there are also plenty of jumpers that can be used to interrupt the default signal flow and insert your own test signal instead. Analog Signal Generation Architecture: Fixed vs Variable DAC Clock In the HP 33120A, the DAC has a fixed 40 MHz clock. There’s a 16 kB waveform RAM that contains, say, one quarter period of a 100 Hz sine. If you want to send out a sine wave of 200 Hz, instead of sequentially stepping through all the addresses of the waveform RAM, you just skip every other address. One of the benefits of this kind of generation scheme is that you can make do with a fixed frequency analog anti-aliasing filter: the Nyquist frequency is always the same after all. A major disadvantage, however, is that even if the output signal has a bandwidth of only 1 MHz, you still need to feed the DAC at the fixed clock rate. 
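The fixed-clock address-stepping scheme is simple enough to sketch in a few lines (with a full sine period in RAM for simplicity, rather than the quarter period a real instrument might store):

```python
import numpy as np

# Sketch of the fixed-DAC-clock scheme: the DAC clock never changes;
# the output frequency is set purely by the address increment into
# the waveform RAM. Increment 2 skips every other address and so
# doubles the output frequency.
RAM_DEPTH = 16 * 1024
ram = np.sin(2 * np.pi * np.arange(RAM_DEPTH) / RAM_DEPTH)  # one period

def play(increment: int, n_samples: int) -> np.ndarray:
    """Samples sent to the DAC for a given address increment."""
    addresses = (np.arange(n_samples) * increment) % RAM_DEPTH
    return ram[addresses]

base = play(1, RAM_DEPTH)    # one sine period per RAM sweep
double = play(2, RAM_DEPTH)  # two periods in the same time
```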
You could insert a digital upsampling filter between the waveform memory and the DAC, but that requires significant mathematical DSP fire power, or you’d have to increase the depth of the waveform memory. For an arbitrary waveform generator, it makes more sense to run the DAC at whichever clock speed is sufficient to meet the Nyquist requirement of the desired signal and provide a number of different filtering options. The AMIQ has 4 such options: no filter, a 25 MHz filter, a 2.5 MHz filter, or a loopback through an external filter. The DAC sample clock range is huge, from 10 Hz all the way to 105 MHz, though specifications are only guaranteed up to 100 MHz. According to the data sheet, the clock frequency can be set with a precision of 10^-7. Internal Reference Clock Generation Like all professional test and measurement equipment, the AMIQ uses a 10 MHz reference clock that can come from outside or that can be generated locally with a 10 MHz crystal. It’s common for high-end equipment to have an oven controlled crystal oscillator (OCXO), but the AMIQ has a lower spec’ed temperature compensated one (TCXO), a Milliren Technologies 453-0210. If we look at the larger reference clock generation block diagram, we can see something slightly unusual: instead of selecting between the internal TCXO output or the external reference clock input, the internal reference clock always comes from the TCXO (green). Schematic page 5 (Click to enlarge) When the internal clock is selected, the TCXO output frequency can be tuned with an analog signal that comes from the VTCXO TUNE DAC (blue), but when the external reference input is active, the TCXO is phase locked to the external clock. You can see the phase comparator and low pass filter in red. 
The reason for using a PLL with the internal TCXO as the voltage controlled oscillator is probably to ensure that the generated reference clock has the phase noise of the TCXO while tracking the frequency of the external reference clock: a PLL acts as a low-pass filter on the reference clock and as a high-pass filter on the VCO. If the TCXO has better high frequency phase noise than the external reference clock, that makes sense. This is really out of my wheelhouse, so take all of this with a grain of salt… SYN_REF is the output of the internal reference clock generation unit.

DAC Clock Synthesizer

The clock synthesizer creates a highly programmable DAC clock from SYN_REF, the internal reference clock from the previous section. It should come as no surprise that this clock is generated by a PLL as well. Schematic page 5. There are two specialty components in the clock generation path:

a Mini-Circuits JTOS-200 VCO
an Analog Devices AD9850 DDS Synthesizer

The VCO has an operating frequency between 100 and 200 MHz. I don't know enough about VCOs to give meaningful commentary about the specifications, but based on comparisons with similar components on Digikey, such as this FMVC11009, it's safe to assume that it's an expensive, high quality component. The AD9850 sits in the feedback path of the PLL, where it acts as a feedback divider with a precision of 32 bits. The signal flow inside the AD9850 is interesting. It works as follows:

each clock cycle, the phase of a numerically controlled oscillator (NCO) accumulates with the programmable 32-bit increment.
the upper bits of the phase accumulator serve as the address into a sine waveform table.
a 10-bit DAC converts the digital sine wave to analog.
the analog signal is sent through an external low-pass filter.
the output of the low-pass filter goes back into the AD9850 and through a comparator to generate a digital output clock.

This thing has its own programmable signal generator!
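The NCO portion of that flow is easy to model. This is a sketch: the 32-bit accumulator width matches the AD9850 datasheet, but the sine-table address width used here is my assumption.

```python
import math

PHASE_BITS = 32          # AD9850 tuning word width (datasheet)
TABLE_BITS = 10          # assumed sine-table address width
SINE_TABLE = [math.sin(2 * math.pi * i / 2**TABLE_BITS)
              for i in range(2**TABLE_BITS)]

def nco_samples(tuning_word: int, n: int) -> list:
    """Each clock, the accumulator advances by the 32-bit tuning word;
    the upper bits address the sine table that feeds the 10-bit DAC."""
    samples, phase = [], 0
    for _ in range(n):
        samples.append(SINE_TABLE[phase >> (PHASE_BITS - TABLE_BITS)])
        phase = (phase + tuning_word) & (2**PHASE_BITS - 1)
    return samples

def output_frequency(tuning_word: int, f_clk: float) -> float:
    """f_out = f_clk * tuning_word / 2^32: at a 100 MHz clock, the
    frequency resolution is f_clk / 2^32, about 0.023 Hz."""
    return f_clk * tuning_word / 2**PHASE_BITS
```

The accumulator wraps around naturally thanks to the bitmask, which is exactly what makes the phase increment map linearly onto output frequency.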
But why the roundtrip from digital to analog and back to digital? In theory, the MSB of the NCO could be used as the output clock of the clock generator. The problem is that in such a configuration, the length of each clock period toggles between N and N+1 clock cycles, with a ratio such that the average clock period ends up at the desired value. But this creates major spurs in the frequency spectrum of the generated clock. When fed into the phase comparator of a PLL, these frequency spurs can show up as jitter at the output of the PLL and thus in the spectrum of the generated I and Q signals. By converting the signal to analog and using a low pass filter, the spurs can be filtered away. The combination of a low pass filter and a comparator acts as an interpolator: the edges of the generated clock fall somewhere in between N and N+1. Schematic page 21. The steepness of the low pass filter depends on the ratio between the input and the output clock: the lower the ratio, the steeper the filter. There are a bunch of binary clock dividers that make it hard to know the exact ratio, but if the output of the VCO is 100 MHz and the input 10 MHz or less, there is a 10:1 ratio. The AMIQ has a 7th order elliptical low pass filter. I ran a quick simulation in LTspice to check the behavior. (dds_filter.asc source file.) The filter has a cut-off frequency of around 13 MHz. In modern fractional PLLs, instead of using a regular DAC, one could use a high-order sigma-delta unit to create a pulse density modulated output with the noise pushed to higher frequencies, so the low pass filter can be less aggressive. There's plenty of literature online about DDS clock generators, some of which I've listed in the references at the bottom, but the datasheet of the AD9850 itself is a good start.
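You can see the N/N+1 toggling directly by counting clock cycles between rising edges of a raw NCO MSB. This is a sketch with an illustrative divide ratio, not the AMIQ's actual settings:

```python
# Count input clock cycles between rising edges of the NCO's MSB. With a
# non-power-of-two tuning word, the measured periods toggle between N and
# N+1 cycles instead of being constant -- the source of the spurs.
def msb_periods(tuning_word: int, phase_bits: int = 32, edges: int = 8) -> list:
    periods, phase, last_msb, count = [], 0, 0, 0
    while len(periods) <= edges:           # one extra; first is discarded
        phase = (phase + tuning_word) & (2**phase_bits - 1)
        count += 1
        msb = phase >> (phase_bits - 1)
        if msb and not last_msb:           # rising edge of the MSB
            periods.append(count)
            count = 0
        last_msb = msb
    return periods[1:]                     # drop the truncated first period

# Dividing the clock by ~3.2 yields periods of 3 and 4 cycles, never 3.2.
print(msb_periods(int(2**32 / 3.2)))       # [3, 3, 4, 3, 3, 3, 3, 4]
```

The average period over the list is the programmed 3.2 cycles, but each individual period is off by up to half a cycle, which is exactly the phase error the low pass filter and comparator smooth away.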
I/Q Output Skew Tuning

Earlier I mentioned the ability to tune the skew between the I and the Q output to compensate for a difference in cable length when connecting the AMIQ to an RF signal generator. While the digital waveform fetching circuit works on a single clock, the two clocks that go to the DACs are the ones that can be shifted. Here's what the skewing circuit looks like: Schematic page 10. Starting with input signal DAC_CLK, the goal is to create I_DAC_CLK and Q_DAC_CLK, where either one can be moved up to 1 ns ahead of or behind the other. Signals can be delayed by sending them through an R/C combo. Since we need a variable delay, we need a way to change the value of either the R or the C. A varactor or varicap diode is exactly that kind of device: its capacitance changes depending on the reverse bias voltage across the diode. Static input signal SKEW_TUNE comes from a DAC. It goes to the cathode of one varicap and the anode of the other. When its voltage increases, the capacitances of those 2 diodes move in opposite directions, and so do the R/C delays along the I and Q clock paths. A BB147 varicap has a capacitance that varies between 2 pF and 112 pF depending on the bias voltage. Capacitors C168 and C619 prevent the relatively high bias voltages from reaching the digital signal path. Does the circuit work? Absolutely! The scope photos below show the impact of the skewing circuit when dialed to the maximum in both directions. With a 2 ns/div horizontal scale, you can see a skew of ~1 ns either way.

Variable Gain Amplifier

The analog signal path starts at the DAC and then goes through the anti-aliasing filters, an output section that does amplification and attenuation, and finally the output connector board. It is common for signal generators to have a fixed gain amplification section and then some selectable fixed attenuation stages: amplifiers with a variable gain and low distortion are hard to design.
If you need a signal amplitude that can't be achieved with one of the fixed attenuation settings, one solution is to scale down the signal before it enters the DAC, though that comes at the expense of some of the DAC's dynamic range. This is not the case for the AMIQ: while it can use a signal path with only fixed amplification and attenuation stages, it also offers the intriguing option of sending the analog signal through an analog multiplier stage. If we zoom down from the system diagram to the block diagram, we can see how the multiplier/variable attenuator sits between the filter and the output amplifier, with 2 control input signals: AMPL_CNTRL and OFFSET. This circuit exists twice, of course: once for the I channel and once for the Q channel. Schematic page 3. Let's check out the details! The heavy lifting of the variable gain amplifier is performed by an Analog Devices AD835 250 MHz, Voltage Output, 4-Quadrant Multiplier, a part from their Analog Multipliers & Dividers catalog. Schematic page 14. It's a tiny 8-pin device that calculates W = (X1-X2) * (Y1-Y2) + Z. In low volume, it will set you back $32 on Digikey. In addition to the multiplication, you can set a fixed output gain by feeding output W back into Z through a resistive divider. For inputs below 10 MHz, it has a typical harmonic distortion of -70 dB. Analog Devices datasheets usually have a Theory of Operation section that explains, well, the underlying theory of operation. The AD835 has such a section as well, but it doesn't go any further than stating that the multiplier "is based on a classic form, having a translinear core, supported by three (X, Y, and Z) linearized voltage-to-current converters, and the load driving output amplifier." I have no clue how the thing works!
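The AD835's transfer function is at least easy to state, even if its translinear innards aren't. A minimal behavioral sketch, including the fixed-gain trick: with a fraction k of W fed back into Z, solving W = (X1-X2)(Y1-Y2) + k·W gives a gain of 1/(1-k).

```python
# Behavioral model of the AD835: W = (X1 - X2) * (Y1 - Y2) + Z.
def ad835(x1: float, y1: float, z: float = 0.0,
          x2: float = 0.0, y2: float = 0.0) -> float:
    return (x1 - x2) * (y1 - y2) + z

assert ad835(2.0, 3.0, z=1.0) == 7.0

# Feeding a fraction k of W back into Z through a resistive divider gives
# W = XY + k*W, i.e. a fixed gain of 1/(1-k) on top of the multiplication.
def ad835_with_feedback(x1: float, y1: float, k: float) -> float:
    return (x1 * y1) / (1.0 - k)

# k = 0.5 doubles the output: 2 * 3 / (1 - 0.5) = 12.
assert ad835_with_feedback(2.0, 3.0, 0.5) == 12.0
```

This is a DC model only, of course; the interesting part of the real device is that it does this at 250 MHz with -70 dB distortion.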
In the case of the AMIQ, X2 and Y2 are strapped to ground, and there's no feedback from W back into Z either, which reduces the functionality to: W = FILTER_OUT * AMPL_VAR + OFFSET. AMPL_VAR and OFFSET are static analog values that are each created by a 12-bit DAC8143, just like many other analog configuration signals. It's almost a shame that this powerful little device is asked to perform such a basic operation. While researching the AD835, somebody pointed out the AD8310, another interesting specialty chip from Analog Devices. It's a DC to 440 MHz, 95 dB logarithmic amplifier: a converter from linear to logarithmic scale, basically. Discovering little gems like this is why I love studying the schematics of complex devices.

Internal Diagnostics

The AMIQ has a set of 15 internal analog signals to monitor the health of the device. Various meaningful signals are gathered from all around the signal generation board and measured at power up to give you that dreaded Diagnostic Check Fail message. Schematic page 27. The circuit itself is straightforward: two 8-to-1 analog multiplexers feed into a CS5507AS 16-bit AD converter. It can only do 100 samples per second, but that's sufficient to measure a bunch of mostly static signals. As with many other devices, the measured value is sent serially to one of the FPGAs. The ADC needs a 2.5V reference voltage. Funny enough, this reference voltage also goes to the analog multiplexers as one of the diagnostic signals. One wonders what the ADC returns when it tries to convert the output of a broken reference voltage generator. There are a bunch of circuits in the AMIQ whose only purpose is generating diagnostic signals. Here's a good example of that: Schematic page 14. I_OUT_DIAG and Q_OUT_DIAG are the outputs after the attenuator that will eventually end up at the connectors.
The circuit in the top red square is a signal peak detector, similar to what you'd find in the rectifier of a power supply. It allows the AMIQ to track the output level of signals with a frequency that is way higher than the sample rate of the ADC. The circuit in the red rectangle below it performs a digital XOR on the I and Q signals and then sends the result through a simple R/C low pass filter. I think this allows the AMIQ to check that the phase difference between the I and Q channels is sensible when applying a test signal during the power-on self-test.

Efficient Distribution of Configuration Signals

The AMIQ signal board has hundreds of configuration bits: there are the obvious enable/disable or selection bits, such as those that select between different output filters, but the lion's share are used to set the output values of the many 12-bit DACs. Instead of using parallel busses that fan out from an FPGA, the AMIQ has a serial configuration scan chain. Discrete 8-bit 74HCT4094 shift-and-store registers are located all over the design for the digital configuration bits. The DAC8143 devices have their own built-in shift register and are part of the same scan chain. Schematic page 19. The schematic above is an example of that: the red scan chain data input goes through the VTCXO_TUNE DAC, then through the 74HCT4094, after which it exits to another page of the schematics.

Conclusion

Around the late nineties, test equipment companies stopped including schematics in their service manuals, but the R&S AMIQ is a nice exception. And while the device already has a bunch of FPGAs, most of the components are off-the-shelf single-function parts. Thanks to that, the AMIQ is an excellent candidate for a deep dive: all the information is there, you just need to spend a bit of effort to go through it. I had a ton of fun figuring out how things worked.
References

Rohde & Schwarz documents

R&S - IQ Modulation Generator AMIQ
R&S - AMIQ Datasheet
R&S - AMIQ Operating Manual
R&S - AMIQ Service Manual with schematics

Various application notes

R&S - Floppy Disk Control of the I/Q Modulation Generator AMIQ
R&S - Software WinIQSIM for Calculating I/Q Signals for Modulation Generator R&S AMIQ
R&S - Creating Test Signals for Bluetooth with AMIQ / WinIQSIM and SMIQ
R&S - WCDMA Signal Generator Solutions
R&S - Golden devices: ideal path or detour?
R&S - Demonstration of BER Test with AMIQ controlled by WinIQSIM

Other AMIQ content on the web

zw-ix has a blog - Getting a Rohde Schwarz AMIQ up and running
zw-ix has a blog - Connecting a Rohde Schwarz AMIQ to a SMIQ04
Bosco tweets

DDS Clock Synthesis

MT-085 Tutorial: Fundamentals of Direct Digital Synthesis (DDS)
How to Predict the Frequency and Magnitude of the Primary Phase Truncation Spur in the Output Spectrum of a Direct Digital Synthesizer (DDS)

Related content

The Signal Path - Teardown, Repair & Analysis of a Rohde & Schwarz AFQ100A I/Q (ARB) Modulation Generator
An opening note: would you believe that I have been at this for five years, now? If I planned ahead better, I would have done this on the five-year anniversary, but I missed it. Computers Are Bad is now five years and four months old. When I originally launched CAB, it was my second attempt at keeping up a blog. The first, which I had called 12 Bit Word, went nowhere and I stopped keeping it up. One of the reasons, I figured, is that I had put too much effort into it. CAB was a very low-effort affair, which was perhaps best exemplified by the website itself. It was monospace and 80 characters wide, a decision that I found funny (in a shitposty way) and generated constant complaints. To be fair, if you didn't like the font, it was "user error:" I only ever specified "monospace" and I can't be blamed that certain platforms default to Courier. But there were problems beyond the appearance; the tool that generated the website was extremely rough and made new features frustrating to implement. Over the years, I have not invested much (or really any) effort in promoting CAB or even making it presentable. I figured my readership, interested in vintage computing, would probably put up with it anyway. That is at least partially true, and I am not going to put any more effort into promotion, but some things have changed. Over time I have broadened my topics quite a bit, and I now regularly write about things that I would have dropped as "off topic" three or four years ago. Similarly, my readership has broadened, and probably to a set of people that find 80 characters of monospace text less charming. I think I've also changed my mind in some ways about what is "special" about CAB. One of the things that I really value about it, that I don't think comes across to readers well, is the extent to which it is what I call artisanal internet. It's like something you'd get at the farmer's market. 
What I mean by this is that CAB is a website generated by a static site generator that I wrote, and a newsletter sent by a mailing list system that I wrote, and you access them by connecting directly to a VM that I administer, on a VM cluster that I administer, on hardware that I own, in a rack that I lease in a data center in downtown Albuquerque, New Mexico. This is a very old-fashioned way of doing things, now, and one of the ironies is that it is a very expensive way of doing things. It would be radically cheaper and easier to use wordpress.com, and it would probably go down less often and definitely go down for reasons that are my fault less often. But I figure people listen to me in part because I don't use wordpress.com, because I have weird and often impractical opinions about how to best contribute to internet culture. I spent a week on a cruise ship just recently, and took advantage of the great deal of time I had to look at the sea to also get some work done. Strategically, I decided, I want to keep the things that are important to me (doing everything myself) and move on from the things that are not so important (the website looking, objectively, bad). So this is all a long-winded announcement that I am launching, with this post, a complete rewrite of the site generator and partial rewrite of the mailing list manager. This comes with several benefits to you. First, computer.rip is now much more readable and, arguably, better looking. Second, it should be generally less buggy (although to be fair I had eliminated most of the problems with the old generator through sheer brute force over the years). Perhaps most importantly, the emails sent to the mailing list are no longer the unrendered Markdown files. I originally didn't use markup of any kind, so it was natural to just email out the plaintext files. But then I wanted links, and then I wanted pictures, leading me to implement Markdown in generating the webpages... 
but I just kept emailing out the plaintext files. I strongly considered switching to HTML emails as a solution and mostly finished the effort, but in the end I didn't like it. HTML email is a massive pain in the ass and, I think, distasteful. Instead, I modified a Markdown renderer to create human-readable plaintext output. Things like links and images will still be a little weird in the plaintext emails, but vastly better than they were before. I expect some problems to surface when I put this all live. It is quite possible that RSS readers will consider the most recent ten posts to all be new again due to a change in how the article IDs are generated. I tried to avoid that happening but, look, I'm only going to put so much time into testing and I've found RSS readers to be surprisingly inconsistent. If anything else goes weird, please let me know. There has long been a certain connection between the computer industry and the art of animation. The computer, with a frame-oriented raster video output, is intrinsically an animation machine. Animation itself is an exacting, time-consuming process that has always relied on technology to expand the frontier of the possible. Walt Disney, before he was a business magnate, was a technical innovator in animation. He made great advances in cel animation techniques during the 1930s, propelling the Disney Company to fame not only by artistic achievement but also by reducing the cost and time involved in creating feature-length animated films. Most readers will be familiar with the case of Pixar, a technical division of Lucasfilm that operated primarily as a computer company before its 1986 spinoff under computer executive Steve Jobs---who led the company through a series of creative successes that overshadowed the company's technical work until it was known to most only as a film studio. Animation is hard. 
There are several techniques, but most ultimately come down to an animator using experience, judgment, and trial and error to get a series of individually composed frames to combine into fluid motion. Disney worked primarily in cel animation: each element of each frame was hand-drawn, but on independent transparent sheets. Each frame was created by overlaying the sheets like layers in a modern image editor. The use of separate cels made composition and corrections easier, by allowing the animator to move and redraw single elements of the final image, but it still took a great deal of experience to produce a reasonable result. The biggest challenge was in anticipating how motion would appear. From the era of Disney's first work, problems like registration (consistent positioning of non-moving objects) had been greatly simplified by the use of clear cels and alignment pegs on the animator's desk that held cels in exact registration for tracing. But some things in an animation are supposed to move; I would say that's what makes it animation. There was no simple jig for ensuring that motion would come out smoothly, especially for complex movements like a walking or gesturing character. The animator could flip two cels back and forth, but that was about as good as it got without committing the animation to film. For much of the mid-century, a typical animation workflow looked like this: a key animator would draw out the key frames in final or near-final quality, establishing the most important moments in the animation, the positions and poses of the characters. The key animator or an assistant would then complete a series of rough pencil sketches for the frames that would need to go in between. These sketches were sent to the photography department for a "pencil test." In the photography department, a rostrum camera was used: a cinema camera, often 16mm, permanently mounted on an adjustable stand that pointed it down at a flat desk.
The rostrum camera looked a bit like a photographic enlarger and worked much the same way, but backwards: the photographer laid out the cels or sketches on the desk, adjusted the position and focus of the camera for the desired framing, and then exposed one frame. This process was repeated, over and over and over, a simple economy that explains the common use of a low 12 FPS frame rate in animation. Once the pencil test had been photographed, the film went to the lab where it was developed, and then returned to the animation studio where the production team could watch it played on a cinema projector in a viewing room. Ideally, any problems would be identified during this first viewing before the key frames and pencil sketches were sent to the small army of assistant animators. These workers would refine the cels and redraw the pencil sketches in part by tracing, creating the "in between" frames of the final animation. Any needed changes were costly, even when caught at the earliest stage, as it usually took a full day for the photography department to return a new pencil test (making the pencil test very much analogous to the dailies used in film). What separated the most skilled animators from amateurs, then, was often their ability to visualize the movement of their individual frames by imagination. They wanted to get it right the first time. Graphics posed a challenge to computers for similar reasons. Even a very basic drawing involves a huge number of line segments, which a computer will need to process individually during rendering. Add properties such as color, consider the practicalities of rasterizing, and then make it all move: just the number of simple arithmetic problems involved in computer graphics becomes enormous. It is not a coincidence that we picture all early computer systems as text-only, although it is a bit unfair. Graphical output is older than many realize, originating with vector-mode CRT displays in the 1950s.
Still, early computer graphics were very slow. Vector-mode displays were often paired with high-end scientific computers and you could still watch them draw in real time. Early graphics-intensive computer applications like CAD used specialized ASICs for drawing and yet provided nothing like the interactivity we expect from computers today. The complexity of computer graphics ran head-first against an intense desire for more capable graphical computers, driven most prominently by the CAD industry. Aerospace and other advanced engineering fields were undergoing huge advancements during the second half of the 20th century. World War II had seen adoption of the jet engine, for example, machines which were extremely powerful but involved complex mathematics and a multitude of 3D parts that made them difficult for a human to reason over. The new field of computer-aided design promised a revolutionary leap in engineering capability, but ironically, the computers were quite bad at drawing. In the first decades, CAD output was still being sent to traditional draftsmen for final drawings. The computers were not only slow, but unskilled at the art of drafting: limitations on the number and complexity of the shapes that computers could render limited them to only very basic drawings, without the extensive annotations that would be needed for manufacturing. During the 1980s, the "workstation" began to replace the mainframe in engineering applications. Today, "workstation" mostly just identifies PCs that are often extra big and always extra expensive. Historically, workstations were a different class of machines from PCs that often employed fundamentally different architectures. Workstations were often RISC, an architecture selected for better mathematical performance, frequently ran UNIX or a derivative, and featured the first examples of what we now call a GPU. Some things don't change: they were also very big, and very expensive. 
It was the heady days of the space program and the Concorde, then, that brought us modern computer graphics. The intertwined requirements for scientific computing, numerical simulation, and computer graphics that emerged from Cold War aerospace and weapons programs forged a strong bond between high-end computing and graphics. One could perhaps say that the nexus between AI and GPUs today is an extension of this era, although I think it's a bit of a stretch given the text-heavy applications. The echoes of the dawn of computer graphics are much quieter today, but still around. They persist, for example, in the heavy emphasis on computer visualization seen throughout scientific computing but especially in defense-related fields. They persist also in the names of the companies born in that era, names like Silicon Graphics and Mentor Graphics. The development of video technology, basically the combination of preexisting television technology with new video tape recorders, led to a lot of optimizations in film. Video was simply not of good enough quality to displace film for editing and distribution, but it was fast and inexpensive. For example, beginning in the 1960s filmmakers began to adopt a system called "video assist." A video camera was coupled to the film camera, either side-by-side with matched lenses or even sharing the same lens via a beam splitter. By running a video tape recorder during filming, the crew could generate something like an "instant daily" and play the tape back on an on-set TV. For the first time, a director could film a scene and then immediately rewatch it. Video assist was a huge step forward, especially in the television industry, where it furthered the marriage of film techniques and television techniques for the production of television dramas. It certainly seems that there should be a similar technique for animation. It's not easy, though.
Video technology was all designed around sequences of frames in a continuous analog signal, not individual images stored discretely. With the practicalities of video cameras and video recorders, it was surprisingly difficult to capture single frames and then play them back to back. In the 1970s, animators Bruce Lyon and John Lamb developed the Lyon-Lamb Video Animation System (VAS). The original version of the VAS was a large workstation that replaced a rostrum camera with a video camera, monitor, and a custom video tape recorder. Much like the film rostrum camera, the VAS allowed an operator to capture a single frame at a time by composing it on the desk. Unlike the traditional method, the resulting animation could be played back immediately on the included monitor. The VAS was a major innovation in cel animation, and netted both an Academy Award and an Emmy for technical achievement. While it's difficult to say for sure, it seems like a large portion of the cel-animated features of the '80s had used the VAS for pencil tests. The system was particularly well-suited to rotoscoping, overlaying animation on live-action images. Through a combination of analog mixing techniques and keying, the VAS could directly overlay an animator's work on the video, radically accelerating the process. To demonstrate the capability, John Lamb created a rotoscoped music video for the Tom Waits song "The One That Got Away." The resulting video, titled "Tom Waits for No One," was probably the first rotoscoped music video as well as the first production created with the video rotoscope process. As these landmarks often do, it languished in obscurity until it was quietly uploaded to YouTube in 2006. The VAS was not without its limitations. It was large, and it was expensive. Even later generations of the system, greatly miniaturized through the use of computerized controls and more modern tape recorders, came in at over $30,000 for a complete system. 
And the VAS was designed around the traditional rostrum camera workflow, intended for a dedicated operator working at a desk. For many smaller studios the system was out of reach, and for forms of animation that were not amenable to top-down photography on a desk, the VAS wasn't feasible. There are some forms of animation that are 3D---truly 3D. Disney had produced pseudo-3D scenes by mounting cels under a camera on multiple glass planes, for example, but it was obviously possible to do so in a more complete form by the use of animated sculptures or puppets. Practical challenges seem to have left this kind of animation mostly unexplored until the rise of its greatest producer, Will Vinton. Vinton grew up in McMinnville, Oregon, but left to study at UC Berkeley. His time in Berkeley left him not only with an architecture degree (although he had studied filmmaking as well), but also a friendship with Bob Gardiner. Gardiner had a prolific and unfortunately short artistic career, in which he embraced many novel media including the hologram. Among his inventions, though, seems to have been claymation itself: Gardiner was fascinated with sculpting and posing clay figures, and demonstrated the animation potential to Vinton. Vinton, in turn, developed a method of using his student film camera to photograph the clay scenes frame by frame. Their first full project together, Closed Mondays, took the Academy Award for Best Animated Short Film in 1975. It was notable not only for the moving clay sculptures, but for its camerawork. Vinton had realized that in claymation, where scenes are composed in real 3D space, the camera can be moved from frame to frame just like the figures. Not long after this project, Vinton and Gardiner split up. Gardiner seems to have been a prolific artist in that way where he could never stick to one thing for very long, and Vinton had a mind towards making a business out of this new animation technology.
It was Vinton who christened it Claymation, then a trademark of his new studio. Vinton returned to his home state and opened Will Vinton Studios in Portland. Vinton Studios released a series of successful animated shorts in the '70s, and picked up work on numerous other projects, contributing for example to the "Wizard of Oz" film sequel "Return to Oz" and the Disney film "Captain EO." By far Will Vinton Studios' most famous contributions to our culture, though, are their advertising projects. Will Vinton Studios brought us the California Raisins, the Noid, and walking, talking M&M's. Will Vinton Studios struggled with producing claymation at commercial scale. Shooting with film cameras, it took hours to see the result. Claymation scenes were more difficult to rework than cel animation, setting an even larger penalty for reshoots. Most radically, claymation scenes had to be shot on sets, with camera and light rigging. Reshooting sections without continuity errors was as challenging as animating those sections in the first place. To reduce rework, they used pencil tests: quicker, lower-effort versions of scenes shot to test the lighting, motion, and sound synchronization before photography with a film camera. Their pencil tests were apparently captured on a crude system of customized VCRs, allowing the animator to see the previous frame on a monitor as they composed the next, and then to play back the whole sequence. It was better than working from film, but it was still slow going. The area from Beaverton to Hillsboro, in Oregon near Portland, is sometimes called "the silicon forest," largely on the influence of Intel and Tektronix. As in the better known silicon valley, these two keystone companies were important not only on their own, but also as the progenitors of dozens of new companies. Tektronix, in particular, had a steady stream of employees leaving to start their own businesses. Among these alumni was Mentor Graphics.
Mentor Graphics was an early player in electronic design automation (EDA), sort of like a field of CAD specialized to electronics. Mentor products assisted not just in the physical design of circuit boards and ICs, but also in the simulation and validation of their functionality. Among the challenges of EDA is its fundamentally graphical nature: the final outputs of EDA are often images, masks for photolithographic manufacturing processes, and engineers want to see both manufacturing drawings and logical diagrams as they work on complex designs. When Mentor started out in 1981, EDA was in its infancy and relied mostly on custom hardware. Mentor went a different route, building a suite of software products that ran on Motorola 68000-based workstations from Apollo. The all-software architecture had cost and agility advantages, and Mentor outpaced their competition to become the field's leader. Corporations hunger for growth, and by the 1990s Mentor had a commanding position in EDA and went looking for other industries to which their graphics-intensive software could be applied. One route they considered was, apparently, animation: computer animation was starting to take off, and there were very few vendors for not just the animation software but the computer platforms capable of rendering the product. In the end, Mentor shied away: companies like Silicon Graphics and Pixar already had a substantial lead, and animation was an industry that Mentor knew little about. As best I can tell, though, it was this brief investigation of a new market that exposed Mentor engineering managers Howard Mozeico and Arthur Babitz to the animation industry. I don't know much about their career trajectories in the years shortly after, only that they both decided to leave Mentor for their own reasons. Arthur Babitz went into independent consulting, and found a client reminiscent of his work at Mentor, an established animation studio that was expanding into computer graphics: Will Vinton Studios.
Babitz's work at Will Vinton Studios seems to have been largely unrelated to claymation, but it exposed him to the process, and he watched the way they used jury-rigged VCRs and consumer video cameras to preview animations. Just a couple of years later, Mozeico and Babitz talked about their experience with animation at Mentor, a field they were both still interested in. Babitz explained the process he had seen at Will Vinton Studios, and his ideas for improving it. Both agreed that they wanted to figure out a sort of retirement enterprise, what we might now call a "lifestyle business": they each wanted to found a company that would keep them busy, but not too busy. The pair incorporated Animation Toolworks, headquartered in Mozeico's Sherwood, Oregon home. In 1998, Animation Toolworks hit trade shows with the Video Lunchbox. The engineering was mostly by Babitz, the design and marketing by Mozeico, and the manufacturing was done on contract by a third party. The device took its name from its form factor, a black crinkle-paint box with a handle on top of its barn-roof-shaped lid. It was something like the Lyon Lamb VAS, if it were portable, digital, and relatively inexpensive. The Lunchbox was essentially a framegrabber, a compact and simplified version of the computer framegrabbers that were coming into use in the animation industry. You plugged a video camera into the input, and a television monitor into the output. You could see the output of the camera, live, on the monitor while you composed a scene. Then, one press of a button captured a single frame and stored it. With a press of another button, you could swap between the stored frame and the live image, helping to compose the next. You could even enable an automatic "flip-flop" mode that alternated the two rapidly, for hands-free adjustment. 
Each successive press of the capture button stored another frame to the Lunchbox's memory, and buttons allowed you to play the entire set of stored frames as a loop, or manually step forward or backward through the frames. And that was basically it: there were a couple of other convenience features like an intervalometer (for time lapse) and the ability to record short sections of real-time video, but complete operation of the device was really very simple. That seems to have been one of its great assets. The Lunchbox was much easier to sell after Mozeico gave a brief demonstration and said that that was all there was to it. To professionals, the Lunchbox was a more convenient, more reliable, and more portable version of the video tape recorder or computer framegrabber systems they were already using for pencil tests. Early customers of Animation Toolworks included Will Vinton Studios alongside other animation giants like Disney, MTV, and Academy Award-winning animator Mark Osborne. Animation Toolworks press quoted animators from these firms commenting on the simplicity and ease of use, saying that it had greatly sped up the animation test process. In a review for Animation World Magazine, Kellie-Bea Rainey wrote: In most cases, computers as framegrabbers offer more complications than solutions. Many frustrations stem from the complexity of learning the computer, the software and its constant upgrades. But one of the things Gary Schwartz likes most about the LunchBox is that the system requires no techno-geeks. "Computers are too complex and the technology upgrades are so frequent that the learning curve keeps you from mastering the tools. It seems that computers are taking the focus off the art. The Video LunchBox has a minimum learning curve with no upgrade manuals. Everything is in the box, just plug it in." Indeed, the Lunchbox was so simple that it caught on well beyond the context of professional studios. It is remembered most as an educational tool. 
Disney used the Lunchbox for teaching cel animation in a summer program, but closer to home, the Lunchbox made its way to animation enthusiast and second-grade teacher Carrie Caramella. At Redmond, Oregon's John Tuck Elementary School, Caramella acted as director of a student production team that brought their short film "The Polka Dot Day" to the Northwest Film Center's Young People's Film and Video Festival. During the early 2000s, after-school and summer animation programs proliferated, many using claymation, and almost all using the Video Lunchbox. At $3,500, the Video Lunchbox was not exactly cheap. It cost more than some of the more affordable computer-based options, but it was so much easier to use, and so much more durable, that it was very much at home in a classroom. Caramella: "By using the lunchbox, we receive instant feedback because the camera acts as an eye. It is also child-friendly, and you can manipulate the film a lot more." Caramella championed animation at John Tuck, finding its uses in other topics. A math teacher worked with students to make a short animation of a chicken. In a unit on compound words, Caramella led students in animating their two words together: a sun and a flower dance; the word is "sunflower." Butter and milk, base and ball. In Lake Oswego, an independent summer program called Earthlight Studios took up the system. With the lunchbox, Corey's black-and-white drawings spring to life, two catlike anime characters circling each other with broad-edged swords. It's the opening seconds of what he envisions will be an action-adventure film. We can imagine how cringeworthy these student animations must be to their creators today, but early-'00s education was fascinated with multimedia and it seems rare that technology served the instructional role so well. It was in this context that I crossed ways with the Lunchbox. 
As a kid, I went to a summer animation program at OMSI---a claymation program, which I hazily remember was sponsored by a Will Vinton Studios employee. In an old industrial building beside the museum, we made crude clay figures and then made them crudely walk around. The museum's inventory of Lunchboxes already showed their age, but they worked, in a way that was so straightforward that I think hardly any time was spent teaching operation of the equipment. It was a far cry from an elementary school film project in which, as I recall, nearly an entire day of class time was burned trying to get video off of a DV camcorder and into iMovie. Mozeico and Babitz aimed for modest success, and that was exactly what they found. Animation Toolworks got started on so little capital that it turned a profit the first year, and by the second year the two made a comfortable salary---and that was all the company would ever really do. Mozeico and Babitz continued to improve on the concept. In 2000, they launched the Lunchbox Sync, which added an audio recorder and the ability to cue audio clips at specific frame numbers. In 2006, the Lunchbox DV added digital video. By the mid-2000s, computer multimedia technology had improved by leaps and bounds. Framegrabbers and real-time video capture devices were affordable, and animation software on commodity PCs overtook the Lunchbox on price and features. Still, the ease of use and portability of the Lunchbox were a huge appeal to educators. By 2005 Animation Toolworks was basically an educational technology company, and in the following years computers overtook them in that market as well. The era of the Lunchbox is over, in more ways than one. A contentious business maneuver by Phil Knight saw Will Vinton pushed out of Will Vinton Studios. He was replaced by Phil Knight's son, Travis Knight, and the studio rebranded as Laika. 
The company has struggled under its new management, and Laika has not achieved the renaissance of stop-motion that some thought Coraline might bring about. Educational technology has shifted its focus, as a business, to a sort of lightweight version of corporate productivity platforms that is firmly dominated by Google. Animation Toolworks was still selling the Lunchbox DV as late as 2014, but by 2016 Mozeico and Babitz had fully retired and offered support on existing units only. Mozeico died in 2017, crushed under a tractor on his own vineyard. There are worse ways to go. Arthur Babitz is a Hood River County Commissioner. Kellie-Bea Rainey: I took the two-minute tutorial and taped it to the wall. I cleaned off a work table and set up a stage and a character. Then I put my Sharp Slimcam on a tripod... Once the camera was plugged into the LunchBox, I focused it on my animation set-up. Next, I plugged in my monitor. All the machines were on and all the lights were green, standing by. It's time to hit the red button on the LunchBox and animate! Yippee! Look Houston, we have an image! That was quick, easy and most of all, painless. I want to do more, and more, and even more. The next time you hear from me I'll be having fun, teaching my own animation classes and making my own characters come to life. I think Gary Schwartz says it best, "The LunchBox brings the student back to what animation is all about: art, self-esteem, results and creativity." I think we're all a little nostalgic for the way technology used to be. I know I am. But there is something to be said for a simple device, from a small company, that does a specific thing well. I'm not sure that I have ever, in my life, used a piece of technology that was as immediately compelling as the Video Lunchbox. There are numerous modern alternatives, replete with USB and Bluetooth and iPad apps. Somehow I am confident that none of them are quite as good.
We’re excited to announce that the Arduino team is returning to Amsterdam as an ecosystem partner at The Things Conference 2025, the world’s leading LoRaWAN event, taking place September 23rd-24th. This year, we’re bringing more tech, more insights, and more real-world use cases than ever – to give you all the tools you need to future-proof […]
Okay, I have to be doing something astronomically stupid, right? This should be working? I’m playing around with an App Clip and want to just run it on the device as a test, but no matter how I set things up, nothing ever works. If you see what I’m doing wrong, let me know and I’ll update this, and hopefully we can save someone else a few hours of banging their head!

Xcode

App Clips require some setup in App Store Connect, so Apple provides a way to sidestep all that when you’re just testing things: App Clip Local Experiences. I create a new sample project called IceCreamStore, which has the bundle ID com.christianselig.IceCreamStore. I then go to File > New > Target… > App Clip. I choose the Product Name “IceCreamClip”, and it automatically gets the bundle ID com.christianselig.IceCreamStore.Clip. I run both the main target and the App Clip target on my iOS 18.6 phone and everything shows up perfectly, so let’s go on to actually configuring the Local Experience.

Local Experience setup

I go to Settings.app > Developer > App Clips Testing > Local Experiences > Register Local Experience, and then input the following details:

URL Prefix: https://boop.com/beep/
Bundle ID: com.christianselig.IceCreamStore.Clip (note the Apple guide above says to use the Clip’s bundle ID, but I have tried both)
Title: Test1
Subtitle: Test2
Action: Open

Upon saving, I then send myself a link to https://boop.com/beep/123 in iMessage, and upon tapping on it… nothing, it just tries to open that URL in Safari rather than in an App Clip (as it presumably should?). Same thing if I paste the URL into Safari’s address bar directly.

Help

What’s the deal here, what am I doing wrong? Is my App Store Connect account conspiring against me? I’ve tried on multiple iPhones on both iOS 18 and 26, and the incredible Matt Heaney (wrangler of App Clips) even kindly spent a bunch of time also pulling his hair out over this. 
We even tried to see if my devices were somehow banned from using App Clips, but nope, production apps using App Clips work fine! If you figure this out you would be my favorite person. 😛
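For context on what the Local Experience is supposed to trigger: an App Clip doesn't receive its invocation link through a custom URL scheme, it arrives as a browsing-web NSUserActivity. Here's a minimal SwiftUI sketch of the handler the test link should eventually reach once the clip actually launches — the view contents are hypothetical, but onContinueUserActivity and NSUserActivityTypeBrowsingWeb are the standard APIs:

```swift
import SwiftUI

@main
struct IceCreamClipApp: App {
    // Filled in when the system hands the clip its invocation URL,
    // e.g. https://boop.com/beep/123 from the registered Local Experience.
    @State private var invocationURL: URL?

    var body: some Scene {
        WindowGroup {
            Text(invocationURL?.absoluteString ?? "Waiting for invocation…")
                // App Clips are launched with a browsing-web user activity;
                // the tapped link is delivered as activity.webpageURL.
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    invocationURL = activity.webpageURL
                }
        }
    }
}
```

Which helps narrow the failure modes: if the clip launched but this handler never saw a URL, the problem would be in the activity hand-off; since the link opens Safari instead, the clip is never being invoked at all.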