Contents: Introduction · The ICE-V Wireless FPGA Board · Preloading the PSRAM with User Data · The Overall Boot Process · UART Console · Getting Started with the ICE-V Wireless Board for Real · Reinstalling or Modifying the ESP32C3 Firmware · The RISC-V Example FPGA Design · Compiling the RISC-V Example FPGA Design · Conclusion · Footnotes

Introduction

It's been more than 7 years now since the first public release of Project IceStorm. Its goal was to reverse engineer the internals of Lattice ICE40 FPGAs and create a fully open source toolchain to go from RTL all the way to a placed-and-routed bitstream. Project IceStorm was a huge success and started a small revolution in the world of hobby electronics. While it had been possible to design for FPGAs before, it required multi-GB behemoth software installations, and there weren't many cheap development boards. Project IceStorm kicked off an industry of small-scale makers who created their own development boards, each with a distinct set of features. One crowd...

More from Electronics etc…

Brightness and Contrast Adjustment of Tektronix TDS 500/600/700 Oscilloscopes

Contents: Introduction · Finding the Display Tuning Potentiometers · The Result · Hardcopy Preview Mode

Introduction

Less than a week after finishing my TDS 684B analog memory blog post, a TDS 684C landed on my lab bench with a very dim CRT. If you follow the lives of the 3-digit TDS oscilloscope series, you probably know that this is normally a bit of a death sentence for the CRT: after years of use, the cathode ray loses its strength and there's nothing you can do about it other than replace the CRT with an LCD screen. I was totally ready to go that route, and if I ever need to do it, here are 3 possible LCD upgrade options that I list for later reference:

- The most common one is to buy a $350 Newscope-T1 LCD display kit by SimmConn Labs.
- A cheaper hobbyist alternative is to hack something together with a VGA to LVDS interface board and some generic LCD panel, as described in this build report. He uses a VGA LCD Controller Board KYV-N2 V2 with a 7" A070SN02 LCD panel. As I write this, the cost is $75, but I assume this used to be a lot cheaper before tariffs were in place.
- If you really want to go hard-core, you could make your own interface board with an FPGA that snoops the RAMDAC digital signals and converts them to LVDS, just like the Newscope-T1. There is a whole thread about this on the EEVblog forum.

But this blog post is not about installing an LCD panel! Before going that route, you should try to increase the brightness of the CRT by turning a potentiometer on the display board. It sounds like an obvious thing to try, but I didn't find a lot of references to it online. And in my case, it just worked.

Finding the Display Tuning Potentiometers

In the Display Assembly Adjustment section of chapter 5 of the TDS 500D, TDS 600C, TDS 700D and TDS 714L Service Manual, page 5-23, you'll find the instructions on how to change rotation, brightness and contrast. It says to remove the cabinet and then turn some potentiometers, but I just couldn't find them! They're supposed to be next to the fan, somewhere around there. Well, I couldn't see any. It was only the next day, when I was ready to take the whole thing apart, that I noticed these dust covered holes. A few minutes and a vacuum cleaning operation later reveal 5 glorious potentiometers. From left to right:

- horizontal position
- rotation
- vertical position
- brightness
- contrast

Rotate the last 2 at will and if you're lucky, your dim CRT will look brand new again. It did for me!

The Result

The weird colors in the picture above are a photography artifact caused by Tektronix NuColor display technology: it uses a monochrome CRT with an R/G/B shutter in front of it. You can read more about it in this Hackaday article. In real life, the image looks perfectly fine!

Hardcopy Preview Mode

If dialing up the brightness doesn't work and you don't want to spend money on an LCD upgrade, there is the option of switching the display to Hardcopy mode, like this:

[Display] -> [Settings <Color>] -> [Palette] -> [Hardcopy preview]

Instead of a black background, you will now get a white one. It made the scope usable before I made the brightness adjustment.

A Tektronix TDS 684B Oscilloscope Uses CCD Analog Memory

Contents: Introduction · The TDS600 Series · The Acquisition Board · Measuring Along the Signal Path · A Closer Look at the Noise Issue · Conclusion

Introduction

I have a Tektronix TDS 684B oscilloscope that I bought cheaply at an auction. It has 4 channels, 1 GHz of BW and a sample rate of 5 Gsps. Those are respectable numbers even by today's standards, and they're the main reason why I have it: compared to modern oscilloscopes, the other features aren't nearly as impressive. It can only record 15k samples per channel at a time, for example. But at least the sample rate doesn't go down when you increase the number of recording channels: it's 5 Gsps even when all 4 channels are enabled. I've always wondered how Tektronix managed to reach such high specifications back in the nineties, so in this blog post I take a quick look at the internals, figure out how it works, and do some measurements along the signal path.

The TDS600 Series

The first oscilloscopes of the TDS600 series were introduced around 1993. The last one, the TDS694C, was released in 2002. The TDS684 version was from sometime in 1995. The ICs on my TDS684C have date codes from as early as the first half of 1997. The main characteristic of these scopes was their extreme sample rate for that era, going from 2 Gsps for the TDS620, TDS640 and TDS644, over 5 Gsps for the TDS654, TDS680 and TDS684, to 10 Gsps for the TDS694C, which was developed under the Screamer code name.

The oscilloscopes have 2 main boards:

- the acquisition board contains all the parts from the analog input down to the sample memory, as well as some triggering logic.
- a very busy CPU board does the rest.

2 flat cables and a PCB connect the 2 boards. The interconnect PCB traces go to the memory on the acquisition board. It's safe to assume that this interface is used for high-speed waveform data transfer while the flat cables are for lower speed configuration and status traffic. If you ever remove the interconnection PCB, make sure to put it back with the same orientation. It will fit just fine when rotated 180 degrees, but the scope won't work anymore!

The Acquisition Board

The TDS 684B has 4 identical channels that can easily be identified. There are 6 major components in the path from input to memory:

- Analog front-end: hidden under a shielding cover, but you'd expect to find a bunch of relays there to switch between different configurations: AC/DC, 1Meg/50 Ohm termination, ... I didn't open it because it requires disassembling pretty much the whole scope.
- Signal conditioner IC(?): this is the device with the glued-on heatsink. I left it in place because there's no metal attachment latch and reattaching it would be a pain. Since the acquisition board has a bunch of custom ICs already, chances are this one is custom as well, so knowing the exact part number wouldn't add a lot of extra info. We can see one differential pair going from the analog front-end into this IC and a second one going from this IC to the next one, an ADG286D.
- National Semi ADG286D mystery chip: another custom chip with unknown functionality.
- Motorola MC10319DW 8-bit 25 MHz A/D converter: finally, an off-the-shelf device! But why is it only rated for 25 MHz?
- National Semi ADG303, a custom memory controller chip: it receives the four 8-bit lanes from the four ADCs on one side and connects to four SRAMs on the other.
- 4 Alliance AS7C256-15JC SRAMs: each memory has a capacity of 32KB and a 15ns access time, which allows for a maximum clock of 66 MHz.
The TDS 684B supports waveform traces of 15k points, so they either only use half of the available capacity or they use some kind of double-buffering scheme. There are four unpopulated memory footprints. In one of my TDS 420A blog posts, I extend the waveform memory by soldering in extra SRAM chips. I'm not aware of a TDS 684B option for additional memory, so I'm not optimistic about the ability to expand it. There's also no such grayed-out option in the acquisition menu.

When googling for "ADG286D", I got my answer when I stumbled on this comment on BlueSky, which speculates that it's an analog memory, probably some kind of CCD FIFO. Analog values are captured at a rate of up to 5 GHz and then shifted out at a much lower speed and fed into the ADC. I later found a few other comments that confirm this theory.

Measuring Along the Signal Path

Let's verify this by measuring a few signals on the board with a different scope. The ADC input pins are large enough to attach a Tektronix logic analyzer probe.

ADC sampling the signal

With a 1 MHz signal and using a 100 Msps sample rate, the input to the ADC looks like this: the input to the ADC is clearly chopped into discrete samples, with a new sample every 120 ns. We can discern a sine wave in the samples, but there's a lot of noise on the signal too. Meanwhile the TDS 684B CRT shows a nice and clean 1 MHz signal. I haven't been able to figure out how that's possible. For some reason, simply touching the clock pin of the ADC with a 1 MOhm oscilloscope probe adds a massive amount of noise to the input signal, but it shows the clock nicely: the ADC clock matches the input signal. It's indeed 8.33 MHz.

Acquisition refresh rate

The scope only records in bursts. When recording 500, 1000 or 2500 sample points at 100 Msps, it records a new burst every 14 ms, or 70 Hz. When recording 5000 points, the refresh rate drops to 53 Hz. For 15000 points, it drops even lower, to 30 Hz.

Sampling burst duration

The duration of a sampling burst is always 2 ms, irrespective of the sample rate of the oscilloscope or the number of points acquired! The combination of a 2 ms burst and an 8 MHz sample clock results in 16k samples. So the scope always acquires what's probably the full contents of the CCD FIFO and throws a large part away when a lower sample length is selected.
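A quick sanity check of these numbers, just my own back-of-the-envelope math rather than anything from a Tektronix document: the measured 120 ns sample period and 2 ms burst duration should indeed line up with an ADC clock of roughly 8 MHz and a CCD FIFO depth of about 16k samples.

```python
# Back-of-the-envelope check of the measured numbers (not from any service manual).
sample_period = 120e-9      # time between ADC samples, measured above
burst_duration = 2e-3       # duration of one acquisition burst, measured above

adc_rate = 1 / sample_period
samples_per_burst = burst_duration / sample_period

print(f"ADC sample rate:   {adc_rate / 1e6:.2f} MHz")   # ~8.33 MHz
print(f"Samples per burst: {samples_per_burst:.0f}")    # ~16667, i.e. roughly 16k
```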
Here's the 1 MHz signal sampled at 100 Msps: And here's the same signal sampled at 5 Gsps: It looks like the signal doesn't scan out of the CCD memory in the order it was received, hence the signal discontinuity in the middle.

Sampling a 1 GHz signal

I increased the input signal from 1 MHz to 1 GHz. Here's the ADC input at 5 Gsps: with a little bit of effort, you can once again imagine a sine wave in those samples. There's a periodicity of 5 samples, as one would expect for a 1 GHz to 5 Gsps ratio. The sample rate is still 8.3 MHz.

Sampling a 200 MHz signal

I also applied a 200 MHz input signal. The period is now ~22 samples, as expected. 200 MHz is low enough to measure with my 350 MHz bandwidth Siglent oscilloscope. To confirm that the ADG286D chip contains the CCD memory, I measured the signal on one of the differential pins going into that chip. And here it is, a nice 200 MHz signal.

A Closer Look at the Noise Issue

After initially publishing this blog post, I had a discussion on Discord about the noise issue, which made me do a couple more measurements.

Input connected to ground

Here's what the ADC input looks like when the input of the scope is connected to ground. 2 major observations:

- there's a certain amount of repetitiveness to it.
- there are these major voltage spikes in between each repetition. They are very faint on the scope shot.

Let's zoom in on that: the spikes are still hard to see so I added the arrows, but look how the sample pattern repeats after each spike! The time delay between each spike is ~23.6 us. With a sample period of 120 ns, that converts into a repetitive pattern of ~195 samples. I don't know why a pattern of 195 samples exists, but it's clear that each of those 195 locations has a fixed voltage offset. If the scope measures those offsets during calibration, it can subtract them after measurement and get a clean signal out.

50 kHz square wave

Next I applied a 50 kHz square wave to the input. This frequency was chosen so that, for the selected sample rate, a single period would cover the 15000 sampling points. 2 more observations:

- the micro-repetitiveness is still there, irrespective of the voltage offset due to the input signal. That means that subtracting the noise should be fine for different voltage inputs.
- we don't see a clean square wave outline. It looks like there's some kind of address interleaving going on.

50 kHz sawtooth wave

We can see the interleaving even better when applying a sawtooth waveform that covers one burst: instead of a clean break from high-to-low somewhere in the middle, there is a transition period where you get both high and low values. This confirms that some kind of interleaving is happening.

Conclusion

- The TDS 684B captures input signals at high speed in an analog memory and digitizes them at 8 MHz.
- The single-ended input to the ADC is noisy, yet the signal looks clean when displayed on the CRT of the scope, likely because the noise pattern is repetitive and predictable.
- In addition to noise, there's also an interleaving pattern during the reading out of the analog FIFO contents.
- The number of samples digitized is always the same, irrespective of the settings in the horizontal acquisition menu.

(Not written by ChatGPT, I just like to use bullet points…)
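As an appendix to the "repetitive and predictable noise" point in the conclusion, here is a minimal numpy sketch of the kind of per-location offset calibration speculated about above. The 195-sample pattern length comes from the measurements, but the calibration procedure itself is my assumption, not something documented by Tektronix.

```python
import numpy as np

PATTERN_LEN = 195   # repetitive pattern length measured above (~23.6 us / 120 ns)

def calibrate(grounded_bursts):
    """Average many grounded-input bursts to estimate the fixed offset
    of each of the PATTERN_LEN memory locations (assumed procedure)."""
    samples = np.concatenate(grounded_bursts)
    samples = samples[: len(samples) // PATTERN_LEN * PATTERN_LEN]
    return samples.reshape(-1, PATTERN_LEN).mean(axis=0)

def correct(burst, offsets):
    """Subtract the per-location offset from a newly acquired burst."""
    idx = np.arange(len(burst)) % PATTERN_LEN
    return burst - offsets[idx]
```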

Rohde & Schwarz AMIQ Modulation Generator - Teardown and Analog Deep Dive

Contents: Introduction · The Rohde & Schwarz AMIQ Modulation Generator · WinIQSim Software · Inside the AMIQ · The Signal Generation PCB · Analog Signal Generation Architecture: Fixed vs Variable DAC Clock · Internal Reference Clock Generation · DAC Clock Synthesizer · I/Q Output Skew Tuning · Variable Gain Amplifier · Internal Diagnostics · Efficient Distribution of Configuration Signals · Conclusion · References

Introduction

Every few months, a local company auctions off all kinds of lab, production and test equipment. I shouldn't be subscribed to their email list but I am, and that's one way I end up with more stuff that I don't really need. During a recent auction, I got my hands on a Rohde & Schwarz AMIQ, an I/Q modulation generator, for a grand total of $45. Add another 30% for the auction fee and taxes and you're still paying much less than what others would pay for a round of golf. But instead of one morning of fun, this thing has the potential to keep me busy for many weekends, so what a deal!

A few days after "winning" the auction, I drove to a dark dungeon of a warehouse in San Jose to pick up the loot. The AMIQ has a power on/off button and 3 LEDs, and that's it in terms of user interface. There are no dials, there's no display. So without any other options, I simply powered it up and was immediately greeted by the ominous clicking of a hard drive. I was right: this thing would keep me entertained for at least a little bit!

It took a significant amount of effort to restore the machine to its working state. I'll write about that in future blog posts, but let's start with an overview of the functionality and a teardown of the R&S AMIQ, and then deep dive into some of its analog circuits. AMIQ prices on eBay vary wildly, from $129 to $2600 at the time of writing this. Even if you get one of the higher priced ones, you should expect to get a unit that's close to failing due to leaking capacitors and a flaky hard drive!

The Rohde & Schwarz AMIQ Modulation Generator

Reduced to an elevator sales pitch, the AMIQ is a 2-channel arbitrary waveform generator (AWG) with a deep sample buffer. That's it! It has a streaming buffer that feeds samples to 2 14-bit DACs at a sample rate of up to 105 MHz. The two output channels, I and Q, will typically contain quadrature modulation signals that are sent to an RF vector signal generator such as a Rohde & Schwarz SMIQ for the actual high-frequency modulation. In a typical setup, the AMIQ is used to generate the baseband modulated signal and the SMIQ shifts the baseband signal to an RF frequency.

Since the AMIQ has no user interface, the waveform data must be provided by an external device. This could be a PC that runs R&S WinIQSim software, or even the SMIQ itself because it has the ability to control an AMIQ. You can also create your own waveforms and upload them via floppy disk, GPIB or an RS-232 interface using SCPI control commands. Figure 4-1 of the AMIQ Operating Manual has a simplified block diagram.
It is pretty straightforward and somewhat similar to that of my HP 33120A function generator. On the left are 2 blocks that are shared:

- a clock synthesizer
- waveform memory

And then for each channel:

- 14-bit D/A converter
- analog filters
- output section with amplifier/attenuator
- differential analog output driver (AMIQ-B2 option)

The major blocks are surrounded by a large number of DACs that are used to control everything from the tuning input of the local 10 MHz clock oscillator, gain and offset of the output signals, clock skew between the I and Q signals, and much more. You could do some of this with a modern SDR setup, but the specifications of the AMIQ units are dialed up a notch. Both channels are completely symmetrical to avoid modulation errors at the source. If there's a need to compensate for small delay differences in, for example, the external cables, you can do that by changing the skew between the clocks of the output DACs with a precision of 10 ps. Similarly, the DAC sample frequency can be programmed with 32-bit precision.

In addition to the main I/Q outputs at the front, there are a bunch of secondary input and output signals:

- 10 MHz reference input and output
- sample clock
- trigger input
- marker output
- external filter loopback output and input
- bit error rate measurement (BER) connector (AMIQ-B1 option)
- parallel interface with the digital value of the samples that are sent to the DAC (front panel, AMIQ-B3 option)

Above those speciality inputs and outputs are the obligatory GPIB interface and a bunch of generic connectors that look suspiciously like the ones you'd find on a PC from the early 2000s:

- PS/2 keyboard and mouse
- parallel port
- RS232
- USB

There are 3 different AMIQ versions:

- 1110.2003.02: 4M samples
- 1110.2003.03: 4M samples
- 1110.2003.04: 16M samples

Mine is an AMIQ-04.

WinIQSim Software

While it is possible to control the AMIQ over RS-232 or GPIB with your own software, you'd have a hard time matching the features of WinIQSim, an R&S Windows application that supports the AMIQ. With WinIQSim, you can select one of the popular communication protocols from the late nineties and early 2000s, fill in digital framing data, apply all kinds of distortions and interferences, compute the I and Q waveforms, and send them to the AMIQ. Some of the supported formats include CDMA2000, 802.11a WLAN, TD-SCDMA and more. But you don't have to use these official protocols: WinIQSim supports any kind of FSK, QPSK, QAM or other common modulation method.

You need a license for some of the communication protocols. The license is linked to the AMIQ device, not the PC, but the license check is pretty naive, and while I haven't tried it… yet, the EEVblog forum has discussions about how to enable features yourself. My device only came with a license for IS-95 CDMA.

Inside the AMIQ

It's trivial to open up an AMIQ: after removing the 4 feet in the back with a regular Phillips screwdriver, you can simply slide off the outer case. It has 2 major subsystems:

- the top contains all the components of a standard PC
- the bottom has a signal generation PCB

The PCB looks incredibly clean and well laid out, and I love how they printed the names of the different sections on the metal gray shielding plates. We'll leave the PC system for a future blog post and focus on the signal generation PCB.

The Signal Generation PCB

Let's remove the shielding plates to see what's underneath.
I had to drill out one of the screws that attach the plates because the head was stripped. (Did somebody before me already try to repair it?) The left and right sections of the bottom half are perfectly symmetrical, as one would expect for a device that has the ability to tune skew mismatches with a 10 ps precision.

Rohde & Schwarz recently made the terrible decision to lock all their software and manuals behind an approval-only corporate log-in wall, but luckily some of the most important AMIQ assets can be found online elsewhere, including the operating manual and a service manual that contains the full schematics! Let's dig a bit deeper into the various aspects of the design. In what follows, I'll be focusing primarily on the analog aspects of the design. This is a very personal choice: it's not that the digital sections aren't important, it's just that, as a digital design engineer, they're not particularly interesting to me. By studying the analog sections, I hope to stumble into circuits that I didn't really know a lot about before.

Fantastic Schematics

Before digging in for real, a word about the schematics: they are fantastic. Each sub-system has a block diagram that is already pretty detailed, with signal names that match the schematics and test points, often with annotations to indicate the voltage or frequency range. Here's the block diagram of the reference and DAC clock generation section, for example (schematic page 5). Signals that come from or go to other pages are fully referenced. Look at the SYN_OUT_CLK signal below (schematic page 10): the signal is also used on page 6, coordinates 1D and 7B, and page 22, coordinate 8A. How cool is that?

Signal path test points

One of the awesome features of the PCB is the generous number of test points. We're not just talking PCB test points against which you can hold your oscilloscope probe, or even header pins, though there are plenty of those too, but full-on SMB connectors. In addition to these SMB connectors, there are also plenty of jumpers that can be used to interrupt the default signal flow and insert your own test signal instead.

Analog Signal Generation Architecture: Fixed vs Variable DAC Clock

In the HP 33120A, the DAC has a fixed 40 MHz clock. There's a 16 kB waveform RAM that contains, say, one quarter period of a 100 Hz sine. If you want to send out a sine wave of 200 Hz, instead of sequentially stepping through all the addresses of the waveform RAM, you just skip every other address. One of the benefits of this kind of generation scheme is that you can make do with a fixed frequency analog anti-aliasing filter: the Nyquist frequency is always the same after all. A major disadvantage, however, is that even if the output signal has a bandwidth of only 1 MHz, you still need to feed the DAC at the fixed clock rate. You could insert a digital upsampling filter between the waveform memory and the DAC, but that requires significant mathematical DSP fire power, or you'd have to increase the depth of the waveform memory.

For an arbitrary waveform generator, it makes more sense to run the DAC at whichever clock speed is sufficient to meet the Nyquist requirement of the desired signal and provide a number of different filtering options. The AMIQ has 4 such options: no filter, a 25 MHz filter, a 2.5 MHz filter, or a loopback through an external filter.
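Going back to the fixed-clock scheme of the HP 33120A for a second, here's a minimal Python sketch of how a fixed-rate DAC can play a waveform table at different output frequencies by stepping through addresses faster or slower. The table size and clock match the numbers mentioned above, but the code is a simplification: it stores a full period instead of a quarter period, and it's my own illustration, not how either instrument actually implements it.

```python
import numpy as np

DAC_CLOCK = 40e6            # fixed DAC sample rate (HP 33120A style)
TABLE_SIZE = 16384          # waveform RAM; simplified to hold one full period
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def play(f_out, n_samples):
    """Step through the waveform RAM with an address increment proportional
    to the desired output frequency; doubling f_out steps twice as fast."""
    increment = f_out * TABLE_SIZE / DAC_CLOCK
    addresses = (np.arange(n_samples) * increment).astype(int) % TABLE_SIZE
    return table[addresses]

# The DAC clock and the anti-aliasing filter stay the same for both of these:
samples_100hz = play(100.0, 1000)
samples_200hz = play(200.0, 1000)
```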
The DAC sample clock range is huge, from 10 Hz all the way to 105 MHz, though specifications are only guaranteed up to 100 MHz. According to the data sheet, the clock frequency can be set with a precision of 10^-7.

Internal Reference Clock Generation

Like all professional test and measurement equipment, the AMIQ uses a 10 MHz reference clock that can come from outside or that can be generated locally with a 10 MHz crystal. It's common for high-end equipment to have an oven controlled crystal oscillator (OCXO), but the AMIQ has a lower spec'ed temperature compensated one (TCXO), a Milliren Technologies 453-0210. If we look at the larger reference clock generation block diagram (schematic page 5), we can see something slightly unusual: instead of selecting between the internal TCXO output or the external reference clock input, the internal reference clock always comes from the TCXO (green). When the internal clock is selected, the TCXO output frequency can be tuned with an analog signal that comes from the VTCXO_TUNE DAC (blue), but when the external reference input is active, the TCXO is phase locked to the external clock. You can see the phase comparator and low pass filter in red.

The reason for using a PLL with the internal TCXO as the voltage controlled oscillator is probably to ensure that the generated reference clock has the phase noise of the TCXO while tracking the frequency of the external reference clock: a PLL acts as a low-pass filter to the reference clock and as a high-pass filter to the VCO. If the TCXO has better high frequency phase noise than the external reference clock, that makes sense. This is really out of my wheelhouse, so take all of this with a grain of salt… SYN_REF is the output of the internal reference clock generation unit.

DAC Clock Synthesizer

The clock synthesizer creates a highly programmable DAC clock from the internal reference clock SYN_REF of the previous section. It should come as no surprise that this clock is generated by a PLL as well (schematic page 5). There are two speciality components in the clock generation path:

- a Mini-Circuits JTOS-200 VCO
- an Analog Devices AD9850 DDS synthesizer

The VCO has an operating frequency between 100 and 200 MHz. I don't know enough about VCOs to give meaningful commentary about the specifications, but based on comparisons with similar components on Digikey, such as this FMVC11009, it's safe to assume that it's an expensive and high quality component. The AD9850 is located in the feedback path of the PLL, where it acts as a feedback divider with a precision of 32 bits. The signal flow inside the AD9850 is interesting. It works as follows:

- each clock cycle, the phase of a numerically controlled oscillator (NCO) accumulates with the programmable 32-bit increment.
- the upper bits of the phase accumulator serve as the address of a sine waveform table.
- a 10-bit DAC converts the digital sine wave to analog.
- the analog signal is sent through an external low-pass filter.
- the output of the low-pass filter goes back into the AD9850 and through a comparator to generate a digital output clock.

This thing has its own programmable signal generator! But why the roundtrip from digital to analog and back to digital? In theory, the MSB of the NCO could be used as the output clock of the clock generator. The problem is that in such a configuration, the length of each clock period toggles between N and N+1 clock cycles, with a ratio such that the average clock period ends up with the desired value.
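Here's a small Python sketch of that MSB-only behavior. It just models a generic 32-bit phase accumulator with an arbitrary tuning word, not the actual AD9850 internals or an actual AMIQ setting, but it shows how the time between MSB rising edges jitters between N and N+1 reference cycles:

```python
# Illustrative model of an NCO whose MSB is used directly as the output clock.
PHASE_BITS = 32
INCREMENT = 123_456_789          # 32-bit tuning word (example value only)

phase = 0
edges = []                       # reference-cycle index of each rising MSB edge
prev_msb = 0
for cycle in range(200):
    phase = (phase + INCREMENT) % (1 << PHASE_BITS)
    msb = phase >> (PHASE_BITS - 1)
    if msb and not prev_msb:
        edges.append(cycle)
    prev_msb = msb

periods = [b - a for a, b in zip(edges, edges[1:])]
print(periods)   # e.g. [35, 35, 34, 35, ...]: the period toggles between N and N+1
```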
But this creates major spurs in the frequency spectrum of the generated clock. When fed into the phase comparator of a PLL, these frequency spurs can show up as jitter at the output of the PLL and thus in the spectrum of the generated I and Q signals. By converting the signal to analog and using a low pass filter, the spurs can be filtered away. The combination of a low pass filter and a comparator acts as an interpolator: the edges of the generated clock fall somewhere in between N and N+1 (schematic page 21).

The steepness of the low pass filter depends on the ratio between the input and the output clock: the lower the ratio, the steeper the filter. There are a bunch of binary clock dividers that make it hard to know the exact ratio, but if the output of the VCO is 100 MHz and the input 10 MHz or less, there is a 10:1 ratio. The AMIQ has a 7th order elliptic low pass filter. I ran a quick simulation in LTspice to check the behavior (dds_filter.asc source file). The filter has a cut-off frequency of around 13 MHz. In modern fractional PLLs, instead of using a regular DAC, one could use a high-order sigma-delta unit to create a pulse density modulated output with the noise pushed to higher frequencies and a low pass filter that can be less aggressive. There's plenty of literature online about DDS clock generators, some of which I've listed in the references at the bottom, but the datasheet of the AD9850 itself is a good start.

I/Q Output Skew Tuning

Earlier I mentioned the ability to tune the skew between the I and the Q output to compensate for a difference in cable length when connecting the AMIQ to an RF signal generator. While the digital waveform fetching circuit works on one clock, the two clocks that go to the DACs are the ones that can be changed. Here's what the skewing circuit looks like (schematic page 10): starting with input signal DAC_CLK, the goal is to create I_DAC_CLK and Q_DAC_CLK, which can be moved up to 1 ns ahead of or behind each other.

Signals can be delayed by sending them through an R/C combo. Since we need a variable delay, we need a way to change the value of either the R or the C. A varactor or varicap diode is exactly that kind of device: its capacitance changes depending on the reverse bias voltage across the diode. Static input signal SKEW_TUNE comes from a DAC. It goes to the cathode of one varicap and the anode of the other. When its voltage increases, the capacitance of those 2 diodes moves in opposite ways, and so does the R/C delay along the I and Q clock paths. A BB147 varicap has a capacitance that varies between 2 pF and 112 pF depending on the bias voltage. Capacitors C168 and C619 prevent the relatively high bias voltages from reaching the digital signal path. Does the circuit work? Absolutely! The scope photos below show the impact of the skewing circuit when dialed to the maximum in both directions: with a 2 ns/div horizontal scale, you can see a skew of ~1 ns either way.
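As a rough plausibility check of that ~1 ns number, here's a tiny first-order R/C delay estimate. The series resistance and the usable part of the capacitance range are assumptions on my part, not values copied from the AMIQ schematic, so treat the result as illustrative only:

```python
# First-order estimate of how much delay a varicap can add to a clock edge.
# R_SERIES and the capacitance range are assumed values, not AMIQ schematic data.
R_SERIES = 100.0        # Ohm, assumed series resistance in the clock path
C_MIN = 10e-12          # F, varicap capacitance at one end of the tuning range
C_MAX = 25e-12          # F, capacitance at the other end (a BB147 spans 2..112 pF)

delay_change = R_SERIES * (C_MAX - C_MIN)
print(f"delay change: {delay_change * 1e9:.1f} ns")   # ~1.5 ns, in the right ballpark
```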
Variable Gain Amplifier

The analog signal path starts at the DAC and then goes through the anti-aliasing filters, an output section that does amplification and attenuation, and finally the output connector board. It is common for signal generators to have a fixed gain amplification section and then some selectable fixed attenuation stages: amplifiers with a variable gain and low distortion are hard to design. If you need a signal amplitude that can't be achieved with one of the fixed attenuation settings, one solution is to scale down the signal before it enters the DAC to reduce the output amplitude, though that's at the expense of losing some of the dynamic range of the DAC.

This is not the case for the AMIQ: while it can use a signal path that only uses fixed amplification and attenuation stages, it also offers the intriguing option to send the analog signal through an analog multiplier stage. If we zoom down from the system diagram to the block diagram (schematic page 3), we can see how the multiplier/variable attenuator sits between the filter and the output amplifier, with 2 control input signals: AMPL_CNTRL and OFFSET. This circuit exists twice, of course: once for the I and once for the Q channel. Let's check out the details!

The heavy lifting of the variable gain amplifier is performed by an Analog Devices AD835 250 MHz, Voltage Output, 4-Quadrant Multiplier, a part from their Analog Multipliers & Dividers catalog (schematic page 14). It's a tiny 8-pin device that calculates W = (X1-X2) * (Y1-Y2) + Z. In low volume, it will set you back $32 on Digikey. In addition to the multiplication, you can set a fixed output gain by feeding output W through a resistive divider back into Z. For inputs less than 10 MHz, it has a typical harmonic distortion of -70 dB. Analog Devices datasheets usually have a Theory of Operation section that explains, well, the underlying theory of operation. The AD835 has such a section as well, but it doesn't go any further than stating that the multiplier is based on a classic form, having a translinear core, supported by three (X, Y, and Z) linearized voltage-to-current converters, and the load driving output amplifier. I have no clue how the thing works!

In the case of the AMIQ, X2 and Y2 are strapped to ground, and there's no feedback from W back into Z either, which reduces the functionality to: W = FILTER_OUT * AMPL_VAR + OFFSET. AMPL_VAR and OFFSET are static analog values that are each created by a 12-bit DAC8143, just like many other analog configuration signals. It's almost a shame that this powerful little device is asked to perform such a basic operation. While researching the AD835, somebody pointed out the AD8310, another interesting speciality chip from Analog Devices. It's a DC to 440 MHz, 95 dB logarithmic amplifier, a converter from linear to logarithmic scale basically. Discovering little gems like this is why I love studying schematics of complex devices.

Internal Diagnostics

The AMIQ has a set of 15 internal analog signals to monitor the health of the device. Various meaningful signals are gathered all around the signal generation board and measured at power up to give you that dreaded Diagnostic Check Fail message (schematic page 27). The circuit itself is straightforward: 2 8-to-1 analog multiplexers feed into a CS5507AS 16-bit AD converter. It can only do 100 samples per second, but that's sufficient to measure a bunch of mostly static signals. Like many other devices, the measured value is sent serially to one of the FPGAs. The ADC needs a 2.5V reference voltage. It's funny that this reference voltage also goes to the analog multiplexers as one of the diagnostic signals. One wonders what the ADC returns as a result when it tries to convert the output of a broken reference voltage generator.

There are a bunch of circuits in the AMIQ whose only purpose is generating diagnostic signals. Here's a good example of that (schematic page 14): I_OUT_DIAG and Q_OUT_DIAG are the outputs after the attenuator that will eventually end up at the connectors. The circuit in the top red square is a signal peak detector, similar to what you'd find in the rectifier of a power supply. It allows the AMIQ to track the output level of signals with a frequency that is way higher than the sample rate of the ADC. The circuit in the red rectangle below it performs a digital XOR on the analog I and Q signals and then sends the result through a simple R/C low pass filter. I think that it allows the AMIQ to check that the phase difference between the I and Q channels is sensible when applying a test signal during the power-on self-test.

Efficient Distribution of Configuration Signals

The AMIQ signal board has hundreds of configuration bits: there are the obvious enable/disable or selection bits, such as those to select between different output filters, but the lion's share are used to set the output value of many 12-bit DACs. Instead of using parallel busses that fan out from an FPGA, the AMIQ has a serial configuration scan chain. Discrete 8-bit 74HCT4094 shift-and-store registers are located all over the design for the digital configuration bits. The DAC8143 devices have their own built-in shift register and are part of the same scan chain. Schematic page 19 shows an example of that: the red scan chain data input goes through the VTCXO_TUNE DAC, then a 74HCT4094, after which it exits to some other page of the schematics.
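To illustrate the idea of such a configuration scan chain, here's a minimal Python sketch of how a controller could shift bits through a daisy chain of 74HCT4094-style shift-and-store registers. It's a conceptual model only: the actual AMIQ chain is driven by an FPGA and also contains the DAC8143 shift registers.

```python
# Conceptual model of a serial configuration scan chain: all registers share
# the clock and strobe lines, and data ripples from one 8-bit register to the next.
class ShiftRegister8:
    def __init__(self):
        self.shift = [0] * 8     # shift stage (clocked)
        self.latch = [0] * 8     # output stage (updated on strobe)

    def clock_in(self, bit):
        out = self.shift.pop()   # bit that ripples to the next device in the chain
        self.shift.insert(0, bit)
        return out

    def strobe(self):
        self.latch = self.shift.copy()

def load_chain(chain, bits):
    """Shift a full configuration image through the chain, then latch it."""
    for bit in bits:
        for device in chain:     # the data output of one device feeds the next
            bit = device.clock_in(bit)
    for device in chain:
        device.strobe()

chain = [ShiftRegister8() for _ in range(3)]
load_chain(chain, [1, 0, 1, 1] * 6)   # 24 bits for 3 devices
```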
Conclusion

Around the late nineties, test equipment companies started to stop adding schematics to their service manuals, but the R&S AMIQ is a nice exception to that. And while the device already has a bunch of FPGAs, most of the components are off-the-shelf single-function components. Thanks to that, the AMIQ is an excellent candidate for a deep dive: all the information is there, you just need to spend a bit of effort to go through things. I had a ton of fun figuring out how things worked.

References

Rohde & Schwarz documents:
- R&S - IQ Modulation Generator AMIQ
- R&S - AMIQ Datasheet
- R&S - AMIQ Operating Manual
- R&S - AMIQ Service Manual with schematic

Various application notes:
- R&S - Floppy Disk Control of the I/Q Modulation Generator AMIQ
- R&S - Software WinIQSIM for Calculating I/Q Signals for Modulation Generator R&S AMIQ
- R&S - Creating Test Signals for Bluetooth with AMIQ / WinIQSIM and SMIQ
- R&S - WCDMA Signal Generator Solutions
- R&S - Golden devices: ideal path or detour?
- R&S - Demonstration of BER Test with AMIQ controlled by WinIQSIM

Other AMIQ content on the web:
- zw-ix has a blog - Getting a Rohde Schwarz AMIQ up and running
- zw-ix has a blog - Connecting a Rohde Schwarz AMIQ to a SMIQ04
- Bosco tweets

DDS Clock Synthesis:
- MT-085 Tutorial: Fundamentals of Direct Digital Synthesis (DDS)
- How to Predict the Frequency and Magnitude of the Primary Phase Truncation Spur in the Output Spectrum of a Direct Digital Synthesizer (DDS)

Related content:
- The Signal Path - Teardown, Repair & Analysis of a Rohde & Schwarz AFQ100A I/Q (ARB) Modulation Generator

DSLogic U3Pro16 Review and Teardown

Contents: Introduction · The DSLogic U3Pro16 · In the Box · Probe Cables and Clips · The Controller Hardware · The Input Circuit · Impact of Input Circuit on Circuit Under Test · Additional IOs: External Clock, Trigger In, Trigger Out · Software: From Saleae Logic to PulseView to DSView · Installing DSView on a Linux Machine · DSView UI · Streaming Data to the Host vs Local Storage in DRAM · Triggers · Conclusion · References · Footnotes

Introduction

The year was 2020 and offices all over the world shut down. A house remodel had just started, so my office moved from a comfortably air-conditioned corporate building to a very messy garage. Since I'm in the business of developing and debugging hardware, a few pieces of equipment came along for the ride, including a Saleae Logic Pro 16. I had the unit for work stuff, but I may once in a while have used it for some hobby-related activities too.

There's no way around it: Saleae makes some of the best USB logic analyzers around. Plenty of competitors have matched or surpassed their digital features, but none have the ability to record the 16 channels in analog format as well. After corporate offices reopened, the Saleae went back to its original habitat and I found myself without a good 16-channel USB logic analyzer. Buying a Saleae for myself was out of the question: even after the $150 hobbyist discount, I can't justify the $1350 price tag. After looking around for a bit, I decided to give the DSLogic U3Pro16 from DreamSourceLab a chance. I bought it on Amazon for $299. In this blog post, I'll look at some of the features, my experience with the software, and I'll also open it up to discover what's inside.

The DSLogic U3Pro16

The DSLogic series currently consists of 3 logic analyzers:

- the $149 DSLogic Plus (16 channels)
- the $299 DSLogic U3Pro16 (16 channels)
- the $399 DSLogic U3Pro32 (32 channels)

The DSLogic Plus and U3Pro16 both have 16 channels, but the acquisition memory of the Plus is only 256 Mbits vs 2 Gbits for the Pro, and it has to make do with USB 2.0 instead of a USB 3.0 interface, a crucial difference when streaming acquisition data straight to the PC to avoid the limitations of the acquisition memory. There's also a difference in sample rate, 400 MHz vs 1 GHz, but that's not important in practice. The only functional difference between the U3Pro16 and U3Pro32 is the number of channels. It's tempting to go for the 32 channel version, but I've rarely had the need to record more than 16 channels at the same time, and if I do, I can always fall back to my HP 1670G logic analyzer, a pristine $200 flea market treasure with a whopping 136 channels¹. So the U3Pro16 it is!

In the Box

The DSLogic U3Pro16 comes with a nice, elongated hard case. Inside, you'll find:

- the device itself. It has a slick aluminum enclosure.
- a USB-C to USB-A cable
- 5 4-way probe cables and 1 3-way clock and trigger cable
- 18 test clips

Probe Cables and Clips

You read it right, my unit came with 5 4-way probe cables, not 4. I don't know if DreamSourceLab added one extra in case you lose one or if they mistakenly included one too many, but it's good to have a spare. The cables are slightly stiffer than those that come with a Saleae, but not to the point that they add meaningful additional strain to the probe point. They're stiffer because each of the 16 probe wires carries both signal and ground, probably as a thin coaxial cable, which lowers the inductance of the probe and reduces ringing when measuring signals with fast rise and fall times.
In terms of quality, the probe cables are a step up from the Saleae ones. The case is long enough so that the probe cables can be stored without bending them. The quality of the test clips is not great, but they are no different from those of the 5 times more expensive Saleae Logic 16 Pro. Both are clones of the HP/Agilent logic analyzer grabbers that I got from eBay and will do the job, but I much prefer the ones from Tektronix. The picture below shows 4 different grabbers, from left to right: Tektronix, Agilent, Saleae and DSLogic ones. Compared to the 3 others, the stem of the Tektronix probe is narrow, which makes it easier to place multiple ones next to each other on fine-pitch pin arrays.

If you're thinking about upgrading your current probes to Tektronix ones: stay away from fakes. As I write this, you can find packs of 20 probes on eBay for $40 (incl. shipping), so around $2 per probe. Search for "Tektronix SMG50" or "Tektronix 020-1386-01". Meanwhile, you can buy a pack of 12 fake ones on Amazon for $16, or $1.30 a piece. They work, but they aren't any better than the probes that come standard with the DSLogic. The stem of the fake one is much thicker and the hooks are different too. The Tek probe has rounded hooks with a sharp angle at the tip (Tektronix hooks), while the hooks of a fake probe are flat and don't attach nearly as well to their target (fake hooks). If you need to probe targets with a pitch that is smaller than 1.25mm, you should check out these micro clips that I reviewed ages ago.

The Controller Hardware

Each cable supports 4 probes and plugs into the main unit with 8 0.05" pins in a 4x2 configuration, one pin for the signal, one pin for ground. The cable itself has a tiny PCB sticking out that slots into a gap of the aluminum enclosure. This way it's not possible to plug in the cable incorrectly… unlike the Saleae. It's great.

When we open up the device, we can see an Infineon (formerly Cypress) CYUSB3014-BZX EZ-USB FX3 SuperSpeed controller. A Saleae Logic Pro uses the same device. These are your go-to USB interface chips when you need a microcontroller in addition to the core USB3 functionality. They're relatively cheap too: you can get them for $16 in single digit quantities at LCSC.com.

The other side of the PCB is much busier. The big ticket components are:

- a Spartan-6 XC6SLX16 FPGA: responsible for data acquisition, triggering, run-length encoding/compression, data storage to DRAM, and sending data to the CYUSB3014. A Saleae Logic 16 Pro has a smaller Spartan-6 LX9. That makes sense: its triggering options aren't as advanced as the DSLogic's, and since it lacks external DDR memory, it doesn't need a memory controller on the FPGA either.
- a DDR3-1600 DRAM: a Micron MT41K128M16JT-125, marked D9PTK, with 2 Gbits of storage and a 16-bit data bus.
- an Analog Devices ADF4360-7 clock generator: I found this a bit surprising. A Spartan-6 LX16 FPGA has 2 clock management tiles (CMT) that each have 1 real PLL and 2 DCMs (digital clock managers) with delay locked loop, digital frequency synthesizer, etc. The VCO of the PLL can be configured with a frequency up to 1080 MHz, which should be sufficient to capture signals at 1 GHz, but clearly there was a need for something better. The ADF4360-7 can generate an output clock as fast as 1800 MHz.

There's obviously an extensive supporting cast:

- a Macronix MX25R2035F serial flash: this is used to configure the FPGA.
- an SGM2054 DDR termination voltage controller
- an LM26480 power management unit: it has two linear voltage regulators and two step-down DC-DC converters.
- two clock oscillators: 24 MHz and 19.2 MHz
- a TI HD3SS3220 USB-C mux: this is the glue logic that makes it possible for USB-C connectors to be orientation independent.
- an SP3010-04UTG for USB ESD protection, marked QH4

Two 5x2 pin connectors, J7 and J8, on the right side of the PCB are almost certainly used to connect programming and debugging cables to the FPGA and the CYUSB3014.

The Input Circuit

I spent a bit of time Ohm-ing out the input circuit. Here's what I came up with: the cable itself has a 100k Ohm series resistance. Together with a 100k Ohm shunt resistor to ground at the entrance of the PCB, it acts as a divide-by-two resistive divider. The series resistor also limits the current going into the device. Before passing through a 33 Ohm series resistor that goes into the FPGA, there's an ESD protection device. I'm not 100% sure, but my guess is that it's an SRV05-4D-TP or some variant thereof. I'm also not 100% sure why the 33 Ohm resistor is there. It's common to have these types of resistors on high speed lines to avoid reflections, but since there's already a 100k resistor in the path, I don't think that makes much sense here. It might be there for additional protection of the ESD structure that resides inside the FPGA IOs?

A DSLogic has a fully programmable input threshold voltage. If that's the case, then where's the opamp to compare the input voltage against this threshold voltage? (There is such a comparator on a Saleae Logic Pro!) The answer to that question is: "it's in the FPGA!" FPGA IOs can support many different I/O standards: single-ended ones, think CMOS and TTL, and a whole bunch of differential standards too. Differential protocols compare a positive and a negative version of the same signal, but nothing prevents anyone from assigning a static value to the negative input of a differential pair and making the input circuit behave as a regular single-ended input with a programmable threshold. There is plenty of literature out there about using the LVDS comparator in single-ended mode. It's even possible to create pretty fast analog-to-digital converters this way, but that's outside the scope of this blog post.

Impact of Input Circuit on Circuit Under Test

7 years ago, OpenTechLab reviewed the DSLogic Plus, the predecessor of the DSLogic U3Pro16. Joel spent a lot of time looking at its input circuit. He mentions a 7.6k Ohm pull-down resistor at the input, different from the 100k Ohm that I measured. There's no mention of a series resistor in the cable or about the way adjustable thresholds are handled, but I think that the DSLogic Plus has a similar input circuit. His review continues with an in-depth analysis of how measuring a signal can impact the signal itself; he even builds a simulation model of the whole system and does a real-world comparison between a DSLogic measurement and a fake-Saleae one. While his measurements are convincing, I wasn't able to repeat his results on a similar setup with a DSLogic U3Pro and a Saleae Logic Pro: in both cases, a 200 MHz signal was still good enough. I need to spend a bit more time to better understand the difference between my setup and his… Either way, I recommend watching this video.

Additional IOs: External Clock, Trigger In, Trigger Out

In addition to the 16 input pins that are used to record data, the DSLogic has 3 special IOs and a separate 3-wire cable to wire them up.
They are marked with the characters "OIC" above the connector, which stands for Output, Input, Clock.

Clock

Instead of using a free-running internal clock, the 16 input signals can be sampled with an external sampling clock. This corresponds to a mode that's called "state clocking" in big-iron Tektronix and HP/Agilent/Keysight logic analyzers. Using an external clock that is the same as the one that is used to generate the signals that you want to record is a major benefit: you will always record the signal at the right time, as long as setup and hold requirements are met. When using a free-running internal sampling clock, the sample rate must be a factor of 2 or more higher to get an accurate representation of what's going on in the system.

The DSLogic U3Pro16 provides the option to sample the data signals at the positive or negative edge of the external clock. On one hand, I would have preferred more options for moving the edge of the clock back and forth. It's something that should be doable with the DLLs that are part of the DCM blocks of a Spartan-6. But on the other hand, external clocking is not supported at all by Saleae analyzers. The maximum clock speed of the external clock input is 50 MHz, significantly lower than the free-running sample speed. This is usually the case as well for big iron logic analyzers. For example, my old Agilent 1670G has a free running sampling clock of 500 MHz and supports a maximum state clock of 150 MHz.

Trigger In

According to the manuals: "TI is the input for an external trigger signal". That's a great feature, but I couldn't figure out a way in DSView to enable it. After a bit of googling, I found the following comment in an issue on GitHub: "This 'TI' signal has no function now. It's reserved for compatible and further extension." This comment is dated July 29, 2018. A closer look at the U3Pro16 datasheets shows the description of the "TI" input as "Reserved"…

Trigger Out

When a trigger is activated inside the U3Pro, a pulse is generated on this pin. The manual doesn't give more details, but after futzing around with the horrible oscilloscope UI of my 1670G, I was able to capture a 500 ms trigger-out pulse of 1.8V.

Software: From Saleae Logic to PulseView to DSView

When Saleae first came to market, they raised the bar for logic analyzer software with Logic, which had a GUI that allowed scrolling and zooming in and out of waveforms at blazing speed. Logic also added a few protocol decoders, and a C++ API to create your own decoders. It was the inspiration for PulseView, an open source equivalent that acts as the front-end application of SigRok, an open source library and tool that acts as the waveform data acquisition backend. PulseView supports protocol decoders as well, but it has an easier to use Python API and it allows stacked protocol decoders: a low-level decoder might convert the recorded signals into, say, I2C tokens (start/stop/one/zero). A second decoder creates byte-level I2C transactions out of the tokens. An I2C EEPROM decoder could then interpret multiple I2C transactions as read and write operations. PulseView has tons of protocol decoders, from simple UART transactions all the way to USB 2.0 decoders.
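Just to illustrate the stacking idea, here's a simplified sketch of a two-level decode chain: one function that turns SDA/SCL samples into I2C tokens, and a second one that assembles those tokens into bytes. This is not the actual sigrok decoder API (real decoders are classes registered with the sigrokdecode module); it only shows the concept.

```python
# Simplified illustration of stacked protocol decoding (not the sigrok API).
def i2c_tokens(samples):
    """Level 1: turn (scl, sda) sample pairs into start/stop/bit tokens."""
    prev_scl, prev_sda = samples[0]
    for scl, sda in samples[1:]:
        if scl and prev_scl and sda != prev_sda:
            yield "start" if prev_sda else "stop"   # SDA edge while SCL is high
        elif scl and not prev_scl:                  # rising SCL edge: data bit is valid
            yield sda
        prev_scl, prev_sda = scl, sda

def i2c_bytes(tokens):
    """Level 2: assemble 8 data bits (plus ACK) into bytes."""
    bits = []
    for t in tokens:
        if t in ("start", "stop"):
            bits = []
        else:
            bits.append(t)
            if len(bits) == 9:                      # 8 data bits + ACK/NACK
                yield sum(b << (7 - i) for i, b in enumerate(bits[:8]))
                bits = []
```

A third, protocol-specific decoder (an EEPROM one, for example) would then consume the output of i2c_bytes, which is exactly the layering described above.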
When the DSLogic logic analyzer hit the market after a successful Kickstarter campaign, it shipped with DSView, DreamSourceLab's closed source waveform viewer. However, people soon discovered that it was a reskinned version of PulseView, a big no-no since the latter is developed under a GPL3 license. After a bit of drama, DreamSourceLab made DSView available on GitHub under the required GPL3 as well, with attribution to the sigrok project. DSView is a hard fork of PulseView and there are still some bad feelings because DreamSourceLab doesn't push changes to the PulseView project, but at least they're legally in the clear for the past 6 years.

The default choice would be to use DSView to control your DSLogic, but Sigrok/PulseView supports the DSLogic as well. In the figure below, you can see DSView in demo mode, no hardware device connected, and an example of the 3 stacked protocols described earlier. For this review, I'll be using DSView. Saleae has since upgraded Logic to Logic 2, which now also supports stacked protocol decoders. It still uses a C++ API though. You can find an example decoder here.

Installing DSView on a Linux Machine

DreamSourceLab provides DSView binaries for Windows and MacOS but not for Linux. When you click the Download button for Linux, it returns a tar file with the source code, which you're expected to compile yourself. I wasn't looking forward to running into the usual issues with package dependencies and build failures, but after following the instructions in the INSTALL file, I ended up with a working executable on the first try.

DSView UI

The UI of DSView is straightforward and similar to Saleae Logic 2. There are things that annoy me in both tools, but I have a slight preference for Logic 2. Both DSView and Logic 2 have a demo mode that allows you to play with them without a real device attached. If you want to get a feel for what you like better, just download the software and play with it. Some random observations:

- DSView can pan and zoom in or out just as fast as Logic 2.
- On a MacBook, the way to navigate through the waveform really rubs me the wrong way: it uses the pinching gesture on a trackpad to zoom in and out. That seems like the obvious way to do it, but since browsing through a waveform is such a common operation, it slows you down. On my HP Laptop 17, DSView uses the 2 finger slide up and down to zoom in and out, which is much faster. Logic 2 also uses the 2 finger slide up and down.
- The stacked protocol decoders are amazing.
- Like Logic 2, DSView can export decoded protocols as CSV files, but only one protocol at a time. It would be nice to be able to export multiple protocols in the same CSV file so that you can more easily compare transaction flow between interfaces.
- Logic 2 behaves predictably when you navigate through waveforms while the device is still acquiring new data. DSView behaves a bit erratically.
- In DSView, you need to double click on the waveform to set a time marker. That's easy enough, but it's not intuitive, and since I only use the device occasionally, I need to google it every time I take the device out of the closet.
- You can't assign a text label to a DSView cursor/time marker.

None of the points above disqualify DSView: it's a functional and stable piece of software. But I'd be lying if I wrote that DSView is as frictionless and polished as Logic 2.

Streaming Data to the Host vs Local Storage in DRAM

The Saleae Logic 16 Pro only supports streaming mode: recorded data is immediately sent to the PC to which the device is connected. The U3Pro supports both streaming and buffered mode, where data is written to the DRAM that's on the device and only transported to the host when the recording is complete. Streaming mode introduces a dependency on the upstream bandwidth. An Infineon FX3 supports USB3 data rates up to 5 Gbps, but it's far from certain that those rates are achieved in practice. And if so, it still limits recording 16 channels to around 300 MHz, assuming no overhead. In practice, higher rates are possible because both devices support run length encoding (RLE), a compression technique that reduces sequences of the same value to that value and the length of the sequence. Of course, RLE introduces recording uncertainty: high activity rates may result in exceeding the available bandwidth.
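Run length encoding itself is simple enough to show in a few lines. This is just the generic textbook technique, not the DSLogic's actual on-FPGA implementation or sample format:

```python
def rle_encode(samples):
    """Collapse runs of identical 16-bit samples into [value, run_length] pairs."""
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return runs

# A mostly idle bus compresses extremely well...
print(rle_encode([0x0000] * 1000 + [0xFFFF] * 4))   # [[0, 1000], [65535, 4]]
# ...while a signal that toggles on every sample doesn't compress at all,
# which is why high activity can still exceed the available USB bandwidth.
print(len(rle_encode([0x0000, 0xFFFF] * 8)))        # 16 runs for 16 samples
```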
The U3Pro has a 16-bit wide 2 Gbit DDR3 DRAM with a maximum data rate of 1.6G samples per second. Theoretically, that makes it possible to record 16 channels at a 1.6 GHz sample rate, but that assumes accessing DRAM with 100% efficiency, which is never the case. The GUI has the option of recording 16 signals at 500 MHz or 8 signals at 1 GHz. Even when recording to the local DRAM, RLE compression is still possible. When RLE is disabled and the highest sample rate is selected, 268 ms of data can be recorded.

When connected to my Windows laptop, buffered mode worked fine, but on my MacBook Air M2, DSView always hangs when downloading data that was recorded at high sample rates and I have to kill the application. In practice, I rarely record at high sample rates and I always use streaming mode, which works reliably on the Mac too. But it's not a good look for DSView.

Triggers

One of the biggest benefits of the U3Pro over a Saleae is its trigger capability. Saleae Logic 2.4.22 offers the following options: you can set a rising edge, falling edge, a high or a low level on 1 signal in combination with some static values on other signals, and that's it. There's not even a rising-or-falling edge option. It's frankly a bit embarrassing. When you have an FPGA at your disposal, triggering functionality is not hard to implement.

Meanwhile, even in Simple Trigger mode, the DSLogic can trigger on multiple edges at the same time, something that can be useful when using an external sampling clock. But the DSLogic really shines when enabling the Advanced Trigger option. In Stage Trigger mode, you can create state sequences that are up to 16 phases long, with 2 16-bit comparisons and a counter per stage. Alternatively, Serial Trigger mode is powerful enough to capture protocols like I2C, as shown below, where a start flag is triggered by a falling edge of SDA when SCL is high, a stop flag by a rising edge of SDA when SCL is high, and data bits are captured on the rising edge of SCL. You don't always need powerful trigger options, but they're great to have when you do.

Conclusion

The U3Pro is not perfect. It doesn't have an analog mode, buffered mode doesn't work reliably on my MacBook, and the DSView GUI is a bit quirky. But it is relatively cheap, it has a huge library of protocol decoders, and the triggering modes are excellent. I've used it for a few projects now and it hasn't let me down so far. If you're in the market for a cheap logic analyzer, give it a good look.

References

- Logic Analyzer Shopping: Comparison between Saleae Logic Pro 16, Innomaker LA2016, Innomaker LA5016, DSLogic Plus, and DSLogic U3Pro16

Footnotes

1. It even has the digital storage scope option with 2 analog channels, 500 MHz bandwidth and 2 GSa/s sampling rate. ↩

HP Laptop 17 RAM Upgrade

Introduction Selecting the RAM Opening up Replacing the RAM Reassembly References Introduction I do virtually all of my hobby and home computing on Linux and MacOS. The MacOS stuff happens on a laptop and almost all Linux work on a desktop PC. The desktop PC has Windows installed on it as well, but it's too much of a hassle to reboot, so it never gets used in practice. Recently, I've been working on a project that requires a lot of Spice simulations. NGspice works fine under Linux, but it doesn't come standard with a GUI and, more importantly, the simulations often refuse to converge once your design becomes a little bit bigger. Tired of fighting against the tool, I switched to LTspice from Analog Devices. It's free to use, and while it supports Windows and MacOS in theory, the Mac version is many years behind the Windows one and nearly unusable. After dual-booting into Windows too many times, a Best Buy deal appeared on my BlueSky timeline for an HP laptop for just $330. The specs were pretty decent too: AMD Ryzen 5 7000, 17.3" 1080p screen, 512GB SSD, 8 GB RAM, full-size keyboard, Windows 11. Someone at the HP marketing department spent long hours coming up with a suitable name and settled on "HP Laptop 17". I generally don't pay attention to what's available on the PC laptop market, but it's hard to really go wrong at this price, so I took the plunge. Worst case, I'd return it. We're now 8 weeks later and the laptop is still firmly in my possession. In fact, I've used it way more than I thought I would. I haven't noticed any performance issues, the screen is pretty good, the SSD is larger than what I need for the limited use case, and, surprisingly, the trackpad is better than that of any Windows laptop I've ever used, though that's not a high bar. It doesn't come close to MacBook quality, but palm rejection is solid and it's seriously good at moving the mouse around in CAD applications. The two worst parts are the plasticky keyboard and the 8GB of RAM. I honestly can't quantify whether or not the RAM has a practical impact, but I decided to upgrade it anyway. In this blog post, I go through the steps of doing this upgrade. Important: there's a good chance that you will damage your laptop when trying this upgrade, and you'll almost certainly void your warranty. Do this at your own risk! Selecting the RAM The laptop wasn't designed to be upgradable, so you can't find any official resources about it. And with such a generic name, there are guaranteed to be multiple hardware versions of the same product. To have reasonable confidence that you're buying the correct RAM, check the full product name first. You can find it on the bottom: mine is an HP Laptop 17-cp3005dx. There's some conflicting information about being able to upgrade the thing. The Best Buy Q&A page says: The HP 17.3" Laptop Model 17-cp3005dx RAM and Storage are soldered to the motherboard, and are not upgradeable on this model. This is flat out wrong for my device. After a bit of Googling around, I learned that it has a single 8GB DDR4 SODIMM 260-pin RAM stick, but that the motherboard has 2 RAM slots and that it can support up to 2x32GB. I bought a kit with Crucial 2x16GB 3200MHz SODIMMs from Amazon. As I write this, the price is $44. Opening up Removing the screws This is the easy part. There are 10 screws at the bottom, 6 of which are hidden underneath the 2 rubber anti-slip strips. It's easy to peel these strips loose. It's also easy to put them back without losing the stickiness.
Removing the bottom cover The bottom cover is held back by those annoying plastic tabs. If you have a plastic spudger or prying tool, now is the time to use it. I didn't, so I used a small screwdriver instead. Chances are high that you'll leave some tiny scuff marks on the plastic casing. I found it easiest to open the top lid a bit, place the laptop on its side, and start on the left and right sides of the keyboard. After that, it's a matter of working your way down the long sides at the front and back of the laptop. There are power and USB connectors right against the side of the bottom panel, so be careful not to poke the spudger or screwdriver inside the case. It's a bit of a jarring process, going back and forth and making steady progress. In addition to all the clips around the border of the bottom panel, there are also a few in the center that latch on to the side of the battery. But after enough wiggling and creaking sounds, the panel should come loose. Replacing the RAM As expected, there are 2 SODIMM slots, one of which is populated with a 3200MHz 8GB RAM stick. At the bottom right of the image below, you can also see the SSD slot. If you don't enjoy the process of opening up the laptop and want to upgrade to a larger drive as well, now would be the time for that. New RAM in place! It's always a good idea to test the surgery before reassembly: Success! Reassembly Reassembly of the laptop is much easier than taking it apart. Everything simply clicks together. The only minor surprise was that both anti-slip strips became a little bit longer… References Memory Upgrade for HP 17-cp3005dx Laptop Upgrading Newer HP 17.3" Laptop With New RAM And M.2 NVMe SSD Different model with Intel CPU but the case is the same.
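A small addendum to the "test the surgery before reassembly" step: besides checking the BIOS screen, you can confirm the full capacity from the OS. A minimal sketch, assuming Python 3 and the third-party psutil package happen to be installed, which is not a given on a fresh Windows machine:

# Quick post-upgrade sanity check: does the OS see (close to) 32 GB?
import psutil

total_gib = psutil.virtual_memory().total / 2**30
print(f"RAM visible to the OS: {total_gib:.1f} GiB")
# Slightly less than 32 is normal; firmware and the integrated GPU reserve some.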


More in technology

Home is where the home server is

I moved recently, and so did my home server. You might have noticed it due to the downtime. This time I have built a dedicated shelf for it, which allows for more flexibility and room for additional expensive ideas. The internet connection is a fiber line, which is fantastic for a place that's generally considered to be in the countryside. I had to hire a guy at the last place in Tallinn (capital of Estonia) to pull a fiber line from the basement to the apartment, with my own money, so I'm very happy that I don't have to do it here. And yes, the ThinkPad T430 is still a solid home server. I had an issue with my battery calibration script resulting in the machine being turned off, but I fixed it by disabling the script, at the cost of the battery probably dying soon. It seems like a tlp and/or Linux kernel issue that has surfaced recently, as it also happened on a different ThinkPad laptop when I last tried it. The server/wardrobe/closet room is slightly chillier than the rest of the apartment, which means the server's temperatures are also slightly lower. I also have an option to do some crazy ventilation experiments in the winter, but that will have to wait for a bit, mainly because it's spring. I'm genuinely surprised that the Wi-Fi 5 signal is coming through the closet quite adequately, with the whole apartment being covered with at least 50 Mbit/s speeds, and over 300 Mbit/s when near the closet, which is about the maximum speed that I can achieve from the access point in ideal conditions.

Optimize maintenance with the Arduino Rileva ME Opta Bundle

When your machines run smoothly, your business can go far. That’s why condition monitoring – once a “nice to have” – is quickly becoming a must in maintenance strategies across industrial settings. But most dedicated systems can be complex to set up or difficult to scale. To make things easier, we’re introducing the Arduino Rileva ME Opta […] The post Optimize maintenance with the Arduino Rileva ME Opta Bundle appeared first on Arduino Blog.

Dr. Dobb's Journal Interviews Jef Raskin (1986)

They discuss interface design and Raskin's hatred of the mouse.

2025-05-11 air traffic control

Air traffic control has been in the news lately, on account of my country's declining ability to do it. Well, that's a long-term trend, resulting from decades of under-investment, severe capture by our increasingly incompetent defense-industrial complex, no small degree of management incompetence in the FAA, and long-lasting effects of Reagan crushing the PATCO strike. But that's just my opinion, you know, maybe airplanes got too woke. In any case, it's an interesting time to consider how weird parts of air traffic control are. The technical, administrative, and social aspects of ATC all seem two notches more complicated than you would expect. ATC is heavily influenced by its peculiar and often accidental development, a product of necessity that perpetually trails behind the need, and a beneficiary of hand-me-down military practices and technology. Aviation Radio In the early days of aviation, there was little need for ATC---there just weren't many planes, and technology didn't allow ground-based controllers to do much of value. There was some use of flags and signal lights to clear aircraft to land, but for the most part ATC had to wait for the development of aviation radio. The impetus for that work came mostly from the First World War. Here we have to note that the history of aviation is very closely intertwined with the history of warfare. Aviation technology has always rapidly advanced during major conflicts, and as we will see, ATC is no exception. By 1913, the US Army Signal Corps was experimenting with the use of radio to communicate with aircraft. This was pretty early in radio technology, and the aircraft radios were huge and awkward to operate, but it was also early in aviation and "huge and awkward to operate" could be similarly applied to the aircraft of the day. Even so, radio had obvious potential in aviation. The first military application for aircraft was reconnaissance. Pilots could fly past the front to find artillery positions and otherwise provide useful information, and then return with maps. Well, even better than returning with a map was providing the information in real-time, and by the end of the war medium-frequency AM radios were well developed for aircraft. Radios in aircraft led naturally to another wartime innovation: ground control. Military personnel on the ground used radio to coordinate the schedules and routes of reconnaissance planes, and later to inform on the positions of fighters and other enemy assets. Without any real way to know where the planes were, this was all pretty primitive, but it set the basic pattern that people on the ground could keep track of aircraft and provide useful information. Post-war, civil aviation rapidly advanced. The early 1920s saw numerous commercial airlines adopting radio, mostly for business purposes like schedule coordination. Once you were in contact with someone on the ground, though, it was only logical to ask about weather and conditions. Many of our modern practices like weather briefings, flight plans, and route clearances originated as more or less formal practices within individual airlines. Air Mail The government was not left out of the action. The Post Office operated what may have been the largest commercial aviation operation in the world during the early 1920s, in the form of Air Mail. The Post Office itself did not have any aircraft; all of the flying was contracted out---initially to the Army Air Service, and later to a long list of regional airlines.
Air Mail was considered a high priority by the Post Office and proved very popular with the public. When the transcontinental route began proper operation in 1920, it became possible to get a letter from New York City to San Francisco in just 33 hours by transferring it between airplanes in a nearly non-stop relay race. The Post Office's largesse in contracting the service to private operators provided not only the funding but the very motivation for much of our modern aviation industry. Air travel was not very popular at the time, being loud and uncomfortable, but the mail didn't complain. The many contract mail carriers of the 1920s grew and consolidated into what are now some of the United States' largest companies. For around a decade, the Post Office almost singlehandedly bankrolled civil aviation, and passengers were a side hustle [1]. Air mail ambition was not only of economic benefit. Air mail routes were often longer and more challenging than commercial passenger routes. Transcontinental service required regular flights through sparsely populated parts of the interior, challenging the navigation technology of the time and making rescue of downed pilots a major concern. Notably, air mail operators did far more nighttime flying than any other commercial aviation in the 1920s. The post office became the government's de facto technical leader in civil aviation. Besides the network of beacons and markers built to guide air mail between cities, the post office built 17 Air Mail Radio Stations along the transcontinental route. The Air Mail Radio Stations were the company radio system for the entire air mail enterprise, and the closest thing to a nationwide, public air traffic control service to then exist. They did not, however, provide what we would now call control. Their role was mainly to provide pilots with information (including, critically, weather reports) and to keep loose tabs on air mail flights so that a disappearance would be noticed in time to send search and rescue. In 1926, the Watres Act created the Aeronautic Branch of the Department of Commerce. The Aeronautic Branch assumed a number of responsibilities, but one of them was the maintenance of the Air Mail routes. Similarly, the Air Mail Radio Stations became Aeronautics Branch facilities, and took on the new name of Flight Service Stations. No longer just for the contract mail carriers, the Flight Service Stations made up a nationwide network of government-provided services to aviators. They were the first edifices in what we now call the National Airspace System (NAS): a complex combination of physical facilities, technologies, and operating practices that enable safe aviation. In 1935, the first en-route air traffic control center opened, a facility in Newark owned by a group of airlines. The Aeronautic Branch, since renamed the Bureau of Air Commerce, supported the airlines in developing this new concept of en-route control that used radio communications and paperwork to track which aircraft were in which airways. The rising number of commercial aircraft made in-air collisions a bigger problem, so the Newark control center was quickly followed by more facilities built on the same pattern. In 1936, the Bureau of Air Commerce took ownership of these centers, and ATC became a government function alongside the advisory and safety services provided by the flight service stations. 
En route center controllers worked off of position reports from pilots via radio, but needed a way to visualize and track aircraft positions and their intended flight paths. Several techniques helped: first, airlines shared their flight planning paperwork with the control centers, establishing "flight plans" that corresponded to each aircraft in the sky. Controllers adopted a work aid called a "flight strip," a small piece of paper with the key information about an aircraft's identity and flight plan that could easily be handed between stations. By arranging the flight strips on display boards full of slots, controllers could visualize the ordering of aircraft in terms of altitude and airway. Second, each center was equipped with a large plotting table map where controllers pushed markers around to correspond to the position reports from aircraft. A small flag on each marker gave the flight number, so it could easily be correlated to a flight strip on one of the boards mounted around the plotting table. This basic concept of air traffic control, of a flight strip and a position marker, is still in use today. Radar The Second World War changed aviation more than any other event in history. Among the many advancements were two British inventions of particular significance: first, the jet engine, which would make modern passenger airliners practical. Second, the radar, and more specifically the magnetron. This was a development of such significance that the British government treated it as a secret akin to nuclear weapons; indeed, the UK effectively traded radar technology to the US in exchange for participation in US nuclear weapons research. Radar created radical new possibilities for air defense, and complemented previous air defense development in Britain. During WWI, the organization tasked with defending London from aerial attack had developed a method called "ground-controlled interception" or GCI. Under GCI, ground-based observers identify possible targets and then direct attack aircraft towards them via radio. The advent of radar made GCI tremendously more powerful, allowing a relatively small number of radar-assisted air defense centers to monitor for inbound attack and then direct defenders with real-time vectors. In the first implementation, radar stations reported contacts via telephone to "filter centers" that correlated tracks from separate radars to create a unified view of the airspace---drawn in grease pencil on a preprinted map. Filter center staff took radar and visual reports and updated the map by moving the marks. This consolidated information was then provided to air defense bases, once again by telephone. Later technical developments in the UK made the process more automated. The invention of the "plan position indicator" or PPI, the type of radar scope we are all familiar with today, made the radar far easier to operate and interpret. Radar sets that automatically swept over 360 degrees allowed each radar station to see all activity in its area, rather than just aircraft passing through a defensive line. These new capabilities eliminated the need for much of the manual work: radar stations could see attacking aircraft and defending aircraft on one PPI, and communicated directly with defenders by radio. It became routine for a radar operator to give a pilot navigation vectors by radio, based on real-time observation of the pilot's position and heading. A controller took strategic command of the airspace, effectively steering the aircraft from a top-down view.
The ease and efficiency of this workflow was a significant factor in the end of the Battle of Britain, and its remarkable efficacy was noticed in the US as well. At the same time, changes were afoot in the US. WWII was tremendously disruptive to civil aviation; while aviation technology rapidly advanced due to wartime needs, those same pressing demands led to a slowdown in nonmilitary activity. A heavy volume of military logistics flights and flight training, as well as growing concerns about defending the US from an invasion, meant that ATC was still a priority. A reorganization of the Bureau of Air Commerce replaced it with the Civil Aeronautics Authority, or CAA. The CAA's role greatly expanded as it assumed responsibility for airport control towers and commissioned new en route centers. As WWII came to a close, CAA en route control centers began to adopt GCI techniques. By 1955, the name Air Route Traffic Control Center (ARTCC) had been adopted for en route centers and the first air surveillance radars were installed. In a radar-equipped ARTCC, the map where controllers pushed markers around was replaced with a large tabletop PPI built to a Navy design. The controllers still pushed markers around to track the identities of aircraft, but they moved them based on their corresponding radar "blips" instead of radio position reports. Air Defense After WWII, post-war prosperity and wartime technology like the jet engine led to huge growth in commercial aviation. During the 1950s, radar was adopted by more and more ATC facilities (both "terminal" at airports and "en route" at ARTCCs), but there were few major changes in ATC procedure. With more and more planes in the air, tracking flight plans and their corresponding positions became labor intensive and error-prone. A particular problem was the increasing range and speed of aircraft, and the corresponding longer passenger flights, which meant that many aircraft passed from the territory of one ARTCC into another. This required that controllers "hand off" the aircraft, informing the "next" ARTCC of the flight plan and position at which the aircraft would enter their airspace. In 1956, 128 people died in a mid-air collision of two commercial airliners over the Grand Canyon. In 1958, 49 people died when a military fighter struck a commercial airliner over Nevada. These were not the only such incidents in the mid-1950s, and public trust in aviation started to decline. Something had to be done. First, in 1958 the CAA gave way to the Federal Aviation Administration. This was more than just a name change: the FAA's authority was greatly increased compared to the CAA, most notably by granting it authority over military aviation. This is a difficult topic to explain succinctly, so I will only give broad strokes. Prior to 1958, military aviation was completely distinct from civil aviation, with no coordination and often no communication at all between the two. This was, of course, a factor in the 1958 collision. Further, the 1956 collision, while it did not involve the military, did result in part from communications issues between separate CAA facilities and the airline's own control facilities. After 1958, ATC was completely unified into one organization, the FAA, which assumed the work of the military controllers of the time and some of the role of the airlines.
The military continues to have its own air controllers to this day, and military aircraft continue to include privileges such as (practical but not legal) exemption from transponder requirements, but military flights over the US are still beholden to the same ATC as civil flights. Some exceptions apply, void where prohibited, etc. The FAA's suddenly increased scope only made the practical challenges of ATC more difficult, and commercial aviation numbers continued to rise. As soon as the FAA was formed, it was understood that there needed to be major investments in improving the National Airspace System. While the first couple of years were dominated by the transition, the FAA's second director (Najeeb Halaby) prepared two lengthy reports examining the situation and recommending improvements. One of these, the Beacon report (also called Project Beacon), specifically addressed ATC. The Beacon report's recommendations included massive expansion of radar-based control (called "positive control" because of the controller's access to real-time feedback on aircraft movements) and new control procedures for airways and airports. Even better, for our purposes, it recommended the adoption of general-purpose computers and software to automate ATC functions. Meanwhile, the Cold War was heating up. US air defense, a minor concern in the few short years after WWII, became a higher priority than ever before. The Soviet Union had long-range aircraft capable of reaching the United States, and nuclear weapons meant that only a few such aircraft had to make it to cause massive destruction. The vast size of the United States (and, considering the new unified air defense command between the United States and Canada, all of North America) made this a formidable challenge. During the 1950s, the newly minted Air Force worked closely with MIT's Lincoln Laboratory (an important center of radar research) and IBM to design a computerized, integrated, networked system for GCI. When the Air Force committed to purchasing the system, it was christened the Semi-Automatic Ground Environment, or SAGE. SAGE is a critical juncture in the history of the computer and computer communications, the first system to demonstrate many parts of modern computer technology and, moreover, perhaps the first large-scale computer system of any kind. SAGE is an expansive topic that I will not take on here; I'm sure it will be the focus of a future article but it's a pretty well-known and well-covered topic. I have not so far felt like I had much new to contribute, despite it being the first item on my "list of topics" for the last five years. But one of the things I want to tell you about SAGE, one that is perhaps not so well known, is that SAGE was not used for ATC. SAGE was a purely military system. It was commissioned by the Air Force, and its numerous operating facilities (called "direction centers") were located on Air Force bases along with the interceptor forces they would direct. However, there was obvious overlap between the functionality of SAGE and the needs of ATC. SAGE direction centers continuously received tracks from remote data sites using modems over leased telephone lines, and automatically correlated multiple radar tracks to a single aircraft. Once an operator entered information about an aircraft, SAGE stored that information for retrieval by other radar operators.
When an aircraft with associated data passed from the territory of one direction center to another, the aircraft's position and related information were automatically transmitted to the next direction center by modem. One of the key demands of air defense is the identification of aircraft---any unknown track might be routine commercial activity, or it could be an inbound attack. The air defense command received flight plan data on commercial flights (and more broadly all flights entering North America) from the FAA and entered them into SAGE, allowing radar operators to retrieve "flight strip" data on any aircraft on their scope. Recognizing this interconnection with ATC, as soon as SAGE direction centers were being installed the Air Force started work on an upgrade called SAGE Air Traffic Integration, or SATIN. SATIN would extend SAGE to serve the ATC use-case as well, providing SAGE consoles directly in ARTCCs and enhancing SAGE to perform non-military safety functions like conflict warning and forward projection of flight plans for scheduling. Flight strips would be replaced by teletype output, and in general made less necessary by the computer's ability to filter the radar scope. Experimental trial installations were made, and the FAA participated readily in the research efforts. Enhancement of SAGE to meet ATC requirements seemed likely to meet the Beacon report's recommendations and radically improve ARTCC operations, sooner and cheaper than development of an FAA-specific system. As it happened, well, it didn't happen. SATIN became interconnected with another planned SAGE upgrade to the Super Combat Centers (SCC), deep underground combat command centers with greatly enhanced SAGE computer equipment. SATIN and SCC planners were so confident that the last three Air Defense Sectors scheduled for SAGE installation, including my own Albuquerque, were delayed under the assumption that the improved SATIN/SCC equipment should be installed instead of the soon-obsolete original system. SCC cost estimates ballooned, and the program's ambitions were reduced month by month until it was canceled entirely in 1960. Albuquerque never got a SAGE installation, and the Albuquerque air defense sector was eliminated by reorganization later in 1960 anyway. Flight Service Stations Remember those Flight Service Stations, the ones that were originally built by the Post Office? One of the oddities of ATC is that they never went away. FSS were transferred to the CAB, to the CAA, and then to the FAA. During the 1930s and 1940s many more were built, expanding coverage across much of the country. Throughout the development of ATC, the FSS remained responsible for non-control functions like weather briefing and flight plan management. Because aircraft operating under instrument flight rules must closely comply with ATC, the involvement of FSS in IFR flights is very limited, and FSS mostly serve VFR traffic. As ATC became common, the FSS gained a new and somewhat odd role: playing go-between for ATC. FSS were more numerous and often located in sparser areas between cities (while ATC facilities tended to be in cities), so especially in the mid-century, pilots were more likely to be able to reach an FSS than ATC. It was, for a time, routine for FSS to relay instructions between pilots and controllers. This is still done today, although improved communications have made the need much less common. 
As weather dissemination improved (another topic for a future post), FSS gained access to extensive weather conditions and forecasting information from the Weather Service. This connectivity is bidirectional; during the midcentury FSS not only received weather forecasts by teletype but transmitted pilot reports of weather conditions back to the Weather Service. Today these communications have, of course, been computerized, although the legacy teletype format doggedly persists. There has always been an odd schism between the FSS and ATC: they are operated by different departments, out of different facilities, with different functions and operating practices. In 2005, the FAA cut costs by privatizing the FSS function entirely. Flight service is now operated by Leidos, one of the largest government contractors. All FSS operations have been centralized to one facility that communicates via remote radio sites. While flight service is still available, increasing automation has made the stations far less important, and the general perception is that flight service is in its last years. Last I looked, Leidos was not hiring for flight service and the expectation was that they would never hire again, retiring the service along with its staff. Flight service does maintain one of my favorite internet phenomena, the phone number domain name: 1800wxbrief.com. One of the odd manifestations of the FSS/ATC schism and the FAA's very partial privatization is that Leidos maintains an online aviation weather portal that is separate from, and competes with, the Weather Service's aviationweather.gov. Since Flight Service traditionally has the responsibility for weather briefings, it is honestly unclear to what extent Leidos vs. the National Weather Service should be investing in aviation weather information services. For its part, the FAA seems to consider aviationweather.gov the official source, while it pays for 1800wxbrief.com. There's also weathercams.faa.gov, which duplicates a very large portion (maybe all?) of the weather information on Leidos's portal and some of the NWS's. It's just one of those things. Or three of those things, rather. Speaking of duplication due to poor planning... The National Airspace System Left in the lurch by the Air Force, the FAA launched its own program for ATC automation. While the Air Force was deploying SAGE, the FAA had mostly been waiting, and various ARTCCs had adopted a hodgepodge of methods ranging from one-off computer systems to completely paper-based tracking. By 1960 radar was ubiquitous, but different radar systems were used at different facilities, and correlation between radar contacts and flight plans was completely manual. The FAA needed something better, and with growing congressional support for ATC modernization, they had the money to fund what they called National Airspace System En Route Stage A. Further bolstering historical confusion between SAGE and ATC, the FAA decided on a practical, if ironic, solution: buy their own SAGE. In an upcoming article, we'll learn about the FAA's first fully integrated computerized air traffic control system. While the failed detour through SATIN delayed the development of this system, the nearly decade-long delay between the design of SAGE and the FAA's contract allowed significant technical improvements.
This "New SAGE," while directly based on SAGE at a functional level, used later off-the-shelf computer equipment including the IBM System/360, giving it far more resemblance to our modern world of computing than SAGE with its enormous, bespoke AN/FSQ-7. And we're still dealing with the consequences today! [1] It also laid the groundwork for the consolidation of the industry, with a 1930 decision that took air mail contracts away from most of the smaller companies and awarded them instead to the precursors of United, TWA, and American Airlines.

Sierpiński triangle? In my bitwise AND?

Exploring a peculiar bit-twiddling hack at the intersection of 1980s geek sensibilities.
