Introduction IMPORTANT: Use the Right GPS Antenna! The Problem: SyncServer Refuses to Lock to GPS The GPS Week Number Rollover Issue Making the Furuno GT-8031 Work Again How It Works Build Instructions Power Supply Recapping The Future: A Software-Only Solution The Result References Footnotes Introduction In my earlier blog post, I wrote about how to set up a SyncServer S200 as a regular NTP server, and how to install the backside BNC connectors to bring out the 10 MHz and 1PPS outputs. The ultimate goal is to use the SyncServer as a lab timing reference, but at the end of that blog post, it’s clear that using NTP alone is not good enough to get a precise 10 MHz clock: the output frequency was off by almost 100Hz! To get a more accurate output clock, you need to synchronize the SyncServer to the GPS system so that it becomes a GPS disciplined oscillator (GPSDO) and a stratum 1 time keeping device. The S200 has a GPS antenna input and a GPS receiver module inside, so in theory this...
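For a sense of scale of that frequency error, here is a quick back-of-the-envelope calculation in Python. The 100 Hz and 10 MHz figures are simply the ones quoted above; the seconds-per-day conversion is only meant to illustrate what the same fractional error would mean for timekeeping.

```python
# Fractional frequency error of the NTP-only SyncServer setup described above.
f_nominal = 10e6   # 10 MHz output
f_error = 100      # observed offset of almost 100 Hz

fractional = f_error / f_nominal
print(f"{fractional * 1e6:.0f} ppm")                 # 10 ppm
print(f"{fractional * 86400:.2f} s/day equivalent")  # ~0.86 s/day if the same error applied to timekeeping
```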
11 months ago


More from Electronics etc…

Brightness and Contrast Adjustment of Tektronix TDS 500/600/700 Oscilloscopes

Introduction Finding the Display Tuning Potentiometers The Result Hardcopy Preview Mode Introduction Less than a week after finishing my TDS 684B analog memory blog post, a TDS 684C landed on my lab bench with a very dim CRT. If you follow the lives of the 3-digit TDS oscilloscope series, you probably know that this is normally a bit of a death sentence for the CRT: after years of use, the cathode ray loses its strength and there’s nothing you can do about it other than replace the CRT with an LCD screen. I was totally ready to go that route, and if I ever need to do it, here are 3 possible LCD upgrade options that I list for later reference: The most common one is to buy a $350 Newscope-T1 LCD display kit by SimmConn Labs. A cheaper hobbyist alternative is to hack something together with a VGA to LVDS interface board and some generic LCD panel, as described in this build report. He uses a VGA LCD Controller Board KYV-N2 V2 with a 7” A070SN02 LCD panel. As I write this, the cost is $75, but I assume this used to be a lot cheaper before tariffs were in place. If you really want to go hard-core, you could make your own interface board with an FPGA that snoops the RAMDAC digital signals and converts them to LVDS, just like the Newscope-T1. There is a whole thread about this on the EEVblog forum. But this blog post is not about installing an LCD panel! Before going that route, you should try to increase the brightness of the CRT by turning a potentiometer on the display board. It sounds like an obvious thing to try, but I didn’t find a lot of references to it online. And in my case, it just worked. Finding the Display Tuning Potentiometers In the Display Assembly Adjustment section of chapter 5 of the TDS 500D, TDS 600C, TDS 700D and TDS 714L Service Manual, page 5-23, you’ll find the instructions on how to change rotation, brightness and contrast. It says to remove the cabinet and then turn some potentiometers, but I just couldn’t find them! They’re supposed to be next to the fan. Somewhere around there: Well, I couldn’t see any. It was only the next day, when I was ready to take the whole thing apart, that I noticed these dust-covered holes: A few minutes and a vacuum cleaning operation later reveal 5 glorious potentiometers: From left to right: horizontal position rotation vertical position brightness contrast Rotate the last 2 at will and if you’re lucky, your dim CRT will look brand new again. It did for me! The Result The weird colors in the picture above are a photography artifact caused by Tektronix NuColor display technology: it uses a monochrome CRT with an R/G/B shutter in front of it. You can read more about it in this Hackaday article. In real life, the image looks perfectly fine! Hardcopy Preview Mode If dialing up the brightness doesn’t work and you don’t want to spend money on an LCD upgrade, there is the option of switching the display to Hardcopy mode, like this: [Display] -> [Settings <Color>] -> [Palette] -> [Hardcopy preview] Instead of a black background, you will now get a white one. It made the scope usable before I made the brightness adjustment.

2 months ago 33 votes
A Tektronix TDS 684B Oscilloscope Uses CCD Analog Memory

Introduction The TDS600 Series The Acquisition Board Measuring Along the Signal Path A Closer Look at the Noise Issue Conclusion Introduction I have a Tektronix TDS 684B oscilloscope that I bought cheaply at an auction. It has 4 channels, 1 GHz of BW and a sample rate of 5 Gsps. Those are respectable numbers even by today’s standards. It’s also the main reason why I have it: compared to modern oscilloscopes, the other features aren’t nearly as impressive. It can only record 15k samples per channel at a time, for example. But at least the sample rate doesn’t go down when you increase the number of recording channels: it’s 5 Gsps even when all 4 channels are enabled. I’ve always wondered how Tektronix managed to reach such high specifications back in the nineties, so in this blog post I take a quick look at the internals, figure out how it works, and do some measurements along the signal path. The TDS600 Series The first oscilloscopes of the TDS600 series were introduced around 1993. The last one, the TDS694C, was released in 2002. The TDS684 version was from sometime in 1995. The ICs on my TDS684C have date codes from as early as the first half of 1997. The main characteristic of these scopes was their extreme sample rate for that era, going from 2 Gsps for the TDS620, TDS640 and TDS644, over 5 Gsps for the TDS654, TDS680 and TDS684, to 10 Gsps for the TDS694C, which was developed under the Screamer code name. The oscilloscopes have 2 main boards: the acquisition board contains all the parts from the analog input down to the sample memory as well as some triggering logic. A very busy CPU board does the rest. 2 flat cables and a PCB connect the 2 boards. The interconnect PCB traces go to the memory on the acquisition board. It’s safe to assume that this interface is used for high-speed waveform data transfer while the flat cables are for lower speed configuration and status traffic. If you ever remove the interconnection PCB, make sure to put it back with the same orientation. It will fit just fine when rotated 180 degrees but the scope won’t work anymore! The Acquisition Board The TDS 684B has 4 identical channels that can easily be identified. There are 6 major components in the path from input to memory: Analog front-end Hidden under a shielding cover, but you’d expect to find a bunch of relays there to switch between different configurations: AC/DC, 1Meg/50 Ohm termination, … I didn’t open it because it requires disassembling pretty much the whole scope. Signal Conditioner IC(?) This is the device with the glued-on heatsink. I left it in place because there’s no metal attachment latch. Reattaching it would be a pain. Since the acquisition board has a bunch of custom ICs already, chances are this one is custom as well, so knowing the exact part number wouldn’t add a lot of extra info. We can see one differential pair going from the analog front-end into this IC and a second one going from this IC to the next one, an ADG286D. National Semi ADG286D Mystery Chip Another custom chip with unknown functionality. Motorola MC10319DW 8-bit 25 MHz A/D Converter Finally, an off-the-shelf device! But why is it only rated for 25MHz? National Semi ADG303 - A Custom Memory Controller Chip It receives the four 8-bit lanes from the four ADCs on one side and connects to four SRAMs on the other. 4 Alliance AS7C256-15JC SRAMs Each memory has a capacity of 32KB and a 15ns access time, which allows for a maximum clock of 66 MHz.
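As a quick sanity check of those SRAM numbers, and of the capacity question that comes up next, here is the arithmetic in Python; the one-SRAM-per-ADC-lane mapping is taken from the description above.

```python
# Back-of-the-envelope check of the acquisition memory figures mentioned above.
access_time_ns = 15
print(1 / (access_time_ns * 1e-9) / 1e6)   # ~66.7 MHz maximum SRAM clock

sram_bytes = 32 * 1024                      # one AS7C256 per channel
trace_points = 15_000                       # samples per channel per trace
print(sram_bytes / trace_points)            # ~2.2x: enough headroom for double buffering
```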
The TDS 684B supports waveform traces of 15k points, so they either only use half of the available capacity or they use some kind of double-buffering scheme. There are four unpopulated memory footprints. In one of my TDS 420A blog posts, I extend the waveform memory by soldering in extra SRAM chips. I’m not aware of a TDS 684B option for additional memory, so I’m not optimistic about the ability to expand its memory. There’s also no such grayed-out option in the acquisition menu. When googling for “ADG286D”, I got my answer when I stumbled on this comment on BlueSky which speculates that it’s an analog memory, probably some kind of CCD FIFO. Analog values are captured at a rate of up to 5 GHz and then shifted out at a much lower speed and fed into the ADC. I later found a few other comments that confirm this theory. Measuring Along the Signal Path Let’s verify this by measuring a few signals on the board with a different scope. The ADC input pins are large enough to attach a Tektronix logic analyzer probe: ADC sampling the signal With a 1 MHz signal and using a 100Msps sample rate, the input to the ADC looks like this: The input to the ADC is clearly chopped into discrete samples, with a new sample every 120 ns. We can discern a sine wave in the samples, but there’s a lot of noise on the signal too. Meanwhile the TDS684B CRT shows a nice and clean 1 MHz signal. I haven’t been able to figure out how that’s possible. For some reason, simply touching the clock pin of the ADC with a 1 MOhm oscilloscope probe adds a massive amount of noise to the input signal, but it shows the clock nicely: The ADC clock matches the 120 ns sample period seen at the input: it’s indeed 8.33 MHz. Acquisition refresh rate The scope only records in bursts. When recording 500, 1000 or 2500 sample points at 100Msps, it records a new burst every 14ms or 70Hz. When recording 5000 points, the refresh rate drops to 53Hz. For 15000 points, it drops even lower, to 30Hz: Sampling burst duration The duration of a sampling burst is always 2 ms, irrespective of the sample rate of the oscilloscope or the number of points acquired! The combination of a 2 ms burst and an 8 MHz sample clock results in 16k samples. So the scope always acquires what’s probably the full contents of the CCD FIFO and throws a large part away when a lower sample length is selected. Here’s the 1 MHz signal sampled at 100 Msps: And here’s the same signal sampled at 5 Gsps: It looks like the signal doesn’t scan out of the CCD memory in the order it was received, hence the signal discontinuity in the middle. Sampling a 1 GHz signal I increased the input signal from 1 MHz to 1 GHz. Here’s the ADC input at 5 Gsps: With a little bit of effort, you can once again imagine a sine wave in those samples. There’s a periodicity of 5 samples, as one would expect for a 1 GHz to 5 Gsps ratio. The sample rate is still 8.3 MHz. Sampling a 200 MHz signal I also applied a 200 MHz input signal. The period is now ~22 samples, as expected. 200 MHz is low enough to measure with my 350 MHz bandwidth Siglent oscilloscope. To confirm that the ADG286D chip contains the CCD memory, I measured the signal on one of the differential pins going into that chip: And here it is, a nice 200 MHz signal: A Closer Look at the Noise Issue After initially publishing this blog post, I had a discussion on Discord about the noise issue which made me do a couple more measurements.
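Before looking at those extra measurements, a quick sanity check of the burst numbers measured so far. The assumption that one burst corresponds to one full CCD FIFO readout is the article's hypothesis, not a datasheet fact.

```python
# Sanity check of the measured burst and aliasing numbers.
adc_clk = 8.33e6          # measured ADC clock
burst_duration = 2e-3     # measured burst length, independent of settings
print(adc_clk * burst_duration)   # ~16.7k samples: roughly one full CCD FIFO per burst

# Apparent periodicity of a fast input captured into the CCD at 5 Gsps:
ccd_rate, f_in = 5e9, 1e9
print(ccd_rate / f_in)            # 5 samples per period for a 1 GHz input
```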
Input connected to ground Here’s what the ADC input looks like when the input of the scope is connected to ground: 2 major observations: there’s a certain amount of repetitiveness to it, and there are these major voltage spikes in between each repetition. They are very faint on the scope shot. Let’s zoom in on that: The spikes are still hard to see so I added the arrows, but look how the sample pattern repeats after each spike! The time delay between each spike is ~23.6 us. With a sample period of 120 ns, that converts into a repetitive pattern of ~195 samples. I don’t know why a pattern of 195 samples exists, but it’s clear that each of those 195 locations has a fixed voltage offset. If the scope measures those offsets during calibration, it can subtract them after measurement and get a clean signal out. 50 kHz square wave Next I applied a 50kHz square wave to the input. This frequency was chosen so that, for the selected sample rate, a single period would cover the 15000 sampling points. 2 more observations: the micro-repetitiveness is still there, irrespective of the voltage offset due to the input signal. That means that subtracting the noise should be fine for different voltage inputs. We don’t see a clean square wave outline. It looks like there’s some kind of address interleaving going on. 50kHz sawtooth wave We can see the interleaving even better when applying a sawtooth waveform that covers one burst: Instead of a clean break from high-to-low somewhere in the middle, there is a transition period where you get both high and low values. This confirms that some kind of interleaving is happening. Conclusion The TDS684B captures input signals at high speed in an analog memory and digitizes them at 8 MHz. The single-ended input to the ADC is noisy yet the signal looks clean when displayed on the CRT of the scope, likely because the noise pattern is repetitive and predictable. In addition to noise, there’s also an interleaving pattern during the reading out of the analog FIFO contents. The number of samples digitized is always the same, irrespective of the settings in the horizontal acquisition menu. (Not written by ChatGPT, I just like to use bullet points…)
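If the fixed per-location offsets really are what the scope corrects for, the cleanup could be as simple as the sketch below. This is speculation based on the grounded-input measurement; the 195-sample pattern length is the value inferred above and the rest is a toy model, not Tektronix's actual calibration code.

```python
import numpy as np

PATTERN_LEN = 195   # repetition length inferred from the ~23.6 us spike spacing

def measure_offsets(grounded_capture):
    """Fold a grounded-input capture into one per-location offset table."""
    return grounded_capture.reshape(-1, PATTERN_LEN).mean(axis=0)

def correct(burst, offsets):
    """Subtract the fixed per-location offsets from a new acquisition."""
    reps = -(-len(burst) // PATTERN_LEN)                 # ceiling division
    return burst - np.tile(offsets, reps)[:len(burst)]

# Toy demo: a sine buried under a fixed, repeating offset pattern.
rng = np.random.default_rng(0)
offsets = rng.normal(0, 0.2, PATTERN_LEN)
n = PATTERN_LEN * 80
signal = np.sin(2 * np.pi * np.arange(n) / 120)
noisy = signal + np.tile(offsets, 80)
clean = correct(noisy, measure_offsets(np.tile(offsets, 80)))
print(np.abs(clean - signal).max())                      # ~0: the pattern is gone
```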

3 months ago 31 votes
Rohde &amp; Schwarz AMIQ Modulation Generator - Teardown and Analog Deep Dive

Introduction The Rohde & Schwarz AMIQ Modulation Generator WinIQSim Software Inside the AMIQ The Signal Generation PCB Analog Signal Generation Architecture: Fixed vs Variable DAC Clock Internal Reference Clock Generation DAC Clock Synthesizer I/Q Output Skew Tuning Variable Gain Amplifier Internal Diagnostics Efficient Distribution of Configuration Signals Conclusion References Introduction Every few months, a local company auctions off all kinds of lab, production and test equipment. I shouldn’t be subscribed to their email list but I am, and that’s one way I end up with more stuff that I don’t really need. During a recent auction, I got my hands on a Rohde & Schwarz AMIQ, an I/Q modulation generator, for a grand total of $45. Add to that another 30% for the auction fee and taxes, and you’re still paying much less than what others would pay for a round of golf. But instead of one morning of fun, this thing has the potential to keep me busy for many weekends, so what a deal! A few days after “winning” the auction, I drove to a dark dungeon of a warehouse in San Jose to pick up the loot. The AMIQ has a power on/off button and 3 LEDs and that’s it in terms of user interface. There are no dials, there’s no display. So without any other options, I simply powered it up and I was immediately greeted by the ominous clicking of a hard drive. I was right: this thing would keep me entertained for at least a little bit! It took a significant amount of effort to restore the machine to its working state. I’ll write about that in future blog posts, but let’s start with an overview of the functionality and a teardown of the R&S AMIQ and then deep dive into some of its analog circuits. AMIQ prices on eBay vary wildly, from $129 to $2600 at the time of writing this. Even if you get one of the higher priced ones, you should expect to get a unit that’s close to failing due to leaking capacitors and a flaky hard drive! The Rohde & Schwarz AMIQ Modulation Generator Reduced to an elevator sales pitch, the AMIQ is a 2-channel arbitrary waveform generator (AWG) with a deep sample buffer. That’s it! It has a streaming buffer that feeds samples to 2 14-bit DACs at a sample rate of up to 105MHz. The two output channels, I and Q, will typically contain quadrature modulation signals that are sent to an RF vector signal generator such as a Rohde & Schwarz SMIQ for the actual high-frequency modulation. In a typical setup, the AMIQ is used to generate the baseband modulated signal and the SMIQ shifts the baseband signal to an RF frequency. Since the AMIQ has no user interface, the waveform data must be provided by an external device. This could be a PC that runs R&S WinIQSim software or even the SMIQ itself because it has the ability to control an AMIQ. You can also create your own waveforms and upload them via floppy disk, GPIB or an RS-232 interface using SCPI control commands. Figure 4-1 of the AMIQ Operating Manual has a simplified block diagram.
It is pretty straightforward and somewhat similar to that of my HP 33120A function generator: On the left are 2 blocks that are shared: a clock synthesizer waveform memory And then for each channel: 14-bit D/A converter analog filters output section with amplifier/attenuator differential analog output driver (AMIQ-B2 option) The major blocks are surrounded by a large number of DACs that are used to control everything from the tuning input of the local 10MHz clock oscillator, gain and offset of the output signals, clock skew between the I and Q signals, and much more. You could do some of this with a modern SDR setup, but the specifications of the AMIQ units are dialed up a notch. Both channels are completely symmetrical to avoid modulation errors at the source. If there’s a need to compensate for small delay differences in, for example, the external cables, you can correct for that by changing the skew between the clocks of the output DACs with a precision of 10ps. Similarly, the DAC sample frequency can be programmed with 32-bit precision. In addition to the main I/Q outputs at the front, there are a bunch of secondary input and output signals: 10MHz reference input and output sample clock trigger input marker output external filter loopback output and input bit error rate measurement (BER) connector (AMIQ-B1 option) parallel interface with the digital value of the samples that are sent to the DAC (front panel, AMIQ-B3 option) Above those speciality inputs and outputs are the obligatory GPIB interface and a bunch of generic connectors that look suspiciously like the ones you’d find on a turn-of-the-century PC: PS/2 keyboard and mouse Parallel port RS232 USB There are 3 different AMIQ versions: 1110.2003.02: 4M samples 1110.2003.03: 4M samples 1110.2003.04: 16M samples Mine is an AMIQ-04. WinIQSim Software While it is possible to control the AMIQ over RS-232 or GPIB with your own software, you’d have a hard time matching the features of WinIQSim, an R&S Windows application that supports the AMIQ. With WinIQSim, you can select one of the popular communication protocols from the late nineties and early 2000s, fill in digital framing data, apply all kinds of distortions and interferences, compute the I and Q waveforms and send them to the AMIQ. Some of the supported formats include CDMA2000, 802.11a WLAN, TD-SCDMA and more. But you don’t have to use these official protocols: WinIQSim supports any kind of FSK, QPSK, QAM or other common modulation method. You need a license for some of the communication protocols. The license is linked to the AMIQ device, not the PC, but the license check is pretty naive, and while I haven’t tried it… yet, the EEVblog forum has discussions about how to enable features yourself. My device only came with a license for IS-95 CDMA. Inside the AMIQ It’s trivial to open up an AMIQ: after removing the 4 feet in the back with a regular Phillips screwdriver, you can simply slide off the outer case. It has 2 major subsystems: the top contains all the components of a standard PC, and the bottom has a signal generation PCB. The PCB looks incredibly clean and well laid out and I love how they printed the names of the different sections on the metal gray shielding plates. We’ll leave the PC system for a future blog post and focus on the signal generation PCB. The Signal Generation PCB Let’s remove the shielding plates to see what’s underneath.
I had to drill out one of the screws that attach the plates because the head was stripped. (Did somebody before me already try to repair it?) The left and right sections of the bottom half are perfectly symmetrical, as one would expect for a device that has the ability to tune skew mismatches with a 10 ps precision. Annotated, it looks like this: Rohde & Schwarz recently made the terrible decision to lock all their software and manuals behind an approval-only corporate log-in wall, but luckily some of the most important AMIQ assets can be found online elsewhere, including the operating manual and a service manual that contains the full schematics! Let’s dig a bit deeper into the various aspects of the design. In what follows I’ll be focusing primarily on the analog aspects of the design. This is a very personal choice: not that the digital sections aren’t important, it’s just that, as a digital design engineer, they’re not particularly interesting to me. By studying the analog sections, I hope to stumble into circuits that I didn’t really know a lot about before. Fantastic Schematics Before digging in for real, a word about the schematics: they are fantastic. Each sub-system has a block diagram that is already pretty detailed, with signal names that match the schematics and test points, often with annotations to indicate the voltage or frequency range. Here’s the block diagram of the reference and DAC clock generation section, for example. Schematic page 5 Signals that come from or go to other pages are fully referenced. Look at the SYN_OUT_CLK signal below: Schematic page 10 The signal is also used on page 6, coordinates 1D and 7B, and on page 22, coordinate 8A. How cool is that? Signal path test points One of the awesome features of the PCB is the generous amount of test points. We’re not just talking PCB test points against which you can hold your oscilloscope probe, or even header pins, though there are plenty of those too, but full-on SMB connectors. In addition to these SMB connectors, there are also plenty of jumpers that can be used to interrupt the default signal flow and insert your own test signal instead. Analog Signal Generation Architecture: Fixed vs Variable DAC Clock In the HP 33120A the DAC has a fixed 40 MHz clock. There’s a 16 kB waveform RAM that contains, say, one quarter period of a 100 Hz sine. If you want to send out a sine wave of 200 Hz, instead of sequentially stepping through all the addresses of the waveform RAM, you just skip every other address. One of the benefits of this kind of generation scheme is that you can make do with a fixed frequency analog anti-aliasing filter: the Nyquist frequency is always the same after all. A major disadvantage, however, is that even if the output signal has a bandwidth of only 1MHz, you still need to feed the DAC at the fixed clock rate. You could insert a digital upsampling filter between the waveform memory and the DAC, but that requires significant mathematical DSP firepower, or you’d have to increase the depth of the waveform memory. For an arbitrary waveform generator, it makes more sense to run the DAC at whichever clock speed is sufficient to meet the Nyquist requirement of the desired signal and provide a number of different filtering options. The AMIQ has 4 such options: no filter, a 25 MHz filter, a 2.5 MHz filter, or a loopback through an external filter.
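To make the Nyquist argument concrete, here is a rough sketch of how a DAC clock and one of those four reconstruction filter options could be chosen for a given baseband bandwidth. The 2.5x oversampling margin and the selection rules are my own assumptions for illustration; they are not R&S's actual algorithm.

```python
# Rough sketch: pick a DAC sample clock and one of the AMIQ's reconstruction
# filter options for a desired baseband bandwidth. The oversampling margin and
# the selection logic are assumptions for illustration only.
FILTER_CORNERS_HZ = [2.5e6, 25e6, None]   # None = no filter / external filter path
MAX_DAC_CLK = 105e6                       # specified up to 100 MHz, absolute max 105 MHz

def pick_clock_and_filter(signal_bw_hz, oversample=2.5):
    f_dac = min(2 * oversample * signal_bw_hz, MAX_DAC_CLK)
    if f_dac <= 2 * signal_bw_hz:
        raise ValueError("bandwidth too high for the available DAC clock")
    for corner in FILTER_CORNERS_HZ:      # lowest corner that still passes the signal
        if corner is None or corner >= signal_bw_hz:
            return f_dac, corner
    return f_dac, None

print(pick_clock_and_filter(1e6))    # (5 MHz clock, 2.5 MHz filter)
print(pick_clock_and_filter(20e6))   # (100 MHz clock, 25 MHz filter)
```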
The DAC sample clock range is huge, from 10 Hz all the way to 105 MHz, though specifications are only guaranteed up to 100 MHz. According to the data sheet, the clock frequency can be set with a precision of 10^-7. Internal Reference Clock Generation Like all professional test and measurement equipment, the AMIQ uses a 10MHz reference clock that can come from outside or that can be generated locally with a 10MHz crystal. It’s common for high-end equipment to have an oven controlled crystal oscillator (OCXO), but the AMIQ has a lower spec’ed temperature compensated one (TCXO), a Milliren Technologies 453-0210. If we look at the larger reference clock generation block diagram, we can see something slightly unusual: instead of selecting between the internal TCXO output or the external reference clock input, the internal reference clock always comes from the TCXO (green). Schematic page 5 When the internal clock is selected, the TCXO output frequency can be tuned with an analog signal that comes from the VTCXO TUNE DAC (blue), but when the external reference input is active, the TCXO is phase locked to the external clock. You can see the phase comparator and low pass filter in red. The reason for using a PLL with the internal TCXO as the voltage controlled oscillator is probably to ensure that the generated reference clock has the phase noise of the TCXO while tracking the frequency of the external reference clock: a PLL acts as a low-pass filter to the reference clock and as a high-pass filter to the VCO. If the TCXO has better high frequency phase noise than the external reference clock, that makes sense. This is really out of my wheelhouse, so take all of this with a grain of salt… SYN_REF is the output of the internal reference clock generation unit. DAC Clock Synthesizer The clock synthesizer creates a highly programmable DAC clock from the internal reference clock SYN_REF from the previous section. It should come as no surprise that this clock is generated by a PLL as well. Schematic page 5 There are two speciality components in the clock generation path: a Mini-Circuits JTOS-200 VCO and an Analog Devices AD9850 DDS Synthesizer. The VCO has an operating frequency between 100 and 200 MHz. I don’t know enough about VCOs to give meaningful commentary about the specifications, but based on comparisons with similar components on Digikey, such as this FMVC11009, it’s safe to assume that it’s an expensive and high quality component. The AD9850 is located in the feedback path of the PLL where it acts as a feedback divider with a precision of 32 bits. The signal flow inside the AD9850 is interesting: It works as follows: each clock cycle, the phase of a numerically controlled oscillator (NCO) accumulates with the programmable 32-bit increment. The upper bits of the phase accumulator serve as the address of a sine waveform table. A 10-bit DAC converts the digital sine wave to analog. The analog signal is sent through an external low-pass filter. The output of the low-pass filter goes back into the AD9850 and through a comparator to generate a digital output clock. This thing has its own programmable signal generator! But why the roundtrip from digital to analog back to digital? In theory, the MSB of the NCO could be used as the output clock of the clock generator. The problem is that in such a configuration, the length of each clock period toggles between N and N+1 clock cycles, with a ratio so that the average clock period ends up with the desired value.
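A minimal NCO model makes that N/N+1 behavior easy to see. The clock and output frequencies below are arbitrary illustration values, not the AMIQ's actual settings.

```python
# Minimal numerically controlled oscillator (NCO), to show why using the raw MSB
# as the output clock gives periods that toggle between N and N+1 input cycles.
ACC_BITS = 32
f_clk, f_out = 100e6, 8.33e6                       # illustration values only
inc = round(f_out / f_clk * 2**ACC_BITS)           # 32-bit phase increment

acc, prev_msb, last_edge, periods = 0, 0, 0, []
for cycle in range(1, 5000):
    acc = (acc + inc) & (2**ACC_BITS - 1)          # phase accumulator wraps around
    msb = acc >> (ACC_BITS - 1)
    if msb and not prev_msb:                       # rising edge of the MSB "clock"
        periods.append(cycle - last_edge)
        last_edge = cycle
    prev_msb = msb

print(sorted(set(periods[1:])))                        # [12, 13]: periods jitter between N and N+1
print(f_clk / (sum(periods[1:]) / len(periods[1:])))   # average frequency ~= f_out
```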
But this creates major spurs in the frequency spectrum of the generated clock. When fed into the phase comparator of a PLL, these frequency spurs can show up as jitter at the output of the PLL and thus in the spectrum of the generated I and Q signals. By converting the signal to analog and using a low pass filter, the spurs can be filtered away. The combination of a low pass filter and a comparator acts as an interpolator: the edges of the generated clock fall somewhere in between N and N+1. Schematic page 21 The steepness of the low pass filter depends on the ratio between the input and the output clock: the lower the ratio, the steeper the filter. There are a bunch of binary clock dividers that make it hard to know the exact ratio, but if the output of the VCO is 100 MHz and the input 10 MHz or less, there is a 10:1 ratio. The AMIQ has a 7th order elliptical low pass filter. I ran a quick simulation in LTspice to check the behavior. (dds_filter.asc source file.) The filter has a cut-off frequency of around 13 MHz: In modern fractional PLLs, instead of using a regular DAC, one could use a high-order sigma-delta unit to create a pulse density modulated output with the noise pushed to higher frequencies, and a low pass filter that can be less aggressive. There’s plenty of literature online about DDS clock generators, some of which I’ve listed in the references at the bottom, but the datasheet of the AD9850 itself is a good start. I/Q Output Skew Tuning Earlier I mentioned the ability to tune the skew between the I and the Q output to compensate for a difference in cable length when connecting the AMIQ to an RF signal generator. While the digital waveform fetching circuit works on one clock, the two clocks that go to the DACs are the ones that can be shifted: Here’s what the skewing circuit looks like: Schematic page 10 Starting with input signal DAC_CLK, the goal is to create I_DAC_CLK and Q_DAC_CLK, which can be moved up to 1ns ahead of or behind each other. Signals can be delayed by sending them through an R/C combo. Since we need a variable delay, we need a way to change the value of either the R or the C. A varactor or varicap diode is exactly that kind of device: its capacitance changes depending on the reverse bias voltage across the diode. Static input signal SKEW_TUNE comes from a DAC. It goes to the cathode of one varicap and the anode of the other. When its voltage increases, the capacitances of those 2 diodes move in opposite directions, and so do the R/C delays along the I and Q clock paths. A BB147 varicap has a capacitance that varies between 2pF and 112pF depending on the bias voltage. Capacitors C168 and C619 prevent the relatively high bias voltages from reaching the digital signal path. Does the circuit work? Absolutely! The scope photos below show the impact of the skewing circuit when dialed to the maximum in both directions: With a 2ns/div horizontal scale, you can see a skew of ~1ns either way. Variable Gain Amplifier The analog signal path starts at the DAC and then goes through the anti-aliasing filters, an output section that does amplification and attenuation, and finally the output connector board. It is common for signal generators to have a fixed gain amplification section and then some selectable fixed attenuation stages: amplifiers with variable gain and low distortion are hard to design.
If you need a signal amplitude that can’t be achieved with one of the fixed attenuation settings, one solution is to multiply the signal digitally before it enters the DAC to reduce the output amplitude, though that’s at the expense of losing some of the dynamic range of the DAC. This is not the case for the AMIQ: while it can use a signal path that only uses fixed amplification and attenuation stages, it also offers the intriguing option to send the analog signal through an analog multiplier stage: If we zoom down from the system diagram to the block diagram, we can see how the multiplier/variable attenuator sits between the filter and the output amplifier, with 2 control input signals: AMPL_CNTRL and OFFSET. This circuit exists twice, of course, for the I and the Q channel. Schematic page 3 Let’s check out the details! The heavy lifting of the variable gain amplifier is performed by an Analog Devices AD835 250 MHz, Voltage Output, 4-Quadrant Multiplier, a part from their Analog Multipliers & Dividers catalog. Schematic page 14 It’s a tiny 8-pin device that calculates W = (X1-X2) * (Y1-Y2) + Z. In low volume, it will set you back $32 on Digikey. In addition to the multiplication, you can set a fixed output gain by feeding back output W through a resistive divider back into Z. For inputs less than 10 MHz, it has a typical harmonic distortion of -70dB. Analog Devices datasheets usually have a Theory of Operation section that explains, well, the underlying theory of operation. The AD835 has such a section as well, but it doesn’t go any further than stating that the multiplier is based on a classic form, having a translinear core, supported by three (X, Y, and Z) linearized voltage-to-current converters, and the load driving output amplifier. I have no clue how the thing works! In the case of the AMIQ, X2 and Y2 are strapped to ground, and there’s no feedback from W back into Z either, which reduces the functionality to: W = FILTER_OUT * AMPL_VAR + OFFSET. AMPL_VAR and OFFSET are static analog values that are each created by a 12-bit DAC8143, just like many other analog configuration signals. It’s almost a shame that this powerful little device is asked to perform such a basic operation. While researching the AD835, somebody pointed out the AD8310, another interesting speciality chip from Analog Devices. It’s a DC to 440 MHz, 95dB logarithmic amplifier, basically a converter from linear to logarithmic scale. Discovering little gems like this is why I love studying schematics of complex devices. Internal Diagnostics The AMIQ has a set of 15 internal analog signals to monitor the health of the device. Various meaningful signals are gathered all around the signal generation board and measured at power up to give you that dreaded Diagnostic Check Fail message. Schematic page 27 The circuit itself is straightforward: 2 8-to-1 analog multiplexers feed into a CS5507AS 16-bit AD converter. It can only do 100 samples per second, but that’s sufficient to measure a bunch of mostly static signals. Like many other devices, the measured value is sent serially to one of the FPGAs. The ADC needs a 2.5V reference voltage. It’s funny that this reference voltage also goes to the analog multiplexers as one of the diagnostic signals. One wonders what the ADC returns as a result when it tries to convert the output of a broken reference voltage generator. There are a bunch of circuits in the AMIQ whose only purpose is generating diagnostic signals.
Here’s a good example of that: Schematic page 14 I_OUT_DIAG and Q_OUT_DIAG are the outputs after the attenuator that will eventually end up at the connectors. The circuit in the top red square is a signal peak detector, similar to what you’d find in the rectifier of a power supply. It allows the AMIQ to track the output level of signals with a frequency that is way higher than the sample rate of the ADC. The circuit in the red rectangle below performs a digital XOR on the analog I and Q signals and then sends the result through a simple R/C low pass filter. I think that it allows the AMIQ to check that the phase difference between the I and Q channels is sensible when applying a test signal during power-on self-test. Efficient Distribution of Configuration Signals The AMIQ signal board has hundreds of configuration bits: there are the obvious enable/disable or selection bits, such as those to select between different output filters, but the lion’s share are used to set the output value of many 12-bit DACs. Instead of using parallel busses that fan out from an FPGA, the AMIQ has a serial configuration scan chain. Discrete 8-bit 74HCT4094 shift-and-store registers are located all over the design for the digital configuration bits. The DAC8143 devices have their own built-in shift register and are part of the same scan chain. Schematic page 19 The schematic above is an example of that. The red scan chain data input goes through the VTCXO_TUNE DAC, then the 74HCT4094, after which it exits to some other page of the schematics. Conclusion Around the late nineties, test equipment companies stopped adding schematics to their service manuals, but the R&S AMIQ is a nice exception to that. And while the device already has a bunch of FPGAs, most of the components are off-the-shelf single-function components. Thanks to that, the AMIQ is an excellent candidate for a deep dive: all the information is there, you just need to spend a bit of effort to go through things. I had a ton of fun figuring out how things worked. References Rohde & Schwarz documents R&S - IQ Modulation Generator AMIQ R&S - AMIQ Datasheet R&S - AMIQ Operating Manual R&S - AMIQ Service Manual with schematic Various application notes R&S - Floppy Disk Control of the I/Q Modulation Generator AMIQ R&S - Software WinIQSIM for Calculating I/Q Signals for Modulation Generator R&S AMIQ R&S - Creating Test Signals for Bluetooth with AMIQ / WinIQSIM and SMIQ R&S - WCDMA Signal Generator Solutions R&S - Golden devices: ideal path or detour? R&S - Demonstration of BER Test with AMIQ controlled by WinIQSIM Other AMIQ content on the web zw-ix has a blog - Getting an Rohde Schwarz AMIQ up and running zw-ix has a blog - Connecting a Rohde Schwarz AMIQ to a SMIQ04 Bosco tweets DDS Clock Synthesis MT-085 Tutorial: Fundamentals of Direct Digital Synthesis (DDS) How to Predict the Frequency and Magnitude of the Primary Phase Truncation Spur in the Output Spectrum of a Direct Digital Synthesizer (DDS) Related content The Signal Path - Teardown, Repair & Analysis of a Rohde & Schwarz AFQ100A I/Q (ARB) Modulation Generator

3 months ago 19 votes
DSLogic U3Pro16 Review and Teardown

Introduction The DSLogic U3Pro16 In the Box Probe Cables and Clips The Controller Hardware The Input Circuit Impact of Input Circuit on Circuit Under Test Additional IOs: External Clock, Trigger In, Trigger Out Software: From Saleae Logic to PulseView to DSView Installing DSView on a Linux Machine DSView UI Streaming Data to the Host vs Local Storage in DRAM Triggers Conclusion References Footnotes Introduction The year was 2020 and offices all over the world shut down. A house remodel had just started, so my office moved from a comfortably airconditioned corporate building to a very messy garage. Since I’m in the business of developing and debugging hardware, a few pieces of equipment came along for the ride, including a Saleae Logic Pro 16. I had the unit for work stuff, but I may once in a while have used it for some hobby-related activities too. There’s no way around it: Saleae makes some of the best USB logic analyzers around. Plenty of competitors have matched or surpassed their digital features, but none have the ability to record the 16 channels in analog format as well. After corporate offices reopened, the Saleae went back to its original habitat and I found myself without a good 16-channel USB logic analyzer. Buying a Saleae for myself was out of the question: even after the $150 hobbyist discount, I can’t justify the $1350 price tag. After looking around for a bit, I decided to give the DSLogic U3Pro16 from DreamSourceLab a chance. I bought it on Amazon for $299. In this blog post, I’ll look at some of the features, my experience with the software, and I’ll also open it up to discover what’s inside. The DSLogic U3Pro16 The DSLogic series currently consists of 3 logic analyzers: the $149 DSLogic Plus (16 channels) the $299 DSLogic U3Pro16 (16 channels) the $399 DSLogic U3Pro32 (32 channels) The DSLogic Plus and U3Pro16 both have 16 channels, but the acquisition memory of the Plus is only 256Mbits vs 2Gbits for the Pro, and it has to make do with USB 2.0 instead of a USB 3.0 interface, a crucial difference when streaming acquisition data straight to the PC to avoid the limitations of the acquisition memory. There’s also a difference in sample rate, 400MHz vs 1GHz, but that’s not important in practice. The only functional difference between the U3Pro16 and U3Pro32 is the number of channels. It’s tempting to go for the 32 channel version but I’ve rarely had the need to record more than 16 channels at the same time and if I do, I can always fall back to my HP 1670G logic analyzer, a pristine $200 flea market treasure with a whopping 136 channels [1]. So the U3Pro16 it is! In the Box The DSLogic U3Pro16 comes with a nice, elongated hard case. Inside, you’ll find: the device itself. It has a slick aluminum enclosure. a USB-C to USB-A cable 5 4-way probe cables and 1 3-way clock and trigger cable 18 test clips Probe Cables and Clips You read it right, my unit came with 5 4-way probe cables, not 4. I don’t know if DreamSourceLab added one extra in case you lose one or if they mistakenly included one too many, but it’s good to have a spare. The cables are slightly stiffer than those that come with a Saleae but not to the point that it adds a meaningful additional strain to the probe point. They’re stiffer because each of the 16 probe wires carries both signal and ground, probably as a thin coaxial cable, which lowers the inductance of the probe and reduces ringing when measuring signals with fast rise and fall times.
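The inductance argument can be made a bit more tangible with a toy LC resonance calculation; the inductance and capacitance values below are generic ballpark assumptions, not measured values for these particular probes.

```python
import math

# Illustrative only: a lower-inductance return path pushes the probe's parasitic
# LC resonance (the "ringing") to a much higher frequency. All values are assumed.
C_probe = 10e-12                                # ~10 pF probe tip + input capacitance (assumed)
loops = {"long separate ground lead": 100e-9,   # assumed loop inductance
         "per-wire coax-style return": 10e-9}   # assumed loop inductance

for name, L in loops.items():
    f_ring = 1 / (2 * math.pi * math.sqrt(L * C_probe))
    print(f"{name}: ringing around {f_ring / 1e6:.0f} MHz")
```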
In terms of quality, the probe cables are a step up from the Saleae ones. The case is long enough so that the probe cables can be stored without bending them. The quality of the test clips is not great, but they are no different than those of the 5 times more expensive Saleae Logic 16 Pro. Both are clones of the HP/Agilent logic analyzer grabbers that I got from eBay and will do the job, but I much prefer the ones from Tektronix. The picture below shows 4 different grabbers. From left to right: Tektronix, Agilent, Saleae and DSLogic ones. Compared to the 3 others, the stem of the Tektronix probe is narrow, which makes it easier to place multiple ones next to each other on fine-pitch pin arrays. If you’re thinking about upgrading your current probes to Tektronix ones: stay away from fakes. As I write this, you can find packs of 20 probes on eBay for $40 (incl shipping), so around $2 per probe. Search for “Tektronix SMG50” or “Tektronix 020-1386-01”. Meanwhile, you can buy a pack of 12 fake ones on Amazon for $16, or $1.30 apiece. They work, but they aren’t any better than the probes that come standard with the DSLogic. Fake probe on the left, Tek probe on the right The stem of the fake one is much thicker and the hooks are different too. The Tek probe has rounded hooks with a sharp angle at the tip: Tektronix hooks The hooks of a fake probe are flat and don’t attach nearly as well to their target: Fake hooks If you need to probe targets with a pitch that is smaller than 1.25mm, you should check out these micro clips that I reviewed ages ago. The Controller Hardware Each cable supports 4 probes and plugs into the main unit with 8 0.05” pins in a 4x2 configuration, one pin for the signal, one pin for ground. The cable itself has a tiny PCB sticking out that slots into a gap of the aluminum enclosure. This way it’s not possible to plug in the cable incorrectly… unlike the Saleae. It’s great. When we open up the device, we can see an Infineon (formerly Cypress) CYUSB3014-BZX EZ-USB FX3 SuperSpeed controller. A Saleae Logic Pro uses the same device. These are your go-to USB interface chips when you need a microcontroller in addition to the core USB3 functionality. They’re relatively cheap too: you can get them for $16 in single digit quantities at LCSC.com. The other side of the PCB is much busier. The big ticket components are: a Spartan-6 XC6SLX16 FPGA Responsible for data acquisition, triggering, run-length encoding/compression, data storage to DRAM, and sending data to the CYUSB3014. A Saleae Logic 16 Pro has a smaller Spartan-6 LX9. That makes sense: its triggering options aren’t as advanced as the DSLogic’s, and since it lacks external DDR memory, it doesn’t need a memory controller on the FPGA either. a DDR3-1600 DRAM It’s a Micron MT41K128M16JT-125, marked D9PTK, with 2Gbits of storage and a 16-bit data bus. an Analog Devices ADF4360-7 clock generator I found this a bit surprising. A Spartan-6 LX16 FPGA has 2 clock management tiles (CMT) that each have 1 real PLL and 2 DCMs (digital clock managers) with delay locked loop, digital frequency synthesizer, etc. The VCO of the PLL can be configured with a frequency up to 1080 MHz, which should be sufficient to capture signals at 1GHz, but clearly there was a need for something better. The ADF4360-7 can generate an output clock as fast as 1800MHz. There’s obviously an extensive supporting cast: a Macronix MX25R2035F serial flash This is used to configure the FPGA.
an SGM2054 DDR termination voltage controller an LM26480 power management unit It has two linear voltage regulators and two step-down DC-DC converters. two clock oscillators: 24MHz and 19.2MHz a TI HD3SS3220 USB-C Mux This is the glue logic that makes it possible for USB-C connectors to be orientation independent. an SP3010-04UTG for USB ESD protection Marked QH4 Two 5x2 pin connectors J7 and J8 on the right side of the PCB are almost certainly used to connect programming and debugging cables to the FPGA and the CYUSB3014. The Input Circuit I spent a bit of time Ohm-ing out the input circuit. Here’s what I came up with: The cable itself has a 100k Ohm series resistance. Together with a 100k Ohm shunt resistor to ground at the entrance of the PCB, it acts as a divide-by-two resistive divider. The series resistor also limits the current going into the device. Before passing through a 33 Ohm series resistor that goes into the FPGA, there’s an ESD protection device. I’m not 100% sure, but my guess is that it’s an SRV05-4D-TP or some variant thereof. I’m not 100% sure why the 33 Ohm resistor is there. It’s common to have these types of resistors on high speed lines to avoid reflections, but since there’s already a 100k resistor in the path, I don’t think that makes much sense here. It might be there for additional protection of the ESD structure that resides inside the FPGA IOs? A DSLogic has a fully programmable input threshold voltage. If that’s the case, then where’s the opamp to compare the input voltage against this threshold voltage? (There is such a comparator on a Saleae Logic Pro!) The answer to that question is: “it’s in the FPGA!” FPGA IOs can support many different I/O standards: single-ended ones, think CMOS and TTL, and a whole bunch of differential standards too. Differential protocols compare a positive and a negative version of the same signal, but nothing prevents anyone from assigning a static value to the negative input of a differential pair and making the input circuit behave as a regular single-ended input with a programmable threshold. Like this: There is plenty of literature out there about using the LVDS comparator in single-ended mode. It’s even possible to create pretty fast analog-digital converters this way, but that’s outside the scope of this blog post. Impact of Input Circuit on Circuit Under Test 7 years ago, OpenTechLab reviewed the DSLogic Plus, the predecessor of the DSLogic U3Pro16. Joel spent a lot of time looking at its input circuit. He mentions a 7.6k Ohm pull-down resistor at the input, different from the 100k Ohm that I measured. There’s no mention of a series resistor in the cable or about the way adjustable thresholds are handled, but I think that the DSLogic Plus has a similar input circuit. His review continues with an in-depth analysis of how measuring a signal can impact the signal itself; he even builds a simulation model of the whole system and does a real-world comparison between a DSLogic measurement and a fake-Saleae one. While his measurements are convincing, I wasn’t able to repeat his results on a similar setup with a DSLogic U3Pro and a Saleae Logic Pro: for both cases, a 200MHz signal was still good enough. I need to spend a bit more time to better understand the difference between my setup and his… Either way, I recommend watching this video. Additional IOs: External Clock, Trigger In, Trigger Out In addition to the 16 input pins that are used to record data, the DSLogic has 3 special IOs and a separate 3-wire cable to wire them up.
They are marked with the characters “OIC” above the connector, which stands for Output, Input, Clock. Clock Instead of using a free-running internal clock, the 16 input signals can be sampled with an external sampling clock. This corresponds to a mode that’s called “state clocking” in big-iron Tektronix and HP/Agilent/Keysight logic analyzers. Using an external clock that is the same as the one that is used to generate the signals that you want to record is a major benefit: you will always record the signal at the right time as long as setup and hold requirements are met. When using a free-running internal sampling clock, the sample rate must be a factor of 2 or more higher to get an accurate representation of what’s going on in the system. The DSLogic U3Pro16 provides the option to sample the data signals at the positive or negative edge of the external clock. On one hand, I would have preferred more options for moving the edge of the clock back and forth. It’s something that should be doable with the DLLs that are part of the DCM blocks of a Spartan-6. But on the other, external clocking is not supported at all by Saleae analyzers. The maximum clock speed of the external clock input is 50MHz, significantly lower than the free-running sample speed. This is usually the case for big iron logic analyzers as well. For example, my old Agilent 1670G has a free running sampling clock of 500MHz and supports a maximum state clock of 150MHz. Trigger In According to the manuals: “TI is the input for an external trigger signal”. That’s a great feature, but I couldn’t figure out in DSView how to enable it. After a bit of googling, I found the following comment in an issue on GitHub: This “TI” signal has no function now. It’s reserved for compatible and further extension. This comment is dated July 29, 2018. A closer look at the U3Pro16 datasheets shows the description of the “TI” input as “Reserved”… Trigger Out When a trigger is activated inside the U3Pro, a pulse is generated on this pin. The manual doesn’t give more details, but after futzing around with the horrible oscilloscope UI of my 1670G, I was able to capture a 500ms trigger-out pulse of 1.8V. Software: From Saleae Logic to PulseView to DSView When Saleae first came to market, they raised the bar for logic analyzer software with Logic, which had a GUI that allowed scrolling and zooming in and out of waveforms at blazing speed. Logic also added a few protocol decoders, and a C++ API to create your own decoders. It was the inspiration for PulseView, an open source equivalent that acts as the front-end application of SigRok, an open source library and tool that acts as the waveform data acquisition backend. PulseView supports protocol decoders as well, but it has an easier to use Python API and it allows stacked protocol decoders: a low-level decoder might convert the recorded signals into, say, I2C tokens (start/stop/one/zero). A second decoder creates byte-level I2C transactions out of the tokens. An I2C EEPROM decoder could interpret multiple I2C transactions as read and write operations. PulseView has tons of protocol decoders, from simple UART transactions all the way to USB 2.0 decoders. When the DSLogic logic analyzer hit the market after a successful Kickstarter campaign, it shipped with DSView, DreamSourceLab’s closed source waveform viewer. However, people soon discovered that it was a reskinned version of PulseView, a big no-no since the latter is developed under a GPL3 license.
After a bit of drama, DreamSourceLab made DSView available on GitHub under the required GPL3 as well, with attribution to the sigrok project. DSView is a hard fork of PulseView and there are still some bad feelings because DreamSourceLab doesn’t push changes to the PulseView project, but at least they’ve been legally in the clear for the past 6 years. The default choice would be to use DSView to control your DSLogic, but sigrok/PulseView supports the DSLogic as well. In the figure below, you can see DSView in demo mode, no hardware device connected, and an example of the 3 stacked protocol decoders described earlier: For this review, I’ll be using DSView. Saleae has since upgraded Logic to Logic 2, which now also supports stacked protocol decoders. It still uses a C++ API though. You can find an example decoder here. Installing DSView on a Linux Machine DreamSourceLab provides DSView binaries for Windows and MacOS but not for Linux. When you click the Download button for Linux, it returns a tar file with the source code, which you’re expected to compile yourself. I wasn’t looking forward to running into the usual issues with package dependencies and build failures, but after following the instructions in the INSTALL file, I ended up with a working executable on the first try. DSView UI The UI of DSView is straightforward and similar to Saleae Logic 2. There are things that annoy me in both tools but I have a slight preference for Logic 2. Both DSView and Logic 2 have a demo mode that allows you to play with them without a real device attached. If you want to get a feel for what you like better, just download the software and play with it. Some random observations: DSView can pan and zoom in or out just as fast as Logic 2. On a MacBook, the way to navigate through the waveform really rubs me the wrong way: it uses the pinching gesture on a trackpad to zoom in and out. That seems like the obvious way to do it, but since browsing through a waveform is such a common operation, it slows you down. On my HP Laptop 17, DSView uses the 2 finger slide up and down to zoom in and out, which is much faster. Logic 2 also uses the 2 finger slide up and down. The stacked protocol decoders are amazing. Like Logic 2, DSView can export decoded protocols as CSV files, but only one protocol at a time. It would be nice to be able to export multiple protocols in the same CSV file so that you can more easily compare transaction flow between interfaces. Logic 2 behaves predictably when you navigate through waveforms while the device is still acquiring new data. DSView behaves a bit erratically. In DSView, you need to double click on the waveform to set a time marker. That’s easy enough, but it’s not intuitive, and since I only use the device occasionally, I need to google it every time I take it out of the closet. You can’t assign a text label to a DSView cursor/time marker. None of the points above disqualify DSView: it’s a functional and stable piece of software. But I’d be lying if I wrote that DSView is as frictionless and polished as Logic 2. Streaming Data to the Host vs Local Storage in DRAM The Saleae Logic 16 Pro only supports streaming mode: recorded data is immediately sent to the PC to which the device is connected. The U3Pro supports both streaming and buffered mode, where data is written to the DRAM that’s on the device and only transported to the host when the recording is complete. Streaming mode introduces a dependency on the upstream bandwidth.
An Infineon FX3 supports USB3 data rates up to 5Gbps, but it’s far from certain that those rates are achieved in practice. And if so, it still limits recording 16 channels to around 300MHz, assuming no overhead. In practice, higher rates are possible because both devices support run length encoding (RLE), a compression technique that reduces sequences of the same value to that value and the length of the sequence. Of course, RLE introduces recording uncertainty: high activity rates may result in exceeding the available bandwidth. The U3Pro has a 16-bit wide 2Gbit DDR3 DRAM with a maximum data rate of 1.6G samples per second. Theoretically, that makes it possible to record 16 channels with a 1.6GHz sample rate, but that assumes accessing DRAM with 100% efficiency, which is never the case. The GUI has the option of recording 16 signals at 500MHz or 8 signals at 1GHz. Even when recording to the local DRAM, RLE compression is still possible. When RLE is disabled and the highest sample rate is selected, 268ms of data can be recorded. When connected to my Windows laptop, buffered mode worked fine, but on my MacBook Air M2, DSView always hangs when downloading the data that was recorded at high sample rates and I have to kill the application. In practice, I rarely record at high sample rates and I always use streaming mode, which works reliably on the Mac too. But it’s not a good look for DSView. Triggers One of the biggest benefits of the U3Pro over a Saleae is its trigger capability. Saleae Logic 2.4.22 offers the following options: You can set a rising edge, falling edge, a high or a low level on 1 signal in combination with some static values on other signals, and that’s it. There’s not even a rising-or-falling edge option. It’s frankly a bit embarrassing. When you have an FPGA at your disposal, triggering functionality is not hard to implement. Meanwhile, even in Simple Trigger mode, the DSLogic can trigger on multiple edges at the same time, something that can be useful when using an external sampling clock. But the DSLogic really shines when enabling the Advanced Trigger option. In Stage Trigger mode, you can create state sequences that are up to 16 phases long, with 2 16-bit comparisons and a counter per stage. Alternatively, Serial Trigger mode is powerful enough to capture protocols like I2C, as shown below, where a start flag is triggered by a falling edge of SDA when SCL is high, a stop flag by a rising edge of SDA when SCL is high, and data bits are captured on the rising edge of SCL: You don’t always need powerful trigger options, but they’re great to have when you do. Conclusion The U3Pro is not perfect. It doesn’t have an analog mode, buffered mode doesn’t work reliably on my MacBook, and the DSView GUI is a bit quirky. But it is relatively cheap, it has a huge library of decoding protocols, and the triggering modes are excellent. I’ve used it for a few projects now and it hasn’t let me down so far. If you’re in the market for a cheap logic analyzer, give it a good look. References Logic Analyzer Shopping Comparison between Saleae Logic Pro 16, Innomaker LA2016, Innomaker LA5016, DSLogic Plus, and DSLogic U3Pro16 Footnotes [1] It even has the digital storage scope option with 2 analog channels, 500MHz bandwidth and 2GSa/s sampling rate.

HP Laptop 17 RAM Upgrade

Introduction Selecting the RAM Opening up Replacing the RAM Reassembly References

Introduction

I do virtually all of my hobby and home computing on Linux and MacOS. The MacOS stuff happens on a laptop and almost all Linux work on a desktop PC. The desktop PC has Windows installed on it as well, but it’s too much of a hassle to reboot so it never gets used in practice. Recently, I’ve been working on a project that requires a lot of Spice simulations. NGspice works fine under Linux, but it doesn’t come standard with a GUI and, more importantly, the simulations often refuse to converge once your design becomes a little bit bigger. Tired of fighting against the tool, I switched to LTspice from Analog Devices. It’s free to use and while it supports Windows and MacOS in theory, the Mac version is many years behind the Windows one and nearly unusable. After dual-booting into Windows too many times, a Best Buy deal appeared on my BlueSky timeline for an HP laptop for just $330. The specs were pretty decent too:

AMD Ryzen 5 7000
17.3” 1080p screen
512GB SSD
8 GB RAM
Full size keyboard
Windows 11

Someone at the HP marketing department spent long hours to come up with a suitable name and settled on “HP Laptop 17”. I generally don’t pay attention to what’s available on the PC laptop market, but it’s hard to really go wrong for this price so I took the plunge. Worst case, I’d return it.

We’re now 8 weeks later and the laptop is still firmly in my possession. In fact, I’ve used it way more than I thought I would. I haven’t noticed any performance issues, the screen is pretty good, the SSD is larger than what I need for the limited use case, and, surprisingly, the trackpad is better than that of any Windows laptop that I’ve ever used, though that’s not a high bar. It doesn’t come close to MacBook quality, but palm rejection is solid and it’s seriously good at moving the mouse around in CAD applications. The two worst parts are the plasticky keyboard and the 8GB of RAM. I honestly can’t quantify whether or not it has a practical impact, but I decided to upgrade it anyway. In this blog post, I go through the steps of doing this upgrade. Important: there’s a good chance that you will damage your laptop when trying this upgrade and almost certainly void your warranty. Do this at your own risk!

Selecting the RAM

The laptop wasn’t designed to be upgradable and thus you can’t find any official resources about it. And with such a generic name, there are guaranteed to be multiple hardware versions of the same product. To have reasonable confidence that you’re buying the correct RAM, check out the full product name first. You can find it on the bottom: mine is an HP Laptop 17-cp3005dx.

There’s some conflicting information about being able to upgrade the thing. The Best Buy Q&A page says: “The HP 17.3” Laptop Model 17-cp3005dx RAM and Storage are soldered to the motherboard, and are not upgradeable on this model.” This is flat out wrong for my device. After a bit of Googling around, I learned that it has a single 8GB DDR4 SODIMM 260-pin RAM stick but that the motherboard has 2 RAM slots and that it can support up to 2x32GB. I bought a kit with Crucial 2x16GB 3200MHz SODIMMs from Amazon. As I write this, the price is $44.

Opening up

Removing the screws

This is the easy part. There are 10 screws at the bottom, 6 of which are hidden underneath the 2 rubber anti-slip strips. It’s easy to peel these strips loose. It’s also easy to put them back without losing the stickiness.
Removing the bottom cover

The bottom cover is held back by those annoying plastic tabs. If you have a plastic spudger or prying tool, now is the time to use it. I didn’t, so I used a small screwdriver instead. Chances are high that you’ll leave some tiny scuff marks on the plastic casing. I found it easiest to open the top lid a bit, place the laptop on its side, and start on the left and right side of the keyboard. After that, it’s a matter of working your way down the long sides at the front and back of the laptop. There are power and USB connectors that are right against the side of the bottom panel, so be careful not to poke the spudger or screwdriver inside the case. It’s a bit of a jarring process, going back and forth and making steady progress. In addition to all the clips around the border of the bottom panel, there are also a few in the center that latch on to the side of the battery. But after enough wiggling and creaking sounds, the panel should come loose.

Replacing the RAM

As expected, there are 2 SODIMM slots, one of which is populated with a 3200MHz 8GB RAM stick. At the bottom right of the image below, you can also see the SSD slot. If you don’t enjoy the process of opening up the laptop and want to upgrade to a larger drive as well, now would be the time for that. New RAM in place! It’s always a good idea to test the surgery before reassembly: success!

Reassembly

Reassembly of the laptop is much easier than taking it apart. Everything simply clicks together. The only minor surprise was that both anti-slip strips became a little bit longer…

References

Memory Upgrade for HP 17-cp3005dx Laptop
Upgrading Newer HP 17.3” Laptop With New RAM And M.2 NVMe SSD (different model with an Intel CPU, but the case is the same)


More in technology

This inexpensive adapter brings Apple Universal Control to vintage Macs

In the distant past of about two decades ago, one would need to use a KVM (Keyboard, Video, Mouse) switch to control multiple computers with the same mouse and keyboard — and even then, it would take a button press to move from one to the other. Today, Apple’s Universal Control feature lets users seamlessly […] The post This inexpensive adapter brings Apple Universal Control to vintage Macs appeared first on Arduino Blog.

How to run Uptime Kuma in Docker in an IPv6-only environment

I use Uptime Kuma to check the availability of a few services that I run, with the most important one being my blog. It’s really nice. Today I wanted to set it up on a different machine to help troubleshoot and confirm some latency issues that I’ve observed, and for that purpose I picked the cheapest ARM-based Hetzner Cloud VM hosted in Helsinki, Finland. Hetzner provides a public IPv6 address for free, but you have to pay extra for an IPv4 address. I didn’t want to do that out of principle, so I went ahead and copied my Docker Compose definition over to the new server.

For some reason, Uptime Kuma would start up on the new IPv6-only VM, but it was unsuccessful in making requests to my services, which support both IPv4 and IPv6. The requests would time out and show up as “Pending” in the UI, and the service logs complained about not being able to deliver e-mails about the failures. I confirmed IPv6 connectivity within the container by running docker exec -it uptime-kuma bash and running a few curl and ping commands with IPv6 flags, and had no issues with those. When I added a public IPv4 address to the container, everything started working again.

I fixed the issue by explicitly disabling the IPv4 network in the Docker Compose service definition, and that did the trick: Uptime Kuma made successful requests towards my services. It seems that the service defaults to IPv4 due to the internal Docker network giving it an IPv4 network to work with, and that causes issues when your machine doesn’t have any IPv4 network or public IPv4 address associated with it.

Here’s an example Docker Compose file:

name: uptime-kuma

services:
  uptime-kuma:
    container_name: uptime-kuma
    networks:
      - uptime-kuma
    ports:
      - "3001:3001"
    volumes:
      - /path/to/your/storage:/app/data
    image: docker.io/louislam/uptime-kuma
    restart: always

networks:
  uptime-kuma:
    enable_ipv6: true
    enable_ipv4: false

That’s it! If you’re interested in different ways to set up IPv6 networking in Docker, check out this overview that I wrote a while ago.
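For reference, the in-container connectivity check described above can be done roughly like this; the container name comes from the compose file, and the target host is just a placeholder for one of the monitored services:

docker exec -it uptime-kuma bash
# inside the container: force IPv6 and see whether a monitored host is reachable
curl -6 -I https://example.com
ping -6 -c 3 example.com   # ping may not be installed in the image; curl alone is usually enough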

A real PowerBook: the Macintosh Application Environment on a PA-RISC laptop

I like the Power ISA very much, but there's nothing architecturally obvious to say that the next natural step from the Motorola 68000 family must be to PowerPC. For example, the Palm OS moved from the DragonBall to ARM, and it's not necessarily a well-known fact that the successor to Commodore's 68K Amigas was intended to be based on PA-RISC, Hewlett-Packard's "Precision Architecture" processor family. (That was the Hombre chipset, and prototype chips existed prior to Commodore's demise in 1994, though controversy swirled regarding backwards compatibility.) Sure, Apple and Motorola were two-thirds of the AIM alliance, and there were several PowerPC PowerBooks available when the fall of 1997 rolled around. But what if the next PowerBooks had been based on PA-RISC instead? Well, no need to strain yourself imagining it. Here's nearly as close as you're gonna get. We'll run some software on it (including that game we must all try running), analyze its performance and technical underpinnings, and uncover an unusual artifact of its history hidden in the executable. (A shout-out to Paul Weissman, the author and maintainer of the incomparable PA-RISC resource OpenPA.net, who provided helpful insights for this article.)

Based near my childhood hometown, RDI Computer Systems was founded in 1989 as Research, Development and Innovations Incorporated in La Costa, California, a neighbourhood of Carlsbad annexed in 1972 in northern San Diego county. (It is completely unrelated to earlier Carlsbad company RDI Video Systems, short for "Rick Dyer Industries" and the developers of laserdisc games like Dragon's Lair and Space Ace, who folded in 1985 after their expensive Halcyon home console imploded mid-development from the 1983 video game crash.) RDI, like several of its contemporaries, was established to capitalize on Sun Microsystems' attempt to commoditize SPARC and open up the market to other OEMs. While most such vendors like Solbourne Computer heavily invested in the desktop workstation segment, RDI instead went even smaller, producing what would become the first SPARC laptops in the United States. Basically SPARCstation IPC and IPX systems crammed into boxy off-white portable cases, the BriteLite series weighed a bit over 13 pounds and started at $10,000 [$24,600]. They were lauded for their performance and compatibility but not their battery life, and RDI became an early adopter of Sun's lower-power microSPARC for the sleeker, sexier PowerLites, using a more dramatic jet-black case. An 85MHz microSPARC II PowerLite 85 was the machine that computational physicist Tsutomu Shimomura, then at the San Diego Supercomputer Center, used to track down hacker Kevin Mitnick in 1995.

RDI's initial success enabled its expansion into a bigger 40,000-square foot industrial park facility at 2300 Faraday Avenue, which apparently still exists today. Unfortunately for the company, however, microSPARC hit a performance wall above 125MHz and Sun abandoned further development in 1994, which RDI management took as an indication they needed to diversify. By then the RISC market had started to flourish with many architectures competing for dominance, and RDI decided to throw in with Hewlett-Packard's PA-RISC, which had extant portable systems already from Hitachi and SAIC. Neither of those systems had ever existed in large numbers (and the SAIC Galaxys only in military applications at that), giving RDI a new market opportunity with a respected architecture that was already mobile-capable.
Producing a final PowerLite in 1996 with the 170MHz Fujitsu TurboSPARC, RDI expanded the PowerLite case substantially for their next systems, replacing the trackball with a touchpad and adding an icon-based LCD status display but keeping its multiple hard disk bays and port options. In the same way the original BriteLites were SPARCstations in every other respect, the new RDI PA-RISC laptop was an otherwise standard HP Visualize B132L or B160L workstation, just inside a laptop case. in our demonstration of Hyper-G, which I've christened ruby after HP chief architect Ruby B. Lee, a key designer of the PA-RISC architecture and its first single-chip implementation. This time around, however, we'll take a closer look at ruby's hardware as a comparison point since it will inform some of the choices we'll make running the Macintosh Application Environment. So that I can save some typing, I'm going to liberally abbreviate "PrecisionBook" to "PABook" for the remainder of this article (avoiding "PBook" so we don't confuse it with PowerBooks). RDI's estimate. I haven't bothered trying to recell this one, and although the status LCD claims it's fully charged, it currently lasts maybe a minute or so which is enough to ride out a AC voltage drop and not much else. Under that is a small 15-pin connector for the optional external 3.5" floppy drive which I don't have either, and behind the other door is the external micro 50-pin SCSI-2 port. A diagram sheet shows that an IR transceiver was planned to be next to the SCSI port, but I don't see one on mine. I took some grabs of the boot process before starting the operating system using a different video mode that my Inogeni VGA capture box would tolerate (my Hall scan converter didn't like it either), though this turned the LCD off, so we won't be bringing up the operating system in this configuration. There is no separate service processor; this all runs on the main CPU. SAIC Galaxy family, and the last and fastest-clocked of the 32-bit PA-RISC 1.1 chips (though the earlier PA-7150 and PA-7200 with their comparatively massive caches can easily beat the 132MHz part). Being effectively a hopped-up PA-7100LC, it inherits most of the characteristics of the earlier chip including a two-way superscalar design, bi-endian support, two asymmetric ALUs, a slightly gimped FPU (the "coprocessor" in the POST summary) with greater double precision latency, and MAX-1 multimedia SIMD instructions. It also incorporates the GSC bus controller on-board. Where the PA-7300LC exceeds its ancestor is in its faster clock speeds — from 132 to 180MHz versus 60 to 100MHz — on a slightly longer six-stage pipeline instead of five, and much larger 64K/64K L1 caches (versus just 1K of I-cache) that were on-die for the first time. In keeping with its "consumer" roots the PA-7300LC additionally includes an L2 cache controller like the PA-7100LC, but here as shown on-screen the PABook's L2 is a full megabyte, and up to 8MB was supported. This is particularly notable given L2 cache was rarely a feature with PA-RISC — large L1s were more typical — and it would not be seen again on a PA-RISC CPU until the PA-8800 seven years later. Other improvements include a 96-entry unified translation lookaside buffer (versus 64) and a four-entry instruction lookaside buffer (versus one) specifically for instruction addresses, which also supported prefetching. 
The die was fabbed by HP on a 0.5μm process with 9.2 million transistors, 8 million of which were for the L1 cache which consumed most of its 260 square millimetres. A velociraptor can famously be seen in die photos. And here the PowerBook 3400c has a run for its money. This is the point at which I get conflicted because I'm a big fan of the Power ISA, yet I have a real soft spot for PA-RISC because it was my first job out of college, so this is like trying to pick which of my "children" I like best. Although the 3400c with a 240MHz PowerPC 603e was briefly the "world's fastest laptop," at least according to Apple, on benchmarks this 160MHz PA-7300LC wins handily. The last generation 300MHz 603e got SPEC95 numbers of 7.4/6.1, while the 180MHz PA-7300LC recorded scores of 9.22/9.43, with 9.06/9.35 officially recorded for the 180MHz PABook. If we linearly adjust both figures for clock speed we get 5.92/4.88 versus 8.20/8.38, and even using the lower figures in the PABook's technical manual (7.78/7.39 at 160MHz and 6.49/6.54 at 132MHz) the PABook still triumphs. While this isn't a completely fair fight due to the 603's notoriously underpowered FPU, clock for clock the PA-7300LC could challenge both the Pentium Pro and the piledriver PowerPC 604e; the Alpha 21164 could only beat it by revving to 300MHz. And I say all this as a pro-PowerPC bigot! Processing power isn't everything, of course: the 160MHz PA-7300LC does this with a TDP of 15W, while the 300MHz 603e displaces just four to six watts, and the 240MHz part (fabbed on the same 290nm process) is on the lower end of that range. In real world terms that translated to battery life that was at least twice as long on the 3400c. The PABook normally boots from the drive bay closest to the rear (SCSI ID 0; the others are 1 and 2) and the 4GB drive as shipped is in that position, but it can also boot from an external device (SCSI ID 3 or higher) if necessary. The console path is virtually always GRAPHICS(0) for the on-board Visualize-EG and the keyboard path is likewise PS/2, but this can apply to both an external keyboard or the built-in keyboard, which is internally connected the same way. COnfiguration, with the RDI commands for RDI-specific features like mirroring the LCD, but here we'll be asking for INformation on what's installed. INTERNAL_EG_X800, is directly connected to the GSC, but most of the rear ports are connected to a "Bus Adapter." This is the HP LASI ("LAN SCSI") combo chip updated for the PA-7300LC, implementing an Intel i82C596CA 10Mbit NIC, NCR 53C710 SCSI-2 controller, a 16550 UART for RS-232, a WD16C522-compatible parallel port, PS/2 controllers, floppy drive controller and HP Harmony audio. Not enumerated here, a secondary low-speed bus from the LASI called the PHD bus connects to its 1MB flash boot memory, NVRAM and power supply controller. The LASI only supports one serial port, so a second UART is attached to a "Bus Bridge" to provide the second one. This "Bus Bridge" is Dino, a GSC-to-PCI bridge. mikec, I'd like to ask about your experiences with the hardware: please say hi in the comments or drop me a line at ckaiser at floodgap dawt com. directly on AIX — which also used the same PowerOpen ABI — through a thinner runtime layer called Macintosh Application Services (MAS) exclusively for IBM's operating system. only one Apple computer ever ran AIX. In May 1996 Apple updated MAE to version 3.0 with System 7.5.3. 
This release added compatibility with HP-UX 10.x and made the CPU emulation even faster, primarily through improved handling of condition codes and floating point instructions. It also touted better application compatibility and faster screen updates for users running MAE over a remote X11 session. MAE 3.0 received four point updates up to 3.0.4, badged as "Version 3.0 (Update 4)," which is the final release and the version we'll use. xwd. The installation process is with a shell script and binary installer, both running from within a CDE shell window. Apple offered a trial version so that people could test their software and unlocking the trial limitations requires a license key which you may or may not be able to find on any Archive on the Internet. I won't show the installation process here since it's not particularly interesting or customizeable, but ideally it should be installed to /opt/apple, and even though this version of MAE includes it we're not going to install AppleTalk: ./mae in /opt/apple/bin. The very first run requires us to accept the EULA; there is a specific binary for this and we'll look at it when we take the code apart a bit. /opt? Hang on and we'll get to the on-disk representation. /opt. Again, explanations presently. ruby:/home/spectre/% ls System Folder bin src uploads ruby:/home/spectre/% ls -l System\ Folder/ total 1650 drwxr-xr-x 2 spectre users 1024 Jul 23 21:23 Apple Menu Items -rw-r--r-- 1 spectre users 2846 Jul 23 21:23 Clipboard drwxr-xr-x 2 spectre users 1024 Jul 23 21:23 Control Panels drwxr-xr-x 2 spectre users 1024 Jul 23 21:23 Control Strip Modules drwxr-xr-x 3 spectre users 1024 Jul 23 21:23 Extensions -rw-r--r-- 1 spectre users 35921 Jul 23 21:23 Finder drwxr-xr-x 2 spectre users 1024 Jul 23 21:23 Fonts -rw-r--r-- 1 spectre users 152162 Jul 23 21:23 MAE Enabler -rw-rw-r-- 1 spectre users 18952 Jul 23 21:24 MacTCP DNR drwxr-xr-x 3 spectre users 1024 Jul 23 23:04 Preferences -rw-r--r-- 1 spectre users 72200 Jul 23 21:23 Scrapbook File drwxrwxr-x 2 spectre users 96 Jul 23 21:24 Shutdown Items drwxrwxr-x 2 spectre users 96 Jul 23 21:24 Startup Items -rw-r--r-- 1 spectre users 553762 Jul 23 23:59 System CODE resources) on a native HP-UX Veritas filesystem? The answer is that these files are all AppleSingle, which is to say with their resource and data forks combined, and MAE reads and writes AppleSingle on the fly. There is another interesting folder that gets created. This directory is effectively where the virtual Mac lives. It contains the contents of the virtual Mac's "PRAM" (sm.vpram) plus various databases for files and aliases. The numbered directories require specific explanation. Since each Macintosh volume is its own root, which is certainly not the case in Unix, this directory collects the virtual Mac's volumes here. These aren't symbolic links elsewhere in the filesystem; these are MIVs, or MAE Independent Volumes. They correlate with all the mount points in /etc/fstab by default but any directory can be designated as an MIV, "mounted," and then treated as a volume. We only saw two of them on the desktop because only two of them are "mounted" in /opt/apple/lib/Default_MIV_file, and only those two "volumes" have desktop databases. The home directory is obvious, but /opt was also given a mount because we're running MAE from it and there are various resources in /opt/apple/lib it will try to access. 
(Some of these are global resources and are treated as part of the System Folder, such as fonts, additional standard applications for the Apple menu, keymaps, locales and, of course, the license key.) These MIVs can be renamed and otherwise treated as if they were any other mounted Macintosh fixed volume. Two other hidden files are also present in this directory, .fs_cache and .fs_info, which maintain the virtual Mac's file and volume information respectively. .fs_cache in particular is very important as it is roughly the global equivalent of an HFS catalog file (and, like a real HFS catalog file, is stored on disk as a B-tree), storing similar metadata like type and creator, timestamps and so forth. This file is so important to MAE that Apple distributed a separate tool called fstool to validate and repair it, sort of like MAE's own Disk First Aid from the shell prompt. You'll have also noticed above that the desktop database in spectre and opt is made up of four files. Desktop DB and Desktop DF are present as usual for the bundle database and Finder information respectively, but there are also two more files %Desktop DB and %Desktop DF, named exactly the same except for a percent sign sigil. This is the other way that resource forks can be represented in MAE, as AppleDouble. Here, the data fork and resource fork are split, with the percent sign indicating the resource fork. Let's explore the MAE System 7.5.3 some more before we attempt to install anything. /opt as they appear in the Finder. /opt is read-only to my uid, so I can't write directly to it. If I had permissions, I could change them from the Permissions dialogue, which is MAE's equivalent of chown, chmod and chgrp all in one. You can also view the (composite) System Folder here and see that it looks pretty much like any other System Folder on any other Mac with the exception of the MAE Enabler. SoftwareFPU. Since not all 68K Macs have floating point units, applications are supposed to use Apple's SANE IEEE-754 library which computes the result in software if no FPU is available. Not all software does this, of course (the Magic Cap 1.0 simulator comes to mind), and this is particularly relevant with Power Macs because the 68K emulator only provides a virtual 68LC040. SoftwareFPU, then, is very simple conceptually: it traps F-line instructions intended for the non-existent coprocessor and turns them into SANE calls. This is slow but it means certain software is able to run that otherwise could not. The MAE SoftwareFPU, which Apple licensed from John Neil & Associates and modified for MAE 3.0, goes a bit further. This version implements a fast path where 68K floating point instructions are directly forwarded to MAE, effectively making F-line traps into hypercalls. Apple estimated this was about 50 percent faster than using regular SoftwareFPU. That said, you'll notice that SoftwareFPU is disabled, which is the default. We'll come back to this when we benchmark the emulator. In MAE 3.0, SANE was changed to directly use host FPU instructions (either SPARC or PA-RISC) for the most commonly performed floating point operations. This works for single and double precision and ran substantially faster than MAE 2.0, but it doesn't work for the 68K's 80-bit extended-precision type, where double precision operations are performed instead and converted (but with a corresponding loss of precision). The previous behaviour, where SANE is simply run under emulation, can be restored with the -sane command line option. 
A better solution on Power Macs is Tom Pittman's PowerFPU, which (where possible) uses PowerPC floating point instructions directly rather than SANE. All Power Macs have an FPU, so this works on all Power Macs, and is over ten times faster than SoftwareFPU. xclock or xcal. Frodo Commodore 64 emulator, which I installed in /opt. The terminal window opens, which is important to capture any standard error or output, but otherwise it runs normally outside of MAE. That makes you can use the MAE Finder as ... your desktop. You could make MAE take up the entire screen by passing /opt/apple/bin/mae the appropriate -geometry option and setting the X resource Vuewm*mae*clientDecoration to none, effectively making it rootless, and Apple fully supported and documented doing so. Now you've got a virtual Mac that will launch your native X11 applications as well. Who needs CDE when you've got this? We'll look at another standard control panel that MAE uses for a different purpose in a little while. Meantime, having made a basic survey of the emulator, it's now time to actually run software on it. A benchmark would be a good first test but to do that we need to actually put software on it. thule, my little 128MB Macintosh IIci running NetBSD and Netatalk for AppleShare. I have lots of basic software on here including useful INITs and CDEVs and essential tools like StuffIt Expander. It still runs NetBSD 1.5.2 because I had trouble getting regular AppleTalk DDP working with 1.6 and up, so it's a fun time capsule too. But we don't have AppleTalk in MAE, so how are we going to get files from it? Easy: we're going to download the files from thule with FTP and put them into my home directory while MAE is running. The Finder will see the new files and incorporate them. What about the resource forks? The fact that the files are being served by Netatalk from a non-HFS volume (i.e., BSD FFS) actually makes that easier. Netatalk natively stores anything with a resource fork as AppleDouble, depositing the resource fork itself as a separate file into a hidden directory .AppleDouble. We pull down both the data fork and the resource fork, rename the resource fork with a %, and move them both at the same time into my home directory. On the next Finder update, it sees the "whole" file and it becomes accessible. Mac and moved StuffIt Expander there. We can now work with StuffIt archives and only have to download one file, which saves having to get the resource fork separately. An alternative approach, especially if you are transferring a file directly from an HFS or HFS+ volume, is to turn it into AppleSingle first and copy that over; MAE will use the file as-is. Apple provided a tool for this in later versions of Mac OS X/macOS, though 10.4 and prior, arguably where it would have been most useful because those versions still support the Classic Environment, don't seem to have it. The best alternative there is /Developer/Tools/SplitForks, which doesn't do AppleSingle but does create separate AppleDouble data and resource fork files, so at least you can copy those. We'll get to a somewhat more automatic way of specifically handling Netatalk's AppleDouble directories a bit later. over 23 years ago and here we are. I wrote SieveAhl in Modula-2 using the unfortunately named MacMETH compiler just to be weird, rolling all the Toolbox calls by hand. 
It implements the Sieve of Eratosthenes and a modified version of the FPU-dependent Ahl's Simple Benchmark and issues a score relative to my Macintosh Plus which I have as a reference standard. The main advantage SieveAhl has over other benchmarks is that I wrote it intentionally to run on just about any Mac, even down to System 1.1 (tested in vMac). Here, I'm simply grabbing the StuffIt archive using Internet Explorer 5 for UNIX on the CDE side and saving it into the Mac folder. .sit files isn't too swift on other 68K systems either. We now have a Mac directory that looks like this from the Unix side: Our newly created files in the SieveAhl Folder are now AppleSingle, for example the readme file: We'll get to the rules about when MAE creates AppleSingle and AppleDouble files in a moment. Let's see the numbers we get. Byte in September 1981. It iterates over the interval 0-8190, in which 1,899 primes are expected. Creative Computing, intended originally to evaluate performance and precision differences between various microcomputer BASIC implementations. We don't care about the accuracy or randomness values his benchmark would compute (well, we don't care much), so we just compute those and throw them away. This gets 4,863% the speed of a Mac Plus, which we would expect to be roughly the same because we have no floating point hardware. Repeated runs of both tests were nearly identical. regular SoftwareFPU, not against using it at all. Additionally, MacMETH generates well-behaved code that calls SANE as it should and doesn't emit floating point instructions. How does this compare to a real 68LC040? Conveniently, we have one handy to try it out on! rintintin, my PowerBook 540c with a 33MHz 68LC040 and 12MB of RAM running Mac OS 7.6.1, and the most powerful Blackbird PowerBook sold in the United States (the later Japanese 550c is the same speed, but with a full 68040 and FPU). It was the first PowerBook with any '040 processor, stereo speakers, on-board Ethernet (via AAUI), a trackpad instead of a trackball, twin battery bays and a full-size keyboard. The PowerBook 520/520c and 540/540c came out just a couple months after the first Power Macs and Apple placed the processor onto a daughtercard as a promise that it could be eventually upgraded. As such, the "Ready for PowerPC upgrade" sticker came on these models from the factory, though this particular one is a slightly larger reproduction I printed up a few copies of so I could surreptitiously slap them on the Intel Macs at the Apple Store. Apple nevertheless greatly underestimated demand for the line, mistakenly believing people would rather wait for what eventually was the PowerBook 5300, and the Blackbirds were chronically short-stocked for months. I upgraded this particular unit with an additional 8MB of RAM (on top of the base 4MB) and a SCSI2SD, making it an almost silent unit in operation. The only flaw it has is an iffy cable connection between the display and the top case, which is unfortunately a common problem with these models. Mission: Impossible movie with Tom Cruise and Jon Voight. A regular Blackbird in the standard two-tone grey, most likely a 540c, was what Luther used to block the NOC list transmission on the TGV in the third act. any Blackbird laptop anymore by the time the movie came out, neither computer's model badge is ever visible, though you can at least see a rainbow apple on Luther's. MacLynx, the venerable text browser natively ported to the MacOS. Here we'll run beta 6. 
lynx.cfg to point to our local Crypto Ancienne TLS 1.3 proxy server. However, we don't have a proper text editor installed other than SimpleText. We could certainly grab BBEdit Lite from the server as well, but MAE gives us an alternative. The manual indicates that "[b]y default, MAE stores files (except text files) in AppleSingle format." Text files, however, are stored as AppleDouble. If we look at our directory listing after unStuffing MacLynx, we can see this rule has been followed: You'll notice that all the text files — the readmes, index.html, lynx.cfg and lynxrc — got separate resource forks as AppleDouble, but the main executable did not. Now, the smart ones among you will say, "But wait! The SieveAhl readme file is text, and it was AppleSingle!" That's right — except that file's text is all stored in styled text resources and there's nothing in the data fork at all. MAE seems to content-sniff files to figure out what to do with them, so BinHex files (which are valid text) will be treated as a text file and made AppleDouble, but a read-only SimpleText file with nothing in the data fork will be treated as a binary and made AppleSingle. The AppleDouble control panel allows you to always force storing files as AppleDouble with specific applications and the separately distributed asdtool will convert a Mac file between AppleSingle and AppleDouble from the shell prompt. vi or, for the mentally deranged, emacs — and the resource fork will remain undisturbed. With MacLynx thus configured for the proxy server, we can view modern HTTPS sites inside MAE no problem. move it. That almost sounds like that the MAE desktop doesn't belong to any MIV even though it is, in fact, part of the MIV for your home directory and that's where we had StuffIt Expander: Conveniently, if you open an alias to something on an MIV that's defined but not yet mounted, MAE will automount it on the desktop for you. Our next set of programs will be TattleTech 2.8 and Gestalt.Appl so we can see what's going on under the hood. There are some surprises here. _L at the end of the Device sResource Name is significant because /opt/apple/bin/macd defines six such resources: Display_Video_Apple_MAE_S, Display_Video_Apple_MAE_M, Display_Video_Apple_MAE_L, Display_Video_Apple_MAE_F, Display_Video_Apple_MAE_C1 and Display_Video_Apple_MAE_C2. These apparently correlate to specific resolutions, namely (in the order they appear) "512 x 342 (9" Macintosh)", "640 x 480 (14" Macintosh)", "832 x 624 (17" Macintosh)", "864 x 864 Resolution", "640 x 800 Resolution" and "640 x 640 Resolution". Since we're at 832x624, we get the _L (presumably small, medium, large, full and two custom?) "card." The emulated DeclROM used by these virtual cards is part of the big blob stored in /opt/apple/lib/engine, along with the Toolbox ROM and other goodies. We'll come back to this when we explore its Gestalt selectors. are defined, but neither appears to be supported. MAE naturally supports printing, but only to a "UNIX PostScript printer" (i.e., lpr) or via AppleTalk, and it does not support using the modem port for a modem or even as a serial port. deprecated it for 68K in 1996. allow the Apple Network Server to numbercrunch for connected clients. It should be possible to do something similar with MAE and have the local host do the work, but there are no local AppleTalk interfaces or headers to compile against. QuickDraw is naturally present (no GX or 3D, of course). 
In MAE 3.0 QuickDraw is especially important and we'll get to that when we try playing a couple rather famous games. QuickTime is also present, version 2.5, though not everything is enabled (no software MIDI synthesizer, for example). -noextensions. cith selector ("Sith"? Darth Mac?), which is unique to MAE and is conveniently set to $00000304 (i.e., major version 3, minor version 4). While I don't have MAE 1.0 here to test with, René Ros indicates in his Gestalt reports that it has a cith value of 0, though it does exist there too. I don't know what version MAE 2.0 reports but I'm sure someone is firing it up right now to find out and will post in the comments. To more easily decipher the others we'll turn to Gestalt.Appl and I'll point out the highlights. micn is not interesting for the icon (just generic Mac) but rather the string shown, which is a STR# resource indexed by the value of mach (-16395). The string is "Macintosh ApplicationEnvironment" [sic]. mmu " (note space), but this isn't a surprise for an emulator. Consequently, there is also no virtual memory support within MAE (the host is supposed to handle that). romv) is more interesting. Although a great many Old World Mac ROMs are tagged as version 1917, the particular ROM that MAE is using is from the Quadra 660AV and 840AV because we can find its checksum (5b f1 0f d1) and version (07 7d) at offset $001c0000 in /opt/apple/lib/engine. No other valid checksum and version appears anywhere else in this file, and no true Scotsman Macintosh LC would have used a ROM that recent. Thus, if you get a Gestalt ID of 19 but a ROM version of 1917, that's a pretty good indication you're running under MAE. René's list also shows a ROM version of 1917 for MAE 1.0, so MAE 2.0 almost certainly does as well. sltc insists there aren't any NuBus slots. snhw for the sound hardware, which reports a driver cith. This likewise appears nowhere else and is specific to the MAE emulated audio hardware. Oddly, although MAE 1.0 lacked sound, René's list indicates an snhw of awac which would suggest an AWACS. Let's get back to running some more apps. I'm going to bump up the emulated RAM now because a couple near the end will likely benefit. still sold! — was a bit of a mixed bag on MAE. 2.1.2 was the 68K version I had on thule, so I tried that first. It starts and runs fine, but when I actually tried to download anything with this version of Fetch it locked up the entire emulated Mac as soon the file was transferred. That said, I'm not sure if this is a fault of MAE or Fetch because at the exact same point of the exact same file with the exact same version of Fetch on the 540c, it abruptly dropped the connection and threw an error message — though I note transfer speeds were faster on MAE, probably because of the better hardware, right up until it hit the wall. Fortunately the later Fetch 3.0.1 behaves correctly and Apple even offered that specific version for download from the MAE website. There are specific advantages to using Fetch in MAE because of its transparent support for AppleSingle transfers. Still, grabbing files with MacLynx works fine too (hurray for eating your own dogfood), so I'll mostly use that. ssheven, which works great on my real Macs, crashes on MAE and I'm not sure why. ssheven, though I'd have to get a better debugger up on it to figure it out. ssh, and that would even be more useful. Instead, we should try something really important next. Apple IIgs versions are derived from the Mac version. 
This port was released in 1994 and the "Accelerated for Power Macintosh" and "System 7 Savvy" stickers give the rough timeframe. It requires a 25MHz 68030 or better and is a fat binary. Given that the PABook doesn't have a floppy drive, I copied over this version by simply Stuffing the installed folder on my Power Macintosh 7300 and downloading it over shell FTP (NCSA Telnet on the Mac contains a very basic FTP server which is handy for this). Wolfenzoom (Gopher link) that will scale-blit the 320x200 to 640x400, and I used to use it on my unaccelerated desktop Macintosh IIci when I first bought this game. However, since I'm all about pushing my luck, let's try the highest resolution. unchecked, the game will try to draw directly to video memory. This doesn't always work, and as you can see in this screenshot, not all of the screen was updating properly in this mode. This observation will become relevant when we try running our next game. You do see where this is going, don't you? does use QuickDraw, and appears correctly, but ... thule, but copying Word 5.1 over was a much bigger situation than simply grabbing a single file and its resource fork; we had a number of double-forked files we needed to move en masse. This Perl script, which works on both Perl 5 and 4.036, will iterate over a copy and move Netatalk's AppleDouble resource files into the proper location for MAE, resolving ambiguities in filenames if needed. Call it with find [directory] -name '.AppleDouble' -print | perl ad2mae to run. Only run this on a copy! When everything was in the right place, I moved the directory into my Mac folder and it was ready to go. ad2mae from the MAE Finder, the Finder determined it was a text file and opened it up in vi in a CDE window ready for editing. Not bad! As MAE 3.0 specifically advertised that it was faster than previous versions over remote X, let's test how well that works from my POWER9 Raptor Talos II in Fedora 42. Being the Wayland refusenik I am, I still run KDE Plasma in X11, so with xhost set appropriately and AllowByteSwappedClients enabled (because the POWER9 is running Power ISA little-endian and the PABook is running big) we should be able to connect: actual Command and Option keys, though (I use a white A1048 USB Apple Keyboard with the Quad G5 and the Talos II). makes it tick. DLOG modal when MAE is formatting the TIV. DLOGs for the MAE Toolbar help we saw earlier. DLOG is one of the modals for the floppy/CD mounter. DITL looks like it's part of a credits easter egg, though I haven't figured out yet how to trigger it. I'm assuming the dog is named Rosco ("In Memory of Rosco - 10/5/96"). Also note the dog's Dr Seuss hat, a callback to MAE's "Cat-in-the-Hat" codename. Here's the MAE 3.0 development team: Peter Blas, Michael Brenner, Matthew Caprile, Mary Chan, Bill Convis, Jerry Cottingham, Ivan Drucker, Gerri Eaton, Tim Gilman, Gary Giusti, Mark Gorlinsky, John Grabowski, Cindi Hollister, Richard W. Johnson, John Kullmann, Tom Molnar, John Morley, Stephen Nelson, Michael Press, Jeff Roberts, Shinji Sato, Marc Sinykin, Earl Wallace, Gayle Wiesner. This list of credits will show up again later. PICTs. I'm not sure what they refer to. Notice that the Toolbar Menu when you use the third mouse button is actually just a bunch of PICT resources. There are some other interesting things of note when we start going through the binaries. I separately extracted the files from the installer packages (they're just cpio archives) to preserve their time stamps for analysis. 
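(As an aside, unpacking a cpio archive while preserving its timestamps is a one-liner; the archive name below is made up, but the flags are standard cpio ones.)

mkdir extracted && cd extracted
cpio -idmv < ../mae_package.cpio   # hypothetical filename: -i extract, -d create directories, -m keep modification times, -v verbose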
Let's look at everything that's there and then dig into the most notable individual files. All of the core binaries have a modification date of January 23, 1997, presumably the RTM date. Of the library files, data and engine are probably the most notable. We will look at those seprately. KeymapDepotDB is where the default keymaps MAE uses are kept, and MajorUpdate is instructions to the installer script for how to perform an upgrade to a new major release. Since there's no MAE 4.0, this presumably will never be used again. The manual does not document what btree does, and it has only a single readable string in it: Copyright 1991 Apple Computer, Inc. All Rights Reserved. Ricardo Batista The rest are character set mappings and the EULAs in graphic form for both the MAE demo and the full version (with X bitmap buttons for accept/don't accept in all languages except English): After installation Default_MIV_file also lives in /opt/apple/lib, and optionally AliasList for default aliases to appear when starting MAE. To reduce the size of individual users' System Folders, a substantial portion of the composite MAE System Folder is pulled from the shared directory, and other pieces from /opt/apple/lib/data. /opt/apple/lib/data contains the rest of the System Folder, with common pieces like the System 7.5 jigsaw puzzle (licensed by Apple from Captain's Software), note pad (Light Software), scrapbook, menu bar clock (from Steve Christensen's SuperClock!), desktop patterns, compressed System resources and standard INITs and CDEVs. /opt/apple/sys, however, which we won't do much more with here, is the master template for creating each user's own System Folder. We don't need to look at it again because we already saw my own copy of it. /opt/apple/lib/engine is a mashup of many miscellaneous tools. There are various conglomated binaries in it ratted out by the presence of .text, .data and .bss, plus the fake DeclROM for the virtual video card and the Quadra 660AV/840AV Toolbox ROM it uses. There are also many other interesting strings, and being a 10MB file, there are a lot of them: /dev/null 2>&1 Move failed: (%d) [...] cGetDevPixmap, could not emulate Macintosh color table (%d), exiting. cGetDevPixmap, unknown depth %d in encountered. doVideo, error installing video driver, exiting. doVideo, error initializing NewGDevice, exiting. doVideo, could not emulate any Macintosh video devices. Insufficient shared memory or swap space. Using malloc instead. Performance could be increased by adding more swap space and/or configuring more shared memory into the kernel. ERROR: MAE could not allocate the video screen (%dx%d, %dk). Please increase swap space or kill other processes before restarting MAE. Could not malloc new screen buffer, restoring previous size. Not enough shared memory, using malloc instead. Performance would be increased by configuring more shared memory into the kernel. [...] BUGS on MacPlus/SE, NuMc on later [...] Got the OKAY to clear %s (0x%02x bytes at pram 0x%02x) - [...] QDtoGC: (penMode & kHilitePenModeMask); *punting* QDtoGC: penmode=invert (but not well matched); *punting* [...] Aae: AAAAARGH! Fatal X Error [...] Copyright (c) 1987 Apple Computer, Inc., 1985 Adobe Systems Incorporated, 1983-87 AT&T-IS, 1985-87 Motorola Inc., 1980-87 Sun Microsystems Inc., 1980-87 The Regents of the University of California, 1985-87 Unisoft Corporation, All Rights Reserved. [...] 36 41 2 1 c #FFFFFFFFFFFF . c #000000000000 ... .... ..... ..... ..... ..... .... .. ...... ...... .......... .......... 
....................... ......................... ....................... ....................... ....................... ...................... ...................... ...................... ...................... ....................... ...................... ....................... ......................... ....................... ....................... ..................... .................... ................... ........ ........ .... .... 36 41 9 1 c #FFFFFFFFFFFF c #0000BBBB0000 c #FFFFFFFF0000 c #FFFF66663333 c #FFFF64640202 c #DDDD00000000 c #999900006666 c #999900009999 c #00000000DDDD ... .... ..... ..... ..... ..... .... .. ...... ...... .......... .......... ....................... ........................ XXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXX oOOOOOOOOOOOOOOOOooooo oOOOOOOOOOOOOOOOOOOOOo oOOOOOOOOOOOOOOOOOOOOo oOOOOOOOOOOOOOOOOOOOOOo +++++++++++++++++++++++ ++++++++++++++++++++++ +++++++++++++++++++++++ ++@+@+@+@+@+@+@+@+@+@+@+ @@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@# $$$$$$$$$$$$$$$$$$$$ $$$$$$$$$$$$$$$$$$ $$$$$$$$ $$$$$$$ $$$$ $$$$ # CREATOR: MAE 3.0 %d %d %3d %3d %3d GIF87a $Id: ximage_high.c,v 3.6 1996/09/11 08:19:50 johng Exp $ $Id: ximage_icon.c,v 3.0 1995/03/23 20:51:23 cvs Exp $ X36 41 2 1 c #FFFFFFFFFFFF . c #000000000000 ... .... ..... ..... ..... ..... .... .. ...... ...... .......... .......... ....................... ......................... ....................... ....................... ....................... ...................... ...................... ...................... ...................... ....................... ...................... ....................... ......................... ....................... ....................... ..................... .................... ................... ........ ........ .... .... 36 41 9 1 c #FFFFFFFFFFFF c #0000BBBB0000 c #FFFFFFFF0000 c #FFFF66663333 c #FFFF64640202 c #DDDD00000000 c #999900006666 c #999900009999 c #00000000DDDD ... .... ..... ..... ..... ..... .... .. ...... ...... .......... .......... ....................... ........................ XXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXX oOOOOOOOOOOOOOOOOooooo oOOOOOOOOOOOOOOOOOOOOo oOOOOOOOOOOOOOOOOOOOOo oOOOOOOOOOOOOOOOOOOOOOo +++++++++++++++++++++++ ++++++++++++++++++++++ +++++++++++++++++++++++ ++@+@+@+@+@+@+@+@+@+@+@+ @@@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@@@@# $$$$$$$$$$$$$$$$$$$$ $$$$$$$$$$$$$$$$$$ $$$$$$$$ $$$$$$$ $$$$ $$$$ PaintIconIntoImage, unknown message type %d. %d %d %d Unable to get Icon window id from MACD, exiting. The SuperROM SuperTeam: Central: Ricardo Batista, Rich Biasi, Philip Nguyen, Roger Mann Kurt Clark, Chas Spillar, Paul Wolf, Clinton Bauder Giovanni Agnoli and Debbie Lockett RISC: Scott Boyd, Tim Nichols and Steve Smith MSAD: Jeff Miller and Fred Monroe Cyclone: Tony Leung, Greg Schroeder, Mark Law, Fernando Urbina Dan Hitchens, Jeff Boone, Craig Prouse, Eric Behnke Mike Bell, Mike Puckett, William Sheet, Robert Polic and Kevin Williams Thankzzz to all who contributed to past ROMs...and System 7.x legal, which is the program that requires you to accept the EULA on first run. legal in particular looks like it's just an embedded Tcl/Tk script. It's also notable that appleping, appletalk, atlookup, asdtool and legal still have symbol tables. The AppleTalk tools in particular list functions related to DDP, LAP, NBP and RTMP, but you'd expect that (legal has symbols for Tcl/Tk instead). 
Interestingly, a few of the strings in appletalk suggest that LocalTalk might have been, or at least was considered for being, supported at one time. Although mae is the binary that you run directly, macd is what handles a lot of the background stuff and mae communicates with it over IPC. The administrator's manual is (probably intentionally) vague about its exact functions, saying only that it "is a daemon that runs whenever apple/bin/mae runs and helps MAE interact with the UNIX environment. It also cleans up after mae if the mae process is killed." We can get a little better idea of what it does from its own set of strings at least: And now mae itself. Some of these strings and components are duplicative of what we saw in /opt/apple/lib/engine. I'm not sure why they're being used twice. Then we start getting into an unusual section. ] [-o >outfile>] [-step] [-remote_debug] decimalinetport hexinetaddr <A/UX COFF file> [<arg1>] ... [<argn>] emulator: cannot uname(2) local system errno = %d TBATDEBUG SOFTMAC_RESTARTING_NOW TBWARN Midnight Emulator version %s Remote Debug version built %s at %s, loading %s [...] Midnight Debugger Unless otherwise specified, the following rules apply: - Commands are single-letter and case matters a whole lot! - Whitespace between the command and arguments are allowed. - Values are hex by default. - Addresses are automatically forced to be on even address boundaries. - '<68kaddr>' is an address which will be offsetted automatically by the emulator so it lies within the 68k image. For example, on this HP system a <68kaddr> of 0x4600 is a real address of 0x%x. - '<emuaddr>' is an address which is not mucked with in any manner. It can be any address within the emulator address space. [...] HUGGERZ What About Bob? ___ ____ ___ ____( \ .-' `-. / )____ (____ \_____ / (O O) \ _____/ ____) (____ `-----( ) )-----' ____) (____ _____________\ .____. /_____________ ____) (______/ `-.____.-' \______) *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug**Hug**Hug* *Hug* *Hug* *Hug* *Hug**Hug**Hug* *Hug* *Hug* *Hug* *Hug**Hug* *Hug* *T3W* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* *Hug* Here goes a big hug from the MAE Team!!!!! [...] mae binary has a second executable binary in it, called the Midnight Emulator. And we can run it! ] [-o >outfile>] [-step] [-remote_debug] decimalinetport hexinetaddr <A/UX COFF file> [<arg1>] ... [<argn>] ruby:/opt/apple/bin/% ./midnight -h Midnight Emulator version 12:02:00 Remote Debug version built Mar 25 1997 at 10:26:25, loading -h must set LM_LICENSE_FILE env var for midnight ruby:/opt/apple/bin/% setenv LM_LICENSE_FILE /opt/apple/XXXXXXX ruby:/opt/apple/bin/% ./midnight -h Midnight Emulator version 12:02:00 Remote Debug version built Mar 25 1997 at 10:26:25, loading -h Unable to open file -h spindler, my clock-chipped Quadra 800. We'll fire it up in A/UX 3.1. spindler:/bin/% uname -a A/UX spindler 3.1 SVR2 mc68040 spindler:/bin/% file sync sync: COFF object paged executable spindler:/bin/% ls -l sync -rwxr-xr-x 1 bin bin 764 Feb 4 1994 sync spindler:/bin/% dis sync **** DISASSEMBLER **** disassembly for sync section .text $ a8: 23c0 0040 0150 mov.l %d0,0x400150.l $ ae: 518f subq.l &8,%sp $ b0: 2eaf 0008 mov.l 0x8(%sp),(%sp) $ b4: 41ef 000c lea 0xc(%sp),%a0 [...] 
$ 144: 480e ffff fffc link.l %fp,&-4 $ 14a: 4e71 nop $ 14c: 4e5e unlk %fp $ 14e: 4e75 rts /bin/sync looks like a very small, yet valid A/UX binary we can pull over and see if Midnight will run it. First, a quick negative control by running it on itself: The complaint that it's neither COFF nor engine suggests that its normal state is to be running /opt/apple/lib/engine, though this isn't too interesting, since we would assume MAE does that ordinarily. Regardless, it doesn't like our real A/UX COFF binary, even though it does try to load it. It is particularly strange that mae has a modification date of January 23, 1997, but the Midnight Emulator claims to have been built on March 25, over two months later. More explorations to come, especially into whether this could help to debug MAE itself. At the end of this extensive strange trip, I found I rather liked the way MAE worked, and the integration features with HP-UX in particular really tempt me to try running it as my primary environment on top of CDE or VUE. Its performance was surprisingly good and I think if I had the choice back in the day between buying new a 3400c or this thing, even as noisy, heavy and costly as it is, I might strongly have considered buying the latter. Also, I bet MAE would run like a bat out of hell on my maximally configured C8000 and some additional explorations of the Macintosh Application Environment, possibly also on one of my SAIC Galaxys with a floppy drive and an earlier version of HP-UX, might be the subject of a future article. The MAE team clearly didn't think 3.0 was the end of MAE; in the MAE 3.0 white paper's "Future Directions" they indicate that support for additional hardware platforms and "additional UNIX systems" are "being considered." They acknowledge the fact it doesn't run Power Mac software, even saying that "Apple is also investigating the viability of supporting PowerPC Macintosh applications on MAE." However, the document also adds that "Apple wants to ensure that the performance of RISC-on-RISC emulation will meet customer requirements before committing to this development effort." It's not clear if any work on this was ever done. In the wake of Apple's purchase of NeXT in 1997 there was a mention in Informationweek that MAE would be ported to NeXTSTEP as a solution for legacy applications, but this would have been a limiting choice because of the existing PowerPC software that people wanted to run and I don't think it was ever a truly serious proposal. Although I couldn't find anything obvious in the trade rags about exactly when MAE was cancelled, I suspect Gil Amelio did it at Steve Jobs' suggestion after the buyout, like what happened with the Apple Network Server and OpenDoc. After all, MAE had just become superfluous after Apple adopted Rhapsody as the future Mac OS: now that Apple had its own Unix, there was no reason to support anyone else's. Nevertheless, although the Classic Environment in PowerPC versions of Rhapsody and Mac OS X (nicknamed the "Blue Box") is not a direct descendant of MAE, it's very possible that MAE informed its design. In practice, Classic is actually closer to MAS in concept in that it runs native PowerPC code directly in the so-called "problem state" on a paravirtualized Mac OS 8 or 9, using the standard ROM 68K emulator for 68K applications. Classic is less flexible than MAE in that only one instance can be running on any one Power Mac and only one user can be running it, but it was never going to be the future anyway.

High quality, low filesize GIFs

While the GIF format is a little on the older side, it’s still a really handy format in 2025 for sharing short clips where an actual video file might have some compatibility issues. For instance, I find that when you just want a short little video on your website, a GIF is still so handy versus a video, where some browsers will refuse to autoplay them, or seem like they’ll autoplay them fine until Low Battery Mode is activated, etc. With GIFs it’s just… easy, and sometimes easy is nice. They’re super handy for showing a screen recording of a cool feature in your app, for instance.

What’s not nice is the size of GIFs. They have a reputation of being absolutely enormous from a filesize perspective, and they often are, but that doesn’t have to be the case: you can be smart about your GIF and optimize its size substantially. Over the years I’ve tried lots of little apps that promise to help to no avail, so I’ve developed a little script to make this easier that I thought might be helpful to share.

Naive approach

Let’s show where GIFs get that bad reputation so we can have a baseline. We’ll use trusty ol’ ffmpeg (in the age of LLMs it is a super handy utility), which, if you don’t have it already, you can install via brew install ffmpeg. It’s a handy (and in my opinion downright essential) tool for doing just about anything with video. For a video we’ll use this cute video of some kittens I took at our local animal shelter. It’s 4K, 30 FPS, 5 seconds long, and thanks to its H265/HEVC video encoding it’s only 19.5 MB. Not bad! Let’s just chuck it into ffmpeg and tell it to output a GIF and see how it does.

ffmpeg -i kitties.mp4 kitties.gif

Okay, let that run and- oh no. For your sake I’m not even going to attach the GIF here in case folks are on mobile data, but the resulting file is 409.4MB. Almost half a gigabyte for a 5 second GIF of kittens. We gotta do better.

Better

We can do better. Let’s throw a bunch of confusing parameters at ffmpeg (that I’ll break down) to make this a bit more manageable.

ffmpeg -i kitties.mp4 -filter_complex "fps=24,scale=iw*sar:ih,scale=1000:-1,split[a][b];[a]palettegen[p];[b][p]paletteuse=dither=floyd_steinberg" kitties2.gif

Okay, lot going on here, let’s break it down.

fps=24: we’re dropping down to 24 fps from 30 fps; many folks upload full YouTube videos at this framerate, so it’s more than acceptable for a GIF.
scale=iw*sar:ih: sometimes video files have weird situations where the aspect ratio of each pixel isn’t square, which GIFs don’t like, so this is just a correction step so that doesn’t potentially trip us up.
scale=1000:-1: we don’t need our GIF to be 4K, and I’ve found 1,000 pixels across to be a great middle ground for GIFs. The -1 at the end just means scale the height to the appropriate value rather than us having to do the math ourselves.
The rest is related to the color palette: we’re telling ffmpeg to scan the entire video to build an appropriate color palette up, and to use the Floyd-Steinberg algorithm to do so. I find this algorithm gives us the highest quality output (which is also handy for compressing it more in further steps).

This gives us a dang good looking GIF that clocks in at about 10% the file size at 45.8MB. (Link to GIF in lieu of embedding directly.) Nice!

Even better

ffmpeg is great, but where it’s geared toward videos it doesn’t do every GIF optimization imaginable.
Even better

ffmpeg is great, but where it's geared toward videos it doesn't do every GIF optimization imaginable. You could stop where we are and be happy, but if you want to shave off a few more megabytes, we can leverage gifsicle, a small command line utility that is built around optimizing GIFs. We'll install gifsicle via brew install gifsicle and throw our GIF into it with the following:

gifsicle -O3 --lossy=65 --gamma=1.2 kitties2.gif -o kitties3.gif

So what's going on here?

O3 is essentially gifsicle's most efficient mode, doing fancy things like delta frames, so changes between frames are stored rather than each frame separately.
lossy=65 defines the level of compression; 65 has been a good middle ground for me (200 I believe is the highest compression level).
gamma=1.2 is a bit confusing, but essentially the gamma controls how the lossy parameter reacts to (and thus compresses) colors. 1 will allow it to be quite aggressive with colors, while 2.2 (the default) is much less so. Through trial and error I've found 1.2 causes nice compression without much of a loss in quality.

The resulting GIF is now 23.8 MB, shaving a nice additional 22 MB off, so we're now at a meagre 5% of our original filesize. That's a lot closer to the 4K, 20 MB input, so for a GIF I'll call that a win. And for something like a simpler screen recording it'll be even smaller!

Make it easy

Rather than having to remember that command or come back here and copy paste it all the time, add the following to your ~/.zshrc (or create it if you don't have one already):

gifify() {
  # Defaults
  local lossy=65 fps=24 width=1000 gamma=1.2

  while [[ $# -gt 0 ]]; do
    case "$1" in
      --lossy) lossy="$2"; shift 2 ;;
      --fps) fps="$2"; shift 2 ;;
      --width) width="$2"; shift 2 ;;
      --gamma) gamma="$2"; shift 2 ;;
      --help|-h)
        echo "Usage: gifify [--lossy N] [--fps N] [--width N] [--gamma VAL] <input video> <output.gif>"
        echo "Defaults: --lossy 65 --fps 24 --width 1000 --gamma 1.2"
        return 0
        ;;
      --) shift; break ;;
      --*) echo "Unknown option: $1" >&2; return 2 ;;
      *) break ;;
    esac
  done

  if (( $# < 2 )); then
    echo "Usage: gifify [--lossy N] [--fps N] [--width N] [--gamma VAL] <input video> <output.gif>" >&2
    return 2
  fi

  local in="$1"
  local out="$2"
  local tmp="$(mktemp -t gifify.XXXXXX).gif"
  trap 'rm -f "$tmp"' EXIT

  echo "[gifify] FFmpeg: starting encode → '$in' → temp GIF (fps=${fps}, width=${width})…"
  if ! ffmpeg -hide_banner -loglevel error -nostats -y -i "$in" \
    -filter_complex "fps=${fps},scale=iw*sar:ih,scale=${width}:-1,split[a][b];[a]palettegen[p];[b][p]paletteuse=dither=floyd_steinberg" \
    "$tmp"
  then
    echo "[gifify] FFmpeg failed." >&2
    return 1
  fi

  echo "[gifify] FFmpeg: done. Starting gifsicle (lossy=${lossy}, gamma=${gamma})…"
  if ! gifsicle -O3 --gamma="$gamma" --lossy="$lossy" "$tmp" -o "$out"; then
    echo "[gifify] gifsicle failed." >&2
    return 1
  fi

  local bytes
  bytes=$(stat -f%z "$out" 2>/dev/null || stat -c%s "$out" 2>/dev/null || echo "")
  if [[ -n "$bytes" ]]; then
    local mb
    mb=$(LC_ALL=C printf "%.2f" $(( bytes / 1000000.0 )))
    echo "[gifify] gifsicle: done. Wrote '$out' (${mb} MB)."
  else
    echo "[gifify] gifsicle: done. Wrote '$out'."
  fi
}

This will allow you to easily call it as gifify <input-filename.mp4> <output-gifname.gif> and default to the values above, or, if you want to tweak them, you can use any of the optional parameters, as in gifify --fps 30 --gamma 1.8 --width 600 --lossy 100 <input-filename.mp4> <output-gifname.gif>. For instance:

# Using default values we used above
gifify cats.mp4 cats.gif

# Changing the lossiness and gamma
gifify --lossy 30 --gamma 2.2 cats.mp4 cats.gif

Much easier. May your GIFs be beautiful and efficient.
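If you ever have a whole folder of recordings to convert, a tiny loop on top of gifify handles that too. This is just a sketch, assuming the function above is already loaded in your shell and the clips are .mp4 files:

# Convert every .mp4 in the current directory using the default gifify settings
for f in *.mp4; do
  gifify "$f" "${f%.mp4}.gif"
done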

Your Computer Interviewed Chris Curry (1981)

Chris talks about his work with Clive Sinclair and Acorn Computers with a little BBC Micro.
