Blog post that probably has an audience of one, myself.

Introduction Zephyr Ravenna - Confusing Information Two PCBs - Control Board & Switch Assembly Switch of the Breaker!!! Glass Canopy Removal Duct Cover Removal Swapping the Control Board Reassembly Conclusion

Introduction

Our kitchen has a Zephyr Ravenna kitchen hood that started to behave erratically:

the LED strip didn’t want to switch off entirely
the buttons usually worked, but sometimes didn’t

The hood was still usable, but it didn’t give us that warm and fuzzy feeling. I tried to power cycle the thing by toggling the power breaker, but that didn’t help. In fact, a few times it made things worse: all the blue indicator LEDs would light up for 15 minutes before it went back to almost working. I hate working on appliances that are still functional enough: yes, it’s possible that I can fix it, but there’s also a considerable chance that I’ll end up with something worse, and that we’ll need to call in a $$$ technician. But my...
11 months ago


More from Electronics etc…

DSLogic U3Pro16 Review and Teardown

Introduction The DSLogic U3Pro16 In the Box Probe Cables and Clips The Controller Hardware The Input Circuit Impact of Input Circuit on Circuit Under Test Additional IOs: External Clock, Trigger In, Trigger Out Software: From Saleae Logic to PulseView to DSView Installing DSView on a Linux Machine DSView UI Streaming Data to the Host vs Local Storage in DRAM Triggers Conclusion References Footnotes

Introduction

The year was 2020 and offices all over the world shut down. A house remodel had just started, so my office moved from a comfortably air-conditioned corporate building to a very messy garage. Since I’m in the business of developing and debugging hardware, a few pieces of equipment came along for the ride, including a Saleae Logic Pro 16. I had the unit for work stuff, though I may once in a while have used it for some hobby-related activities too. There’s no way around it: Saleae makes some of the best USB logic analyzers around. Plenty of competitors have matched or surpassed their digital features, but none have the ability to record the 16 channels in analog format as well. After corporate offices reopened, the Saleae went back to its original habitat and I found myself without a good 16-channel USB logic analyzer. Buying a Saleae for myself was out of the question: even after the $150 hobbyist discount, I can’t justify the $1350 price tag. After looking around for a bit, I decided to give the DSLogic U3Pro16 from DreamSourceLab a chance. I bought it on Amazon for $299. (Click to enlarge) In this blog post, I’ll look at some of the features, my experience with the software, and I’ll also open it up to discover what’s inside.

The DSLogic U3Pro16

The DSLogic series currently consists of 3 logic analyzers:

the $149 DSLogic Plus (16 channels)
the $299 DSLogic U3Pro16 (16 channels)
the $399 DSLogic U3Pro32 (32 channels)

The DSLogic Plus and U3Pro16 both have 16 channels, but the acquisition memory of the Plus is only 256Mbits vs 2Gbits for the Pro, and it has to make do with USB 2.0 instead of a USB 3.0 interface, a crucial difference when streaming acquisition data straight to the PC to avoid the limitations of the acquisition memory. There’s also a difference in sample rate, 400MHz vs 1GHz, but that’s not important in practice. The only functional difference between the U3Pro16 and U3Pro32 is the number of channels. It’s tempting to go for the 32 channel version, but I’ve rarely had the need to record more than 16 channels at the same time and if I do, I can always fall back to my HP 1670G logic analyzer, a pristine $200 flea market treasure with a whopping 136 channels [1]. So the U3Pro16 it is!

In the Box

The DSLogic U3Pro16 comes with a nice, elongated hard case. Inside, you’ll find:

the device itself. It has a slick aluminum enclosure.
a USB-C to USB-A cable
5 4-way probe cables and 1 3-way clock and trigger cable
18 test clips

Probe Cables and Clips

You read it right, my unit came with 5 4-way probe cables, not 4. I don’t know if DreamSourceLab added one extra in case you lose one or if they mistakenly included one too many, but it’s good to have a spare. The cables are slightly stiffer than those that come with a Saleae, but not to the point that it adds a meaningful additional strain to the probe point. They’re stiffer because each of the 16 probe wires carries both signal and ground, probably as a thin coaxial cable, which lowers the inductance of the probe and reduces ringing when measuring signals with fast rise and fall times.
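To put a rough number on that ringing claim, here’s a back-of-the-envelope sketch. The inductance and capacitance values are my own illustrative assumptions, not DSLogic specs: a plain probe wire and the analyzer input form an LC tank, and lowering the inductance pushes the resonance, and thus the ringing, to higher frequencies.

import math

# Illustrative values only (not from any datasheet): ~15 cm of plain
# probe wire has on the order of 150 nH of loop inductance; a logic
# analyzer input is on the order of 10 pF.
L = 150e-9  # wire inductance, henries (assumed)
C = 10e-12  # input capacitance, farads (assumed)

# Resonance frequency of the LC tank formed by probe wire and input.
f_ring = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"ringing frequency ~ {f_ring / 1e6:.0f} MHz")  # ~130 MHz

# A coaxial probe wire with its own ground return has a much smaller
# loop inductance, which moves f_ring up and reduces the overshoot.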
In terms of quality, the probe cables are a step up from the Saleae ones. The case is long enough so that the probe cables can be stored without bending them. The quality of the test clips is not great, but they are no different than those of the 5 times more expensive Saleae Logic 16 Pro. Both are clones of the HP/Agilent logic analyzer grabbers that I got from eBay and will do the job, but I much prefer the ones from Tektronix. The picture below shows 4 different grabbers. From left to right: Tektronix, Agilent, Saleae and DSLogic ones. Compared to the 3 others, the stem of the Tektronix probe is narrow, which makes it easier to place multiple ones next to each other on fine-pitch pin arrays. If you’re thinking about upgrading your current probes to Tektronix ones: stay away from fakes. As I write this, you can find packs of 20 probes on eBay for $40 (incl. shipping), so around $2 per probe. Search for “Tektronix SMG50” or “Tektronix 020-1386-01”. Meanwhile, you can buy a pack of 12 fake ones on Amazon for $16, or $1.30 apiece. They work, but they aren’t any better than the probes that come standard with the DSLogic. Fake probe on the left, Tek probe on the right. The stem of the fake one is much thicker and the hooks are different too. The Tek probe has rounded hooks with a sharp angle at the tip: Tektronix hooks. The hooks of a fake probe are flat and don’t attach nearly as well to their target: Fake hooks. If you need to probe targets with a pitch that is smaller than 1.25mm, you should check out these micro clips that I reviewed ages ago.

The Controller Hardware

Each cable supports 4 probes and plugs into the main unit with 8 0.05” pins in a 4x2 configuration, one pin for the signal, one pin for ground. The cable itself has a tiny PCB sticking out that slots into a gap of the aluminum enclosure. This way it’s not possible to plug in the cable incorrectly… unlike the Saleae. It’s great. When we open up the device, we can see an Infineon (formerly Cypress) CYUSB3014-BZX EZ-USB FX3 SuperSpeed controller. A Saleae Logic Pro uses the same device. These are your go-to USB interface chips when you need a microcontroller in addition to the core USB3 functionality. They’re relatively cheap too, you can get them for $16 in single-digit quantities at LCSC.com. The other side of the PCB is much busier. (Click to enlarge) The big ticket components are:

a Spartan-6 XC6SLX16 FPGA Responsible for data acquisition, triggering, run-length encoding/compression, data storage to DRAM, and sending data to the CYUSB3014. A Saleae Logic 16 Pro has a smaller Spartan-6 LX9. That makes sense: its triggering options aren’t as advanced as the DSLogic’s and since it lacks external DDR memory, it doesn’t need a memory controller on the FPGA either.
a DDR3-1600 DRAM It’s a Micron MT41K128M16JT-125, marked D9PTK, with 2Gbits of storage and a 16-bit data bus.
an Analog Devices ADF4360-7 clock generator I found this a bit surprising. A Spartan-6 LX16 FPGA has 2 clock management tiles (CMT) that each have 1 real PLL and 2 DCMs (digital clock managers) with delay locked loop, digital frequency synthesizer, etc. The VCO of the PLL can be configured with a frequency up to 1080 MHz, which should be sufficient to capture signals at 1GHz, but clearly there was a need for something better. The ADF4360-7 can generate an output clock as fast as 1800MHz.

There’s obviously an extensive supporting cast:

a Macronix MX25R2035F serial flash This is used to configure the FPGA.
an SGM2054 DDR termination voltage controller
an LM26480 power management unit It has two linear voltage regulators and two step-down DC-DC converters.
two clock oscillators: 24MHz and 19.2MHz
a TI HD3SS3220 USB-C mux This is the glue logic that makes it possible for USB-C connectors to be orientation independent.
an SP3010-04UTG for USB ESD protection Marked QH4.

Two 5x2 pin connectors J7 and J8 on the right side of the PCB are almost certainly used to connect programming and debugging cables to the FPGA and the CYUSB3014. (Click to enlarge)

The Input Circuit

I spent a bit of time Ohm-ing out the input circuit. Here’s what I came up with: The cable itself has a 100k Ohm series resistance. Together with a 100k Ohm shunt resistor to ground at the entrance of the PCB, it acts as a divide-by-two resistive divider. The series resistor also limits the current going into the device. Before passing through a 33 Ohm series resistor that goes into the FPGA, there’s an ESD protection device. I’m not 100% sure, but my guess is that it’s an SRV05-4D-TP or some variant thereof. I’m not 100% sure why the 33 Ohm resistor is there. It’s common to have this type of resistor on high speed lines to avoid reflections, but since there’s already a 100k resistor in the path, I don’t think that makes much sense here. It might be there for additional protection of the ESD structure that resides inside the FPGA IOs? A DSLogic has a fully programmable input threshold voltage, so where’s the opamp to compare the input voltage against this threshold voltage? (There is such a comparator on a Saleae Logic Pro!) The answer to that question is: “it’s in the FPGA!” FPGA IOs can support many different I/O standards: single-ended ones, think CMOS and TTL, and a whole bunch of differential standards too. Differential protocols compare a positive and a negative version of the same signal, but nothing prevents anyone from assigning a static value to the negative input of a differential pair and making the input circuit behave as a regular single-ended input with a programmable threshold. Like this: There is plenty of literature out there about using the LVDS comparator in single-ended mode. It’s even possible to create pretty fast analog-to-digital converters this way, but that’s outside the scope of this blog post.

Impact of Input Circuit on Circuit Under Test

7 years ago, OpenTechLab reviewed the DSLogic Plus, the predecessor of the DSLogic U3Pro16. Joel spent a lot of time looking at its input circuit. He mentions a 7.6k Ohm pull-down resistor at the input, different than the 100k Ohm that I measured. There’s no mention of a series resistor in the cable or of the way adjustable thresholds are handled, but I think that the DSLogic Plus has a similar input circuit. His review continues with an in-depth analysis of how measuring a signal can impact the signal itself; he even builds a simulation model of the whole system and does a real-world comparison between a DSLogic measurement and a fake-Saleae one. While his measurements are convincing, I wasn’t able to repeat his results on a similar setup with a DSLogic U3Pro and a Saleae Logic Pro: for both cases, a 200MHz signal was still good enough. I need to spend a bit more time to better understand the difference between my setup and his… Either way, I recommend watching this video.

Additional IOs: External Clock, Trigger In, Trigger Out

In addition to the 16 input pins that are used to record data, the DSLogic has 3 special IOs and a separate 3-wire cable to wire them up.
They are marked with the characters “OIC” above the connector, which stands for Output, Input, Clock.

Clock

Instead of using a free-running internal clock, the 16 input signals can be sampled with an external sampling clock. This corresponds to a mode that’s called “state clocking” in big-iron Tektronix and HP/Agilent/Keysight logic analyzers. Using an external clock that is the same as the one that is used to generate the signals that you want to record is a major benefit: you will always record the signal at the right time as long as setup and hold requirements are met. When using a free-running internal sampling clock, the sample rate must be a factor of 2 or more higher to get an accurate representation of what’s going on in the system. The DSLogic U3Pro16 provides the option to sample the data signals at the positive or negative edge of the external clock. On one hand, I would have preferred more options in moving the edge of the clock back and forth. It’s something that should be doable with the DLLs that are part of the DCM blocks of a Spartan-6. But on the other hand, external clocking is not supported at all by Saleae analyzers. The maximum clock speed of the external clock input is 50MHz, significantly lower than the free-running sample speed. This is usually the case for big iron logic analyzers as well. For example, my old HP 1670G has a free-running sampling clock of 500MHz and supports a maximum state clock of 150MHz.

Trigger In

According to the manual: “TI is the input for an external trigger signal”. That’s a great feature, but I couldn’t figure out how to enable it in DSView. After a bit of googling, I found the following comment in an issue on GitHub: This “TI” signal has no function now. It’s reserved for compatible and further extension. This comment is dated July 29, 2018. A closer look at the U3Pro16 datasheet shows the description of the “TI” input as “Reserved”…

Trigger Out

When a trigger is activated inside the U3Pro, a pulse is generated on this pin. The manual doesn’t give more details, but after futzing around with the horrible oscilloscope UI of my 1670G, I was able to capture a 500ms trigger-out pulse of 1.8V.

Software: From Saleae Logic to PulseView to DSView

When Saleae first came to market, they raised the bar for logic analyzer software with Logic, which had a GUI that allowed scrolling and zooming in and out of waveforms at blazing speed. Logic also added a few protocol decoders, and a C++ API to create your own decoders. It was the inspiration for PulseView, an open source equivalent that acts as the front-end application of SigRok, an open source library and tool that acts as the waveform data acquisition backend. PulseView supports protocol decoders as well, but it has an easier-to-use Python API and it allows stacked protocol decoders: a low-level decoder might convert the recorded signals into, say, I2C tokens (start/stop/one/zero). A second decoder creates byte-level I2C transactions out of the tokens. An I2C EEPROM decoder could interpret multiple I2C transactions as read and write operations. PulseView has tons of protocol decoders, from simple UART transactions all the way to USB 2.0 decoders. When the DSLogic logic analyzer hit the market after a successful Kickstarter campaign, it shipped with DSView, DreamSourceLab’s closed source waveform viewer. However, people soon discovered that it was a reskinned version of PulseView, a big no-no since the latter is developed under a GPL3 license.
After a bit of drama, DreamSourceLab made DSView available on GitHub under the required GPL3 as well, with attribution to the sigrok project. DSView is a hard fork of PulseView and there are still some bad feelings because DreamSourceLab doesn’t push changes to the PulseView project, but at least they’ve been legally in the clear for the past 6 years. The default choice would be to use DSView to control your DSLogic, but Sigrok/PulseView supports the DSLogic as well. In the figure below, you can see DSView in demo mode, no hardware device connected, and an example of the 3 stacked protocol decoders described earlier: (Click to enlarge) For this review, I’ll be using DSView. Saleae has since upgraded Logic to Logic 2, and now also supports stacked protocol decoders. It still uses a C++ API though. You can find an example decoder here.

Installing DSView on a Linux Machine

DreamSourceLab provides DSView binaries for Windows and MacOS but not for Linux. When you click the Download button for Linux, it returns a tar file with the source code, which you’re expected to compile yourself. I wasn’t looking forward to running into the usual issues with package dependencies and build failures, but after following the instructions in the INSTALL file, I ended up with a working executable on the first try.

DSView UI

The UI of DSView is straightforward and similar to Saleae Logic 2. There are things that annoy me in both tools, but I have a slight preference for Logic 2. Both DSView and Logic 2 have a demo mode that allows you to play with them without a real device attached. If you want to get a feel for what you like better, just download the software and play with it. Some random observations:

DSView can pan and zoom in or out just as fast as Logic 2.
On a MacBook, the way to navigate through the waveform really rubs me the wrong way: it uses the pinching gesture on a trackpad to zoom in and out. That seems like the obvious way to do it, but since it’s such a common operation to browse through a waveform, it slows you down. On my HP Laptop 17, DSView uses the 2 finger slide up and down to zoom in and out, which is much faster. Logic 2 also uses the 2 finger slide up and down.
The stacked protocol decoders are amazing.
Like Logic 2, DSView can export decoded protocols as CSV files, but only one protocol at a time. It would be nice to be able to export multiple protocols in the same CSV file so that you can more easily compare transaction flows between interfaces.
Logic 2 behaves predictably when you navigate through waveforms while the device is still acquiring new data. DSView behaves a bit erratically.
In DSView, you need to double click on the waveform to set a time marker. That’s easy enough, but it’s not intuitive and since I only use the device occasionally, I need to google it every time I take the analyzer out of the closet.
You can’t assign a text label to a DSView cursor/time marker.

None of the points above disqualify DSView: it’s a functional and stable piece of software. But I’d be lying if I wrote that DSView is as frictionless and polished as Logic 2.

Streaming Data to the Host vs Local Storage in DRAM

The Saleae Logic 16 Pro only supports streaming mode: recorded data is immediately sent to the PC to which the device is connected. The U3Pro supports both streaming and buffered mode, where data is written to the DRAM that’s on the device and only transported to the host when the recording is complete. Streaming mode introduces a dependency on the upstream bandwidth.
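To get a feel for that dependency, here’s a back-of-the-envelope sketch with a toy run-length encoder. The numbers and the encoder are my own illustration, not DreamSourceLab’s implementation:

from itertools import groupby

# With 16 channels at 1 bit per channel per sample, the raw USB3
# signaling rate puts a hard ceiling on the streaming sample rate.
usb_bandwidth = 5e9   # bits per second, before any protocol overhead
channels = 16
print(f"max rate, no overhead: {usb_bandwidth / channels / 1e6:.0f} MHz")  # ~312 MHz

# Run-length encoding collapses runs of identical samples into
# (value, run_length) pairs: an idle bus compresses extremely well,
# a bus that toggles every sample doesn't compress at all.
def rle(samples):
    return [(v, len(list(run))) for v, run in groupby(samples)]

print(rle([0, 0, 0, 1, 1, 0]))  # [(0, 3), (1, 2), (0, 1)]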
An Infineon FX3 supports USB3 data rates up to 5Gbps, but it’s far from certain that those rates are achieved in practice. And even then, it still limits recording 16 channels to around 300MHz, assuming no overhead. In practice, higher rates are possible because both devices support run-length encoding (RLE), a compression technique that reduces sequences of the same value to that value and the length of the sequence. Of course, RLE introduces recording uncertainty: high activity rates may result in exceeding the available bandwidth. The U3Pro has a 16-bit wide 2Gbit DDR3 DRAM with a maximum data rate of 1.6G samples per second. Theoretically, that makes it possible to record 16 channels with a 1.6GHz sample rate, but that assumes accessing DRAM with 100% efficiency, which is never the case. The GUI has the option of recording 16 signals at 500MHz or 8 signals at 1GHz. Even when recording to the local DRAM, RLE compression is still possible. When RLE is disabled and the highest sample rate is selected, 268ms of data can be recorded. When connected to my Windows laptop, buffered mode worked fine, but on my MacBook Air M2, DSView always hangs when downloading data that was recorded at high sample rates and I have to kill the application. In practice, I rarely record at high sample rates and I always use streaming mode, which works reliably on the Mac too. But it’s not a good look for DSView.

Triggers

One of the biggest benefits of the U3Pro over a Saleae is its trigger capability. Saleae Logic 2.4.22 offers the following options: You can set a rising edge, falling edge, a high or a low level on 1 signal in combination with some static values on other signals, and that’s it. There’s not even a rising-or-falling edge option. It’s frankly a bit embarrassing. When you have an FPGA at your disposal, triggering functionality is not hard to implement. Meanwhile, even in Simple Trigger mode, the DSLogic can trigger on multiple edges at the same time, something that can be useful when using an external sampling clock. But the DSLogic really shines when enabling the Advanced Trigger option. In Stage Trigger mode, you can create state sequences that are up to 16 phases long, with 2 16-bit comparisons and a counter per stage. Alternatively, Serial Trigger mode is powerful enough to capture protocols like I2C, as shown below, where a start flag is triggered by a falling edge of SDA when SCL is high, a stop flag by a rising edge of SDA when SCL is high, and data bits are captured on the rising edge of SCL: You don’t always need powerful trigger options, but they’re great to have when you do.

Conclusion

The U3Pro is not perfect. It doesn’t have an analog mode, buffered mode doesn’t work reliably on my MacBook, and the DSView GUI is a bit quirky. But it is relatively cheap, it has a huge library of protocol decoders, and the triggering modes are excellent. I’ve used it for a few projects now and it hasn’t let me down so far. If you’re in the market for a cheap logic analyzer, give it a good look.

References

Logic Analyzer Shopping Comparison between Saleae Logic Pro 16, Innomaker LA2016, Innomaker LA5016, DSLogic Plus, and DSLogic U3Pro16

Footnotes

[1] It even has the digital storage scope option with 2 analog channels, 500MHz bandwidth and 2GSa/s sampling rate.

a week ago
HP Laptop 17 RAM Upgrade

Introduction Selecting the RAM Opening up Replacing the RAM Reassembly References

Introduction

I do virtually all of my hobby and home computing on Linux and MacOS: the MacOS stuff on a laptop and almost all Linux work on a desktop PC. The desktop PC has Windows installed on it as well, but it’s too much of a hassle to reboot, so it never gets used in practice. Recently, I’ve been working on a project that requires a lot of Spice simulations. NGspice works fine under Linux, but it doesn’t come standard with a GUI and, more importantly, the simulations often refuse to converge once your design becomes a little bit bigger. Tired of fighting against the tool, I switched to LTspice from Analog Devices. It’s free to use and while it supports Windows and MacOS in theory, the Mac version is many years behind the Windows one and nearly unusable. After dual-booting into Windows too many times, a Best Buy deal appeared on my BlueSky timeline for an HP laptop for just $330. The specs were pretty decent too:

AMD Ryzen 5 7000
17.3” 1080p screen
512GB SSD
8 GB RAM
Full size keyboard
Windows 11

Someone at the HP marketing department spent long hours coming up with a suitable name and settled on “HP Laptop 17”. I generally don’t pay attention to what’s available on the PC laptop market, but it’s hard to really go wrong for this price, so I took the plunge. Worst case, I’d return it. We’re now 8 weeks later and the laptop is still firmly in my possession. In fact, I’ve used it way more than I thought I would. I haven’t noticed any performance issues, the screen is pretty good, the SSD is larger than what I need for the limited use case, and, surprisingly, the trackpad is better than that of any Windows laptop that I’ve ever used, though that’s not a high bar. It doesn’t come close to MacBook quality, but palm rejection is solid and it’s seriously good at moving the mouse around in CAD applications. The two worst parts are the plasticky keyboard and the 8GB of RAM. I honestly can’t quantify whether or not the RAM has a practical impact, but I decided to upgrade it anyway. In this blog post, I go through the steps of doing this upgrade. Important: there’s a good chance that you will damage your laptop when trying this upgrade and you will almost certainly void your warranty. Do this at your own risk!

Selecting the RAM

The laptop wasn’t designed to be upgradable and thus you can’t find any official resources about it. And with such a generic name, there are guaranteed to be multiple hardware versions of the same product. To have reasonable confidence that you’re buying the correct RAM, check out the full product name first. You can find it on the bottom: Mine is an HP Laptop 17-cp3005dx. There’s some conflicting information about being able to upgrade the thing. The BestBuy Q&A page says: The HP 17.3” Laptop Model 17-cp3005dx RAM and Storage are soldered to the motherboard, and are not upgradeable on this model. This is flat out wrong for my device. After a bit of Googling around, I learned that it has a single 8GB DDR4 SODIMM 260-pin RAM stick, but that the motherboard has 2 RAM slots and that it can support up to 2x32GB. I bought a kit with Crucial 2x16GB 3200MHz SODIMMs from Amazon. As I write this, the price is $44.

Opening up

Removing the screws

This is the easy part. There are 10 screws at the bottom, 6 of which are hidden underneath the 2 rubber anti-slip strips. It’s easy to peel these strips loose. It’s also easy to put them back without losing the stickiness.
Removing the bottom cover

The bottom cover is held back by those annoying plastic tabs. If you have a plastic spudger or prying tool, now is the time to use it. I didn’t, so I used a small screwdriver instead. Chances are high that you’ll leave some tiny scuff marks on the plastic casing. I found it easiest to open the top lid a bit, place the laptop on its side, and start on the left and right side of the keyboard. After that, it’s a matter of working your way down the long sides at the front and back of the laptop. There are power and USB connectors right against the side of the bottom panel, so be careful not to poke the spudger or screwdriver inside the case. It’s a bit of a jarring process, going back and forth and making steady progress. In addition to all the clips around the border of the bottom panel, there are also a few in the center that latch on to the side of the battery. But after enough wiggling and creaking sounds, the panel should come loose.

Replacing the RAM

As expected, there are 2 SODIMM slots, one of which is populated with a 3200MHz 8GB RAM stick. At the bottom right of the image below, you can also see the SSD slot. If you don’t enjoy the process of opening up the laptop and want to upgrade to a larger drive as well, now would be the time for that. New RAM in place! It’s always a good idea to test the surgery before reassembly: Success!

Reassembly

Reassembly of the laptop is much easier than taking it apart. Everything simply clicks together. The only minor surprise was that both anti-slip strips became a little bit longer…

References

Memory Upgrade for HP 17-cp3005dx Laptop
Upgrading Newer HP 17.3” Laptop With New RAM And M.2 NVMe SSD (different model with an Intel CPU, but the case is the same)

a month ago
Symbolic Reference and Hardware Models in Python

The Traditional Hardware Design and Verification Flow An Image Downscaler as Example Design The Reference Model The Micro-Architecture Model Comparing the results Conversion to Hardware Combining symbolic models with random input generation Specification changes Things to experiment with… Symbolic models are best for block or sub-block level modelling References Conclusion

The Traditional Hardware Design and Verification Flow

In a professional FPGA or ASIC development flow, multiple models are tested against each other to ensure that the final design behaves the way it should. Common models are:

a behavioral model that describes the functionality at the highest level These models can be implemented in Matlab, Python, C++ etc. and are usually completely hardware architecture agnostic. They are often not bit accurate in their calculated results, for example because they use floating point numbers instead of the fixed point numbers that are more commonly used by the hardware. A good example is the floating point C model that I used to develop my Racing the Beam Ray Tracer, though in this case, the model later transitioned into a hybrid reference/architectural model.
an architectural transaction accurate model An architectural model is already aware of how the hardware is split into major functional groups and models the interfaces between these functional groups in a bit-accurate and transaction-accurate way at the interface level. It doesn’t have a concept of timing in the form of clock cycles.
a source hardware model This model is the source from which the actual hardware is generated. Traditionally, and still in most cases, this is a synthesizable RTL model written in Verilog or VHDL, but high-level synthesis (HLS) is getting some traction as well. In the case of RTL, this model is cycle accurate. In the case of HLS, it still won’t be. The difference between an HLS C++ model [1] and an architectural C++ model is in the way it is coded: HLS code needs to obey coding style restrictions that would otherwise prevent the HLS tool from converting the code to RTL. The HLS model is usually also split up into many smaller units that interact with each other.
an RTL model The Verilog or VHDL model of the design. This can be the same as the source hardware model or it can be generated from HLS.
a gate-level model The RTL model synthesized into a gate-level netlist.

During the design process, different models are compared against each other. Their outputs should be the same… to a certain extent, since it’s not possible to guarantee identical results between floating point and fixed point models. One thing that is constant among these models is that they get fed with, operate on, and output actual data values. Let’s use the example of a video pipeline. The input of the hardware block might be raw pixels from a video sensor, the processing could be some filtering algorithm to reduce noise, and the outputs are processed pixels. To verify the design, the various models are fed with a combination of generic images, ‘interesting’ images that are expected to hit certain use cases, images with just random pixels, or directed tests that explicitly try to trigger corner cases. When there is a mismatch between different models, the fun part begins: figuring out the root cause. For complex algorithms that have a lot of internal state, an error may have happened thousands of transactions before it manifests itself at the output. Tracking down such an issue can be a gigantic pain.
For many hardware units, the hard part of the design is not the math, but getting the right data to the math units at the right time, by making sure that the values are written, read, and discarded from internal RAMs and FIFOs in the right order. Even with a detailed micro-architectural specification, a major part of the code may consist of using just the correct address calculation or multiplexer input under various conditions. For these kinds of units, I use a different kind of model: instead of passing around and operating on data values through the various stages of the pipeline or algorithm, I carry around where the data is coming from. This is not so easy to do in C++ or RTL, but it’s trivial in Python. For lack of a better name, I call these symbolic models. There are thus two additional models in my arsenal of tools:

a reference symbolic model
a hardware symbolic model

These models are both written in Python and their outputs are compared against each other. In this blog post, I’ll go through an example case where I use such a model.

An Image Downscaler as Example Design

Let’s design a hardware module that is easy enough to not spend too much time on it for a blog post, but complex enough to illustrate the benefits of a symbolic model: an image downscaler. The core functionality is the following:

it accepts a monochrome image with a maximum resolution of 7680x4320.
it downscales the input image with a fixed ratio of 2 in both directions.
it uses a 3x3 tap 2D filter kernel for downscaling.

The figure below shows how an image with a 12x8 resolution gets filtered and downsampled into a 6x4 resolution image. Each square represents an input pixel, each hatched square an output pixel, and the arrows show how input pixels contribute to the input of the 3x3 filter for the output pixel. For pixels that lie against the top or left border, the top and left pixels are repeated upward and leftward so that the same 3x3 filter can be used. If this downscaler is part of a streaming display pipeline that eventually sends pixels to a monitor, there is not a lot of flexibility: pixels arrive in a left to right and top to bottom scan order and you need 2 line stores (memories) because there are 3 vertical filter taps. Due to the 2:1 scaling ratio, the line stores can be half the horizontal resolution, but for an 8K resolution that’s still 7680/2 ~ 4KB of RAM just for line buffering. In the real world, you’d have to multiply that by 3 to support RGB instead of monochrome. And since we need to read and write to this RAM every clock cycle, there’s no chance of off-loading this storage to cheaper memory such as external DRAM. However, we’re lucky: the downscaler is part of a video decoder pipeline and those typically work with super blocks of 32x32 or 64x64 pixels that are scanned left-to-right and top-to-bottom. Within each super block, pixels are grouped in tiles of 4x4 pixels that are scanned the same way. In other words, there are 3 levels of left-to-right, top-to-bottom scan operations:

the pixels inside each 4x4 pixel tile
the pixel tiles inside each super block
the super blocks inside each picture

(Click to enlarge) The output has the same organization of pixels, 4x4 pixel blocks and super blocks, but due to the 2:1 downsampling in both directions, the size of a super block is 32x32 instead of 64x64 pixels.
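To make those three nested scan orders concrete, here is a small toy script. It’s my own illustration, with the sizes shrunk to an 8x8 super block on a 16x16 image so the output stays readable:

# Enumerate input pixel coordinates in the three nested scan orders:
# super blocks within the picture, 4x4 tiles within a super block,
# and pixels within a tile, each left-to-right, top-to-bottom.
# Sizes are shrunk for readability (real super blocks are 64x64).
SB, TILE, W, H = 8, 4, 16, 16

def scan_order():
    for sb_y in range(0, H, SB):            # super blocks in the picture
        for sb_x in range(0, W, SB):
            for ty in range(0, SB, TILE):   # 4x4 tiles in a super block
                for tx in range(0, SB, TILE):
                    for py in range(TILE):  # pixels in a tile
                        for px in range(TILE):
                            yield (sb_x + tx + px, sb_y + ty + py)

coords = list(scan_order())
print(coords[:6])   # [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1)]
print(len(coords))  # 256: every pixel of the 16x16 image exactly once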
There are two major advantages to having the data flow organized this way:

the downscaler can operate on one super block at a time instead of the full image For pixels inside the super block, that reduces the size of the active input image width from 7680 to just 64 pixels.
as long as the filter kernel is less than 5 pixels high, only 1 line store is needed The line store contains a partial sum of multiple lines of pixels. While the line store still needs to cover the full picture width when moving from one row of super blocks to the one below it, the bandwidth that is required to access the line store is but a fraction of the one before: 1/64th to be exact. That opens up the opportunity to stream line store data in and out of external DRAM instead of keeping it in expensive on-chip RAMs.

But it’s not all roses! There are some negative consequences as well:

pixels from the super block above the current one must be fetched from DMA and stored in a local memory
pixels at the bottom of the current super block must be sent to DMA
the right-most column of pixels of the current super block is used in the next super block when doing the 3x3 filter operation
4x4 size input tiles get downsampled to 2x2 size output tiles, but they must be sent out again as 4x4 tiles. This requires some kind of pixel tile merging operation.

While the RAM area savings are totally worth it, all this adds a significant amount of data management complexity. This is the kind of problem where a symbolic micro-architecture model shines.

The Reference Model

When modeling transformations that work at the picture level, it’s convenient to assume that there are no memory size constraints and that you can access all pixels at all times no matter where they are located in the image. You don’t have to worry about how much RAM this would take on silicon: it’s up to the designer of the micro-architecture to figure out how to create an area efficient implementation. This usually results in a huge simplification of the reference model, which is good because as the source of truth you want to avoid any bugs in it. For our downscaler, the reference model creates an array of output pixels where each output pixel contains the coordinates of all the input pixels that are required to calculate its value. The pseudo code is something like this:

for y in range(OUTPUT_HEIGHT):
    for x in range(OUTPUT_WIDTH):
        get coordinates of input pixels for filter
        store coordinates at (x,y) of the output image

The reference model Python code is not much more complicated. You can find the code here. Instead of a 2-dimensional array, it uses a dictionary with the output pixel coordinates as key. This is a personal preference: I think ref_output_pixels[(x,y)] looks cleaner than ref_output_pixels[y][x] or ref_output_pixels[x][y]. When the reference model data creation is complete, the ref_output_pixels array contains values like this:

(0,0) => [ Pixel(x=0, y=0), Pixel(x=0, y=0), Pixel(x=1, y=0),
           Pixel(x=0, y=0), Pixel(x=0, y=0), Pixel(x=1, y=0),
           Pixel(x=0, y=1), Pixel(x=0, y=1), Pixel(x=1, y=1) ]
(1,0) => [ Pixel(x=1, y=0), Pixel(x=2, y=0), Pixel(x=3, y=0),
           Pixel(x=1, y=0), Pixel(x=2, y=0), Pixel(x=3, y=0),
           Pixel(x=1, y=1), Pixel(x=2, y=1), Pixel(x=3, y=1) ]
...
(8,7) => [ Pixel(x=15, y=13), Pixel(x=16, y=13), Pixel(x=17, y=13),
           Pixel(x=15, y=14), Pixel(x=16, y=14), Pixel(x=17, y=14),
           Pixel(x=15, y=15), Pixel(x=16, y=15), Pixel(x=17, y=15) ]
...

The reference value of each output pixel is a list of input pixels that are needed to calculate its value.
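The real reference model lives in the linked repo; the following is my own condensed sketch of the same idea, with the resolution shrunk and the names approximated:

# Condensed sketch of the reference model: for each output pixel,
# record the coordinates of the 3x3 input kernel that feeds it.
# Clamping with max(0, ...) implements the top/left pixel repetition.
from collections import namedtuple

Pixel = namedtuple("Pixel", ["x", "y"])

INPUT_W, INPUT_H = 16, 16                      # toy size, not 7680x4320
OUTPUT_W, OUTPUT_H = INPUT_W // 2, INPUT_H // 2

ref_output_pixels = {}
for y in range(OUTPUT_H):
    for x in range(OUTPUT_W):
        # The kernel taps land on input rows 2y-1..2y+1 and
        # columns 2x-1..2x+1, clamped at the image border.
        ref_output_pixels[(x, y)] = [
            Pixel(max(0, 2 * x + dx), max(0, 2 * y + dy))
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        ]

# Matches the (0,0) entry of the dump above: the clamped duplicates
# are exactly the repeated border pixels.
print(ref_output_pixels[(0, 0)])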
I do not care about the actual value of the pixels or the mathematical operation that is applied to these inputs.

The Micro-Architecture Model

The source code of the hardware symbolic model can be found here. It has the following main data buffers and FIFOs:

an input stream, generated by gen_input_stream, of 4x4 pixel tiles that are sent super block by super block and then tile by tile.
an output stream of 4x4 pixel tiles with the downsampled image.
a DMA FIFO, modelled with a simple Python list, in which the bottom pixels of a super block are stored and later fetched when the super block of the next row needs the neighboring pixels above.
buffers with above and left neighboring pixels that cover the width and height of a super block.
an output merge FIFO that is used to group a set of 4 2x2 downsampled tiles into a 4x4 tile of pixels

The model loops through the super blocks in scan order and then the tiles in scan order, and for each 4x4 tile it calculates a 2x2 output tile.

for sy in range(nr of vertical super blocks):
    for sx in range(nr of horizontal super blocks):
        for tile_y in range(nr of vertical tiles in a super block):
            for tile_x in range(nr of horizontal tiles in a super block):
                fetch 4x4 tile with input pixels
                calculate 2x2 output pixels
                merge 2x2 output pixels into 4x4 output tiles
                data management

When we look at the inputs that are required to calculate the 4 output pixels for each tile of 16 input pixels, we get the following: In addition to pixels from the tile itself, some output pixels also need input values from above, above-left and/or left neighbors. When switching from one super block to the next, buffers must be updated with neighboring pixels for the whole width and height of the super block. But instead of storing the values of individual pixels, we can store intermediate sums to reduce the number of values: At first sight, it looks like this reduces the number of values in the above and left neighbor buffers by half, but that’s only true for the above buffer. While the left neighbors can be reduced by half for the current tile, the bottom left pixel value is still needed to calculate the above value for the 4x4 tiles of the next row. So the size of the left buffer is not 1/2 of the size of the super block but 3/4. In the left figure above, the red rectangles contain the components needed for the top-left output pixel, the green for the top-right output pixel etc. The right figure shows the partial sums that must be calculated for the left and above neighbors of future 4x4 tiles. These red, green, blue and purple rectangles have directly corresponding sections in the code.

p00 = ( tile_above_pixels[0],
        tile_left_pixels[0],
        input_tile[0], input_tile[1],
        input_tile[4], input_tile[5] )

p10 = ( tile_above_pixels[1],
        input_tile[ 1], input_tile[ 2], input_tile[ 3],
        input_tile[ 5], input_tile[ 6], input_tile[ 7] )

p01 = ( tile_left_pixels[1],
        input_tile[ 4], input_tile[ 5],
        input_tile[ 8], input_tile[ 9],
        input_tile[12], input_tile[13] )

p11 = ( input_tile[ 5], input_tile[ 6], input_tile[ 7],
        input_tile[ 9], input_tile[10], input_tile[11],
        input_tile[13], input_tile[14], input_tile[15] )

For each tile, there’s quite a bit of bookkeeping of context values, reading and writing to buffers, to keep everything going.

Comparing the results

In traditional models, as soon as intermediate values are calculated, the original values can be dropped.
In the case of our example, with a filter where all coefficients are 1, the above and left intermediate values of the top-left output pixel are summed and stored as 18 and 11, and the original values of (3,9,6) and (5,6) aren’t needed anymore. This, and the fact that multiple inputs might have the same numerical value, is what makes traditional models often hard to debug. This is not the case for symbolic models, where all input values, the input pixel coordinates, are carried along until the end. In our model, the intermediate results are not removed from the final result. Here is the output result for output pixel (12,10):

...
(
  # Above neighbor intermediate sum
  (Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19)),
  # Left neighbor intermediate sum
  (Pixel(x=23, y=20), Pixel(x=23, y=21)),
  # Values from the current tile
  Pixel(x=24, y=20), Pixel(x=25, y=20),
  Pixel(x=24, y=21), Pixel(x=25, y=21)
),
...

Keeping the intermediate results makes it easier to debug, but to compare against the reference model, the data with nested lists must be flattened into this:

...
(
  Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19),
  Pixel(x=23, y=20), Pixel(x=23, y=21),
  Pixel(x=24, y=20), Pixel(x=25, y=20),
  Pixel(x=24, y=21), Pixel(x=25, y=21)
),
...

But even that is not enough to compare: the reference value lists the 3x3 input values in a scan order that the hardware model destroyed by using intermediate values, so there’s a final sorting step to restore the scan order:

...
(
  Pixel(x=23, y=19), Pixel(x=24, y=19), Pixel(x=25, y=19),
  Pixel(x=23, y=20), Pixel(x=24, y=20), Pixel(x=25, y=20),
  Pixel(x=23, y=21), Pixel(x=24, y=21), Pixel(x=25, y=21)
)

Finally, we can go through all the output tiles of the hardware model and compare them against the tiles of the reference model. If all goes well, the script should give the following:

> ./downscaler.py
PASS!

Any kind of bug will result in an error message like this one:

> ./downscaler.py
MISMATCH! sb(1,0) tile(0,0) (0,0) 1
ref: [Pixel(x=7, y=0), Pixel(x=7, y=0), Pixel(x=8, y=0), Pixel(x=8, y=0), Pixel(x=9, y=0), Pixel(x=9, y=0), Pixel(x=7, y=1), Pixel(x=8, y=1), Pixel(x=9, y=1)]
hw: [Pixel(x=7, y=0), Pixel(x=8, y=0), Pixel(x=8, y=0), Pixel(x=9, y=0), Pixel(x=9, y=0), Pixel(x=7, y=1), Pixel(x=8, y=1), Pixel(x=9, y=1), Pixel(x=7, y=4)]

Conversion to Hardware

The difficulty of converting the Python micro-architectural model to a hardware implementation model depends on the abstraction level of the hardware implementation language. When using C++ and HLS, the effort can be trivial: some of my blocks have a thousand or more lines of Python that can be converted entirely to C++ pretty much line by line. It can take a few weeks to develop and debug the Python model, yet getting the C++ model running only takes a day or two. If the Python model is fully debugged, the only issues encountered are typos made during the conversion and signal precision mistakes. The story is different when using RTL: with HLS, the C++-to-Verilog synthesis converts for-loops to FSMs and takes care of pipelining. When writing RTL directly, that task falls on you. You could change the Python model and switch to FSMs there to make that step a bit easier. Either way, having flushed out all the data management issues will allow you to focus on just the RTL-specific tasks while being confident that the core architecture is correct.

Combining symbolic models with random input generation

The downscaler example is a fairly trivial unit with a predictable input data stream and a simple algorithm.
In a video encoder or decoder, instead of a scan-order stream of 4x4 tiles, the input is often a hierarchical coding tree with variable size coding units that are scanned in quad-tree depth-first order. Dealing with this kind of data stream kicks up the complexity a whole lot. For designs like this, the combination of a symbolic model and a random coding tree generator is a super power that will hit corner case bugs with an efficiency that puts regular models to shame.

Specification changes

The benefits of symbolic models don’t stop with quickly finding corner case bugs. I’ve run into a number of cases where the design requirements weren’t fully understood at the time of implementation and incorrect behavior was discovered long after the hardware model was implemented. By that time, some of the implementation subtleties may have already been forgotten. It’s scary to make changes to a hardware design that has complex data management when corner case bugs take thousands of regression simulations to uncover. If the symbolic model is the initial source of truth, this is usually just not an issue: exhaustive tests can often be run in seconds and once the changes pass there, you have confidence that the corresponding changes in the hardware model source code are sound.

Things to experiment with…

Generating hardware stimuli I haven’t explored this yet, but it is possible to use a symbolic model to generate stimuli for the hardware model. All it takes is to replace the symbolic input values (pixel coordinates) with the actual pixel values at those locations and to perform the right mathematical operations on them.

A joint symbolic/hardware model Having a Python symbolic model and a C++ HLS hardware model isn’t a huge deal, but there’s still the effort of converting one into the other. There is a way to have a unified symbolic/hardware model: switch the data type of the input and output values from one that contains symbolic values to one that contains the real values. If C++ is your HLS language, then this requires writing the symbolic model in C++ instead of Python. You’d trade off the rapid iteration time and conciseness of Python against having only a single code base.

Symbolic models are best for block or sub-block level modelling

Since symbolic models carry along the full history of calculated values, they aren’t very practical when modelling multiple larger blocks together: hierarchical lists with tens or more input values create information overload. For this reason, I use symbolic models at the individual block level, or sometimes even the sub-block level when dealing with particularly tricky data management cases. My symbolic model script might contain multiple disjoint models that each implement a sub-block of the same major block, without interacting with each other.

References

symbolic_model repo on GitHub

Conclusion

Symbolic models in Python have been a major factor in boosting my design productivity and increasing my confidence in a micro-architectural implementation. If you need to architect and implement a hardware block with some tricky data management, give them a try, and let me know how it went!

[1] Not all HLS code is written with C++. There are other languages as well.

3 months ago
Making Screenshots of Test Equipment Old and New

Introduction Screenshot Capturing Interfaces Hardware and Software Tools Capturing GPIB data in Talk Only mode TDS 540 Oscilloscope - GPIB - PCL Output HP 54542A Oscilloscope - Parallel Port - PCL or HPGL Output HP Infiniium 54825A Oscilloscope - Parallel Port - Encapsulated Postscript TDS 684B - Parallel Port - PCX Color Output Advantest R3273 Spectrum Analyzer - Parallel Port - PCL Output HP 8753C Vector Network Analyzer - GPIB - HP 8753 Companion Siglent SDS 2304X Oscilloscope - USB Drive, Ethernet or USB

Introduction

Last year, I created Fake Parallel Printer, a tool to capture the output of the parallel printer port of old-ish test equipment so that it can be converted into screenshots for blog posts etc. It’s definitely a niche tool, but of all the projects that I’ve done, it’s the one that has seen the most use. One issue is that converting the captured raw printing data to a bitmap requires recipes that may need quite a bit of tuning. Some output uses HP PCL, other output is Encapsulated PostScript (EPS), and if you’re lucky the output is a standard bitmap format like PCX. In this blog post, I describe the procedures that I use to create screenshots of the test equipment that I personally own, so that I don’t need to figure them out again when I use the device a year later. That doesn’t make it all that useful for others, but somebody may benefit from it when googling for it… As always, I’m using Linux, so the software used below reflects that.

Screenshot Capturing Interfaces

Here are some common ways to transfer screenshots from test equipment to your PC:

USB flash drive Usually the least painful by far, but it only works on modern equipment.
USB cable Requires some effort to set the right udev driver permissions and a script that sends commands that are device specific. But it generally works fine.
Ethernet Still requires slightly modern equipment, and there’s often some configuration pain involved.
RS-232 serial Reliable, but often slow.
Floppy disk I have a bunch of test equipment with a floppy drive and I also have a USB floppy drive for my PC. However, the drives on all this equipment are broken, in the sense that they can’t correctly write data to a disk. There must be some kind of contamination going on when a floppy drive head isn’t used for decades.
GPIB Requires an expensive interface dongle and I’ve yet to figure out how to make it work for all equipment. Below, I was able to make it work for a TDS 540 oscilloscope, but not for an HP 54542A oscilloscope, for example.
Parallel printer port Available on a lot of old equipment, but it normally can’t be captured by a PC unless you use Fake Parallel Printer. We’re now more than a year later, and I use it all the time. I find it to be the easiest to use of all the printer interfaces.

Hardware and Software Tools

GPIB to USB Dongle

If you want to print to GPIB, you’ll need a PC to GPIB interface. These days, the cheapest and most common are GPIB to USB dongles. I’ve written about those here and here. The biggest take-away is that they’re expensive (>$100 second hand) and hard to configure when using Linux. And as mentioned above, I’ve only had limited success using them in printer mode.

ImageMagick

ImageMagick is the Swiss Army knife of bitmap file processing. It has a million features, but I primarily use it for file format conversion and image cropping. I doubt that there’s any major Linux distribution that doesn’t have it as a standard package…

sudo apt install imagemagick

GhostPCL

GhostPCL is used to decode PCL files.
On many old machines, these files are created when printing to a Thinkjet, Deskjet or Laserjet. Installation: download the GhostPCL/GhostPDL source code, then compile:

cd ~/tools
tar xfv ~/Downloads/ghostpdl-10.03.0.tar.gz
cd ghostpdl-10.03.0/
./configure --prefix=/opt/ghostpdl
make -j$(nproc)
export PATH="/opt/ghostpdl/bin:$PATH"

Install:

sudo make install

A whole bunch of tools will now be available in /opt/ghostpdl/bin, including gs (Ghostscript) and gpcl6.

hp2xx

hp2xx converts HPGL files, originally intended for HP plotters, to bitmaps, EPS etc. It’s available as a standard package for Ubuntu:

sudo apt install hp2xx

Inkscape

Inkscape is a full-featured vector drawing app, but it can also be used as a command line tool to convert vector content to bitmaps. I use it to convert Encapsulated PostScript (EPS) files to bitmaps. Like other well known tools, installation on Ubuntu is simple:

sudo apt install inkscape

HP 8753C Companion

This tool is specific to HP 8753 vector network analyzers. It captures HPGL plotter commands, extracts the data, recreates what’s displayed on the screen, and allows you to interact with it. It’s available on GitHub.

Capturing GPIB data in Talk Only mode

Some devices will only print to GPIB in Talk Only mode, or sometimes it’s just easier to use that way. When the device is in Talk Only mode, the PC GPIB controller becomes a Listen Only device, a passive observer that doesn’t initiate commands but just receives data. I wrote the following script to record the printing data and save it to a file:

gpib_talk_to_file.py: (Click to download)

#! /usr/bin/env python3
import sys
import pyvisa

gpib_addr = int(sys.argv[1])
output_filename = sys.argv[2]

rm = pyvisa.ResourceManager()
inst = rm.open_resource(f'GPIB::{gpib_addr}')

try:
    # Read data from the device
    data = inst.read_raw()
    with open(output_filename, 'wb') as file:
        file.write(data)
except pyvisa.VisaIOError as e:
    print(f"Error: {e}")

PyVISA is a universal library to talk to test equipment. I wrote about it here. It will quickly time out when no data arrives in Talk Only mode, but since all data transfers happen with a valid-ready protocol, you can avoid time-out issues by pressing the hardcopy or print button on your oscilloscope first, and only then launching the script above. This will work as long as the printing device doesn’t go silent in the middle of printing a page.

TDS 540 Oscilloscope - GPIB - PCL Output

My old TDS 540 oscilloscope doesn’t have a printer port, so I had to make do with GPIB. Unlike later versions of the TDS series, it also doesn’t have the ability to export bitmaps directly, but it has outputs for:

Thinkjet, Deskjet, and Laserjet in PCL format
Epson in ESC/P format
Interleaf format
EPS Image format
HPGL plotter format

The TDS 540 has a screen resolution of 640x480. I found the Thinkjet output format, with a DPI of 75x75, easiest to deal with. The device adds a margin of 20 pixels to the left and 47 pixels at the top, but those can be removed with ImageMagick.
With a GPIB address of 11, the overall recipe looks like this:

# Capture the PCL data
gpib_talk_to_file.py 11 tds540.thinkjet.pcl
# Convert PCL to png
gpcl6 -dNOPAUSE -sOutputFile=tds540.png -sDEVICE=png256 -g680x574 -r75x75 tds540.thinkjet.pcl
# Remove the margins and crop the image to 640x480
convert tds540.png -crop 640x480+20+47 tds540.crop.png

The end result looks like this:

HP 54542A Oscilloscope - Parallel Port - PCL or HPGL Output

This oscilloscope was a ridiculous $20 bargain at the Silicon Valley Electronics Flea Market and it’s the one I love working with the most: the user interface is just so smooth and intuitive. Like all other old oscilloscopes, the biggest thing going against it is the amount of desk space it requires. It has a GPIB, RS-232, and Centronics parallel port, and all 3 can be used for printing. I tried to get printing to GPIB to work but wasn’t successful: I’m able to talk to the device and send commands like “*IDN?” and get a reply just fine, but the GPIB script that works fine with the TDS 540 always times out eventually. I switched to my always reliable Fake Parallel Printer and that worked fine. There’s also the option to use the serial cable. The printer settings menu can be accessed by pressing the Utility button and then the top soft-button with the name “HPIB/RS232/CENT CENTRONICS”. You have the following options:

ThinkJet
DeskJet75dpi, DeskJet100dpi, DeskJet150dpi, DeskJet300dpi
LaserJet
PaintJet
Plotter

Unlike with the TDS 540, I wasn’t able to get the ThinkJet option to convert into anything, but the DeskJet75dpi option worked fine with this recipe:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f hp54542a_ -s deskjet.pcl -v
gpcl6 -dNOPAUSE -sOutputFile=hp54542a.png -sDEVICE=png256 -g680x700 -r75x75 hp54542a_0.deskjet.pcl
convert hp54542a.png -crop 640x388+19+96 hp54542a.crop.png

The 54542A doesn’t just print out the contents of the screen, it also prints the date and adds the settings for the channels that are enabled, trigger options etc. The size of these additional values depends on how many channels and other parameters are enabled. When you select PaintJet or Plotter as output device, you have the option to select different colors for regular channels, math channels, graticule, markers etc. So it is possible to create nice color screenshots from this scope, even if the CRT is monochrome. I tried the PaintJet option, and while gpcl6 was able to extract an image, the output was much worse than the DeskJet option. I had more success using the Plotter option. It prints out a file in HPGL format that can be converted to a bitmap with hp2xx. The following recipe worked for me:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f hp54542a_ -s plotter.hpgl -v
hp2xx -m png -a 1.4 --width 250 --height 250 -c 12345671 -p 11111111 hp54542a_0.plotter.hpgl

I’m not smitten with the way it looks, but if you want color, this is your best option. The command line options of hp2xx are not intuitive. Maybe it’s possible to get this to look a bit better with some other options. Click to enlarge

HP Infiniium 54825A Oscilloscope - Parallel Port - Encapsulated Postscript

This indefinite-loaner-from-a-friend oscilloscope has a small PC in it that runs an old version of Windows. It can be connected to Ethernet, but I’ve never done that: capturing parallel printer traffic is just too convenient. On this oscilloscope, I found that printing things out as Encapsulated PostScript was the best option.
I then use Inkscape to convert the screenshot to PNG:

./fake_printer.py --port=/dev/ttyACM0 -t 2 -v --prefix=hp_osc_ -s eps
inkscape -f ./hp_osc_0.eps -w 1580 -y=255 -e hp_osc_0.png
convert hp_osc_0.png -crop 1294x971+142+80 hp_osc_0_cropped.png

Ignore the part circled in red, that was added in post for an earlier article: Click to enlarge TDS 684B - Parallel Port - PCX Color Output I replaced my TDS 540 oscilloscope with a TDS 684B. On the outside they look identical. They also have the same core user interface, but the 684B has a color screen, a bandwidth of 1GHz, and a sample rate of 5 Gsps. Print formats The 684B also has a lot more output options: ThinkJet, DeskJet, DeskJetC (color), and LaserJet output in PCL format; Epson in ESC/P format; DPU thermal printer; PC Paintbrush mono and color in PCX file format; TIFF file format; BMP mono and color format; RLE color format; EPS mono and color printer format; EPS mono and color plotter format; Interleaf .img format; HPGL color plot. Phew. Like the HP 54542A, my unit has GPIB, parallel port, and serial port. It can also write out the files to a floppy drive. So which one to use? BMP is an obvious choice and supported natively by all modern PCs. The only issue is that it gets written out without any compression, so it takes over 130 seconds to capture with the fake printer. PCX is a very old bitmap file format, I used it way back in 1988 on my first Intel 8088 PC, but it compresses with run-length encoding, which works great on oscilloscope screenshots. It only takes 22 seconds to print. I tried the TIFF option and was happy to see that it only took 17 seconds, but the output was monochrome. So for color bitmap files, PCX is the way to go. The recipe:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f tds684_ -s pcx -v
convert tds684_0.pcx tds684.png

The screenshot above uses the Normal color setting. The scope also has a Bold color setting: There's a Hardcopy option as well: It's a matter of personal taste, but my preference is the Normal option. Advantest R3273 Spectrum Analyzer - Parallel Port - PCL Output Next up is my Advantest R3273 spectrum analyzer. It has a printer port, a separate parallel port that I don't know the purpose of, a serial port, a GPIB port, and a floppy drive that refuses to work. However, in the menus I can only configure prints to go to floppy or to the parallel port, so fake parallel printer is what I'm using. The print configuration menu can be reached by pressing: [Config] -> [Copy Config] -> [Printer]: The R3273 supports a bunch of formats, but I had the hardest time getting it to create a color bitmap. After a lot of trial and error, I ended up with this:

~/projects/fake_parallel_printer/fake_printer.py -i -p /dev/ttyACM0 -f r3273_ -s pcl -v
gpcl6 -dNOPAUSE -sOutputFile=r3273_tmp.png -sDEVICE=png256 -g4000x4000 -r600x600 r3273_0.pcl
convert r3273_tmp.png -filter point -resize 1000 r3273_filt.png
rm r3273_tmp.png
convert r3273_filt.png -crop 640x480+315+94 r3273.png
rm r3273_filt.png

The conversion loses something in the process. The R3273 hardcopy mimics the shades of depressed buttons with a 4x4 pixel dither pattern: If you use a 4x4 pixel box filter and downsample by a factor of 4, this dither pattern converts to a nice uniform gray, but the actual spectrum data gets filtered down as well: With the recipe above, I'm using 4x4-to-1-pixel point-sampling instead, with a phase that is chosen just right so that the black pixels of the dither pattern get picked.
The highlighted buttons are now solid black and everything looks good. HP 8753C Vector Network Analyzer - GPIB - HP 8753 Companion My HP 8753C VNA only has a GPIB interface, so there's not a lot of choice there. I'm using HP 8753 Companion. It can be used for much more than just grabbing screenshots: you can save the measured data to a file, upload calibration kit data and so on. It's great! You can render the screenshot the way it was plotted by the HP 8753C, like this: Click to enlarge Or you can display it in a high-resolution mode, like this: Click to enlarge The default color settings for the HPGL plot aren't ideal, but everything is configurable. If you don't have one, the existence of HP 8753 Companion alone is a good reason to buy a USB-to-GPIB dongle. Click to enlarge Siglent SDS 2304X Oscilloscope - USB Drive, Ethernet or USB My Siglent SDS 2304X was my first oscilloscope. It was designed 20 years later than all the other stuff, with a modern UI, and modern interfaces such as USB and Ethernet. There is no GPIB, parallel or RS-232 serial port to be found. I don't love the scope. The UI can become slow when you're displaying a bunch of data on the screen, and selecting anything from a menu with a detentless rotary knob can be the most infuriating experience. But it's my daily driver because it's not a boat anchor: even on my messy desk, I can usually create room to put it down without too much effort. You'd think that I use USB or Ethernet to grab screenshots, but most of the time I just use a USB stick and shuttle it back and forth between the scope and the PC. That's because setting up the connection is always a bit of a pain. However, if you insist, you can set things up this way: Ethernet To configure Ethernet, you need to go to [Utility] -> [Next Page] -> [I/O] -> [LAN]. Unlike my HP 1670G logic analyzer, the Siglent supports DHCP, but when writing this blog post, the scope refused to grab an IP address on my network. No amount of rebooting, disabling and re-enabling DHCP helped. I have gotten it to work in the past, but today it just wasn't happening. You'll probably understand why using a zero-configuration USB stick becomes an attractive alternative. USB If you want to use USB, you need an old relic of a USB-B cable. It shows up like this:

sudo dmesg -w
[314170.674538] usb 1-7.1: new full-speed USB device number 11 using xhci_hcd
[314170.856450] usb 1-7.1: not running at top speed; connect to a high speed hub
[314170.892455] usb 1-7.1: New USB device found, idVendor=f4ec, idProduct=ee3a, bcdDevice= 2.00
[314170.892464] usb 1-7.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[314170.892469] usb 1-7.1: Product: SDS2304X
[314170.892473] usb 1-7.1: Manufacturer: Siglent Technologies Co,. Ltd.
[314170.892476] usb 1-7.1: SerialNumber: SDS2XJBD1R2754

Note 3 key parameters:

USB vendor ID: f4ec
USB product ID: ee3a
Product serial number: SDS2XJBD1R2754

Set up udev rules so that you can access this USB device without requiring root permission by creating an /etc/udev/rules.d/99-usbtmc.rules file and adding the following line:

SUBSYSTEM=="usb", ATTR{idVendor}=="f4ec", ATTR{idProduct}=="ee3a", MODE="0666"

You should obviously replace the vendor ID and product ID with the ones of your device.
Make the new udev rules active:

sudo udevadm control --reload-rules
sudo udevadm trigger

You can now download screenshots with the following script: siglent_screenshot_usb.py: (Click to download)

#!/usr/bin/env python3
import argparse
import io

import pyvisa
from pyvisa.constants import StatusCode
from PIL import Image

def screendump(filename):
    rm = pyvisa.ResourceManager('')
    # Siglent SDS2304X
    scope = rm.open_resource('USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR')
    scope.read_termination = None
    scope.write('SCDP')
    data = scope.read_raw(2000000)
    image = Image.open(io.BytesIO(data))
    image.save(filename)
    scope.close()
    rm.close()

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Grab a screenshot from a Siglent DSO.')
    parser.add_argument('--output', '-o', dest='filename', required=True,
                        help='the output filename')
    args = parser.parse_args()
    screendump(args.filename)

Once again, take note of this line:

scope = rm.open_resource('USB0::0xF4EC::0xEE3A::SDS2XJBD1R2754::INSTR')

Don't forget to replace 0xF4EC, 0xEE3A, and SDS2XJBD1R2754 by the correct USB vendor ID, product ID and serial number. Call the script like this:

./siglent_screenshot_usb.py -o siglent_screenshot.png

If all goes well, you'll get something like this:

A Hardware Interposer to Fix the Symmetricom SyncServer S200 GPS Week Number Rollover Problem

Introduction IMPORTANT: Use the Right GPS Antenna! The Problem: SyncServer Refuses to Lock to GPS The GPS Week Number Rollover Issue Making the Furuno GT-8031 Work Again How It Works Build Instructions Power Supply Recapping The Future: A Software-Only Solution The Result References Footnotes Introduction In my earlier blog post, I wrote about how to set up a SyncServer S200 as a regular NTP server, and how to install the backside BNC connectors to bring out the 10 MHz and 1PPS outputs. The ultimate goal is to use the SyncServer as a lab timing reference, but at the end of that blog post, it's clear that using NTP alone is not good enough to get a precise 10 MHz clock: the output frequency was off by almost 100Hz! To get a more accurate output clock, you need to synchronize the SyncServer to the GPS system so that it becomes a GPS disciplined oscillator (GPSDO) and a stratum 1 timekeeping device. The S200 has a GPS antenna input and a GPS receiver module inside, so in theory this should be a matter of connecting the right GPS antenna. But in practice it wasn't simple at all, because the GPS module in the SyncServer S200 is so old that it suffers from the so-called Week Number Roll-Over (WNRO) problem. In this blog post, I'll discuss what the WNRO problem is all about and show my custom hardware solution that fixed the problem. IMPORTANT: Use the Right GPS Antenna! Let me once again point out the importance of using the right GPS antenna to avoid damaging it permanently due to over-voltage. GPS antennas have active elements that amplify the received signal right at the point of reception before sending it down the cable to the GPS unit. Most antennas need 3.3V or 5V that is supplied through the GPS antenna connector, but the Symmetricom S200 supplies 12V! Make sure you are using a 12V GPS antenna! Check out my earlier blog post for more information. The Problem: SyncServer Refuses to Lock to GPS When you connect a GPS antenna to a SyncServer in its original configuration, everything seems to go fine initially. The front panel reports the antenna connection as "Good", a few minutes later the number of satellites detected goes up, and the right location gets reported. But the most important "Status" field remains stuck in the "Unlocked" state, which means that the SyncServer refuses to lock its internal clock to the GPS unit. This issue has been discussed to death in a number of EEVblog forum threads, but the conclusion is always the same: the Furuno GT-8031 GPS module suffers from the GPS Week Number Roll-Over (WNRO) issue and nothing can be done about it other than replacing the GPS module with an after-market one. The GPS Week Number Rollover Issue The original GPS system used a 10-bit number to count the number of weeks that have elapsed since January 6, 1980. Every 19.7 years, this number rolls over from 1023 back to 0. The first rollover happened on August 21, 1999, the second on April 6, 2019, and the next one will be on November 20, 2038. Check out this US Naval Observatory presentation for some more information. GPS module manufacturers have dealt with the issue by using a dynamic base year or variable pivot year. Let's say that a device is designed at the start of 2013, during week 697 of the 19.7-year epoch that started in 1999. The device then assumes that all week numbers of 697 and higher are for the years 2013 to 2019, and that numbers below 697 are for the years 2019 and later. Such a device will work fine for the 19.7 years from 2013 until 2032.
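To make the pivot-week trick concrete, here is a minimal sketch of the idea. This is my own illustration, not Furuno's firmware; the reference week of 1721 is my approximation of where early 2013 falls when counting full weeks from January 6, 1980:

#include <stdio.h>

/* A sketch of the pivot logic described above. Full weeks are counted
   from January 6, 1980; the values below roughly correspond to a
   device built at the start of 2013, week 697 of the epoch that
   started in 1999. */
#define REF_FULL_WEEK  1721L                   /* full week count at build time */
#define REF_TRUNC_WEEK (REF_FULL_WEEK % 1024)  /* = 697, the 10-bit value */

/* Resolve the truncated 10-bit week number broadcast by the satellites
   into a full week count. */
long resolve_week(int trunc_week)
{
    long epoch_base = REF_FULL_WEEK - REF_TRUNC_WEEK;

    if (trunc_week >= REF_TRUNC_WEEK)
        return epoch_base + trunc_week;      /* same epoch: 2013..2019 */
    return epoch_base + 1024L + trunc_week;  /* after the 2019 rollover */
}

int main(void)
{
    printf("%ld\n", resolve_week(700)); /* 1724: shortly after build, in 2013 */
    printf("%ld\n", resolve_week(100)); /* 2148: resolved to 2021 */
    return 0;
}

Anything at or above the build-time week is assumed to belong to the same 1024-week epoch as the build; anything below it is assumed to come after the next rollover. That is exactly why such a module keeps working for 19.7 years past its pivot date and then silently jumps back in time.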
And with just a few bits of non-volatile storage it is even possible to make a GPS unit robust against this kind of rollover forever: if the last date seen by the GPS unit was in 2019 and suddenly it sees a date of 1999, it can infer that there was a rollover and record in the storage that the next GPS epoch has started. Unfortunately, many modules don't do that. The only way to fix the issue is either to update the module firmware or to have some external device tell the GPS module about the current GPS epoch. Many SyncServer S2xx devices shipped with a Motorola M12 compatible module that is based on a Furuno GT-8031, which has a starting date of February 2, 2003 and rolled over on September 18, 2022. You can send a command to the module that hints at the current date, and that fixes the issue, but there is no SyncServer S200 firmware that supports that. Check out this Furuno technical document for more rollover details. The same document also tells us how to adjust the rollover date. It depends on the protocol that is supported by the module. For a GT-8031, you need to use the ZDA command; other modules require the @@Gb or the TIME command. If you want to run a SyncServer for its intended purpose, a time server, it is of course important that you get the correct date. But if you don't care about the date because the primary purpose is to use it as a GPSDO, then the 1PPS output from the GPS module should be sufficient to drive a PLL that locks the internal 10 MHz oscillator to the GPS system. This 1PPS signal is still present on the GT-8031 of my unit, and I verified with an oscilloscope that it matches the 1PPS output of my TM4313 GPSDO both in frequency and in phase as soon as it sees a couple of satellites. But there is something in the SyncServer firmware that depends on more than just the 1PPS signal, because my S200 refuses to enter into "GPS Locked" mode, and the 10 MHz oscillator stays in free-running mode at the miserable frequency of roughly 9,999,993 Hz. Making the Furuno GT-8031 Work Again There are aftermarket replacement modules out there with a rollover date that is far into the future, but they are priced pretty high. There's the iLotus IL-GPS-0030-B module, which goes for around $100 on AliExpress, but that one has a rollover date in August 2024, and other modules go as high as $240. The reason for these high prices is that these modules don't use a $5 location GPS chip but specialized ones that are designed for accurate time keeping, such as the u-blox NEO/LEA series. Instead of solving the problem with money, I wondered if it was possible to make the GT-8031 send the right date to the S200 with a hardware interposer that sits between the module and motherboard. There were 2 options:

1. Intercept the date sent from the GPS module, correct it, and transmit it to the motherboard. I tried that, but didn't get it to work.
2. Send a configuration command to the GPS module to set the right date. This method was suggested by Alex Forencich on the time-nuts mailing list. He implemented it by patching the firmware of a microcontroller on his SyncServer S350. His solution might eventually be the best one, since it doesn't require extra hardware, but by the time he posted his message, my interposer was already up and running on my desk.

It took 2 PCB spins, but I eventually came up with the following solution: Click to enlarge In the picture above, you see the GT-8031 plugged into my interposer that is in turn plugged into the motherboard.
The interposer itself looks like this: The design is straightforward: an RP2040-Zero, a smaller variant of the Raspberry Pi Pico, puts itself in between the serial TX and RX wires that normally go between the module and the motherboard. It's up to the software that runs on the RP2040 to determine what to do with the data streams that run over those wires. There are a few other connectors: the one at the bottom right is for observing the signals with a logic analyzer. There are also 2 connectors for power. When finally installed, the interposer gets powered with a 5V supply that's available on a pin that is conveniently located right behind the GPS module. In the picture above, the red wire provides the 5V; the ground is connected through the screws that hold the board in place. The total cost of the board is as follows:

PCB: $2 for 5 PCBs + $1.50 shipping = $3.50
RP2040-Zero: $9 on Amazon
2 5x2 connectors: $5 on Mouser + $5 shipping = $10
Total: $22.50

The full project details can be found in my gps_interposer GitHub repo. How It Works To arrive at a working solution, I recorded all the transactions on the serial port RX and TX and ran them through a decoder script to convert them into readable GPS messages. Here are the messages that are exchanged between the motherboard and the module after powering up the unit:

>>> @@Cf - set to defaults command: [], [37], [13, 10] - 7
>>> @@Gf - time raim alarm message: [0, 10], [43], [13, 10] - 9
>>> @@Aw - time correction select: [1], [55], [13, 10] - 8
>>> @@Bp - request utc/ionospheric data: [0], [50], [13, 10] - 8
>>> @@Ge - time raim select message: [1], [35], [13, 10] - 8
>>> @@Gd - position control message: [3], [32], [13, 10] - 8
<<< @@Aw - time correction select: [1], [55], [13, 10] - 8 time_mode:UTC
<<< @@Co - utc/ionospheric data input: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [44], [13, 10] - 29 alpha0:0, alpha1:0, alhpa2:0, alpha3:0 beta0:0, beta1:0, alhpa2:0, beta3:0 A0:0, A1:0 delta_tls:0, tot:0, WNt:0, WNlsf:0, DN:0, delta_Tlsf:0

The messages above can be seen in the blue and green rectangles of this logic analyzer screenshot: Click to enlarge Earlier, we saw that the GT-8031 module itself needs the ZDA command to set the time. This is an NMEA command. The messages above are Motorola M12 commands, however. On the M12 module that contains the GT-8031 there is also a TI MSP430F147 microcontroller that takes care of the conversion between Motorola and NMEA commands. Note how the messages that arrive at the interposer immediately get forwarded to the other side. But there is one transaction marked in red, generated by the interposer itself, that sends the @@Gb command. When the GPS module is not yet locked to a satellite, this command sends an initial estimate of the current date and time. The M12 User Guide has the following to say about this command: The interposer sends a hinted date of May 4, 2024. When the GPS module receives the date from its first satellite, it corrects the date and time to the right value, but it uses the initial estimated date to correct for the week number rollover! I initially placed the @@Gb command right after the @@Cf command that resets the module to default values, but that didn't work. The solution was to send it after the initial burst of configuration commands. With this fix in place, it still takes almost 15 minutes before the S200 enters into GPS lock.
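As an aside, the framing of these Motorola binary messages is easy to see in the dumps above: two '@' characters, a two-letter command ID, the payload, a checksum that is the XOR of the ID letters and payload bytes, and a CR/LF. You can check this against the trace: for @@Cf with an empty payload, 'C' ^ 'f' is 0x25, i.e. the 37 shown, and the trailing counts (7 for @@Cf, 9 for @@Gf) are the total message lengths. Here is a small sketch of how an interposer could frame such a command; the @@Gb payload layout is my reading of the M12 user guide, so treat the field order as an assumption:

#include <stdio.h>

/* Frame a Motorola binary command: "@@", two ID letters, payload,
   XOR checksum over the ID letters and payload, then CR LF. */
int frame_cmd(const char *id, const unsigned char *payload, int len,
              unsigned char *out)
{
    int i, n = 0;
    unsigned char sum;

    out[n++] = '@'; out[n++] = '@';
    out[n++] = id[0]; out[n++] = id[1];
    sum = id[0] ^ id[1];
    for (i = 0; i < len; i++) {
        out[n++] = payload[i];
        sum ^= payload[i];
    }
    out[n++] = sum;
    out[n++] = 0x0d; out[n++] = 0x0a;
    return n;
}

int main(void)
{
    /* Hypothetical @@Gb payload hinting May 4, 2024, 12:00:00 UTC:
       month, day, 2-byte year, hour, minute, second, GMT offset.
       Double-check this layout against the M12 guide before use. */
    unsigned char date[] = { 5, 4, 2024 >> 8, 2024 & 0xff, 12, 0, 0, 0, 0, 0 };
    unsigned char buf[32];
    int i, n = frame_cmd("Gb", date, sizeof date, buf);

    for (i = 0; i < n; i++)
        printf("%02x ", buf[i]);
    printf("\n");
    return 0;
}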
The moment of GPS lock is visible on the logic analyzer as an increased rate of serial port traffic: Click to enlarge There's also the immense satisfaction of seeing the GPS status change to "Locked": Build Instructions Disclaimer: it's easy to make mistakes and permanently damage your SyncServer. Build this at your own risk! All design assets for this project can be found in my gps_interposer project on GitHub. There is a KiCad project with schematic, PCB design and Gerber files. A PDF with the schematics can be found here. The design consists of a few connectors, some resistors and the RP2040-Zero. If you just want the Gerbers, you can use gps_interposer_v2.zip. The interposer is connected to the motherboard and to the M12 module with 2x5 pin 0.050" connectors, one male and one female. But because it was difficult to come up with the right height of the male connector, I chose to add 2 female connectors, available here on Mouser, and used a loose male-to-male connector between them to make the connection. The trickiest part of the build is soldering the 2x5 pin connector that connects to the motherboard: it needs to be reasonably well positioned, otherwise the alignment of the screw holes will be out of whack. I applied some solder paste on the solder pads, carefully placed the connector on top of the paste, and used a soldering iron to melt the paste without moving the connector. It wasn't too bad. Under normal circumstances, the board is powered by the 5V rail that's present right next to the GPS module. However, when you plug a USB-C cable into the RP2040-Zero, you need to make sure to remove that connection, because you'll short out the 5V rail of the S200 otherwise! The board has 2 connectors for the 5V: at the front next to the USB-C connector and in the back right above the 5V of the motherboard. The goal was to plug in the power there, but it turned out to be easier to use a patch wire between the motherboard and the front power connector. You'll need to program the RP2040-Zero with the firmware, of course. You can find it in the sw/rp2040_v2 directory of my gps_interposer GitHub repo. Power Supply Recapping If you plan to deploy the SyncServer in always-on mode, you should seriously consider replacing the capacitors in the power supply unit: they are known to be leaky, and if things really go wrong they can be a fire hazard. The power supply board can easily be removed with a few screws. In my version, the capacitors were held in place with some hard sticky goo, so it takes some effort to remove them. The Future: A Software-Only Solution My solution uses a cheap piece of custom hardware. Alex's solution currently requires patching some microcontroller firmware and then flashing this firmware with a $30 programming dongle. So both solutions require some hardware. A software-only solution should be possible though: the microcontroller gets reprogrammed during an official SyncServer software update procedure. It should be possible to insert the patched microcontroller firmware into an existing software update package and then do an upgrade. Since the software update package is little more than a tar archive of the Linux disk image that runs the embedded PC, it shouldn't be very hard to make this happen, but right now this doesn't exist, and I'm happy with what I have. The Result The video below shows the 10 MHz output of the S200 being measured by a frequency counter that uses a calibrated stand-alone frequency standard as reference clock.
The stability of the S200 seems slightly worse than that of my TM4313 GPSDO, but it's close. Right now, I don't yet have the knowledge to measure and quantify these differences in a scientifically acceptable way. References

Microsemi SyncServer S200 datasheet
Microsemi SyncServer S200, S250, S250i User Guide
EEVblog - Symmetricom S200 Teardown/upgrade to S250
EEVblog - Symmetricom Syncserver S350 + Furuno GT-8031 timing GPS + GPS week rollover
EEVblog - Synserver S200 GPS lock question
Furuno GPS/GNSS Receiver GPS Week Number Rollover


More in technology

Greatest Hits

I’ve been blogging now for approximately 8,465 days since my first post on Movable Type. My colleague Dan Luu helped me compile some of the “greatest hits” from the archives of ma.tt; perhaps some posts will stir some memories for you as well: Where Did WordCamps Come From? (2023) A look back at how Foo … Continue reading Greatest Hits →

Let's give PRO/VENIX a barely adequate, pre-C89 TCP/IP stack (featuring Slirp-CK)

Many years ago I bought TCP/IP Illustrated (what would now be called the first edition prior to the 2011 update) for a hundred-odd bucks on sale, and it has now sat on my bookshelf, encased in its original shrinkwrap, for at least twenty years. It would be fun to put up the 4.4BSD data structures poster it came with but that would require opening it. Fortunately, today we have AI, er, we have many more excellent and comprehensive documents on the subject, and more importantly, we've recently brought back up an oddball platform that doesn't have networking either: our DEC Professional 380 running the System V-based PRO/VENIX V2.0, which you met a couple articles back. The DEC Professionals are a notoriously incompatible member of the PDP-11 family and, short of DECnet (DECNA) support in its unique Professional Operating System, there's officially no other way you can get one on a network — let alone the modern Internet. Are we going to let that stop us? No: we'll write our own stack, and it can even be paired with a Crypto Ancienne proxy for TLS 1.3. And, as we'll discuss, if you can get this thing on the network, you can get almost anything on the network! Easily portable and painfully verbose source code is included. Recall from our lengthy history of DEC's early misadventures with personal computers that, in Digital's ill-advised plan to avoid the DEC Pros cannibalizing low-end sales from their categorical PDP-11 minicomputers, Digital's Small Systems Group deliberately made the DEC Professional series nearly totally incompatible despite the fact they used the same CPUs. In their initial roll-out strategy in 1982, the Pros (as well as their sibling systems, the Rainbow and the DECmate II) were only supposed to be mere desktop office computers — the fact the Pros were PDP-11s internally was mostly treated as an implementation detail. The idea backfired spectacularly against the IBM PC when the Pros and their promised office software failed to arrive on time, and in 1984 DEC retooled around a new concept of explicitly selling the Pros as desktop PDP-11s. This required porting operating systems that PDP-11 minis typically ran: RSX-11M Plus was already there as the low-level layer of the Professional Operating System (P/OS), and DEC internally ported RT-11 (as PRO/RT-11) and COS. PDP-11s were also famous for running Unix, and so DEC needed a Unix for the Pro as well, though eventually only one official option was ever available: a port of VenturCom's Venix based on V7 Unix and later System V Release 2.0, called PRO/VENIX. After the last article, I had the distinct pleasure of being contacted by Paul Kleppner, the company's first paid employee in 1981, who was part of the group at VenturCom that did the Pro port and stayed at the company until 1988. Venix was originally developed from V6 Unix on the PDP-11/23, incorporating the real-time kernel extensions (such as semaphores and asynchronous I/O) of Myron Zimmerman, then a postdoc in physics at MIT; Kleppner's father was the professor of the lab Zimmerman worked in. Zimmerman founded VenturCom in 1981 to capitalize on the emerging Unix market, becoming one of the earliest commercial Unix licensees. Venix-11 was subsequently based on the later V7 Unix, as was Venix/86, which was the first Unix on the IBM PC in January 1983 and was ported to the DEC Rainbow as Venix/86R. In addition to its real-time extensions and enhanced segmentation capability, critical for memory management in smaller 16-bit address spaces, it also included a full desktop graphics package.
Notably, DEC themselves were also a Unix licensee through their Unix Engineering Group and already had an enhanced V7 Unix of their own running on the PDP-11, branded initially as V7M. Subsequently the UEG developed a port of 4.2BSD with some System V components for the VAX and planned to release it as Ultrix-32, simultaneously retconning V7M as Ultrix-11 even though it had little in common with the VAX release. Paul recalls that DEC did attempt a port of Ultrix-11 to the Pro 350 themselves but ran into intractable performance problems. By then the clock was ticking on the Pro relaunch, and the issues with Ultrix-11 likely prompted DEC to look for alternatives. Crucially, Zimmerman had managed to upgrade Venix-11's kernel while still keeping it small, a vital aspect on his 11/23, which lacked split instruction and data addressing and would have had to page in and out a larger kernel otherwise. Moreover, the 11/23 used an F-11 CPU — the same CPU as the original Professional 350 and 325. DEC quickly commissioned VenturCom to port their own system over to the Pro, which Paul says was a real win for VenturCom, and the first release came out in July 1984, complete with its real-time features intact and graphics support for the Pro's bitmapped screen. It was upgraded ("PRO/VENIX Rev 2.0") in October 1984, adding support for the new top-of-the-line DEC Professional 380, and then switched to System V (SVR2) in July 1985 with PRO/VENIX V2.0. (For its part, Ultrix-11 was released as such in 1984 as well, but never for the Pro series.) Keep that kernel version history in mind for when we get to oddiments of the C compiler. As for networking, though, with the exception of UUCP over serial, none of these early versions of Venix on either the PDP-11 or 8086 supported any kind of network connectivity out of the box — officially the only Pro operating system to support its Ethernet upgrade option was P/OS 2.0. Although all Pros have a 15-pin AUI network port, it isn't activated until an Ethernet CTI card is installed. (While Stan P. found mention of a third-party networking product called Fusion by Network Research Corporation which could run on PRO/VENIX, Paul's recollection is that this package ran into technical problems with kernel size during development. No examples of the PRO/VENIX version have so far been located and it may never have actually been released. You'll hear about it if a copy is found. The unofficial Pro 2.9BSD port also supports the network card, but that was always an under-the-table thing.) Since we run Venix on our Pro, that means currently our only realistic option to get this on the 'Nets is also over a serial port, and we'll use the slower printer port for our serial IP implementation. PRO/VENIX supports using only the RS-423 port as a remote terminal, and because it's twice as fast, it's more convenient for logins and file exchange over Kermit (which also has no TCP/IP overhead). Using the printer port also provides us with a nice challenge: if our stack works acceptably well at 4800bps, it should do even better at higher speeds if we port it elsewhere. On the Pro, we connect to our upstream host using a BCC05 cable (in the middle of this photograph), which terminates in a regular 25-pin RS-232 on the other end. Now for the software part. There are other small TCP/IP stacks, notably things like Adam Dunkels' lwIP and so on.
But even SVR2 Venix is by present standards an old Unix with a much less extensive libc and a more primitive C compiler — in a short while you'll see just how primitive — and relatively modern code like lwIP's would require a lot of porting. Ideally we'd like a very minimal, indeed barely adequate, stack that can do simple tasks and can be expressed in a fashion acceptable to a now antiquated compiler. Once we've written it, it would be nice if it were also easily portable to other very limited systems, even by directly translating it to assembly language if necessary. What we want this barebones stack to accomplish will inform its design. We're not going to use it as a server: that would mean keeping the machine and the hardware on 24-7 to make such a use case meaningful. The Ethernet option was reportedly competent at server tasks, but Ethernet has more bandwidth, and that card also has additional on-board hardware. Let's face the cold reality: as a server, we'd find interacting with it over the serial port unsatisfactory at best, and we'd use up a lot of power and MTBF keeping it on more than we'd like to. Therefore, we really should optimize for the client case, which means we also only need to run the client when we're performing a network task. Similarly, this is a single-user machine: with no remote login capacity (like, I dunno, a C64), the person on the console gets it all. Therefore, we really should optimize for the single-user case, which means we can simplify our code substantially by merely dealing with sockets sequentially, one at a time, without having to worry about routing packets we get on the serial port to other tasks or multiplexing them. Doing so would require extra work for dual-socket protocols like FTP, but we're already going to use directly-attached Kermit for that, and if we really want file transfer over TCP/IP there are other choices. (On a larger antique system with multiple serial ports, we could consider a setup where each user uses a separate outgoing serial port as their own link, which would also work under this scheme.) Some of you may find this conflicts hard with your notion of what a "stack" should provide, but I also argue that the breadth of a full-service driver would be wasted on a limited configuration like this and be unnecessarily more complex to write and test. Worse, in many cases, is better, and I assert this particular case is one of them. Keeping the above in mind, what are appropriate client tasks for a microcomputer from 1984, now over 40 years old — even a fairly powerful one by the standards of the time — to do over a slow TCP/IP link? Simple, latency-tolerant jobs like small HTTP or Gopher fetches, finger queries and the like seem reasonable. (Crypto Ancienne's carl can serve as an HTTP-to-HTTPS proxy to handle the TLS part, if necessary.) We could use protocols like these to download and/or view files from systems that aren't directly connected, or to send and receive status information. One task that is also likely common is an interactive terminal connection (e.g., Telnet, rlogin) to another host. However, as a client this particular deployment is still likely to hit the same sorts of latency problems for the same reasons we would experience connecting to it as a server. These other tasks here are not highly sensitive to latency, require only a single "connection" and no multiplexing, and are simple protocols which are easy to implement. Let's call this feature set our minimum viable product. Because we're writing only for a couple of specific use cases, and to make them even more explicit and easy to translate, we're going to take the unusual approach of having each of these clients handle their own raw packets in a bytewise manner.
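To give a flavor of what that bytewise style looks like, here's a sketch of the standard RFC 1071 Internet checksum written under those constraints: no unsigned char, no assumptions about byte order, and sign bits masked off with & 0xff. This is a generic illustration of the algorithm, not the actual BASS routine:

/* RFC 1071 Internet checksum over a buffer (the checksum field itself
   must be zeroed first). The 16-bit words are assembled explicitly
   from bytes, so host endianness never matters, and each byte is
   masked with & 0xff so a signed char can't smuggle in sign bits. */
unsigned int ip_cksum(char *buf, int len)
{
    long sum = 0;
    int i = 0;

    while (i + 1 < len) {
        sum += ((long)(buf[i] & 0xff) << 8) | (buf[i + 1] & 0xff);
        i += 2;
    }
    if (i < len)                       /* odd trailing byte, pad with zero */
        sum += (long)(buf[i] & 0xff) << 8;

    while (sum >> 16)                  /* fold the carries back in */
        sum = (sum & 0xffffL) + (sum >> 16);

    return (unsigned int)(~sum & 0xffff);
}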
For the actual serial link we're going to go even more barebones and use old-school RFC 1055 SLIP instead of PPP (uncompressed, too, not even Van Jacobson CSLIP). This is trivial to debug and straightforward to write, and if we do so in a relatively encapsulated fashion, we could consider swapping in CSLIP or PPP later on. A couple of utility functions will do the IP checksum algorithm and reading and writing the serial port, and DNS and some aspects of TCP also get their own utility subroutines, but otherwise all of the programs we will create will read and write their own network datagrams, using the SLIP code to send and receive over the wire. The C we will write will also be intentionally very constrained, using bytewise operations assuming nothing about endianness and using as little of the C standard library as possible. For types, you only need some sort of 32-bit long, which need not be native, an int of at least 16 bits, and a char type — which can be signed, and in fact has to be to run on earlier Venices (read on). You can run the entirety of the code with just malloc/free, read/write/open/close, strlen/strcat, sleep, rand/srand and time for the srand seed (and fprintf for printing debugging information, if desired). On a system with little or no operating system support, almost all of these primitive library functions are easy to write or simulate, and we won't even assume we're capable of non-blocking reads despite the fact Venix can do so. After all, from that which little is demanded, even less is expected. On the other end of the link, the traditional arrangement is slattach, which effectively makes a serial port directly into a network interface. Such an arrangement would be the most flexible approach from the user's perspective because you necessarily have a fixed, bindable external address, but obviously such a scheme didn't scale over time. With the proliferation of dialup Unix shell accounts in the late 1980s and early 1990s, closed-source tools like 1993's The Internet Adapter ("TIA") could provide the SLIP and later PPP link just by running them from a shell prompt. Because they synthesize artificial local IP addresses, sort of NAT before the concept explicitly existed, the architecture of such tools prevented directly creating listening sockets — though for some situations this could be considered more of a feature than a bug. Any needed external ports could be proxied by the software anyway and later network clients tended not to require it, so for most tasks it was more than sufficient. Closed-source and proprietary SLIP/PPP-over-shell solutions like TIA were eventually displaced by open source alternatives, most notably SLiRP. SLiRP (hereafter Slirp so I don't gouge my eyes out) emerged in 1995 and used a similar architecture to TIA, handing out virtual addresses on a synthetic network and bridging that network to the Internet through the host system. It rapidly became the SLIP/PPP shell solution of choice, leading to its outright ban by some shell ISPs who claimed it violated their terms of service. As direct SLIP/PPP dialup became more common than shell accounts, during which time yours truly upgraded to a 56K Mac modem I still have around here somewhere, Slirp eventually became most useful for connecting small devices via their serial ports (PDAs and mobile phones especially, but really anything — subsets of Slirp are still used in emulators today like QEMU for a similar purpose) to a LAN. By a shocking and completely contrived coincidence, that's exactly what we'll be doing!
Slirp has not been officially maintained since 2006. There is no package in Fedora, which is my usual desktop Linux, and the one in Debian reportedly has issues. A stack of patch sets circulated thereafter, but the planned 1.1 release never happened, and other crippling bugs remain, some of which were addressed in other patches that don't seem to have made it into any release, source or otherwise. If you tried to build Slirp from source on a modern system and it just immediately exits, you got bit. I have incorporated those patches and a couple of my own for port naming and the configure script, plus some additional fixes, into an unofficial "Slirp-CK" which is on Github. It builds the same way as prior versions and is tested on Fedora Linux. I'm working on getting it functional on current macOS also. Next, I wrote up our four basic functional clients: ping, DNS lookup, NTP client (it doesn't set the clock, just shows you the stratum, refid and time which you can use for your own purposes), and TCP client. The TCP client accepts strings up to a defined maximum length, opens the connection, sends those strings (optionally separated by CRLF), and then reads the reply until the connection closes. This all seemed to work great on the Linux box, which you yourself can play with as a toy stack (directions at the end). Unfortunately, I then pushed it over to the Pro with Kermit and the compiler immediately started complaining. SLIP is a very thin layer on IP packets. There are exactly four metabytes, which I created preprocessor defines for: a SLIP packet ends with SLIP_END, or hex $c0. Where this must occur within a packet, it is replaced by a two-byte sequence for unambiguity, SLIP_ESC SLIP_ESC_END, or hex $db $dc, and where the escape byte must occur within a packet, it gets a different two-byte sequence, SLIP_ESC SLIP_ESC_ESC, or hex $db $dd. Although I initially set out to use defines and symbols everywhere instead of naked bytes, and wrote slip.c on that basis, I eventually settled on raw bytes afterwards, using copious comments so it was clear what was intended to be sent. That probably saved me a lot of work renaming everything, because I dimly recalled that early C compilers, including System V's, limit their identifiers to eight characters (the so-called "Ritchie limit"). At this point I probably should have simply removed them entirely for consistency with their absence elsewhere, but I went ahead and trimmed them down to more opaque, pithy identifiers. That wasn't the only problem, though. I originally had two functions in slip.c, slip_start and slip_stop, and it didn't like that either, despite each appearing to have a unique eight-character prefix. That's because their symbols in the object file are actually prepended with various metacharacters like _ and ~, so effectively you only get seven characters in function identifiers, an issue the error message fails to explain clearly. The next problem: there's no unsigned char, at least not in PRO/VENIX Rev. 2.0, which I want to support because it's more common, and presumably not in the original versions of PRO/VENIX and Venix-11 either. (This type does exist in PRO/VENIX V2.0, but that's because it's System V and has a later C compiler.) In fact, the unsigned keyword didn't exist at all in the earliest C compilers, and even when it did, it couldn't be applied to every basic type.
Although unsigned char was introduced in V7 Unix and is documented as legal in the PRO/VENIX manual, and it does exist in Venix/86 2.1, which is also a V7 Unix derivative, the PDP-11 and 8086 C compilers have different lineages and Venix's V7 PDP-11 compiler definitely doesn't support it. I suspect this may not have been intended, because unsigned int works (unsigned long would be pointless on this architecture, and indeed correctly generates Misplaced 'long' on both versions of PRO/VENIX). Regardless of why, however, the plain char type on the PDP-11 is signed, and for compatibility reasons here we'll have no choice but to use it. Recall that when C89 was being codified, plain char was left as an ambiguous type, since some platforms (notably PDP-11 and VAX) made it signed by default and others made it unsigned, and C89 was more about codifying existing practice than establishing new ones. That's why you see this on a modern 64-bit platform, e.g., my POWER9 workstation, where plain char is unsigned: If we change the original type explicitly to signed char on our POWER9 Linux machine, that's different: and, accounting for different sizes of int, seems similar on PRO/VENIX V2.0 (again, which is System V): but the exact same program on PRO/VENIX Rev. 2.0 behaves a bit differently: The differences in int size we expect, but there are other kinds of weird stuff going on here. The PRO/VENIX manual lists all the various permutations about type conversions and what gets turned into what where, but since the manual is already wrong about unsigned char, I don't think we can trust the documentation for this part either. Our best bet is to move values into int and mask off any propagated sign bits before doing comparisons or math, which is agonizing, but reliable. That means throwing around a lot of seemingly superfluous & 0xff to make sure we don't get negative numbers where we don't want them. Once I got it built, however, there were lots of bugs. Many were because it turns out the compiler isn't too good with 32-bit long, which is not a native type on the 16-bit PDP-11. This (part of the NTP client) worked on my regular Linux desktop, but didn't work in Venix: the first problem is that the intermediate shifts are too large and overshoot, even though they should be in range for a long. Consider this example: on the POWER9, accounting for the different semantics of %lx, the output is as expected. But on Venix, the second shift blows out the value. We can get an idea of why from the generated assembly in the adb debugger (here from PRO/VENIX V2.0, since I could cut and paste from the Kermit session): (Parenthetical notes: csav is a small subroutine that pushes volatiles r2 through r4 on the stack and turns r5 into the frame pointer; the corresponding cret unwinds this. The initial branch in this main is used to reserve additional stack space, but is often practically a no-op.) The first shift is here at ~main+024. Remember the values are octal, so 010 == 8. r0 is 16 bits wide — no 32-bit registers — so an eight-bit shift is fine. When we get to the second shift, however, it's the same instruction on just one register (030 == 24) and the overflow is never checked. In fact, the compiler never shifts the second part of the long at all. The result is thus zero. The second problem in this example is that the compiler never treats the constant as a long, even though statically there's no way it can fit in a 16-bit int.
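The offending code and the fix appear as screenshots in the original post and didn't survive here, but a hypothetical reconstruction of the working version (names like pkt and ntp_to_unix are mine; byte 40 is where the NTP transmit timestamp starts in the packet) would look something like this:

/* Reconstructed sketch: extract the 32-bit NTP seconds field one byte
   at a time, so the compiler only ever emits 8-bit shifts, masking
   sign bits as usual. The epoch offset lives in a long variable so
   the compiler can't quietly treat it as an int; note 2208988800
   (seconds from 1900 to 1970) wraps to a negative value in a signed
   32-bit long, but two's complement subtraction still comes out right. */
long ntp_to_unix(char *pkt)
{
    long secs;
    long epoch;

    secs = pkt[40] & 0xff;
    secs = (secs << 8) | (pkt[41] & 0xff);
    secs = (secs << 8) | (pkt[42] & 0xff);
    secs = (secs << 8) | (pkt[43] & 0xff);

    epoch = 2208988800L;
    return secs - epoch;
}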
To get around those two gotchas on both Venices, I rewrote it along the lines of the sketch above. An alternative to a second variable is to explicitly mark the epoch constant itself as long, e.g., by casting it, which also works. Here's another example for your entertainment. At least some sort of pseudo-random number generator is crucial, especially for TCP when selecting the pseudo-source port and initial sequence numbers, or otherwise Slirp seemed to get very confused because we would "reuse" things a lot. Unfortunately, the obvious typical idiom to seed it like srand(time(NULL)) doesn't work: srand() expects a 16-bit int but time(NULL) returns a 32-bit long, and it turns out the compiler only passes the 16 most significant bits of the time — i.e., the ones least likely to change — to srand(). Here's the disassembly as proof (contents trimmed for display here; since this is a static binary, we can see everything we're calling): At the time we call the glue code for time from main, the value under the stack pointer (i.e., r6) is cleared immediately beforehand since we're passing NULL (at ~main+06). We then invoke the system call, which per the Venix manual for time(2) uses two registers for the 32-bit result, namely r0 (high bits) and r1 (low bits). We passed a null pointer, so the values remain in those registers and aren't written anywhere (branch at _time+014). When we return to ~main+014, however, we only put r0 on the stack for srand (remember that r5 is being used as the frame pointer; see the disassembly I provided for csav) and r1 is completely ignored. Why would this happen? It's because time(2) isn't declared anywhere in /usr/include or /usr/include/sys (the two C include directories), nor for that matter rand(3) or srand(3). This is true of both Rev. 2.0 and V2.0. Since the symbols are statically present in the standard library, linking will still work, but since the compiler doesn't know what it's supposed to be working with, it assumes int and fails to handle both halves of the long. One option is to manually declare everything ourselves. However, from the assembly at _time+016 we do know that if we pass a pointer, the entire long value will get placed there. That means we can also pass time() a pointer to a long and seed srand() from the low half of the value it fills in. This gets the lower bits, and there is sufficient entropy for our purpose (though obviously not a cryptographically-secure PRNG). Interestingly, the Venix manual recommends using the time as the seed, but doesn't include any sample code. At any rate this was enough to make the pieces work for IP, ICMP and UDP, but TCP would bug out after just a handful of packets. As it happens, Venix has rather small serial buffers by modern standards: tty(7), based on the TIOCQCNT ioctl(2), appears to have just a 256-byte read buffer (sg_ispeed is only char-sized). If we don't make adjustments for this, we'll start losing framing when the buffer gets overrun, as in this extract from a test build with debugging dumps on and a maximum segment size/window of 512 bytes. Here, the bytes marked by dashes are from the remote end and the bytes separated by dots are what the SLIP driver is scanning for framing and/or throwing away; you'll note there is obvious ASCII data in them. If we make the TCP MSS and window on our client side 256 bytes, there is still retransmission, but the connection is more reliable, since overrun occurs less often, and this seems to work better than a hard cap on the maximum transmission unit (e.g., "mtu 256") from Slirp's side.
The only consequence of dropping the TCP MSS and window size is that the TCP client is currently hard-coded to just send one packet at the beginning (this aligns with how you'd do finger, HTTP/1.x, gopher, etc.), and that datagram uses the same size, which necessarily limits how much can be sent. If I did the extra work to split this over several datagrams, it obviously wouldn't be a problem anymore, but I'm lazy and worse is better! The connection can be made somewhat more reliable still by improving the SLIP driver's notion of framing. RFC 1055 only specifies that the SLIP end byte (i.e., $c0) occur at the end of a SLIP datagram, though it also notes that it was proposed very early on that it could also start datagrams — i.e., if two occur back to back, then it just looks like a zero-length or otherwise obviously invalid entity which can be trivially discarded. However, since there's no guarantee or requirement that the remote link will do this, we can't assume it either. We also can't just look for a $45 byte (i.e., IPv4 with a 20-byte header) because that's an ASCII character and appears frequently in text payloads. However, $45 followed by a valid DSCP/ECN byte is much less frequent, and most of the time this byte will be either $00, $08 or $10; we don't currently support ECN (maybe we should) and we wouldn't find other DSCP values meaningful anyway. The SLIP driver uses these sequences to find the start of a datagram and $c0 to end it. While that doesn't solve the overflow issue, it means the SLIP driver will be less likely to go out of framing when the buffer does overrun and thus can better recover when the remote side retransmits. And, well, that's it. There are still glitches to bang out, but it's good enough to grab Hacker News: To build Slirp-CK, go to its src/ directory, run configure and then run make (parallel make is fine, I use -j24 on my POWER9). Connect your two serial ports together with a null modem, which I assume will be /dev/ttyUSB0 and /dev/ttyUSB1. Start Slirp-CK with a command line like ./slirp -b 4800 "tty /dev/ttyUSB1" but adjusting the baud and path to your serial port. Take note of the specified virtual and nameserver addresses: Unlike the given directions, you can just kill it with Control-C when you're done; the five zeroes are only if you're running your connection over standard output such as direct shell dial-in (this is a retrocomputing blog so some of you might). To see the debug version in action, next go to the BASS directory and just do a make. You'll get a billion warnings but it should still work with current gcc and clang because I specifically request -std=c89. If you use a different path for your serial port (i.e., not /dev/ttyUSB0), edit slip.c before you compile. You don't do anything like ifconfig with these tools; you always provide the tools the client IP address they'll use (or create an alias or script to do so). Try this initial example, with slirp already running: Because I'm super-lazy, you separate the components of the IPv4 address with spaces, not dots. In Slirp-land, 10.0.2.2 is always the host you are connected to. You can see the ICMP packet being sent, the bytes being scanned by the SLIP driver for framing (the ones with dots), and then the reply (with dashes). These datagram dumps have already been pre-processed for SLIP metabytes. Unfortunately, you may not be able to ping other hosts through Slirp because there's no backroute, but you could try this with a direct SLIP connection, an exercise left for the reader.
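To tie the framing rules from earlier together in one place, here's what the send side of RFC 1055 looks like in the same constrained style. This is a generic sketch rather than the real slip.c, and putc_serial is a stand-in for whatever routine writes one byte to the port:

extern void putc_serial(int c);   /* stand-in: write one byte to the port */

/* RFC 1055 framing, send side: $c0 (END) terminates a datagram, and a
   leading END is also sent to flush any line noise. Inside the
   payload, $c0 becomes $db $dc and $db becomes $db $dd. */
void slip_send(char *pkt, int len)
{
    int i, c;

    putc_serial(0xc0);                /* optional leading END */
    for (i = 0; i < len; i++) {
        c = pkt[i] & 0xff;            /* mask sign bits, as usual */
        if (c == 0xc0) {              /* END inside payload */
            putc_serial(0xdb);
            putc_serial(0xdc);
        } else if (c == 0xdb) {       /* ESC inside payload */
            putc_serial(0xdb);
            putc_serial(0xdd);
        } else {
            putc_serial(c);
        }
    }
    putc_serial(0xc0);                /* end of datagram */
}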
If Slirp doesn't want to respond and you're sure your serial port works (try testing both ends with Kermit?), you can recompile it with -DDEBUG (change this in the generated Makefile) and pass your intended debug level like -d 1 or -d 3. You'll get a file called slirp_debug with some agonizingly detailed information so you can see if it's actually getting the datagrams and/or liking the datagrams it gets. For nslookup, ntp and minisock, the second address becomes your accessible recursive nameserver (or use -i to provide an IP). The DNS dump is also given in the debug mode with slashes for the DNS answer section. nslookup and ntp are otherwise self-explanatory. minisock takes a server name (or IP) and port, followed by optional strings. The strings, up to 255 characters total (in this version), are immediately sent with CR-LFs between them except if you specify -n. If you specify no strings, none are sent. It then waits on that port for data and exits when the socket closes. This is how we did the HTTP/1.0 requests in the screenshots. On the DEC Pro, this has been tested on my trusty DEC Professional 380 running PRO/VENIX V2.0. It should compile and run on a 325 or 350, and on at least PRO/VENIX Rev. 2.0, though I don't have any hardware for this and Xhomer's serial port emulation is not good enough for this purpose (so unfortunately you'll need a real DEC Pro until I or Tarek get around to fixing it). The easiest way to get it over there is Kermit. Assuming you have this already, connect your host and the Pro on the "real" serial port at 9600bps. Make sure both sides are set to binary and just push all the files over (except the Markdown documentation unless you really want), and then do a make -f Makefile.venix (it may have been renamed to makefile.venix; adjust accordingly). Establishing the link is as simple as connecting your server's serial port to the other end of the BCC05 or equivalent from the Pro and starting Slirp to talk to that port (on my system, it's even the same port, so the same command line suffices). If you experience issues with the connection, the easiest fix is to just bounce Slirp — because there are no timeouts, there are also no retransmits. I don't know if this is hitting bugs in Slirp or in my code, though it's probably the latter. Nevertheless, I've been able to run stuff most of the day without issue. It's nice to have a simple network option and the personal satisfaction of having written it myself. There are many acknowledged deficiencies, mostly because I assume little about the system itself and tried to keep everything very simplistic. There are no timeouts and thus no retransmits, and if you break the TCP connection in the middle there will be no proper teardown. Also, because I used Slirp for the other side (as many others will), and because my internal network is full of machines that have no idea what IPv6 is, there is no IPv6 support. I agree there should be, and SLIP doesn't care whether it gets IPv4 or IPv6, but for now that would require patching Slirp, which is a job I just don't feel up to at the moment. I'd also like to support at least CSLIP in the future. In the meantime, if you want to try this on other operating systems, the system-dependent portions are in compat.h and slip.c, with a small amount in ntp.c for handling time values. You will likely want to make changes to where your serial ports are and the speed they run at and how to make that port "raw" in slip.c.
You should also add any extra #includes to compat.h that your system requires. I'd love to hear about it running other places. Slirp-CK remains under the original modified Slirp license and BASS is under the BSD 2-clause license. You can get Slirp-CK and BASS at Github.

Transactions are a protocol

Transactions are not an intrinsic part of a storage system. Any storage system can be made transactional: Redis, S3, the filesystem, etc. Delta Lake and Orleans demonstrated techniques to make S3 (or cloud storage in general) transactional. Epoxy demonstrated techniques to make Redis (and any other system) transactional. And of course there's always good old Two-Phase Commit. If you don't want to read those papers, I wrote about a simplified implementation of Delta Lake and also wrote about a simplified MVCC implementation over a generic key-value storage layer. It is both the beauty and the burden of transactions that they are not intrinsic to a storage system. Postgres and MySQL and SQLite have transactions. But you don't need to use them. It isn't possible to require you to use transactions. Many developers, myself a few years ago included, do not know why you should use them. (Hint: read Designing Data Intensive Applications.) And you can take it even further by ignoring the transaction layer of an existing transactional database and implementing your own transaction layer, as Convex has done (the Epoxy paper above also does this). It isn't entirely clear that you have a lot to lose by implementing your own transaction layer, since the indexes you'd want on the version field of a value would only be as expensive or slow as any other secondary index in a transactional database. Though why you'd do this isn't entirely clear (I would like to read about this from Convex some time). It's useful to see transaction protocols as another tool in your system design tool chest when you care about consistency, atomicity, and isolation. Especially as you build systems that span data systems. Maybe, as Ben Hindman hinted at the last NYC Systems, even proprietary APIs will eventually provide something like two-phase commit so physical systems outside our control can become transactional too.
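To underline the "protocol" framing: two-phase commit, for instance, is nothing more than a message exchange that any set of stores can participate in. A toy sketch of the coordinator side (in C for concreteness; prepare/commit/abort are stand-ins for whatever a real participant, a database, a queue, or a filesystem, would actually do):

/* Toy two-phase commit coordinator: ask every participant to prepare;
   only if all vote yes, tell them all to commit, otherwise roll back
   the ones that had already prepared. */
struct participant {
    const char *name;
    int (*prepare)(void);   /* returns 1 to vote yes */
    void (*commit)(void);
    void (*abort)(void);
};

int two_phase_commit(struct participant *p, int n)
{
    int i;

    for (i = 0; i < n; i++)            /* phase 1: prepare */
        if (!p[i].prepare())
            goto rollback;

    for (i = 0; i < n; i++)            /* phase 2: commit */
        p[i].commit();
    return 1;

rollback:
    while (i-- > 0)                    /* undo the ones that prepared */
        p[i].abort();
    return 0;
}

A real implementation also has to persist the coordinator's decision and handle crashes between the phases, which is where most of the actual difficulty (and the literature) lives.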

Humanities Crash Course Week 16: The Art of War

In week 16 of the humanities crash course, I revisited the Tao Te Ching and The Art of War. I just re-read the Tao Te Ching last year, so I only revisited my notes now. I've also read The Art of War a few times, but decided to re-visit it now anyway. Readings Both books are related. The Art of War is older; Sun Tzu wrote it around 500 BCE, at a time when war was becoming more "professionalized" in China. The book aims to convey what had (or hadn't) worked on the battlefield. The starting point is conflict. There's an enemy we're looking to defeat. The best victory is achieved without engagement. That's not always possible, so the book offers pragmatic suggestions on tactical maneuvers and such. It gives good advice for situations involving conflict, which is why it has influenced leaders (including businesspeople) throughout the centuries: It's better to win before any shots are fired (i.e., through cunning and calculation.) Use deception. Don't let conflicts drag on. Understand the context to use it to your advantage. Keep your forces unified and disciplined. Adapt to changing conditions on the ground. Consider economics and logistics. Gather intelligence on the opposition. The goal is winning through foresight rather than brute force — good advice! The Tao Te Ching, written by Lao Tzu around the late 4th century BCE, is the central text in Taoism, a philosophy that aims for skillful action by aligning with the natural order of the universe — i.e., doing through "non-doing" and transcending distinctions (which aren't present in reality but layered onto experiences by humans.) Tao means Way, as in the Way to achieve such alignment. The book is a guide to living the Tao. (Living in Tao?) But as it makes clear from its very first lines, you can't really talk about it: the Tao precedes language. It's a practice — and the practice entails non-striving. Audiovisual Music: Gioia recommended the Beatles (The White Album, Sgt. Pepper's, and Abbey Road) and the Rolling Stones (Let it Bleed, Beggars Banquet, and Exile on Main Street.) I'd heard all three Rolling Stones albums before, but don't know them by heart (like I do with the Beatles.) So I revisited all three. Some songs sounded a bit cringe-y, especially after having heard "real" blues a few weeks ago. Of the three albums, Exile on Main Street sounds most authentic. (Perhaps because of the band members' altered states?) In any case, it sounded most "in the Tao" to me — that is, as though the musicians surrendered to the experience of making this music. It's about as rock 'n roll as it gets. Arts: Gioia recommended looking at Chinese architecture. As usual, my first thought was to look for short documentaries or lectures on YouTube. I was surprised by how little there was. Instead, I read the webpage Gioia suggested. Cinema: Since we headed again to China, I took in another classic Chinese film that had long been on my to-watch list: Wong Kar-wai's IN THE MOOD FOR LOVE. I found it more Confucian than Taoist, although its slow pacing, gentleness, focus on details, and passivity strike something of a Taoist mood. Reflections When reading the Tao Te Ching, I'm often reminded of this passage from the Gospel of Matthew: No man can serve two masters: for either he will hate the one, and love the other; or else he will hold to the one, and despise the other. Ye cannot serve God and mammon. Therefore I say unto you, Take no thought for your life, what ye shall eat, or what ye shall drink; nor yet for your body, what ye shall put on.
Is not the life more than meat, and the body than raiment? Behold the fowls of the air: for they sow not, neither do they reap, nor gather into barns; yet your heavenly Father feedeth them. Are ye not much better than they? Which of you by taking thought can add one cubit unto his stature? And why take ye thought for raiment? Consider the lilies of the field, how they grow; they toil not, neither do they spin: And yet I say unto you, That even Solomon in all his glory was not arrayed like one of these. Wherefore, if God so clothe the grass of the field, which to day is, and to morrow is cast into the oven, shall he not much more clothe you, O ye of little faith? Therefore take no thought, saying, What shall we eat? or, What shall we drink? or, Wherewithal shall we be clothed? (For after all these things do the Gentiles seek:) for your heavenly Father knoweth that ye have need of all these things. But seek ye first the kingdom of God, and his righteousness; and all these things shall be added unto you. Take therefore no thought for the morrow: for the morrow shall take thought for the things of itself. Sufficient unto the day is the evil thereof. The Tao Te Ching is older and from a different culture, but "Consider the lilies of the field, how they grow; they toil not, neither do they spin" has always struck me as very Taoistic: both texts emphasize non-striving and putting your trust in a higher order. Even though it's even older, that spirit is also evident in The Art of War. It's not merely letting things happen, but aligning mindfully with the needs of the time. Sometimes we must fight. Best to do it quickly and efficiently. And best yet if the conflict can be settled before it begins. Notes on Note-taking This week, I started using ChatGPT's new o3 model. Its answers are a bit better than what I got with previous models, but there are downsides. For one thing, o3 tends to format answers in tables rather than lists. This works well if you use ChatGPT in a wide window, but is less useful on a mobile device or (as in my case) in a narrow window to the side. This is how I usually use ChatGPT on my Mac: in a narrow window. o3's responses often include tables that get cut off in this window. For another, replies take much longer as the AI does more "research" in the background. As a result, it feels less conversational than 4o — which changes how I interact with it. I'll play more with o3 for work, but for this use case, I'll revert to 4o. Up Next Gioia recommends Apuleius's The Golden Ass. I've never read this, and frankly feel wary about returning to the period of Roman decline. (Too close to home?) But I'll approach it with an open mind. Again, there's a YouTube playlist for the videos I'm sharing here. I'm also sharing these posts via Substack if you'd like to subscribe and comment. See you next week!

My approach to teaching electronics

Explaining the reasoning behind my series of articles on electronics -- and asking for your thoughts.
