First off, apologies for a quiet month as I've been dealing with family matters which hopefully are now on a better footing (more articles are in the hopper). Unfortunately, the same apparently can't be said for the once-great Living Computers Museum + Labs in Seattle, established by the late Microsoft co-founder Paul Allen and closed in 2020, during the COVID pandemic and after his death; at least some of its collection is going up for auction. The specific pieces have not yet been announced by Christie's, but will ostensibly include his personal DECsystem-10, a 1971 KI10 DEC PDP-10 from the MIT AI Lab, which is the first computer he and Bill Gates ever used. (There's just something about your first. I still have my actual first computer too, and with only around 1,500 systems built, the unit at the LCM was apparently the exact machine they used as well. Here's a picture of it from when it was in residence at the LCM and used to develop a replica.) Obviously, while I think it's a crying shame, the...


More from Old Vintage Computing Research

Let's give PRO/VENIX a barely adequate, pre-C89 TCP/IP stack (featuring Slirp-CK)

Some years ago I bought a copy of TCP/IP Illustrated (what would now be called the first edition, prior to the 2011 update) for a hundred-odd bucks on sale, and it has now sat on my bookshelf, encased in its original shrinkwrap, for at least twenty years. It would be fun to put up the 4.4BSD data structures poster it came with, but that would require opening it. Fortunately, today we have AI — er, many more excellent and comprehensive documents on the subject — and more importantly, we've recently brought back up an oddball platform that doesn't have networking either: our DEC Professional 380 running the System V-based PRO/VENIX V2.0, which you met a couple articles back. The DEC Professionals are a notoriously incompatible member of the PDP-11 family and, short of DECnet (DECNA) support in its unique Professional Operating System, there's officially no other way you can get one on a network — let alone the modern Internet. Are we going to let that stop us? Of course not. We're going to write our own TCP/IP stack, and with Crypto Ancienne serving as a proxy it can even manage TLS 1.3. And, as we'll discuss, if you can get this thing on the network, you can get almost anything on the network! Easily portable and painfully verbose source code is included.

Recall from our lengthy history of DEC's early misadventures with personal computers that, in Digital's ill-advised plan to avoid the DEC Pros cannibalizing low-end sales from their categorical PDP-11 minicomputers, Digital's Small Systems Group deliberately made the DEC Professional series nearly totally incompatible despite the fact they used the same CPUs. In their initial roll-out strategy in 1982, the Pros (as well as their sibling systems, the Rainbow and the DECmate II) were only supposed to be mere desktop office computers — the fact the Pros were PDP-11s internally was mostly treated as an implementation detail. The idea backfired spectacularly against the IBM PC when the Pros and their promised office software failed to arrive on time, and in 1984 DEC retooled around a new concept of explicitly selling the Pros as desktop PDP-11s. This required porting the operating systems that PDP-11 minis typically ran: RSX-11M Plus was already there as the low-level layer of the Professional Operating System (P/OS), and DEC internally ported RT-11 (as PRO/RT-11) and COS. PDP-11s were also famous for running Unix, so DEC needed a Unix for the Pro as well, though eventually only one official option was ever available: PRO/VENIX, a port of VenturCom's Venix based on V7 Unix and later System V Release 2.0.

After the last article, I had the distinct pleasure of being contacted by Paul Kleppner, the company's first paid employee in 1981, who was part of the group at VenturCom that did the Pro port and stayed at the company until 1988. Venix was originally developed from V6 Unix on the PDP-11/23, incorporating real-time extensions to the kernel (such as semaphores and asynchronous I/O) written by Myron Zimmerman, then a postdoc in physics at MIT; Kleppner's father was the professor of the lab Zimmerman worked in. Zimmerman founded VenturCom in 1981 to capitalize on the emerging Unix market, becoming one of the earliest commercial Unix licensees. Venix-11 was subsequently based on the later V7 Unix, as was Venix/86, which was the first Unix on the IBM PC in January 1983 and was ported to the DEC Rainbow as Venix/86R. In addition to its real-time extensions and enhanced segmentation capability, critical for memory management in smaller 16-bit address spaces, it also included a full desktop graphics package.
Notably, DEC themselves were also a Unix licensee through their Unix Engineering Group and already had an enhanced V7 Unix of their own running on the PDP-11, branded initially as V7M. Subsequently the UEG developed a port of 4.2BSD with some System V components for the VAX and planned to release it as Ultrix-32, simultaneously retconning V7M as Ultrix-11 even though it had little in common with the VAX release. Paul recalls that DEC did attempt a port of Ultrix-11 to the Pro 350 themselves but ran into intractable performance problems. By then the clock was ticking on the Pro relaunch, and the issues with Ultrix-11 likely prompted DEC to look for alternatives. Crucially, Zimmerman had managed to upgrade Venix-11's kernel while still keeping it small, a vital aspect on his 11/23, which lacked split instruction and data addressing and would otherwise have had to page a larger kernel in and out. Moreover, the 11/23 used an F-11 CPU — the same CPU as the original Professional 350 and 325. DEC quickly commissioned VenturCom to port their own system over to the Pro, which Paul says was a real win for VenturCom, and the first release came out in July 1984 complete with its real-time features intact and graphics support for the Pro's bitmapped screen. It was upgraded ("PRO/VENIX Rev 2.0") in October 1984, adding support for the new top-of-the-line DEC Professional 380, and then switched to System V (SVR2) in July 1985 with PRO/VENIX V2.0. (For its part, Ultrix-11 was released as such in 1984 as well, but never for the Pro series.) Keep that kernel version history in mind for when we get to oddiments of the C compiler.

As for networking, though, with the exception of UUCP over serial, none of these early versions of Venix on either the PDP-11 or 8086 supported any kind of network connectivity out of the box — officially the only Pro operating system to support its Ethernet upgrade option was P/OS 2.0. Although all Pros have a 15-pin AUI network port, it isn't activated until an Ethernet CTI card is installed. (While Stan P. found mention of a third-party networking product called Fusion by Network Research Corporation which could run on PRO/VENIX, Paul's recollection is that this package ran into technical problems with kernel size during development. No examples of the PRO/VENIX version have so far been located and it may never have actually been released. You'll hear about it if a copy is found. The unofficial Pro 2.9BSD port also supports the network card, but that was always an under-the-table thing.) Since we run Venix on our Pro, currently our only realistic option to get it on the 'Nets is likewise a serial port — in our case the lower-speed printer port, which will carry our serial IP implementation. PRO/VENIX supports using only the RS-423 port as a remote terminal, and because it's twice as fast, it's more convenient for logins and file exchange over Kermit (which also has no TCP/IP overhead). Using the printer port also provides us with a nice challenge: if our stack works acceptably well at 4800bps, it should do even better at higher speeds if we port it elsewhere. On the Pro, we connect to our upstream host using a BCC05 cable (in the middle of this photograph), which terminates in a regular 25-pin RS-232 on the other end.

Now for the software part. There are other small TCP/IP stacks, notably things like Adam Dunkels' lwIP and so on.
But even SVR2 Venix is by present standards an old Unix with a much less extensive libc and a more primitive C compiler — in a short while you'll see just how primitive — and relatively modern code like lwIP's would require a lot of porting. Ideally we'd like a very minimal, indeed barely adequate, stack that can do simple tasks and can be expressed in a fashion acceptable to a now antiquated compiler. Once we've written it, it would be nice if it were also easily portable to other very limited systems, even by directly translating it to assembly language if necessary.

What we want this barebones stack to accomplish will inform its design: we're not going to use the Pro as a server, because we'd have to keep the machine and the hardware running 24-7 to make such a use case meaningful. The Ethernet option was reportedly competent at server tasks, but Ethernet has more bandwidth, and that card also has additional on-board hardware. Let's face the cold reality: as a server, we'd find interacting with it over the serial port unsatisfactory at best, and we'd use up a lot of power and MTBF keeping it on more than we'd like to. Therefore, we really should optimize for the client case, which means we also only need to run the client when we're performing a network task.

Similarly, the Pro is for practical purposes a single-user machine: on a system with no remote login capacity, like, I dunno, a C64, the person on the console gets it all. Therefore, we really should optimize for the single-user case, which means we can simplify our code substantially by merely dealing with sockets sequentially, one at a time, without having to worry about routing packets we get on the serial port to other tasks or multiplexing them. Doing so would require extra work for dual-socket protocols like FTP, but we're already going to use directly-attached Kermit for that, and if we really want file transfer over TCP/IP there are other choices. (On a larger antique system with multiple serial ports, we could consider a setup where each user uses a separate outgoing serial port as their own link, which would also work under this scheme.) Some of you may find this conflicts hard with your notion of what a "stack" should provide, but I argue that the breadth of a full-service driver would be wasted on a limited configuration like this and be unnecessarily more complex to write and test. Worse, in many cases, is better, and I assert this particular case is one of them.

Keeping the above in mind, what are appropriate client tasks for a microcomputer from 1984, now over 40 years old — even a fairly powerful one by the standards of the time — to do over a slow TCP/IP link? Simple jobs like ping, DNS lookups, getting the time over NTP and fetching documents over small protocols like HTTP, Gopher or Finger seem reasonable. (Crypto Ancienne's carl can serve as an HTTP-to-HTTPS proxy to handle the TLS part, if necessary.) We could use protocols like these to download and/or view files from systems that aren't directly connected, or to send and receive status information. One task that is also likely common is an interactive terminal connection (e.g., Telnet, rlogin) to another host. However, as a client this particular deployment is still likely to hit the same sorts of latency problems for the same reasons we would experience connecting to it as a server. The other tasks are not highly sensitive to latency, require only a single "connection" and no multiplexing, and are simple protocols which are easy to implement. Let's call this feature set our minimum viable product.

Because we're writing only for a couple of specific use cases, and to make them even more explicit and easy to translate, we're going to take the unusual approach of having each of these clients handle their own raw packets in a bytewise manner.
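Since each client builds and parses its own raw datagrams, they all need the IP header checksum. As a taste of the intended style — this is a sketch of my own for illustration, not necessarily the exact routine in BASS — here's the RFC 1071 ones'-complement sum done bytewise in K&R C, so it makes no assumptions about endianness or unsigned char:

    /*
     * Bytewise RFC 1071 Internet checksum: sum the data as 16-bit
     * words in ones' complement, accumulated a byte at a time so
     * host endianness never matters. Uses & 0xff masks because
     * plain char may be signed, and multiply/divide instead of
     * long shifts, both for reasons that will become clear below.
     */
    int
    cksum(buf, len)
    char *buf;
    int len;
    {
            long sum;
            int i;

            sum = 0;
            for (i = 0; i + 1 < len; i += 2)
                    sum += (buf[i] & 0xff) * 256L + (buf[i + 1] & 0xff);
            if (i < len)            /* odd trailing byte, zero-padded */
                    sum += (buf[i] & 0xff) * 256L;
            while (sum / 0x10000L)  /* fold carries back into 16 bits */
                    sum = (sum & 0xffffL) + sum / 0x10000L;
            return (int)(~sum & 0xffffL);
    }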
For the actual serial link we're going to go even more barebones and use old-school RFC 1055 SLIP instead of PPP (uncompressed, too, not even Van Jacobson CSLIP). This is trivial to debug and straightforward to write, and if we do so in a relatively encapsulated fashion, we could consider swapping in CSLIP or PPP later on. A couple of utility functions will do the IP checksum algorithm and read and write the serial port, and DNS and some aspects of TCP also get their own utility subroutines, but otherwise all of the programs we will create will read and write their own network datagrams, using the SLIP code to send and receive over the wire.

The C we will write will also be intentionally very constrained, using bytewise operations, assuming nothing about endianness and using as little of the C standard library as possible. For types, you only need some sort of 32-bit long, which need not be native, an int of at least 16 bits, and a char type — which can be signed, and in fact has to be to run on earlier Venices (read on). You can run the entirety of the code with just malloc/free, read/write/open/close, strlen/strcat, sleep, rand/srand and time for the srand seed (and fprintf for printing debugging information, if desired). On a system with little or no operating system support, almost all of these primitive library functions are easy to write or simulate, and we won't even assume we're capable of non-blocking reads, despite the fact Venix can do so. After all, from that which little is demanded, even less is expected.

On the other side of the link, the classic way to connect a SLIP device is a tool like slattach, which effectively makes a serial port directly into a network interface. Such an arrangement would be the most flexible approach from the user's perspective because you necessarily have a fixed, bindable external address, but obviously such a scheme didn't scale over time. With the proliferation of dialup Unix shell accounts in the late 1980s and early 1990s, closed-source tools like 1993's The Internet Adapter ("TIA") could provide the SLIP and later PPP link just by running them from a shell prompt. Because they synthesize artificial local IP addresses, a sort of NAT before the concept explicitly existed, the architecture of such tools prevented directly creating listening sockets — though for some situations this could be considered more of a feature than a bug. Any needed external ports could be proxied by the software anyway, and later network clients tended not to require them, so for most tasks it was more than sufficient. Closed-source and proprietary SLIP/PPP-over-shell solutions like TIA were eventually displaced by open source alternatives, most notably SLiRP. SLiRP (hereafter Slirp so I don't gouge my eyes out) emerged in 1995 and used a similar architecture to TIA, handing out virtual addresses on a synthetic network and bridging that network to the Internet through the host system. It rapidly became the SLIP/PPP shell solution of choice, leading to its outright ban by some shell ISPs who claimed it violated their terms of service. As direct SLIP/PPP dialup became more common than shell accounts — during which time yours truly upgraded to a 56K Mac modem I still have around here somewhere — Slirp eventually became most useful for connecting small devices via their serial ports (PDAs and mobile phones especially, but really anything; subsets of Slirp are still used in emulators today like QEMU for a similar purpose) to a LAN. By a shocking and completely contrived coincidence, that's exactly what we'll be doing!
Slirp has not been officially maintained since 2006. There is no package in Fedora, which is my usual desktop Linux, and the one in Debian reportedly has issues. A stack of patch sets circulated thereafter, but the planned 1.1 release never happened, and other crippling bugs remain, some of which were addressed in further patches that don't seem to have made it into any release, source or otherwise. If you tried to build Slirp from source on a modern system and it just immediately exits, you got bit. I have incorporated those patches, plus a couple of my own for port naming and the configure script and some additional fixes, into an unofficial "Slirp-CK" which is on Github. It builds the same way as prior versions and is tested on Fedora Linux. I'm working on getting it functional on current macOS also.

Next, I wrote up our four basic functional clients: ping, DNS lookup, NTP client (it doesn't set the clock, just shows you the stratum, refid and time which you can use for your own purposes), and TCP client. The TCP client accepts strings up to a defined maximum length, opens the connection, sends those strings (optionally separated by CRLF), and then reads the reply until the connection closes. This all seemed to work great on the Linux box, which you yourself can play with as a toy stack (directions at the end). Unfortunately, I then pushed it over to the Pro with Kermit and the compiler immediately started complaining.

SLIP is a very thin layer on IP packets. There are exactly four metabytes, for which I created preprocessor defines. A SLIP packet ends with SLIP_END, or hex $c0. Where this byte must occur within a packet, it is replaced by a two-byte sequence for unambiguity, SLIP_ESC SLIP_ESC_END, or hex $db $dc, and where the escape byte must occur within a packet, it gets a different two-byte sequence, SLIP_ESC SLIP_ESC_ESC, or hex $db $dd. Although I initially set out to use defines and symbols everywhere instead of naked bytes, and wrote slip.c on that basis, I eventually settled on raw bytes with copious comments so it was clear what was intended to be sent. That probably saved me a lot of work renaming everything, because the compiler promptly rejected the names: dimly I recalled that early C compilers, including System V's, limit their identifiers to eight characters (the so-called "Ritchie limit"). At this point I probably should have simply removed them entirely for consistency with their absence elsewhere, but I went ahead and trimmed them down to more opaque, pithy identifiers.

That wasn't the only problem, though. I originally had two functions in slip.c, slip_start and slip_stop, and the compiler didn't like that either, despite each appearing to have a unique eight-character prefix. That's because their symbols in the object file are actually prepended with various metacharacters like _ and ~, so effectively you only get seven characters in function identifiers, an issue the error message fails to explain clearly.

The next problem: there's no unsigned char, at least not in PRO/VENIX Rev. 2.0, which I want to support because it's more common, and presumably not in the original versions of PRO/VENIX and Venix-11 either. (This type does exist in PRO/VENIX V2.0, but that's because it's System V and has a later C compiler.) In fact, the unsigned keyword didn't exist at all in the earliest C compilers, and even when it did, it couldn't be applied to every basic type.
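To make the framing concrete, here's an illustrative sketch (again mine, not lifted from slip.c) of the output side in the same constrained style — raw bytes, comments instead of defines, and a function name short enough for the old linker; serwr() is a hypothetical stand-in for whatever writes one byte to the serial port:

    /*
     * Send one datagram per RFC 1055. $c0 ends a packet; a literal
     * $c0 in the payload becomes $db $dc, and a literal $db becomes
     * $db $dd. Name kept under seven characters for the old linker.
     */
    slipwr(buf, len)
    char *buf;
    int len;
    {
            int i, c;

            for (i = 0; i < len; i++) {
                    c = buf[i] & 0xff;      /* mask off sign extension */
                    if (c == 0xc0) {        /* END inside payload */
                            serwr(0xdb);    /* ESC */
                            serwr(0xdc);    /* ESC_END */
                    } else if (c == 0xdb) { /* ESC inside payload */
                            serwr(0xdb);
                            serwr(0xdd);    /* ESC_ESC */
                    } else
                            serwr(c);
            }
            serwr(0xc0);                    /* END closes the datagram */
    }

The read side simply mirrors this, undoing the two-byte sequences and treating $c0 as end-of-datagram.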
Although unsigned char was introduced in V7 Unix and is documented as legal in the PRO/VENIX manual, and it does exist in Venix/86 2.1, which is also a V7 Unix derivative, the PDP-11 and 8086 C compilers have different lineages, and Venix's V7 PDP-11 compiler definitely doesn't support it. I suspect this may not have been intended, because unsigned int works (unsigned long would be pointless on this architecture, and indeed correctly generates Misplaced 'long' on both versions of PRO/VENIX). Regardless of why, however, the plain char type on the PDP-11 is signed, and for compatibility reasons here we'll have no choice but to use it. Recall that when C89 was being codified, plain char was left as an ambiguous type, since some platforms (notably PDP-11 and VAX) made it signed by default and others made it unsigned, and C89 was more about codifying existing practice than establishing new ones. That's why a small test case printing a plain char behaves one way on a modern 64-bit platform where plain char is unsigned (e.g., my POWER9 workstation), and differently if we change the type explicitly to signed char on the same POWER9 Linux machine. Accounting for different sizes of int, PRO/VENIX V2.0 (again, which is System V) behaves much like the signed case, but the exact same program on PRO/VENIX Rev. 2.0 behaves a bit differently still. The differences in int size we expect, but there's other weird stuff going on here. The PRO/VENIX manual lists all the various permutations of type conversions and what gets turned into what where, but since the manual is already wrong about unsigned char, I don't think we can trust the documentation for this part either. Our best bet is to move values into int and mask off any propagated sign bits before doing comparisons or math, which is agonizing, but reliable. That means throwing around a lot of seemingly superfluous & 0xff to make sure we don't get negative numbers where we don't want them.

Once I got it built, however, there were lots of bugs. Many were because it turns out the compiler isn't too good with 32-bit long, which is not a native type on the 16-bit PDP-11. One fragment (part of the NTP client, converting the NTP timestamp to a Unix epoch time) worked on my regular Linux desktop but didn't work in Venix. The first problem is that the intermediate shifts are too large and overshoot, even though they should be in range for a long: on the POWER9 a reduced test case works as expected (accounting for the different semantics of %lx), but on Venix the second shift blows out the value. We can get an idea of why from the generated assembly in the adb debugger (here from PRO/VENIX V2.0, since I could cut and paste from the Kermit session). (Parenthetical notes: csav is a small subroutine that pushes volatiles r2 through r4 on the stack and turns r5 into the frame pointer; the corresponding cret unwinds this. The initial branch in this main is used to reserve additional stack space, but is often practically a no-op.) The first shift is at ~main+024. Remember the values are octal, so 010 == 8. r0 is 16 bits wide — no 32-bit registers — so an eight-bit shift is fine. When we get to the second shift, however, it's the same instruction on just one register (030 == 24) and the overflow is never checked. In fact, the compiler never shifts the second part of the long at all. The result is thus zero. The second problem in this example is that the compiler never treats the constant as a long, even though statically there's no way it can fit in a 16-bit int.
To get around those two gotchas on both Venices, I rewrote it with a second variable, so the value is accumulated in a long and the epoch offset is subtracted as a long. An alternative to a second variable is to explicitly mark the epoch constant itself as long, e.g., by casting it, which also works.

Here's another example for your entertainment. At least some sort of pseudo-random number generator is crucial, especially for TCP when selecting the pseudo-source port and initial sequence numbers, as otherwise Slirp seemed to get very confused because we would "reuse" things a lot. Unfortunately, the typical idiom to seed it, srand(time(NULL)), doesn't work: srand() expects a 16-bit int but time(NULL) returns a 32-bit long, and it turns out the compiler only passes the 16 most significant bits of the time — i.e., the ones least likely to change — to srand(). The disassembly proves it (contents trimmed for display here; since this is a static binary, we can see everything we're calling). At the time we call the glue code for time from main, the value under the stack pointer (i.e., r6) is cleared immediately beforehand since we're passing NULL (at ~main+06). We then invoke the system call, which per the Venix manual for time(2) uses two registers for the 32-bit result, namely r0 (high bits) and r1 (low bits). We passed a null pointer, so the values remain in those registers and aren't written anywhere (branch at _time+014). When we return to ~main+014, however, we only put r0 on the stack for srand (remember that r5 is being used as the frame pointer; see the disassembly I provided for csav) and r1 is completely ignored.

Why would this happen? It's because time(2) isn't declared anywhere in /usr/include or /usr/include/sys (the two C include directories), nor for that matter rand(3) or srand(3). This is true of both Rev. 2.0 and V2.0. Since the symbols are statically present in the standard library, linking will still work, but since the compiler doesn't know what it's supposed to be working with, it assumes int and fails to handle both halves of the long. One option is to manually declare everything ourselves. However, from the assembly at _time+016 we do know that if we pass a pointer, the entire long value will get placed there — so we can simply pass a pointer to a long and seed from that. This gets the lower bits, and there is sufficient entropy for our purpose (though obviously not a cryptographically secure PRNG). Interestingly, the Venix manual recommends using the time as the seed, but doesn't include any sample code.

At any rate this was enough to make the pieces work for IP, ICMP and UDP, but TCP would bug out after just a handful of packets. As it happens, Venix has rather small serial buffers by modern standards: tty(7), based on the TIOCQCNT ioctl(2), appears to have just a 256-byte read buffer (sg_ispeed is only char-sized). If we don't make adjustments for this, we'll start losing framing when the buffer gets overrun, as in this extract from a test build with debugging dumps on and a maximum segment size/window of 512 bytes. Here, the bytes marked by dashes are from the remote end and the bytes separated by dots are what the SLIP driver is scanning for framing and/or throwing away; you'll note there is obvious ASCII data in them. If we make the TCP MSS and window on our client side 256 bytes, there is still retransmission, but the connection is more reliable since overrun occurs less often, and this seems to work better than a hard cap on the maximum transmission unit (e.g., "mtu 256") from SLiRP's side.
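Before moving on, here's what the two compiler workarounds above might look like in practice — a sketch of the technique in the same K&R style, not the literal BASS code; 2208988800 is the standard offset between the NTP epoch of 1900 and the Unix epoch of 1970:

    /*
     * Fold the four bytes of an NTP seconds field into a long and
     * rebase it on the Unix epoch. All arithmetic is forced into
     * long via a second variable, multiplying instead of shifting
     * so the compiler can't mishandle a wide shift, and the epoch
     * offset is kept in a long rather than a naked constant. (The
     * accumulation wraps harmlessly on a two's complement machine.)
     */
    long
    ntpsec(buf)
    char *buf;
    {
            long sec, epoch;
            int i;

            sec = 0;
            for (i = 0; i < 4; i++)
                    sec = sec * 256L + (buf[i] & 0xff);
            epoch = 2208988800L;    /* seconds from 1900 to 1970 */
            return sec - epoch;
    }

    /*
     * Seed the PRNG from the low, fast-changing bits of the time.
     * time(2) is undeclared in the headers, but passing a pointer
     * makes it store the whole 32-bit value regardless.
     */
    seedrnd()
    {
            long t;

            time(&t);
            srand((int)t);          /* low 16 bits on the PDP-11 */
    }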
The only consequence of dropping the TCP MSS and window size is that the TCP client is currently hard-coded to just send one packet at the beginning (this aligns with how you'd do finger, HTTP/1.x, gopher, etc.), and that datagram uses the same size, which necessarily limits how much can be sent. If I did the extra work to split this over several datagrams, it obviously wouldn't be a problem anymore, but I'm lazy and worse is better!

The connection can be made somewhat more reliable still by improving the SLIP driver's notion of framing. RFC 1055 only specifies that the SLIP end byte (i.e., $c0) occur at the end of a SLIP datagram, though it also notes that it was proposed very early on that it could also start datagrams — i.e., if two occur back to back, then it just looks like a zero-length or otherwise obviously invalid entity which can be trivially discarded. However, since there's no guarantee or requirement that the remote link will do this, we can't assume it either. We also can't just look for a $45 byte (i.e., IPv4 and a 20-byte header) because that's an ASCII character and appears frequently in text payloads. However, $45 followed by a valid DSCP/ECN byte is much less frequent, and most of the time this byte will be either $00, $08 or $10; we don't currently support ECN (maybe we should) and we wouldn't find other DSCP values meaningful anyway. The SLIP driver uses these sequences to find the start of a datagram and $c0 to end it. While that doesn't solve the overflow issue, it means the SLIP driver will be less likely to go out of framing when the buffer does overrun and thus can better recover when the remote side retransmits.

And, well, that's it. There are still glitches to bang out, but it's good enough to grab Hacker News. To play with the toy stack yourself, build Slirp-CK first: go into the src/ directory, run configure and then run make (parallel make is fine, I use -j24 on my POWER9). Connect your two serial ports together with a null modem, which I assume will be /dev/ttyUSB0 and /dev/ttyUSB1. Start Slirp-CK with a command line like ./slirp -b 4800 "tty /dev/ttyUSB1", adjusting the baud and path to your serial port, and take note of the virtual and nameserver addresses it reports. Unlike the given directions, you can just kill it with Control-C when you're done; the five zeroes are only if you're running your connection over standard output such as direct shell dial-in (this is a retrocomputing blog, so some of you might).

To see the debug version in action, next go to the BASS directory and just do a make. You'll get a billion warnings, but it should still work with current gcc and clang because I specifically request -std=c89. If you use a different path for your serial port (i.e., not /dev/ttyUSB0), edit slip.c before you compile. You don't do anything like ifconfig with these tools; you always provide the tools the client IP address they'll use (or create an alias or script to do so). Try pinging the Slirp host as an initial example, with slirp already running. Because I'm super-lazy, you separate the components of the IPv4 address with spaces, not dots. In Slirp-land, 10.0.2.2 is always the host you are connected to. You can see the ICMP packet being sent, the bytes being scanned by the SLIP driver for framing (the ones with dots), and then the reply (with dashes). These datagram dumps have already been pre-processed for SLIP metabytes. Unfortunately, you may not be able to ping other hosts through Slirp because there's no backroute, but you could try this with a direct SLIP connection, an exercise left for the reader.
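Returning to the framing heuristic for a moment, the scan for the start of a datagram can be sketched like this (an illustration of the idea, not slip.c verbatim; serrd() is a hypothetical blocking one-byte read):

    /*
     * Resynchronize on the start of an IPv4 datagram: hunt for $45
     * (version 4, 20-byte header) followed by a plausible DSCP/ECN
     * byte, rather than trusting $c0 alone. Bytes skipped here are
     * the ones the debug dumps show separated by dots.
     */
    slipsy(buf)
    char *buf;
    {
            int c, d;

            c = serrd() & 0xff;
            for (;;) {
                    d = serrd() & 0xff;
                    if (c == 0x45 &&
                        (d == 0x00 || d == 0x08 || d == 0x10)) {
                            buf[0] = c;     /* keep both header bytes */
                            buf[1] = d;
                            return 2;       /* bytes already consumed */
                    }
                    c = d;                  /* slide the window */
            }
    }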
If Slirp doesn't want to respond and you're sure your serial port works (try testing both ends with Kermit?), you can recompile it with -DDEBUG (change this in the generated Makefile) and pass your intended debug level like -d 1 or -d 3. You'll get a file called slirp_debug with some agonizingly detailed information so you can see if it's actually getting the datagrams and/or liking the datagrams it gets. For nslookup, ntp and minisock, the second address becomes your accessible recursive nameserver (or use -i to provide an IP). The DNS dump is also given in the debug mode, with slashes for the DNS answer section. nslookup and ntp are otherwise self-explanatory. minisock takes a server name (or IP) and port, followed by optional strings. The strings, up to 255 characters total (in this version), are immediately sent with CR-LFs between them, except if you specify -n. If you specify no strings, none are sent. It then waits on that port for data and exits when the socket closes. This is how we did the HTTP/1.0 requests in the screenshots.

On the DEC Pro side, this has been tested on my trusty DEC Professional 380 running PRO/VENIX V2.0. It should compile and run on a 325 or 350, and on at least PRO/VENIX Rev. 2.0, though I don't have any hardware for this, and Xhomer's serial port emulation is not good enough for this purpose (so unfortunately you'll need a real DEC Pro until I or Tarek get around to fixing it). The easiest way to get it over there is Kermit. Assuming you have this already, connect your host and the Pro on the "real" serial port at 9600bps. Make sure both sides are set to binary and just push all the files over (except the Markdown documentation unless you really want it), and then do a make -f Makefile.venix (it may have been renamed to makefile.venix; adjust accordingly). Establishing the link is as simple as connecting your server's serial port to the other end of the BCC05 or equivalent from the Pro and starting Slirp to talk to that port (on my system, it's even the same port, so the same command line suffices).

If you experience issues with the connection, the easiest fix is to just bounce Slirp — because there are no timeouts, there are also no retransmits. I don't know if this is hitting bugs in Slirp or in my code, though it's probably the latter. Nevertheless, I've been able to run stuff most of the day without issue. It's nice to have a simple network option and the personal satisfaction of having written it myself.

There are many acknowledged deficiencies, mostly because I assume little about the system itself and tried to keep everything very simplistic. There are no timeouts and thus no retransmits, and if you break the TCP connection in the middle there will be no proper teardown. Also, because I used Slirp for the other side (as many others will), and because my internal network is full of machines that have no idea what IPv6 is, there is no IPv6 support. I agree there should be, and SLIP doesn't care whether it gets IPv4 or IPv6, but for now that would require patching Slirp, which is a job I just don't feel up to at the moment. I'd also like to support at least CSLIP in the future. In the meantime, if you want to try this on other operating systems, the system-dependent portions are in compat.h and slip.c, with a small amount in ntp.c for handling time values. You will likely want to make changes to where your serial ports are, the speed they run at and how to make that port "raw" in slip.c.
You should also add any extra #includes to compat.h that your system requires. I'd love to hear about it running other places. Slirp-CK remains under the original modified Slirp license and BASS is under the BSD 2-clause license. You can get Slirp-CK and BASS at Github.

COMPUTE!'s Gazette revived for July 2025

COMPUTE!'s Gazette was for many years the leading Commodore-specific magazine. I liked Ahoy! and RUN, and I subscribed to Loadstar too, but Gazette had the most interesting type-ins and the most extensive coverage. They were also the last of COMPUTE!'s machine-specific magazines and one of the longest-lived Commodore publications, period: yours truly had some articles published in COMPUTE (no exclamation point by then) Gazette as a youthful freelancer in the 1990s until General Media eventually made Gazette disk-only and then halted it entirely in 1995. I remember pitching Tom Netzel on a column idea and getting a cryptic E-mail back from him saying that "things were afoot." What was afoot was General Media divesting the entire publication to Ziff-Davis, who was only interested in the mailing list, and I got a wholly inadequate subscription to PC Magazine in exchange which I mostly didn't read and eventually didn't renew.

This week I saw an announcement about a rebooted Gazette — even with a print edition, and restoring the classic ABC/Cap Cities trade dress — slated for release in July. I'm guessing that "president and founder [sic]" Edwin Nagle either bought or licensed the name from Ziff-Davis when forming the new COMPUTE! Media; the announcement also doesn't say if he only has rights to the name, or if he actually has access to the back catalogue, which I think could be more lucrative: since there appears to be print capacity, it seems like there could be some money in low-run back-issue reprints or even reissuing some of their disk products, assuming any residual or royalty arrangements could be dealt with. I should say for the record that I don't have anything to do with the company myself and I don't know Nagle personally.

By and large I naturally think this is a good thing, and I'll probably try to get a copy, though the stated aim of the magazine is more COMPUTE! and less Gazette since it intends to cover the entire retro community. Doing so may be the only way to ensure an adequate amount of content at a monthly cadence, so I get the reasoning, but it necessarily won't be the Gazette you remember. Also, since most retro enthusiasts have some means to push downloaded data to their machines, the type-in features which took up the predominant number of pages in the 1980s will almost certainly be diminished or absent. I suspect you'll see something more like the General Media incarnation, which was a few type-ins slotted between various regular columns, reviews and feature articles. The print rate strikes me as very reasonable at $9.95/mo for a low-volume rag and I hope they can keep that up, though they would need to be finishing the content for layout fairly soon and the only proffered sample articles seem to be on their blog. I'm at most cautiously optimistic right now, but the fact they're starting up at all is nice to see, and I hope it goes somewhere.

MacLynx beta 6: back to the Power Mac

See prior articles for more of the history, but briefly: MacLynx is a throwback port of the venerable Lynx 2.7.1 to the classic Mac OS, last updated in 1997, which I picked up again in 2020. Rather than try to replicate its patches against a more current Lynx which may not even build, I've been improving its interface and Mac integration along with the browser core, incorporating later code and patching the old stuff. However, beta 6 is not a fat binary — the 68K and Power Mac builds are intentionally separate. One reason is so I can use a later CodeWarrior for better code that didn't have to support 68K, but the main one is to consider different code on Power Macs which may be expensive or infeasible on 68K Macs. The primary use case for this — which may occur as soon as the next beta — is adding a built-in vendored copy of Crypto Ancienne for onboard TLS without a proxy. On all but upper-tier 68040s, setting up the TLS connection takes longer than many servers will wait, but even the lowliest Performa 6100 with a barrel-bottom 60MHz 601 can do so reasonably quickly.

The port did not go altogether smoothly. While Olivier Gutknecht's original fat binary worked fine on Power Macs, it took quite a while to get all the pieces reassembled on a later CodeWarrior with a later version of GUSI, the Mac POSIX glue layer which is a critical component (the Power Mac version uses 2.2.3, the 68K version uses 1.8.0). Some functions had changed and others were missing and had to be rewritten with later alternatives. One particularly obnoxious glitch was due to a conflict between the later GUSI's time.h and Apple Universal Interfaces' Time.h (remember, HFS+ is case-insensitive) which could not be solved by changing the search order in the project due to other conflicting headers. The simplest solution was to copy Time.h into the project and name it something else! Even after that, though, basic Mac GUI operations like popping open the URL dialogue would cause it to crash. Can you figure out why? Here's a hint: in a PowerPC application of that era, your application code itself was almost certainly fully native. However, a certain amount of the Toolbox and the Mac OS retained 68K code, even in the days of Classic under Mac OS X, and your PowerPC application would invariably hit one of these routines eventually. The component responsible for switching between ISAs is the Mixed Mode Manager, which is tightly integrated with the 68K emulator and bridges the two architectures' different calling conventions, marshalling their parameters (PowerPC in registers, 68K on the stack) and managing return addresses. I'm serious when I say the normal state is to run 68K code: 68K code is necessarily the first-class citizen in Mac OS, even in PowerPC-only versions, because to run 68K apps seamlessly they must be able to call any 68K routine directly. All the traps that 68K apps use must also look like 68K code to them — and PowerPC apps often use those traps, too, because they're fundamental to the operating system. 68K apps can and do call code fragments in either ISA using the Code Fragment Manager (and PowerPC apps are obliged to), but the system must still be able to run non-CFM apps that are unaware of its existence. To jump to native execution thus requires an additional step. Say a 68K app running in emulation calls a function in the Toolbox which used to be 68K, but is now PowerPC. On a 68K Mac OS, this is just 68K code. In later versions, this is replaced by a routine descriptor with a special trap meaningful only to the 68K emulator.
This descriptor contains the destination calling convention and a pointer to the PowerPC function's transition vector, which has both the starting address of the code fragment and the initial value for the TOC environment register. The Mixed Mode Manager converts the parameters to a PowerOpen ABI call according to the specified convention and moves the return address into the PowerPC link register, and upon conclusion converts the result back and unwinds the stack. The same basic idea works for 68K code calling a PowerPC routine. Unfortunately, we forgot to make a descriptor for this and other routines the Toolbox modal dialogue routine expected to call, so the nanokernel remains in 68K mode trying to execute them and makes a big mess. (It's really hard to debug it when this happens, too; the backtrace is usually totally thrashed.)

I said last time that my idea with MacLynx is to surround the text core with the Mac interface. Lynx keys should still work and it should still act like Lynx, but once you move to a GUI task you should stay in the GUI until that task is completed. In beta 5, I added support for the Standard File package so you get a requester instead of entering a filename, but once you do this you still need to manually select "Save to disk" inside Lynx. That changes in beta 6: the save now happens from the requester directly. (Internally the chosen file is handed back as a path beginning with ::, which in Mac OS is treated as the parent folder.)

Resizing, scrolling and repainting are also improved. The position of the thumb in MacLynx's scrollbar is now implemented using a more complex but more dynamic algorithm which should also respond more properly to resize events. A similar change fixes scroll wheels with USB Overdrive. When MacLynx's default window opens, a scrollbar control is artificially added to it. USB Overdrive implements its scrollwheel support by finding the current window's scrollbar, if any, and emulating clicks on its up and down (or left and right) buttons as the wheel is moved. This works fine in MacLynx, at least initially. When the window is resized, however, USB Overdrive seems to lose track of the scrollbar, which causes its scrollwheel functionality to stop working. The solution was to destroy and rebuild the scrollbar after the window takes its new dimensions, like what happens on start up when the window first opens. This little song and dance may also fix other scrollwheel extensions. Always keep in mind that the scrollbar is actually used as a means to send commands to Lynx to change its window on the document; it isn't scrolling, say, a pre-rendered GWorld. This causes the screen to be redrawn quite frequently, and big window sizes tend to chug. You can also outright crash the browser with large window widths: this is difficult to do on a 68K Mac with on-board video where the maximum screen size isn't that large, but on my 1920x1080 G4 I can do so reliably.

The character-set option in lynx.cfg is now effectively a no-op. However, if you are intentionally using another character set and this will break you, please feel free to plead your use case to me and I will consider it. Another bug fixed was an infinite loop that could trigger during UTF-8 conversion of certain text strings. These sorts of bugs are also a big pain to puzzle out because all you can do from CodeWarrior is force a trap with an NMI, leaving the debugger's view of the program counter likely near but probably not at the scene of the foul. Eventually I single-stepped from a point near the actual bug and was able to see what was happening, and it turned out to be a very stupid bug on my part, and that's all I'm going to say about that.
With cookies' SameSite and HttpOnly attributes handled (irrelevant on Lynx but supported for completeness), the next problem was that any cookie with an expiration value — which nowadays is nearly any login cookie — wouldn't stick. The problem turned out to be the difference in how the classic Mac OS handles time values. In 32-bit Un*xy things, including Mac OS X, time_t is a signed 32-bit integer with an epoch starting on Thursday, January 1, 1970. In the classic Mac OS, time_t is an unsigned 32-bit integer with an epoch starting on Friday, January 1, 1904. (This is also true for timestamps in HFS+ filesystems, even in Mac OS X and modern macOS, but not APFS.) Lynx has a utility function that can convert an ASCII date string into a seconds-past-the-epoch count, but in case you haven't guessed, this function defaults to the Unix epoch; in fact, the version previously in MacLynx only supports the Unix epoch. That means when converted into seconds after the epoch, the cookie expiration value would always appear to be in the past compared to the Mac OS time value which, being based on a much earlier epoch, will always be much larger — and thus MacLynx would conclude the cookie was actually expired and politely clear it. I reimplemented this function based on the Mac OS epoch, and now login cookies actually let you log in! Unfortunately other cookies like trackers can be set too, and this is why we can't have nice things. Sorry. At least they don't persist between runs of the browser. Even then, though, there's still some additional time fudging because time(NULL) on my Quadra 800 running 8.1 and time(NULL) on my G4 MDD running 9.2.2, despite their clocks being synchronized to the same NTP source down to the second, yielded substantially different values. Both of these calls should go to the operating system and use the standard Mac epoch, and not through GUSI, so GUSI can't be why. For the time being I use a second fudge factor if we get an outlandish result before giving up. I'm still trying to figure out why this is necessary.

Images can now be handed off properly to an external helper (e.g., Picture Viewer). This didn't work for PNG images before because MacLynx was using the wrong internal MIME type, which is now fixed. (Ignore the MIME types in the debug window because that's actually a problem I noticed with my Internet Config settings, not MacLynx. Fortunately Picture Viewer will content-sniff, so it figures it out.) Finally, there is also miscellaneous better status code and redirect handling (again not a problem with mainline Lynx, just our older fork here), which makes login and browsing sites more streamlined, and you can finally press Shift-Tab to cycle backwards through forms and links.

If you want to build MacLynx from source, building beta 6 is largely the same on 68K with the same compiler and prerequisites, except that builds are now segregated to their own folders and you will need to put a copy of lynx.cfg in with them (the StuffIt source archive does not have aliases predone for you). For the PowerPC version, you'll need the same setup but substituting CodeWarrior Pro 7.1, and, like CWGUSI, GUSI 2.2.3 should be in the same folder or volume that contains the MacLynx source tree. There are debug and optimized builds for each architecture. Pre-built binaries and source are available from the main MacLynx page. MacLynx, like Lynx, is released under the GNU General Public License v2.
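As a footnote to the epoch mismatch above, the arithmetic itself is simple: the gap between the two epochs is 66 years including 17 leap days, i.e. 24,107 days or 2,082,844,800 seconds. A conversion along these lines (a sketch of the idea, not MacLynx's actual routine) is all that's needed once you know which epoch a value uses:

    /* 1970-01-01 minus 1904-01-01: 24,107 days of 86,400 seconds. */
    #define EPOCH_DELTA 2082844800UL

    /* Unix time (signed, 1970 epoch) to classic Mac OS time
       (unsigned, 1904 epoch). */
    unsigned long unix2mac(long t)
    {
            return (unsigned long)t + EPOCH_DELTA;
    }

    /* And back; pre-1970 Mac times come out negative. */
    long mac2unix(unsigned long t)
    {
            return (long)(t - EPOCH_DELTA);
    }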

The April Fools joke that might have got me fired

Everyone should pull one great practical joke in their lifetimes. This one was mine, and I think it's past the statute of limitations. The story is true. Only the names are redacted to protect the guilty.

My first job out of college was as a database programmer, even though my undergraduate degree had nothing to do with computers and my current profession still mostly doesn't. The reason was that the University I worked for couldn't afford competitive wages, but they did offer various fringe benefits, and they were willing to train someone who at least had decent working knowledge. I, as a newly minted graduate of the august University of California system, had decent working knowledge at least of BSD/386 and SunOS, but more importantly also had the glowing recommendation of my predecessor who was being promoted into a new position. I was hired, which was their first mistake.

The system I was hired to work on was an HP 9000 K250, one of Hewlett-Packard's big PA-RISC servers. I wish I had a photograph of it, but all I have are a couple bad scans of some bad Polaroids of my office and none of the server room. The server room was downstairs from my office back in the days when server rooms were on-premises, complete with a swipe card lock and a halon system that would give you a few seconds of grace before it flooded everything. The K250 hulked in there where it had recently replaced what I think was an Encore mini of some sort (probably a Multimax, since it was a few years old and the 88K Encores would have been too new for the University), along with the AIX RS/6000s that provided student and faculty shell accounts and E-mail, the bonded T1 lines, some of the terminal servers, the massive Cabletron routers and a lot of the telco stuff. One of the tape reels from the Encore hangs on my wall today as a memento.

The K250 and the Encore it replaced (as well as the L-Class that later replaced the K250 when I was a consultant) ran an all-singing, all-dancing student information system called CARS. CARS is still around, renamed Jenzabar, though I suspect that many of its underpinnings remain if you look under the table. In those days CARS was a massive overlay that was loaded atop the operating system and database, which when I started were, respectively, HP/UX 10.20 and Informix. (I'm old.) It used Informix tables, screens and stored procedures plus its own text UI libraries to run code written variously as Perform screens, SQL, C-shell scripts and plain old C or ESQL/C. Everything was tracked in RCS using overgrown Makefiles. I had the admin side (resource management, financials, attendance trackers, etc.) and my office partner had the academic side (mostly grades and faculty tracking). My job was to write and maintain this code and shortly after to help the University create custom applications in CARS' brand-spanking new web module, which chose the new hotness in scripting languages, i.e., Perl. Fortuitously I had learned Perl in, appropriately enough, a computational linguistics course.

CARS also managed most of the printers on campus except for the few that the RS/6000s controlled directly. Most of the campus admin printers were HP LaserJet 4 units of some derivation equipped with JetDirect cards for networking. These are great warhorse printers, some of the best laser printers HP ever made. I suspect there were line printers other places, but those printers were largely what existed in the University's offices.
It turns out that the READY message these printers show on their VFD panels is changeable. I don't remember where I read this, probably idly paging through the manual over a lunch break, but initially the only fun thing I could think of to do was to have the printer say hi to my boss when she sent jobs to it, stuff like that (whereupon she would tell me to get back to work). Then it dawned on me: because I had access to the printer spools on the K250, and the spool directories were conveniently named the same as their hostnames, I knew where each and every networked LaserJet on campus was. I was young, rash and motivated. This was a hack I just couldn't resist. It would be even better than what had been my favourite joke at my alma mater, where campus services, notable for posting various service suspension notices, posted one April Fools' Day that gravity itself would be suspended to various buildings. I felt sure this hack would eclipse that too.

The plan on April Fools' Day was to get into work at OMG early o'clock and iterate over every entry in the spool, sending each printer a sequence that would change the READY message to INSERT 5 CENTS. This would cause every networked LaserJet on campus to appear to ask for a nickel before you printed anything. The script was very simple (I saved the actual script); the ^[ in it was a literal ASCII 27 ESCape character, and netto was a simple netcat-like script I had written in these days before netcat was widely used. That's it. Now, let me be clear: the printer was still ready! The effect was merely cosmetic! It would still print if you sent jobs to it! Nevertheless, to complete the effect, a message was sent out on the campus-wide administration mailing list (which I also saved). At the end of the day I would reset everything back to READY, smile smugly, and continue with my menial existence. That was the plan.

Having sent this out, I fielded a few anxious calls, whose callers laughed uproariously when they realized, and I reset their printers manually afterwards. The people who knew me, knew I was a practical joker, took note of the date, and sent approving replies. One of the best was sent to me later in the day by intercampus mail, printed on their laser printer, with a nickel taped to it. Unfortunately, not everybody on campus knew me, and those who did not not only did not call me, but instead called university administration directly. By 8:30am it was chaos in the main office and this filtered up to the head of HR, who most definitely did know me, and told me I'd better send a retraction before the CFO got in or I was in big trouble. That went wrong also, because my retraction said that campus administration was not considering charging per-page fees when in fact they actually were, so I had to retract it and send a new retraction that didn't call attention to that fact. I also ran the script to reset everything early. Eventually the hubbub finally settled down around noon. Everybody in the office thought it was very funny. Even my boss, who officially disapproved, thought it was somewhat funny. The other thing that went wrong, as if all that weren't enough, was that the director of IT — which is to say, my boss's boss — was away on vacation when all this took place. (Read E-mail remotely? Who does that?)
I compounded this situation with the tactical error of going skiing over the coming weekend and part of the next week, most of which I spent snowplowing down the bunny slopes face first, so that he discovered all the angry E-mail in his box without me around to explain myself. (My office partner remembers him coming in wide-eyed, asking, "what did he do??") When I returned, it was icier in the office than it had been on the mountain. The assistant director, who thought it was funny, was in trouble for not putting a lid on it, and I was in really big trouble for doing it in the first place. I was appropriately contrite and made various apologies and was an uncharacteristically model employee for an unnaturally long period of time. The Ice Age eventually thawed and the incident was officially dropped, except for a "poor judgment" on my next performance review and the satisfaction of what was then considered the best practical joke ever pulled on campus. Indeed, everyone agreed it was much more technically accomplished than the previous award winner, where someone had supposedly spread it around the grounds that the security guards at the entrance would be charging a nominal admission fee per head. Years later they still said it was legendary. I like to think they still do.
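For the technically curious, here's a reconstruction of the gag. This is not the original script (that one looped over the spool directories and called netto from the shell); it's a hypothetical modern C equivalent of the same idea, and it assumes the printers accept standard PJL over the raw JetDirect port, 9100, where @PJL RDYMSG DISPLAY sets the front-panel message:

    /*
     * Hypothetical reconstruction, not the original script: connect
     * to a printer's raw JetDirect port and set the READY message
     * via PJL. \033 is the literal ESCape; the ESC%-12345X sequence
     * is PJL's universal exit language wrapper.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
            const char *pjl =
                "\033%-12345X@PJL RDYMSG DISPLAY = \"INSERT 5 CENTS\"\r\n"
                "\033%-12345X";
            struct addrinfo hints, *res;
            int s;

            if (argc != 2) {
                    fprintf(stderr, "usage: %s printer-host\n", argv[0]);
                    return 1;
            }
            memset(&hints, 0, sizeof(hints));
            hints.ai_socktype = SOCK_STREAM;    /* TCP */
            if (getaddrinfo(argv[1], "9100", &hints, &res) != 0)
                    return 1;
            s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0)
                    return 1;
            write(s, pjl, strlen(pjl));         /* fire and forget */
            close(s);
            freeaddrinfo(res);
            return 0;
    }

Run once per hostname gleaned from the spool directories, that's the whole hack; a second pass with an empty RDYMSG string (which restores the default READY) undoes it.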

More pro for the DEC Professional 380 (featuring PRO/VENIX)

In computing, the DEC PDP-11 is something of a geologic feature. Plus, as most systems in the family were minicomputers, they had the whole monolith thing going for them too (minus murderous apes and sucking astronauts into hyperspace). Its fame is even more notable given that Digital Equipment Corporation was among the last major computer companies to introduce a 16-bit mini architecture, beaten by the IBM 1130 (1965), HP 2116A (1966), TI-960 (1969) and Data General Nova (1969) — itself a renegade offshoot of the "PDP-X" project which DEC president Ken Olsen didn't support and even cancelled in 1968 — leaving DEC to bring up the rear with the PDP-11/20 in 1970. So it shouldn't be a surprise that DEC, admittedly like many fellow mini makers, was similarly retrograde when it officially entered the personal computer market in 1982.

At least on paper the DEC Rainbow was reasonable enough: CP/M was still a thing and MS-DOS was just newly a thing, so Digital put an 8088 and a Z80 inside so it could run both. On the other hand, the DECmate II, ostensibly part of the venerable PDP-8 family, was mostly treated as a word processor and office machine; its operating system was somewhat crippled and various bugs hampered compatibility with earlier software. You could put a Z80 or an 8086 in it and run CP/M and MS-DOS (more or less), but it wasn't a PC, and as a micro-PDP its practical utility didn't fully match the promise: it ran little that PC buyers wanted to run and little that existing PDP users did. Still, despite questionable technical choices, these machines (the Pros in particular) are some of the most well-built computers of the era. Indeed, they must have sold in some quantity to justify the Pro getting another shot as a high end system. Here's the apex of the line, the 1984 DEC Professional 380.

DEC was hardly alone in trying to shrink a minicomputer architecture down to a micro: Texas Instruments' 9900-series CPUs, descended from its TI 990 minis, powered the TI 99/4 and 99/4A (and the Tomy Tutor). It, too, was never sold as a successor to the 990; later 990 hardware even used 9900-series CPUs directly. However, TI got greedy and shortsightedly repulsed third-party development, while the 9900 architecture had what turned out to be a fatal dependence on RAM speed and became a technological dead end. More problems occurred after the IBM PC scrambled the landscape and mini vendors tried touting their smaller "microminis" as upmarket alternatives, though these systems were deliberately less powerful than and sometimes mostly or totally incompatible with their big systems to avoid cannibalizing high-end sales. Their prices were likewise uncompetitive, so newer cost-sensitive customers continued buying cheaper PC-compatibles while legacy customers were unhappy their existing software might not work. Attempts to entice that low end by adding more typical microcomputer CPUs as compatibility options, usually the 8086 or 8088, simply made them into poor PCs that cost even more. Data General was one notorious instance, as they repeatedly failed to parlay their successful Nova into smaller offerings, first with the poorly-received 1977 microNOVA, and later with the microECLIPSE in the bizarre 1983 Desktop Generation modular machines. While Data General claimed it could run everything the bigger MV hardware could, such software had to be converted by vendors "with standard software [in] a few hours" (PC Magazine 11/83), and its PC compatibility side was unable to run major applications like Lotus 1-2-3 without patches. Given how expensive the DG was, most developers didn't bother and most potential customers didn't either.
For its part, although early rumours talked about a small System/370, IBM never turned any of their mainframes or minis into commodity microcomputers except for various specialized add-on boards, and the 5150 PC itself was all off-the-shelf. Surprisingly, DEC was already in this segment, sort of, albeit half-heartedly and in small quantities. As far back as 1974, an internal skunkworks unit had presented management with two small systems prototypes described as a PDP-8 in a VT50 terminal and a portable PDP-11 chassis. Engineers were intrigued but sales staff felt these smaller versions would cut into their traditional product lines, and Olsen duly cancelled the project, famously observing no one would want a computer in their home. Team member David Ahl was particularly incensed and quit DEC in frustration, going on to found Creative Computing. A charitable interpretation says Olsen may have been referring to the size and state of computers at the time, and most people probably wouldn't have wanted one of those in their house, but it wasn't very future-thinking to imply they'd always stay that way. Olsen reiterated to the World Future Society in 1977 that "[t]here is no reason for any individual to have a computer in their home," later arguing in various retrospectives that he meant no one would want a computer at home controlling everything. True to his words, both Ken Olsen and Gordon Bell reportedly had terminals in their residences but no standalone systems.

Word then began circulating in Creative Computing that Olsen's attitude was changing, possibly goaded by the IBM PC's wildly successful launch in August, or maybe, as Ahl put it, because "his [Olsen's] daughter begged for a computer at home"; Ahl added that DEC's new low-end computer would be "based on the venerable PDP-8." (This became, of course, the DECmate II.) Olsen subsequently told investors in November to expect a "DEC personal computer" (his words), adding "we are not planning to go after the home computer market" but that it would be "equivalent" to the IBM PC. In early 1982 he intimated further that there would indeed be an updated DECmate soon, plus two new lower-cost "16-bit" systems. This project, administered by DEC's new Small Systems Group (SSG), was internally referred to as the "XT" (Computer Terminal), not to be confused with the IBM PC/XT, which IBM didn't release until March 1983.

The XT project emerged in 1982 as four microcomputers: the DECmate II and the two Professionals, as predicted, but also the previously unannounced DEC Rainbow 100 aimed directly at the IBM PC 5150. In his remarks at the press conference, Ken Olsen called DEC's personal computer initiative "the largest investment in people and manpower" the company had made in its 25 years of existence. All superficially related, the four computers' industrial design was strongly based on the DEC XT prototype. Each system used similar or identical cases, the same monitors, the same floppy drives, the same 103-key keyboard and mostly the same cables, though their guts of course were in some cases very different. To industry observers' surprise, DEC uncharacteristically seemed to price the DECmate II and Rainbow to sell. The now 8MHz DECmate II had better hardware yet went for much less than the original DECmate ($3745 [$12,600] compared to 1982's later price of $5845 [$19,700]), even considering it needed its own keyboard and monitor. However, what really startled people was the Rainbow. Introduced at $3495 [$11,700], DEC had already slashed it to $2675 [$9000] by the fall.
At a time when a reasonably configured IBM PC system might itself set you back around $3500 in late 1982, with CPU, monochrome display, base 64K RAM and 180K floppy drive(s), a comparable Rainbow setup with 64K, baseline 400K RX50 5.25" dual drives and serial port, hybrid CP/M-86/80, LK201 keyboard and VR201 monochrome display came in at just over $3700 at the later pricing, with MS-DOS support on the way. Although the barebones 16K IBM PC started at $1565 [$5260], DEC wisely predicted most people would opt for the larger RAM. Even the higher-end Professionals seemed to be aggressively priced, though admittedly mostly when compared to their mini ancestors, starting with the lowest-spec 325 at $3995 [$13,400]. Their 256K base of RAM came from the original 128K plus the 128K upgrade (both as daughtercards) which was now standard. The Professional 325, sold with four CTI slots and no hard disk option, was proudly cited in DEC's Professional Handbook as "the lowest cost PDP-11 system ever produced." However, the two upper tiers got expensive quickly: the midrange configuration, a 350 without a hard disk, was otherwise the same except for its six CTI slots and sold for $4995 [$16,800], and adding the 5MB hard disk and controller card pushed the total bill to $8495 [$28,500].

The bloom started coming off the rose when manufacturing problems delayed sales; the 325 and 350 didn't hit the market until almost December, and the Rainbow was stalled for months. Even when people could buy them, DEC's unusual design choices started coming home to roost. None of the systems could format their own floppy disks with their shipping operating systems, leading to user revolt when they were expected to buy preformatted media from DEC directly; officially only later versions of CP/M and MS-DOS for the Rainbow could do this, and some Pro users bought a Rainbow specifically for the purpose. While the Rainbow could read and write specially formatted MS-DOS floppies, its native format was the incompatible (but higher-capacity) RX50's, and unlike the 5150 PC its text mode acted like a VT220 terminal with initially no graphics support of any kind — or ISA slots to add some. (Graphics options later became available.) More ominously for the future, the first version of the Rainbow 100 couldn't boot from a hard drive. As for the DECmate II, it was less expensive and more expandable but, to buyers' displeasure, somewhat less functional than its predecessor, affected by irregularities in the new OS/278 and compatibility problems with older programs. It also offered no built-in options for acting as a standalone terminal (only software), which was a common use for the DECmate I in office environments. Meanwhile, power users found the 325 and 350 slow and ponderous compared to larger PDPs and objected to the Pro family's inability to run traditional PDP-11 environments. Reviewers appreciated the solid build quality and modular design, but complained the machines were heavy and the fans and hard drive were unusually loud. While Pro high-resolution graphics were considered by all to be extremely good, even making an appearance at SIGGRAPH, they were slow to update and scroll, and made using the menu-driven P/OS (based on a modified port of the real-time RSX-11M Plus) feel even more sluggish.
Consistent with the XT's office aims DEC had promised early adopters a port of VisiCalc and a word processing package, but by summer 1983 little application software of any sort was available, which caused retailers and buyers to start cancelling orders. Dan Bricklin, who himself led the P/OS ports of VisiCalc and TK!Solver, blamed P/OS's large memory footprint in particular and cited poor developer relations with DEC generally. In the fall DEC leadership concluded their personal computer strategy was failing, and SSG VP Andrew Knowles resigned in September 1983, the sixth vice-president to leave Digital in two years. Analysts cited high prices, ineffective marketing, weak distribution channels and missing software applications, as well as the Pro's substantial production delays which allowed the IBM PC to flourish at its expense. For its part, the Rainbow's unique architecture became an increasing liability as current software now assumed a true PC compatible and directly accessed the hardware, leaving it able to run only simple or well-behaved DOS applications. In April 1984 DEC introduced the Rainbow 100B, adding hard disk support, improved PC hardware compatibility and twice the base memory at the same price, along with a RAM expansion for both the original Rainbow 100 (now the 100A) and the new 100B, and additional software telecommunications options. On the very low end DEC also shrunk the DECmate II into the less-expensive DECmate III, reducing its options, size and clock speed.

Likely as the quickest means to market, DEC contracted VenturCom to port Venix-11 to the Pro 350 as its official Unix option, adding specific support for the Pro video hardware in its graphics library and including Venix's more unusual features like real-time programming, shared data segments, semaphores and code mapping (to dynamically page executable code in and out of main memory on small systems). We'll talk about why these features were important to DEC when we get to using Venix proper. The result was PRO/VENIX, announced in June 1984, which even included a future-facing UNIX System V license from AT&T. (Later on DEC also commissioned Pro ports of XENIX and Whitesmiths Idris, though it's unclear if these were actually completed or sold.)

How I wound up with a 380 is a little more prosaic. In 2013 I got contacted — I don't recall now exactly how — about a storage unit owned by a recently deceased individual "with a lot of old computers" that had to be cleaned out, and would I like to see what was there before the scrappers came? (I'm happy to do this and save or re-home any systems I can, but here's a protip: if you value your collection, don't ever let it get this far.) Whatever I could haul away was mine and what we couldn't haul away went to recycling that afternoon. So I rented a van and talked my friend Jon into coming along as extra muscle and we drove out to Pasadena to have a look. (Watch ALGORITHM: The Hacker Movie, written and directed by Jon, streaming free on YouTube! And don't miss his new short-form series, The Difference Engines; the first episode drops March 17.) The biggest single find was a PDP-11/44 in a BA11-A enclosure with a 1350W power supply. We selected this one because it looked like it might work and we could actually get it on the dolly and into the van.
Besides the clear monster hard disk and the 11/44, the other things we hauled away were an RL02 and whatever hard disk packs we could fit, approximately one metric crap-ton of paper tape, some documentation (including, it turned out, a mostly complete set of Venix manuals for the 350), a couple AUI Ethernet hubs, various random cables, two VAXstation 100s (I told you I didn't know much about DEC hardware at the time), a somewhat thrashed VT100 terminal I thought I could restore, and, relevant to this post, a DECmate II, a VR201 green screen monitor, a VR241 colour monitor, and — ta-daa! — two DEC Pro 380s. Everything was dirty and gritty but other than the whacked VT they were intact, and neither of the monitors had evidence of the "mold spots" or "cataracts" that sometimes afflict these models. That was all we could rescue and there was no time for a second trip or to call anyone else; the scrappers were already pulling up as we tried to get the van door closed. Over time, I am relieved to say that to the best of my knowledge none of these items ended up in the skip, or at least not on my account.

Unlike the other VAXstations (like the VAXstation 3100 M76 I have, currently my only VAX or VMS system), the VAXstation 100 is in fact a graphical terminal that requires a UNIBUS tether, and is of some specific historical interest (which I was unaware of at the time) because it was part of the transition from the W Window System to X. It has its own 68000 CPU, but it is not a standalone machine, and they turned out to be useless to me because I had no idea how to hook them up to the 11/44. Fortunately I found a home for them. In the end we didn't have the space and I was concerned we didn't have the power to actually fire up the 11/44 either, nor did we ever determine what that monster hard disk went with, so the PDP, the monster, the RL02, the disk packs, the beat-up VT100 and the rat's nest of paper tape went to someone else. I kept the monitors, cables, documentation and the remaining computers and tracked down an LK201 keyboard, and if I get around to finding or making boot disks for it, the DECmate II may be the subject of a future article.

A warning before plugging anything in: never connect the keyboard or monitor cables with the system turned on, or an inadvertent short could fry any attached keyboard. For the monochrome VR201 display, plug a BCC02 cable into the Pro, plug the RJ-type cable from the LK201 into the VR201, and plug the other end of the BCC02 cable into the VR201. A straight-through female-to-female DA-15 cable (the same connector type used for early PC joysticks and old Mac monitors) substitutes easily and also works for the DECmate II. In this configuration the monitor and keyboard are powered by the computer. The monochrome video signal can also be displayed on just about any composite monitor. For the larger VR241 colour display, plug a BCC03 cable into the Pro, plug the RJ-type cable from the LK201 into the box at the other end of the cable, and attach the BNC R, G and B cables to the appropriate connectors on the monitor. The VR241 additionally requires its own source of wall power, and the BCC03 should not be used with the Rainbow (it uses the BCC17). We'll be using the VR241 here; I've since allocated the VR201 to the DECmate II.

On the far right in this picture are the status LEDs. The rightmost green LED indicates good power (got a DCOK signal from the power supply) and should stay lit. If this light doesn't come on or the system won't power up generally, check the internal circuit breaker first.
The other four red LEDs (numbered 4-1, left to right) start lit and then go out in various order as internal system tests are passed. If any remain lit, there was a fault. In broad strokes, LED 4 indicates which type of error and the other three LEDs are an error code in binary, with LED 1 being the least significant bit. If LED 4 is out, then the system is indicating a problem with a particular slot (e.g., LED 3 and LED 2 on means physical slot 6 is bad). If LED 4 is on, then codes 1 through 7 (0 is unpossible) mean, in order: the keyboard failed or was not detected; no boot device could be found; no monitor cable was connected; both logic board memory banks are bad; the low bank of memory is bad; the high bank of memory is bad; or the system module has failed completely (i.e., all LEDs on and will not extinguish). An on-screen display may give more information and/or additional error codes if enough of the system is working. We'll see an example of that a little later.

Another quirk: RX50 floppies go into their drive what looks like upside down. This confused enough people using the Rainbow in particular that DEC eventually put a guide direction arrow inside later units, though this earlier example doesn't have it.

DEC's first PDP-11s-on-a-chip, the LSI-11 boards built on Western Digital's MCP-1600 chipset, were reputedly "the slowest PDP-11 ever produced," though they were nevertheless cheaper and the CPU modules in particular became very popular. The success of the LSI-11, and its attendant disadvantages, spurred DEC to develop its own designs. These commenced with the F-11 "Fonz" chipset, used in KDF11 CPUs starting with the 1979 11/23, a true 16-bit microarchitecture. In the DEC Pro 325 and 350, it was called the KDF11-CA CPU and consisted of five chips fabricated on a 6-micron process consolidated into three hybrid 40-pin DIPs that implement the standard PDP-11 instruction set, the EIS (extended instruction set), and FP-11 floating point instructions. The data chip (containing all non-float registers and scratchpads, the ALU and conditional branching logic) and the control chip (containing the microcode) are on one DIP together equivalent to the KEV11-B, the MMU is its own DIP equivalent to the KTF11-A, and the floating point adapter is spread over two chips on the third equivalent to the KEF11-A. Interestingly, the actual floating point registers are stored in the MMU. The F-11 as implemented in the 325 and 350 can address up to 4MB (22-bit) with 1MB reserved for option cards and I/O, though it lacks split I+D mode (again, we'll talk about this when we discuss Venix's internals) and has no cache. CPU timings are derived from a 26.666MHz crystal and divided by two to yield the 13.333MHz master CPU clock, which is internally divided again by four to yield its nominal clock rate of 3.333MHz.

The J-11 "Jaws" chip shown here in the Pro 380 is more compact than the F-11 and higher performance, but the processor ended up less powerful than it was intended to be: Supnik describes the design as "idiosyncratic," stating that its complexity and size overwhelmed development partner Harris, and the chips had so many problems with yield and bugs that "Jaws" never reached its internal clock speed goal of 5MHz. The basic package consisted of two 4-micron chips on one carrier, a data chip with the ALU, external interfaces, MMU and registers, and a control chip with the microcode. Pads on the underside of the hybrid were to accommodate two more chips, likely for additional instruction set options, but this idea was never implemented. Internally the J-11 also appears to divide its clock by four.
The J-11 was introduced with the PDP-11/73 in 1984, using the 15MHz (i.e., 3.75MHz) KDJ11-B CPU card with an 8K write-through cache and a separate floating point accelerator, and likewise implements the base instruction set, the EIS and FP-11. DEC's fastest J-11s eventually topped out at 4.5MHz (on an 18MHz clock), though the Mentec M100 reportedly pushed it to 4.9152MHz (on a 19.66MHz clock). Unfortunately the 380's J-11 (the KDJ11-C) is somewhat gimped by comparison, having been downclocked to 10.08MHz (divided down from the 20.16MHz system master clock crystal near the DC 363) for an effective internal clock speed of 2.52MHz, and lacking any options to add cache or floating point acceleration. However, it does implement split I+D, and Venix even makes use of the feature. Because the J-11 also requires fewer cycles per instruction and the 380's RAM is faster, the 380 is anywhere from two to three times quicker in practice than the 325/350 despite being clocked slower, and the CPU collectively uses less than a fifth of the power.

A third and even smaller DEC-designed PDP-11 CPU was also available, the 7.5MHz/10MHz T-11 "Tiny" chip. The T-11 was introduced in 1981 and is a single-chip CPU with no MMU or floating point support fabricated on a 5-micron process with 12,000 transistors. DEC intended this processor for embedded systems and used it in some disk controllers and the VT240 terminal, but its most famous use was as the main CPU of the Atari System 2 arcade board (some of my favourite arcade games like Paperboy, 720 and the best one of all time, APB, which I played for the first time as a kid in the Disneyland arcade — who doesn't like donuts, demerits and police brutality?). It was indisputably the most successful of the PDPs-on-some-chips and was produced in the hundreds of thousands. These particular CPUs and the PDP-11 architecture generally were all mercilessly ripped off by the Soviets, by the way, and one very small CPU in this family we may visit at another time. For that matter, the Red Menace even made clone Pros!

To preserve our 380's hard disk, we'll use David Gesswein's MFM hard disk reader and emulator, which can read an existing drive to an image and then immediately substitute for it. It's not a dirt-cheap device but you get what you pay for. It has separate connectors for reading and for emulation, and looks to the Pro almost exactly like the hard disk that used to be there. The brains are a BeagleBone Green (the "BBG") running custom software on Debian Linux, for which the rest of the hardware is implemented as a cape. Dave's software is all open-source and included in the operating system image. Although you can power the BeagleBone itself over USB, the cape requires +12V that we'll pull from the connector for the RX50, since we need to keep the other connector to power the RD52. The large bank of capacitors acts as reserve power so that the BeagleBone is able to cleanly shut down when it notices the main system power is off. We'll take advantage of that for another purpose as well.

We log into the BBG as root, no password. Here we'll test out the capacitors before we continue. The powerfail system is what monitors the power supply and forces the BBG to shut down if the voltage is below a critical level. Here the voltage is as we expect, so we'll next do a test powered restart: the card duly goes down and immediately comes back up, so we can consider the power subsystem to be good. After logging back in we'll power down the board completely to hook it up. (You may need to forcibly remove and replace the Molex power connector after shutdown to get it to restart.)
Logged back in as root, we start by imaging the original drive. --analyze is intended for figuring out operating parameters on a disk you don't know, but we already know at least the geometry for the RD52, and it appears it didn't guess quite right, which is why I halted it after seeing copy errors. Interestingly it detected the setup as an Elektronika_85, which is in fact one of the godless pinko Commie Soviet PDP-11 clones (please send hate mail to /dev/null). --analyze did correctly get the number of heads and cylinders (8 and 512 respectively), and did properly find the hard disk on drive select 1 as configured from the DEC assembly line. We can try the copy again with a simpler set of options, and this time it succeeds, yielding an 85MB image file. This file is as large as it is because it captures all the transitions for a very accurate copy. We'll leave it on the BBG internal flash for now (Venix does have a fixed-size swap partition but we'll have enough memory so that it won't have to use it much). Finally, we power down the board again completely at this point in preparation for installing it.

root@beaglebone:~# cd ~/emu
root@beaglebone:~/emu# setup_emu
Rev B Board
root@beaglebone:~/emu# mfm_emu --drive 1 --file ../emufile_a
Board revision B detected
Drive 0 num cyl 512 num head 8 track len 20836
begin_time 0
PRU clock 200000000
Waiting, seek time 0.0 ms max 0.0 min free buffers 200
bad pattern count 0
Read queue underrun 0 Write queue overrun 0 Ecapture overrun 0
glitch count 0 glitch value 0
0:test 0 0
0:test 1 0
0:test 2 0
0:test 3 0
0:test 4 0
1:test 0 0
1:test 1 0
1:test 2 0
1:test 3 0
1:test 4 0
select 1 head 0
select 0 head 0
Drive 0 Cyl 0->1 select 1, head 0 dirty 0
Waiting, seek time 5.8 ms max 5.8 min free buffers 200
Drive 0 Cyl 1->0 select 1, head 0 dirty 0
Waiting, seek time 4.1 ms max 5.8 min free buffers 200
Drive 0 Cyl 0->151 select 1, head 0 dirty 0
Waiting, seek time 10.1 ms max 10.1 min free buffers 200
Drive 0 Cyl 151->0 select 1, head 0 dirty 0
Waiting, seek time 4.0 ms max 10.1 min free buffers 200
Drive 0 Cyl 0->151 select 1, head 0 dirty 0
Waiting, seek time 4.0 ms max 10.1 min free buffers 200

fsck succeeds on both filesystems (/ and /usr). Qapla'!

Seemingly (that's called foreshadowing, kids) the last step should be to make the BBG automatically start serving the drive on powerup. Dave provides this facility as a systemd service. We edit /etc/mfm_emu.conf and set EmuFN1="/opt/mfm/emufile_a" in that file, then systemctl enable mfm_emu.service and put on the capacitor jumper so we have bridging power. When I ran this all from the separate power supply, the board came up and started serving the drive image, I powered on the system, and Venix booted. When I turned off the separate power supply after powering off the system, the board properly shut down. When I ran all this from the system power supply, however, it wouldn't ever see the emulated hard disk and wouldn't ever boot. Can you figure out why?

Pencils down: the BBG wasn't booting fast enough. I let the whole thing discharge overnight, took the BBG out of the MFM cape and put it on the workbench. I am not a fan of systemd (which is why I don't run current Linux on any of my servers or server-adjacent machines), but I grudgingly admit the tooling is better. As configured the BBG was taking a whopping 37 seconds to come up, over 31 of those seconds spent bringing up the network. But we're not using the network! I edited /etc/network/interfaces and commented out everything but the loopback, then turned off the WiFi ... and restarted.
This is still not fast enough for the Pro to see the emulated disk on startup. Even hacking the first three scripts to exit 0 immediately only got me down to about 7 seconds (apparently most of the overhead is from just launching them in the first place), and we've also not accounted for the time needed to bring the MFM emulator up either. I undid these changes; everything else is too slight to significantly matter. We simply can't get the BBG up before the Pro's firmware concludes no hard disk is present. However, we have an alternative. We know that if we give the BBG enough time — and "enough time" is now around seven or eight seconds — we can get the system up. We also know from testing and Dave's estimates that the supercapacitor bank will get us at least 30 seconds of runtime (in fact, in practice it seems to be several minutes or more on this Revision B). I logged back into the BBG, edited /etc/mfm_emu.conf and added PowerFailOptions="-w 20". What this allows us to do is power on the Pro, charge the capacitors (very fast) and start bringing up the BBG while the Pro's initial boot fails. With our new shortened boot time the BBG will be ready and waiting by the time the Pro displays its error screen. We then power the Pro off and power it back on. As long as the power cycle is less than 20 seconds (I eventually upped this to 30 as there appears to be plenty of power available), the BBG will ride the stored charge and keep serving the emulated disk image so that on that second power-on (and every power-on thereafter) the Pro will see the drive and boot from it. When we're done with a session, we power the system off for more than thirty seconds, causing the BBG to shut down cleanly as expected. With that sorted, let's do the memory upgrade too.

While we're on the subject of memory, here's how split I+D works: the MMU provides two sets of segment registers, one for instruction fetches and one for data, effectively turning a notionally Von Neumann CPU into a Harvard one. Instructions, plus their operands, addresses and constants embedded in the instruction stream, come from the I-space segments; loads and stores to memory addresses, however, reference the D-space segments. This effectively allows 128K to be addressed more or less at once, albeit with a bit of bookwork. (Again, also compare with the MOS 6509, a 6502 with on-board registers to set execution and indirection banks, though the banks are granular only to the 64K level.)

Venix can also mark an executable "pure," meaning its unmodifiable program text is kept in one place and shared by everyone running it. (Compare with AMOS on the Alpha Micro, where you can mark programs as reentrant [other users can run the same image] or reusable [the image doesn't have to be reloaded from disk].) Programs that require a much larger code size can be linked with code mapping, where as long as individual object modules (i.e., a program's component .o files) are each less than 8K, segment 1 can be used to page them in and out for a program size that can be as large as physical memory minus the size of the Venix kernel minus the size of the process' data. A fixed segment containing the paging support logic, function mapping table and other code sits in the lowest 8K controlled by segment 0, with up to 48K of data controlled by segments 2-6 and stack in the usual location controlled by segment 7. Alternatively, on J-11 systems the C compiler is split I+D aware and code can be compiled to support it (though such binaries will not run on a 325 or 350). Since both code-mapped and split I+D executables necessarily can't modify their program text, they can be run "pure" too.
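To make the segment bookkeeping concrete, here's a small sketch of my own (nothing from Venix itself): on the PDP-11, the top three bits of a 16-bit virtual address select one of eight 8K segment registers, and on a split I+D CPU like the J-11 that selection happens against either the instruction-space or the data-space register set depending on the kind of access.

    /* A sketch, K&R style to match the era: how a 16-bit address
       decomposes into a segment (active page register) number and
       a 13-bit offset within that segment's 8K window. */
    int segment(va)
    unsigned va;
    {
        return (va >> 13) & 7;      /* segments 0-7 */
    }

    unsigned segoffset(va)
    unsigned va;
    {
        return va & 017777;         /* offset within the 8K window */
    }

So the stack at the top of the address space lives in segment 7 (addresses 0160000-0177777 octal), which is why the code-mapping scheme above can hand segment 1 to the pager while leaving segments 2 through 6 for data.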
The PDP-11 Venices additionally provide special system calls for their own implementation of shared memory, called "shared data segments" (this feature was also available on Venix/86, albeit with different arguments and semantics, and Venix-specific code was generally not portable between architectures). This facility allows a second data segment to be dynamically constructed and mapped into one of the 8K segments. Backed by a filename, it becomes a mmap()-like facility which can also be shared by other processes, and it can page the segment window over a much larger range using a disk-backed file to handle addressing data spaces larger than 56K. IPC is further aided by a simple (though ultimately future-incompatible) implementation of semaphores, permitting local and global locks, which mainline AT&T Unix did not support until System V. Venix additionally supports real-time programming, allowing programs running as root to run at exclusive priority, lock their processes in memory, and directly access I/O by using another Venix-specific system call to point a segment at the device's physical address (usually segment 6, generally free except in code-mapped executables and programs with unusually large text segments). DEC, having a substantial investment in its own real-time operating environments for data acquisition and process automation, particularly valued a Unix option that could do so as well.

To see a bit of this in operation, consider this complete program in C (written in K&R C, because that was the era). It is a direct C-language port I made of a 4096-colour demonstration originally written in PDP-11 assembly by vol.litwr. Remember, int in this environment is 16-bit.

    #include <stdio.h>

    /* registers of the video board, seen through segment 6 once
       phys() maps them; the original declaration was lost here, so
       segment 6's base address (0140000 octal) is assumed */
    int *videop = (int *) 0140000;

    pp4096(r1, r2, r3)
    int r1, r2, r3;
    {
        /* the opening of this routine didn't survive; it evidently
           loads the pixel position and writes the first four colour
           bits, in the same pattern as the two plane writes below */
        r3 = r3 >> 4;
        while (*(videop + 2) > 0) { }
        *(videop + 3) = 16;
        *(videop + 4) = 18;
        *(videop + 10) = r3;
        *(videop + 9) = 4;
        r3 = r3 >> 4;
        while (*(videop + 2) > 0) { }
        *(videop + 4) = 4624;
        *(videop + 10) = r3;
        *(videop + 9) = 4;
        while (*(videop + 2) > 0) { }
    }

    pb4096(r1, r2, r3, r4, r5)
    int r1, r2, r3, r4, r5;
    {
        int i, j, k;

        for (i = r4; i > 0; i--) {
            k = r1 << 2;
            for (j = r5; j > 0; j--) {
                pp4096(k, r2, r3);
                k += 4;
            }
            r2++;
        }
    }

    pal12()
    {
        int i, j, r1 = 0, r2 = 0, r3 = 0, r4 = 5, r5 = 2;

        for (i = 35; i > 0; i--) {
            r1 = 0;
            for (j = 120; j > 0; j--) {
                pb4096(r1, r2, r3, r4, r5);
                r3++;
                r1 += r5;
            }
            r2 += r4;
        }
    }

    main(argc, argv)
    int argc;
    char **argv;
    {
        int i, vr4, vr6, vr8, vr14, vr16;

        /* map 64 bytes to segment 6 from physical address 0x3ffb00 */
        if (phys(6, 1, 0177754)) {
            perror("mapping");
            exit(1);
        }
        i = *videop;
        if (i != 0x0010) {
            fprintf(stderr, "unexpected value for hardware: %04x\n", i);
            exit(1);
        }
        i = *(videop + 2);
        if (i & 0x2000) {
            fprintf(stderr, "no EBO? %04x\n", i);
            exit(1);
        }
        fprintf(stderr, "ok, detected 380 with EBO\n");
        vr4 = *(videop + 2);
        vr6 = *(videop + 3);
        vr8 = *(videop + 4);
        vr14 = *(videop + 7);
        vr16 = *(videop + 8);
        *(videop + 2) = 0;
        pal12();
        sleep(10);
        *(videop + 2) = vr4;
        *(videop + 3) = vr6;
        *(videop + 4) = vr8;
        *(videop + 7) = vr14;
        *(videop + 8) = vr16;
        exit(0);
    }

The program points segment 6 at the video board with the Venix-specific phys(2) system call, then manipulates the registers to draw to the individual video planes. There's no cache and the Venix PDP-11 C compiler is fairly simple-minded, so we can simply directly drive the registers in a way that would make a modern C programmer blanch. (Rust programmers, just look away and go to your happy place.) Compiled with cc -o test test.c and run as ./test, the result looks like this. The phys() call needs to be adjusted to point to your video card if you try to run this on a 350 (where the address will vary with the slot). However, it works, and it's very pretty.
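A quick worked check on that magic third argument, reasoning purely from the comment in the listing rather than from any manual: 0177754 octal is 0xffec, and 0xffec multiplied by 64 is exactly 0x3ffb00. So phys() evidently takes the physical address in 64-byte units, which is how a 22-bit physical address fits in a 16-bit argument, and the second argument is then the mapping length in the same units (one unit here, i.e., the 64 bytes the comment promises). It's also consistent with the memory map described earlier: 0x3ffb00 sits near the top of the 4MB physical space, inside the megabyte reserved for option cards and I/O.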
We'll be messing around more with Pro graphics in a future article. The code we ran here directly accesses the hardware, but Venix included a complete high-level graphics package for both Venix/86 and PRO/VENIX with more typical drawing functions that operated at the Pro's maximum resolution. This package was in turn used for Unix commands to draw shapes and graphs, which I'll demonstrate.

VenturCom initially had a Unix Version 7 (V7) license, the last Bell Labs release in 1979 before AT&T took it over and the first iteration of Unix generally considered readily portable. XENIX on x86 was also initially derived from this version. V7 was the earliest mainline release to officially integrate (among other things) the Bourne shell, awk, a Fortran-77 compiler, a portable C compiler (though the C compilers in Venix/86 and Venix-11 seem to have different lineages), uucp, tar and make, though development versions of these tools appeared in other Bell Labs variants like Programmer's Workbench (PWB/UNIX). In addition to its custom changes, Venix further added several components from 4BSD, most notably the C-shell (I'm one of those people), more and vi. V7 Unix was the basis for the original Venix-11, which wasn't much of a stretch since V7 Unix already ran there, and by descent PRO/VENIX, which arrived in July 1984 on the 350. This package was sold explicitly with a future license for UNIX System V, which had just come out in 1983. In October 1984 VenturCom upgraded PRO/VENIX to "Rev 2.0" (and misnamed it VENIX/PRO in the release notes) which fixed some bugs and added support for the 380 while remaining compatible with the 350. I'm emphasizing the Rev part; it will shortly become salient.

Here's PRO/VENIX Rev 2.0 on Xhomer, a Pro 350 emulator which provides a pre-installed disk image. If you log in as demo, you get a little graphics demonstration. Rev 2.0 was still based on V7, effectively the Pro version of Venix/86 Encore from September 1984 (and the same release used on the Rainbow as Venix/86R), most easily distinguished because of the missing uname command and underlying system call, which weren't added until UNIX System III. The kernel is an ordinary executable (again, salient shortly). There is no formal shutdown procedure for any of these earlier releases of Venix (shutdown doesn't even exist) — you just get everyone out and turn the computer off.

PRO/VENIX V2.0 (not Rev 2.0), however, is a different animal. It was released in July 1985 and primarily intended for the 380, though a set of 350 overlay disks patch the kernel and certain other large programs during installation; you can't boot PRO/VENIX V2.0 on a 325 or 350 without them. True to VenturCom's word, it leapfrogged UNIX System III completely and is in fact a derivative of System V Release 2 (SVR2 or V.2). The V is deliberate and capitalised for a reason: it's for System V, not short for Version. V2.0 was explicitly intended to correspond with System V 2.0. It was the final release of PRO/VENIX and the version we have on our 380 here. As a parenthetical note, Venix/86 2.1 was the last of the original version number sequence; 2.1 is actually less advanced than PRO/VENIX V2.0 despite the lower apparent "version" number and is better considered as an upgrade to Venix/86 Encore, as it remains based on V7.

On boot, V2.0 runs fsck whether the disk needs it or not, and you get a SUPER prompt when you've logged on as root (there is also a MAINT> prompt when you're single-user; I'll show that when we actually shut down at the end). This prompt and an associated warning come from Venix's default .profile.
The screenshot here is a direct monochrome grab from the Pro's planar video using the pscreen utility, which emits the screen contents as LA50 printer codes; you can also save and load screen images with sscreen and lscreen respectively, in a format I haven't deciphered yet. These tools are part of the graphics package and also exist in Venix/86.

Since this is System V, terminals are configured in /etc/inittab instead of having to muck around with /etc/ttys, though you'll note that the serial port is called com1 as a PC holdover (how bourgeois). /dev/lp is the serial printer port, but Venix limits this to 4800bps, so we'll stick with the faster serial port for logins. We'll put /etc/getty on com1 at the highest speed available, 9600bps, and then bounce init's runlevel with telinit. When we do, we get a login prompt. There is no banner. (By default, the root password is gnomes.) Well, la dee dah.

I went back to the console and created a plain user the old fashioned way (I earned it): I made entries in /etc/passwd (no password because I don't care) and /etc/group, created a home directory for myself, and made my new account the owner. I don't have the V2.0 release notes, but I bet they were interesting. There are in fact no man pages in PRO/VENIX because you (are supposed to) have real manuals. Along the way I think we went backwards on this, somewhere. On the other hand, news(1) is not in my older copy of the User Reference Manual and appears to have been newly added to V2.0 also. A more immediate problem: my terminal setting is pretty messed up, because it assumed I was logging in from some other Pro connected via the serial port, so TERM needs fixing by hand (and in this shell you can't just export TERM=vt100). And, well, I'm using the Bourne shell and I hate the Bourne shell, so I set my login shell to the C-shell like civilized people.

Let's get our bearings. There's your proof that this is System V. Also, because our clock battery failed decades ago, the machine is still getting its time from the filesystem. This is useful to us because that tells us when the computer was last likely in use, as we can reasonably assume a regular user would have either gotten the battery replaced or at least set the time correctly. I should also add for the remainder of this session that it is likely this local installation of Venix had some minor removals or modifications.

Before we explore further, I'm going to customize my environment a little and make it set my terminal correctly when I'm logged in over the serial port. The current Venix license it came with (something to hack later) only allows two users, so a simplistic method will suffice. If we have a functional awk, and we should, then we can do this easily in .login:

    % cat >> .login
    set j=`who | grep spectre | tail -1 | awk '{print $2}'`
    if ("x$j" == "xcom1") then
        setenv TERM vt100
    endif
    ^D

While we're here, let's prove the C compiler works and see how file(1) classifies things:

    % cat > hello.c
    #include <stdio.h>

    main()
    {
        puts("hello world");
        exit(0);
    }
    ^D
    % cc -o hello hello.c
    % ./hello
    hello world
    % file hello
    hello: executable not stripped
    % file /bin/date
    /bin/date: pure executable
    % file /usr/bin/awk
    /usr/bin/awk: separate I&D
    % file /usr/lib/libcurses.a
    /usr/lib/libcurses.a: archive
    % file /usr/lib/liby.a
    /usr/lib/liby.a: archive
    % file /venix
    /venix: separate I&D not stripped

The short program we compiled is an unstripped executable, but most of the essential system binaries like /bin/date are marked pure so that they can be shared by other logged in users. However, because awk and the /venix kernel are so large, they are compiled with split I+D ("separate I&D"), which is more efficient and has fewer restrictions than using code mapping.
You can see that when we ask about the object file sizes: our unstripped compiled binary has a code size of 2348 bytes, a data size of 244 bytes and a BSS (uninitialized data) size of 1224 bytes. This is actually smaller than date, a "pure" executable, which has code, data and BSS sizes all larger at 8320, 1008 and 1352 bytes respectively, showing that the "purity" of an executable has no relationship to the size of its image in memory. On the other hand, our split I+D binaries (awk and the kernel) have comparatively large code sizes. Even though the kernel has no BSS portion, there is no way its code and data could simultaneously live in the same 64K space (for that matter, neither could awk's, because the stack is a fixed 8K at the top of the address range). However, since both were compiled split I+D, they don't have to.

As a point of comparison, here's what you'd see in PRO/VENIX Rev 2.0. /bin/date in the earlier version is not pure (though for what it's worth /bin/ls is, so it's not like it lacks pure executables altogether). awk is in a different place, but it is, as expected, a mapped (meaning code-mapped) executable owing to its size. Unexpectedly, one of the archives is also marked as an executable, though it doesn't have execute bits and you can't actually run it. But the biggest surprise is that the kernel itself is not mapped, which is explained when we look at the sizes: /bin/ls, a pure executable, has a size of 7616+842+3852 — all bigger than /bin/date, which isn't — and again more proof that "purity" says nothing about program size.

A couple other things: this is (well, was) an RD52, so it's a 30MB drive, the second biggest ever sold for the Pro. However, if you add these numbers up, you only get around 24MB; the rest is in fact a hidden swap partition. 6MB might seem a little excessive when the original memory size of this machine was 512K, but in Venix the swap isn't just for moving processes out so others can run — it's also used as a precache for certain pure executables that have the sticky bit set, such as ls. (Again, not an uncommon feature in other operating systems of this era. AMOS could preload certain executables into the system image and always run them from RAM, for example.) These executables are copied to the swap on first use after bootup and executed henceforth from there in specific, precomputed locations, avoiding a walk through the filesystem. This also means, however, that once copied such executables cannot be deleted while the system is up or, in the words of the manual, "the file system will become slightly corrupted." (Eeeek.) Note that no version of PRO/VENIX supported demand paging; that wasn't implemented in AT&T Unix until SVR2.4. There is no /home or other special mountpoint or partition for home directories; everything else is in /usr, as was typical for Unix of this vintage.

I later copied the files out with Kermit (it had Kermit on the hard disk) and decided to see what the Fedora Linux POWER9 would make of them. Strings from the kernel image include:

    2^24
    No swap space for exec args
    %s on the %s, unit %d
    reading
    writing
    %s while %s block number %D. Status 0%o
    Can't allocate message buffer.
    VENIX SysVr2.0 85061411 PRO380
    [...]

It is notable that in 2025 my Linux box still recognizes what type of executables these are. Anyway, let's dig around in the filesystem. Whoever had the system last didn't really clean up much; the tmac.* files in particular are macro definitions that look like they were supposed to be put somewhere else.
Here's root's .profile, by the way. .profile must live in / because, during maintenance (single-user) mode, /usr and /tmp aren't even mounted. Once in single-user mode, the system can be powered off. /f0 and /f1 are mount points for the two RX50 floppies. But, in case you thought Venix could get around the RX50 formatting restriction, guess again: "On PRO/VENIX, floppy formatting is not possible; format is useful only for creating file systems on factory-formatted diskettes." Damn you, Kenneth Olsen. Better go buy that Rainbow.

This next file did not come with Venix: I don't know who did that. It's among the last files created on the system before it came into my possession. You're welcome, little buddy. You're welcome.

I suspect this directory did come with the system, but it's been modified, and I even played with the demo script and data files a little myself. There is no demo login on this system (it might have been removed), though if you run that shell script you'll see some plots that serve a similar function. Here are two of them.

    % ls -l /usr
    total 23
    drwxr-xr-x   3 bin      bin        80 Jun 28 22:34 adm
    drwxrwxr-x   2 bin      bin      1952 Jun 28 22:34 bin
    drwxr-xr-x   3 bin      bin        96 Jun 19  1985 games
    drwxrwxr-x   2 guest    visitor   272 Jun 11 13:49 guest
    drwxr-xr-x   2 bin      bin        64 Jun 19  1985 help
    drwxrwxr-x   3 bin      bin       880 Jun 19  1985 include
    drwxrwxr-x   2 root     sys       656 Jun 19  1985 kermit
    drwxrwxr-x  16 bin      bin      1312 Jun 19  1985 lib
    drwxr-xr-x   2 bin      bin       512 Jun 19  1985 lost+found
    drwxrwxr-x   2 bin      mail       64 May  2  1986 mail
    drwxr-xr-x   2 bin      bin        64 Jun 19  1985 news
    drwxr-xr-x   2 bin      bin        64 Jun 19  1985 pub
    drwxrwxr-x   3 spectre  other     304 Jun 28 22:11 spectre
    drwxr-xr-x   6 bin      bin        96 Jun 19  1985 spool
    drwxr-xr-x   4 bin      bin       128 Jun 19  1985 sys
    drwxrwxrwx   2 bin      bin        80 Jun  2  1987 tmp

/usr/bin was me copying /bin/date there for a reason I'll get to in a moment. The rest are as they came. At one time there was a guest login (it's no longer in /etc/passwd) and it has one file. This is not unlike my first time trying to quit vi, though today you wouldn't catch me dead using emacs. The apparent former owner turns up in the Caltech archives (PDF). Unfortunately no other names appear on the disk, and there are no other files on the system that appear related to his research, or anyone else's. There are also no mail files to read, and while this version of Venix supports UUCP, it doesn't look like it was ever used. UUCP (Unix-to-Unix Copy) was the only networking option supported by Venix of this vintage; no PRO/VENIX or Venix-11 version supported TCP/IP or any other means of networking, serial line or otherwise. Only P/OS officially supported the CTI Ethernet card.

A subset of the Unix games package is in Venix. The manual says, "If you find that the User Reference Manual is rather prosaic reading, see section 6 for games." Strangely, this section of the manual is actually in the Programmer Reference Manual, and I don't like what this is implying. PRO/VENIX Rev 2.0's set is abbreviated compared to V7 Unix's complement, possibly for space reasons.

    % ls -l /usr/games
    total 106
    -rwxr-xr-x 1 bin    bin 10534 Apr  3  1985 bj
    -rwxr-xr-x 1 bin    bin  6156 Apr  3  1985 fortune
    drwxr-xr-x 2 bin    bin    80 Jun 19  1985 lib
    -rwsr-xr-x 1 daemon bin 34812 Apr  3  1985 snake
    % /usr/games/fortune
    Take care of the luxuries and the necessities will take care of themselves.
    % /usr/games/fortune
    Colorless green ideas sleep furiously.
    % /usr/games/bj
    Black Jack!
    New game
    Shuffle
    JC up
    JS+6D Hit? n
    You have 16
    Dealer has KH = 20
    You lose $2
    Action $2 You're down $2
    New game
    TD up
    TH+KS Hit? ^C
    Action $2 You're down $2
    Bye!!
snake worked pretty well too. A copy of C-Kermit was also installed, along with source code. This is a different Kermit build than the executable for Rev 2.0, which is a Venix-specific one rather than a generic System III or System V target. It is the easiest way to interchange files with Venix; I used Kermit as the terminal program on the Linux side and on the Venix side used kermit -is to push files, or kermit -ir to receive them. On the Linux side, transfers should always be in binary mode. A bit of garbage follows the transfer but the files always checked out.

We saw those already with news(1). These are a couple of sundry short help files (remember, no man pages). more.help just explains how more works. basic.help is for the built-in copy of UBC (University of British Columbia) BASIC, which is "an ANSI compatible BASIC interpreter that runs under VENIX." It is a standard part of the operating system and can either run separately written programs or act as a simple REPL like other BASICs of the time. There is no graphics support, but it does have floating point. The interpreter is started with the basic command; bye returns to the shell. Interestingly, instead of new, UBC BASIC uses scr to reset the program and variable area. A couple more sundry files are in /usr/pub.

Finally, let's look at what commands are installed. I'm not going to go through all of these, and I don't have manual entries for all of them either, but it is a substantial complement and appears to have all of the base tools in System V R2.0 plus the Venix value-adds from 4BSD and a handful of Venix-specific utilities. The ps utility in particular is more of a system status utility, printing not only processes and their flags but also swap and memory status. This system additionally has Ted Green's vedit (probably a port of the C-language version developed for Xenix) and what look like cross-assemblers. You can also see the tools that were used to draw the graphs I showed you. Here are the libraries. Remember, they aren't shared, although the programs that use them might be.

Lastly, the pieces of the kernel. This facility isn't in PRO/VENIX Rev 2.0 either, but Venix/86 2.1 does have the ability to rebuild its older V7 kernel (there are no PDP-11 targets, however). No source code is included, just object files. Here are relevant sections of the Makefile with the supported targets and the Pro-specific targets. In the extracts below, the XFER kernel refers to the bootable mini-kernel on the first install floppy disk. win doesn't mean Microsoft Windows; it's just an abbreviation for a Winchester hard disk. The 380 target passes -i to the linker to enable split I+D spaces. Since the 350 doesn't support that, its target passes -m instead to generate a (slower) code-mapped kernel. This won't run as well as Rev 2.0's, but because the kernel is now much larger, it's the only way to boot V2.0 on the 350 at all. The 350 overlay disks replace all split I+D programs, including the kernel, with code-mapped versions. (See these "reconstructed" PRO/VENIX V2.0 disk images.) It is also worth noting that there is no target for Venix-11 anymore (on real PDP-11 hardware), which seems to have become unsupported by VenturCom after the Pro port. Although the 80286 port comes in both real ("compatibility") and protected mode variations, the kernels otherwise have the same features, though of course you can't actually build a PC kernel on the Pro because there are no binaries and no x86 linker.
Also, as written, no target will actually build anything: by default there is no /usr/bin/date, and there's also a CHECK = -@[ -f uts.c ] || exit 0 && in the Makefile — since uts.c is not present, the build will fail when this check is run as well. uts.c can be reconstructed and a very small one will suffice. However, given the absence of anything to write drivers for or a kernel ABI to write them to, or for that matter any source code, building a new kernel right now is mostly an academic exercise.

Shutting down is done from single-user mode (the MAINT> prompt comes from the .profile I showed you earlier). At this point you can just turn off the machine.

Let's finish our twin stories, starting with Venix. PRO/VENIX doesn't seem to have sold much given that not many Pros got sold to begin with. This worsened after DEC started to emphasize the Pro's "PDP-11 compatibility," such as it was, since most of these later customers had prior experience with DEC and ran their machines under P/OS so that they got RSX-11M, or with the RT-11 or COS ports instead. (There was even an unofficial 2.9BSD port to the Pros, included as an example disk image with Xhomer, though none of DEC's own Unices like Ultrix ever made it.) By then VenturCom was making most of their money from the x86 versions instead, so cutting the other ports out was no great loss. VenturCom continued aligning Venix releases with System V release numbers for the rest of Venix's commercial existence; Venix's last SVR2 was V2.4 for the 80286 and 80386 in 1988, later marketed as MaxRT.

Meanwhile, the new DEC personal computer strategy did about as badly as the prior one, though mostly due to the Rainbow, which by now was seen by the market as so thoroughly idiosyncratic and incompatible that sales had no chance of rebounding. DEC completed a port of Windows 1.0 to the Rainbow but it made little difference to its perception. In 1986 Digital introduced the VAXmate, ostensibly side-by-side with the Rainbow, but secretly intended as its replacement. The VAXmate was a true AT-class all-in-one clone PC, not based on the VAX architecture, but notable in that it could netboot over Ethernet from a VAX/VMS server as well as run regular MS-DOS from a conventional hard disk. Instead of incompatible RX50 floppy drives it had the lower-capacity but PC-interchangeable 1.2MB 5.25" RX33, and did not need nor use Rainbow peripherals. The new 380 did have some takers, but that was a relative statement, especially since the PDP-11 market was fading generally and other business units at DEC wanted to promote VAX systems. DEC ultimately cancelled both the Professional and Rainbow families in 1987, replacing the 380-based VAX Console with a MicroVAX II for the second-generation VAX 8800s. Of the original 1982 rollout only the DECmate series survived, sold as the DECmate III+ until 1990 when it was no longer seen as competitive with PC word processing options. DEC nevertheless kept trying to sell their proprietary architectures as microcomputers, particularly in the form of the later VAXstations which primarily ran VMS as workstations and were generally compatible with bigger VAXen yet ultimately suffered a similar fate to the Pros. Digital did sell their own lines of PC clone desktops and laptops, and the Alpha had some modest initial success as a high-end PC competitor under emulation, but on the whole there were a lot fewer DEC computers in the home than there should have been. DEC was bought out by Compaq in 1998, and Compaq itself was acquired by Hewlett-Packard in 2002. This isn't the last you'll see of this machine.
I'd like to explore writing some userspace networking options, and like I say, there's a lot of untapped potential in that graphics hardware. Stay tuned for future articles.

a month ago 16 votes

More in technology

Greatest Hits

I’ve been blogging now for approximately 8,465 days since my first post on Movable Type. My colleague Dan Luu helped me compile some of the “greatest hits” from the archives of ma.tt, perhaps some posts will stir some memories for you as well: Where Did WordCamps Come From? (2023) A look back at how Foo …

21 hours ago 2 votes
Let's give PRO/VENIX a barely adequate, pre-C89 TCP/IP stack (featuring Slirp-CK)

Years ago I bought TCP/IP Illustrated (what would now be called the first edition, prior to the 2011 update) for a hundred-odd bucks on sale; it has now sat on my bookshelf, encased in its original shrinkwrap, for at least twenty years. It would be fun to put up the 4.4BSD data structures poster it came with but that would require opening it. Fortunately, today we have AI, er, many more excellent and comprehensive documents on the subject, and more importantly, we've recently brought back up an oddball platform that doesn't have networking either: our DEC Professional 380 running the System V-based PRO/VENIX V2.0, which you met a couple articles back. The DEC Professionals are a notoriously incompatible member of the PDP-11 family and, short of DECnet (DECNA) support in its unique Professional Operating System, there's officially no other way you can get one on a network — let alone the modern Internet. Are we going to let that stop us? Of course not; we'll even have it speaking TLS 1.3 through a Crypto Ancienne proxy. And, as we'll discuss, if you can get this thing on the network, you can get almost anything on the network! Easily portable and painfully verbose source code is included.

Recall from our lengthy history of DEC's early misadventures with personal computers that, in Digital's ill-advised plan to avoid the DEC Pros cannibalizing low-end sales from their categorical PDP-11 minicomputers, Digital's Small Systems Group deliberately made the DEC Professional series nearly totally incompatible despite the fact they used the same CPUs. In their initial roll-out strategy in 1982, the Pros (as well as their sibling systems, the Rainbow and the DECmate II) were only supposed to be mere desktop office computers — the fact the Pros were PDP-11s internally was mostly treated as an implementation detail. The idea backfired spectacularly against the IBM PC when the Pros and their promised office software failed to arrive on time and in 1984 DEC retooled around a new concept of explicitly selling the Pros as desktop PDP-11s. This required porting operating systems that PDP-11 minis typically ran: RSX-11M Plus was already there as the low-level layer of the Professional Operating System (P/OS), and DEC internally ported RT-11 (as PRO/RT-11) and COS. PDP-11s were also famous for running Unix and so DEC needed a Unix for the Pro as well, though eventually only one official option was ever available: a port of VenturCom's Venix based on V7 Unix and later System V Release 2.0 called PRO/VENIX.

After the last article, I had the distinct pleasure of being contacted by Paul Kleppner, the company's first paid employee in 1981, who was part of the group at VenturCom that did the Pro port and stayed at the company until 1988. Venix was originally developed from V6 Unix on the PDP-11/23, incorporating the real-time kernel extensions (such as semaphores and asynchronous I/O) of Myron Zimmerman, then a postdoc in physics at MIT; Kleppner's father was the professor of the lab Zimmerman worked in. Zimmerman founded VenturCom in 1981 to capitalize on the emerging Unix market, becoming one of the earliest commercial Unix licensees. Venix-11 was subsequently based on the later V7 Unix, as was Venix/86, which was the first Unix on the IBM PC in January 1983 and was ported to the DEC Rainbow as Venix/86R. In addition to its real-time extensions and enhanced segmentation capability, critical for memory management in smaller 16-bit address spaces, it also included a full desktop graphics package.
Notably, DEC themselves were also a Unix licensee through their Unix Engineering Group and already had an enhanced V7 Unix of their own running on the PDP-11, branded initially as V7M. Subsequently the UEG developed a port of 4.2BSD with some System V components for the VAX and planned to release it as Ultrix-32, simultaneously retconning V7M as Ultrix-11 even though it had little in common with the VAX release. Paul recalls that DEC did attempt a port of Ultrix-11 to the Pro 350 themselves but ran into intractable performance problems. By then the clock was ticking on the Pro relaunch and the issues with Ultrix-11 likely prompted DEC to look for alternatives. Crucially, Zimmerman had managed to upgrade Venix-11's kernel while still keeping it small, a vital aspect on his 11/23 which lacked split instruction and data addressing and would have had to page in and out a larger kernel otherwise. Moreover, the 11/23 used an F-11 CPU — the same CPU as the original Professional 350 and 325. DEC quickly commissioned VenturCom to port their own system over to the Pro, which Paul says was a real win for VenturCom, and the first release came out in July 1984 complete with its real-time features intact and graphics support for the Pro's bitmapped screen. It was upgraded ("PRO/VENIX Rev 2.0") in October 1984, adding support for the new top-of-the-line DEC Professional 380, and then switched to System V (SVR2) in July 1985 with PRO/VENIX V2.0. (For its part Ultrix-11 was released as such in 1984 as well, but never for the Pro series.) Keep that kernel version history in mind for when we get to oddiments of the C compiler.

As for networking, though, with the exception of UUCP over serial, none of these early versions of Venix on either the PDP-11 or 8086 supported any kind of network connectivity out of the box — officially the only Pro operating system to support its Ethernet upgrade option was P/OS 2.0. Although all Pros have a 15-pin AUI network port, it isn't activated until an Ethernet CTI card is installed. (While Stan P. found mention of a third-party networking product called Fusion by Network Research Corporation which could run on PRO/VENIX, Paul's recollection is that this package ran into technical problems with kernel size during development. No examples of the PRO/VENIX version have so far been located and it may never have actually been released. You'll hear about it if a copy is found. The unofficial Pro 2.9BSD port also supports the network card, but that was always an under-the-table thing.) Since we run Venix on our Pro, that means currently our only realistic option to get this on the 'Nets is also over a serial port. The Pro has two, and we'll dedicate the slower printer port to our serial IP implementation. PRO/VENIX supports using only the RS-423 port as a remote terminal, and because it's twice as fast, it's more convenient for logins and file exchange over Kermit (which also has no TCP/IP overhead). Using the printer port also provides us with a nice challenge: if our stack works acceptably well at 4800bps, it should do even better at higher speeds if we port it elsewhere. On the Pro, we connect to our upstream host using a BCC05 cable (in the middle of this photograph), which terminates in a regular 25-pin RS-232 on the other end.

Now for the software part. There are other small TCP/IP stacks, notably things like Adam Dunkels' lwIP and so on.
But even SVR2 Venix is by present standards an old Unix with a much less extensive libc and more primitive C compiler — in a short while you'll see just how primitive — and relatively modern code like lwIP's would require a lot of porting. Ideally we'd like a very minimal, indeed barely adequate, stack that can do simple tasks and can be expressed in a fashion acceptable to a now antiquated compiler. Once we've written it, it would be nice if it were also easily portable to other very limited systems, even by directly translating it to assembly language if necessary.

What we want this barebones stack to accomplish will inform its design: it would make little sense as a server, because we'd have to keep the machine and the hardware running 24-7 to make such a use case meaningful. The Ethernet option was reportedly competent at server tasks, but Ethernet has more bandwidth, and that card also has additional on-board hardware. Let's face the cold reality: as a server, we'd find interacting with it over the serial port unsatisfactory at best, and we'd use up a lot of power and MTBF keeping it on more than we'd like to. Therefore, we really should optimize for the client case, which means we also only need to run the client when we're performing a network task. Similarly, on a machine with no remote login capacity (like, I dunno, a C64), the person on the console gets it all. Therefore, we really should optimize for the single user case, which means we can simplify our code substantially by merely dealing with sockets sequentially, one at a time, without having to worry about routing packets we get on the serial port to other tasks or multiplexing them. Doing so would require extra work for dual-socket protocols like FTP, but we're already going to use directly-attached Kermit for that, and if we really want file transfer over TCP/IP there are other choices. (On a larger antique system with multiple serial ports, we could consider a setup where each user uses a separate outgoing serial port as their own link, which would also work under this scheme.) Some of you may find this conflicts with your notion of what a "stack" should provide, but I'd argue that the breadth of a full-service driver would be wasted on a limited configuration like this and be unnecessarily more complex to write and test. Worse, in many cases, is better, and I assert this particular case is one of them.

Keeping the above in mind, what are appropriate client tasks for a microcomputer from 1984, now over 40 years old — even a fairly powerful one by the standards of the time — to do over a slow TCP/IP link? Simple fetch protocols like finger, Gopher and HTTP come to mind. (Crypto Ancienne's carl can serve as an HTTP-to-HTTPS proxy to handle the TLS part, if necessary.) We could use protocols like these to download and/or view files from systems that aren't directly connected, or to send and receive status information. One task that is also likely common is an interactive terminal connection (e.g., Telnet, rlogin) to another host. However, as a client this particular deployment is still likely to hit the same sorts of latency problems, for the same reasons we would experience them connecting to it as a server. The other tasks are not highly sensitive to latency, require only a single "connection" and no multiplexing, and are simple protocols which are easy to implement. Let's call this feature set our minimum viable product. Because we're writing only for a couple of specific use cases, and to make them even more explicit and easy to translate, we're going to take the unusual approach of having each of these clients handle their own raw packets in a bytewise manner.
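To give a feel for what "bytewise" means in practice, here is a sketch of my own (the function name mk_ip4 is mine, and none of this is the actual BASS source): laying out a minimal IPv4 header one octet at a time, with no structs, no htons() and no assumptions about byte order, and computing the RFC 1071 header checksum the same way. The seemingly paranoid & 0xff masks matter on this compiler, for reasons we'll get to shortly.

    /* Sketch only: build a 20-byte IPv4 header into buf, bytewise.
       len is the total datagram length including this header; src and
       dst each point to four address bytes. */
    void mk_ip4(buf, len, proto, src, dst)
    char *buf, *src, *dst;
    int len, proto;
    {
        long ck;
        int i;

        buf[0] = 0x45;                  /* version 4, IHL 5 words */
        buf[1] = 0x00;                  /* DSCP/ECN */
        buf[2] = (len >> 8) & 0xff;     /* total length, big-endian */
        buf[3] = len & 0xff;
        buf[4] = 0; buf[5] = 0;         /* identification */
        buf[6] = 0x40;                  /* flags: don't fragment */
        buf[7] = 0;                     /* fragment offset */
        buf[8] = 64;                    /* TTL */
        buf[9] = proto & 0xff;          /* 1 ICMP, 6 TCP, 17 UDP */
        buf[10] = 0; buf[11] = 0;       /* checksum, filled in below */
        for (i = 0; i < 4; i++) {
            buf[12 + i] = src[i];
            buf[16 + i] = dst[i];
        }

        /* RFC 1071: ones'-complement sum of big-endian 16-bit words,
           carries folded back in, then complemented */
        ck = 0;
        for (i = 0; i < 20; i += 2)
            ck += ((long)(buf[i] & 0xff) << 8) + (buf[i + 1] & 0xff);
        while (ck >> 16)
            ck = (ck & 0xffff) + (ck >> 16);
        ck = ~ck & 0xffff;
        buf[10] = (ck >> 8) & 0xff;
        buf[11] = ck & 0xff;
    }

Nothing here is clever, and that's the point: code like this will compile almost anywhere and translates to assembly nearly line for line.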
For the actual serial link we're going to go even more barebones and use old-school RFC 1055 SLIP instead of PPP (uncompressed, too, not even Van Jacobson CSLIP). This is trivial to debug and straightforward to write (see the sketch below), and if we do so in a relatively encapsulated fashion, we could consider swapping in CSLIP or PPP later on. A couple of utility functions will do the IP checksum algorithm and the reading and writing of the serial port, and DNS and some aspects of TCP also get their own utility subroutines, but otherwise all of the programs we will create will read and write their own network datagrams, using the SLIP code to send and receive over the wire.

The C we will write will also be intentionally very constrained, using bytewise operations assuming nothing about endianness and using as little of the C standard library as possible. For types, you only need some sort of 32-bit long, which need not be native, an int of at least 16 bits, and a char type — which can be signed, and in fact has to be to run on earlier Venices (read on). You can run the entirety of the code with just malloc/free, read/write/open/close, strlen/strcat, sleep, rand/srand and time for the srand seed (and fprintf for printing debugging information, if desired). On a system with little or no operating system support, almost all of these primitive library functions are easy to write or simulate, and we won't even assume we're capable of non-blocking reads, despite the fact that Venix can do them. After all, from that which little is demanded, even less is expected.

On the other end of the link, the original way to connect a machine over SLIP was to a host running a tool like slattach, which effectively makes a serial port directly into a network interface. Such an arrangement would be the most flexible approach from the user's perspective because you necessarily have a fixed, bindable external address, but obviously such a scheme didn't scale over time. With the proliferation of dialup Unix shell accounts in the late 1980s and early 1990s, closed-source tools like 1993's The Internet Adapter ("TIA") could provide the SLIP and later PPP link just by running them from a shell prompt. Because they synthesize artificial local IP addresses, sort of NAT before the concept explicitly existed, the architecture of such tools prevented directly creating listening sockets — though for some situations this could be considered more of a feature than a bug. Any needed external ports could be proxied by the software anyway, and later network clients tended not to require it, so for most tasks it was more than sufficient. Closed-source and proprietary SLIP/PPP-over-shell solutions like TIA were eventually displaced by open source alternatives, most notably SLiRP. SLiRP (hereafter Slirp so I don't gouge my eyes out) emerged in 1995 and used a similar architecture to TIA, handing out virtual addresses on a synthetic network and bridging that network to the Internet through the host system. It rapidly became the SLIP/PPP shell solution of choice, leading to its outright ban by some shell ISPs who claimed it violated their terms of service. As direct SLIP/PPP dialup became more common than shell accounts, during which time yours truly upgraded to a 56K Mac modem I still have around here somewhere, Slirp eventually became most useful for connecting small devices via their serial ports (PDAs and mobile phones especially, but really anything — subsets of Slirp are still used in emulators today like QEMU for a similar purpose) to a LAN. By a shocking and completely contrived coincidence, that's exactly what we'll be doing!
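Speaking of our end of the wire, here's roughly what the transmit side of RFC 1055 framing amounts to, as a sketch in the same constrained style (an illustration, not the actual slip.c; putser() is a hypothetical stand-in for a one-byte write to the serial port):

    /* Sketch only: send one datagram of len bytes, SLIP-framed.
       The four metabytes are described in detail below. */
    void slip_send(pkt, len)
    char *pkt;
    int len;
    {
        int i, c;

        putser(0xc0);               /* SLIP_END, optionally at the start too */
        for (i = 0; i < len; i++) {
            c = pkt[i] & 0xff;      /* mask off any sign extension */
            if (c == 0xc0) {        /* END within the payload... */
                putser(0xdb);       /* ...becomes SLIP_ESC */
                putser(0xdc);       /* SLIP_ESC_END */
            } else if (c == 0xdb) { /* the escape byte itself... */
                putser(0xdb);       /* ...becomes SLIP_ESC */
                putser(0xdd);       /* SLIP_ESC_ESC */
            } else
                putser(c);
        }
        putser(0xc0);               /* SLIP_END terminates the datagram */
    }

The receive side is the more interesting half; we'll get to its framing heuristics in a bit.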
Slirp has not been officially maintained since 2006. There is no package in Fedora, which is my usual desktop Linux, and the one in Debian reportedly has issues. A stack of patch sets circulated thereafter, but the planned 1.1 release never happened, and other crippling bugs remain, some of which were addressed in other patches that don't seem to have made it into any release, source or otherwise. If you tried to build Slirp from source on a modern system and it just immediately exits, you got bit. I have incorporated those patches, and a couple of my own covering port naming and the configure script, plus some additional fixes, into an unofficial "Slirp-CK" which is on Github. It builds the same way as prior versions and is tested on Fedora Linux. I'm working on getting it functional on current macOS also.

Next, I wrote up our four basic functional clients: ping, DNS lookup, NTP client (it doesn't set the clock, just shows you the stratum, refid and time, which you can use for your own purposes), and TCP client. The TCP client accepts strings up to a defined maximum length, opens the connection, sends those strings (optionally separated by CRLF), and then reads the reply until the connection closes. This all seemed to work great on the Linux box, which you yourself can play with as a toy stack (directions at the end). Unfortunately, I then pushed it over to the Pro with Kermit and the compiler immediately started complaining.

SLIP is a very thin layer on IP packets. There are exactly four metabytes, which I created preprocessor defines for. A SLIP packet ends with SLIP_END, or hex $c0. Where this byte must occur within a packet, it is replaced by a two-byte sequence for unambiguity, SLIP_ESC SLIP_ESC_END, or hex $db $dc, and where the escape byte itself must occur within a packet, it gets a different two-byte sequence, SLIP_ESC SLIP_ESC_ESC, or hex $db $dd. Although I initially set out to use defines and symbols everywhere instead of naked bytes, and wrote slip.c on that basis, I eventually settled on raw bytes afterwards, using copious comments so it was clear what was intended to be sent. That probably saved me a lot of work renaming everything, because I dimly recalled that early C compilers, including System V's, limit their identifiers to eight characters (the so-called "Ritchie limit"). At this point I probably should have simply removed the defines entirely for consistency with their absence elsewhere, but I went ahead and trimmed them down to more opaque, pithy identifiers.

That wasn't the only problem, though. I originally had two functions in slip.c, slip_start and slip_stop, and the compiler didn't like that either, despite each appearing to have a unique eight-character prefix. That's because their symbols in the object file are actually prepended with various metacharacters like _ and ~, so effectively you only get seven characters in function identifiers, an issue the error message fails to explain clearly. The next problem: there's no unsigned char, at least not in PRO/VENIX Rev. 2.0, which I want to support because it's more common, and presumably not in the original versions of PRO/VENIX and Venix-11 either. (This type does exist in PRO/VENIX V2.0, but that's because it's System V and has a later C compiler.) In fact, the unsigned keyword didn't exist at all in the earliest C compilers, and even when it did, it couldn't be applied to every basic type.
Although unsigned char was introduced in V7 Unix and is documented as legal in the PRO/VENIX manual, and it does exist in Venix/86 2.1, which is also a V7 Unix derivative, the PDP-11 and 8086 C compilers have different lineages and Venix's V7 PDP-11 compiler definitely doesn't support it. I suspect this may not have been intended, because unsigned int works (unsigned long would be pointless on this architecture, and indeed correctly generates Misplaced 'long' on both versions of PRO/VENIX). Regardless of why, however, the plain char type on the PDP-11 is signed, and for compatibility reasons here we'll have no choice but to use it. Recall that when C89 was being codified, plain char was left as an ambiguous type, since some platforms (notably PDP-11 and VAX) made it signed by default and others made it unsigned, and C89 was more about codifying existing practices than establishing new ones. That's why a test program that stores "negative" values in a plain char behaves one way on a modern 64-bit platform, e.g., my POWER9 workstation, where plain char is unsigned; if we change the type explicitly to signed char on the POWER9 Linux machine, the output changes; accounting for different sizes of int, PRO/VENIX V2.0 (again, which is System V) looks similar; but the exact same program on PRO/VENIX Rev. 2.0 behaves a bit differently still. The differences in int size we expect, but there are other kinds of weird stuff going on here. The PRO/VENIX manual lists all the various permutations of type conversions and what gets turned into what where, but since the manual is already wrong about unsigned char, I don't think we can trust the documentation for this part either. Our best bet is to move values into int and mask off any propagated sign bits before doing comparisons or math, which is agonizing but reliable. That means throwing around a lot of seemingly superfluous & 0xff to make sure we don't get negative numbers where we don't want them.

Once I got it built, however, there were lots of bugs. Many were because it turns out the compiler isn't too good with 32-bit long, which is not a native type on the 16-bit PDP-11. One piece of the NTP client worked on my regular Linux desktop, but didn't work in Venix. The first problem is that the intermediate shifts are too large and overshoot, even though they should be in range for a long. Consider the example sketched below, which assembles a 32-bit value with an 8-bit and then a 24-bit shift: on the POWER9 (accounting for the different semantics of %lx) it prints what you'd expect, but on Venix, the second shift blows out the value. We can get an idea of why from the generated assembly in the adb debugger (here from PRO/VENIX V2.0, since I could cut and paste from the Kermit session). (Parenthetical notes: csav is a small subroutine that pushes volatiles r2 through r4 on the stack and turns r5 into the frame pointer; the corresponding cret unwinds this. The initial branch in this main is used to reserve additional stack space, but is often practically a no-op.) The first shift is at ~main+024. Remember the values are octal, so 010 == 8. r0 is 16 bits wide — no 32-bit registers — so an eight-bit shift is fine. When we get to the second shift, however, it's the same instruction on just one register (030 == 24) and the overflow is never checked. In fact, the compiler never shifts the second part of the long at all. The result is thus zero. The second problem in this example is that the compiler never treats the constant as a long, even though statically there's no way it can fit in a 16-bit int.
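Since the original listing is easier to discuss with code in front of us, here is a reconstruction of its shape (mine, not the actual NTP client; the function name ntp2unix is hypothetical): pulling a 32-bit big-endian timestamp out of a packet buffer and rebasing it from the NTP epoch of 1900 to the Unix epoch of 1970, the difference being 2208988800 seconds.

    /* Sketch only. The naive version, fine on a modern compiler:

           t = (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];
           t -= 2208988800;

       On Venix the 24-bit shift happens in a single 16-bit register,
       and the epoch constant is silently treated as a 16-bit int.
       Staging the value through the long eight bits at a time, and
       keeping the constant in a long, sidesteps both bugs: */
    long ntp2unix(b)
    char *b;
    {
        long t, epoch;

        epoch = 2208988800L;           /* seconds from 1900 to 1970 */
        t = b[0] & 0xff;               /* masks also defeat sign extension */
        t = (t << 8) | (b[1] & 0xff);
        t = (t << 8) | (b[2] & 0xff);
        t = (t << 8) | (b[3] & 0xff);
        return t - epoch;
    }

Note the & 0xff masks doing double duty against the signed char problem from earlier.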
To get around those two gotchas on both Venices, I rewrote it along those lines, staging the shifts through the long and moving the epoch constant into a second long variable first. An alternative to a second variable is to explicitly mark the epoch constant itself as long, e.g., by casting it, which also works.

Here's another example for your entertainment. At least some sort of pseudo-random number generator is crucial, especially for TCP when selecting the pseudo-source port and initial sequence numbers; otherwise Slirp seemed to get very confused because we would "reuse" things a lot. Unfortunately, the typical idiom to seed it, srand(time(NULL)), doesn't work: srand() expects a 16-bit int but time(NULL) returns a 32-bit long, and it turns out the compiler only passes the 16 most significant bits of the time — i.e., the ones least likely to change — to srand(). The disassembly proves it (contents trimmed for display here; since this is a static binary, we can see everything we're calling). At the time we call the glue code for time from main, the value under the stack pointer (i.e., r6) is cleared immediately beforehand, since we're passing NULL (at ~main+06). We then invoke the system call, which per the Venix manual for time(2) uses two registers for the 32-bit result, namely r0 (high bits) and r1 (low bits). We passed a null pointer, so the values remain in those registers and aren't written anywhere (branch at _time+014). When we return to ~main+014, however, we only put r0 on the stack for srand (remember that r5 is being used as the frame pointer; see the disassembly I provided for csav) and r1 is completely ignored.

Why would this happen? It's because time(2) isn't declared anywhere in /usr/include or /usr/include/sys (the two C include directories), nor for that matter rand(3) or srand(3). This is true of both Rev. 2.0 and V2.0. Since the symbols are statically present in the standard library, linking will still work, but since the compiler doesn't know what it's supposed to be working with, it assumes int and fails to handle both halves of the long. One option is to manually declare everything ourselves. However, from the assembly at _time+016 we do know that if we pass a pointer, the entire long value will get placed there. That means we can also seed from the value time() stores through a pointer, as sketched below. This gets the lower bits, and there is sufficient entropy there for our purpose (though obviously not a cryptographically-secure PRNG). Interestingly, the Venix manual recommends using the time as the seed, but doesn't include any sample code.

At any rate, this was enough to make the pieces work for IP, ICMP and UDP, but TCP would bug out after just a handful of packets. As it happens, Venix has rather small serial buffers by modern standards: tty(7), based on the TIOCQCNT ioctl(2), appears to have just a 256-byte read buffer (sg_ispeed is only char-sized). If we don't make adjustments for this, we'll start losing framing when the buffer gets overrun, as in this extract from a test build with debugging dumps on and a maximum segment size/window of 512 bytes. Here, the bytes marked by dashes are from the remote end and the bytes separated by dots are what the SLIP driver is scanning for framing and/or throwing away; you'll note there is obvious ASCII data in them. If we make the TCP MSS and window on our client side 256 bytes, there is still retransmission, but the connection is more reliable since overrun occurs less often, and this seems to work better than a hard cap on the maximum transmission unit (e.g., "mtu 256") from SLiRP's side.
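Backing up for a second, the pointer-based seeding workaround amounts to this (a sketch, not the literal BASS code; the function name seedrand is mine, and note we deliberately avoid declaring anything, since the whole point is that the stored value is usable even when the undeclared return value isn't):

    /* Sketch only: seed the PRNG from the low bits of the time.
       srand(time(NULL)) would get only the useless high 16 bits here. */
    void seedrand()
    {
        long t;

        time(&t);        /* the full 32-bit time is stored via the pointer */
        srand((int)t);   /* long-to-int truncation keeps the changing low bits */
    }

Anyway, back to the buffer problem.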
The only consequence of dropping the TCP MSS and window size is that the TCP client is currently hard-coded to just send one packet at the beginning (this aligns with how you'd do finger, HTTP/1.x, gopher, etc.), and that datagram uses the same size, which necessarily limits how much can be sent. If I did the extra work to split this over several datagrams, it obviously wouldn't be a problem anymore, but I'm lazy and worse is better!

The connection can be made somewhat more reliable still by improving the SLIP driver's notion of framing. RFC 1055 only specifies that the SLIP end byte (i.e., $c0) occur at the end of a SLIP datagram, though it also notes that it was proposed very early on that it could also start datagrams — i.e., if two occur back to back, then it just looks like a zero-length or otherwise obviously invalid entity which can be trivially discarded. However, since there's no guarantee or requirement that the remote link will do this, we can't assume it either. We also can't just look for a $45 byte (i.e., IPv4 and a 20-byte header length) because that's an ASCII character and appears frequently in text payloads. However, $45 followed by a valid DSCP/ECN byte is much less frequent, and most of the time this byte will be either $00, $08 or $10; we don't currently support ECN (maybe we should) and we wouldn't find other DSCP values meaningful anyway. The SLIP driver uses these sequences to find the start of a datagram and $c0 to end it, as sketched below. While that doesn't solve the overflow issue, it means the SLIP driver will be less likely to go out of framing when the buffer does overrun, and thus can better recover when the remote side retransmits.

And, well, that's it. There are still glitches to bang out, but it's good enough to grab Hacker News. To try it yourself, build Slirp-CK first: enter the src/ directory, run configure and then run make (parallel make is fine, I use -j24 on my POWER9). Connect your two serial ports together with a null modem, which I assume will be /dev/ttyUSB0 and /dev/ttyUSB1. Start Slirp-CK with a command line like ./slirp -b 4800 "tty /dev/ttyUSB1", adjusting the baud and path to your serial port, and take note of the virtual and nameserver addresses it reports. Unlike the given directions, you can just kill it with Control-C when you're done; the five zeroes are only if you're running your connection over standard output, such as direct shell dial-in (this is a retrocomputing blog, so some of you might).

To see the debug version in action, next go to the BASS directory and just do a make. You'll get a billion warnings, but it should still work with current gcc and clang because I specifically request -std=c89. If you use a different path for your serial port (i.e., not /dev/ttyUSB0), edit slip.c before you compile. You don't do anything like ifconfig with these tools; you always provide the tools the client IP address they'll use (or create an alias or script to do so). Try an initial ping of the Slirp host, with slirp already running. Because I'm super-lazy, you separate the components of the IPv4 address with spaces, not dots, and in Slirp-land 10.0.2.2 is always the host you are connected to, so that's ping 10 0 2 2. You can see the ICMP packet being sent, the bytes being scanned by the SLIP driver for framing (the ones with dots), and then the reply (with dashes). These datagram dumps have already been pre-processed for SLIP metabytes. Unfortunately, you may not be able to ping other hosts through Slirp because there's no backroute, but you could try this with a direct SLIP connection, an exercise left for the reader.
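For the curious, the receive-side framing scan described above looks roughly like this (a sketch of the approach, not the actual slip.c; getser() is a hypothetical blocking one-byte read from the serial port, and the function name slip_recv is mine):

    /* Sketch only: hunt for a plausible IPv4 datagram start ($45 then
       $00, $08 or $10), then collect bytes until SLIP_END ($c0),
       undoing the two escape sequences. Returns the byte count. */
    int slip_recv(buf, max)
    char *buf;
    int max;
    {
        int c, prev, n;

        prev = -1;
        for (;;) {
            c = getser() & 0xff;
            if (prev == 0x45 && (c == 0x00 || c == 0x08 || c == 0x10))
                break;              /* looks like version/IHL plus DSCP/ECN */
            prev = c;               /* anything else gets scanned past */
        }
        buf[0] = 0x45;
        buf[1] = c;
        n = 2;
        while (n < max) {
            c = getser() & 0xff;
            if (c == 0xc0)          /* SLIP_END: datagram complete */
                break;
            if (c == 0xdb)          /* SLIP_ESC: map the next byte back */
                c = (getser() & 0xff) == 0xdc ? 0xc0 : 0xdb;
            buf[n++] = c;
        }
        return n;
    }

If the buffer overruns mid-datagram, the worst case is that we fall back into the hunt loop and throw bytes away until the next plausible header, which is exactly the recovery behavior described above.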
If Slirp doesn't want to respond and you're sure your serial port works (try testing both ends with Kermit?), you can recompile it with -DDEBUG (change this in the generated Makefile) and pass your intended debug level like -d 1 or -d 3. You'll get a file called slirp_debug with some agonizingly detailed information so you can see if it's actually getting the datagrams and/or liking the datagrams it gets. For nslookup, ntp and minisock, the second address becomes your accessible recursive nameserver (or use -i to provide an IP). The DNS dump is also given in the debug mode, with slashes for the DNS answer section. nslookup and ntp are otherwise self-explanatory; minisock takes a server name (or IP) and port, followed by optional strings. The strings, up to 255 characters total (in this version), are immediately sent with CR-LFs between them unless you specify -n. If you specify no strings, none are sent. It then waits on that port for data and exits when the socket closes. This is how we did the HTTP/1.0 requests in the screenshots.

On the DEC Pro, this has been tested on my trusty DEC Professional 380 running PRO/VENIX V2.0. It should compile and run on a 325 or 350, and on at least PRO/VENIX Rev. 2.0, though I don't have any hardware for this, and Xhomer's serial port emulation is not good enough for this purpose (so unfortunately you'll need a real DEC Pro until I or Tarek get around to fixing it). The easiest way to get it over there is Kermit. Assuming you have this already, connect your host and the Pro on the "real" serial port at 9600bps. Make sure both sides are set to binary and just push all the files over (except the Markdown documentation unless you really want it), and then do a make -f Makefile.venix (it may have been renamed to makefile.venix; adjust accordingly). Establishing the link is as simple as connecting your server's serial port to the other end of the BCC05 or equivalent from the Pro and starting Slirp to talk to that port (on my system, it's even the same port, so the same command line suffices). If you experience issues with the connection, the easiest fix is to just bounce Slirp — because there are no timeouts, there are also no retransmits. I don't know if this is hitting bugs in Slirp or in my code, though it's probably the latter. Nevertheless, I've been able to run stuff most of the day without issue. It's nice to have a simple network option and the personal satisfaction of having written it myself.

There are many acknowledged deficiencies, mostly because I assume little about the system itself and tried to keep everything very simplistic. There are no timeouts and thus no retransmits, and if you break the TCP connection in the middle there will be no proper teardown. Also, because I used Slirp for the other side (as many others will), and because my internal network is full of machines that have no idea what IPv6 is, there is no IPv6 support. I agree there should be, and SLIP doesn't care whether it gets IPv4 or IPv6, but for now that would require patching Slirp, which is a job I just don't feel up to at the moment. I'd also like to support at least CSLIP in the future. In the meantime, if you want to try this on other operating systems, the system-dependent portions are in compat.h and slip.c, with a small amount in ntp.c for handling time values. You will likely want to make changes to where your serial ports are, the speed they run at, and how to make that port "raw" in slip.c.
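For reference, on a modern Linux or BSD host the "raw" part would look something like this (a sketch only; the function name open_slip_port, the device path and the speed are all placeholders to adjust, and the Venix side naturally uses its own older terminal interface rather than termios):

    #include <fcntl.h>
    #include <termios.h>

    /* Sketch only: open a serial port for SLIP use. Raw mode means no
       echo and no line discipline mangling our metabytes. */
    int open_slip_port(const char *path)
    {
        struct termios tio;
        int fd;

        fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);             /* 8-bit clean, no echo, no signals */
        cfsetispeed(&tio, B4800);
        cfsetospeed(&tio, B4800);
        tio.c_cc[VMIN] = 1;          /* block until at least one byte */
        tio.c_cc[VTIME] = 0;
        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }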
You should also add any extra #includes to compat.h that your system requires. I'd love to hear about it running other places. Slirp-CK remains under the original modified Slirp license and BASS is under the BSD 2-clause license. You can get Slirp-CK and BASS at Github.

Transactions are a protocol

Transactions are not an intrinsic part of a storage system. Any storage system can be made transactional: Redis, S3, the filesystem, etc. Delta Lake and Orleans demonstrated techniques to make S3 (or cloud storage in general) transactional. Epoxy demonstrated techniques to make Redis (and any other system) transactional. And of course there's always good old Two-Phase Commit. If you don't want to read those papers, I wrote about a simplified implementation of Delta Lake and also wrote about a simplified MVCC implementation over a generic key-value storage layer.

It is both the beauty and the burden of transactions that they are not intrinsic to a storage system. Postgres and MySQL and SQLite have transactions. But you don't need to use them. It isn't possible to require you to use transactions. Many developers, myself a few years ago included, do not know why you should use them. (Hint: read Designing Data Intensive Applications.) And you can take it even further by ignoring the transaction layer of an existing transactional database and implementing your own transaction layer, as Convex has done (the Epoxy paper above also does this). It isn't entirely clear that you have a lot to lose by implementing your own transaction layer, since the indexes you'd want on the version field of a value would only be as expensive or slow as any other secondary index in a transactional database. Though why you'd do this isn't entirely clear either (I would like to read about this from Convex some time).

It's useful to see transaction protocols as another tool in your system design tool chest when you care about consistency, atomicity, and isolation, especially as you build systems that span data systems. Maybe, as Ben Hindman hinted at the last NYC Systems, even proprietary APIs will eventually provide something like two-phase commit so physical systems outside our control can become transactional too.

Humanities Crash Course Week 16: The Art of War

In week 16 of the humanities crash course, I revisited the Tao Te Ching and The Art of War. I re-read the Tao Te Ching just last year, so I only revisited my notes now. I've also read The Art of War a few times, but decided to revisit it now anyway.

Readings

Both books are related. The Art of War is older; Sun Tzu wrote it around 500 BCE, at a time when war was becoming more "professionalized" in China. The book aims to convey what had (or hadn't) worked in the battlefield. The starting point is conflict. There's an enemy we're looking to defeat. The best victory is achieved without engagement. That's not always possible, so the book offers pragmatic suggestions on tactical maneuvers and such. It gives good advice for situations involving conflict, which is why it has influenced leaders (including businesspeople) throughout the centuries:

It's better to win before any shots are fired (i.e., through cunning and calculation).
Use deception.
Don't let conflicts drag on.
Understand the context to use it to your advantage.
Keep your forces unified and disciplined.
Adapt to changing conditions on the ground.
Consider economics and logistics.
Gather intelligence on the opposition.

The goal is winning through foresight rather than brute force — good advice!

The Tao Te Ching, written by Lao Tzu around the late 4th century BCE, is the central text in Taoism, a philosophy that aims for skillful action by aligning with the natural order of the universe — i.e., doing through "non-doing" and transcending distinctions (which aren't present in reality but layered onto experiences by humans). Tao means Way, as in the Way to achieve such alignment. The book is a guide to living the Tao. (Living in Tao?) But as it makes clear from its very first lines, you can't really talk about it: the Tao precedes language. It's a practice — and the practice entails non-striving.

Audiovisual

Music: Gioia recommended the Beatles (The White Album, Sgt. Pepper's, and Abbey Road) and the Rolling Stones (Let It Bleed, Beggars Banquet, and Exile on Main Street). I'd heard all three Rolling Stones albums before, but don't know them by heart (like I do with the Beatles). So I revisited all three. Some songs sounded a bit cringe-y, especially after having heard "real" blues a few weeks ago. Of the three albums, Exile on Main Street sounds more authentic. (Perhaps because of the band members' altered states?) In any case, it sounded most "in the Tao" to me — that is, as though the musicians surrendered to the experience of making this music. It's about as rock 'n roll as it gets.

Arts: Gioia recommended looking at Chinese architecture. As usual, my first thought was to look for short documentaries or lectures on YouTube. I was surprised by how little there was. Instead, I read the webpage Gioia suggested.

Cinema: Since we headed again to China, I took in another classic Chinese film that had long been on my to-watch list: Wong Kar-wai's IN THE MOOD FOR LOVE. I found it more Confucian than Taoist, although its slow pacing, gentleness, focus on details, and passivity strike something of a Taoist mood.

Reflections

When reading the Tao Te Ching, I'm often reminded of this passage from the Gospel of Matthew: No man can serve two masters: for either he will hate the one, and love the other; or else he will hold to the one, and despise the other. Ye cannot serve God and mammon. Therefore I say unto you, Take no thought for your life, what ye shall eat, or what ye shall drink; nor yet for your body, what ye shall put on.
Is not the life more than meat, and the body than raiment? Behold the fowls of the air: for they sow not, neither do they reap, nor gather into barns; yet your heavenly Father feedeth them. Are ye not much better than they? Which of you by taking thought can add one cubit unto his stature? And why take ye thought for raiment? Consider the lilies of the field, how they grow; they toil not, neither do they spin: And yet I say unto you, That even Solomon in all his glory was not arrayed like one of these. Wherefore, if God so clothe the grass of the field, which to day is, and to morrow is cast into the oven, shall he not much more clothe you, O ye of little faith? Therefore take no thought, saying, What shall we eat? or, What shall we drink? or, Wherewithal shall we be clothed? (For after all these things do the Gentiles seek:) for your heavenly Father knoweth that ye have need of all these things. But seek ye first the kingdom of God, and his righteousness; and all these things shall be added unto you. Take therefore no thought for the morrow: for the morrow shall take thought for the things of itself. Sufficient unto the day is the evil thereof.

The Tao Te Ching is older and from a different culture, but "Consider the lilies of the field, how they grow; they toil not, neither do they spin" has always struck me as very Taoistic: both texts emphasize non-striving and putting your trust in a higher order. Even though it's even older, that spirit is also evident in The Art of War. It's not merely letting things happen, but aligning mindfully with the needs of the time. Sometimes we must fight. Best to do it quickly and efficiently. And best yet if the conflict can be settled before it begins.

Notes on Note-taking

This week, I started using ChatGPT's new o3 model. Its answers are a bit better than what I got with previous models, but there are downsides. For one thing, o3 tends to format answers in tables rather than lists. This works well if you use ChatGPT in a wide window, but is less useful on a mobile device or (as in my case) in a narrow window to the side. This is how I usually use ChatGPT on my Mac: in a narrow window. o3's responses often include tables that get cut off in this window. For another, replies take much longer as the AI does more "research" in the background. As a result, it feels less conversational than 4o — which changes how I interact with it. I'll play more with o3 for work, but for this use case, I'll revert to 4o.

Up Next

Gioia recommends Apuleius's The Golden Ass. I've never read this, and frankly feel weary about returning to the period of Roman decline. (Too close to home?) But I'll approach it with an open mind. Again, there's a YouTube playlist for the videos I'm sharing here. I'm also sharing these posts via Substack if you'd like to subscribe and comment. See you next week!

My approach to teaching electronics

Explaining the reasoning behind my series of articles on electronics -- and asking for your thoughts.
