Now that our 1987 Canon Cat is refurbished and ready to go another nine innings or so, it's time to get into the operating system and pull some tricks. As you'll recall from our historical discussion of the Canon Cat, the Cat was designed by Jef Raskin as a sophisticated user-centric computer but demoted to an office machine within Canon's typewriter division, which was tasked with selling it. Because Canon only ever billed the Cat as a "work processor" for documents and communications, and then abruptly tanked it after just six months, no commercially produced software packages were ever released for it. In fact, I can't find any software written for it other than the original Tutor and Demo diskettes included with the system and a couple of Canon-specific utilities, which I don't have and which don't seem to be imaged anywhere. So this entry will cover a lot of ground: first, we have to be able to reliably read, image and write Canon disks on another system, then decipher the format, and then patch those...


More from Old Vintage Computing Research

The April Fools joke that might have got me fired

Everyone should pull one great practical joke in their lifetimes. This one was mine, and I think it's past the statute of limitations. The story is true. Only the names are redacted to protect the guilty.

My first job out of college was as a database programmer, even though my undergraduate degree had nothing to do with computers and my current profession still mostly doesn't. The reason was that the University I worked for couldn't afford competitive wages, but they did offer various fringe benefits, and they were willing to train someone who at least had decent working knowledge. I, as a newly minted graduate of the august University of California system, had decent working knowledge at least of BSD/386 and SunOS, but more importantly also had the glowing recommendation of my predecessor, who was being promoted into a new position. I was hired, which was their first mistake.

The system I was hired to work on was an HP 9000 K250, one of Hewlett-Packard's big PA-RISC servers. I wish I had a photograph of it, but all I have are a couple of bad scans of some bad Polaroids of my office and none of the server room. The server room was downstairs from my office back in the days when server rooms were on-premises, complete with a swipe card lock and a halon system that would give you a few seconds of grace before it flooded everything. The K250 hulked in there where it had recently replaced what I think was an Encore mini of some sort (probably a Multimax, since it was a few years old and the 88K Encores would have been too new for the University), along with the AIX RS/6000s that provided student and faculty shell accounts and E-mail, the bonded T1 lines, some of the terminal servers, the massive Cabletron routers and a lot of the telco stuff. One of the tape reels from the Encore hangs on my wall today as a memento.
The K250 and the Encore it replaced (as well as the L-Class that later replaced the K250 when I was a consultant) ran an all-singing, all-dancing student information system called CARS. CARS is still around, renamed Jenzabar, though I suspect that many of its underpinnings remain if you look under the hood. In those days CARS was a massive overlay that was loaded atop the operating system and database, which when I started were, respectively, HP-UX 10.20 and Informix. (I'm old.) It used Informix tables, screens and stored procedures plus its own text UI libraries to run code written variously as Perform screens, SQL, C-shell scripts and plain old C or ESQL/C. Everything was tracked in RCS using overgrown Makefiles. I had the admin side (resource management, financials, attendance trackers, etc.) and my office partner had the academic side (mostly grades and faculty tracking). My job was to write and maintain this code and shortly after to help the University create custom applications in CARS' brand-spanking new web module, which chose the new hotness in scripting languages, i.e., Perl. Fortuitously I had learned Perl in, appropriately enough, a computational linguistics course.

CARS also managed most of the printers on campus except for the few that the RS/6000s controlled directly. Most of the campus admin printers were HP LaserJet 4 units of some derivation equipped with JetDirect cards for networking. These are great warhorse printers, some of the best laser printers HP ever made. I suspect there were line printers other places, but those printers were largely what existed in the University's offices. It turns out that the READY message these printers show on their VFD panels is changeable.
I don't remember where I read this, probably idly paging through the manual over a lunch break, but initially the only fun thing I could think of to do was to have the printer say hi to my boss when she sent jobs to it, stuff like that (whereupon she would tell me to get back to work). Then it dawned on me: because I had access to the printer spools on the K250, and the spool directories were conveniently named the same as their hostnames, I knew where each and every networked LaserJet on campus was. I was young, rash and motivated. This was a hack I just couldn't resist. It would be even better than what had been my favourite joke at my alma mater, where campus services, notable for posting various service suspension notices, posted one April Fools' Day that gravity itself would be suspended to various buildings. I felt sure this hack would eclipse that too.

The plan on April Fools' Day was to get into work at OMG early o'clock and iterate over every entry in the spool, sending each printer a sequence that would change the READY message to INSERT 5 CENTS. This would cause every networked LaserJet on campus to appear to ask for a nickel before you printed anything. The script was very simple (this is the actual script, I saved it): The ^[ was a literal ASCII 27 ESCape character, and netto was a simple netcat-like script I had written in these days before netcat was widely used. That's it. Now, let me be clear: the printer was still ready! The effect was merely cosmetic! It would still print if you sent jobs to it! Nevertheless, to complete the effect, this message was sent out on the campus-wide administration mailing list (which I also saved): At the end of the day I would reset everything back to READY, smile smugly, and continue with my menial existence. That was the plan. Having sent this out, I fielded a few anxious calls from people who laughed uproariously when they realized, and I reset their printers manually afterwards.
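As a rough modern sketch of the same trick (this is not the original script; Python, the helper names, the spool path and raw printing to JetDirect port 9100 are all my assumptions), the PJL RDYMSG command that LaserJet 4-era printers understand does exactly what's described: it rewrites the front-panel READY text without affecting printing.

```python
import os
import socket

# Universal Exit Language sequence: the literal ESC (ASCII 27) the
# original script embedded, followed by "%-12345X", starts a PJL job.
UEL = b"\x1b%-12345X"

def rdymsg_payload(message: str) -> bytes:
    """Build a PJL job that changes the printer's READY display text."""
    return (UEL
            + b'@PJL RDYMSG DISPLAY = "' + message.encode("ascii") + b'"\r\n'
            + UEL)

def set_ready_message(host: str, message: str, port: int = 9100) -> None:
    """Send the payload to one printer's JetDirect card (raw port 9100).
    Purely cosmetic: the printer still accepts and prints jobs."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(rdymsg_payload(message))

def prank_all(spool_dir: str = "/usr/spool/lp") -> None:
    """Iterate over spool directories named after printer hostnames,
    much as the original script iterated over the K250's spool.
    (The spool path here is illustrative.)"""
    for host in os.listdir(spool_dir):
        set_ready_message(host, "INSERT 5 CENTS")
```

Sending the same payload with DISPLAY = "READY" (or power-cycling the printer) restores the panel, which is what the end-of-day reset amounted to.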
The people who knew me knew I was a practical joker; they took note of the date and sent approving replies. One of the best was sent to me later in the day by intercampus mail, printed on their laser printer, with a nickel taped to it. Unfortunately, not everybody on campus knew me, and those who did not, not only did not call me, but instead called university administration directly. By 8:30am it was chaos in the main office and this filtered up to the head of HR, who most definitely did know me, and told me I'd better send a retraction before the CFO got in or I was in big trouble. That went wrong also, because my retraction said that campus administration was not considering charging per-page fees when in fact they actually were, so I had to retract it and send a new retraction that didn't call attention to that fact. I also ran the script to reset everything early. The hubbub finally settled down around noon. Everybody in the office thought it was very funny. Even my boss, who officially disapproved, thought it was somewhat funny.

The other thing that went wrong, as if all that weren't enough, was that the director of IT — which is to say, my boss's boss — was away on vacation when all this took place. (Read E-mail remotely? Who does that?) I compounded this situation with the tactical error of going skiing over the coming weekend and part of the next week, most of which I spent snowplowing down the bunny slopes face first, so that he discovered all the angry E-mail in his box without me around to explain myself. (My office partner remembers him coming in wide-eyed asking, "what did he do??") When I returned, it was icier in the office than it had been on the mountain. The assistant director, who thought it was funny, was in trouble for not putting a lid on it, and I was in really big trouble for doing it in the first place.
I was appropriately contrite and made various apologies and was an uncharacteristically model employee for an unnaturally long period of time. The Ice Age eventually thawed and the incident was officially dropped except for a "poor judgment" on my next performance review and the satisfaction of what was then considered the best practical joke ever pulled on campus. Indeed, everyone agreed it was much more technically accomplished than the previous award winner, where someone had supposedly spread it around campus that the security guards at the entrance would be charging a nominal admission fee per head. Years later they still said it was legendary. I like to think they still do.

More pro for the DEC Professional 380 (featuring PRO/VENIX)

In computing, the DEC PDP-11 is something of a geologic feature. Plus, as most systems in the family were minicomputers, they had the whole monolith thing going for them too (minus murderous apes and sucking astronauts into hyperspace). Its fame is even more notable given that Digital Equipment Corporation was among the last major computer companies to introduce a 16-bit mini architecture, beaten by the IBM 1130 (1965), HP 2116A (1966), TI-960 (1969) and Data General Nova (1969) — itself a renegade offshoot of the "PDP-X" project which DEC president Ken Olsen didn't support and even cancelled in 1968 — leaving DEC to bring up the rear with the PDP-11/20 in 1970.

So it shouldn't be a surprise that DEC, admittedly like many fellow mini makers, was similarly retrograde when it officially entered the personal computer market in 1982. At least on paper the DEC Rainbow was reasonable enough: CP/M was still a thing and MS-DOS was just newly a thing, so Digital put an 8088 and a Z80 inside so it could run both. On the other hand, the DECmate II, ostensibly part of the venerable PDP-8 family, was mostly treated as a word processor and office machine; its operating system was somewhat crippled and various bugs hampered compatibility with earlier software. You could put a Z80 or an 8086 in it and run CP/M and MS-DOS (more or less), but it wasn't a PC, and its practical utility as a micro-PDP didn't fully match the promise: the two Professionals, nominally micro-PDP-11s themselves, ran little that PC buyers wanted to run and little that existing PDP users did. Still, despite questionable technical choices, these machines (the Pros in particular) are some of the most well-built computers of the era. Indeed, they must have sold in some quantity to justify the Pro getting another shot as a high end system. Here's the apex of the line, the 1984 DEC Professional 380.

DEC wasn't alone in putting a mini architecture into a micro: Texas Instruments' TMS9900 CPU, used in the TI 99/4 and 99/4A home computers (and their close cousin, the Tomy Tutor), descended from TI's 990 minicomputer line. The 99/4, too, was never sold as a successor to the 990; later 990 hardware even used 9900-series CPUs directly.
However, TI got greedy and shortsightedly repulsed third-party development, while the 9900 architecture had what turned out to be a fatal dependence on RAM speed and became a technological dead end. More problems occurred after the IBM PC scrambled the landscape and mini vendors tried touting their smaller "microminis" as upmarket alternatives, though these systems were deliberately less powerful than and sometimes mostly or totally incompatible with their big systems to avoid cannibalizing high-end sales. Their prices were likewise uncompetitive, so newer cost-sensitive customers continued buying cheaper PC-compatibles while legacy customers were unhappy their existing software might not work. Attempts to entice that low end by adding more typical microcomputer CPUs as compatibility options, usually the 8086 or 8088, simply made them into poor PCs that cost even more. Data General was one notorious instance as they repeatedly failed to parlay their successful Nova into smaller offerings, first with the poorly-received 1977 microNOVA, and later with the microECLIPSE in the bizarre 1983 Desktop Generation modular machines. While Data General claimed it could run everything the bigger MV hardware could, such software had to be converted by vendors "with standard software [in] a few hours" (PC Magazine 11/83), and its PC compatibility side was unable to run major applications like Lotus 1-2-3 without patches. Given how expensive the DG was, most developers didn't bother and most potential customers didn't either. For its part, although early rumours talked about a small System/370, IBM never turned any of their mainframes or minis into commodity microcomputers except for various specialized add-on boards, and the 5150 PC itself was all off-the-shelf. Surprisingly, DEC was already in this segment, sort of, albeit half-heartedly and in small quantities. 
As far back as 1974, an internal skunkworks unit had presented management with two small systems prototypes described as a PDP-8 in a VT50 terminal and a portable PDP-11 chassis. Engineers were intrigued but sales staff felt these smaller versions would cut into their traditional product lines, and Olsen duly cancelled the project, famously observing no one would want a computer in their home. Team member David Ahl was particularly incensed and quit DEC in frustration, going on to found Creative Computing. A charitable interpretation says Olsen may have been referring to the size and state of computers at the time, and most people probably wouldn't have wanted one of those in their house, but it wasn't very future-thinking to imply they'd always stay that way. Olsen reiterated to the World Future Society in 1977 that "[t]here is no reason for any individual to have a computer in their home," later arguing in various retrospectives that he meant no one would want a computer at home controlling everything. True to his words, both Ken Olsen and Gordon Bell reportedly had terminals in their residences but no standalone systems.

By late 1981 there were signs in Ahl's own Creative Computing that Olsen's attitude was changing, possibly goaded by the IBM PC's wildly successful launch in August or maybe when Ahl said "his [Olsen's] daughter begged for a computer at home," adding DEC's new low-end computer would be "based on the venerable PDP-8." (This became, of course, the DECmate II.) Olsen subsequently told investors in November to expect a "DEC personal computer" (his words), adding "we are not planning to go after the home computer market" but that it would be "equivalent" to the IBM PC. In early 1982 he intimated further that there would indeed be an updated DECmate soon, plus two new lower-cost "16-bit" systems.
This project, administered by DEC's new Small Systems Group (SSG), was internally referred to as the "XT" Computer Terminal (not to be confused with the IBM PC/XT, which IBM didn't release until March 1983). In May 1982 DEC announced four microcomputers: the DECmate II and the two Professionals, as predicted, but also the previously unannounced DEC Rainbow 100 aimed directly at the IBM PC 5150. In his remarks at the press conference, Ken Olsen called DEC's personal computer initiative "the largest investment in people and manpower" the company had made in its 25 years of existence. All four were superficially related units whose industrial design was strongly based on the DEC XT prototype. Each system used similar or identical cases, the same monitors, the same floppy drives, the same 103-key keyboard and mostly the same cables, though their guts of course were in some cases very different.

To industry observers' surprise, DEC uncharacteristically seemed to price the DECmate II and Rainbow to sell. The now 8MHz DECmate II had better hardware yet went for much less than the DECmate ($3745 [$12,600] compared to 1982's later price of $5845 [$19,700]), even considering it needed its own keyboard and monitor. However, what really startled people was the Rainbow. Introduced at $3495 [$11,700], it had already been slashed to $2675 [$9000] by the fall. At a time when a reasonably configured IBM PC system might itself set you back around $3500 in late 1982, with CPU, monochrome display, base 64K RAM and 180K floppy drive(s), a comparable Rainbow setup with 64K, baseline 400K RX50 5.25" dual drives and serial port, hybrid CP/M-86/80, LK201 keyboard and VR201 monochrome display came in at just over $3700 at the later pricing, with MS-DOS support on the way. Although the barebones 16K IBM PC started at $1565 [$5260], DEC wisely predicted most people would opt for the larger RAM.
Even the higher-end Professionals seemed to be aggressively priced, though admittedly mostly when compared to their mini ancestors, starting with the lowest-spec 325 at $3995 [$13,400]. Their 256K base of RAM came from the original 128K plus the 128K upgrade (both as daughtercards) which was now standard. The Professional 325, sold with four CTI slots and no hard disk option, was proudly cited in DEC's Professional Handbook as "the lowest cost PDP-11 system ever produced." However, the two upper tiers got expensive quickly: the midrange configuration, a 350 without a hard disk, was otherwise the same except for its six CTI slots and sold for $4995 [$16,800], and adding the 5MB hard disk and controller card pushed the total bill to $8495 [$28,500]. The bloom started coming off the rose when manufacturing problems delayed sales; the 325 and 350 didn't hit the market until almost December, and the Rainbow was stalled for months. Even when people could buy them, DEC's unusual design choices started coming home to roost. None of the systems could format their own floppy disks with their shipping operating systems, leading to user revolt when they were expected to buy preformatted media from DEC directly; officially only later versions of CP/M and MS-DOS for the Rainbow could do this, and some Pro users bought a Rainbow specifically for the purpose. While the Rainbow could read and write specially formatted MS-DOS floppies, its native format was the incompatible (but higher capacity) RX50's, and unlike the 5150 PC its text mode acted like a VT220 terminal with initially no graphics support of any kind — or ISA slots to add some. (Graphics options later became available.) More ominously for the future, the first version of the Rainbow 100 couldn't boot from a hard drive. 
As for the DECmate II, it was less expensive and more expandable but to buyers' displeasure somewhat less functional than its predecessor, affected by irregularities in the new OS/278 and compatibility problems with older programs. It also offered no built-in options for acting as a standalone terminal (only software), which was a common use for the DECmate I in office environments. Meanwhile, power users found the 325 and 350 slow and ponderous compared to larger PDPs and objected to the Pro family's inability to run traditional PDP-11 environments. Reviewers appreciated the solid build quality and modular design, but complained the machines were heavy and the fans and hard drive were unusually loud. While Pro high-resolution graphics were considered by all to be extremely good, even making an appearance at SIGGRAPH, they were slow to update and scroll, and made using the menu-driven P/OS (based on a modified port of the real-time RSX-11M Plus) feel even more sluggish. Consistent with the XT's office aims, DEC had promised early adopters a port of VisiCalc and a word processing package, but by summer 1983 little application software of any sort was available, which caused retailers and buyers to start cancelling orders. Dan Bricklin, who himself led the P/OS ports of VisiCalc and TK!Solver, blamed P/OS's large memory footprint in particular and cited poor developer relations with DEC generally. In the fall DEC leadership concluded their personal computer strategy was failing and SSG VP Andrew Knowles resigned in September 1983, the sixth vice-president to leave Digital in two years. Analysts cited high prices, ineffective marketing, weak distribution channels and missing software applications, as well as the Pro's substantial production delays which allowed the IBM PC to flourish at its expense.
For its part, the Rainbow's unique architecture became an increasing liability as current software now assumed a true PC compatible and directly accessed the hardware, hampering it from running all but simple or well-behaved DOS applications. In April 1984 DEC introduced the Rainbow 100B, adding hard disk support, improved PC hardware compatibility and twice the base memory at the same price, along with a RAM expansion for both the original Rainbow 100 (now the 100A) and the new 100B, and additional software telecommunications options. On the very low end DEC also shrunk the DECmate II into the less-expensive DECmate III, reducing its options, size and clock speed. Likely as the quickest means to market, DEC contracted VenturCom to port Venix-11 to the Pro 350 as its official Unix option, adding specific support for the Pro video hardware in its graphics library and including Venix's more unusual features like real-time programming, shared data segments, semaphores and code mapping (to dynamically page executable code in and out of main memory on small systems). We'll talk about why these features were important to DEC when we get to using Venix proper. The result was PRO/VENIX, announced in June 1984, which even included a future-facing UNIX System V license from AT&T. (Later on DEC also commissioned Pro ports of XENIX and Whitesmiths Idris, though it's unclear if these were actually completed or sold.)

How I wound up with a 380 is a little more prosaic. In 2013 I got contacted — I don't recall now exactly how — about a storage unit owned by a recently deceased individual "with a lot of old computers" that had to be cleaned out, and would I like to see what was there before the scrappers came? (I'm happy to do this and save or re-home any systems I can, but here's a protip: if you value your collection, don't ever let it get this far.) Whatever I could haul away was mine and what we couldn't haul away went to recycling that afternoon.
So I rented a van and talked my friend Jon into coming along as extra muscle and we drove out to Pasadena to have a look. (Watch ALGORITHM: The Hacker Movie, written and directed by Jon, streaming free on YouTube! And don't miss his new short-form series, The Difference Engines. First episode drops March 17.) The biggest machine there was a PDP-11/44 in a BA11-A enclosure with a 1350W power supply. We selected this one because it looked like it might work and we could actually get it on the dolly into the van. Besides the clear monster hard disk and the 11/44, the other things we hauled away were an RL02 and whatever hard disk packs we could fit, approximately one metric crap-ton of paper tape, some documentation (including, it turned out, a mostly complete set of Venix manuals for the 350), a couple AUI Ethernet hubs, various random cables, two VAXstation 100s (I told you I didn't know much about DEC hardware at the time), a somewhat thrashed VT100 terminal I thought I could restore, and, relevant to this post, a DECmate II, a VR201 green screen monitor, a VR241 colour monitor, and — ta-daa! — two DEC Pro 380s. Everything was dirty and gritty but other than the whacked VT they were intact, and neither of the monitors had evidence of the "mold spots" or "cataracts" that sometimes afflict these models. That was all we could rescue and there was no time for a second trip or to call anyone else; the scrappers were already pulling up as we tried to get the van door closed. Over time, I am relieved to say that to the best of my knowledge none of these items ended up in the skip, or at least not on my account.

Unlike the other VAXstations (like the VAXstation 3100 M76 I have, currently my only VAX or VMS system), the VAXstation 100 is in fact a graphical terminal that requires a UNIBUS tether, and is of some specific historical interest (which I was unaware of at the time) because it was part of the transition from the W Window System to X.
It has its own 68000 CPU, but it is not a standalone machine, and they turned out to be useless to me because I had no idea how to hook them up to the 11/44. Fortunately I found a home for them. In the end we didn't have the space and I was concerned we didn't have the power to actually fire up the 11/44 either, nor did we ever determine what that monster hard disk went with, so the PDP, the monster, the RL02, the disk packs, the beat-up VT100 and the rat's nest of paper tape went to someone else. I kept the monitors, cables, documentation and the remaining computers and tracked down an LK201 keyboard, and if I get around to finding or making boot disks for it, the DECmate II may be the subject of a future article.

A word of warning about the monitor and keyboard cables: never connect them with the system turned on, or an inadvertent short could fry any attached keyboard. For the monochrome VR201 display, plug a BCC02 cable into the Pro, plug the RJ-type cable from the LK201 into the VR201, and plug the other end of the BCC02 cable into the VR201. A straight-thru female-to-female DA-15 cable (the same connector type used for earlier PC joystick ports or old Mac monitors) substitutes easily and also works for the DECmate II. In this configuration the monitor and keyboard are powered by the computer. The monochrome video signal can also be displayed on just about any composite monitor. For the larger VR241 colour display, plug a BCC03 cable into the Pro, plug the RJ-type cable from the LK201 into the box at the other end of the cable, and attach the BNC R, G and B cables to the appropriate connectors on the monitor. The VR241 additionally requires its own source of wall power, and the BCC03 should not be used with the Rainbow (it uses the BCC17). We'll be using the VR241 here; I've since allocated the VR201 to the DECmate II.

On the far right in this picture are the status LEDs. The rightmost green LED indicates good power (got a DCOK signal from the power supply) and should stay lit.
If this light doesn't come on or the system won't power up generally, check the internal circuit breaker first. The other four red LEDs (numbered 4-1, left to right) start lit and then go out in various order as internal system tests are passed. If any remain lit, there was a fault. In broad strokes, LED 4 indicates which type of error and the other three LEDs are an error code in binary, with LED 1 being the least significant bit. If LED 4 is out, then the system is indicating a problem with a particular slot (e.g., LED 3 and LED 2 on means physical slot 6 is bad). If LED 4 is on, then the codes from 1-7 (0 is unpossible) mean: the keyboard failed or was not detected, no boot device could be found, no monitor cable was connected, both logic board memory banks are bad, the low bank of memory is bad, the high bank of memory is bad, or the system module has failed completely (i.e., all LEDs on and will not extinguish). An on-screen display may give more information and/or additional error codes if enough of the system is working. We'll see an example of that a little later.

One quirk of the RX50 dual floppy drive is that the diskette in the second slot goes in upside down. This confused enough people using the Rainbow in particular that DEC eventually put a guide direction arrow inside later units, though this earlier example doesn't have it.

DEC's first PDP-11 microprocessors, the LSI-11 family, were built on the Western Digital MCP-1600 chipset and were arguably "the slowest PDP-11 ever produced," though they were nevertheless cheaper and the CPU modules in particular became very popular. The success of the LSI-11, and its attendant disadvantages, spurred DEC to develop its own designs. These commenced with the F-11 "Fonz" chipset, used in KDF11 CPUs starting with the 1979 11/23, which was a true 16-bit microarchitecture. In the DEC Pro 325 and 350, it was called the KDF11-CA CPU and consisted of five chips fabricated on a 6-micron process consolidated into three hybrid 40-pin DIPs that implement the standard PDP-11 instruction set, the EIS (extended instruction set), and FP-11 floating point instructions.
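Incidentally, the power-up LED scheme described above is compact enough to decode mechanically. A sketch (the function and fault strings are mine, paraphrasing the list earlier; True means a still-lit LED, and LED 1 is the least significant bit):

```python
# Fault meanings for codes 1-7 when LED 4 stays lit ("0 is unpossible").
FAULTS = {
    1: "keyboard failed or not detected",
    2: "no boot device found",
    3: "no monitor cable connected",
    4: "both logic board memory banks bad",
    5: "low memory bank bad",
    6: "high memory bank bad",
    7: "system module failed",  # all four LEDs lit and never extinguish
}

def decode_leds(led4: bool, led3: bool, led2: bool, led1: bool) -> str:
    """Decode the Pro's four red diagnostic LEDs after self-test."""
    code = (led3 << 2) | (led2 << 1) | int(led1)
    if not led4 and code == 0:
        return "all tests passed"
    if led4:
        return FAULTS[code]
    # LED 4 out: the remaining three LEDs name a bad physical slot
    return f"problem with physical slot {code}"

print(decode_leds(False, True, True, False))  # problem with physical slot 6
```

The example at the end matches the one in the text: LED 4 out with LEDs 3 and 2 lit is binary 110, i.e., slot 6.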
The data chip (containing all non-float registers and scratchpads, the ALU and conditional branching logic) and the control chip (containing the microcode) are on one DIP together equivalent to the KEV11-B, the MMU is its own DIP equivalent to the KTF11-A, and the floating point adapter is spread over two chips on the third equivalent to the KEF11-A. Interestingly, the actual floating point registers are stored in the MMU. The F-11 as implemented in the 325 and 350 can address up to 4MB (22-bit) with 1MB reserved for option cards and I/O, though it lacks split I+D mode (again, we'll talk about this when we discuss Venix's internals) and has no cache. CPU timings are derived from a 26.666MHz crystal and divided by two to yield the 13.333MHz master CPU clock, which is internally divided again by four to yield its nominal clock rate of 3.333MHz.

The J-11 "Jaws" chip shown here in the Pro 380 is more compact than the F-11 and higher performance, but the processor ended up less powerful than it was intended to be: Supnik describes the design as "idiosyncratic," stating that its complexity and size overwhelmed development partner Harris, and the chips had so many problems with yield and bugs that "Jaws" never reached its internal clock speed goal of 5MHz. The basic package consisted of two 4-micron chips on one carrier, a data chip with the ALU, external interfaces, MMU and registers, and a control chip with the microcode. Pads on the underside of the hybrid were to accommodate two more chips, likely for additional instruction set options, but this idea was never implemented. Internally the J-11 also appears to divide its clock by four. The J-11 was introduced with the PDP-11/73 in 1984, using the 15MHz (i.e., 3.75MHz) KDJ11-B CPU card with an 8K write-through cache and a separate floating point accelerator, and likewise implements the base instruction set, the EIS and FP-11.
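The clock figures quoted above all follow the same divide-down pattern; as a quick arithmetic check (numbers taken from the text, nothing else assumed):

```python
# F-11 (Pro 325/350): 26.666MHz crystal halved to the master CPU clock,
# which is internally divided by four for the nominal clock rate.
f11_master = 26.666 / 2    # 13.333 MHz master CPU clock
f11_cpu = f11_master / 4   # ~3.333 MHz nominal rate

# J-11 in the PDP-11/73's KDJ11-B card: 15MHz input, divided by four.
kdj11b_cpu = 15 / 4        # 3.75 MHz

print(round(f11_master, 3), round(f11_cpu, 3), kdj11b_cpu)
# prints: 13.333 3.333 3.75
```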
DEC's fastest J-11s eventually topped out at 4.5MHz (on an 18MHz clock), though the Mentec M100 reportedly pushed it to 4.9152MHz (on a 19.66MHz clock). Unfortunately the 380's J-11 (the KDJ11-C) is somewhat gimped by comparison, having been downclocked to 10.08MHz (divided down from the 20.16MHz system master clock crystal near the DC 363) for an effective internal clock speed of 2.52MHz, and lacking any options to add cache or floating point acceleration. However, it does implement split I+D, and Venix even makes use of the feature. Because the J-11 also requires fewer cycles per instruction and the 380's RAM is faster, the 380 is anywhere from two to three times quicker in practice than the 325/350 despite being clocked slower, and the CPU collectively uses less than a fifth of the power.

A third and even smaller DEC-designed PDP-11 CPU was also available, the 7.5MHz/10MHz T-11 "Tiny" chip. The T-11 was introduced in 1981 and is a single-chip CPU with no MMU or floating point support fabricated on a 5-micron process with 12,000 transistors. DEC intended this processor for embedded systems and used it in some disk controllers and the VT240 terminal, but its most famous use was as the main CPU of the Atari System 2 arcade board (home to some of my favourite arcade games, like Paperboy, 720 and the best one of all time, APB, which I played for the first time as a kid in the Disneyland arcade — who doesn't like donuts, demerits and police brutality?). It was indisputably the most successful of the PDPs-on-some-chips and produced in the hundreds of thousands. These particular CPUs and the PDP-11 architecture generally were all mercilessly ripped off by the Soviets, by the way, and one very small CPU in this family we may visit at another time. For that matter, the Red Menace even made clone Pros!

To preserve and replace the 380's original hard disk, the tool of choice is David Gesswein's MFM hard disk reader and emulator, which can read an existing drive to an image and then immediately substitute for it.
It's not a dirt-cheap device but you get what you pay for. It has separate connectors for reading and for emulation, and looks to the Pro almost exactly like the hard disk that used to be there. The brains of the operation is a BeagleBone (here a BeagleBone Green, or BBG) running custom software on Debian Linux, for which the rest of the hardware is implemented as a cape. Dave's software is all open-source and included in the operating system image. Although you can power the BeagleBone itself over USB, the cape requires +12V that we'll pull from the connector for the RX50, since we need to keep the other connector to power the RD52. The large bank of capacitors acts as reserve power so that the BeagleBone is able to shut down cleanly when it notices the main system power is off. We'll take advantage of that for another purpose as well.

The default login is root, no password. Here we'll test out the capacitors before we continue. The powerfail system is what monitors the power supply and forces the BBG to shut down if the voltage is below a critical level. Here the voltage is as we expect, so we'll next do a test powered restart: The card duly goes down and immediately comes back up, so we can consider the power subsystem to be good. After logging back in we'll power down the board completely to hook it up. (You may need to forcibly remove and replace the Molex power connector after shutdown to get it to restart.)

Logged in again as root, we can attempt a first read. --analyze is intended for figuring out operating parameters on a disk you don't know, but we already know at least the geometry for the RD52, and it appears it didn't guess quite right, which is why I halted it after seeing copy errors. Interestingly it detected the setup as an Elektronica_85, which is in fact one of the godless pinko Commie Soviet PDP-11 clones (please send hate mail to /dev/null). --analyze did correctly get the number of heads and cylinders (8 and 512 respectively), and did properly find the hard disk on drive select 1 as configured from the DEC assembly line.
We can try the copy again with a simpler set of options, and this time it succeeds, yielding an 85MB image file. This file is as large as it is because it captures all the transitions for a very accurate copy. We'll leave it on the BBG internal flash for now (Venix does have a fixed-size swap partition but we'll have enough memory so that it won't have to use it much). Finally, we power down the board again completely at this point in preparation for installing it.

root@beaglebone:~# cd ~/emu
root@beaglebone:~/emu# setup_emu
Rev B Board
root@beaglebone:~/emu# mfm_emu --drive 1 --file ../emufile_a
Board revision B detected
Drive 0 num cyl 512 num head 8 track len 20836 begin_time 0
PRU clock 200000000
Waiting, seek time 0.0 ms max 0.0 min free buffers 200
bad pattern count 0
Read queue underrun 0 Write queue overrun 0 Ecapture overrun 0
glitch count 0 glitch value 0
0:test 0 0
0:test 1 0
0:test 2 0
0:test 3 0
0:test 4 0
1:test 0 0
1:test 1 0
1:test 2 0
1:test 3 0
1:test 4 0
select 1 head 0
select 0 head 0
Drive 0 Cyl 0->1 select 1, head 0 dirty 0
Waiting, seek time 5.8 ms max 5.8 min free buffers 200
Drive 0 Cyl 1->0 select 1, head 0 dirty 0
Waiting, seek time 4.1 ms max 5.8 min free buffers 200
Drive 0 Cyl 0->151 select 1, head 0 dirty 0
Waiting, seek time 10.1 ms max 10.1 min free buffers 200
Drive 0 Cyl 151->0 select 1, head 0 dirty 0
Waiting, seek time 4.0 ms max 10.1 min free buffers 200
Drive 0 Cyl 0->151 select 1, head 0 dirty 0
Waiting, seek time 4.0 ms max 10.1 min free buffers 200

fsck succeeds on both filesystems (/ and /usr). Qapla'!

Seemingly (that's called foreshadowing, kids) the last step should be to make the BBG automatically start serving the drive on powerup. Dave provides this facility as a systemd service. We edit /etc/mfm_emu.conf and set EmuFN1="/opt/mfm/emufile_a" in that file, then systemctl enable mfm_emu.service and put on the capacitor jumper so we have bridging power.
When I ran this all from the separate power supply, the board came up and started serving the drive image, I powered on the system, and Venix booted. When I turned off the separate power supply after powering off the system, the board properly shut down. When I ran all this from the system power supply, however, it wouldn't ever see the emulated hard disk and wouldn't ever boot. Can you figure out why? Pencils down: the BBG wasn't booting fast enough.

I let the whole thing discharge overnight, took the BBG out of the MFM cape and put it on the workbench. I am not a fan of systemd (which is why I don't run current Linux on any of my servers or server-adjacent machines), but I grudgingly admit the tooling is better. As configured the BBG was taking a whopping 37 seconds to come up, over 31 of those seconds spent bringing up the network. But we're not using the network! I edited /etc/network/interfaces and commented out everything but the loopback, then turned off the WiFi ... and restarted. This is still not fast enough for the Pro to see the emulated disk on startup. Even hacking the first three scripts to exit 0 immediately only got me down to about 7 seconds (apparently most of the overhead is from just launching them in the first place), and we've not accounted for the time needed to bring up the MFM emulator either. I undid these changes; everything else is too slight to significantly matter. We simply can't seem to beat the Pro's firmware: the BBG won't be up before the Pro concludes no hard disk is present.

However, we have an alternative. We know that if we give the BBG enough time — and "enough time" is now around seven or eight seconds — we can get the system up. We also know from testing and Dave's estimates that the supercapacitor bank will get us at least 30 seconds of runtime (in fact, in practice it seems to be several minutes or more on this Revision B). I logged back into the BBG, edited /etc/mfm_emu.conf and added PowerFailOptions="-w 20".
What this allows us to do is power on the Pro, charge the capacitors (very fast) and start bringing up the BBG while the Pro's initial boot fails. With our new shortened boot time the BBG will be ready and waiting by the time the Pro displays its error screen. We then power the Pro off and power it back on. As long as the power cycle is less than 20 seconds (I eventually upped this to 30 as there appears to be plenty of power available), the BBG will ride the stored charge and keep serving the emulated disk image so that on that second power-on (and every power-on thereafter) the Pro will see the drive and boot from it. When we're done with a session, we power the system off for more than thirty seconds, causing the BBG to shut down cleanly as expected. With that sorted, let's do the memory upgrade too.

Split I+D gives the processor two sets of segment registers, one for instruction fetches and one for data, effectively turning a notionally von Neumann CPU into a Harvard one. Instructions, plus their operands, addresses and constants embedded in the instruction stream, come from the I-space segments; loads and stores to memory addresses, however, reference the D-space segments. This effectively allows 128K to be addressed more or less at once, albeit with a bit of bookwork. (Again, also compare with the MOS 6509, a 6502 with on-board registers to set execution and indirection banks, though the banks are granular only to the 64K level. Compare also the Alpha Micro, where you can mark programs as reentrant [other users can run the same image] or reusable [the image doesn't have to be reloaded from disk].) Programs that require a much larger code size can be linked with code mapping, where as long as individual object modules (i.e., the program's component .o files) are each less than 8K, segment 1 can be used to page them in and out for a program size that can be as large as physical memory minus the size of the Venix kernel minus the size of the process' data.
A fixed segment containing the paging support logic, function mapping table and other code sits in the lowest 8K controlled by segment 0, with up to 48K of data controlled by segments 2-6 and stack in the usual location controlled by segment 7. Alternatively, on J-11 systems the C compiler is split I+D aware and code can be compiled to support it (though such binaries will not run on a 325 or 350). Since both code-mapped and split I+D executables necessarily can't modify their program text, they can be run "pure" too. The PDP-11 Venices additionally provide special system calls for its own implementation of shared memory which it calls "shared data segments" (this feature was also available on Venix/86, albeit with different arguments and semantics, and Venix-specific code was generally not portable between architectures). This facility allows a second data segment to be dynamically constructed and mapped into one of the 8K segments. By using a filename a mmap()-like facility becomes possible which can also be shared by other processes, and it can page the segment window over a much larger range using a disk-backed file to handle addressing data spaces larger than 56K. IPC is further aided by a simple (though ultimately future-incompatible) implementation of semaphores, permitting local and global locks, which mainline AT&T Unix did not support until System V. Venix additionally supports real-time programming, allowing programs running as root to run at exclusive priority, lock their processes in memory, and directly access I/O by using another Venix-specific system call to point a segment at the device's physical address (usually segment 6, generally free except in code-mapped executables and programs with unusually large text segments). DEC, having a substantial investment in its own real-time operating environments for data acquisition and process automation, particularly valued a Unix option that could do so as well. 
To see a bit of this in operation, consider this complete program in C (written in K&R C, because that was the era). It is a direct C-language port I made of a 4096-colour demonstration originally written in PDP-11 assembly by vol.litwr. Remember, int in this environment is 16-bit.

	> 4;
	while (*(videop + 2) > 0) { }
	*(videop + 3) = 16;
	*(videop + 4) = 18;
	*(videop + 10) = r3;
	*(videop + 9) = 4;
	r3 = r3 >> 4;
	while (*(videop + 2) > 0) { }
	*(videop + 4) = 4624;
	*(videop + 10) = r3;
	*(videop + 9) = 4;
	while (*(videop + 2) > 0) { }
}

pb4096(r1, r2, r3, r4, r5)
int r1, r2, r3, r4, r5;
{
	int i, j, k;

	for (i = r4; i > 0; i--) {
		k = r1 << 2;
		for (j = r5; j > 0; j--) {
			pp4096(k, r2, r3);
			k += 4;
		}
		r2++;
	}
}

pal12()
{
	int i, j, r1 = 0, r2 = 0, r3 = 0, r4 = 5, r5 = 2;

	for (i = 35; i > 0; i--) {
		r1 = 0;
		for (j = 120; j > 0; j--) {
			pb4096(r1, r2, r3, r4, r5);
			r3++;
			r1 += r5;
		}
		r2 += r4;
	}
}

main(argc, argv)
int argc;
char **argv;
{
	int i, vr4, vr6, vr8, vr14, vr16;

	/* map 64 bytes to segment 6 from physical address 0x3ffb00 */
	if (phys(6, 1, 0177754)) {
		perror("mapping");
		exit(1);
	}
	i = *videop;
	if (i != 0x0010) {
		fprintf(stderr, "unexpected value for hardware: %04x\n", i);
		exit(1);
	}
	i = *(videop + 2);
	if (i & 0x2000) {
		fprintf(stderr, "no EBO? %04x\n", i);
		exit(1);
	}
	fprintf(stderr, "ok, detected 380 with EBO\n");
	vr4 = *(videop + 2);
	vr6 = *(videop + 3);
	vr8 = *(videop + 4);
	vr14 = *(videop + 7);
	vr16 = *(videop + 8);
	*(videop + 2) = 0;
	pal12();
	sleep(10);
	*(videop + 2) = vr4;
	*(videop + 3) = vr6;
	*(videop + 4) = vr8;
	*(videop + 7) = vr14;
	*(videop + 8) = vr16;
	exit(0);
}

The program maps the video board's registers into a segment with the phys(2) system call, then manipulates the registers to draw to the individual video planes. There's no cache and the Venix PDP-11 C compiler is fairly simple-minded, so we can simply directly drive the registers in a way that would make a modern C programmer blanch. (Rust programmers, just look away and go to your happy place.)
Compiled with cc -o test test.c, the result looks like this (./test): The phys() call needs to be adjusted to point to your video card if you try to run this on a 350 (where the physical address will vary with the slot). However, it works, and it's very pretty. We'll be messing around more with Pro graphics in a future article. The code we ran here directly accesses the hardware, but Venix included a complete high-level graphics package for both Venix/86 and PRO/VENIX with more typical drawing functions that operated at the Pro's maximum resolution. This package was in turn used for Unix commands to draw shapes and graphs, which I'll demonstrate.

VenturCom initially had a Unix Version 7 (V7) license; V7 was the last Bell Labs release in 1979 before AT&T took Unix over, and the first iteration of Unix generally considered readily portable. XENIX on x86 was also initially derived from this version. V7 was the earliest mainline release to officially integrate (among other things) the Bourne shell, awk, a Fortran-77 compiler, a portable C compiler (though the C compilers in Venix/86 and Venix-11 seem to have different lineages), uucp, tar and make, though development versions of these tools appeared in other Bell Labs variants like Programmer's Workbench (PWB/UNIX). In addition to their custom changes, VenturCom further added several components from 4BSD, most notably the C-shell (I'm one of those people), more and vi. V7 Unix was the basis for the original Venix-11, which wasn't much of a stretch since V7 Unix already ran there, and, by descent, PRO/VENIX, which arrived in July 1984 on the 350. This package was sold with the explicit promise of an upgrade to UNIX System V, which had just come out in 1983. In October 1984 VenturCom upgraded PRO/VENIX to "Rev 2.0" (and misnamed it VENIX/PRO in the release notes) which fixed some bugs and added support for the 380 while remaining compatible with the 350. I'm emphasizing the Rev part; it will shortly become salient.
Here's PRO/VENIX Rev 2.0 on Xhomer, a Pro 350 emulator which provides a pre-installed disk image: log in as demo and you get a little graphics demonstration. Rev 2.0 was still based on V7, effectively the Pro version of Venix/86 Encore from September 1984 (and the same release used on the Rainbow as Venix/86R), most easily distinguished by the missing uname command and underlying system call, which weren't added until UNIX System III. The kernel is an ordinary executable (again, salient shortly). There is no formal shutdown procedure for any of these earlier releases of Venix (shutdown doesn't even exist) — you just get everyone out and turn the computer off.

PRO/VENIX V2.0 (not Rev 2.0), however, is a different animal. It was released in July 1985 and primarily intended for the 380, though a set of 350 overlay disks patch the kernel and certain other large programs during installation; you can't boot PRO/VENIX V2.0 on a 325 or 350 without them. True to VenturCom's word, it leapfrogged UNIX System III completely and is in fact a derivative of System V Release 2 (SVR2 or V.2). The V is deliberate and capitalised for a reason: it's for System V, not short for Version. V2.0 was explicitly intended to correspond with System V 2.0. It was the final release of PRO/VENIX and the version we have on our 380 here. As a parenthetical note, Venix/86 2.1 was the last of the original version number sequence; 2.1 is actually less advanced than PRO/VENIX V2.0 despite the lower apparent "version" number and is better considered an upgrade to Venix/86 Encore, as it remains based on V7.

On startup V2.0 runs fsck whether the disk needs it or not. You get a SUPER prompt when you've logged on as root (there is also a MAINT> prompt when you're single-user; I'll show that when we actually shut down at the end). This prompt and an associated warning come from Venix's default .profile.
The screenshot here is a direct monochrome grab from the Pro's planar video using the pscreen utility, which emits the screen contents as LA50 printer codes; you can also save and load screen images with sscreen and lscreen respectively, in a format I haven't deciphered yet. These tools are part of the graphics package and also exist in Venix/86.

Being System V, terminal lines are configured in /etc/inittab instead of having to muck around with /etc/ttys, though you'll note that the serial port is called com1 as a PC holdover (how bourgeois). /dev/lp is the serial printer port, but Venix limits this to 4800bps, so we'll stick with the faster serial port for logins. We'll put /etc/getty on com1 at the highest speed available, 9600bps, and then bounce init's runlevel with telinit. When we do, we get a login prompt. There is no banner. (By default, the root password is gnomes.)

Well, la dee dah. I went back to the console and created a plain user the old fashioned way: I earned it. I made entries in /etc/passwd (no password because I don't care) and /etc/group, created a home directory for myself, and made my new account the owner. I don't have the V2.0 release notes, but I bet they were interesting. There are in fact no man pages in PRO/VENIX because you (are supposed to) have real manuals. Along the way I think we went backwards on this, somewhere. On the other hand, news(1) is not in my older copy of the User Reference Manual and appears to have been newly added to V2.0 also. A more immediate problem: my terminal setting is pretty messed up because it assumed I was logging in from some other Pro connected via the serial port, so it's TERM=vt100; export TERM (in a Bourne shell this old you can't export TERM=vt100 in one step). And, well, I'm using the Bourne shell and I hate the Bourne shell, so I set it to C-shell like civilized people. Let's get our bearings. There's your proof that this is System V. Also, because our clock battery failed decades ago, the machine is still getting its time from the filesystem.
This is useful to us because that tells us when the computer was last likely in use, as we can reasonably assume a regular user would have either gotten the battery replaced or at least set the time correctly. I should also add for the remainder of this session that it is likely this local installation of Venix had some minor removals or modifications. Before we explore further, I'm going to customize my environment a little and make it set my terminal correctly when I'm logged in over the serial port. The current Venix license it came with (something to hack later) only allows two users, so a simplistic method will suffice. If we have a functional awk, and we should, then we can do this easily in .login.

% cat >> .login
set j=`who | grep spectre | tail -1 | awk '{print $2}'`
if ("x$j" == "xcom1") then
	setenv TERM vt100
endif
^D
% cat > hello.c
#include <stdio.h>

main()
{
	puts("hello world");
	exit(0);
}
^D
% cc -o hello hello.c
% ./hello
hello world
% file hello
hello: executable not stripped
% file /bin/date
/bin/date: pure executable
% file /usr/bin/awk
/usr/bin/awk: separate I&D
% file /usr/lib/libcurses.a
/usr/lib/libcurses.a: archive
% file /usr/lib/liby.a
/usr/lib/liby.a: archive
% file /venix
/venix: separate I&D not stripped

The short program we compiled is an unstripped executable, but most of the essential system binaries like /bin/date are marked pure so that they can be shared by other logged in users. However, because awk and the /venix kernel are so large, they are compiled with split I+D ("separate I&D"), which is more efficient and has fewer restrictions than using code mapping. You can see that when we ask about the object file sizes: Our unstripped compiled binary has a code size of 2348 bytes, a data size of 244 bytes and a BSS (uninitialized data) size of 1224 bytes.
This is actually smaller than date, a "pure" executable, which has code, data and BSS sizes all larger at 8320, 1008 and 1352 bytes respectively, showing that the "purity" of an executable has no relationship to its image in memory. On the other hand, our split I+D binaries (awk and the kernel) have comparatively large code sizes. Even though the kernel has no BSS portion, there is no way its code and data could simultaneously live in the same 64K space (for that matter, neither could awk's, because the stack is a fixed 8K at the top of the address range). However, since both were compiled split I+D, they don't have to. As a point of comparison, here's what you'd see in PRO/VENIX Rev 2.0. /bin/date in the earlier version is not pure (though for what it's worth /bin/ls is, so it's not like it lacks pure executables altogether). awk is in a different place, but it is, as expected, a mapped (meaning code-mapped) executable owing to its size. Unexpectedly, one of the archives is also marked as an executable, though it doesn't have execute bits and you can't actually run it. But the biggest surprise is that the kernel itself is not mapped, which is explained when we look at the sizes: /bin/ls, a pure executable, has a size of 7616+842+3852 — all bigger than /bin/date, which isn't — and again more proof that "purity" says nothing about program size. A couple other things: This is (well, was) an RD52, so it's a 30MB drive, the second biggest ever sold for the Pro. However, if you add these numbers up, you only get around 24MB; the rest is in fact a hidden swap partition. 6MB might seem a little excessive when the original memory size of this machine was 512K, but in Venix the swap isn't just for moving processes out so others can run — it's also used as a precache for certain pure executables that have the sticky bit set such as ls. (Again, not an uncommon feature in other operating systems of this era. 
AMOS could preload certain executables into the system image and always run them from RAM, for example.) These executables are copied to the swap on first use after bootup and executed henceforth from there in specific, precomputed locations, avoiding a walk through the filesystem. This also means, however, that once copied such executables cannot be deleted while the system is up or, in the words of the manual, "the file system will become slightly corrupted." (Eeeek.) Note that no version of PRO/VENIX supported demand paging; that wasn't implemented in AT&T Unix until SVR2.4. There is no /home or other special mountpoint or partition for home directories; everything else is in /usr, as was typical for Unix of this vintage.

I later copied the files out with Kermit (it had Kermit on the hard disk) and decided to see what the Fedora Linux POWER9 would make of them.

2^24
No swap space for exec args
%s on the %s, unit %d
reading
writing
%s while %s block number %D. Status 0%o
Can't allocate message buffer.
VENIX SysVr2.0 85061411 PRO380
[...]

It is notable that in 2025 my Linux box still recognizes what type of executables these are. Anyway, let's dig around in the filesystem. Whoever had the system last didn't really clean up much; the tmac.* files in particular are macro definitions that look like they were supposed to be put somewhere else. Here's root's .profile, by the way:

.profile must live in / because, during maintenance (single-user) mode, /usr and /tmp aren't even mounted. Once in single-user mode, the system can be powered off. /f0 and /f1 are mount points for the two RX50 floppies. But, in case you thought Venix could get around the RX50 formatting restriction, guess again: "On PRO/VENIX, floppy formatting is not possible; format is useful only for creating file systems on factory-formatted diskettes." Damn you, Kenneth Olsen. Better go buy that Rainbow. This next file did not come with Venix: I don't know who did that.
It's among the last files created on the system before it came into my possession. You're welcome, little buddy. You're welcome. I suspect this directory did come with the system, but it's been modified, and I even played with the demo script and data files a little myself. There is no demo login on this system (it might have been removed), though if you run that shell script you'll see some plots that serve a similar function. Here are two:

% ls -l /usr
total 23
drwxr-xr-x  3 bin     bin       80 Jun 28 22:34 adm
drwxrwxr-x  2 bin     bin     1952 Jun 28 22:34 bin
drwxr-xr-x  3 bin     bin       96 Jun 19  1985 games
drwxrwxr-x  2 guest   visitor  272 Jun 11 13:49 guest
drwxr-xr-x  2 bin     bin       64 Jun 19  1985 help
drwxrwxr-x  3 bin     bin      880 Jun 19  1985 include
drwxrwxr-x  2 root    sys      656 Jun 19  1985 kermit
drwxrwxr-x 16 bin     bin     1312 Jun 19  1985 lib
drwxr-xr-x  2 bin     bin      512 Jun 19  1985 lost+found
drwxrwxr-x  2 bin     mail      64 May  2  1986 mail
drwxr-xr-x  2 bin     bin       64 Jun 19  1985 news
drwxr-xr-x  2 bin     bin       64 Jun 19  1985 pub
drwxrwxr-x  3 spectre other    304 Jun 28 22:11 spectre
drwxr-xr-x  6 bin     bin       96 Jun 19  1985 spool
drwxr-xr-x  4 bin     bin      128 Jun 19  1985 sys
drwxrwxrwx  2 bin     bin       80 Jun  2  1987 tmp

/usr/bin was me copying /bin/date there for a reason I'll get to in a moment. The rest are as they came. At one time there was a guest login (it's no longer in /etc/passwd) and it has one file. This is not unlike my first time trying to quit vi, though today you wouldn't catch me dead using emacs. Caltech archives (PDF). Unfortunately no other names appear on the disk, and there are no other files on the system that appear related to his research, or anyone else's. There are also no mail files to read, and while this version of Venix supports UUCP, it doesn't look like it was ever used. UUCP (Unix-to-Unix Copy) was the only networking option supported by Venix of this vintage; no PRO/VENIX or Venix-11 version supported TCP/IP or any other means of networking, serial line or otherwise.
Only P/OS officially supported the CTI Ethernet card. A subset of the Unix games package is in Venix. The manual says, "If you find that the User Reference Manual is rather prosaic reading, see section 6 for games." Strangely, this section of the manual is actually in the Programmer Reference Manual, and I don't like what this is implying. PRO/VENIX Rev 2.0's set is abbreviated compared to V7 Unix's complement, possibly for space reasons.

% ls -l /usr/games
total 106
-rwxr-xr-x 1 bin    bin 10534 Apr  3  1985 bj
-rwxr-xr-x 1 bin    bin  6156 Apr  3  1985 fortune
drwxr-xr-x 2 bin    bin    80 Jun 19  1985 lib
-rwsr-xr-x 1 daemon bin 34812 Apr  3  1985 snake
% /usr/games/fortune
Take care of the luxuries and the necessities will take care of themselves.
% /usr/games/fortune
Colorless green ideas sleep furiously.
% /usr/games/bj
Black Jack!
New game
Shuffle
JC up
JS+6D Hit? n
You have 16
Dealer has KH = 20
You lose $2
Action $2 You're down $2
New game
TD up
TH+KS Hit? ^C
Action $2 You're down $2
Bye!!

snake worked pretty well too. A copy of C-Kermit was also installed, along with source code. This is a different Kermit build than the executable for Rev 2.0, which is a Venix-specific one rather than a generic System III or System V target. It is the easiest way to interchange files with Venix; I used Kermit as the terminal program on the Linux side and on the Venix side used kermit -is to push files, or kermit -ir to receive them. On the Linux side, transfers should always be in binary mode. A bit of garbage follows the transfer but the files always checked out.

We saw those already with news(1). These are a couple sundry short help files (remember, no man pages). more.help just explains how more works. basic.help is for the built-in copy of UBC (University of British Columbia) BASIC, which is "an ANSI compatible BASIC interpreter that runs under VENIX."
It is a standard part of the operating system and can either run separately written programs or act as a simple REPL like other BASICs of the time. There is no graphics support, but it does have floating point. The interpreter is started with the basic command; bye returns to the shell. Interestingly, instead of new, UBC BASIC uses scr to reset the program and variable area. A couple more sundry files are in /usr/pub.

Finally, let's look at what commands are installed. I'm not going to go through all of these, and I don't have manual entries for all of them either, but it is a substantial complement and appears to have all of the base tools in System V R2.0 plus the Venix value-adds from 4BSD and a handful of Venix-specific utilities. The ps utility in particular is more of a system status utility, printing not only processes and their flags but also swap and memory status. This system additionally has Ted Green's vedit (probably a port of the C-language version developed for Xenix) and what look like cross-assemblers. You can also see the tools that were used to draw the graphs I showed you. Here are the libraries. Remember, they aren't shared, although the programs that use them might be.

Finally, the pieces of the kernel. This isn't in PRO/VENIX Rev 2.0 either, but Venix/86 2.1 does have the ability to rebuild its older V7 kernel (there are no PDP-11 targets, however). No source code is included, just object files. Here are relevant sections of the Makefile with the supported targets and the Pro-specific targets. In the extracts below, the XFER kernel refers to the bootable mini-kernel on the first install floppy disk. win doesn't mean Microsoft Windows; it's just an abbreviation for a Winchester hard disk. The 380 target passes -i to the linker to enable split I+D spaces. Since the 350 doesn't support that, its target passes -m instead to generate a (slower) code-mapped kernel.
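As a purely hypothetical sketch of the shape of those targets — the target names and object list here are invented, and only the -i/-m linker flags and the uts.c CHECK guard come from the actual Makefile as described:

```make
# Hypothetical sketch, not the shipped Makefile: OBJS and the target
# names are invented. -i links split I+D (380); -m links a (slower)
# code-mapped kernel (350). The CHECK prefix silently bails out of the
# recipe when uts.c is absent, which is why nothing builds by default.
CHECK = -@[ -f uts.c ] || exit 0 &&

venix380: $(OBJS)
	$(CHECK) ld -i -o venix $(OBJS)

venix350: $(OBJS)
	$(CHECK) ld -m -o venix $(OBJS)
```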
This won't run as well as Rev 2.0's, but because the kernel is now much larger, it's the only way to boot V2.0 on the 350 at all. The 350 overlay disks replace all split I+D programs, including the kernel, with code-mapped versions. (See these "reconstructed" PRO/VENIX V2.0 disk images.) It is also worth noting that there is no target for Venix-11 anymore (on real PDP-11 hardware), which seems to have become unsupported by VenturCom after the Pro port. Although the 80286 port comes in both real ("compatibility") and protected mode variations, the kernels otherwise have the same features, though of course you can't actually build a PC kernel on the Pro because there are no binaries and no x86 linker. Also, as written, no target will actually build anything because by default there is no /usr/bin/date, and there's also a CHECK = -@[ -f uts.c ] || exit 0 && in the Makefile — since uts.c is not present, the build will quietly stop when this check runs as well. uts.c can be reconstructed and a very small one will suffice. However, given the absence of anything to write drivers for or a kernel ABI to write them to, or for that matter any source code, building a new kernel right now is mostly an academic exercise.

To shut down, we go to single-user mode (the MAINT> prompt comes from the .profile I showed you earlier). At this point you can just turn off the machine.

Let's finish our twin stories, starting with Venix. PRO/VENIX doesn't seem to have sold much given that not many Pros got sold to begin with. This worsened after DEC started to emphasize the Pro's "PDP-11 compatibility," such as it was, since most of these later customers had prior experience with DEC and ran it under P/OS so that they got RSX-11M, or with the RT-11 or COS ports instead. (There was even an unofficial 2.9BSD port to the Pros, included as an example disk image with Xhomer, though none of DEC's own Unices like Ultrix ever made it.)
By then VenturCom was making most of their money from the x86 versions instead, so cutting the other ports out was no great loss. VenturCom continued aligning Venix releases with System V release numbers for the rest of Venix's commercial existence; Venix's last SVR2 was V2.4 for the 80286 and 80386 in 1988, and the real-time Venix line survived afterwards as MaxRT.

Meanwhile, the new DEC personal computer strategy did about as badly as the prior one, though mostly due to the Rainbow, which by now was seen by the market as so thoroughly idiosyncratic and incompatible that sales had no chance of rebounding. DEC completed a port of Windows 1.0 to the Rainbow but it did little to change the machine's perception. In 1986 Digital introduced the VAXmate, ostensibly side-by-side with the Rainbow, but secretly intended as its replacement. The VAXmate was a true AT-class all-in-one clone PC, not based on the VAX architecture, but was notable in that it could netboot over Ethernet from a VAX/VMS server as well as run regular MS-DOS from a conventional hard disk. Instead of incompatible RX50 floppy drives it had the PC-interchangeable 1.2MB 5.25" RX33, and did not need nor use Rainbow peripherals. The new 380 did have some takers, but that was a relative statement, especially since the PDP-11 market was fading generally and other business units at DEC wanted to promote VAX systems. DEC ultimately cancelled both the Professional and Rainbow families in 1987, replacing the 380-based VAX Console with a MicroVAX II for the second-generation VAX 8800s. Of the original 1982 rollout only the DECmate series survived, sold as the DECmate III+ until 1990 when it was no longer seen as competitive with PC word processing options. DEC nevertheless kept trying to sell their proprietary architectures as microcomputers, particularly in the form of the later VAXstations, which primarily ran VMS as workstations and were generally compatible with bigger VAXen yet ultimately suffered a similar fate to the Pros.
Digital did sell their own lines of PC clone desktops and laptops, and the Alpha had some modest initial success as a high-end PC competitor under emulation, but on the whole there were a lot fewer DEC computers in the home than there should have been. DEC was bought out by Compaq in 1998, and Compaq itself was acquired by Hewlett-Packard in 2002. This isn't the last you'll see of this machine. I'd like to explore writing some userspace networking options, and like I say, there's a lot of untapped potential in that graphics hardware. Stay tuned for future articles.

The "35-cent" Commodore 64 softmodem

Rockwell famously used 6502-based cores in modems for many years, but that doesn't mean other 6502s couldn't be used. If only there were a way to connect a Commodore 64's audio output directly to an RJ-11 plug ... which brings us to the Convergent WorkSlate stuff I've still got to catalogue. Officially the WorkSlate's only means of telecommunications is its 300 baud internal modem. While we have a 9600bps way of wiring up a WorkSlate to a modern computer, it's always nice to have a simpler alternative, and I figured this would be a great challenge to see if John's old program could let my Commodore SX-64 talk to my WorkSlate. Spoiler alert: it works! I don't know precisely what happened to John; regrettably I know little of his personal history. For a period of time he was a very prolific poster on comp.sys.cbm, but his last post there was July 19, 2000, in which he replied to someone's question about the relationship between sound frequencies and SID register values. (We'll actually talk about this in a bit.) His last post I can find in any Commodore newsgroup of the era is dated the next day, July 20, though he posted through CompuServe and it's possible he made later posts there. Among his many contributions, including this one, are the Spyne self-extracting file archive utility, a .d64 downloader patch for Common Sense, an in-place PETSCII to ASCII text file converter, a user port-based audio A/D converter and player, and a custom track-by-track floppy disk formatting tool. He issued them all freely for anyone to use for any purpose. While he and I briefly corresponded over snail mail, I can't find the letter he sent me and I don't remember his exact location, and I never heard from him again. Sadly, although I hope I'm wrong, from his handwriting I knew he wasn't a young man and I'm all but certain he has since passed away. Rootsweb lists a John J. Iannetta who died in April 2001 at the age of 82.
(If you know for sure, post in the comments, or E-mail me privately at ckaiser at floodgap dawt com.) The "35-cent modem" was first posted on October 27, 1998. John estimated the 35 cent cost (about 68 cents in 2025 dollars) based on the then-purchase price of an RCA jack ("listed in my Jameco catalog at 35 cents each"), though this didn't include the phone cable. Looking in their online catalogue now, you could even go cheaper, since Jameco (not affiliated, not sponsored, just using for comparison) now sells a through-hole right-angle RCA jack for $0.29; in 1998 dollars, that would have been a mere 15 cents. If you don't have a landline phone cable anymore, the lowest Jameco price for a compatible connector I could find was a 6P6C modular cable for $1.49. Such a cable is technically an RJ-25, but it or an RJ-14 (6P4C) will do just fine. This project is a very easy build job, so let's do it. You might remember tip and ring from our earlier look at how T1 lines work. On old exchanges like this one (used in rural New South Wales, Australia), phone lines were carried by literal tip-and-ring connectors plugged into the switchboard to connect calls, which is where the name comes from. Telco guys call a combination of tip and ring a "pair." Each phone line uses one pair. Although it likely makes little difference for this application, there is a polarity to the connection which should be observed, i.e., tip is positive and ring is negative. The cable I used is a real USOC RJ-11C with a 6P2C connector and thus has only two wires for a single pair. The tip wire for this first pair on typical North American RJ-11 installations can be green, or white with a blue stripe (in other countries it may be any number of other colours); the ring wire can be red, blue, or blue with a white stripe. Thus, after stripping back the wires — sometimes easier said than done on a sticky old cable — connect ring to the phono jack's ground/sheath and tip to the phono jack's centre. Make it pretty and you're done.
An important warning before we continue: from the telephone company side the line pair carries voltage used to power the phone and ringer, so never plug this cable into a wall jack — doing so could potentially send up to 48 volts to the computer, with likely undesirable and even fiery results. A cable like this should only ever be directly connected to another modem. The software part has to do with how data from the Commodore 64 is modulated to send to the other system's modem. For that, we turn to John's program, as he posted it (in separate versions for NTSC and PAL Commodores for reasons I'll explain as we analyse the disassembly). It was presented as a type-in program in BASIC, short enough to type in by hand, with an embedded machine language section loaded from DATA statements. Here are a couple of videos showing what it looked like in practice. The modulated audio is played through the speaker, so don't have it up too high. You may remember, from when we wired up a KIM-1 so it could speak through a DECtalk, that the transmission of a byte or character of data is divided into more or less distinct phases. The default state is mark (one). At the beginning of transmission comes a start bit (always space, or zero), followed by the data (seven or eight bits). After the data comes an optional parity bit, then back to the mark state for at least one and sometimes two or more bit times as the stop bit. The most common transmission type is 8N1, which is eight data bits, no parity bit, and one stop bit (i.e., characters must be separated by no less than one stop bit, though it can be more). Loading a recording of the output into Praat, we see a sine wave of varying wavelength. In the spectrogram at the bottom we can pick out two distinct frequencies being used to encode a character, as shown by the dark black band. This is the hallmark of audio frequency-shift keying (AFSK), often just called FSK. For this spectrogram I've typed the letter "U" which in binary is 01010101. That's the "wiggle" in the middle.
John's program sends 8-N-1, so since we know the byte is framed by stop bits, which are marks/ones, we can deduce the initial frequency is used to transmit a one. Serial communications send the bits in little-endian order, i.e., from least significant to most significant, meaning the wiggle is actually the start bit (space/zero), followed by 10, 10, 10, 10, then a stop bit (mark/one) and finally the normal mark state between bytes, which in this plot is indistinguishable from the stop bit. These tones go back to the very beginning of modems: the Bell 101 emerged from the modems AT&T developed for the SAGE air defense network, a system straight out of Dr. Strangelove. SAGE remained in operation well into the 1980s; even as just a giant coincidence there are many suspicious similarities between the concept and WarGames' WOPR. The 101 ran at 110 baud over regular telephone lines and became available for commercial sale in 1959. On the wire it uses separate sets of frequencies for each side of the conversation: 1070Hz and 1270Hz (space, mark) for the modem originating the call, and 2025Hz and 2225Hz (space, mark) for the modem answering the call. In 1962 AT&T introduced the Bell 103, which used the same frequencies but ran over twice as fast at 300 baud. It quickly became very popular and almost completely replaced the 101 in commercial use. Even after the 1976 Bell 212A introduced 1200 baud operation (with different frequencies and duplex modes), it remained compatible with the 103 in 300 baud mode, and virtually every third-party 300 baud modem was compatible as well (many were also compatible with ITU-T V.21, which uses the same basic communication scheme but different frequency sets). The Originate-Answer switch on 300 baud modems like the Commodore 1600 VICMODEM and Commodore 1660 "Modem/300" selects which two frequencies the modem will send bits with, using the other two frequencies for receiving. John chose Bell 103 because any 300bps modem can speak it, and since we're "answering" the other side that "originated" the "call," this code uses the answer frequencies.
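The framing is easy to reproduce. Here's a minimal sketch (my own illustration, not John's code) of how a byte becomes an 8-N-1 bit sequence on the line, least significant bit first:

```python
def frame_8n1(byte: int) -> list[int]:
    """Return the on-the-wire bit sequence for one 8-N-1 character
    (1 = mark, 0 = space), least significant data bit first."""
    bits = [0]                                   # start bit (space)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit (mark)
    return bits

# "U" is 0x55 = 01010101, which frames to a perfectly alternating
# sequence: exactly the "wiggle" visible in the spectrogram.
print(frame_8n1(ord("U")))  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

Because 0x55 alternates, the start bit simply extends the pattern, which is why the character makes such a clean test signal.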
The software came in both NTSC and PAL versions because obviously something like this is highly timing-dependent, and PAL Commodore 64s run slightly slower (0.985250MHz) than NTSC systems (1.022730MHz). The variance is because each video standard uses a different master crystal from which all other clocks are obtained by dividing down, including the colourburst frequency needed for correct display, and also the clock speed of the CPU. This speed additionally affects the 6581 SID sound chip, since each of its three oscillators advances its accumulator by the programmed frequency value (0-65535) every clock cycle, so we need different values for the mark and space frequencies on PAL and NTSC systems. John's last known post in comp.sys.cbm, using slightly different processor speeds, explains the math (shown as written): Using this relationship, we can then solve for "word" to get the proper value for the SID frequency register based on the detected video standard. SID's three voices are thus able to generate tones up to ~3848Hz on PAL machines and ~3995Hz on NTSC machines, well in excess of the necessary range. Because SID generates audio asynchronously, we can just tell it to use infinite sustain (to infinitely prolong the note until we gate it off), play the mark frequency, and then leave the note playing while we go do something else, keeping the line open. Since the specification requires a sinusoidal wave, the code uses the SID's triangle waveform, which is the closest approximation it can produce. The result is, in fact, the very tone you hear at the beginning of the videos. However, there's one other reason we need separate NTSC and PAL versions, and that's because of how John set up the baudrate. Here's how the BASIC loader starts (from the NTSC version): After reading the hex-encoded DATA statements into memory, at line 110 John's code starts initializing both the SID and the two CIA chips, though he primarily uses CIA #2.
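The relationship can be sketched in a few lines of Python (an illustration of the math, not part of John's program). The SID's 24-bit phase accumulator advances by the 16-bit register value once per CPU clock, so the output frequency is value * clock / 2^24:

```python
NTSC_CLOCK = 1_022_730  # Hz, NTSC C64 CPU clock
PAL_CLOCK  =   985_250  # Hz, PAL C64 CPU clock

def sid_register(freq_hz: float, clock_hz: int) -> int:
    """16-bit SID frequency register value for a desired output tone:
    f_out = value * clock / 2**24, so value = f * 2**24 / clock."""
    return round(freq_hz * 2**24 / clock_hz)

# Bell 103 answer-side tones need different register values per standard
ntsc_mark = sid_register(2225, NTSC_CLOCK)
pal_mark  = sid_register(2225, PAL_CLOCK)

# Highest representable tone, matching the ~3995Hz/~3848Hz figures above
max_ntsc = 0xFFFF * NTSC_CLOCK / 2**24
max_pal  = 0xFFFF * PAL_CLOCK / 2**24
```

The quantization step (clock / 2^24) is only about 0.06Hz, so the generated tones land well within the tolerance of any Bell 103 receiver.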
To determine bit times, rather than having the CPU manually count off a specific number of clock cycles, this code has the CIA do it. A critical point is that while the CIA chips can be set to issue IRQs or not, their interrupt control registers will still indicate when they would have fired one, even if that individual interrupt condition is technically disabled. John turns off all interrupts on both CIAs so they won't fire and upset system timing, including the usual Timer A IRQ on CIA #1 used for keyscan, then sets Timer A on CIA #2 to repeatedly count down $0d50 (3408) clock cycles. If we divide 1022730 by 3408, we get ... 300.09, almost exactly our baud rate. (It's okay to be a bit faster as long as you're never slower.) A smaller value is used for PAL systems. With the CIAs (and audio) set up, we then go into the mini-terminal, which is loaded into memory and started from the usual location for such routines at 49152 ($c000). We disassemble that next. John chose to call directly into the Kernal for these routines to short-circuit code he didn't need. With thousands of cycles available to send each single bit, the full-fat routines would have been fine, but why bother with work you don't need to do? After initializing the screen editor, the code waits for the next Timer A interval to fire and scans the keyboard manually (since the IRQ isn't running anymore), then fetches the next key, if there is one. Assuming it's not F1 (send a file) or F7 (quit the terminal), the code then goes on to send a character using this subroutine at $c027: Each read of the interrupt control register clears any conditions that were set. This apparent "double-wait" on entering the routine isn't an error: it ensures not only that everything's in a known state, but also that at least one stop bit's interval has elapsed between the prior character and this one.
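As a back-of-the-envelope check of the timing math (only the $0d50 value comes from the disassembly; the PAL reload shown here is my own derivation, not necessarily the value John's PAL version uses):

```python
NTSC_CLOCK = 1_022_730  # Hz
PAL_CLOCK  =   985_250  # Hz

ntsc_reload = 0x0D50                    # 3408, from John's NTSC version
ntsc_baud   = NTSC_CLOCK / ntsc_reload  # ~300.097: slightly fast, never slow

# One plausible way to derive a PAL reload: round down so the effective
# rate again errs on the fast side. (Assumption, not John's actual value.)
pal_reload = PAL_CLOCK // 300           # 3284
pal_baud   = PAL_CLOCK / pal_reload     # ~300.015
```

Either way the PAL reload comes out smaller than the NTSC one, matching the slower PAL clock.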
Once that has occurred, we clock out the start bit, then eight data bits least significant first, and finally leave back at the stop bit frequency. Each time, except for the very end, we wait for another trigger on CIA #2's ICR before we proceed. When F1 or F7 is pressed, the mini-terminal sets location $2 to non-zero or zero respectively (above, after the call at $c00f) and returns to BASIC. BASIC then turns back on the Timer A IRQ on CIA #1, and if $2 is non-zero, it proceeds to ask for a device number and filename. This is a fun routine on its own, but you've seen enough of the code to understand the basics of how it works, so let's get out the WorkSlate now and try it with a real device. For the Commodore side, we're going to use one of my portable SX-64 systems. A warning about the SX-64 specifically: never plug in a video cable — more specifically, never connect the audio output — with the computer's power on. Doing so runs you a decent chance of frying the SID, something I actually did many years ago; it's a well-known problem. This goes likewise for connecting our mutant phone cable to the SX-64, since we necessarily have to use the computer's video port for the audio signal. The WorkSlate has a built-in terminal desk accessory, activated by opening the Phone menu and selecting Terminal. However, simply selecting the Terminal is not enough. The trick with the WorkSlate is to have the speakerphone line open (Phone, SpkPhone) first, and then try to answer with the Terminal. This is supported by the device; it assumes in this case that you've manually dialed a number somehow (say, from an attached phone handset) and the computer on the other end has answered. We already have the answer stop bit tone playing, so the WorkSlate's modem immediately hears it and tries to go on-line. ATX1D sets up a "blind dial" so that the 64's answer signal is also immediately recognized.
The interesting part is comparing how the speakerphone operated between the three WorkSlates I now have. On the most recent one I acquired and on my "tester" unit that I soldered jumper probes to (both with serial numbers starting with CCA8415), I could hear the "call" and what the SX-64 was sending when the WorkSlate was in speakerphone mode, as expected. However, on my regular unit (a later machine with a CCA8417 serial number), I could hear both ends of the conversation through the SX-64's speaker, including the WorkSlate's dial tones and originate frequencies — and nothing on the WorkSlate's speaker. I'm not sure if this is due to different internal wiring, changes in the tape gate array or both. Again, this is a good reminder that the SID in the SX-64 is unusually vulnerable to stray voltages: if there were proper isolation I shouldn't have been able to hear incoming audio through the speaker output. In fairness, Commodore probably didn't think people would be wiring phone lines to SID audio either. But let's embroider the situation a little more. Some modems may listen for a dial tone first before they attempt to do anything, especially if you need to actually dial a phony (narf narf narf) telephone number, since they reasonably expect there's a real POTS "plain old telephone service" line on the other side. Programs like Common Sense could be provided a phone number and dial it by playing tones like music. (Interestingly, the VIC-20 does not seem to be capable of precise enough frequency control to generate DTMF; Commodore even warns against it in the Modem/300 manual.) Dialtones and other call-progress tones are often multi-frequency tones similar to DTMF, but they're specified separately by each region's telephone system. In the North American Bell System's Precise Tone Plan, dialtone is a combination of 350Hz and 440Hz at -13dBm, also played using a sine wave.
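Running those two frequencies through the same value = f * 2^24 / clock relationship gives the register values for two SID voices. A quick sketch (my own illustration, not the article's BASIC; NTSC clock assumed):

```python
NTSC_CLOCK = 1_022_730  # Hz

def sid_register(freq_hz: float, clock_hz: int) -> int:
    # 16-bit SID frequency value: f_out = value * clock / 2**24
    return round(freq_hz * 2**24 / clock_hz)

# Precise Tone Plan dialtone components, one per SID voice
low  = sid_register(350, NTSC_CLOCK)
high = sid_register(440, NTSC_CLOCK)
```

Played simultaneously on voices 1 and 2 with the triangle waveform, the pair is close enough to fool a modem listening for a dialtone.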
If we use the formulas above and solve for the SID register values using those frequencies, a few statements in BASIC will make a sufficient approximation of a dialtone on SID voices 1 and 2 (NTSC). That's all still a bit unwieldy on a smaller system, so let's make it a little friendlier. I removed the BASIC portion and wrote up a new menu system in pure assembly, incorporating and converting John's original code, and merging the PAL and NTSC versions together. It LOADs and RUNs like a BASIC program but is fully machine language. The original Xmodem, devised by Ward Christensen, uses a fixed 128 data bytes per block and a simple checksum with known deficiencies, so John opted for the more complex Xmodem-CRC variant with a cyclic redundancy check to ensure errors could be promptly detected. Most terminal programs support this mode. We previously encountered a variant of Xmodem-CRC when we were figuring out how The Newsroom's Wire Service operated. From that article, the CRC-16-CCITT used in Xmodem-CRC is transmitted using this algorithm, rendered in K&R C:

    int calcrc(ptr, count)
    char *ptr;
    int count;
    {
        int crc, i;

        crc = 0;
        while (--count >= 0) {
            crc = crc ^ (int)*ptr++ << 8;
            for (i = 0; i < 8; ++i)
                if (crc & 0x8000)
                    crc = crc << 1 ^ 0x1021;
                else
                    crc = crc << 1;
        }
        return (crc & 0xFFFF);
    }

Here is how the transmit loop begins:

          lda #$01
          sta $96       ; number of current packet
          ; start sending the current Xmodem packet
    lc08a lda #$00
          sta $02
          lda #$01
          jsr lc027     ; send Xmodem SOH $01
          lda $96
          jsr lc027     ; send packet number
          eor #$ff
          jsr lc027     ; send inverse of packet number
          ldy #$03
          jsr $ffa5     ; Kernal acptr, read next byte from file
          sta $8b       ; store in high byte of CRC
          jsr lc027     ; transmit it
          inc $02
          ldx $90       ; EOF?
          beq lc0b3
    lc0ad lda #$1a      ; yes, handle final packet, store a ^Z
          sta $8c
          bne lc115
    lc0b3 jsr $ffa5     ; read again
          sta $8c       ; store in low byte of CRC
          jsr lc027     ; transmit again
          ldx $90
          bne lc115     ; check EOF again
    lc0bf inc $02

The running CRC-16 is kept big-endian, unlike the usual 6502 little-endian convention, and the routine exploits the fact that most transmitted blocks will have at least two data bytes.
The routine starts each block by clearing the count, then with the modulation routine at $c027 above it sends the standard Xmodem start-of-header (^A) character, the packet number and inverse of packet number, then (checking for EOF each time) reads two characters into the high byte and low byte of the running CRC-16 and transmits them. If the status word at $90 shows an EOF, this condition remains until the file is closed. For each byte after that to complete the block, another one is read and stored into $8d, then shifted into $8b and $8c: When a high bit is rolled out of the rolling CRC-16, this is detected as carry being set (no need for a bitmask) and the rolling CRC-16 bytes are exclusive-ORed with the required polynomial value ($1021, 4129). This is a very efficient translation of the algorithm. The code then continues to run to complete the block of 128 bytes. After the last byte is read from the file and shifted in, we need to incorporate three zero bytes into the CRC-16 to represent the header we sent. We then send that value and "wait" to pretend it worked, then go back for the next block. The weird delay routine allowed John to fit the entire loop into the 127-byte maximum forward displacement of a relative branch instruction. In fact, when I added code to flash the border on each block, I had to insert an absolute jump instead since those three extra bytes upset the apple cart. At the very end of the file, any block in progress is padded with ^Z (SUB), and then Xmodem EOT (^D) is sent to terminate the transmission: Note that the branch at $c11b will always be taken since we just loaded a non-zero immediate into the accumulator. Again, another nice way of increasing code density and reducing type-in size. Should the file have ended in the first two bytes used to prime the CRC-16, John's code just stuffs ^Zs into it manually.
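For reference, the same CRC in Python (a straightforward re-rendering of the K&R algorithm above, not John's code):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16-CCITT as used by Xmodem-CRC: polynomial 0x1021, initial
    value 0, data shifted in high bit first, no final XOR."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_xmodem(b"123456789")))  # 0x31c3, the standard check value
```

The standard check value for this CRC variant over the ASCII digits "123456789" is 0x31c3, a handy sanity test for any reimplementation.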
John's original post containing the type-in versions (you can cut and paste these into VICE, if you like), plus the assembly source for this unified version and a pre-built binary, are on GitHub. As John never asserted copyright to his programs and explicitly intended them to be freely distributable so that others could use and learn from them, I've placed this version into the public domain (to the extent available in your jurisdiction). You can assemble it using xa65. John was a good guy with a clever programming style and it was nice to see his code running again (and working, though that was a given). Plus, this is a great use for a Commodore to support your other systems, and a roadmap for doing something similar on other machines with sufficiently capable sound hardware. In future articles I think we'll explore a few other things he wrote, including that audio digitizer. I think he would have enjoyed it.

Refurb weekend: Atari Stacy

Ask any Atari Stacy owner how to open an Atari Stacy and the answer is always "never, if you can avoid it." So I'll just lead with this spoiler image after the refurb to prove this particular escapade didn't completely end in tragedy. Few people will ever see the much lighter and streamlined ST Book in the flesh, let alone own one. If you really want a portable all-in-one Atari ST system, the Stacy is likely the best you're gonna do. And we're going to make it worse, because this is the lowest-binned Stacy with the base 1MB of memory. I want to put the full 4MB the hardware supports in it to expand its operating system choices. It turns out that's much harder to do than I ever expected, making repairing its bad left mouse button while we're in there almost incidental — let's just say the process eventually involved cutting sheet metal. I'm not entirely happy with the end result but it's got 4MB, it's back together and it boots. Grit your teeth while we do a post-mortem on this really rough Refurb Weekend. Internally the Stacy is essentially a portable ST (it lacks a blitter, but does have an expansion slot electrically compatible with the Mega); it sports a backlit monochrome LCD, keyboard, trackball in lieu of the standard ST mouse, and a full assortment of ST ports including built-in MIDI. A floppy drive came standard; a second floppy or a 20 or 40MB internal SCSI hard disk was optional. This was Jack Tramiel-era Atari and the promises of a portable ST system were nearly as old as the ST itself. For a couple years those promises largely came to naught until Atari management noticed how popular the on-board MIDI was with musicians and music studios, who started to make requests for a transportable system that could be used on the road. These requests became voluminous enough for Tramiel's son and Atari president Sam Tramiel to greenlight work on a portable ST. In late 1988 Atari demonstrated a foam mockup of a concept design by Ira Velinsky to a small group of insiders and journalists, where it was well-received.
By keeping its internals and chipset roughly the same as shipping ST machines, the concept design was able to quickly grow into a functional prototype for Atari World and COMDEX in March 1989. Atari announced the baseline 1MB Stacy with floppy disk would start at $1495 (about $3800 in 2024 dollars), once again beating its other 68000-based competitors to the punch, as Apple hadn't yet made a portable Macintosh and Commodore never delivered a portable Amiga. Sam Tramiel was buoyed by the response, saying people went "crazy" for the Stacy prototype, and vowed that up to 35,000 a month could be made to sate demand. Regulatory approval, however, was slow in coming for any configuration. FCC Part 15 certification for the hard disk-equipped 2MB Stacy2 and 4MB Stacy4 was delayed until December 1989 and at first only as Class A, officially limiting it to commercial use, while the lowest-end 1MB floppy-only Stacy didn't obtain clearance until the following year. We'll see at least one internal consequence of this shortly (I did mention sheet metal). The delays also stalled out the system's introduction in Europe and despite Tramiel's avowed industrial capacity relatively few Stacys were ultimately sold. Based on extant serial numbers, the total number is likely no more than a few thousand before Atari cancelled it in 1991, though that's greater than the successor ST Book, which probably existed in just a thousand or so units tops. The Stacy's failure to meet its technical goals (particularly with respect to size and power use) was what likely led to the ST Book's development. Unfortunately, although a significant improvement on the Stacy, the ST's decline in the market made sustaining the ST Book infeasible for Atari, and it was cancelled along with the entirety of Atari's personal computer line in 1993. A third-party upgrade did later provide an installable internal battery option which could last up to two hours.
ACSI ("Atari Computer System Interface") predates SCSI-1's standardization in 1986 but is still quite similar, using a smaller 19-pin port, a related but incompatible protocol, and a fixed bus relationship where the computer is always in control. It is nevertheless enough like SCSI that many SCSI devices can be interfaced to it — we'll come back to this too. The rear ports should be covered by a door, but it's missing from this system. Enough evidence is present that the expansion slot never got used by its prior owner(s), and I don't have anything to connect to it either. When opening the machine, feed the cables carefully through their hole in the bottom case so you can lift up the top case completely. The keyboard is run by a Hitachi 6301, a chip we've also seen in peripherals for the Convergent WorkSlate (the WorkSlate itself uses a Hitachi 6303). This serves as the keyboard, mouse/trackball and joystick controller with its own 4K internal ROM and 128 bytes of internal RAM. Counting the RAM, though, we don't have 4MB on this side. Where's the rest of it? On the other side, covered by tape. Why is it taped? So it doesn't short against anything! Remember, this is exactly how Atari shipped it! The keyboard connector is here as well. This board is quite critical. Without it, the system has no RAM, no ROM and, almost trivially by comparison, no keyboard, trackball, mouse or joysticks. If it's not connected firmly, you'll get a blank screen. One alternative upgrade path would have required desoldering of the 68HC000; that would have been a rather complex upgrade to install. Reassembly should have been straightforward. It was not. What's depicted here is in fact a consolidation of multiple false starts and a whole lot of screaming. The first part was to put the metal shield back on and bend the tabs back to hold it in position. While doing so, be careful with the display wires to get them back into their little canal because they can literally short and spark. I don't know how this is possible but they do! You can also get the display cabling messed up enough that the Stacy will continuously beep at you when you turn it on.
The only good way I found to avoid this was to pull as much play in the display wiring into the top case as possible so that the wires don't bunch up in the bottom case. How the boards are seated can also affect their connection with the logic board; the middle connector seems to be the most involved. All of this suggests Atari never meant a 1MB Stacy to be upgraded with this particular card. For screen grabs I'm using a Hall SC-VGA-2 scan converter to turn the ST's 71.2Hz high resolution display into the 60Hz my VGA box can capture. This stack doesn't get a pixel-perfect grab but the budget isn't there for the super duper OSSC right now, so you'll just have to deal. The machine boots from the copy of HDDRIVER that was already on the SD card. The ST's extensible control panel, much like Macs use CDEVs, uses CPXes (Control Panel Extensions); one of these is Show System, which was what I was using to display the memory configuration before. And, now adjusted, we still have 4MB of memory, to my great relief, with the computer back in one uneasy piece. I'm not 100% happy with the end result but the trackball button works better and our memory has quadrupled, at least when Stacy is in a good mood. Like I say, I can only conclude that the 1MB Stacy was never meant to be upgraded in this fashion. One of the third-party RAM cards might have worked, but I have no idea where I can find one. Regardless, based on the amount of apoplexy and late-night screaming that Stacy caused over the past couple months' weekends, my wife has told me in no uncertain terms that if I'm ever going to crack this laptop open again, I need to have a good long talk with her about it first. I've decided I'm okay with that.

A mostly merry Southern Hemisphere Commodore Christmas

A merry Christmas and happy holidays from the Southern Hemisphere, where it's our year to be with my wife's family in regional New South Wales, Australia. One of my wife's relatives had an "old Commodore" in their house and asked if I wanted it. Stupid question, yeah? The box also included a stack of The Australian Commodore and Amiga Review, published from 1983 to 1996. The issues here date from 4/89, 5/89, 6/89, 10/89, 11/89, 12/89, 2/90, 3/90, 4/90, 5/90, 6/90, 8/90, 9/90, and the 1990 Annual with an extensive list of Australian BBSes, software packages and user groups. By this time the Commodore 8-bits were past their prime compared to their 16-bit Amiga brethren, but there was still some decent coverage of the 64 and 128 in this set. Unlike most American Commodore magazines, there was little type-in content, at least in these particular issues; the ones here concentrate more on reviews and product announcements with sidecar tips and tricks. The other bit of literature in the box was a 1987-88 Dick Smith Electronics catalogue. If you're on one of the other continents, DSE was approximately Australia's equivalent of Radio Shack and at its peak sold a similar range of rebadged products and electronics. It is likewise no more (shut down in 2016), though its name lives on as a zombie Kogan brand; today its closest domestic equivalent would probably be Jaycar. DSE sold VTech learning machines like the Type-right and Whiz Kid, as well as Sharp pocket computers (Tandy rebadged them also). The PC-1360 and PC-1401 were more advanced than Tandy's Sharp rebadges, but they did include the lowend PC-1246 (Tandy PC-8, which avoided being the worst Tandy Pocket Computer ever because of the execrable PC-7) and the even lower-end PC-1100, a flip-face unit that had a narrower but 2-row display and basic organizer functions, and likely sold for more because of it. DSE also sold the excellent Sharp CE-126P, a lovely device that combines a thermal printer and cassette interface yet does not rely on the sure-to-fail NiCad batteries other such peripherals did to their detriment.
Tandy never sold this unit, rebadged or otherwise. Unlike Tandy, though, DSE simply chose to rebadge other PC systems instead of creating its own like the Tandy 1000 series. At least initially their PCs came from a Taiwanese company called Multitech, which started out selling their Z80-based "MicroProfessor" MPF-I SBC and later two Apple II clones, the MPF-II and MPF-III. These clones were especially notable for their onboard Chinese language support, drawing characters in high-resolution graphics and as such completely omitting the Apple II's text mode. Subsequently Multitech started producing PC clones in 1984 with the MPF-PC-XT, and over several years served as a PC OEM for many diverse companies such as Texas Instruments. The clones shown here (the 8088-based PC500 and PC700, and the AT-class 80286 PC900) may have been some of the last to bear the Multitech name because after failing to land a large contract with a German firm, company head Stan Shih decided he'd had enough and retooled the company to start selling their own PCs under their own brand in 1987. He chose a new name for the company, too: Acer. But just like the Tandy Color Computers, Dick Smith was still selling their range of 8-bit home computers as well in those days. The last of this line was the Z80-based VZ300, yet another VTech rebadge, which had a whole assortment of peripherals including memory expansion, floppy disk drive (three times the cost of the computer), and interfaces for the joysticks, cassette, printer and disk drive — which was sold separately from the disk drive! I have a VZ300 and some upgrades I need to finish building which will be in a future article. As for the Commodore itself, it comes up to the READY. prompt because the computer can't boot from its internal disk drive. The drive activity light never turns on and the drive motor never turns off. We've been inside a Commodore 128DCR for a prior refurb weekend and the disassembly here is the same.
With a little bit of care we can avoid tearing the intact warranty sticker here too.

(Several possible chip failures could match these symptoms.) One possible way to temporarily deal with the problem is disabling power and/or the serial ATN line to the internal drive and hooking up an external one. I don't have any of these chips here, and I don't even have a proper soldering iron available on this side of the Pacific, so this will be a restoration for another day while I get everything together. But it was fun looking at the software and magazines, and I think this machine is eminently repairable.

To be continued after a trip to Jaycar and some mail orders. A very happy holiday and a merry Christmas to those of you who celebrate it.
