We got it in 1983, I think, so it only took me about 41 years to get around to it. This Tomy Tutor isn't a replacement system I secondarily acquired, nor is it a Ship of Theseus Frankenstein rebuild. This is my actual first computer, in its original case, on its original components, with the Federated Group sticker still on the original box. And it still works. His High Holy Munificence Fred R. Rated was blowing these babies out for a song by then. The receipt has long since disappeared, though $99 sounds about right plus maybe around $40 or so for a joystick, cassette deck and some cartridges, compared to somewhere between $200 and $300 for the recently discounted 64 — which didn't include anything else. (It tells you something about our family finances at the time when a C64 was too expensive.) I immediately started writing my own BASIC programs on it in its perverse little BASIC dialect and when my folks indeed saved up and bought us a C64 system the next year (complete with 1702...

More from Old Vintage Computing Research

The "35-cent" Commodore 64 softmodem

Rockwell famously used 6502-based cores in modems for many years, but that doesn't mean other 6502s couldn't be used. If only there were a way to connect a Commodore 64's audio output directly to an RJ-11 plug ...

There's still a pile of Convergent WorkSlate stuff I've got to catalogue, and officially the WorkSlate's only means of telecommunications is its 300 baud internal modem. While we have a 9600bps way of wiring up a WorkSlate to a modern computer, it's always nice to have a simpler alternative, and I figured this would be a great challenge to see if John Iannetta's old program could let my Commodore SX-64 talk to my WorkSlate. Spoiler alert: it works!

I don't know precisely what happened to John; regrettably I know little of his personal history. For a period of time he was a very prolific poster on comp.sys.cbm, but his last post there was July 19, 2000, in which he replied to someone's question about the relationship between sound frequencies and SID register values. (We'll actually talk about this in a bit.) His last post I can find in any Commodore newsgroup of the era is dated the next day, July 20, though he posted through CompuServe and it's possible he made later posts there. Among his many contributions, including this one, are the Spyne self-extracting file archive utility, a .d64 downloader patch for Common Sense, an in-place PETSCII to ASCII text file converter, a user port-based audio A/D converter and player, and a custom track-by-track floppy disk formatting tool. He issued them all freely for anyone to use for any purpose. While he and I briefly corresponded over snail mail, I can't find the letter he sent me and I don't remember his exact location, and I never heard from him again. Sadly, although I hope I'm wrong, from his handwriting I knew he wasn't a young man and I'm all but certain he has since passed away. Rootsweb lists a John J. Iannetta who died in April 2001 at the age of 82. (If you know for sure, post in the comments, or E-mail me privately at ckaiser at floodgap dawt com.)

The "35-cent modem" was first posted on October 27, 1998. John estimated the 35 cent cost (about 68 cents in 2025 dollars) based on the then-purchase price of an RCA jack ("listed in my Jameco catalog at 35 cents each"), though this didn't include the phone cable. Looking in their online catalogue now, you could even go cheaper, since Jameco (not affiliated, not sponsored, just using for comparison) now sells a through-hole right-angle RCA jack for $0.29 — in 1998 dollars, that would have been a mere 15 cents. If you don't have a landline phone cable anymore, the lowest Jameco price for a compatible connector I could find was a 6P6C modular cable for $1.49. Such a cable is technically an RJ-25, but it or an RJ-14 (6P4C) will do just fine. This project is a very easy build job, so let's do it.

First, a little about how plain analogue phone lines work (this is not how T1 lines work). On old exchanges like this one (used in rural New South Wales, Australia), phone lines were carried by literal tip-and-ring connectors plugged into the switchboard to connect calls, which is where the name comes from. Telco guys call a combination of tip and ring a "pair." Each phone line uses one pair. Although it likely makes little difference for this application, there is a polarity to the connection which should be observed, i.e., tip is positive and ring is negative. The cable I used is a real USOC RJ-11C with a 6P2C connector and thus has only two wires for a single pair.
The tip wire for this first pair on typical North American RJ-11 installations can be green, or white with a blue stripe (in other countries it may be any number of other colours); the ring wire can be red, blue, or blue with a white stripe. Thus, after stripping back the wires — sometimes easier said than done on a sticky old cable — connect ring to the phono jack's ground/sheath and tip to the phono jack's centre. Make it pretty and you're done.

An important warning before we continue: from the telephone company side, the line pair carries voltage used to power the phone and ringer, so never plug this cable into a wall jack — doing so could potentially send up to 48 volts to the computer, with likely undesirable and even fiery results. A cable like this should only ever be directly connected to another modem.

The software part has to do with how data from the Commodore 64 is modulated to send to the other system's modem. For that, we turn to John's program, as he posted it (in separate versions for NTSC and PAL Commodores for reasons I'll explain as we analyse the disassembly). It was presented as a type-in program in BASIC, short enough to type in by hand, with an embedded machine language section loaded from DATA statements. Here are a couple of videos showing what it looked like in practice. The modulated audio is played through the speaker, so don't have it up too high.

Recall from when we wired up the KIM-1 so it could speak through a DECtalk that the transmission of a byte or character of data is divided into more or less distinct phases. The default state is mark (one). At the beginning of transmission comes a start bit (always space, or zero), followed by the data (seven or eight bits). After the data comes an optional parity bit, then a return to mark for the stop bit, held for at least one and sometimes two or more bit times. The most common transmission type is 8N1, which is eight data bits, no parity bit, and one stop bit (i.e., characters must be separated by no less than one stop bit, though it can be more).

Examining a recording of the output in Praat, we see a sine wave of varying wavelength. In the spectrogram at the bottom we can pick out two distinct frequencies being used to encode a character, as shown on the dark black band. This is the hallmark of audio frequency-shift keying (AFSK), often just called FSK. For this spectrogram I've typed the letter "U", which in binary is 01010101. That's the "wiggle" in the middle. John's program sends 8-N-1, so since we know the byte is framed by stop bits, which are marks/ones, we can deduce the initial frequency is used to transmit a one. Serial communications send the bits in little-endian order, i.e., from least significant to most significant, meaning the wiggle is actually the start bit (space/zero), followed by 10, 10, 10, 10, then a stop bit (mark/one) and finally the normal mark state between bytes, which in this plot is indistinguishable from the stop bit.

These mark and space frequencies go all the way back to the Bell 101, the first commercially available modem, which AT&T originally developed to carry data for the SAGE air defence network — a system straight out of Dr. Strangelove. SAGE remained in operation well into the 1980s; even as just a giant coincidence, there are many suspicious similarities between the concept and WarGames' WOPR. The 101 ran at 110 baud over regular telephone lines and became available for commercial sale in 1959. On the wire it uses separate sets of frequencies for each side of the conversation: 1070Hz and 1270Hz (space, mark) for the modem originating the call, and 2025Hz and 2225Hz (space, mark) for the modem answering the call. In 1962 AT&T introduced the Bell 103, which used the same frequencies but ran over twice as fast at 300 baud.
It quickly became very popular and almost completely replaced the 101 in commercial use. Even after the 1976 Bell 202 introduced 1200 baud operation (with different frequencies and duplex modes), it remained compatible with the 103 in 300 baud mode, and virtually every third-party 300 baud modem was compatible as well (many were also compatible with ITU-T V.21, which uses the same basic communication scheme but different frequency sets). The Originate-Answer switch on 300 baud modems like the Commodore 1600 VICMODEM and Commodore 1660 "Modem/300" selects which two frequencies the modem will send bits with, using the other two frequencies for receiving. Virtually any 300bps modem can speak Bell 103, which is why John chose it, and since we're "responding" to the other side that "initiated" the "call," this code uses answerer frequencies.

The software came in both NTSC and PAL versions because obviously something like this is highly timing-dependent, and PAL Commodore 64s run slightly slower (0.985250MHz) than NTSC systems (1.022730MHz). The variance is because each video standard uses a different master crystal from which all other clocks are obtained by dividing down, including the colourburst frequency needed for correct display and also the clock speed of the CPU. This speed additionally affects the 6581 SID sound chip, since each of its three oscillators is a 24-bit accumulator incremented by the 16-bit frequency register value (0-65535) every clock cycle, so we need different values for the mark and space frequencies on PAL and NTSC systems.

John's last known post in comp.sys.cbm, using slightly different processor speeds, explains the math: the frequency you hear is the register value ("word") multiplied by the clock rate and divided by 16777216 (2^24). Using this relationship, we can then solve for "word" to get the proper value for the SID frequency register based on the detected video standard. SID's three voices are thus able to generate tones up to ~3848Hz on PAL machines and ~3995Hz on NTSC machines, well in excess of the necessary range. Because SID generates audio asynchronously, we can just tell it to use infinite sustain (to infinitely prolong the note until we gate it off), play the mark frequency, and then leave the note playing while we go do something else, keeping the line open. Since the specification requires a sinusoidal wave, the code uses the SID's triangle waveform, which is the closest approximation. The result is, in fact, the very tone you hear at the beginning of the videos.

However, there's one other reason we need separate NTSC and PAL versions, and that's because of how John set up the baud rate. Here's how the BASIC loader starts (from the NTSC version):

After reading the hex-encoded DATA statements into memory, at line 110 John's code starts initializing both the SID and the two CIA chips, though he primarily uses CIA #2. To determine bit times, rather than having the CPU manually count off a specific number of clock cycles, this code has the CIA do it. A critical point is that while the CIA chips can be set to issue IRQs or not, their interrupt control registers will still indicate when they would have fired one, even if that individual interrupt condition is technically disabled. John turns off all interrupts on both CIAs so they won't fire and upset system timing, including the usual Timer A IRQ on CIA #1 used for keyscan, then sets Timer A on CIA #2 to repeatedly count down $0d50 (3408) clock cycles. If we divide 1022730 by 3408, we get ... 300.09, almost exactly our baud rate. (It's okay to be a bit faster as long as you're never slower.) A smaller value is used for PAL systems.
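To make that arithmetic concrete, here is a small standalone C sketch of my own (not John's code; all names here are invented) that solves the same relationship for the SID register values and the CIA Timer A bit time, using the NTSC and PAL clock rates quoted above. It also works out the Precise Tone Plan dialtone frequencies we'll want a little later.

```c
/* Sketch: derive the constants John's program needs, under the assumption
   that Fout = Fn * Fclk / 2^24 for the SID, and that one bit time at
   300 baud is Fclk / 300 CPU clock cycles. */
#include <stdio.h>

/* Solve Fout = Fn * Fclk / 16777216 for the 16-bit register value Fn. */
static unsigned sid_reg(double fout, double fclk)
{
    return (unsigned)(fout * 16777216.0 / fclk + 0.5);
}

int main(void)
{
    const double ntsc = 1022730.0, pal = 985250.0;   /* clock rates quoted above */

    /* Bell 103 answer-side frequencies (mark 2225Hz, space 2025Hz) */
    printf("NTSC: mark $%04X, space $%04X\n",
           sid_reg(2225.0, ntsc), sid_reg(2025.0, ntsc));
    printf("PAL:  mark $%04X, space $%04X\n",
           sid_reg(2225.0, pal), sid_reg(2025.0, pal));

    /* Precise Tone Plan dialtone: 350Hz + 440Hz on two voices (NTSC) */
    printf("dialtone: $%04X + $%04X\n",
           sid_reg(350.0, ntsc), sid_reg(440.0, ntsc));

    /* CIA #2 Timer A count for one 300 baud bit time */
    printf("bit time: NTSC %.0f cycles, PAL %.0f cycles\n",
           ntsc / 300.0, pal / 300.0);
    return 0;
}
```

Note that the NTSC bit time comes out to roughly 3409 cycles; John's loader uses $0d50 (3408), one cycle fewer and therefore fractionally faster than 300 baud, which as noted above is the safe direction to err.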
With the CIAs (and audio) set up, we then go into the mini-terminal, which is loaded into memory and started from the usual location for such routines at 49152 ($c000). We disassemble that next. John chose to call directly into the Kernal for these routines to short-circuit code he didn't need. With thousands of cycles available to send each single bit, the full-fat routines would have been fine, but why bother with work you don't need to do?

After initializing the screen editor, the code waits for the next Timer A interval to fire and scans the keyboard manually (since the IRQ isn't running anymore), then fetches the next key, if there is one. Assuming it's not F1 (send a file) or F7 (quit the terminal), the code then goes on to send a character using this subroutine at $c027:

Each access on the interrupt control register clears any conditions that were set. This apparent "double-wait" on entering the routine isn't an error: it ensures not only that everything's in a known state, but also that at least one stop bit's interval has elapsed between the prior character and this one. Once that has occurred, we clock out the start bit, then eight data bits least significant first, and finally leave the line back at the stop bit frequency. Each time, except for the very end, we wait for another trigger on CIA #2's ICR before we proceed.

When F1 or F7 is pressed, the mini-terminal sets location $2 to non-zero or zero respectively (above, after the call at $c00f) and returns to BASIC. BASIC then turns the Timer A IRQ on CIA #1 back on, and if $2 is non-zero, it proceeds to ask for a device number and filename. This is a fun routine on its own, but you've seen enough of the code to understand the basics of how it works, so let's get out the WorkSlate now and try it with a real device.

For the Commodore side, we're going to use one of my portable SX-64 systems. A warning about the SX-64 specifically: never plug in a video cable — more specifically, never connect the audio output — with the computer's power on. Doing so runs you a decent chance of frying the SID, something I actually did many years ago, and it's a well-known problem. This goes likewise for connecting our mutant phone cable to the SX-64, since we necessarily have to use the computer's video port for the audio signal.

The WorkSlate has a built-in terminal desk accessory, activated by opening the Phone menu and selecting Terminal. However, simply selecting the Terminal is not enough. The trick with the WorkSlate is to have the speakerphone line open (Phone, SpkPhone) first, and then try to answer with the Terminal. This is supported by the device; it assumes in this case that you've manually dialed a number somehow (say, from an attached phone handset) and the computer on the other end has answered. We already have the answer stop bit tone playing, so the WorkSlate's modem immediately hears it and tries to go on-line. ATX1D sets up a "blind dial" so that the 64's answer signal is also immediately recognized.

The interesting part is comparing how the speakerphone behaves among the three WorkSlates I now have. On the most recent one I acquired and on my "tester" unit that I soldered jumper probes to (both with serial numbers starting with CCA8415), I could hear the "call" and what the SX-64 was sending when the WorkSlate was in speakerphone mode, as expected.
However, on my regular unit (a later machine with a CCA8417 serial number), I could hear both ends of the conversation through the SX-64's speaker, including the WorkSlate's dial tones and originate frequencies — and nothing on the WorkSlate's speaker. I'm not sure if this is due to different internal wiring, changes in the tape gate array, or both. Again, this is a good reminder that the SID in the SX-64 is unusually vulnerable to stray voltages: if there were proper isolation I shouldn't have been able to hear incoming audio through the speaker output. In fairness, Commodore probably didn't think people would be wiring phone lines to SID audio either.

But let's embroider the situation a little more. Some modems may listen for a dial tone first before they attempt to do anything, especially if you need to actually dial a phony (narf narf narf) telephone number, since they reasonably expect there's a real POTS ("plain old telephone service") line on the other side. The Commodore modems of this era had no dialler of their own; the computer did. Programs like Common Sense could be provided a phone number and dial it by playing tones like music. (Interestingly, the VIC-20 does not seem to be capable of precise enough frequency control to generate DTMF; Commodore even warns against it in the Modem/300 manual.) Dialtones and other call-progress tones are often multi-frequency tones similar to DTMF, but they're specified separately by each region's telephone system. In the North American Bell System's Precise Tone Plan, dialtone is a combination of 350Hz and 440Hz at -13dBm, also played using a sine wave. If we use the formulas above and solve for the SID register values using those frequencies, a couple of BASIC statements will make a sufficient approximation of a dialtone on SID voices 1 and 2 (NTSC).

The BASIC type-in is a bit unwieldy on a smaller system, though, so let's make it a little friendlier. I removed the BASIC portion and wrote up a new menu system in pure assembly, incorporating and converting John's original code, and merging the PAL and NTSC versions together. It LOADs and RUNs like a BASIC program but is fully machine language.

The original Xmodem protocol, devised by Ward Christensen, uses a fixed 128 data bytes per block and a simple checksum with known deficiencies, so John opted for the more complex Xmodem-CRC variant, which uses a cyclic redundancy check to ensure errors can be promptly detected. Most terminal programs support this mode. We previously encountered a variant of Xmodem-CRC when we were figuring out how The Newsroom's Wire Service operated. From that article, the CRC-16-CCITT used in Xmodem-CRC is computed using this algorithm, rendered in K&R C:

```c
int calcrc(char *ptr, int count)
{
    int crc, i;

    crc = 0;
    while (--count >= 0) {
        crc = crc ^ (int)*ptr++ << 8;
        for (i = 0; i < 8; ++i)
            if (crc & 0x8000)
                crc = crc << 1 ^ 0x1021;
            else
                crc = crc << 1;
    }
    return (crc & 0xFFFF);
}
```

Here's where the file transfer code begins:

```asm
        lda #$01
        sta $96         ; number of current packet

        ; start sending the current Xmodem packet
lc08a   lda #$00
        sta $02
        lda #$01
        jsr lc027       ; send Xmodem SOH $01
        lda $96
        jsr lc027       ; send packet number
        eor #$ff
        jsr lc027       ; send inverse of packet number
        ldy #$03
        jsr $ffa5       ; Kernal acptr, read next byte from file
        sta $8b         ; store in high byte of CRC
        jsr lc027       ; transmit it
        inc $02
        ldx $90         ; EOF?
        beq lc0b3
lc0ad   lda #$1a        ; yes, handle final packet, store a ^Z
        sta $8c
        bne lc115
lc0b3   jsr $ffa5       ; read again
        sta $8c         ; store in low byte of CRC
        jsr lc027       ; transmit again
        ldx $90
        bne lc115       ; check EOF again
lc0bf   inc $02
```

Notice that John keeps the running CRC-16 big-endian, unlike the usual 6502 little-endian convention, and exploits the fact that most transmitted blocks will have at least two data bytes.
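Before we walk through the rest of the block loop, here's a little C model of mine (not John's code; the function and variable names are invented) of the shift-and-XOR update the assembly performs: the CRC high and low bytes stand in for $8b/$8c, the incoming data byte stands in for $8d, and the polynomial $1021 is applied whenever a one bit falls off the top — the "carry set" case described below. Priming the register with the first two data bytes and flushing two zero bytes through at the end gives the same answer as the calcrc() routine above for the same data.

```c
/* Sketch of a rolling big-endian CRC-16 update (polynomial $1021), one
   byte at a time, in the style of the 6502 routine's shifts. */
#include <stdint.h>
#include <stdio.h>

static void crc_roll(uint8_t *hi, uint8_t *lo, uint8_t next)
{
    for (int i = 0; i < 8; i++) {
        int carry = *hi & 0x80;                  /* bit about to roll out */
        *hi = (uint8_t)((*hi << 1) | (*lo >> 7));
        *lo = (uint8_t)((*lo << 1) | (next >> 7));
        next = (uint8_t)(next << 1);
        if (carry) {                             /* apply polynomial $1021 */
            *hi ^= 0x10;
            *lo ^= 0x21;
        }
    }
}

int main(void)
{
    const uint8_t msg[] = "123456789";           /* classic CRC test string */
    uint8_t hi = msg[0], lo = msg[1];            /* first two bytes prime the CRC */

    for (size_t i = 2; i < sizeof(msg) - 1; i++)
        crc_roll(&hi, &lo, msg[i]);              /* roll each later byte through */
    crc_roll(&hi, &lo, 0);                       /* flush the last 16 data bits */
    crc_roll(&hi, &lo, 0);
    printf("%02X%02X\n", hi, lo);                /* 31C3, the CRC-16/XMODEM check value */
    return 0;
}
```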
The routine starts each block by clearing the count, then with the modulation routine at $c027 above it sends the standard Xmodem start-of-header (^A) character, the packet number and the inverse of the packet number, then (checking for EOF each time) reads two characters into the high byte and low byte of the running CRC-16 and transmits them. If the status word at $90 shows an EOF, this condition remains until the file is closed. For each byte after that to complete the block, another one is read and stored into $8d, then shifted into $8b and $8c:

When a high bit is rolled out of the rolling CRC-16, this is detected as carry being set (no need for a bitmask) and the rolling CRC-16 bytes are exclusive-ORed with the required polynomial value ($1021, 4129). This is a very efficient translation of the algorithm. The code then continues to run to complete the block of 128 bytes. After the last byte is read from the file and shifted in, we need to incorporate three zero bytes into the CRC-16 to represent the header we sent. We then send that value and "wait" to pretend it worked, then go back for the next block. The weird delay routine allowed John to fit the entire loop into the maximum 7-bit displacement of a relative branch instruction. In fact, when I added code to flash the border on each block, I had to insert an absolute jump instead, since those three extra bytes upset the apple cart.

At the very end of the file, any block in progress is padded out with ^Z characters, and then the Xmodem end-of-transmission character EOT (^D) is sent to terminate the transmission:

Note that the branch at $c11b will always be taken since we just loaded a non-zero immediate into the accumulator. Again, another nice way of increasing code density and reducing type-in size. Should the file have ended in the first two bytes used to prime the CRC-16, John's code just stuffs ^Zs into it manually.

John's original post containing the type-in versions (you can cut and paste these into VICE, if you like), plus the assembly source for this unified version and a pre-built binary, are on GitHub. As John never asserted copyright to his programs and explicitly intended them to be freely distributable so that others could use and learn from them, I've placed this version into the public domain (to the extent available in your jurisdiction). You can assemble it using xa65.

John was a good guy with a clever programming style and it was nice to see his code running again (and working, though that was a given). Plus, this is a great use for a Commodore to support your other systems, and a roadmap for doing something similar on other machines with sufficiently capable sound hardware. In future articles I think we'll explore a few other things he wrote, including that audio digitizer. I think he would have enjoyed it.

Refurb weekend: Atari Stacy

Ask any Atari Stacy owner how to open an Atari Stacy and the answer is always "never, if you can avoid it." So I'll just lead with this spoiler image after the refurb to prove this particular escapade didn't completely end in tragedy: see the much lighter and streamlined STBook in the flesh, let alone own one. If you really want a portable all-in-one Atari ST system, the Stacy is likely the best you're gonna do. And we're going to make it worse, because this is the lowest-binned Stacy with the base 1MB of memory. I want to put the full 4MB the hardware supports in it to expand its operating system choices. It turns out that's much harder to do than I ever expected, making repairing its bad left mouse button while we're in there almost incidental — let's just say the process eventually involved cutting sheet metal. I'm not entirely happy with the end result but it's got 4MB, it's back together and it boots. Grit your teeth while we do a post-mortem on this really rough Refurb Weekend. it lacks a blitter, but does have an expansion slot electrically compatible with the Mega), it sports a backlit monochrome LCD, keyboard, trackball in lieu of the standard ST mouse, and a full assortment of ST ports including built-in MIDI. A floppy drive came standard; a second floppy or a 20 or 40MB internal SCSI hard disk was optional. This was Jack Tramiel-era Atari and the promises of a portable ST system were nearly as old as the ST itself. For a couple years those promises largely came to naught until Atari management noticed how popular the on-board MIDI was with musicians and music studios, who started to make requests for a transportable system that could be used on the road. These requests became voluminous enough for Tramiel's son and Atari president Sam Tramiel to greenlight work on a portable ST. In late 1988 Atari demonstrated a foam mockup of a concept design by Ira Velinsky to a small group of insiders and journalists, where it was well-received. By keeping its internals and chipset roughly the same as shipping ST machines, the concept design was able to quickly grow into a functional prototype for Atari World and COMDEX in March 1989. Atari announced the baseline 1MB Stacy with floppy disk would start at $1495 (about $3800 in 2024 dollars), once again beating its other 68000-based competitors to the punch as Apple hadn't themselves made a portable Macintosh yet, and Commodore never delivered a portable Amiga. Sam Tramiel was buoyed by the response, saying people went "crazy" for the Stacy prototype, and vowed that up to 35,000 a month could be made to sate demand. any configuration. FCC Part 15 certification for the hard disk-equipped 2MB Stacy2 and 4MB Stacy4 was delayed until December 1989 and at first only as Class A, officially limiting it to commercial use, while the lowest-end 1MB floppy-only Stacy didn't obtain clearance until the following year. We'll see at least one internal consequence of this shortly (I did mention sheet metal). The delays also stalled out the system's introduction in Europe and despite Tramiel's avowed industrial capacity relatively few Stacys were ultimately sold. Based on extant serial numbers, the total number is likely no more than a few thousand before Atari cancelled it in 1991, though that's greater than the successor ST Book which probably existed in just a thousand or so units tops. The Stacy's failure to meet its technical goals (particularly with respect to size and power use) was what likely led to the ST Book's development. 
Unfortunately, although a significant improvement on the Stacy, the ST's decline in the market made sustaining the ST Book infeasible for Atari, and it was cancelled along with the entirety of Atari's personal computer line in 1993. third party upgrade provided an installable internal battery option which could last up to two hours. ACSI ("Atari Computer Systems Interface") predates SCSI-1's standardization in 1986 but is still quite similar, using a smaller 19-pin port, a related but incompatible protocol, and a fixed bus relationship where the computer is always in control. It is nevertheless enough like SCSI that many SCSI devices can be interfaced to it — we'll come back to this too. The rear ports should be covered by a door, but it's missing from this system. is present that the expansion slot never got used by its prior owner(s), and I don't have anything to connect to it either. carefully through their hole in the bottom case so you can lift up the top case completely. peripherals for the Convergent WorkSlate (the WorkSlate itself uses a Hitachi 6303). This serves as the keyboard, mouse/trackball and joystick controller with its own 4K internal ROM and 128 bytes of internal RAM. Counting the RAM, though, we don't have 4MB on this side. Where's the rest of it? other side, covered by tape. Why is it taped? So it doesn't short against anything! Remember, this is exactly how Atari shipped it! The keyboard connector is here as well. This board is quite critical. Without it, the system has no RAM, no ROM and, almost trivially by comparison, no keyboard, trackball, mouse or joysticks. If it's not connected firmly, you'll get a blank screen. requiring desoldering of the 68HC000. This would have been a rather complex upgrade to install. not. What's depicted here is in fact a consolidation of multiple false starts and a whole lot of screaming. The first part was to put the metal shield back on and bend the tabs back to hold it in position. While doing so, be careful with the display wires to get them back into their little canal because they can literally short and spark. I don't know how this is possible but they do! You can also get the display cabling messed up enough that the Stacy will continuously beep at you when you turn it on. The only good way I found to avoid this was to pull as much play in the display wiring into the top case as possible so that the wires don't bunch up in the bottom case. also affect its connection with the logic board. The middle one seems to be the most involved. All of this suggests Atari never meant a 1MB Stacy to be upgraded with this particular card. Hall SC-VGA-2 scan converter to turn the ST's 71.2Hz high resolution display into the 60Hz my VGA box can capture. This stack doesn't get a pixel-perfect grab but the budget isn't there for the super duper OSSC right now, so you'll just have to deal. HDDRIVER that was already on the SD card. extensible control panel, much like Macs use CDEVs, uses CPXes (Control Panel Extensions). Show System. This was what I was using to display the memory configuration before. And, now adjusted, we still have 4MB of memory to my great relief with the computer back in one uneasy piece. I'm not 100% happy with the end result but the trackball button works better and our memory has quadrupled, at least when Stacy is in a good mood. Like I say, I can only conclude that the 1MB Stacy was never meant to be upgraded in this fashion. One of the third-party RAM cards might have worked, but I have no idea where I can find one. 
Regardless, based on the amount of apoplexy and late-night screaming that Stacy caused over the past couple months' weekends, my wife has told me in no uncertain terms that if I'm ever going to crack this laptop open again, I need to have a good long talk with her about it first. I've decided I'm okay with that.

A mostly merry Southern Hemisphere Commodore Christmas

A merry Christmas and happy holidays from the Southern Hemisphere, where it's my year to be with my wife's family in regional New South Wales, Australia. A friend of the family had an "old Commodore" in their house and asked if I wanted it. Stupid question, yeah? The Australian Commodore and Amiga Review published from 1983 to 1996. The issues here date from 4/89, 5/89, 6/89, 10/89, 11/89, 12/89, 2/90, 3/90, 4/90, 5/90, 6/90, 8/90, 9/90, and the 1990 Annual with an extensive list of Australian BBSes, software packages and user groups. By this time the Commodore 8-bits were past their prime compared to their 16-bit Amiga brethren, but there was still some decent coverage of the 64 and 128 in this set. Unlike most American Commodore magazines, there was little type-in content, at least in these particular issues; the ones here concentrate more on reviews and product announcements with sidecar tips and tricks. The other bit of literature in the box was a 1987-88 Dick Smith Electronics catalogue. If you're on one of the other continents, DSE was approximately Australia's equivalent of Radio Shack and at its peak sold a similar range of rebadged products and electronics. It is likewise no more (shut down in 2016), though its name lives on as a zombie Kogan brand; today its closest domestic equivalent would probably be Jaycar. Type-right and Whiz Kid. rebadged them also). The PC-1360 and PC-1401 were more advanced than Tandy's Sharp rebadges, but they did include the lowend PC-1246 (Tandy PC-8, which avoided being the worst Tandy Pocket Computer ever because of the execrable PC-7) and even lower-end PC-1100, a flip-face unit that had a narrower but 2-row display and basic organizer functions, and sold for more likely because of it. DSE also sold the excellent Sharp CE-126P, a lovely device that combines a thermal printer and cassette interface yet does not rely on the sure-to-fail NiCad batteries other such peripherals did to their detriment. Tandy never sold this unit, rebadged or otherwise. Unlike Tandy, though, DSE simply chose to rebadge other PC systems instead of creating its own like the Tandy 1000 series. At least initially their PCs came from a Taiwanese company called Multitech, which started in 1979 selling their Z80-based "MicroProfessor" MPF-I SBC and later two Apple II clones, the MPF-II and MPF-III. These clones were especially notable for their onboard Chinese language support, drawing characters in high-resolution graphics and as such completely omitting the Apple II's text mode. Subsequently Multitech started producing PC clones in 1984 with the MPF-PC-XT, and over several years served as a PC OEM for many diverse companies such as Texas Instruments. The clones shown here (the 8088-based PC500 and PC700, and the AT-class 80286 PC900) may have been some of the last to bear the Multitech name because after failing to land a large contract with a German firm, company head Stan Shih decided he'd had enough and retooled the company to start selling their own PCs under their own brand in 1987. He chose a new name for the company, too: Acer. But just like the Tandy Color Computers, Dick Smith was still selling their range of 8-bit home computers as well in those days. The last of this line was the Z80-based VZ300, yet another VTech rebadge, and had a whole assortment of peripherals including memory expansion, floppy disk drive (three times the cost of the computer), and interfaces for the joysticks, cassette, printer and disk drive — which was sold separately from the disk drive! 
I have a VZ300 and some upgrades I need to finish building which will be in a future article. READY. prompt because the computer can't boot from its internal disk drive. The drive activity light never turns on and the drive motor never turns off. inside a Commodore 128DCR for a prior refurb weekend and the disassembly here is the same. With a little bit of care we can avoid tearing the intact warranty sticker here too. To be continued after a trip to Jaycar and some mail orders. A very happy holiday and a merry Christmas to those of you who celebrate it.

Composite and hard reset mods for the Tandyvision One

I still have my literal first home computer (the Tomy Tutor), and it so happens I also have my literal first game console: the Tandyvision One, Tandy Radio Shack's label variant of the Mattel Intellivision Master Component. The Mattel Intellivision proper originally hails from 1978 and is notable for remaining supported and sold in three different decades (until 1990). Its development is explained well in many places, including by the Blue Sky Rangers themselves, so I'll talk here mostly about the variants. Most of them looked like (and were) ordinary Master Components with different trim; for example, early units were manufactured by GTE Sylvania and GTE had a label variant of their own using silver inlays instead of Mattel standard gold, sold until around 1980. Probably the most "extreme" was the 1981 Sears Super Video Arcade, which had a rather different beige top case and detachable controllers, but was nevertheless manufactured by Mattel under contract using the same basic hardware and slightly different EXEC ROMs. The Super Video Arcade was especially notable because the prior Sears Video Arcade was an Atari 2600 VCS clone. Other variants from around this time include the 1982 Bandai Intellivision, also otherwise identical to a standard Master Component except for Japanese television channel tuning, and the Digiplay Intellivision (also sold as Digimed), which was manufactured by a Sharp subsidiary in Brazil due to that country's then-protectionist policy against imported electronics. Tandy's particular spin was also later in the Inty's lifecycle. In 1981 Mattel Electronics moved manufacturing to its own facilities in Hong Kong; by 1982 they were working on the "Big Mac" project that became the 1983 Intellivision II, a smaller and cheaper cost-reduced version. Tandy, always willing to label engineer first and innovate second (sometimes third, or tenth), made a deal with Mattel as OEM to rebadge some of the remaining O.G. Master Components and sell them in Radio Shack stores. the unreleased 1989 Tutorvision. INTV shifted to NES and Sega development as Inty sales dropped, but their licensing arrangements required them to discontinue the Intellivision in 1990 and the company went bankrupt in 1991. Near as I can determine, only Digiplay sold a non-Mattel version of the Intellivision II (in Brazil), and only Mattel ever offered the Entertainment Computer System. prior to the robbery) it suddenly decided to start working again. In fact, it works perfectly now, just as it used to. I'm not sure why. Anyway, let's get started. There are six Phillips-head screws in the bottom case in the large recessed openings. Remove those and turn it over. off switch (i.e., the button switch is normally closed and pressing it opens the circuit). This particular button was very handy because it has little holes that fit the wire, so it was just a matter of threading them through, making sure the two sides were separated, and crimping it down. No soldering needed. This is the basic notion. The US FCC was very strict about shielding and radio emissions in 1978, so on the original Master Component everything apart from the power circuit lives in a shielded submodule called the logic board assembly. (This was not the case for the Intellivision II, which further helped it to be cheaper.) This submodule is in fact soldered shut for even less RF leakage, with just a few holes for the channel selector, RF out, screw mounts and reset button. 
Since this mod unavoidably involves some permanent modification to the logic board, even of a minor sort, I decided to do it on a separate known good logic board assembly I had on the shelf from another machine (another Tandyvision, as it happens). That way if I screwed it up, at least I wasn't doing it to the original assembly from my real machine. Plus, I still don't know why the STIC chip in my original console decided to pull a Lazarus, while this replacement one was unlikely to require service anytime soon. carefully unplug the controllers (note their orientation — there are a lot of fine wires!) and then lift out the entire submodule from the bottom case. Fortunately, the age of the solder even on this obviously newer unit is such that a good tug with a metal spatula will break most of the joins cold (I only had to heat up a couple). The "top" we are opening here is actually the bottom, so turn it over and remove the metal shield when the joins are open. Intellivision reimplementation using an NES-on-a-chip. This grab was done with the Sylvie, but the Sylvie has an RF modulator that's nearly as good as this one. The Inogeni VGA box grabbed it while connected to an LCDT600 to display the RF signal and I'm not sure if that's why the top of the screen has that red shift, though the colours are otherwise nice and there is relatively little ghosting. not to do was replace the discs on the controllers (the pads underneath them are still in good nick). Yes, the top layer of the disc is worn down in places, but it's worn because we played it. And now that Dad's no longer with us, touching that pad still feels a little like touching him. So I decided to keep it that way, just like he left it. You know, in case he ever drops by for a game of Biplanes or something. I'll even let him win.

The Hall SC-VGA-2 video processor, the Atari ST and NeXTSTEP: more tales of the unscreenshotable

A periodic fascination on this blog is figuring out better ways to get better screenshots of our classic systems, which often hail from the Wild Wild West/East in terms of video standards (read all entries in this series). Naturally the best way is a bitwise direct grab of the framebuffer, but that's only possible if there's sufficient operating system support. This support is obviously absent for things like boot messages (especially important when investigating NetWare on the Power Mac 6100), so we need to figure out a way to capture that information. My capture box of choice is currently an Inogeni VGA2USB3, which is small, self-contained, USB-powered, highly compatible and makes high quality grabs of anything you can wire into composite or a VGA HD-15 connector up to 1080p, but is limited to 60Hz refresh rates. Various solutions like the OSSC exist, but these are more oriented to arcades and consoles rather than (our primary interest) workstations. While you might be able to trick the hardware into emitting a compatible signal, that's not good enough or even possible with several of my machines. Previously my problem child was astro, my SAIC Galaxy 1100, a modified PA-RISC HP 9000/712 crammed into a MIL-SPEC portable case with a fabulous built-in flat panel. These machines ran HP-UX 10.10 in their original heyday, but this particular system runs NeXTSTEP 3.3 for PA-RISC during the brief period of time NeXT supported the architecture and was a big hit at the Vintage Computer Festival West a few years ago. Its flat panel runs at an odd 62Hz and the external VGA port only generates a 60Hz signal for 640x480 (all other resolutions use different refresh rates), which is hopeless for running NeXTSTEP. However, now I have a new candidate I'd like to get some grabs off: a particularly problematic member of the Atari ST family which has been the subject of a long-running and highly frustrating extended Refurb Weekend. You'll get to meet this bad girl soon enough. The standard ST high resolution mode is 640x400 — at 71.2Hz. I can get a picture from it with my trusty NEC flat panel, but not with the Inogeni. The usual solution to this is a scan converter, but those can be expensive and inconvenient. Here's one I picked up used on eBay for $2. Yes, really. It cost more to ship it. an HDMI version with additional resolutions ... for around US$500. However, this or the slightly newer SC-VGA-2A and SC-VGA-2B are all relatively common devices and found substantially cheaper used. Let's try it out and show some sample output, including those delicious NeXTSTEP system messages and some ST grabs. The reason I got the SC-VGA-2 so cheap, and I actually ordered two, was it was sold untested (no power supply). It looked like a robust device in a metal enclosure, so I figured it probably did work, but an extra $2 for a spare to hedge my bets seemed good insurance. Both of them do in fact work. Data General/One and the ULI successor to the AtariLab). Technically this is an 8052-type microcontroller with 1K of on-chip RAM and 32K of on-chip flash for program memory, though it must also have some means of storing settings internally since there are no other obvious sources of NVRAM. This particular part is rated up to 40MHz, but the crystal next to it is 11.052MHz, which still sounds pretty quick until you recall MCS-51 chips take about 12 cycles per instruction (compare to a 6502 that ranges from two to six). 
It seems to drive the unknown video ASIC using its on-board serial port which was not an unusual mechanism for the time; see, for example, the Focus FS401LF. The other, smaller chip near the input port is an MStar MST9883, which is an overgrown A/D converter sampling analogue pixel data and emitting a serial stream of digital RGB and clock for the video ASIC. It can sample up to 140MHz and doesn't appear to use the 14.31818MHz crystal next to it, which seems to be used by the ASIC. I don't think Hall designed this board; it seems to be Taiwanese based on the board markings, chip manufacturers and some Engrish in the Hall manual which was clearly from an image grab of something else. Other vendors may produce an equivalent version of this device, so if you know of one, post it in the comments. path console graphics (graphics1 is the "alternate" on-board flat panel) and reset. monitor 2). Despite being listed as supported, this caused a black screen, so I (blindly) reset the Galaxy and switched back to mode 5, and then tried mode 3. C. I don't know what this means and Hall doesn't mention it as a separate SKU. The input is "XGA-70" and the output is "XGA-60." to 1280x1024 60Hz. I should note that I don't know if the Galaxy video modes are standard, so I can't say if this is the Hall box's fault or not. -v for a verbose boot. OmniWeb 2.7 running Crypto Ancienne for TLS 1.3. But let's compare that with a similar Grab shot: 13-pin rear video port. Remember, this was Tramiel's Atari, so we got things like 13-pin video with wacky connectors and ACSI instead of SCSI. The converter is passive, so there is no scan conversion. LCDT600 I use for PAL composite capture, it requires an intermediary step with its own power supply and introduces a further amount of lag into the system. It also doesn't seem to display all the modes it advertises, though I have not yet determined who's really at fault for that. But it's a true scan converter that isn't very large, really does work, and seems reliable and well-built. I certainly got my $2 worth, by golly.


More in technology

The origin and unexpected evolution of the word "mainframe"

What is the origin of the word "mainframe", referring to a large, complex computer? Most sources agree that the term is related to the frames that held early computers, but the details are vague.1 It turns out that the history is more interesting and complicated than you'd expect. Based on my research, the earliest computer to use the term "main frame" was the IBM 701 computer (1952), which consisted of boxes called "frames." The 701 system consisted of two power frames, a power distribution frame, an electrostatic storage frame, a drum frame, tape frames, and most importantly a main frame. The IBM 701's main frame is shown in the documentation below.2 This diagram shows how the IBM 701 mainframe swings open for access to the circuitry. From "Type 701 EDPM [Electronic Data Processing Machine] Installation Manual", IBM. From Computer History Museum archives. The meaning of "mainframe" has evolved, shifting from being a part of a computer to being a type of computer. For decades, "mainframe" referred to the physical box of the computer; unlike modern usage, this "mainframe" could be a minicomputer or even microcomputer. Simultaneously, "mainframe" was a synonym for "central processing unit." In the 1970s, the modern meaning started to develop—a large, powerful computer for transaction processing or business applications—but it took decades for this meaning to replace the earlier ones. In this article, I'll examine the history of these shifting meanings in detail. Early computers and the origin of "main frame" Early computers used a variety of mounting and packaging techniques including panels, cabinets, racks, and bays.3 This packaging made it very difficult to install or move a computer, often requiring cranes or the removal of walls.4 To avoid these problems, the designers of the IBM 701 computer came up with an innovative packaging technique. This computer was constructed as individual units that would pass through a standard doorway, would fit on a standard elevator, and could be transported with normal trucking or aircraft facilities.7 These units were built from a metal frame with covers attached, so each unit was called a frame. The frames were named according to their function, such as the power frames and the tape frame. Naturally, the main part of the computer was called the main frame. An IBM 701 system at General Motors. On the left: tape drives in front of power frames. Back: drum unit/frame, control panel and electronic analytical control unit (main frame), electrostatic storage unit/frame (with circular storage CRTs). Right: printer, card punch. Photo from BRL Report, thanks to Ed Thelen. The IBM 701's internal documentation used "main frame" frequently to indicate the main box of the computer, alongside "power frame", "core frame", and so forth. 
For instance, each component in the schematics was labeled with its location in the computer, "MF" for the main frame.6 Externally, however, IBM documentation described the parts of the 701 computer as units rather than frames.5 The term "main frame" was used by a few other computers in the 1950s.8 For instance, the JOHNNIAC Progress Report (August 8, 1952) mentions that "the main frame for the JOHNNIAC is ready to receive registers" and they could test the arithmetic unit "in the JOHNNIAC main frame in October."10 An article on the RAND Computer in 1953 stated that "The main frame is completed and partially wired" The main body of a computer called ERMA is labeled "main frame" in the 1955 Proceedings of the Eastern Computer Conference.9 Operator at console of IBM 701. The main frame is on the left with the cover removed. The console is in the center. The power frame (with gauges) is on the right. Photo from NOAA. The progression of the word "main frame" can be seen in reports from the Ballistics Research Lab (BRL) that list almost all the computers in the United States. In the 1955 BRL report, most computers were built from cabinets or racks; the phrase "main frame" was only used with the IBM 650, 701, and 704. By 1961, the BRL report shows "main frame" appearing in descriptions of the IBM 702, 705, 709, and 650 RAMAC, as well as the Univac FILE 0, FILE I, RCA 501, READIX, and Teleregister Telefile. This shows that the use of "main frame" was increasing, but still mostly an IBM term. The physical box of a minicomputer or microcomputer In modern usage, mainframes are distinct from minicomputers or microcomputers. But until the 1980s, the word "mainframe" could also mean the main physical part of a minicomputer or microcomputer. For instance, a "minicomputer mainframe" was not a powerful minicomputer, but simply the main part of a minicomputer.13 For example, the PDP-11 is an iconic minicomputer, but DEC discussed its "mainframe."14. Similarly, the desktop-sized HP 2115A and Varian Data 620i computers also had mainframes.15 As late as 1981, the book Mini and Microcomputers mentioned "a minicomputer mainframe." "Mainframes for Hobbyists" on the front cover of Radio-Electronics, Feb 1978. Even microcomputers had a mainframe: the cover of Radio Electronics (1978, above) stated, "Own your own Personal Computer: Mainframes for Hobbyists", using the definition below. An article "Introduction to Personal Computers" in Radio Electronics (Mar 1979) uses a similar meaning: "The first choice you will have to make is the mainframe or actual enclosure that the computer will sit in." The popular hobbyist magazine BYTE also used "mainframe" to describe a microprocessor's box in the 1970s and early 1980s16. BYTE sometimes used the word "mainframe" both to describe a large IBM computer and to describe a home computer box in the same issue, illustrating that the two distinct meanings coexisted. Definition from Radio-Electronics: main-frame n: COMPUTER; esp: a cabinet housing the computer itself as distinguished from peripheral devices connected with it: a cabinet containing a motherboard and power supply intended to house the CPU, memory, I/O ports, etc., that comprise the computer itself. Main frame synonymous with CPU Words often change meaning through metonymy, where a word takes on the meaning of something closely associated with the original meaning. 
Through this process, "main frame" shifted from the physical frame (as a box) to the functional contents of the frame, specifically the central processing unit.17 The earliest instance that I could find of the "main frame" being equated with the central processing unit was in 1955. Survey of Data Processors stated: "The central processing unit is known by other names; the arithmetic and ligical [sic] unit, the main frame, the computer, etc. but we shall refer to it, usually, as the central processing unit." A similar definition appeared in Radio Electronics (June 1957, p37): "These arithmetic operations are performed in what is called the arithmetic unit of the machine, also sometimes referred to as the 'main frame.'" The US Department of Agriculture's Glossary of ADP Terminology (1960) uses the definition: "MAIN FRAME - The central processor of the computer system. It contains the main memory, arithmetic unit and special register groups." I'll mention that "special register groups" is nonsense that was repeated for years.18 This definition was reused and extended in the government's Automatic Data Processing Glossary, published in 1962 "for use as an authoritative reference by all officials and employees of the executive branch of the Government" (below). This definition was reused in many other places, notably the Oxford English Dictionary.19

Definition from Bureau of the Budget: frame, main, (1) the central processor of the computer system. It contains the main storage, arithmetic unit and special register groups. Synonymous with (CPU) and (central processing unit). (2) All that portion of a computer exclusive of the input, output, peripheral and in some instances, storage units.

By the early 1980s, defining a mainframe as the CPU had become obsolete. IBM stated that "mainframe" was a deprecated term for "processing unit" in the Vocabulary for Data Processing, Telecommunications, and Office Systems (1981); the American National Dictionary for Information Processing Systems (1982) was similar. Computers and Business Information Processing (1983) bluntly stated: "According to the official definition, 'mainframe' and 'CPU' are synonyms. Nobody uses the word mainframe that way."

Equating the mainframe and the CPU led to a semantic conflict in the 1970s, when the CPU became a microprocessor chip rather than a large box. For the most part, this was resolved by breaking apart the definitions of "mainframe" and "CPU", with the mainframe being the computer or class of computers, while the CPU became the processor chip. However, some non-American usages resolved the conflict by using "CPU" to refer to the box/case/tower of a PC. (See the discussions at https://news.ycombinator.com/item?id=21336515 and https://superuser.com/questions/1198006/is-it-correct-to-say-that-main-memory-ram-is-a-part-of-cpu.)

Mainframe vs. peripherals

Rather than defining the mainframe as the CPU, some dictionaries defined the mainframe in opposition to the "peripherals", the computer's I/O devices.
The two definitions are essentially the same, but have a different focus.20 One example is the IFIP-ICC Vocabulary of Information Processing (1966) which defined "central processor" and "main frame" as "that part of an automatic data processing system which is not considered as peripheral equipment." Computer Dictionary (1982) had the definition "main frame—The fundamental portion of a computer, i.e. the portion that contains the CPU and control elements of a computer system, as contrasted with peripheral or remote devices usually of an input-output or memory nature." One reason for this definition was that computer usage was billed for mainframe time, while other tasks such as printing results could save money by taking place directly on the peripherals without using the mainframe itself.21 A second reason was that the mainframe vs. peripheral split mirrored the composition of the computer industry, especially in the late 1960s and 1970s. Computer systems were built by a handful of companies, led by IBM. Compatible I/O devices and memory were built by many other companies that could sell them at a lower cost than IBM.22 Publications about the computer industry needed convenient terms to describe these two industry sectors, and they often used "mainframe manufacturers" and "peripheral manufacturers." Main Frame or Mainframe? An interesting linguistic shift is from "main frame" as two independent words to a compound word: either hyphenated "main-frame" or the single word "mainframe." This indicates the change from "main frame" being a type of frame to "mainframe" being a new concept. The earliest instance of hyphenated "main-frame" that I found was from 1959 in IBM Information Retrieval Systems Conference. "Mainframe" as a single, non-hyphenated word appears the same year in Datamation, mentioning the mainframe of the NEAC2201 computer. In 1962, the IBM 7090 Installation Instructions refer to a "Mainframe Diag[nostic] and Reliability Program." (Curiously, the document also uses "main frame" as two words in several places.) The 1962 book Information Retrieval Management discusses how much computer time document queries can take: "A run of 100 or more machine questions may require two to five minutes of mainframe time." This shows that by 1962, "main frame" had semantically shifted to a new word, "mainframe." The rise of the minicomputer and how the "mainframe" become a class of computers So far, I've shown how "mainframe" started as a physical frame in the computer, and then was generalized to describe the CPU. But how did "mainframe" change from being part of a computer to being a class of computers? This was a gradual process, largely happening in the mid-1970s as the rise of the minicomputer and microcomputer created a need for a word to describe large computers. Although microcomputers, minicomputers, and mainframes are now viewed as distinct categories, this was not the case at first. For instance, a 1966 computer buyer's guide lumps together computers ranging from desk-sized to 70,000 square feet.23 Around 1968, however, the term "minicomputer" was created to describe small computers. 
The story is that the head of DEC in England created the term, inspired by the miniskirt and the Mini Minor car.24 While minicomputers had a specific name, larger computers did not.25 Gradually in the 1970s "mainframe" came to be a separate category, distinct from "minicomputer."26,27 An early example is Datamation (1970), describing systems of various sizes: "mainframe, minicomputer, data logger, converters, readers and sorters, terminals." The influential business report EDP first split mainframes from minicomputers in 1972.28 The line between minicomputers and mainframes was controversial, with articles such as Distinction Helpful for Minis, Mainframes and Micro, Mini, or Mainframe? Confusion persists (1981) attempting to clarify the issue.29 With the development of the microprocessor, computers became categorized as mainframes, minicomputers or microcomputers. For instance, a 1975 Computerworld article discussed how the minicomputer competes against the microcomputer and mainframes. Adam Osborne's An Introduction to Microcomputers (1977) described computers as divided into mainframes, minicomputers, and microcomputers by price, power, and size. He pointed out the large overlap between categories and avoided specific definitions, stating that "A minicomputer is a minicomputer, and a mainframe is a mainframe, because that is what the manufacturer calls it."32 In the late 1980s, computer industry dictionaries started defining a mainframe as a large computer, often explicitly contrasted with a minicomputer or microcomputer. By 1990, they mentioned the networked aspects of mainframes.33 IBM embraces the mainframe label Even though IBM is almost synonymous with "mainframe" now, IBM avoided marketing use of the word for many years, preferring terms such as "general-purpose computer."35 IBM's book Planning a Computer System (1962) repeatedly referred to "general-purpose computers" and "large-scale computers", but never used the word "mainframe."34 The announcement of the revolutionary System/360 (1964) didn't use the word "mainframe"; it was called a general-purpose computer system. The announcement of the System/370 (1970) discussed "medium- and large-scale systems." The System/32 introduction (1977) said, "System/32 is a general purpose computer..." The 1982 announcement of the 3084, IBM's most powerful computer at the time, called it a "large scale processor," not a mainframe. IBM started using "mainframe" as a marketing term in the mid-1980s. For example, the 3270 PC Guide (1986) refers to "IBM mainframe computers." An IBM 9370 Information System brochure (c. 1986) says the system was "designed to provide mainframe power." IBM's brochure for the 3090 processor (1987) called them "advanced general-purpose computers" but also mentioned "mainframe computers." A System/390 brochure (c. 1990) discussed "entry into the mainframe class." The 1990 announcement of the ES/9000 called them "the most powerful mainframe systems the company has ever offered." The IBM System/390: "The excellent balance between price and performance makes entry into the mainframe class an attractive proposition." (IBM System/390 brochure.) By 2000, IBM had enthusiastically adopted the mainframe label: the z900 announcement used the word "mainframe" six times, calling it the "reinvented mainframe." In 2003, IBM announced "The Mainframe Charter", describing IBM's "mainframe values" and "mainframe strategy." Now, IBM has retroactively applied the name "mainframe" to their large computers going back to 1959 (link), (link).
Mainframes and the general public While "mainframe" was a relatively obscure computer term for many years, it became widespread in the 1980s. The Google Ngram graph below shows the popularity of "microcomputer", "minicomputer", and "mainframe" in books.36 The terms became popular during the late 1970s and 1980s. The popularity of "minicomputer" and "microcomputer" roughly mirrored the development of these classes of computers. Unexpectedly, even though mainframes were the earliest computers, the term "mainframe" peaked later than the other types of computers. N-gram graph from Google Books Ngram Viewer. Dictionary definitions I studied many old dictionaries to see when the word "mainframe" showed up and how they defined it. To summarize, "mainframe" started to appear in dictionaries in the late 1970s, first defining the mainframe in opposition to peripherals or as the CPU. In the 1980s, the definition gradually changed to the modern definition, with a mainframe distinguished as a large, fast, and often centralized system. These definitions were roughly a decade behind industry usage, which switched to the modern meaning in the 1970s. The word didn't appear in older dictionaries, such as the Random House College Dictionary (1968) and Merriam-Webster (1974). The earliest definition I could find was in the supplement to Webster's International Dictionary (1976): "a computer and esp. the computer itself and its cabinet as distinguished from peripheral devices connected with it." Similar definitions appeared in Webster's New Collegiate Dictionary (1976, 1980). A CPU-based definition appeared in Random House College Dictionary (1980): "the device within a computer which contains the central control and arithmetic units, responsible for the essential control and computational functions. Also called central processing unit." The Random House Dictionary (1978, 1988 printing) was similar. The American Heritage Dictionary (1982, 1985) combined the CPU and peripheral approaches: "mainframe. The central processing unit of a computer exclusive of peripheral and remote devices." The modern definition as a large computer appeared alongside the old definition in Webster's Ninth New Collegiate Dictionary (1983): "mainframe (1964): a computer with its cabinet and internal circuits; also: a large fast computer that can handle multiple tasks concurrently." Only the modern definition appears in The New Merriam-Webster Dictionary (1989): "large fast computer", while Webster's Unabridged Dictionary of the English Language (1989) had: "mainframe. a large high-speed computer with greater storage capacity than a minicomputer, often serving as the central unit in a system of smaller computers. [MAIN + FRAME]." Random House Webster's College Dictionary (1991) and Random House College Dictionary (2001) had similar definitions. The Oxford English Dictionary is the principal historical dictionary, so it is interesting to see its view. The 1989 OED gave historical definitions as well as defining mainframe as "any large or general-purpose computer, esp. one supporting numerous peripherals or subordinate computers." It has seven historical examples from 1964 to 1984; the earliest is the 1964 Honeywell Glossary. It quotes a 1970 Dictionary of Computers as saying that the word "Originally implied the main framework of a central processing unit on which the arithmetic unit and associated logic circuits were mounted, but now used colloquially to refer to the central processor itself."
The OED also cited a Hewlett-Packard ad from 1974 that used the word "mainframe", but I consider this a mistake as the usage is completely different.15 Encyclopedias A look at encyclopedias shows that the word "mainframe" started appearing in discussions of computers in the early 1980s, later than in dictionaries. At the beginning of the 1980s, many encyclopedias focused on large computers, without using the word "mainframe", for instance, The Concise Encyclopedia of the Sciences (1980) and World Book (1980). The word "mainframe" started to appear in supplements such as Britannica Book of the Year (1980) and World Book Year Book (1981), at the same time as they started discussing microcomputers. Soon encyclopedias were using the word "mainframe", for example, Funk & Wagnalls Encyclopedia (1983), Encyclopedia Americana (1983), and World Book (1984). By 1986, even the Doubleday Children's Almanac showed a "mainframe computer." Newspapers I examined old newspapers to track the usage of the word "mainframe." The graph below shows the usage of "mainframe" in newspapers. The curve shows a rise in popularity through the 1980s and a steep drop in the late 1990s. The newspaper graph roughly matches the book graph above, although newspapers show a much steeper drop in the late 1990s. Perhaps mainframes aren't in the news anymore, but people still write books about them. Newspaper usage of "mainframe." Graph from newspapers.com from 1975 to 2010 shows usage started growing in 1978, picked up in 1984, and peaked in 1989 and 1997, with a large drop in 2001 and after (y2k?). The first newspaper appearances were in classified ads seeking employees, for instance, a 1960 ad in the San Francisco Examiner for people "to monitor and control main-frame operations of electronic computers...and to operate peripheral equipment..." and a (sexist) 1966 ad in the Philadelphia Inquirer for "men with Digital Computer Bkgrnd [sic] (Peripheral or Mainframe)."37 By 1970, "mainframe" started to appear in news articles, for example, "The computer can't work without the mainframe unit." By 1971, the usage increased with phrases such as "mainframe central processor" and "'main-frame' computer manufacturers". 1972 had usages such as "the mainframe or central processing unit is the heart of any computer, and does all the calculations". A 1975 article explained "'Mainframe' is the industry's word for the computer itself, as opposed to associated items such as printers, which are referred to as 'peripherals.'" By 1980, minicomputers and microcomputers were appearing: "All hardware categories-mainframes, minicomputers, microcomputers, and terminals" and "The mainframe and the minis are interconnected." By 1985, the mainframe was a type of computer, not just the CPU: "These days it's tough to even define 'mainframe'. One definition is that it has for its electronic brain a central processor unit (CPU) that can handle at least 32 bits of information at once. ... A better distinction is that mainframes have numerous processors so they can work on several jobs at once." Articles also discussed "the micro's challenge to the mainframe" and asked, "buy a mainframe, rather than a mini?" 
By 1990, descriptions of mainframes became florid: "huge machines laboring away in glass-walled rooms", "the big burner which carries the whole computing load for an organization", "behemoth data crunchers", "the room-size machines that dominated computing until the 1980s", "the giant workhorses that form the nucleus of many data-processing centers", "But it is not raw central-processing-power that makes a mainframe a mainframe. Mainframe computers command their much higher prices because they have much more sophisticated input/output systems." Conclusion After extensive searches through archival documents, I found usages of the term "main frame" dating back to 1952, much earlier than previously reported. In particular, the introduction of frames to package the IBM 701 computer led to the use of the word "main frame" for that computer and later ones. The term went through various shades of meaning and remained fairly obscure for many years. In the mid-1970s, the term started describing a large computer, essentially its modern meaning. In the 1980s, the term escaped the computer industry and appeared in dictionaries, encyclopedias, and newspapers. After peaking in the 1990s, the term declined in usage (tracking the decline in mainframe computers), but the term and the mainframe computer both survive. Two factors drove the popularity of the word "mainframe" in the 1980s with its current meaning of a large computer. First, the terms "microcomputer" and "minicomputer" led to linguistic pressure for a parallel term for large computers. For instance, the business press needed a word to describe IBM and other large computer manufacturers. While "server" is the modern term, "mainframe" easily filled the role back then and was nicely alliterative with "microcomputer" and "minicomputer."38 Second, up until the 1980s, the prototype meaning for "computer" was a large mainframe, typically IBM.39 But as millions of home computers were sold in the early 1980s, the prototypical "computer" shifted to smaller machines. This left a need for a term for large computers, and "mainframe" filled that need. In other words, if you were talking about a large computer in the 1970s, you could say "computer" and people would assume you meant a mainframe. But if you said "computer" in the 1980s, you needed to clarify if it was a large computer. The word "mainframe" is almost 75 years old and both the computer and the word have gone through extensive changes in this time. The "death of the mainframe" has been proclaimed for well over 30 years but mainframes are still hanging on. Who knows what meaning "mainframe" will have in another 75 years? Follow me on Bluesky (@righto.com) or RSS. (I'm no longer on Twitter.) Thanks to the Computer History Museum and archivist Sara Lott for access to many documents. Notes and References The Computer History Museum states: "Why are they called “Mainframes”? Nobody knows for sure. There was no mainframe “inventor” who coined the term. Probably “main frame” originally referred to the frames (designed for telephone switches) holding processor circuits and main memory, separate from racks or cabinets holding other components. Over time, main frame became mainframe and came to mean 'big computer.'" (Based on my research, I don't think telephone switches have any connection to computer mainframes.) Several sources explain that the mainframe is named after the frame used to construct the computer. 
The Jargon File has a long discussion, stating that the term "originally referring to the cabinet containing the central processor unit or ‘main frame’." Ken Uston's Illustrated Guide to the IBM PC (1984) has the definition "MAIN FRAME A large, high-capacity computer, so named because the CPU of this kind of computer used to be mounted on a frame." IBM states that mainframe "Originally referred to the central processing unit of a large computer, which occupied the largest or central frame (rack)." The Microsoft Computer Dictionary (2002) states that the name mainframe "is derived from 'main frame', the cabinet originally used to house the processing unit of such computers." Some discussions of the origin of the word "mainframe" are here, here, here, here, and here. The phrase "main frame" in non-computer contexts has a very old but irrelevant history, describing many things that have a frame. For example, it appears in thousands of patents from the 1800s, including drills, saws, a meat cutter, a cider mill, printing presses, and corn planters. This shows that it was natural to use the phrase "main frame" when describing something constructed from frames. Telephony uses a Main distribution frame or "main frame" for wiring, going back to 1902. Some people claim that the computer use of "mainframe" is related to the telephony use, but I don't think they are related. In particular, a telephone main distribution frame looks nothing like a computer mainframe. Moreover, the computer use and the telephony use developed separately; if the computer use started in, say, Bell Labs, a connection would be more plausible. IBM patents with "main frame" include a scale (1922), a card sorter (1927), a card duplicator (1929), and a card-based accounting machine (1930). IBM's incidental uses of "main frame" are probably unrelated to modern usage, but they are a reminder that punch card data processing started decades before the modern computer. ↩ It is unclear why the IBM 701 installation manual is dated August 27, 1952 but the drawing is dated 1953. I assume the drawing was updated after the manual was originally produced. ↩ This footnote will survey the construction techniques of some early computers; the key point is that building a computer on frames was not an obvious technique. ENIAC (1945), the famous early vacuum tube computer, was constructed from 40 panels forming three walls filling a room (ref, ref). EDVAC (1949) was built from large cabinets or panels (ref) while ORDVAC and CLADIC (1949) were built on racks (ref). One of the first commercial computers, UNIVAC 1 (1951), had a "Central Computer" organized as bays, divided into three sections, with tube "chassis" plugged in (ref ). The Raytheon computer (1951) and Moore School Automatic Computer (1952) (ref) were built from racks. The MONROBOT VI (1955) was described as constructed from the "conventional rack-panel-cabinet form" (ref). ↩ The size and construction of early computers often made it difficult to install or move them. The early computer ENIAC required 9 months to move from Philadelphia to the Aberdeen Proving Ground. For this move, the wall of the Moore School in Philadelphia had to be partially demolished so ENIAC's main panels could be removed. In 1959, moving the SWAC computer required disassembly of the computer and removing one wall of the building (ref). When moving the early computer JOHNNIAC to a different site, the builders discovered the computer was too big for the elevator. 
They had to raise the computer up the elevator shaft without the elevator (ref). This illustrates the benefits of building a computer from moveable frames. ↩ The IBM 701's main frame was called the Electronic Analytical Control Unit in external documentation. ↩ The 701 installation manual (1952) has a frame arrangement diagram showing the dimensions of the various frames, along with a drawing of the main frame, and power usage of the various frames. Service documentation (1953) refers to "main frame adjustments" (page 74). The 700 Series Data Processing Systems Component Circuits document (1955-1959) lists various types of frames in its abbreviation list (below) Abbreviations used in IBM drawings include MF for main frame. Also note CF for core frame, and DF for drum frame, From 700 Series Data Processing Systems Component Circuits (1955-1959). When repairing an IBM 701, it was important to know which frame held which components, so "main frame" appeared throughout the engineering documents. For instance, in the schematics, each module was labeled with its location; "MF" stands for "main frame." Detail of a 701 schematic diagram. "MF" stands for "main frame." This diagram shows part of a pluggable tube module (type 2891) in mainframe panel 3 (MF3) section J, column 29. The blocks shown are an AND gate, OR gate, and Cathode Follower (buffer). From System Drawings 1.04.1. The "main frame" terminology was used in discussions with customers. For example, notes from a meeting with IBM (April 8, 1952) mention "E. S. [Electrostatic] Memory 15 feet from main frame" and list "main frame" as one of the seven items obtained for the $15,000/month rental cost.  ↩ For more information on how the IBM 701 was designed to fit on elevators and through doorways, see Building IBM: Shaping an Industry and Technology page 170, The Interface: IBM and the Transformation of Corporate Design page 69. This is also mentioned in "Engineering Description of the IBM Type 701 Computer", Proceedings of the IRE Oct 1953, page 1285. ↩ Many early systems used "central computer" to describe the main part of the computer, perhaps more commonly than "main frame." An early example is the "central computer" of the Elecom 125 (1954). The Digital Computer Newsletter (Apr 1955) used "central computer" several times to describe the processor of SEAC. The 1961 BRL report shows "central computer" being used by Univac II, Univac 1107, Univac File 0, DYSEAC and RCA Series 300. The MIT TX-2 Technical Manual (1961) uses "central computer" very frequently. The NAREC glossary (1962) defined "central computer. That part of a computer housed in the main frame." ↩ This footnote lists some other early computers that used the term "main frame." The October 1956 Digital Computer Newsletter mentions the "main frame" of the IBM NORC. Digital Computer Newsletter (Jan 1959) discusses using a RAMAC disk drive to reduce "main frame processing time." This document also mentions the IBM 709 "main frame." The IBM 704 documentation (1958) says "Each DC voltage is distributed to the main frame..." (IBM 736 reference manual) and "Check the air filters in each main frame unit and replace when dirty." (704 Central Processing Unit). The July 1962 Digital Computer Newsletter discusses the LEO III computer: "It has been built on the modular principle with the main frame, individual blocks of storage, and input and output channels all physically separate." 
The article also mentions that the new computer is more compact with "a reduction of two cabinets for housing the main frame." The IBM 7040 (1964) and IBM 7090 (1962) were constructed from multiple frames, including the processing unit called the "main frame."11 Machines in IBM's System/360 line (1964) were built from frames; some models had a main frame, power frame, wall frame, and so forth, while other models simply numbered the frames sequentially.12 ↩ The 1952 JOHNNIAC progress report is quoted in The History of the JOHNNIAC. This memorandum was dated August 8, 1952, so it is the earliest citation that I found. The June 1953 memorandum also used the term, stating, "The main frame is complete." ↩ A detailed description of IBM's frame-based computer packaging is in Standard Module System Component Circuits pages 6-9. This describes the SMS-based packaging used in the IBM 709x computers, the IBM 1401, and related systems as of 1960. ↩ IBM System/360 computers could have many frames, so they were usually given sequential numbers. The Model 85, for instance, had 12 frames for the processor and four megabytes of memory in 18 frames (at over 1000 pounds each). Some of the frames had descriptive names, though. The Model 40 had a main frame (CPU main frame, CPU frame), a main storage logic frame, a power supply frame, and a wall frame. The Model 50 had a CPU frame, power frame, and main storage frame. The Model 75 had a main frame (consisting of multiple physical frames), storage frames, channel frames, central processing frames, and a maintenance console frame. The compact Model 30 consisted of a single frame, so the documentation refers to the "frame", not the "main frame." For more information on frames in the System/360, see 360 Physical Planning. The Architecture of the IBM System/360 paper refers to the "main-frame hardware." ↩ A few more examples that discuss the minicomputer's mainframe, its physical box: A 1970 article discusses the mainframe of a minicomputer (as opposed to the peripherals) and contrasts minicomputers with large scale computers. A 1971 article on minicomputers discusses "minicomputer mainframes." Computerworld (Jan 28, 1970, p59) discusses minicomputer purchases: "The actual mainframe is not the major cost of the system to the user." Modern Data (1973) mentions minicomputer mainframes several times. ↩ DEC documents refer to the PDP-11 minicomputer as a mainframe. The PDP-11 Conventions manual (1970) defined: "Processor: A unit of a computing system that includes the circuits controlling the interpretation and execution of instructions. The processor does not include the Unibus, core memory, interface, or peripheral devices. The term 'main frame' is sometimes used but this term refers to all components (processor, memory, power supply) in the basic mounting box." In 1976, DEC published the PDP-11 Mainframe Troubleshooting Guide. The PDP-11 mainframe is also mentioned in Computerworld (1977). ↩ Test equipment manufacturers started using the term "main frame" (and later "mainframe") around 1962, to describe an oscilloscope or other test equipment that would accept plug-in modules. I suspect this is related to the use of "mainframe" to describe a computer's box, but it could be independent. Hewlett-Packard even used the term to describe a solderless breadboard, the 5035 Logic Lab. The Oxford English Dictionary (1989) used HP's 1974 ad for the Logic Lab as its earliest citation of mainframe as a single word. 
It appears that the OED confused this use of "mainframe" with the computer use. The OED's citation reads: 1974 Sci. Amer. Apr. 79 "The laboratory station mainframe has the essentials built-in (power supply, logic state indicators and programmers, and pulse sources to provide active stimulus for the student's circuits)." Is this a mainframe? The HP 5035A Logic Lab was a power supply and support circuitry for a solderless breadboard. HP's ads referred to this as a "laboratory station mainframe." ↩↩ In the 1980s, the use of "mainframe" to describe the box holding a microcomputer started to conflict with "mainframe" as a large computer. For example, Radio Electronics (October 1982) started using the short-lived term "micro-mainframe" instead of "mainframe" for a microcomputer's enclosure. By 1985, Byte magazine had largely switched to the modern usage of "mainframe." But even as late as 1987, a review of the Apple IIGS described one of the system's components as the '"mainframe" (i.e. the actual system box)'. ↩ Definitions of "central processing unit" disagreed as to whether storage was part of the CPU, part of the main frame, or something separate. This was largely a consequence of the physical construction of early computers. Smaller computers had memory in the same frame as the processor, while larger computers often had separate storage frames for memory. Other computers had some memory with the processor and some external. Thus, the "main frame" might or might not contain memory, and this ambiguity carried over to definitions of CPU. (In modern usage, the CPU consists of the arithmetic/logic unit (ALU) and control circuitry, but excludes memory.) ↩ Many definitions of mainframe or CPU mention "special register groups", an obscure feature specific to the Honeywell 800 computer (1959). (Processors have registers, special registers are common, and some processors have register groups, but only the Honeywell 800 had "special register groups.") However, computer dictionaries kept using this phrase for decades, even though it doesn't make sense for other computers. I wrote a blog post about special register groups here. ↩ This footnote provides more examples of "mainframe" being defined as the CPU. The Data Processing Equipment Encyclopedia (1961) had a similar definition: "Main Frame: The main part of the computer, i.e. the arithmetic or logic unit; the central processing unit." The 1967 IBM 360 operator's guide defined: "The main frame - the central processing unit and main storage." The Department of the Navy's ADP Glossary (1970): "Central processing unit: A unit of a computer that includes the circuits controlling the interpretation and execution of instructions. Synonymous with main frame." This was a popular definition, originally from the ISO, used by IBM (1979) among others. Funk & Wagnalls Dictionary of Data Processing Terms (1970) defined: "main frame: The basic or essential portion of an assembly of hardware, in particular, the central processing unit of a computer." The American National Standard Vocabulary for Information Processing (1970) defined: "central processing unit: A unit of a computer that includes the circuits controlling the interpretation and execution of instructions. Synonymous with main frame." ↩ Both the mainframe vs. peripheral definition and the mainframe as CPU definition made it unclear exactly what components of the computer were included in the mainframe.
It's clear that the arithmetic-logic unit and the processor control circuitry were included, while I/O devices were excluded, but some components such as memory were in a gray area. It's also unclear if the power supply and I/O interfaces (channels) are part of the mainframe. These distinctions were ignored in almost all of the uses of "mainframe" that I saw. An unusual definition in a Goddard Space Center document (1965, below) partitioned equipment into the "main frame" (the electronic equipment), "peripheral equipment" (electromechanical components such as the printer and tape), and "middle ground equipment" (the I/O interfaces). The "middle ground" terminology here appears to be unique. Also note that computers are partitioned into "super speed", "large-scale", "medium-scale", and "small-scale." Definitions from Automatic Data Processing Equipment, Goddard Space Center, 1965. "Main frame" was defined as "The central processing unit of a system including the hi-speed core storage memory bank. (This is the electronic element.)" ↩ This footnote gives some examples of using peripherals to save the cost of mainframe time. IBM 650 documentation (1956) describes how "Data written on tape by the 650 can be processed by the main frame of the 700 series systems." Univac II Marketing Material (1957) discusses various ways of reducing "main frame time" by, for instance, printing from tape off-line. The USAF Guide for auditing automatic data processing systems (1961) discusses how these "off line" operations make the most efficient use of "the more expensive main frame time." ↩ Peripheral manufacturers were companies that built tape drives, printers, and other devices that could be connected to a mainframe built by IBM or another company. The basis for the peripheral industry was antitrust action against IBM that led to the 1956 Consent Decree. Among other things, the consent decree forced IBM to provide reasonable patent licensing, which allowed other firms to build "plug-compatible" peripherals. The introduction of the System/360 in 1964 produced a large market for peripherals and IBM's large profit margins left plenty of room for other companies. ↩ Computers and Automation, March 1965, categorized computers into five classes, from "Teeny systems" (such as the IBM 360/20) renting for $2000/month, through Small, Medium, and Large systems, up to "Family or Economy Size Systems" (such as the IBM 360/92) renting for $75,000 per month. ↩ The term "minicomputer" was supposedly invented by John Leng, head of DEC's England operations. In the 1960s, he sent back a sales report: "Here is the latest minicomputer activity in the land of miniskirts as I drive around in my Mini Minor", which led to the term becoming popular at DEC. This story is described in The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation (1988). I'd trust the story more if I could find a reference that wasn't 20 years after the fact. ↩ For instance, Computers and Automation (1971) discussed the role of the minicomputer as compared to "larger computers." A 1975 minicomputer report compared minicomputers to their "general-purpose cousins." ↩ This footnote provides more on the split between minicomputers and mainframes. In 1971, Modern Data Products, Systems, Services contained "... will offer mainframe, minicomputer, and peripheral manufacturers a design, manufacturing, and production facility...." Standard & Poor's Industry Surveys (1972) mentions "mainframes, minicomputers, and IBM-compatible peripherals."
Computerworld (1975) refers to "mainframe and minicomputer systems manufacturers." The 1974 textbook "Information Systems: Technology, Economics, Applications" couldn't decide if mainframes were a part of the computer or a type of computer separate from minicomputers, saying: "Computer mainframes include the CPU and main memory, and in some usages of the term, the controllers, channels, and secondary storage and I/O devices such as tape drives, disks, terminals, card readers, printers, and so forth. However, the equipment for storage and I/O are usually called peripheral devices. Computer mainframes are usually thought of as medium to large scale, rather than mini-computers." Studying U.S. Industrial Outlook reports provides another perspective over time. U.S. Industrial Outlook 1969 divides computers into small, medium-size, and large-scale. Mainframe manufacturers are in opposition to peripheral manufacturers. The same mainframe vs. peripherals opposition appears in U.S. Industrial Outlook 1970 and U.S. Industrial Outlook 1971. The 1971 report also discusses minicomputer manufacturers entering the "maxicomputer market."30 The 1973 report mentions "large computers, minicomputers, and peripherals." U.S. Industrial Outlook 1976 states, "The distinction between mainframe computers, minis, micros, and also accounting machines and calculators should merge into a spectrum." By 1977, the market was separated into "general purpose mainframe computers", "minicomputers and small business computers" and "microprocessors." Family Computing Magazine (1984) had a "Dictionary of Computer Terms Made Simple." It explained that "A Digital computer is either a "mainframe", a "mini", or a "micro." Forty years ago, large mainframes were the only size that a computer could be. They are still the largest size, and can handle more than 100,000,000 instructions per second. PER SECOND! [...] Mainframes are also called general-purpose computers." ↩ In 1974, Congress held antitrust hearings into IBM. The thousand-page report provides a detailed snapshot of the meanings of "mainframe" at the time. For instance, a market analysis report from IDC illustrates the difficulty of defining mainframes and minicomputers in this era (p4952). The "Mainframe Manufacturers" section splits the market into "general-purpose computers" and "dedicated application computers" including "all the so-called minicomputers." Although this section discusses minicomputers, the emphasis is on the manufacturers of traditional mainframes. A second "Plug-Compatible Manufacturers" section discusses companies that manufactured only peripherals. But there's also a separate "Minicomputers" section that focuses on minicomputers (along with microcomputers "which are simply microprocessor-based minicomputers"). My interpretation of this report is that the terminology is in the process of moving from "mainframe vs. peripheral" to "mainframe vs. minicomputer." The statement from Research Shareholders Management (p5416) on the other hand discusses IBM and the five other mainframe companies; they classify minicomputer manufacturers separately (p5425). Page 5426 mentions "mainframes, small business computers, industrial minicomputers, terminals, communications equipment, and minicomputers." Economist Ralph Miller mentions the central processing unit "(the so-called 'mainframe')" (p5621) and then contrasts independent peripheral manufacturers with mainframe manufacturers (p5622).
The Computer Industry Alliance refers to mainframes and peripherals in multiple places, and "shifting the location of a controller from peripheral to mainframe", as well as "the central processing unit (mainframe)" (p5099). On page 5290, "IBM on trial: Monopoly tends to corrupt", from Harper's (May 1974), mentions peripherals compatible with "IBM mainframe units—or, as they are called, central processing computers." ↩ The influential business newsletter EDP provides an interesting view on the struggle to separate the minicomputer market from larger computers. Through 1968, they included minicomputers in the "general-purpose computer" category. But in 1969, they split "general-purpose computers" into "Group A, General Purpose Digital Computers" and "Group B, Dedicated Application Digital Computers." These categories roughly corresponded to larger computers and minicomputers, on the (dubious) assumption that minicomputers were used for a "dedicated application." The important thing to note is that in 1969 they did not use the term "mainframe" for the first category, even though with the modern definition it's the obvious term to use. At the time, EDP used "mainframe manufacturer" or "mainframer"31 to refer to companies that manufactured computers (including minicomputers), as opposed to manufacturers of peripherals. In 1972, EDP first mentioned mainframes and minicomputers as distinct types. In 1973, "microcomputer" was added to the categories. As the 1970s progressed, the separation between minicomputers and mainframes became common. However, the transition was not completely smooth; 1973 included a reference to "mainframe shipments (including minicomputers)." To be specific, the EDP Industry Report (Nov. 28, 1969) gave the following definitions of the two groups of computers: Group A—General Purpose Digital Computers: These comprise the bulk of the computers that have been listed in the Census previously. They are character or byte oriented except in the case of the large-scale scientific machines, which have 36, 48, or 60-bit words. The predominant portion (60% to 80%) of these computers is rented, usually for $2,000 a month or more. Higher level languages such as Fortran, Cobol, or PL/1 are the primary means by which users program these computers. Group B—Dedicated Application Digital Computers: This group of computers includes the "mini's" (purchase price below $25,000), the "midi's" ($25,000 to $50,000), and certain larger systems usually designed or used for one dedicated application such as process control, data acquisition, etc. The characteristics of this group are that the computers are usually word oriented (8, 12, 16, or 24-bits per word), the predominant number (70% to 100%) are purchased, and assembly language (at times Fortran) is the predominant means of programming. This type of computer is often sold to an original equipment manufacturer (OEM) for further system integration and resale to the final user. These definitions strike me as rather arbitrary. ↩ In 1981, Computerworld had articles trying to clarify the distinctions between microcomputers, minicomputers, superminicomputers, and mainframes, as the systems started to overlap. One article, Distinction Helpful for Minis, Mainframes, said that minicomputers were generally interactive, while mainframes made good batch machines and network hosts. Microcomputers had up to 512 KB of memory, minis were 16-bit machines with 512 KB to 4 MB of memory, costing up to $100,000.
Superminis were 16- to 32-bit machines with 4 MB to 8 MB of memory, costing up to $200,000 but with less memory bandwidth than mainframes. Finally, mainframes were 32-bit machines with more than 8 MB of memory, costing over $200,000. Another article Micro, Mini, or Mainframe? Confusion persists described a microcomputer as using an 8-bit architecture and having fewer peripherals, while a minicomputer has a 16-bit architecture and 48 KB to 1 MB of memory. ↩ The miniskirt in the mid-1960s was shortly followed by the midiskirt and maxiskirt. These terms led to the parallel construction of the terms minicomputer, midicomputer, and maxicomputer. The New York Times had a long article Maxi Computers Face Mini Conflict (April 5, 1970) explicitly making the parallel: "Mini vs. Maxi, the reigning issue in the glamorous world of fashion, is strangely enough also a major point of contention in the definitely unsexy realm of computers." Although midicomputer and maxicomputer terminology didn't catch on the way minicomputer did, they still had significant use (example, midicomputer examples, maxicomputer examples). The miniskirt/minicomputer parallel was done with varying degrees of sexism. One example is Electronic Design News (1969): "A minicomputer. Like the miniskirt, the small general-purpose computer presents the same basic commodity in a more appealing way." ↩ Linguistically, one indication that a new word has become integrated in the language is when it can be extended to form additional new words. One example is the formation of "mainframers", referring to companies that build mainframes. This word was moderately popular in the 1970s to 1990s. It was even used by the Department of Justice in their 1975 action against IBM where they described the companies in the systems market as the "mainframe companies" or "mainframers." The word is still used today, but usually refers to people with mainframe skills. Other linguistic extensions of "mainframe" include mainframing, unmainframe, mainframed, nonmainframe, and postmainframe. ↩ More examples of the split between microcomputers and mainframes: Softwide Magazine (1978) describes "BASIC versions for micro, mini and mainframe computers." MSC, a disk system manufacturer, had drives "used with many microcomputer, minicomputer, and mainframe processor types" (1980). ↩ Some examples of computer dictionaries referring to mainframes as a size category: Illustrated Dictionary of Microcomputer Terminology (1978) defines "mainframe" as "(1) The heart of a computer system, which includes the CPU and ALU. (2) A large computer, as opposed to a mini or micro." A Dictionary of Minicomputing and Microcomputing (1982) includes the definition of "mainframe" as "A high-speed computer that is larger, faster, and more expensive than the high-end minicomputers. The boundary between a small mainframe and a large mini is fuzzy indeed." The National Bureau of Standards Future Information Technology (1984) defined: "Mainframe is a term used to designate a medium and large scale CPU." The New American Computer Dictionary (1985) defined "mainframe" as "(1) Specifically, the rack(s) holding the central processing unit and the memory of a large computer. (2) More generally, any large computer. 'We have two mainframes and several minis.'" The 1990 ANSI Dictionary for Information Systems (ANSI X3.172-1990) defined: mainframe. A large computer, usually one to which other computers are connected in order to share its resources and computing power. 
Microsoft Press Computer Dictionary (1991) defined "mainframe computer" as "A high-level computer designed for the most intensive computational tasks. Mainframe computers are often shared by multiple users connected to the computer via terminals." ISO 2382 (1993) defines a mainframe as "a computer, usually in a computer center, with extensive capabilities and resources to which other computers may be connected so that they can share facilities." The Microsoft Computer Dictionary (2002) had an amusingly critical definition of mainframe: "A type of large computer system (in the past often water-cooled), the primary data processing resource for many large businesses and organizations. Some mainframe operating systems and solutions are over 40 years old and have the capacity to store year values only as two digits." ↩ IBM's 1962 book Planning a Computer System (1962) describes how the Stretch computer's circuitry was assembled into frames, with the CPU consisting of 18 frames. The picture below shows how a "frame" was, in fact, constructed from a metal frame. In the Stretch computer, the circuitry (left) could be rolled out of the frame (right)  ↩ The term "general-purpose computer" is probably worthy of investigation since it was used in a variety of ways. It is one of those phrases that seems obvious until you think about it more closely. On the one hand, a computer such as the Apollo Guidance Computer can be considered general purpose because it runs a variety of programs, even though the computer was designed for one specific mission. On the other hand, minicomputers were often contrasted with "general-purpose computers" because customers would buy a minicomputer for a specific application, unlike a mainframe which would be used for a variety of applications. ↩ The n-gram graph is from the Google Books Ngram Viewer. The curves on the graph should be taken with a grain of salt. First, the usage of words in published books is likely to lag behind "real world" usage. Second, the number of usages in the data set is small, especially at the beginning. Nonetheless, the n-gram graph generally agrees with what I've seen looking at documents directly. ↩ More examples of "mainframe" in want ads: A 1966 ad from Western Union in The Arizona Republic looking for experience "in a systems engineering capacity dealing with both mainframe and peripherals." A 1968 ad in The Minneapolis Star for an engineer with knowledge of "mainframe and peripheral hardware." A 1968 ad from SDS in The Los Angeles Times for an engineer to design "circuits for computer mainframes and peripheral equipment." A 1968 ad in Fort Lauderdale News for "Computer mainframe and peripheral logic design." A 1972 ad in The Los Angeles Times saying "Mainframe or peripheral [experience] highly desired." In most of these ads, the mainframe was in contrast to the peripherals. ↩ A related factor is the development of remote connections from a microcomputer to a mainframe in the 1980s. This led to the need for a word to describe the remote computer, rather than saying "I connected my home computer to the other computer." See the many books and articles on connecting "micro to mainframe." ↩ To see how the prototypical meaning of "computer" changed in the 1980s, I examined the "Computer" article in encyclopedias from that time. The 1980 Concise Encyclopedia of the Sciences discusses a large system with punched-card input. In 1980, the World Book article focused on mainframe systems, starting with a photo of an IBM System/360 Model 40 mainframe. 
But in the 1981 supplement and the 1984 encyclopedia, the World Book article opened with a handheld computer game, a desktop computer, and a "large-scale computer." The article described microcomputers, minicomputers, and mainframes. Funk & Wagnalls Encyclopedia (1983) was in the middle of the transition; the article focused on large computers and had photos of IBM machines, but mentioned that future growth was expected in microcomputers. By 1994, the World Book article's main focus was the personal computer, although the mainframe still had a few paragraphs and a photo. This is evidence that the prototypical meaning of "computer" underwent a dramatic shift in the early 1980s from a mainframe to a balance between small and large computers, and then to the personal computer. ↩

14 minutes ago 1 vote
This is why people see attacks on DEI as thinly veiled racism

The tragedy in Washington D.C. this week was a horrible and shocking incident. There should and will be an investigation into what went wrong here, but every politician and official who spoke at the White House today explicitly blamed DEI programs for this crash. The message may as well

yesterday 2 votes
What's in a name

Guillermo posted this recently: What you name your product matters more than people give it credit. It's your first and most universal UI to the world. Designing a good name requires multi-dimensional thinking and is full of edge cases, much like designing software. I first will give credit where credit is due: I spent the first few years thinking "vercel" was phonetically interchangeable with "volcel" and therefore fairly irredeemable as a name, but I've since come around to the name a bit as being (and I do not mean this snarkily or negatively!) generically futuristic, like the name of an amoral corporation in a Philip K. Dick novel. A few folks ask every year where the name for Buttondown came from. The answer is unexciting: Its killer feature was Markdown support, so I was trying to find a useful way to play off of that. "Buttondown" evokes, at least for me, the scent and touch of a well-worn OCBD, and that kind of timeless bourgeois aesthetic was what I was going for with the general branding. It was, in retrospect, a good-but-not-great name with two flaws: It's a common term. Setting Google Alerts (et al) for "buttondown" meant a lot of menswear stuff and not a lot of email stuff. Because it's a common term, the .com was an expensive purchase (see Notes on buttondown.com for more on that). We will probably never change the name. It's hard for me to imagine the ROI on a total rebrand like that ever justifying its own cost, and I have a soft spot for it even after all of these years. But all of this is to say: I don't know of any projects that have failed or succeeded because of a name. I would just try to avoid any obvious issues, and follow Seth's advice from 2003.

yesterday 4 votes
Join us for Arduino Day 2025: celebrating 20 years of community!

Mark your calendars for March 21-22, 2025, as we come together for a special Arduino Day to celebrate our 20th anniversary! This free, online event is open to everyone, everywhere. Two decades of creativity and community Over the past 20 years, we have evolved from a simple open-source hardware platform into a global community with […]

2 days ago 2 votes
Horsey Horseless and the Challenge of AI-native Products

Disruptive technologies call for rethinking product design. We must question assumptions about underlying infrastructure and mental models while acknowledging neither change overnight. For example, self-driving cars don't need steering wheels. Users direct AI-driven vehicles by giving them a destination address. Keyboards and microphones are better controls for this use case than steering wheels and pedals. But people expect cars to have steering wheels and pedals. Without them, they feel a loss of control – especially if they don't fully trust the new technology. It's not just control. The entire experience can, and perhaps must, change as a result. In a self-driving car, passengers needn't all face forward. Freed from road duties, they can focus on work or leisure during the drive. As a result, designers can rethink the cabin experience from scratch. Such changes don't happen overnight. People are used to having agency. They expect to actively sit behind the wheel with everyone facing forward. It'll take time for people to cede control and relax. Moreover, current infrastructure is designed around these assumptions. For example, road signs point toward oncoming traffic because that's where drivers can see them. Roads transited by robots don't need signals at all. But it's going to be a while before roads are used exclusively by AI-driven vehicles. Human drivers will share roads with them for some time, and humans need signs. The presence of robots might even call for new signaling. It's a liminal situation that a) doesn't yet accommodate the full potential of the new reality while b) trying to accommodate previous ways of being. The result is awkward "neither fish nor fowl" experiments. My favorite example is a late 19th Century product called Horsey Horseless. Patent diagram of Horsey Horseless (1899) via Wikimedia Yes, it's a vehicle with a wooden horse head grafted on front. When I first saw this abomination (in a presentation by my friend Andrew Hinton), I assumed it was meant to appeal to early adopters who couldn't let go of the idea of driving behind a horse. But there was a deeper logic here. At the time, cars shared roads with horse-drawn vehicles. Horsey Horseless was meant to keep motorcars from freaking out the horses. Whether it worked or not doesn't matter. The important thing to note is people were grappling with the implications of the new technology on the product typology given the existing context. We're in that situation now. Horsey Horseless is a metaphor for an approach to product evolution after the introduction of a disruptive new technology. To wit, designers seek to align the new technology with existing infrastructure and mental models by "grafting a horse." Consider how many current products are "adding AI" by including a button that opens a chatbox alongside familiar UI. Here's Gmail: Gmail's Gemini AI panel. In this case, the email client UI is a sort of horse's head that lets us use the new technology without disrupting our workflows. It's a temporary hack. New products will appear that rethink use cases from the new technology's unique capabilities. Why have a chat panel on an email client when AI can obviate the need for email altogether? Today, email is assumed infrastructure. Other products expect users to have an email address and a client app to access it. That might not always stand. Eventually, such awkward compromises will go away. But it takes time. We're entering that liminal period now.
It’s exciting – even if it produces weird chimeras for a while.

2 days ago 3 votes