In 1974, Gerald Popek and Robert Goldberg published a paper, “Formal Requirements for Virtualizable Third Generation Architectures”, giving a set of characteristics for correct full-machine virtualisation. Today, these characteristics remain very useful. Computer architects will informally cite this paper when debating Instruction Set Architecture (ISA) developments, with arguments like “but that’s not Popek & Goldberg-compliant!” In this post I’m looking at one aspect of computer architecture evolution since 1974, and observing how RISC-style atomic operations provide some potential virtualisation gotchas for both programmers and architects.

Principles of virtualisation

First, some virtualisation context, because it’s fun! A key P&G requirement is that of equivalence: it’s reasonable to expect software running under virtualisation to have the same behaviour as running it bare-metal! This property is otherwise known as correctness. :-) P&G classify instructions as being sensitive if...

More from axio.ms

MicroMac, a Macintosh for under £5

A microcontroller Macintosh

This all started from a conversation about the RP2040 MCU, and building a simple desktop/GUI for it. I’d made a comment along the lines of “or, just run some old OS”, and it got me thinking about the original Macintosh.

The original Macintosh was released 40.5 years before this post, and is a pretty cool machine, especially considering that the hardware is very simple. Insanely Great and folklore.org are fun reads, and give a glimpse into the Macintosh’s development. Memory was a squeeze; the original 128KB version was underpowered and only sold for a few months before being replaced by the Macintosh 512K, arguably a more appropriate amount of memory. But the 128 still runs some real applications and, though it pre-dates MultiFinder/actual multitasking, I found it pretty charming. As a tourist.

In 1984 the Mac cost roughly 1/3 as much as a VW Golf and, as someone who’s into old computers and old cars, it’s hard to decide which is more frustrating to use.

So back to this £3.80 RPi Pico microcontroller board: the RP2040’s 264KB of RAM gives a lot to play with after carving out the Mac’s 128KB – how cool would it be to do a quick hack, and play with a Mac on it?

Time passes. A lot of time. But I totally delivered on the janky hack front:

You won’t believe that this quality item didn’t take that long to build.

So the software was obviously the involved part, and turned into work on 3 distinct projects. This post is going to be a “development journey” story, as a kind of code/design/venting narrative. If you’re just here for the pictures, scroll along!

What is pico-mac?

A Raspberry Pi RP2040 microcontroller (on a Pico board), driving monochrome VGA video and taking USB keyboard/mouse input, emulating a Macintosh 128K computer and disc storage.

The RP2040 has easily enough RAM to house the Mac’s memory, plus that of the emulator; it’s fast enough (with some tricks) to match the performance of the real machine; it has USB host capability; and the PIO department makes driving VGA video fairly uneventful (with some tricks). The basic Pico board’s 2MB of flash is plenty for a disc image with OS and software.

Here’s the Pico MicroMac in action, ready for the paperless office of the future:

The Pico MicroMac RISC CISC workstation of the future

I hadn’t really used a Mac 128K much before; a few clicks on a museum machine once. But I knew they ran MacDraw, and MacWrite, and MacPaint. All three of these applications are pretty cool for a 128K machine; a largely WYSIWYG word processor with multiple fonts, and a vector drawing package. A great way of playing with early Macintosh system software and applications is via https://infinitemac.org, which has shrinkwrapped the Mini vMac emulator, emscriptened to run in the browser. Highly recommended, lots to play with.

As a spoiler, MicroMac does run MacDraw, and it was great to play with it on “real fake hardware”:

(Do you find “Pico Micro Mac” doesn’t really scan? I didn’t think this taxonomy through, did I?)

GitHub links are at the bottom of this page: the pico-mac repo has construction directions if you want to build your own!

The journey

Back up a bit. I wasn’t committed to building a Pico thing, but was vaguely interested in whether it was feasible, so started tinkering with building a Mac 128K emulator on my normal computer first.

The three rules

I had a few simple rules for this project:

1. It had to be fun. It’s OK to hack stuff to get it working; it’s not as though I’m being paid for this.
2. I like writing emulation stuff, but I really don’t want to learn 68K assembler, or much about the 68K. There’s a lot of love for 68K out there and that’s cool, but meh, I don’t adore it as a CPU. So, right from the outset I wanted to use someone else’s 68K interpreter – I knew there were loads around.

3. Similarly, there are a load of OSes whose innards I’d like to learn more about, but the shittiest early Mac System software isn’t high on the list. Get in there, emulate the hardware, boot the OS as a black box, done.

I ended up breaking two of these rules – and sometimes all three – during this project.

The Mac 128K

The machines are generally pretty simple, and of their time. I started with schematics and Inside Macintosh, PDFs of which covered various details of the original Mac hardware, memory map, mouse/keyboard, etc.:

https://tinkerdifferent.com/resources/macintosh-128k-512k-schematics.79/
https://vintageapple.org/inside_o/

Inside Macintosh Volumes I-III are particularly useful for hardware information; also Guide to Macintosh Family Hardware 2nd Edition.

The Macintosh has:

- A Motorola 68000 CPU running at 7.whatever MHz (roughly 8MHz).
- Flat memory, decoded into regions for memory-mapped IO going to the 6522 VIA, the 8530 SCC, and the IWM floppy controller. (Some of the address decoding is a little funky, though.)
- Keyboard and mouse hanging off the VIA/SCC chips.
- No external interrupt controller: the 68K has 3 IRQ lines, and there are 3 IRQ sources (VIA, SCC, programmer switch/NMI).
- “No slots” or expansion cards.
- No DMA controller: a simple autonomous PAL state machine scans video (and audio samples) out of DRAM.
- Video fixed at 512x342 1BPP.
- The only storage is an internal FDD (plus an external drive), driven by the IWM chip.

The first three Mac models are extremely similar:

- The Mac 128K and Mac 512K are the same machine, except for RAM.
- The Mac Plus added SCSI to a convenient space in the memory map, plus an 800K double-sided floppy drive, where the original drive used a single 400K side.
- The Mac Plus ROM also supports the 128K/512K, and was an upgrade to create the Macintosh 512Ke. ‘e’ for Extra ROM Goodness.

The Mac Plus ROM supports the HD20 external hard disc, and HFS, and Steve Chamberlin has annotated a disassembly of it. This was the ROM to use: I was making a Macintosh 128Ke.

Mac emulator: umac

After about 8 minutes of research, I chose the Musashi 68K interpreter. It’s C, simple to interface to, and had a simple out-of-box example of a 68K system with RAM, ROM, and some IO. Musashi is structured to be embedded in bigger projects: wire in memory read/write callbacks, a function to raise an IRQ, call execute in a loop, done. I started building an emulator around it, which ultimately became the umac project.

The first half (of, say, five halves) went pretty well:

A simple commandline app loads the ROM image, allocates RAM, provides debug messages/assertions/logging, and configures Musashi.

Then, add address decoding: CPU reads/writes are steered to RAM, or ROM. The “overlay” register lets the ROM boot at 0x00000000 and then trampoline up to a high ROM mirror after setting up CPU exception vectors – this affects the address decoding. This is done by poking a VIA register, so I decoded just that bit of that register for now.

At this point, the ROM starts running and accessing more non-existent VIA and SCC registers. I added more decoding and a skeleton for emulating these devices elsewhere – the MMIO read/writes are just stubbed out.
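As a flavour of that structure, a minimal Musashi embedding looks something like the sketch below. The callback names are Musashi’s own; the memory-map constants are illustrative placeholders, not umac’s actual decode:

    #include <stdint.h>
    #include "m68k.h"                    /* Musashi's public API */

    #define RAM_SIZE  (128 * 1024)       /* illustrative sizes/layout */
    #define ROM_BASE  0x400000
    #define ROM_SIZE  (128 * 1024)

    static uint8_t ram[RAM_SIZE];
    static uint8_t rom[ROM_SIZE];

    /* Musashi calls back into the host for every CPU access: steer
     * reads to RAM, ROM, or (stubbed-out) MMIO, by address region.
     */
    unsigned int m68k_read_memory_8(unsigned int addr)
    {
        if (addr < RAM_SIZE)
            return ram[addr];
        if (addr >= ROM_BASE && addr < ROM_BASE + ROM_SIZE)
            return rom[addr - ROM_BASE];
        return 0xff;                     /* unimplemented VIA/SCC/IWM */
    }

    void m68k_write_memory_8(unsigned int addr, unsigned int val)
    {
        if (addr < RAM_SIZE)
            ram[addr] = (uint8_t)val;
        /* else: MMIO register writes, stubbed for now */
    }

    /* ...the 16- and 32-bit read/write callbacks are also required,
     * and look much the same...
     */

    int main(void)
    {
        /* (Load the ROM image into rom[] here.) */
        m68k_init();
        m68k_set_cpu_type(M68K_CPU_TYPE_68000);
        m68k_pulse_reset();
        for (;;)
            m68k_execute(10000);         /* run a ~10K-cycle timeslice */
    }

The real thing hangs device models and the overlay switch off this decode, but the shape is the same.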
There are some magic addresses that the ROM accesses that “miss” the documented devices: there’s a manufacturing test option that probes for a plugin (just thunk it), and then we witness the RAM size probing. The Mac Plus ROM is looking for up to 4MB of RAM. In the large region devoted to RAM, the smaller amount of actual RAM is mirrored over and over, so the probe writes a magic value at high addresses and spots where it starts to wrap around.

RAM is then initialised and filled with a known pattern. This was an exciting point to get to, because I could dump the RAM, convert the region used for the video framebuffer into an image, and see the “diagonal stripe” pattern used for RAM testing! “She’s alive!”

Not all of the device code enjoyed reading all zeroes, so there was a certain amount of referring to the disassembly and returning, uh, 0xffffffff sometimes to push it further. The goal was to get it as far as accessing the IWM chip, i.e. trying to load the OS. After seeing some IWM accesses there and returning random rubbish values, the first wonderful moment was getting the “Unknown Disc” icon with the question mark – real graphics! The ROM was REALLY DOING SOMETHING!

I think I hadn’t implemented any IRQs at this point, and found the ROM in an infinite loop: it was counting a few Vsyncs to delay the flashing question mark. A diversion, then, into a better VIA, with callbacks for GPIO register read/write, and IRQ handling. This also needed to wire into Musashi’s IRQ functions. This was motivating to get to – remembering rule #1 – and “graphics”, even though via a manual memory dump/ImageMagick conversion, was great.

I knew the IWM was an “interesting” chip, but didn’t know details. I planned to figure it out when I got there (rule #1).

IWM, 68K, and disc drivers

My god, I’m glad I put IWM off until this point. If I’d read the “datasheet” (vague register documentation) first, I’d’ve just gone to the pub instead of writing this shitty emulator. IWM is very clever, but very, very low-level. The disc controllers in other contemporary machines, e.g. the WD1770, abstract the disc physics: at one level, you can poke regs to step to track 17 and then ask the controller to grab sector 3. Not so with IWM: first, the discs are Constant Linear Velocity, meaning the rotation speed needs to change appropriately for whichever track the head is on, and second, the IWM just gives the CPU a firehose of crap from the disc head (with minimal decoding).

I spent a while reading through the disassembly of the ROM’s IWM driver (breaking rule #2 and rule #1): there’s some kind of servo control loop where the driver twiddles PWM values sent to a DAC to control the disc motor, measured against a VIA timer reference to do some sort of dynamic rate-matching to get the correct bitrate from the disc sectors. I think once it finds the track start it then streams the track into memory, and the driver decodes the symbols (more clever encoding) and selects the sector of interest. I was sad.

Surely Basilisk II and Mini vMac etc. had solved this in some clever way – they emulated floppy discs. I learned they do not, and do the smart engineering thing instead: avoid the problem. The other emulators do quite a lot of ROM patching: the ROM isn’t run unmodified. You can argue that this then isn’t a perfect hardware emulation if you’re patching out inconvenient parts of the ROM, but so what. I suspect they were abiding by a rule #1 too. I was going to do the same: I figured out a bit of how the Mac driver interface works (gah, rule #3!)
and understood how the other emulators patched this. They use a custom paravirtualised 68K driver which is copied over the ROM’s IWM driver, servicing .Sony requests from the block layer and routing them to more convenient host-side code to manage the requests. Basilisk II uses some custom 68K opcodes and a simple driver, and Mini vMac a complex driver with trappy accesses to a custom region of memory. I reused the Basilisk II driver, but converted it to access a trappy region (easier to route: just emulate another device). The driver callbacks land in the host/C side, and some cut-down Basilisk II code interprets the requests and copies data to/from the OS-provided buffers. Right now, all I needed was to read blocks from one disc: I didn’t need different formats (or even write support), or multiple drives, or ejecting/changing images.

Getting the first block loaded from disc took waaaayyy longer than the first part. And I’d had to learn a bit of 68K (gah), but just in the nick of time I got a Happy Mac icon as the System software started to load.

This was still a simple Linux commandline application, with zero UI. No keyboard or mouse, no video. Time to wrap it in an SDL2 frontend (the unix_main test build in the umac project), and I could watch the screen redraw live. I hadn’t coded the 1Hz timer interrupt into the VIA, and after adding that it booted to a desktop!

The first boot

As an aside, I try to create a dual-target build for all my embedded projects, with a native host build for rapid prototyping/debugging; libSDL instead of an LCD. It means I don’t need to code at the MCU, so I can code in the garden. :)

Next was mouse support. Inside Macintosh and the schematics show how it’s wired: to the VIA (good) and the SCC (a beast). The SCC is my second least-favourite chip in this machine; it’s complex, and the datasheet/manual seems to be intentionally written to hide information, piss off readers, and get one back at the world. (I didn’t go near the serial side, its main purpose, just external IRQ management. But it’ll do all kinds of exciting 1980s line coding schemes, offloading bitty work from the CPU. It was key for supporting things like AppleTalk.)

Life was almost complete at this point; with a working mouse I could build a new disc image (using Mini vMac, an exercise in itself) with Missile Command. This game is pretty fun for under 10KB on disc. So:

- Video works
- Boots from disc
- Mouse works, Missile Command

I had no keyboard, but it’s largely working now. Time to start on sub-project numero due:

Hardware and RP2040

Completely unrelated to umac, I built up a circuit and firmware with two goals:

- Display 512x342x1 video to VGA with minimal components,
- Get the TinyUSB HID example working and integrated.

This would just display a test image copied to a framebuffer, and printf() keyboard/mouse events, as a PoC.

The video portion was fun: I’d done some I2S audio PIO work before, but here I wanted to scan out video and arbitrarily control Vsync/Hsync. Well, to test I needed a circuit. VGA wants 0.7V max on the video R,G,B signals and (mumble, some volts) on the syncs. The R,G,B signals are 75Ω to ground: with some maths (the three 75Ω loads in parallel make 25Ω, so a 100Ω series resistor gives 3.3V × 25/125 ≈ 0.66V), a 3.3V GPIO driving all three through a 100Ω resistor is roughly right.

The day I started soldering it together I needed a VGA connector. I had a DB15 but wanted it for another project, and felt bad about cutting up a VGA cable. But when I took a walk at lunchtime, no shitting you, I passed some street cables. I had a VGA cable – the rust helps with the janky aesthetic.
Free VGA cable

The VGA PIO side was pretty fun. It ended up as the PIO reading config info dynamically to control Hsync width, display position, and so on, and then some tricks with DMA to scan out the config info interleaved with framebuffer data. By shifting the bits in the right direction and by using the byteswap option on the RP2040 DMA, the big-endian Mac framebuffer can be output directly without CPU-side copies or format conversion. Cool. This can be fairly easily re-used in other projects: see video.c.

But. I ended up (re)writing the video side three times in total:

The first version had two DMA channels writing to the PIO TX FIFO. The first would transfer the config info, then trigger the second to transfer video data, then raise an IRQ. The IRQ handler would then have a short time (the FIFO depth!) to choose a new framebuffer address to read from, and reprogram the DMA. It worked OK, but was highly sensitive to other activity in the system. The first and most obvious fix is that any latency-sensitive IRQ handler must have the __not_in_flash_func() attribute so as to run out of RAM. But even with that, the design didn’t give much time to reconfigure the DMA: random glitches and blanks occurred when moving the mouse rapidly.

The second version did double-buffering, with the goal of making the IRQ handler’s job trivial: poke in a pre-prepared DMA config quickly, then after the critical rush calculate the buffer to use for next time. Lots better, but still some glitches under high load. Even weirder, it’d sometimes just blank out completely, requiring a reset. This was puzzling for a while; I ended up printing out the PIO FIFO’s FDEBUG register to try to catch the bug in the act. I saw that the TXOVER overflow flag was set, and this should be impossible: the FIFOs pull data from DMA on demand with DMA requests and a credited flow-contr…OH WAIT. If credits get messed up or duplicated, too many transfers can happen, leading to an overflow at the receiver side. Well, I’d missed a subtle rule in the RP2040 DMA docs: “Another caveat is that multiple channels should not be connected to the same DREQ.”

So the third version…… doesn’t break this rule, and is more complicated as a result:

- One DMA channel transfers to the PIO TX FIFO.
- Another channel programs the first channel to send from the config data buffer.
- A third channel programs the first to send the video data.
- The programming of the first triggers the corresponding “next reprogram me” channel.

The nice thing – aside from no lock-ups or video corruption – is that this now triggers a Hsync IRQ during the video line scan-out, greatly relaxing the deadline for reconfiguring the DMA. I’d like to further improve this (with yet another DMA channel) to transfer without an IRQ per line, as the current IRQ overhead of about 1% of CPU time can be avoided.

(It would’ve been simpler to just hardwire the VGA display timing in the PIO code, but I like (for future projects) being able to dynamically reconfigure the video mode.)
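For a flavour of the “one data channel, reprogrammed by others” arrangement, here’s a simplified pico-sdk sketch. It shows a single reprogram channel and a fixed per-line word count; names like WORDS_PER_LINE and next_line_ptr are illustrative, and the real logic lives in pico-mac’s video.c:

    #include "hardware/dma.h"
    #include "hardware/pio.h"

    #define WORDS_PER_LINE 16              /* illustrative */

    static const uint32_t *next_line_ptr;  /* set per scanline elsewhere */

    void video_dma_init(PIO pio, unsigned int sm)
    {
        int data_ch   = dma_claim_unused_channel(true);
        int reprog_ch = dma_claim_unused_channel(true);

        /* The data channel is the ONLY channel paced by the PIO TX
         * DREQ (the rule broken by version two!). When it completes,
         * it chains to the reprogram channel.
         */
        dma_channel_config dc = dma_channel_get_default_config(data_ch);
        channel_config_set_transfer_data_size(&dc, DMA_SIZE_32);
        channel_config_set_dreq(&dc, pio_get_dreq(pio, sm, true));
        channel_config_set_chain_to(&dc, reprog_ch);
        dma_channel_configure(data_ch, &dc,
                              &pio->txf[sm],   /* write: PIO TX FIFO */
                              NULL,            /* read: set by reprog_ch */
                              WORDS_PER_LINE, false);

        /* The reprogram channel copies one word (the next read address)
         * into the data channel's READ_ADDR trigger alias, restarting
         * it. No DREQ involved; trigger it once to kick things off.
         */
        dma_channel_config rc = dma_channel_get_default_config(reprog_ch);
        channel_config_set_read_increment(&rc, false);
        dma_channel_configure(reprog_ch, &rc,
                              &dma_hw->ch[data_ch].al3_read_addr_trig,
                              &next_line_ptr, 1, true);
    }

The real version chains through two reprogram channels (one for the config words, one for the pixel data), but the flow-control rule is the same: exactly one channel listens to the FIFO’s DREQ.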
So now we have a platform and firmware framework to embed umac into, HID in and video out. The hardware’s done, fuggitthat’lldo, let’s throw it over to the software team:

How it all works

Back to emulating things

A glance at the native umac binary showed a few things to fix before it could run on the Pico:

- Musashi constructed a huge opcode decode jumptable at runtime, in RAM. It’s never built differently, and never changes at runtime. I added a Musashi build-time generator so that this table could be const (and therefore live in flash).
- The disassembler was large, and not going to be used on the Pico, so another option to build without it.
- Musashi tries to accurately count execution cycles for each instruction, with more large lookup tables. Maybe useful for console games, but the Mac doesn’t have the same degree of timing sensitivity. REMOVED.

(This work is in my small-build branch.)

pico-mac takes shape, with the ROM and disc image in flash, and enjoyably it now builds and runs on the Pico! With some careful attention to not shoving stuff in RAM, the RAM use is looking pretty good. The emulator plus HID code is using about 35-40KB on top of the Mac’s 128KB RAM area – there’s 95+KB of RAM still free.

This was a good time to finish off adding the keyboard support to umac. The Mac keyboard is interfaced serially through the VIA ‘shift register’, a basic synchronous serial interface. This was logically simple, but frustrating, because early attempts at replying to the ROM’s “init” command were persistently ignored. The ROM disassembly was super-useful again: reading the keyboard init code, it looked like a race condition in interrupt acknowledgement if the response byte appears too soon after the request is sent. I shoved in a delay to hold off a reply until a later poll, and then it was just a matter of mapping keycodes (boooooorrrriiiiing).

With a keyboard, the end-of-level MacWrite boss is reached:

One problem though: it totally sucked. It was suuuuper slow. I added a 1Hz dump of instruction count, and it was doing about 300 KIPS. The 68000 isn’t an amazing CPU in terms of IPC. Okay, there are some instructions that execute in 4 cycles. But you want to use those extravagant addressing modes, don’t you, and touching memory spends those cycles all over the place. I’m not an expert, but targeting about 1 MIPS for an 8MHz-ish 68000 seems right. Only a 3x improvement needed.

Performance

I didn’t say I wasn’t gonna cheat: let’s run that Pico at 250MHz instead of 125MHz. Okay, better, but not 2x better. From memory, only about 30% better. Damn, no free lunch today.

Musashi has a lot of configurable options. My first goal was to get its main loop (as seen post-compile, from the disassembly!) small: the Mac doesn’t report Bus Errors, so the registers don’t need copies for unwinding. The opcodes are always fetched from a 16b boundary, so they don’t need alignment checking, and can use halfword loads (instead of two byte loads munged into a halfword!). For the Cortex-M0+/armv6m ISA, reordering some of the CPU context structure fields enabled immediate-offset access and better code. The CPU type, mysteriously, was dynamically-changeable and led to a bunch of runtime indirection.

Looking better, maybe 2x improvement, but not enough. Missile Command was still janky and the mouse wasn’t smooth! Next, some naughty/dangerous optimisations: remove address alignment checking, because unaligned accesses don’t happen in this constrained environment. (This work is in my umac-hacks branch.)

But the real perf came from a different trick. First, a diversion!

RP2040 memory access

The RP2040 has fast RAM, which is multi-banked so as to allow generally single-cycle access to multiple users (2 CPUs, DMA, etc.). Out of the box, most code runs via XIP from external QSPI flash. The QSPI usually runs at the core clock (125MHz default), but has a latency of ~20 cycles for a random word read.
The RP2040 uses a relatively simple 16KB cache in front of the flash to protect you from this horrible access latency, but the more code you have, the more likely you are to call a function and have to crank up the QSPI. When overclocking to 250MHz, the QSPI can’t go that fast, so stays at 125MHz (I think). Bear in mind, then, that your 20-ish QSPI cycles on a miss become 40-ish CPU cycles.

The particular rock-and-a-hard-place here is that Musashi build-time generates a ton of code: a function for each of its 1968 opcodes, plus that 256KB opcode jumptable. Even if we make the inner execution loop completely free, the opcode dispatch might miss in the flash cache, and the opcode function itself might too. (If we want to get 1 MIPS out of about 200 MIPS, a few of these delays are going to really add up.)

The __not_in_flash_func() attribute can be used to copy a given function into RAM, guaranteeing fast execution. At the very minimum, the main loop and memory accessors are decorated: every instruction is going to access an opcode and most likely read or write RAM. This improves performance a few percent. Then, I tried decorating whole classes of opcodes: move is frequent, as are branches, so put ‘em in RAM. This helped a lot, but the remaining free RAM was used up very quickly, and I wasn’t at my goal of much above 1 MIPS.

Remember that RISC architecture is gonna change everything? We want to put some of those 1968 68K opcodes into RAM to make them fast. What are the top 10 most often-used instructions? Top 100? By adding a 64K table of counters to umac, booting the Mac and running key applications (okay, playing Missile Command for a bit), we get a profile of dynamic instruction counts. It turns out that the 100 hottest opcodes (5% of the total) account for 89% of the execution. And the top 200 account for a whopping 98% of execution.

Armed with this profile, the umac build post-processes the Musashi auto-generated code and decorates the top 200 functions with __not_in_flash_func(). This adds only 17KB of extra RAM usage (leaving 95KB spare), and hits about 1.4 MIPS! Party on!
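The profiling side of that trick is about as simple as it sounds – conceptually something like this sketch (umac’s real hook and output format may differ):

    #include <stdint.h>

    /* One counter per 16-bit opcode pattern: 64K x 4 bytes = 256KB,
     * which is fine in the native host build where profiling runs.
     */
    static uint32_t opcode_count[0x10000];

    /* Called from the emulator's dispatch, once per instruction. */
    static inline void profile_opcode(uint16_t opcode)
    {
        opcode_count[opcode]++;
    }

    /* At exit: rank opcodes by count, map the hottest 200 to their
     * generated handler names, and emit a list for the build to
     * decorate with __not_in_flash_func().
     */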
At last, the world can enjoy Missile Command’s dark subject matter in performant comfort:

Missile Command on pico-mac

What about MacPaint?

Everyone loves MacPaint. Maybe you love MacPaint, and have noticed I’ve deftly avoided mentioning it. Okay, FINE: It doesn’t run on a Mac 128Ke, because the Mac Plus ROM uses more RAM than the original. :sad-face:

I’d seen this thread on 68kMLA about a “Mac 256K”: https://68kmla.org/bb/index.php?threads/the-mythical-mac-256k.46149/

Chances are that the Mac 128K was really a Mac 256K in the lab (or maybe even intended to have 256K and cost-cut before release), as the OS functions fine with 256KB. I wondered, does the Mac ROM/OS need a power-of-two amount of RAM? If not, I have that 95K going spare. Could I make a “Mac 200K”, and then run precious MacPaint?

Well, I tried a local hack that patches the ROM to update its global memTop variable based on a given memory size, and yes, System 3.2 is happy with non-power-of-2 sizes. I booted with 256K, 208K, and 192K. However, there were some additional problems to solve: the ROM memtest craps itself without a power-of-2 size (totally fair), and NOPping that out leads to other issues. These can be fixed, though some parts of boot also access off the end of RAM. A power-of-2 size means a cheap address mask (effectively addr & (RAM_SIZE - 1)) wraps RAM accesses to the valid buffer, and that can’t be done with 192K.

Unfortunately, when I then tested MacPaint it still wouldn’t run, because it wanted to write a scratch file to the read-only boot volume. This is totally breaking rule #1 by this point, so we are staying with 128KB for now. However, a 256K MicroMac is extremely possible. We just need an MCU with, say, 300KB of RAM… Then we’d be cooking on gas.

Goodbye, friend

Well, dear reader, this has been a blast. I hope there’s been something fun here for ya. Ring off now, caller!

The MicroMac!
HDMI monitor, using a VGA-to-HDMI box
umac screenshot, System 3.2, Finder 5.3
Performance tuning
Random disc image working OK

Resources

https://github.com/evansm7/umac
https://github.com/evansm7/pico-mac
https://www.macintoshrepository.org/7038-all-macintosh-roms-68k-ppc-
https://winworldpc.com/product/mac-os-0-6/system-3x
https://68kmla.org/bb/index.php?threads/macintosh-128k-mac-plus-roms.4006/
https://docs.google.com/spreadsheets/d/1wB2HnysPp63fezUzfgpk0JX_b7bXvmAg6-Dk7QDyKPY/edit#gid=840977089

A small ode to the CRT

Built October 2018

I used to hate Cathode Ray Tubes. As a kid in Europe, everything flickered at 50Hz, or made a loud whistle at 15.625KHz (back when I could still hear it). CRTs just seemed crude, “electro-brutalist” contraptions from the valve era. They were heavy, and delicate, and distorted, and blurry, and whistled, and gave people electric shocks when they weren’t busy imploding and spreading glass shards around the place. When I saw the film Brazil, I remember getting anxious about exposed CRTs all over the place — seems I was the kind of kid who was more worried about someone touching the anode or electron gun than the totalitarian bureaucratic world they lived in. 🤷🏻‍♂️ As ever, I digress.

Now in the 2020s, the CRT is pretty much gone. We have astonishing flat-panel LCD and OLED screens. Nothing flickers, everything’s pin-sharp, multi-megapixel resolutions, nothing whines (except me), and display life is pretty incredible for those of us old enough to remember green-screen computing (but young enough to still see the details). But the march to betterness marches away from accessibility: if you take apart a phone, the LCD is a magic glass rectangle, and that’s it. Maybe you can see some LEDs if you tear it apart, but it’s really not obvious how it works.

CRTs are also magic, but in a pleasing 19th-century top-hat-and-cane science kind of way. Invisible beams trace out images through a foot of empty space. They respond colourfully to magnets (also magic) held to their screens by curious children, whose glee rapidly decays into panic as they try to undo the effect using the other pole before their mother looks around and discovers what they’ve done (allegedly). The magnet-game is a clue: (most) CRTs use electromagnets that scan the invisible electron beam to light an image at the front. There’s something enjoyable about moving the beam yourself, with a magnet in hand, and you can kind of intuitively figure out how it works from doing this. (Remember the Left-hand Rule?)

I started to warm to CRTs, maybe a fondness when I realised I hadn’t had to seriously use one for over a decade. I wanted to build something. I also like smol displays, and found an excellent source for a small CRT — a video camera viewfinder. Home cameras had tiny CRTs, roughly 1cm picture size, but I looked for a higher-end professional viewfinder, because they tended to have larger tubes for a higher-quality image. Eventually I found a Sony HVF-2000 viewfinder, from ca. 1980. This viewfinder contained a monochrome 1.5” CRT, and its drive circuitry on stinky 1970s phenolic resin PCBs. All it needs are two turntables and an 8V DC power supply and composite video input. It displays nice, sharp images on a cool white phosphor. I built this from it:

Small CRT floating in a box

I wanted to show the CRT from all angles, without hiding any of it, in the trusty “desktop curiosity” style. The idea was to show off this beautiful little obsolete glass thingy, in a way that you could sorta guess how it worked. Switching it on with a pleasing clack, it starts silently playing a selection of 1980s TV shows, over and over and over:

I had this on my desk at work, and a Young Person™ came into my office one day to ask about it. He hadn’t really seen a CRT close-up before, and we had a fun chat about how it worked (including waving a magnet at it – everyone has a spare magnet on their desk for these moments, don’t they? Hello…?). Yay!

If you’re unfamiliar with CRTs, they work roughly like this:

- The glass envelope contains a vacuum.
- The neck contains a heating filament (like a lightbulb) which gives off electrons into the void.
- This “electron gun” is near some metal plates (with variously high positive and negative voltages), which act to focus the fizz of electrons into a narrow beam, directing it forward.
- The inside of the front face of the tube is covered by a phosphorescent material which lights up when hit with electrons.
- The front face is connected to the anode terminal, a high positive voltage. This attracts the beam of electrons, which accelerates to the front. The beam hits the front and creates light in a small spot.
- To create the picture, the beam is steered in rasters/lines using horizontal and vertical electromagnets wrapped around the neck of the tube. (The magnets are called the “yoke”.) For PAL at 50Hz, lines are drawn 15625 times a second (625 lines × 25 full frames per second, as 50 interlaced fields). Relying on the principle of persistence of vision, this creates the illusion of a steady image.

The tube is sealed and the electron gun inside is largely invisible, but here you can see the malicious-looking thick anode wire, and how dainty the tube really is with the yoke removed:

Note: the anode voltage for this tube is, from memory, about 2.5 kilovolts, so not particularly spicy. A large computer monitor will give you 25KV! Did I mention the X-rays?

Circuit

The original viewfinder was a two-board affair, fitting in a strange transverse shape for the viewfinder case. I removed a couple of controls and indicators unrelated to the CRT operation, and extended the wires slightly so the boards could be stacked.

The viewfinder’s eyepiece looks onto a mirror, turning the view 90º to the CRT face — so the image is horizontally flipped. This was undone by swapping the horizontal deflection coil wires, reversing the field direction.

The circuit’s pretty trivial. It just takes a DC input (9-12V) and uses two DC-DC converter modules to create an 8V supply for the CRT board and a 5V supply for a Raspberry Pi Zero layered at the bottom. The whole thing uses under 2W. The Pi’s composite output drops straight into the CRT board. The Pi starts up a simple shell script that picks a file to play. There’s a rotary encoder on the back, to change channel, but I haven’t wired it up yet.

Case

For me, the case was the best bit. I had just got (and have since lost :((( ) access to a decent laser cutter, and wanted to make a dovetailed transparent case for the parts. It’s made from 3mm colourless and sky-blue acrylic.

Rubber bands make the world go round

The CRT is supported from two “hangers”, and two trays below hold the circuitry. These are fixed to the sides using a slot/tab approach, with captive nuts. In the close-up pictures you can see there are some hairline stress fractures around the corners of some of the tab cut-outs: they could evidently do with being a few hundred µm wider!

The front/top/back/bottom faces are glued together, then the left/right sides are screwed into the shelves/hangers with captive M3 nuts. This sandwiches it all together. The back holds a barrel-style DC jack, power switch, and (as-yet unused) rotary encoder. The encoder was intended to eventually be a kind of “channel select”:

The acrylic is a total magnet for fingerprints and dust, which is excellent if you’re into that kind of thing. There also seem to be little flecks filling the case, probably some aquadag flaking off the CRT. This technology just keeps on giving.
OpenSCAD

The case is designed in OpenSCAD, and is somewhat parameterised: the XYZ dimensions, dovetailing, spacing of shelves and so forth can be tweaked till it looks good. One nice OpenSCAD laser-cutting trick I saw is that 2D parts can be rendered into a “preview” 3D view, tweaked and fettled, and then re-rendered flat on a 2D plane to create a template for cutting. So, make a 3D prototype, change the parameters until it looks good (maybe printing stuff out to see whether the physical items actually fit!)…

…then change the mode variable, and the same parts are laid out in 2D for cutting:

Feel free to hack on and re-use this template.

Resources

OpenSCAD box sources
Pics

Tiny dmesg!
Edmund Esq

Mac SE/30 odyssey

I’ve always wanted an Apple Macintosh SE/30. Released in 1989, they look quite a lot like the other members of the original “compact Mac” series, but pack in a ton of interesting features that the other compact Macs don’t have. This is the story of my journey to the point of owning a working Mac SE/30, which turns out not to be as simple as just buying one. Stay tuned for tales of debugging and repair.

So, the Mac. Check it out, with the all-in-one-style 9” monochrome display:

The beautiful Macintosh SE/30

I mean, look at it, isn’t it lovely? :) The key technical difference between the SE/30 and the other compact Macs is that the SE/30 is much, much less crap. It’s like a sleeper workstation, compared to the Mac Plus, SE, or Classic. 8MHz 68K? No! ~16MHz 68030. Emulating FP on a slow 68K? No! It ships with a real FPU! Limited to 4MB of RAM? Naw, this thing takes up to 128MB!

Look, I wouldn’t normally condone use of CISC machines (and – unpopular opinion – I’m not actually a 68K fan :D ), but not only has this machine a bunch of capability RAM-wise and CPU-wise, this machine has an MMU. In my book, MMUs make things interesting (as well as ‘interesting’). Unlike all the other compact Macs, this one can run real operating systems, like BSD and Linux. And, I needed to experience A/UX first-hand.

Unpopular opinion #2: I don’t really like ye olde Mac OS/System 7 either! :) It was very cool at the time, and made long-lasting innovations, but the lack of memory protection or preemptive scheduling made it a little delicate. At the time, as a kid, it was frustrating that there was no CLI, nor any way to mess around and program them without expensive developer tools – so I gravitated to the Acorn Archimedes machines and RISC OS (coincidentally with the same delicate OS drawbacks), which were much more accessible programming-wise.

Anyway, one week during one of the 2020 lockdowns I was reminded of the SE/30, and got a bit obsessed with getting hold of one. I was thinking about them at 2am (when I wasn’t stressing about things like work), planning which OSes to try out, which upgrades to make, how to network it, etc. Took myself to that overpriced auction site, and bought one from a nearby seller.

We got one!

I picked it up. I was so excited. It was a good deal (hollow laugh from future-Matt), as it came in a shoulder bag and included mouse/keyboard, an external SCSI drive and various cables. Getting it into the car, I noticed an OMINOUS GRITTY SLIDING SOUND.

Oh, did I mention that these machines are practically guaranteed to self-destruct, because either the on-board electrolytic caps ooze out gross stuff, or the on-board Varta lithium battery poos its plentiful and corrosive contents over the logic board?

[If you own one of these machines or, let’s face it, any machine from this era, go right now and remove the batteries if you haven’t already! Go on, it’s important. (I’m also looking at you, Acorn RISC PC owners.) I’ll wait.]

I opened up the machine, and the first small clue appeared:

Matt: “Oh. That’s not a great omen.”
Matt, with strained optimism: “But maybe the logic board will be okay!”
Mac SE/30: “Nah mate, proper fucked sry.”
Matt: :(

At this point I’d like to say that the seller was a volunteer selling donated items from a charity shop, and it was clear they didn’t really know much about the machine. It was disappointing, but the money paid for this one is just a charitable donation, and I’m happy at that.
(If it were a private seller taking money for a machine that sounded like it had washed up on a beach, it’d be a different level of fury.)

Undeterred (give it up, Matt, come on), I spent a weekend trying to resurrect it. Much of the gross stuff washed off, bathing it in a sequence of detergents/vinegar/IPA/etc.:

You can see some green discolouring of the silkscreen in the bottom right. Submerged in (distilled) water, you can see a number of tracks that vanish halfway, or have disappeared completely. Or components whose pads and leads have been destroyed! The battery chemicals are very ingenious; they don’t just wash like lava across the board and destroy the top, but they also wick down into the vias, and capillary action seems to draw them into the inner layers.

Broken tracks, missing pads, missing components, missing vias

Poring over schematics and beeping out connections, I started airwiring the broken tracks (absolutely determined to get this machine running, as though some perverse challenge). But once I found broken tracks on the inner layers, it moved from perverse to Sisyphean, because I couldn’t just see where the damage was: wouldn’t even finding the broken tracks, by beeping out all connections, be O(intractable)?

Making the best decision so far in the odyssey, I gave up and searched for another SE/30. At least I got a spare keyboard and mouse out of it. But also a spare enclosure/CRT/analog board, etc., which will be super-useful in a few paragraphs.

Meet the new Mac, same as the old Mac

I found someone selling one who happened to be in the same city (and, it turns out, we even worked for the same company – city like village). This one was advertised as having been ‘professionally re-capped’, and came loaded: 128MB of RAM, and a sought-after Ethernet card. Perfecto!

Paranoid me immediately took it apart to check the re-capping and battery. :) Whilst there was a teeny bit of evidence of prior capacitor leakage, it was really clean for a 31-year-old machine and I was really pleased with it. The re-capping job looked sensible, check. The battery looked new, but I’m taking no chances this time and pulled it out.

I had a good 2 hours merrily pissing about doing the kinds of things you do with a new old computer, setting up networking and getting some utilities copied over, such as a Telnet client:

Telnet client, life is complete

Disaster strikes

After the two-hour happiness timer expired, the machine stopped working. Here’s what it did:

Otherwise, the Mac made the startup “bong” sound, so the logic board was alive; just unhappy video. I think we’re thinking the same thing: the CRT’s Y-deflection circuit is obviously broken. This family of Macs has a common fault where solder joints on the Analogue board crack, or the drive transistor fails. The excellent “Dead Mac Scrolls” book covers common faults, and fixes.

But remember the first Mac: the logic board was a goner, but the Analog board/CRT seemed good. I could just swap the logic board over, and I’ve got a working Mac again and can watch the end of Telnet Star Wars.

It did exactly the same thing! Bollocks: the problem was on the logic board.

Debugging the problem

We were both wrong: it wasn’t the Y-deflection circuit for the CRT. The symptoms of that would be that the CRT scans, but all lines get compressed and overdrawn along the centre – no deflection, creating one super-bright line in the centre.

Debug clues

Clue 1: This line wasn’t super-bright.
Let’s take a closer look:

Clue 2: It’s a dotted line, as though it’s one line of the stippled background when the Mac boots. That’s interesting, because it’s clearly not being overdrawn; multiple lines merged together would overlay even/odd odd/even pixels and come out solid white. The line also doesn’t show any of the “happy Mac” icon in the middle, so it isn’t one of the centre lines of the framebuffer.

SE/30 logic board on The Bench, provided with +5V/+12V and probed with scope/LA

If you’ve an SE/30 (or a Classic/Plus/128/512 etc.) logic board on a workbench, they’re easy enough to power up without the Analog board/CRT, but be aware the /RESET circuitry is a little funky. Reset is generated by the sound chip (…obviously), which requires both +5V and +12V to come out of reset, so you’ll need a dual-rail bench supply. I’d also recommend plugging headphones in, so you can hear the boot chime (or lack of it) as you tinker. Note the audio amp technically requires -5V too, but with +5V alone you should still be able to hear something.

This generation of machines is one of the last to have significant subsystems implemented as multi-chip sections. It’s quite instructive to follow along in the schematic: the SE/30 video system is a cluster of discrete X and Y pixel counters which generate addresses into VRAM (which spits out pixels). Some PALs generate VRAM addresses/strobes/refresh, and video syncs.

Clue 3: The video output pin on the chonky connector is being driven, and HSYNC is running correctly (we can deduce this already, though, because the CRT lights up, meaning its HT supply is running, and that’s driven from HSYNC). But there was no VSYNC signal at all.

VSYNC comes from a PAL taking a Y-count from a counter clocked by ‘TWOLINE’

Working backwards, I traced VSYNC from the connector to PAL UG6. It wasn’t simply a broken trace; UG6 wasn’t generating it. UG6 appears to be a comparator that generates vertical timing strobes when the Y line count VADR[7:0] reaches certain lines. The Y line count is generated from a dual hex counter, UF8.

Clue 4: The Y line count wasn’t incrementing at all. That explains the lack of VSYNC, as UG6 never saw the “VSYNC starts now” line come past. The UF8 counter is clocked/incremented by the TWOLINE signal output from PAL UG7.

Clue 5a: PAL UG7’s TWOLINE output was stuck/not transitioning. Its other outputs (such as HSYNC) were transitioning fine. PALs do die, but it seems unusual for only a single output to conk out.

Clue 5b: PAL UG7 was unusually hot!

Clue 6, and the root problem: Pulling the PALs out, the TWOLINE pin measures 3Ω to ground. AHA!

Debug epiphany

Something is shorting the TWOLINE signal to a power rail. Here’s how the clues correspond to the observations:

- There is no VSYNC; the Y line count is stuck at 0.
- The X counter is working fine. (HSYNC is produced, and a stippled pattern line is displayed correctly.)
- The display shows the top line of the video buffer (from address 0, over and over) but never advances onto the next line.
- The CRT Y deflection is never “charged up” by a VSYNC, so the raster stays in the centre on one line, instead of showing 384 identical lines.

We can work with this. TWOLINE is shorted somehow. Tracing it across the PCB, every part of the trace looked fine, except that I couldn’t see the part that ran underneath C7 (one of the originally-electrolytic caps, replaced with a tantalum). I removed C7:

See the problem? It’s pleasingly subtle…

How about now?
A tiny amount of soldermask has come off the track just south of the silkscreen ‘+’. This was Very Close to the capacitor’s contact, and was shorting against it! Above, I thought it was shorting to ground: it’s actually shorting to +5V (which, when you measure it, might well read as a low number of ohms to ground). My theory is that it wasn’t quite contacting, or wasn’t making a good connection, and that the heat from my 2-hour joyride expanded the material such that it made good contact. You can see that there’s some tarnish on the IC above C7 – this is damage from the previous C7 leaking. This, or the re-capping job, lifted the insulating soldermask, leading to the short.

Fixed

The fix was simple: add some insulation using kapton tape, and replace the capacitor:

After that, I could see VSYNC being produced! But would it work?

The sweet 1bpp stippled smell of success

Yasssssss! :) Time to put it all back together, trying not to touch or break the CRT.

And now for something completely different, but eerily familiar

I mentioned I wanted this particular model because it could run “interesting OSes”. Did you know that, way before NeXT and OS X, Apple was a UNIX vendor?

Apple A/UX operating system

I’ve always wanted to play with Apple’s A/UX. By version 3.1, it had a very highly-integrated Mac OS ‘Classic’ GUI running on a real UNIX.

It’s like Mac OS, but... there’s a UNIX dmesg too?

It’s not X11 (though an X server is available), it really is running the Mac Toolbox etc., and it seems to have some similarities with the later OS X Blue Box/Classic environment, in that it runs portions of Mac OS as a UNIX process. In the same way as OS X + Blue Box, A/UX will run unmodified Mac OS applications. The Finder is integrated with the UNIX filesystems in both directions (i.e. from a shell you can manipulate Mac files). These screenshots don’t do it justice, but there are good A/UX screenshots elsewhere. As an OS geek, I’m really impressed with the level of integration between the two OSes! It’s very thorough.

Since the usual UNIX development tools are available, there’s a bit of cognitive dissonance in being able to “program a Mac” right out of the box:

A/UX example application

I mean, not just building normal UNIX command-line apps with cc/make etc., but the development examples include Mac OS GUI apps as well! It’s truly living in the future™.

Plug for RASCSI

Playing with ancient machines and multiple OSes is pretty painful when using ancient SCSI discs, because:

- Old discs don’t work.
- Old discs are small.
- Transferring stuff to and from discs means plugging them into your Linux box and… I don’t have SCSI there.
- Old discs don’t work, and will pretend to, and then screw up and ruin your week.

I built a RASCSI adapter (write-up and PCB posting TBD), software and circuit originally by GIMONS. This is a Raspberry Pi adapter that allows a userspace program to bit-bang the SCSI-I protocol, serving emulated disc/CD-ROM images from SD card. It works beautifully on the SE/30, and lets it both have several discs present at once, and switch between images quickly.

Homemade RASCSI clone, SCSI emulator for Raspberry Pi

The end, seeeeeeya!

Resources

https://archive.org/details/mac_The_Dead_Mac_Scrolls_1992
https://winworldpc.com/product/a-ux/3x
https://68kmla.org/bb/index.php

32-bit hat, with LEDs

Built in November 2015 (now-traditional multi-year writeup delay applied)

A hat, bejewelled with 38 RGB LEDs

Is this thing on..? It’s been a while since I’ve written one of these. So, the hat. It’s been on the writeup pile for almost 6 years, nagging away. Finally it’s time to shine! NO PUN ESCAPES

Anyway, the hat. It seemed like a good idea, and I even wore it out dancing. I know, so cool. This hat had been through at least two fancy-dress events, and had a natty band aftermarket mod even before the LEDs.

Long story short: got a hat, put a battery, an ARM Cortex-M0 microcontroller and an accelerometer in it, and a strip of full-colour RGB LEDs around it. The LEDs then react to movement, with an effect similar to a spirit level: as the hat tilts, a spark travels to the highest point. The spark rolls around, fading out nicely.

Hardware

Pretty much full bodge-city, and made in a real rush before a party. Parts:

- Charity shop Trilby (someone’s going to correct me that this is not an ISO standard Trilby and is in fact a Westcountry Colonel Chap Trilby, or something). Bugger it – a hat.
- A WS2812B strip of 38 LEDs. 38 is what would fit around the hat.
- Cheapo ADXL345 board.
- Cheapo STM32F030 board (I <3 these boards! So power, such price wow).
- Cheapo Li-Ion charging board and 5V step-up module all-in-one (AKA “powerbank board”).
- Li-Ion flat/pouch-style battery.
- Obviously some hot glue in there somewhere too.

No schematic, sorry, it was quite freeform. The battery is attached to the charging board. That connects to the rest of the system via a 0.1” header/disconnectable “power switch” cable. The 5V power then directly feeds the LED strip and the Cortex-M0 board (which generates 3.3V itself). The ADXL345 accelerometer is joined directly to the STM32 board at what was the UART header, which is configured for I2C:

The STM32 board is also stripped of any unnecessary or especially pointy parts, such as jumpers/pin headers, to make it as flat and pain-free as possible.

The LED strip is bent into a ring and soldered back onto itself. 5V and ground are linked at the join, whereas DI enters at the join and DO is left hanging. This is done for mechanical stability, and can’t hurt for power distribution too. Here’s the ring in testing:

The electronics are mounted in an antistatic bag (with a hole for the power “switch” header pins, wires, etc.), and the bag sewn into the top of the hat:

The LED ring is attached via a small hole, and sewn on with periodic thread loops:

Software

The firmware goes through an initial “which way is up?” calibration phase for the first few seconds, where it:

- Lights a simple red dotted pattern, to warn the user it’s about to sample which way is up, so put it on quick and stand as naturally as you can with such exciting technology on your head,
- Lights a simple white dotted pattern, as it measures the “resting vector”, i.e. which way is up.

This “resting vector” is thereafter used as the reference for determining whether the hat is tilted, and in which direction.

Tilt direction vectors

The main loop’s job is to regulate the rate of LED updates, read the accelerometer, calculate a position to draw a bright spark “blob”, and update the LEDs. The accelerometer returns a 3D vector of force; when not being externally accelerated, the vector represents the direction of Earth’s gravity, i.e. ‘down’.

Trigonometry is both fun and useful

Roughly, the calculations that are performed are:

- Relative to “vertical” (approximated by the resting vector), calculate the hat’s tilt in terms of the angle of the measured vector to vertical, and its bearing relative to “12 o’clock” in the horizontal (XY) plane.
- Convert the bearing of the vector into a position in the LED hoop.
- Use the radius of the vector in the XY plane as a crude magnitude, scaling up the spark intensity for a larger tilt.

All this talk of tilt and gravity vectors assumes the hat isn’t being moved (i.e. worn by a human). It doesn’t correct for the fact that the hat is likely actually accelerating, rather than sitting static at a tilt but, hey, this is a hat with LEDs and not a rocket. It is incorrect and looks good.
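In C, the shape of that loop is roughly as follows. This is a sketch of the idea only: accel_read(), rest[], draw_blob() and friends are illustrative names, and the real firmware uses Qfplib rather than libm for the trigonometry:

    #include <math.h>

    #define NLEDS 38

    static float rest[3];           /* calibrated resting ("up") vector */

    /* Provided elsewhere in the firmware (illustrative prototypes): */
    void accel_read(float v[3]);    /* ADXL345 over I2C */
    void draw_blob(int led, float mag);
    void fade_frame(void);
    void leds_refresh(void);
    void delay_for_frame_rate(void);

    void spark_loop(void)
    {
        for (;;) {
            float v[3];
            accel_read(v);

            /* Tilt relative to the resting vector: the bearing in the
             * XY plane picks an LED; the XY radius is a crude magnitude.
             */
            float x = v[0] - rest[0];
            float y = v[1] - rest[1];
            float bearing = atan2f(y, x);             /* -pi .. pi */
            float mag = sqrtf(x * x + y * y);

            int led = (int)((bearing + (float)M_PI) *
                            NLEDS / (2.0f * (float)M_PI)) % NLEDS;

            draw_blob(led, mag);    /* bright spark into the framebuffer */
            fade_frame();           /* decrement all pixels: fade-out    */
            leds_refresh();         /* stream the frame to the WS2812Bs  */
            delay_for_frame_rate(); /* regulate the update rate          */
        }
    }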
Floating-point

I never use floating point in any of my embedded projects. I’m a die-hard fixed-point kind of guy. You know where you are with fixed point. Sooo, anyway, the firmware uses the excellent Qfplib, from https://www.quinapalus.com/qfplib-m0-tiny.html. This provides tiny single-precision floating-point routines, including the trigonometric routines I needed for the angle calculations. Bizarrely, with an embedded hat on, it was way easier using gosh-darnit real FP than it was to do the trigonometry in fixed point.

Framebuffer

The framebuffer is only one-dimensional :) It’s a line of pixels representing the LEDs. Blobs are drawn into the framebuffer at a given position, and start off “bright”. Every frame, the brightness of all pixels is decremented, giving a fade-out effect. The code drawing blobs uses a pre-calculated colour look-up table, to give a cool white-blue-purple transition to the spark.

Driving the WS2812B RGB LEDs

The WS2812B LEDs take a 1-bit stream of data encoding 24b of RGB data, in a fixed-time frame using the relative timing of rising/falling edges to give a 0 or 1 bit. The code uses a timer in PWM mode to output a 1/0 data bit, refilled from a neat little DMA routine (see the sketch below).

Once a framebuffer has been drawn, the LEDs are refreshed. For each pixel in the line, the brightness bits are converted into an array of timer values, each representing a PWM period (therefore a 0-time or a 1-time). A double-buffered DMA scheme is used to stream these values into the timer PWM register. This costs a few bytes of memory for the intermediate buffers, and is complicated, but has several advantages:

- It’s completely flicker-free, and largely immune to any other interrupt/DMA activity, compared to bitbanging approaches.
- It goes on in the background, freeing up CPU time to calculate the next frame.
- Though the CPU is pretty fast, this allows LEDHat to update at over 100Hz, giving incredibly fluid motion.
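The bit-to-PWM-period conversion is the heart of it. Here’s a sketch of the encode step; the timing constants are illustrative, assuming a 48MHz timer clock and the WS2812B’s ~800kHz bit rate, and the DMA half-transfer/transfer-complete interrupts refill each half of the double buffer:

    #include <stdint.h>

    /* Each data bit occupies one ~1.25us PWM period; the high time
     * within the period encodes the bit value.
     */
    #define WS_PERIOD 60    /* 1.25us at 48MHz (illustrative) */
    #define WS_T0H    19    /* ~0.4us high: a '0' bit */
    #define WS_T1H    38    /* ~0.8us high: a '1' bit */

    /* Two LEDs' worth of compare values: DMA streams one half into
     * the timer's compare register while the other half is refilled.
     */
    static uint16_t pwm_buf[2][24];

    /* Convert one LED's colour into 24 PWM compare values, MSB first,
     * in the WS2812B's G-R-B wire order.
     */
    static void encode_led(uint16_t *buf, uint8_t r, uint8_t g, uint8_t b)
    {
        uint32_t grb = ((uint32_t)g << 16) | ((uint32_t)r << 8) | b;
        for (int i = 0; i < 24; i++)
            buf[i] = (grb & (1u << (23 - i))) ? WS_T1H : WS_T0H;
    }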

Resources

Firmware sourcecode: https://github.com/evansm7/LEDHat

More in technology

The origin and unexpected evolution of the word "mainframe"

What is the origin of the word "mainframe", referring to a large, complex computer? Most sources agree that the term is related to the frames that held early computers, but the details are vague.1 It turns out that the history is more interesting and complicated than you'd expect.

Based on my research, the earliest computer to use the term "main frame" was the IBM 701 computer (1952), which consisted of boxes called "frames." The 701 system consisted of two power frames, a power distribution frame, an electrostatic storage frame, a drum frame, tape frames, and, most importantly, a main frame. The IBM 701's main frame is shown in the documentation below.2

This diagram shows how the IBM 701 mainframe swings open for access to the circuitry. From "Type 701 EDPM [Electronic Data Processing Machine] Installation Manual", IBM. From Computer History Museum archives.

The meaning of "mainframe" has evolved, shifting from being a part of a computer to being a type of computer. For decades, "mainframe" referred to the physical box of the computer; unlike modern usage, this "mainframe" could be a minicomputer or even a microcomputer. Simultaneously, "mainframe" was a synonym for "central processing unit." In the 1970s, the modern meaning started to develop—a large, powerful computer for transaction processing or business applications—but it took decades for this meaning to replace the earlier ones. In this article, I'll examine the history of these shifting meanings in detail.

Early computers and the origin of "main frame"

Early computers used a variety of mounting and packaging techniques, including panels, cabinets, racks, and bays.3 This packaging made it very difficult to install or move a computer, often requiring cranes or the removal of walls.4 To avoid these problems, the designers of the IBM 701 computer came up with an innovative packaging technique. This computer was constructed as individual units that would pass through a standard doorway, would fit on a standard elevator, and could be transported with normal trucking or aircraft facilities.7 These units were built from a metal frame with covers attached, so each unit was called a frame. The frames were named according to their function, such as the power frames and the tape frame. Naturally, the main part of the computer was called the main frame.

An IBM 701 system at General Motors. On the left: tape drives in front of power frames. Back: drum unit/frame, control panel and electronic analytical control unit (main frame), electrostatic storage unit/frame (with circular storage CRTs). Right: printer, card punch. Photo from BRL Report, thanks to Ed Thelen.

The IBM 701's internal documentation used "main frame" frequently to indicate the main box of the computer, alongside "power frame", "core frame", and so forth.
For instance, each component in the schematics was labeled with its location in the computer, "MF" for the main frame.6 Externally, however, IBM documentation described the parts of the 701 computer as units rather than frames.5 The term "main frame" was used by a few other computers in the 1950s.8 For instance, the JOHNNIAC Progress Report (August 8, 1952) mentions that "the main frame for the JOHNNIAC is ready to receive registers" and they could test the arithmetic unit "in the JOHNNIAC main frame in October."10 An article on the RAND Computer in 1953 stated that "The main frame is completed and partially wired." The main body of a computer called ERMA is labeled "main frame" in the 1955 Proceedings of the Eastern Computer Conference.9

Operator at console of IBM 701. The main frame is on the left with the cover removed. The console is in the center. The power frame (with gauges) is on the right. Photo from NOAA.

The progression of the word "main frame" can be seen in reports from the Ballistics Research Lab (BRL) that list almost all the computers in the United States. In the 1955 BRL report, most computers were built from cabinets or racks; the phrase "main frame" was only used with the IBM 650, 701, and 704. By 1961, the BRL report shows "main frame" appearing in descriptions of the IBM 702, 705, 709, and 650 RAMAC, as well as the Univac FILE 0, FILE I, RCA 501, READIX, and Teleregister Telefile. This shows that the use of "main frame" was increasing, but still mostly an IBM term.

The physical box of a minicomputer or microcomputer

In modern usage, mainframes are distinct from minicomputers or microcomputers. But until the 1980s, the word "mainframe" could also mean the main physical part of a minicomputer or microcomputer. For instance, a "minicomputer mainframe" was not a powerful minicomputer, but simply the main part of a minicomputer.13 For example, the PDP-11 is an iconic minicomputer, but DEC discussed its "mainframe."14 Similarly, the desktop-sized HP 2115A and Varian Data 620i computers also had mainframes.15 As late as 1981, the book Mini and Microcomputers mentioned "a minicomputer mainframe."

"Mainframes for Hobbyists" on the front cover of Radio-Electronics, Feb 1978.

Even microcomputers had a mainframe: the cover of Radio Electronics (1978, above) stated, "Own your own Personal Computer: Mainframes for Hobbyists", using the definition below. An article "Introduction to Personal Computers" in Radio Electronics (Mar 1979) uses a similar meaning: "The first choice you will have to make is the mainframe or actual enclosure that the computer will sit in." The popular hobbyist magazine BYTE also used "mainframe" to describe a microprocessor's box in the 1970s and early 1980s.16 BYTE sometimes used the word "mainframe" both to describe a large IBM computer and to describe a home computer box in the same issue, illustrating that the two distinct meanings coexisted.

Definition from Radio-Electronics: main-frame n: COMPUTER; esp: a cabinet housing the computer itself as distinguished from peripheral devices connected with it: a cabinet containing a motherboard and power supply intended to house the CPU, memory, I/O ports, etc., that comprise the computer itself.

Main frame synonymous with CPU

Words often change meaning through metonymy, where a word takes on the meaning of something closely associated with the original meaning.
Through this process, "main frame" shifted from the physical frame (as a box) to the functional contents of the frame, specifically the central processing unit.17 The earliest instance that I could find of the "main frame" being equated with the central processing unit was in 1955. Survey of Data Processors stated: "The central processing unit is known by other names; the arithmetic and ligical [sic] unit, the main frame, the computer, etc. but we shall refer to it, usually, as the central processing unit." A similar definition appeared in Radio Electronics (June 1957, p37): "These arithmetic operations are performed in what is called the arithmetic unit of the machine, also sometimes referred to as the 'main frame.'" The US Department of Agriculture's Glossary of ADP Terminology (1960) uses the definition: "MAIN FRAME - The central processor of the computer system. It contains the main memory, arithmetic unit and special register groups." I'll mention that "special register groups" is nonsense that was repeated for years.18 This definition was reused and extended in the government's Automatic Data Processing Glossary, published in 1962 "for use as an authoritative reference by all officials and employees of the executive branch of the Government" (below). This definition was reused in many other places, notably the Oxford English Dictionary.19

Definition from Bureau of the Budget: frame, main, (1) the central processor of the computer system. It contains the main storage, arithmetic unit and special register groups. Synonymous with (CPU) and (central processing unit). (2) All that portion of a computer exclusive of the input, output, peripheral and in some instances, storage units.

By the early 1980s, defining a mainframe as the CPU had become obsolete. IBM stated that "mainframe" was a deprecated term for "processing unit" in the Vocabulary for Data Processing, Telecommunications, and Office Systems (1981); the American National Dictionary for Information Processing Systems (1982) was similar. Computers and Business Information Processing (1983) bluntly stated: "According to the official definition, 'mainframe' and 'CPU' are synonyms. Nobody uses the word mainframe that way."

Diagram from Guide for auditing automatic data processing systems (1961).

Contemporaneous sources show the same CPU-centric usage: Datamation (1967) describes I/O devices transferring data "independent of the main frame", along with other sorts of off-line I/O, while Office Equipment & Methods (1967) advised: "By putting your data on magnetic tape and feeding it to your computer in this pre-formatted fashion, you increase your data input rate so dramatically that you may effect main frame time savings as high as 50%." (Data Processing Magazine made the same point in 1966.)

Equating the mainframe and the CPU led to a semantic conflict in the 1970s, when the CPU became a microprocessor chip rather than a large box. For the most part, this was resolved by breaking apart the definitions of "mainframe" and "CPU", with the mainframe being the computer or class of computers, while the CPU became the processor chip. However, some non-American usages resolved the conflict by using "CPU" to refer to the box/case/tower of a PC. (See discussions at https://news.ycombinator.com/item?id=21336515 and https://superuser.com/questions/1198006/is-it-correct-to-say-that-main-memory-ram-is-a-part-of-cpu.)

Mainframe vs. peripherals

Rather than defining the mainframe as the CPU, some dictionaries defined the mainframe in opposition to the "peripherals", the computer's I/O devices.
The two definitions are essentially the same, but have a different focus.20 One example is the IFIP-ICC Vocabulary of Information Processing (1966) which defined "central processor" and "main frame" as "that part of an automatic data processing system which is not considered as peripheral equipment." Computer Dictionary (1982) had the definition "main frame—The fundamental portion of a computer, i.e. the portion that contains the CPU and control elements of a computer system, as contrasted with peripheral or remote devices usually of an input-output or memory nature." One reason for this definition was that computer usage was billed for mainframe time, while other tasks such as printing results could save money by taking place directly on the peripherals without using the mainframe itself.21 A second reason was that the mainframe vs. peripheral split mirrored the composition of the computer industry, especially in the late 1960s and 1970s. Computer systems were built by a handful of companies, led by IBM. Compatible I/O devices and memory were built by many other companies that could sell them at a lower cost than IBM.22 Publications about the computer industry needed convenient terms to describe these two industry sectors, and they often used "mainframe manufacturers" and "peripheral manufacturers."

Main Frame or Mainframe?

An interesting linguistic shift is from "main frame" as two independent words to a compound word: either hyphenated "main-frame" or the single word "mainframe." This indicates the change from "main frame" being a type of frame to "mainframe" being a new concept. The earliest instance of hyphenated "main-frame" that I found was from 1959 in IBM Information Retrieval Systems Conference. "Mainframe" as a single, non-hyphenated word appears the same year in Datamation, mentioning the mainframe of the NEAC2201 computer. In 1962, the IBM 7090 Installation Instructions refer to a "Mainframe Diag[nostic] and Reliability Program." (Curiously, the document also uses "main frame" as two words in several places.) The 1962 book Information Retrieval Management discusses how much computer time document queries can take: "A run of 100 or more machine questions may require two to five minutes of mainframe time." This shows that by 1962, "main frame" had semantically shifted to a new word, "mainframe."

The rise of the minicomputer and how the "mainframe" became a class of computers

So far, I've shown how "mainframe" started as a physical frame in the computer, and then was generalized to describe the CPU. But how did "mainframe" change from being part of a computer to being a class of computers? This was a gradual process, largely happening in the mid-1970s as the rise of the minicomputer and microcomputer created a need for a word to describe large computers. Although microcomputers, minicomputers, and mainframes are now viewed as distinct categories, this was not the case at first. For instance, a 1966 computer buyer's guide lumps together computers ranging from desk-sized to 70,000 square feet.23 Around 1968, however, the term "minicomputer" was created to describe small computers.
The story is that the head of DEC in England created the term, inspired by the miniskirt and the Mini Minor car.24 While minicomputers had a specific name, larger computers did not.25 Gradually in the 1970s "mainframe" came to be a separate category, distinct from "minicomputer."2627 An early example is Datamation (1970), describing systems of various sizes: "mainframe, minicomputer, data logger, converters, readers and sorters, terminals." The influential business report EDP first split mainframes from minicomputers in 1972.28 The line between minicomputers and mainframes was controversial, with articles such as Distinction Helpful for Minis, Mainframes and Micro, Mini, or Mainframe? Confusion persists (1981) attempting to clarify the issue.29 With the development of the microprocessor, computers became categorized as mainframes, minicomputers, or microcomputers. For instance, a 1975 Computerworld article discussed how the minicomputer competes against the microcomputer and mainframes. Adam Osborne's An Introduction to Microcomputers (1977) described computers as divided into mainframes, minicomputers, and microcomputers by price, power, and size. He pointed out the large overlap between categories and avoided specific definitions, stating that "A minicomputer is a minicomputer, and a mainframe is a mainframe, because that is what the manufacturer calls it."32 In the late 1980s, computer industry dictionaries started defining a mainframe as a large computer, often explicitly contrasted with a minicomputer or microcomputer. By 1990, they mentioned the networked aspects of mainframes.33

IBM embraces the mainframe label

Even though IBM is almost synonymous with "mainframe" now, IBM avoided marketing use of the word for many years, preferring terms such as "general-purpose computer."35 IBM's book Planning a Computer System (1962) repeatedly referred to "general-purpose computers" and "large-scale computers", but never used the word "mainframe."34 The announcement of the revolutionary System/360 (1964) didn't use the word "mainframe"; it was called a general-purpose computer system. The announcement of the System/370 (1970) discussed "medium- and large-scale systems." The System/32 introduction (1977) said, "System/32 is a general purpose computer..." The 1982 announcement of the 3084, IBM's most powerful computer at the time, called it a "large scale processor," not a mainframe. IBM started using "mainframe" as a marketing term in the mid-1980s. For example, the 3270 PC Guide (1986) refers to "IBM mainframe computers." An IBM 9370 Information System brochure (c. 1986) says the system was "designed to provide mainframe power." IBM's brochure for the 3090 processor (1987) called them "advanced general-purpose computers" but also mentioned "mainframe computers." A System 390 brochure (c. 1990) discussed "entry into the mainframe class." The 1990 announcement of the ES/9000 called them "the most powerful mainframe systems the company has ever offered."

The IBM System/390: "The excellent balance between price and performance makes entry into the mainframe class an attractive proposition." IBM System/390 Brochure

By 2000, IBM had enthusiastically adopted the mainframe label: the z900 announcement used the word "mainframe" six times, calling it the "reinvented mainframe." In 2003, IBM announced "The Mainframe Charter", describing IBM's "mainframe values" and "mainframe strategy." Now, IBM has retroactively applied the name "mainframe" to their large computers going back to 1959 (link), (link).
Mainframes and the general public

While "mainframe" was a relatively obscure computer term for many years, it became widespread in the 1980s. The Google Ngram graph below shows the popularity of "microcomputer", "minicomputer", and "mainframe" in books.36 The terms became popular during the late 1970s and 1980s. The popularity of "minicomputer" and "microcomputer" roughly mirrored the development of these classes of computers. Unexpectedly, even though mainframes were the earliest computers, the term "mainframe" peaked later than the other types of computers.

N-gram graph from Google Books Ngram Viewer.

Dictionary definitions

I studied many old dictionaries to see when the word "mainframe" showed up and how they defined it. To summarize, "mainframe" started to appear in dictionaries in the late 1970s, first defining the mainframe in opposition to peripherals or as the CPU. In the 1980s, the definition gradually changed to the modern definition, with a mainframe distinguished as being a large, fast, and often centralized system. These definitions were roughly a decade behind industry usage, which switched to the modern meaning in the 1970s. The word didn't appear in older dictionaries, such as the Random House College Dictionary (1968) and Merriam-Webster (1974). The earliest definition I could find was in the supplement to Webster's International Dictionary (1976): "a computer and esp. the computer itself and its cabinet as distinguished from peripheral devices connected with it." Similar definitions appeared in Webster's New Collegiate Dictionary (1976, 1980). A CPU-based definition appeared in Random House College Dictionary (1980): "the device within a computer which contains the central control and arithmetic units, responsible for the essential control and computational functions. Also called central processing unit." The Random House Dictionary (1978, 1988 printing) was similar. The American Heritage Dictionary (1982, 1985) combined the CPU and peripheral approaches: "mainframe. The central processing unit of a computer exclusive of peripheral and remote devices." The modern definition as a large computer appeared alongside the old definition in Webster's Ninth New Collegiate Dictionary (1983): "mainframe (1964): a computer with its cabinet and internal circuits; also: a large fast computer that can handle multiple tasks concurrently." Only the modern definition appears in The New Merriam-Webster Dictionary (1989): "large fast computer", while Webster's Unabridged Dictionary of the English Language (1989) had: "mainframe. a large high-speed computer with greater storage capacity than a minicomputer, often serving as the central unit in a system of smaller computers. [MAIN + FRAME]." Random House Webster's College Dictionary (1991) and Random House College Dictionary (2001) had similar definitions. The Oxford English Dictionary is the principal historical dictionary, so it is interesting to see its view. The 1989 OED gave historical definitions as well as defining mainframe as "any large or general-purpose computer, esp. one supporting numerous peripherals or subordinate computers." It has seven historical examples from 1964 to 1984; the earliest is the 1964 Honeywell Glossary. It quotes a 1970 Dictionary of Computers as saying that the word "Originally implied the main framework of a central processing unit on which the arithmetic unit and associated logic circuits were mounted, but now used colloquially to refer to the central processor itself."
The OED also cited a Hewlett-Packard ad from 1974 that used the word "mainframe", but I consider this a mistake as the usage is completely different.15

Encyclopedias

A look at encyclopedias shows that the word "mainframe" started appearing in discussions of computers in the early 1980s, later than in dictionaries. At the beginning of the 1980s, many encyclopedias focused on large computers, without using the word "mainframe", for instance, The Concise Encyclopedia of the Sciences (1980) and World Book (1980). The word "mainframe" started to appear in supplements such as Britannica Book of the Year (1980) and World Book Year Book (1981), at the same time as they started discussing microcomputers. Soon encyclopedias were using the word "mainframe", for example, Funk & Wagnalls Encyclopedia (1983), Encyclopedia Americana (1983), and World Book (1984). By 1986, even the Doubleday Children's Almanac showed a "mainframe computer."

Newspapers

I examined old newspapers to track the usage of the word "mainframe." The graph below shows the usage of "mainframe" in newspapers. The curve shows a rise in popularity through the 1980s and a steep drop in the late 1990s. The newspaper graph roughly matches the book graph above, although newspapers show a much steeper drop in the late 1990s. Perhaps mainframes aren't in the news anymore, but people still write books about them.

Newspaper usage of "mainframe." Graph from newspapers.com from 1975 to 2010 shows usage started growing in 1978, picked up in 1984, and peaked in 1989 and 1997, with a large drop in 2001 and after (y2k?).

The first newspaper appearances were in classified ads seeking employees, for instance, a 1960 ad in the San Francisco Examiner for people "to monitor and control main-frame operations of electronic computers...and to operate peripheral equipment..." and a (sexist) 1966 ad in the Philadelphia Inquirer for "men with Digital Computer Bkgrnd [sic] (Peripheral or Mainframe)."37 By 1970, "mainframe" started to appear in news articles, for example, "The computer can't work without the mainframe unit." By 1971, the usage increased with phrases such as "mainframe central processor" and "'main-frame' computer manufacturers". 1972 had usages such as "the mainframe or central processing unit is the heart of any computer, and does all the calculations". A 1975 article explained "'Mainframe' is the industry's word for the computer itself, as opposed to associated items such as printers, which are referred to as 'peripherals.'" By 1980, minicomputers and microcomputers were appearing: "All hardware categories-mainframes, minicomputers, microcomputers, and terminals" and "The mainframe and the minis are interconnected." By 1985, the mainframe was a type of computer, not just the CPU: "These days it's tough to even define 'mainframe'. One definition is that it has for its electronic brain a central processor unit (CPU) that can handle at least 32 bits of information at once. ... A better distinction is that mainframes have numerous processors so they can work on several jobs at once." Articles also discussed "the micro's challenge to the mainframe" and asked, "buy a mainframe, rather than a mini?"
By 1990, descriptions of mainframes became florid: "huge machines laboring away in glass-walled rooms", "the big burner which carries the whole computing load for an organization", "behemoth data crunchers", "the room-size machines that dominated computing until the 1980s", "the giant workhorses that form the nucleus of many data-processing centers", "But it is not raw central-processing-power that makes a mainframe a mainframe. Mainframe computers command their much higher prices because they have much more sophisticated input/output systems."

Conclusion

After extensive searches through archival documents, I found usages of the term "main frame" dating back to 1952, much earlier than previously reported. In particular, the introduction of frames to package the IBM 701 computer led to the use of the word "main frame" for that computer and later ones. The term went through various shades of meaning and remained fairly obscure for many years. In the mid-1970s, the term started describing a large computer, essentially its modern meaning. In the 1980s, the term escaped the computer industry and appeared in dictionaries, encyclopedias, and newspapers. After peaking in the 1990s, the term declined in usage (tracking the decline in mainframe computers), but the term and the mainframe computer both survive. Two factors drove the popularity of the word "mainframe" in the 1980s with its current meaning of a large computer. First, the terms "microcomputer" and "minicomputer" led to linguistic pressure for a parallel term for large computers. For instance, the business press needed a word to describe IBM and other large computer manufacturers. While "server" is the modern term, "mainframe" easily filled the role back then and was nicely alliterative with "microcomputer" and "minicomputer."38 Second, up until the 1980s, the prototype meaning for "computer" was a large mainframe, typically IBM.39 But as millions of home computers were sold in the early 1980s, the prototypical "computer" shifted to smaller machines. This left a need for a term for large computers, and "mainframe" filled that need. In other words, if you were talking about a large computer in the 1970s, you could say "computer" and people would assume you meant a mainframe. But if you said "computer" in the 1980s, you needed to clarify if it was a large computer. The word "mainframe" is almost 75 years old and both the computer and the word have gone through extensive changes in this time. The "death of the mainframe" has been proclaimed for well over 30 years but mainframes are still hanging on. Who knows what meaning "mainframe" will have in another 75 years?

Follow me on Bluesky (@righto.com) or RSS. (I'm no longer on Twitter.) Thanks to the Computer History Museum and archivist Sara Lott for access to many documents.

Notes and References

The Computer History Museum states: "Why are they called “Mainframes”? Nobody knows for sure. There was no mainframe “inventor” who coined the term. Probably “main frame” originally referred to the frames (designed for telephone switches) holding processor circuits and main memory, separate from racks or cabinets holding other components. Over time, main frame became mainframe and came to mean 'big computer.'" (Based on my research, I don't think telephone switches have any connection to computer mainframes.) Several sources explain that the mainframe is named after the frame used to construct the computer.
The Jargon File has a long discussion, stating that the term "originally referring to the cabinet containing the central processor unit or ‘main frame’." Ken Uston's Illustrated Guide to the IBM PC (1984) has the definition "MAIN FRAME A large, high-capacity computer, so named because the CPU of this kind of computer used to be mounted on a frame." IBM states that mainframe "Originally referred to the central processing unit of a large computer, which occupied the largest or central frame (rack)." The Microsoft Computer Dictionary (2002) states that the name mainframe "is derived from 'main frame', the cabinet originally used to house the processing unit of such computers." Some discussions of the origin of the word "mainframe" are here, here, here, here, and here. The phrase "main frame" in non-computer contexts has a very old but irrelevant history, describing many things that have a frame. For example, it appears in thousands of patents from the 1800s, including drills, saws, a meat cutter, a cider mill, printing presses, and corn planters. This shows that it was natural to use the phrase "main frame" when describing something constructed from frames. Telephony uses a Main distribution frame or "main frame" for wiring, going back to 1902. Some people claim that the computer use of "mainframe" is related to the telephony use, but I don't think they are related. In particular, a telephone main distribution frame looks nothing like a computer mainframe. Moreover, the computer use and the telephony use developed separately; if the computer use started in, say, Bell Labs, a connection would be more plausible. IBM patents with "main frame" include a scale (1922), a card sorter (1927), a card duplicator (1929), and a card-based accounting machine (1930). IBM's incidental uses of "main frame" are probably unrelated to modern usage, but they are a reminder that punch card data processing started decades before the modern computer. ↩ It is unclear why the IBM 701 installation manual is dated August 27, 1952 but the drawing is dated 1953. I assume the drawing was updated after the manual was originally produced. ↩ This footnote will survey the construction techniques of some early computers; the key point is that building a computer on frames was not an obvious technique. ENIAC (1945), the famous early vacuum tube computer, was constructed from 40 panels forming three walls filling a room (ref, ref). EDVAC (1949) was built from large cabinets or panels (ref) while ORDVAC and CLADIC (1949) were built on racks (ref). One of the first commercial computers, UNIVAC 1 (1951), had a "Central Computer" organized as bays, divided into three sections, with tube "chassis" plugged in (ref ). The Raytheon computer (1951) and Moore School Automatic Computer (1952) (ref) were built from racks. The MONROBOT VI (1955) was described as constructed from the "conventional rack-panel-cabinet form" (ref). ↩ The size and construction of early computers often made it difficult to install or move them. The early computer ENIAC required 9 months to move from Philadelphia to the Aberdeen Proving Ground. For this move, the wall of the Moore School in Philadelphia had to be partially demolished so ENIAC's main panels could be removed. In 1959, moving the SWAC computer required disassembly of the computer and removing one wall of the building (ref). When moving the early computer JOHNNIAC to a different site, the builders discovered the computer was too big for the elevator. 
They had to raise the computer up the elevator shaft without the elevator (ref). This illustrates the benefits of building a computer from moveable frames. ↩ The IBM 701's main frame was called the Electronic Analytical Control Unit in external documentation. ↩ The 701 installation manual (1952) has a frame arrangement diagram showing the dimensions of the various frames, along with a drawing of the main frame, and power usage of the various frames. Service documentation (1953) refers to "main frame adjustments" (page 74). The 700 Series Data Processing Systems Component Circuits document (1955-1959) lists various types of frames in its abbreviation list (below).

Abbreviations used in IBM drawings include MF for main frame. Also note CF for core frame, and DF for drum frame. From 700 Series Data Processing Systems Component Circuits (1955-1959).

When repairing an IBM 701, it was important to know which frame held which components, so "main frame" appeared throughout the engineering documents. For instance, in the schematics, each module was labeled with its location; "MF" stands for "main frame."

Detail of a 701 schematic diagram. "MF" stands for "main frame." This diagram shows part of a pluggable tube module (type 2891) in mainframe panel 3 (MF3) section J, column 29. The blocks shown are an AND gate, OR gate, and Cathode Follower (buffer). From System Drawings 1.04.1.

The "main frame" terminology was used in discussions with customers. For example, notes from a meeting with IBM (April 8, 1952) mention "E. S. [Electrostatic] Memory 15 feet from main frame" and list "main frame" as one of the seven items obtained for the $15,000/month rental cost. ↩ For more information on how the IBM 701 was designed to fit on elevators and through doorways, see Building IBM: Shaping an Industry and Technology page 170, The Interface: IBM and the Transformation of Corporate Design page 69. This is also mentioned in "Engineering Description of the IBM Type 701 Computer", Proceedings of the IRE Oct 1953, page 1285. ↩ Many early systems used "central computer" to describe the main part of the computer, perhaps more commonly than "main frame." An early example is the "central computer" of the Elecom 125 (1954). The Digital Computer Newsletter (Apr 1955) used "central computer" several times to describe the processor of SEAC. The 1961 BRL report shows "central computer" being used by Univac II, Univac 1107, Univac File 0, DYSEAC and RCA Series 300. The MIT TX-2 Technical Manual (1961) uses "central computer" very frequently. The NAREC glossary (1962) defined "central computer. That part of a computer housed in the main frame." ↩ This footnote lists some other early computers that used the term "main frame." The October 1956 Digital Computer Newsletter mentions the "main frame" of the IBM NORC. Digital Computer Newsletter (Jan 1959) discusses using a RAMAC disk drive to reduce "main frame processing time." This document also mentions the IBM 709 "main frame." The IBM 704 documentation (1958) says "Each DC voltage is distributed to the main frame..." (IBM 736 reference manual) and "Check the air filters in each main frame unit and replace when dirty." (704 Central Processing Unit). The July 1962 Digital Computer Newsletter discusses the LEO III computer: "It has been built on the modular principle with the main frame, individual blocks of storage, and input and output channels all physically separate."
The article also mentions that the new computer is more compact with "a reduction of two cabinets for housing the main frame." The IBM 7040 (1964) and IBM 7090 (1962) were constructed from multiple frames, including the processing unit called the "main frame."11 Machines in IBM's System/360 line (1964) were built from frames; some models had a main frame, power frame, wall frame, and so forth, while other models simply numbered the frames sequentially.12 ↩ The 1952 JOHNNIAC progress report is quoted in The History of the JOHNNIAC. This memorandum was dated August 8, 1952, so it is the earliest citation that I found. The June 1953 memorandum also used the term, stating, "The main frame is complete." ↩ A detailed description of IBM's frame-based computer packaging is in Standard Module System Component Circuits pages 6-9. This describes the SMS-based packaging used in the IBM 709x computers, the IBM 1401, and related systems as of 1960. ↩ IBM System/360 computers could have many frames, so they were usually given sequential numbers. The Model 85, for instance, had 12 frames for the processor and four megabytes of memory in 18 frames (at over 1000 pounds each). Some of the frames had descriptive names, though. The Model 40 had a main frame (CPU main frame, CPU frame), a main storage logic frame, a power supply frame, and a wall frame. The Model 50 had a CPU frame, power frame, and main storage frame. The Model 75 had a main frame (consisting of multiple physical frames), storage frames, channel frames, central processing frames, and a maintenance console frame. The compact Model 30 consisted of a single frame, so the documentation refers to the "frame", not the "main frame." For more information on frames in the System/360, see 360 Physical Planning. The Architecture of the IBM System/360 paper refers to the "main-frame hardware." ↩ A few more examples that discuss the minicomputer's mainframe, its physical box: A 1970 article discusses the mainframe of a minicomputer (as opposed to the peripherals) and contrasts minicomputers with large scale computers. A 1971 article on minicomputers discusses "minicomputer mainframes." Computerworld (Jan 28, 1970, p59) discusses minicomputer purchases: "The actual mainframe is not the major cost of the system to the user." Modern Data (1973) mentions minicomputer mainframes several times. ↩ DEC documents refer to the PDP-11 minicomputer as a mainframe. The PDP-11 Conventions manual (1970) defined: "Processor: A unit of a computing system that includes the circuits controlling the interpretation and execution of instructions. The processor does not include the Unibus, core memory, interface, or peripheral devices. The term 'main frame' is sometimes used but this term refers to all components (processor, memory, power supply) in the basic mounting box." In 1976, DEC published the PDP-11 Mainframe Troubleshooting Guide. The PDP-11 mainframe is also mentioned in Computerworld (1977). ↩ Test equipment manufacturers started using the term "main frame" (and later "mainframe") around 1962, to describe an oscilloscope or other test equipment that would accept plug-in modules. I suspect this is related to the use of "mainframe" to describe a computer's box, but it could be independent. Hewlett-Packard even used the term to describe a solderless breadboard, the 5035 Logic Lab. The Oxford English Dictionary (1989) used HP's 1974 ad for the Logic Lab as its earliest citation of mainframe as a single word. 
It appears that the OED confused this use of "mainframe" with the computer use. The OED's citation reads: "1974 Sci. Amer. Apr. 79 The laboratory station mainframe has the essentials built-in (power supply, logic state indicators and programmers, and pulse sources to provide active stimulus for the student's circuits)."

Is this a mainframe? The HP 5035A Logic Lab was a power supply and support circuitry for a solderless breadboard. HP's ads referred to this as a "laboratory station mainframe." ↩↩

In the 1980s, the use of "mainframe" to describe the box holding a microcomputer started to conflict with "mainframe" as a large computer. For example, Radio Electronics (October 1982) started using the short-lived term "micro-mainframe" instead of "mainframe" for a microcomputer's enclosure. By 1985, Byte magazine had largely switched to the modern usage of "mainframe." But even as late as 1987, a review of the Apple IIGS described one of the system's components as the '"mainframe" (i.e. the actual system box)'. ↩ Definitions of "central processing unit" disagreed as to whether storage was part of the CPU, part of the main frame, or something separate. This was largely a consequence of the physical construction of early computers. Smaller computers had memory in the same frame as the processor, while larger computers often had separate storage frames for memory. Other computers had some memory with the processor and some external. Thus, the "main frame" might or might not contain memory, and this ambiguity carried over to definitions of CPU. (In modern usage, the CPU consists of the arithmetic/logic unit (ALU) and control circuitry, but excludes memory.) ↩ Many definitions of mainframe or CPU mention "special register groups", an obscure feature specific to the Honeywell 800 computer (1959). (Processors have registers, special registers are common, and some processors have register groups, but only the Honeywell 800 had "special register groups.") However, computer dictionaries kept using this phrase for decades, even though it doesn't make sense for other computers. I wrote a blog post about special register groups here. ↩ This footnote provides more examples of "mainframe" being defined as the CPU. The Data Processing Equipment Encyclopedia (1961) had a similar definition: "Main Frame: The main part of the computer, i.e. the arithmetic or logic unit; the central processing unit." The 1967 IBM 360 operator's guide defined: "The main frame - the central processing unit and main storage." The Department of the Navy's ADP Glossary (1970): "Central processing unit: A unit of a computer that includes the circuits controlling the interpretation and execution of instructions. Synonymous with main frame." This was a popular definition, originally from the ISO, used by IBM (1979) among others. Funk & Wagnalls Dictionary of Data Processing Terms (1970) defined: "main frame: The basic or essential portion of an assembly of hardware, in particular, the central processing unit of a computer." The American National Standard Vocabulary for Information Processing (1970) defined: "central processing unit: A unit of a computer that includes the circuits controlling the interpretation and execution of instructions. Synonymous with main frame." ↩ Both the mainframe vs. peripheral definition and the mainframe as CPU definition made it unclear exactly what components of the computer were included in the mainframe.
It's clear that the arithmetic-logic unit and the processor control circuitry were included, while I/O devices were excluded, but some components such as memory were in a gray area. It's also unclear if the power supply and I/O interfaces (channels) are part of the mainframe. These distinctions were ignored in almost all of the uses of "mainframe" that I saw. An unusual definition in a Goddard Space Center document (1965, below) partitioned equipment into the "main frame" (the electronic equipment), "peripheral equipment" (electromechanical components such as the printer and tape), and "middle ground equipment" (the I/O interfaces). The "middle ground" terminology here appears to be unique. Also note that computers are partitioned into "super speed", "large-scale", "medium-scale", and "small-scale."

Definitions from Automatic Data Processing Equipment, Goddard Space Center, 1965. "Main frame" was defined as "The central processing unit of a system including the hi-speed core storage memory bank. (This is the electronic element.)" ↩

This footnote gives some examples of using peripherals to save the cost of mainframe time. IBM 650 documentation (1956) describes how "Data written on tape by the 650 can be processed by the main frame of the 700 series systems." Univac II Marketing Material (1957) discusses various ways of reducing "main frame time" by, for instance, printing from tape off-line. The USAF Guide for auditing automatic data processing systems (1961) discusses how these "off line" operations make the most efficient use of "the more expensive main frame time." ↩ Peripheral manufacturers were companies that built tape drives, printers, and other devices that could be connected to a mainframe built by IBM or another company. The basis for the peripheral industry was antitrust action against IBM that led to the 1956 Consent Decree. Among other things, the consent decree forced IBM to provide reasonable patent licensing, which allowed other firms to build "plug-compatible" peripherals. The introduction of the System/360 in 1964 produced a large market for peripherals and IBM's large profit margins left plenty of room for other companies. ↩ Computers and Automation, March 1965, categorized computers into five classes, from "Teeny systems" (such as the IBM 360/20) renting for $2000/month, through Small, Medium, and Large systems, up to "Family or Economy Size Systems" (such as the IBM 360/92) renting for $75,000 per month. ↩ The term "minicomputer" was supposedly invented by John Leng, head of DEC's England operations. In the 1960s, he sent back a sales report: "Here is the latest minicomputer activity in the land of miniskirts as I drive around in my Mini Minor", which led to the term becoming popular at DEC. This story is described in The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation (1988). I'd trust the story more if I could find a reference that wasn't 20 years after the fact. ↩ For instance, Computers and Automation (1971) discussed the role of the minicomputer as compared to "larger computers." A 1975 minicomputer report compared minicomputers to their "general-purpose cousins." ↩ This footnote provides more on the split between minicomputers and mainframes. In 1971, Modern Data Products, Systems, Services contained "... will offer mainframe, minicomputer, and peripheral manufacturers a design, manufacturing, and production facility...." Standard & Poor's Industry Surveys (1972) mentions "mainframes, minicomputers, and IBM-compatible peripherals."
Computerworld (1975) refers to "mainframe and minicomputer systems manufacturers." The 1974 textbook "Information Systems: Technology, Economics, Applications" couldn't decide if mainframes were a part of the computer or a type of computer separate from minicomputers, saying: "Computer mainframes include the CPU and main memory, and in some usages of the term, the controllers, channels, and secondary storage and I/O devices such as tape drives, disks, terminals, card readers, printers, and so forth. However, the equipment for storage and I/O are usually called peripheral devices. Computer mainframes are usually thought of as medium to large scale, rather than mini-computers." Studying U.S. Industrial Outlook reports provides another perspective over time. U.S. Industrial Outlook 1969 divides computers into small, medium-size, and large-scale. Mainframe manufacturers are in opposition to peripheral manufacturers. The same mainframe vs. peripherals opposition appears in U.S. Industrial Outlook 1970 and U.S. Industrial Outlook 1971. The 1971 report also discusses minicomputer manufacturers entering the "maxicomputer market."30 The 1973 edition mentions "large computers, minicomputers, and peripherals." U.S. Industrial Outlook 1976 states, "The distinction between mainframe computers, minis, micros, and also accounting machines and calculators should merge into a spectrum." By 1977, the market was separated into "general purpose mainframe computers", "minicomputers and small business computers" and "microprocessors." Family Computing Magazine (1984) had a "Dictionary of Computer Terms Made Simple." It explained that "A Digital computer is either a "mainframe", a "mini", or a "micro." Forty years ago, large mainframes were the only size that a computer could be. They are still the largest size, and can handle more than 100,000,000 instructions per second. PER SECOND! [...] Mainframes are also called general-purpose computers." ↩ In 1974, Congress held antitrust hearings into IBM. The thousand-page report provides a detailed snapshot of the meanings of "mainframe" at the time. For instance, a market analysis report from IDC illustrates the difficulty of defining mainframes and minicomputers in this era (p4952). The "Mainframe Manufacturers" section splits the market into "general-purpose computers" and "dedicated application computers" including "all the so-called minicomputers." Although this section discusses minicomputers, the emphasis is on the manufacturers of traditional mainframes. A second "Plug-Compatible Manufacturers" section discusses companies that manufactured only peripherals. But there's also a separate "Minicomputers" section that focuses on minicomputers (along with microcomputers "which are simply microprocessor-based minicomputers"). My interpretation of this report is that the terminology is in the process of moving from "mainframe vs. peripheral" to "mainframe vs. minicomputer." The statement from Research Shareholders Management (p5416) on the other hand discusses IBM and the five other mainframe companies; they classify minicomputer manufacturers separately (p5425). Page 5426 mentions "mainframes, small business computers, industrial minicomputers, terminals, communications equipment, and minicomputers." Economist Ralph Miller mentions the central processing unit "(the so-called 'mainframe')" (p5621) and then contrasts independent peripheral manufacturers with mainframe manufacturers (p5622).
The Computer Industry Alliance refers to mainframes and peripherals in multiple places, and "shifting the location of a controller from peripheral to mainframe", as well as "the central processing unit (mainframe)" p5099. On page 5290, "IBM on trial: Monopoly tends to corrupt", from Harper's (May 1974), mentions peripherals compatible with "IBM mainframe units—or, as they are called, central processing computers." ↩ The influential business newsletter EDP provides an interesting view on the struggle to separate the minicomputer market from larger computers. Through 1968, they included minicomputers in the "general-purpose computer" category. But in 1969, they split "general-purpose computers" into "Group A, General Purpose Digital Computers" and "Group B, Dedicated Application Digital Computers." These categories roughly corresponded to larger computers and minicomputers, on the (dubious) assumption that minicomputers were used for a "dedicated application." The important thing to note is that in 1969 they did not use the term "mainframe" for the first category, even though with the modern definition it's the obvious term to use. At the time, EDP used "mainframe manufacturer" or "mainframer"31 to refer to companies that manufactured computers (including minicomputers), as opposed to manufacturers of peripherals. In 1972, EDP first mentioned mainframes and minicomputers as distinct types. In 1973, "microcomputer" was added to the categories. As the 1970s progressed, the separation between minicomputers and mainframes became common. However, the transition was not completely smooth; 1973 included a reference to "mainframe shipments (including minicomputers)." To be specific, the EDP Industry Report (Nov. 28, 1969) gave the following definitions of the two groups of computers:

Group A—General Purpose Digital Computers: These comprise the bulk of the computers that have been listed in the Census previously. They are character or byte oriented except in the case of the large-scale scientific machines, which have 36, 48, or 60-bit words. The predominant portion (60% to 80%) of these computers is rented, usually for $2,000 a month or more. Higher level languages such as Fortran, Cobol, or PL/1 are the primary means by which users program these computers.

Group B—Dedicated Application Digital Computers: This group of computers includes the "mini's" (purchase price below $25,000), the "midi's" ($25,000 to $50,000), and certain larger systems usually designed or used for one dedicated application such as process control, data acquisition, etc. The characteristics of this group are that the computers are usually word oriented (8, 12, 16, or 24-bits per word), the predominant number (70% to 100%) are purchased, and assembly language (at times Fortran) is the predominant means of programming. This type of computer is often sold to an original equipment manufacturer (OEM) for further system integration and resale to the final user.

These definitions strike me as rather arbitrary. ↩ In 1981 Computerworld had articles trying to clarify the distinctions between microcomputers, minicomputers, superminicomputers, and mainframes, as the systems started to overlap. One article, Distinction Helpful for Minis, Mainframes said that minicomputers were generally interactive, while mainframes made good batch machines and network hosts. Microcomputers had up to 512 KB of memory, minis were 16-bit machines with 512 KB to 4 MB of memory, costing up to $100,000.
Superminis were 16- to 32-bit machines with 4 MB to 8 MB of memory, costing up to $200,000 but with less memory bandwidth than mainframes. Finally, mainframes were 32-bit machines with more than 8 MB of memory, costing over $200,000. Another article Micro, Mini, or Mainframe? Confusion persists described a microcomputer as using an 8-bit architecture and having fewer peripherals, while a minicomputer has a 16-bit architecture and 48 KB to 1 MB of memory. ↩ The miniskirt in the mid-1960s was shortly followed by the midiskirt and maxiskirt. These terms led to the parallel construction of the terms minicomputer, midicomputer, and maxicomputer. The New York Times had a long article Maxi Computers Face Mini Conflict (April 5, 1970) explicitly making the parallel: "Mini vs. Maxi, the reigning issue in the glamorous world of fashion, is strangely enough also a major point of contention in the definitely unsexy realm of computers." Although midicomputer and maxicomputer terminology didn't catch on the way minicomputer did, they still had significant use (example, midicomputer examples, maxicomputer examples). The miniskirt/minicomputer parallel was done with varying degrees of sexism. One example is Electronic Design News (1969): "A minicomputer. Like the miniskirt, the small general-purpose computer presents the same basic commodity in a more appealing way." ↩ Linguistically, one indication that a new word has become integrated in the language is when it can be extended to form additional new words. One example is the formation of "mainframers", referring to companies that build mainframes. This word was moderately popular in the 1970s to 1990s. It was even used by the Department of Justice in their 1975 action against IBM where they described the companies in the systems market as the "mainframe companies" or "mainframers." The word is still used today, but usually refers to people with mainframe skills. Other linguistic extensions of "mainframe" include mainframing, unmainframe, mainframed, nonmainframe, and postmainframe. ↩ More examples of the split between microcomputers and mainframes: Softwide Magazine (1978) describes "BASIC versions for micro, mini and mainframe computers." MSC, a disk system manufacturer, had drives "used with many microcomputer, minicomputer, and mainframe processor types" (1980). ↩ Some examples of computer dictionaries referring to mainframes as a size category: Illustrated Dictionary of Microcomputer Terminology (1978) defines "mainframe" as "(1) The heart of a computer system, which includes the CPU and ALU. (2) A large computer, as opposed to a mini or micro." A Dictionary of Minicomputing and Microcomputing (1982) includes the definition of "mainframe" as "A high-speed computer that is larger, faster, and more expensive than the high-end minicomputers. The boundary between a small mainframe and a large mini is fuzzy indeed." The National Bureau of Standards Future Information Technology (1984) defined: "Mainframe is a term used to designate a medium and large scale CPU." The New American Computer Dictionary (1985) defined "mainframe" as "(1) Specifically, the rack(s) holding the central processing unit and the memory of a large computer. (2) More generally, any large computer. 'We have two mainframes and several minis.'" The 1990 ANSI Dictionary for Information Systems (ANSI X3.172-1990) defined: mainframe. A large computer, usually one to which other computers are connected in order to share its resources and computing power. 
Microsoft Press Computer Dictionary (1991) defined "mainframe computer" as "A high-level computer designed for the most intensive computational tasks. Mainframe computers are often shared by multiple users connected to the computer via terminals." ISO 2382 (1993) defines a mainframe as "a computer, usually in a computer center, with extensive capabilities and resources to which other computers may be connected so that they can share facilities." The Microsoft Computer Dictionary (2002) had an amusingly critical definition of mainframe: "A type of large computer system (in the past often water-cooled), the primary data processing resource for many large businesses and organizations. Some mainframe operating systems and solutions are over 40 years old and have the capacity to store year values only as two digits." ↩ IBM's 1962 book Planning a Computer System (1962) describes how the Stretch computer's circuitry was assembled into frames, with the CPU consisting of 18 frames. The picture below shows how a "frame" was, in fact, constructed from a metal frame. In the Stretch computer, the circuitry (left) could be rolled out of the frame (right)  ↩ The term "general-purpose computer" is probably worthy of investigation since it was used in a variety of ways. It is one of those phrases that seems obvious until you think about it more closely. On the one hand, a computer such as the Apollo Guidance Computer can be considered general purpose because it runs a variety of programs, even though the computer was designed for one specific mission. On the other hand, minicomputers were often contrasted with "general-purpose computers" because customers would buy a minicomputer for a specific application, unlike a mainframe which would be used for a variety of applications. ↩ The n-gram graph is from the Google Books Ngram Viewer. The curves on the graph should be taken with a grain of salt. First, the usage of words in published books is likely to lag behind "real world" usage. Second, the number of usages in the data set is small, especially at the beginning. Nonetheless, the n-gram graph generally agrees with what I've seen looking at documents directly. ↩ More examples of "mainframe" in want ads: A 1966 ad from Western Union in The Arizona Republic looking for experience "in a systems engineering capacity dealing with both mainframe and peripherals." A 1968 ad in The Minneapolis Star for an engineer with knowledge of "mainframe and peripheral hardware." A 1968 ad from SDS in The Los Angeles Times for an engineer to design "circuits for computer mainframes and peripheral equipment." A 1968 ad in Fort Lauderdale News for "Computer mainframe and peripheral logic design." A 1972 ad in The Los Angeles Times saying "Mainframe or peripheral [experience] highly desired." In most of these ads, the mainframe was in contrast to the peripherals. ↩ A related factor is the development of remote connections from a microcomputer to a mainframe in the 1980s. This led to the need for a word to describe the remote computer, rather than saying "I connected my home computer to the other computer." See the many books and articles on connecting "micro to mainframe." ↩ To see how the prototypical meaning of "computer" changed in the 1980s, I examined the "Computer" article in encyclopedias from that time. The 1980 Concise Encyclopedia of the Sciences discusses a large system with punched-card input. In 1980, the World Book article focused on mainframe systems, starting with a photo of an IBM System/360 Model 40 mainframe. 
But in the 1981 supplement and the 1984 encyclopedia, the World Book article opened with a handheld computer game, a desktop computer, and a "large-scale computer." The article described microcomputers, minicomputers, and mainframes. Funk & Wagnalls Encyclopedia (1983) was in the middle of the transition; the article focused on large computers and had photos of IBM machines, but mentioned that future growth is expected in microcomputers. By 1994, the World Book article's main focus was the personal computer, although the mainframe still had a few paragraphs and a photo. This is evidence that the prototypical meaning of "computer" underwent a dramatic shift in the early 1980s from a mainframe to a balance between small and large computers, and then to the personal computer. ↩

Discwasher SpikeMaster

Meet the Mighty SpikeMaster, Protector of Computers.

This is why people see attacks on DEI as thinly veiled racism

The tragedy in Washington D.C. this week was a horrible and shocking incident. There should and will be an investigation into what went wrong here, but every politician and official who spoke at the White House today explicitly blamed DEI programs for this crash. The message may as well

What's in a name

Guillermo posted this recently: What you name your product matters more than people give it credit. It's your first and most universal UI to the world. Designing a good name requires multi-dimensional thinking and is full of edge cases, much like designing software. I first will give credit where credit is due: I spent the first few years thinking "vercel" was phonetically interchangeable with "volcel" and therefore fairly irredeemable as a name, but I've since come around to the name a bit as being (and I do not mean this snarkily or negatively!) generically futuristic, like the name of an amoral corporation in a Philip K. Dick novel. A few folks ask every year where the name for Buttondown came from. The answer is unexciting: its killer feature was Markdown support, so I was trying to find a useful way to play off of that. "Buttondown" evokes, at least for me, the scent and touch of a well-worn OCBD, and that kind of timeless bourgeois aesthetic was what I was going for with the general branding. It was, in retrospect, a good-but-not-great name with two flaws: it's a common term, so setting Google Alerts (et al) for "buttondown" meant a lot of menswear stuff and not a lot of email stuff; and because it's a common term, the .com was an expensive purchase (see Notes on buttondown.com for more on that). We will probably never change the name. It's hard for me to imagine the ROI on a total rebrand like that ever justifying its own cost, and I have a soft spot for it even after all of these years. But all of this is to say: I don't know of any projects that have failed or succeeded because of a name. I would just try to avoid any obvious issues, and follow Seth's advice from 2003.

Join us for Arduino Day 2025: celebrating 20 years of community!

Mark your calendars for March 21-22, 2025, as we come together for a special Arduino Day to celebrate our 20th anniversary! This free, online event is open to everyone, everywhere.

Two decades of creativity and community

Over the past 20 years, we have evolved from a simple open-source hardware platform into a global community with […]
