I have an ongoing fascination with "interactive TV": a series of efforts, starting in the 1990s and continuing today, to drag the humble living room television into the world of the computer. One of the big appeals of interactive TV was adoption: the average household had a TV long before the average household had a computer. So, it seems like interactive TV services should have proliferated before personal computers, at least by the logic that many in the industry followed at the time. This wasn't untrue! In the UK, for example, Ceefax was a widespread success by the 1980s. In general, TV-based teletext systems were pretty common in Europe. In North America, they never had much of an impact---but not for lack of trying. In fact, there were multiple competing efforts at teletext in the US and Canada, and it may very well have been the sheer number of independent efforts that sunk the whole idea.

But let's start at the beginning. The BBC went live with Ceefax in 1974, the culmination of years of prototype development and test broadcasts over the BBC network. Ceefax was quickly joined by other teletext standards in Europe, and the concept enjoyed a high level of adoption. This must have caught the attention of many in the television industry on this side of the ocean, but it was Bonneville International that bit first [1]. Its premier holding, KSL-TV of Salt Lake City, had an influence larger than its name suggests: KSL was carried by an extensive repeater network and reached a large portion of the population throughout the Mountain States. Because of the wide reach of KSL and the even wider reach of the religion that relied on Bonneville for communications, Bonneville was also an early innovator in satellite distribution of television and data. These were ingredients that made for a promising teletext network, one that could quickly reach a large audience and expand to broader television networks through satellite distribution. KSL applied to the FCC for an experimental license to broadcast teletext in addition to its television signal, and received it in June of 1978.

I am finding some confusion in the historical record over whether KSL adopted the BBC's Ceefax protocol or the competing ORACLE, used in the UK by the independent broadcasters. A 1982 paper on KSL's experiment confusingly says they used "the British CEEFAX/Oracle," but then in the next sentence the author gives the first years of service for Ceefax and ORACLE the wrong way around, so I think it's safe to say that they were just generally confused. I think I know the reason why: in the late '70s, the British broadcasters were developing something called World System Teletext (WST), a new common standard based on aspects of both Ceefax and ORACLE. Although WST wasn't quite final in 1978, I believe that what KSL adopted was actually a draft of WST.

That actually hints at an interesting detail which becomes important to these proposals: in Europe, where teletext thrived, there were usually not very many TV channels. The US's highly competitive media landscape led to a proliferation of different TV networks, plus local operations besides. It was a far cry from the UK, for example, where 1982 saw the introduction of a fourth channel called, well, Channel 4. By contrast, Salt Lake City viewers with cable were picking from over a dozen channels in 1982, and that wasn't an especially crowded media market.
This difference in the industry, between a few major nationwide channels and a longer list of often local ones, had widespread ramifications for how UK and US television technology evolved. One of them is that, in the UK, space in the VBI to transmit data became a hotly contested commodity. By the '80s, obtaining a line of the VBI on any UK network to use for your new datacasting scheme involved a bidding war with your potential competitors, not unlike the way spectrum was allocated in the US. Teletext schemes were made and broken by the outcomes of these auctions. Over here, there was a long list of television channels, and on most of them only a single line of the VBI was in use for data (line 21, for closed captions). You might think this would create fertile ground for VBI-based services, but it also posed a challenge: the market was extensively fractured. You could not win a BBC or IBA VBI allocation and then have nationwide coverage; you would have to negotiate such a deal with a long list of TV stations and then likely provide your own infrastructure for injecting the signal. In short, this seems to be one of the main reasons for the huge difference in teletext adoption between Europe and North America: throughout Europe, broadcasting tended to be quite centralized, which made it difficult to get your foot in the door but very easy to reach a large customer base once you had. In the US, it was easier to get started, but you had to fight for each market area. "Critical mass" was very hard to achieve [2].

Back at KSL, $40,000 (~$200,000 today) bought a General Automation computer and Tektronix NTSC signal generator that made up the broadcast system. The computer could manage as many as 800 pages of 20x32 teletext, but KSL launched with 120. Texas Instruments assisted KSL in modifying thirty television sets with a new decoder board and a wired remote control for page selection. This setup, very similar to teletext sets in Europe, nearly doubled the price of the TV set. This likely would have become a problem later on, but for the pilot stage, KSL provided the modified sets gratis to their 30 test households.

One of the selling points of teletext in Europe was its ability to provide real-time data. Things like sports scores and stock quotations could be quickly updated in teletext, and news headlines could make it to teletext before the next TV news broadcast. Of course, collecting all that data and preparing it as teletext pages required either a substantial investment in automation or a staff of typists. At the pilot stage, KSL opted for neither, so much of the information that KSL provided was out-of-date. It was very much a prototype. Over time, KSL invested more in the system. In 1979, for example, KSL partnered with the National Weather Service to bring real-time weather updates to teletext---all automatically via the NWS's computerized system called AFOS.

At that time, KSL was still operating under an experimental license, one that didn't allow them to onboard customers beyond their 30-set test market. The goal was to demonstrate the technology and its compatibility with the broader ecosystem. In 1980, the FCC granted a similar experimental license to CBS-affiliated KMOX in St. Louis, who started a similar pilot effort using a French system called Antiope. Over the following few years, the FCC allowed expansion of this test to other CBS affiliates including KNXT in Los Angeles.
To emphasize the educational and practical value of teletext (and no doubt attract another funding source), CBS partnered with Los Angeles PBS affiliate KCET, which carried its own teletext programming with a characteristic slant towards enrichment. Meanwhile, in Chicago, station WFLD introduced a teletext service called Keyfax, built on Ceefax technology as a joint venture with Honeywell and telecom company Centel. Despite the lack of consumer availability, teletext was becoming a crowded field---and for the sake of narrative simplicity I am leaving out a whole set of other North American ventures right now. In 1983, there were at least a half dozen stations broadcasting teletext based on British or French technology, and yet, there were zero teletext decoders on the US market.

Besides their use of an experimental license, the teletext pilot projects were constrained by the need for largely custom prototype decoders integrated into customers' television sets. Broadcast executives promised the price could come down to $25, but the modifications actually available continued to cost in the hundreds. The director of public affairs at KSL, asked about this odd conundrum of a nearly five-year-old service that you could not buy, pointed out that electronics manufacturers were hesitant to mass produce an inexpensive teletext decoder as long as it was unclear which of several standards would prevail. The reason that no one used teletext, then, was in part the sheer number of different teletext efforts underway. And, of course, things were looking pretty evenly split: CBS had fully endorsed the French-derived system, and was a major nationwide TV network. But most non-network stations with teletext projects had gone the British route. In terms of broadcast channels, it was looking about 50/50.

Further complicating things, teletext proper was not the only contender. There was also videotex. The terminology has become somewhat confused, but I will stick to the nomenclature used in the 1980s: teletext services used a continuous one-way broadcast of every page, and decoders simply displayed the requested page when it came around in the loop. Videotex systems were two-way, with the customer using a phone line to request a specific page which was then sent on-demand. Videotex systems tended to operate over telephone lines rather than television cable, but were frequently integrated into television sets. Videotex is not as well remembered as teletext because it was a massive commercial failure, with the very notable exception of the French Minitel.

But in the '80s they didn't know that yet, and the UK had its own videotex venture called Prestel. Prestel had the backing of the Post Office, because they ran the telephones and thus stood to make a lot of money off of it. For the exact same reason, US telephone company GTE bought the rights to the system in the US. Videotex is significantly closer to "the internet" in its concept than teletext, and GTE was entering a competitive market. By 1981, Radio Shack had already been selling a videotex terminal for several years, a machine originally developed as the "AgVision" for use with an experimental Kentucky agricultural videotex service and then offered nationwide. This creates an amusing irony: teletext services existed but it was very difficult to obtain a decoder to use them. Radio Shack was selling a videotex client nationwide, but what service would you use it with?
In practice, the "TRS-80 Videotex," as the AgVision came to be known, was used mostly as a client for CompuServe and Dow Jones. Neither of these were actually videotex services, using neither the videotex UX model nor the videotex-specific features of the machine. The TRS-80 Videotex was reduced to just a slightly weird terminal with a telephone modem, and never sold well until Radio Shack beefed it up into a complete microcomputer and relaunched it as the TRS-80 Color Computer. Radio Shack also sold a backend videotex system, and apparently some newspapers bought it in an effort to launch a "digital edition." The only one to achieve long-term success seems to have been StarText, a service of the Fort Worth Star-Telegram. It was popular enough to be remembered by many from the Fort Worth area, but there was little national impact. It was clearly not enough to float sales of the TRS-80 Videotex and the whole thing has been forgotten.

Well, with such a promising market, GTE brought its US Prestel service to market in 1982. As the TRS-80 dropped its Videotex ambitions, Zenith launched a US television set with a built-in Prestel client. Prestel wasn't the only videotex operation, and GTE wasn't the only company marketing videotex in the US. If the British Post Office and GTE thought they could make money off of something, you know AT&T was somewhere around. They were, and in classic AT&T fashion.

During the 1970s, the Canadian Communications Research Centre developed a vector-based drawing system. Ontario manufacturer Norpak developed a consumer terminal that could request full-color pages from this system using a videotex-like protocol. Based on the model of Ceefax, the CRC designed a system called Telidon that worked over television (in a more teletext-like fashion) or phone lines (like videotex), with the capability of publishing far more detailed graphics than the simple box drawings of teletext. Telidon had several cool aspects, like the use of a pretty complicated vector-drawing terminal and a flexible protocol designed for interoperability between different communications media. That's the kind of thing AT&T loved, so they joined the effort. With CRC, AT&T developed NABTS, the North American Broadcast Teletext Specification---based on Telidon and intended for one-way broadcast over TV networks.

NABTS was complex and expensive compared to Ceefax/ORACLE/WST based systems. A review of KSL's pilot notes how the $40,000 budget for their origination system compared to the cost quoted by AT&T for an NABTS headend: as much as $2 million. While KSL's estimates of $25 for a teletext decoder had not been achieved, the prototypes were still running cheaper than NABTS clients that ran into the hundreds. Still, the graphical capabilities of NABTS were immediately impressive compared to text-only services. Besides, the extensibility of NABTS onto telephone systems, where pages could be delivered on-demand, made it capable of far larger databases. When KSL first introduced teletext, they spoke of a scheme where a customer could call a phone number and, via DTMF menus, request an "extended" page beyond the 120 normally transmitted. They could then request that page on their teletext decoder and, at the end of the normal 120 page loop, it would be sent just for them. I'm not sure if that was ever implemented or just a concept. In any case, videotex systems could function this way natively, with pages requested and retrieved entirely by telephone modem, or using hybrid approaches.
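To make the difference between the two access models a little more concrete, here's a small back-of-the-envelope sketch in Python. The parameters are illustrative assumptions (a 120-page loop like KSL's, a guessed broadcast rate of a few pages per second, a guessed ~1200 bps videotex modem); none of them are measured figures from any of the actual services.

```python
# Rough sketch of the two access models described above.
# All parameters are illustrative assumptions, not measured values.

PAGES_IN_LOOP = 120        # KSL launched with 120 pages in the broadcast loop
PAGES_PER_SECOND = 4       # assumed broadcast rate over spare VBI lines
MODEM_RATE_CPS = 120       # assumed ~1200 bps videotex modem, ~120 chars/sec
PAGE_SIZE_CHARS = 20 * 32  # one 20x32 character page

def teletext_wait(requested_page: int, loop_position: int) -> float:
    """Seconds until the requested page next comes around in the loop."""
    pages_until = (requested_page - loop_position) % PAGES_IN_LOOP
    return pages_until / PAGES_PER_SECOND

def videotex_wait() -> float:
    """Seconds to fetch one page on demand over a phone line."""
    return PAGE_SIZE_CHARS / MODEM_RATE_CPS

# Average teletext wait is half the loop duration, regardless of which page
# you want; videotex pays a fixed transfer cost per page but scales to an
# arbitrarily large database, since nothing has to fit in the loop.
avg_teletext = (PAGES_IN_LOOP / 2) / PAGES_PER_SECOND
print(f"teletext, average wait: {avg_teletext:.1f} s")
print(f"videotex, per-page fetch: {videotex_wait():.1f} s")
```

The tradeoff the sketch illustrates is the one the operators faced: the teletext loop gets slower as you add pages, while videotex stays fast but needs a return channel and per-customer infrastructure.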
NABTS won the support of NBC, who launched a pilot NABTS service (confusingly called NBC Teletext) in 1981 and went into full service in 1983. CBS wasn't going to be left behind, and trialed and launched NABTS (as CBS ExtraVision) at the same time. That was an ignominious end for CBS's actual teletext pilot, which quietly shut down without ever having gone into full service. ExtraVision and NBC Teletext are probably the first US interactive TV services that consumers could actually buy and use.

Teletext was not dead, though. In 1982, Cincinnati station WKRC ran test broadcasts for a WST-based teletext service called Electra. WKRC's parent company, Taft, partnered with Zenith to develop a real US-market consumer WST decoder for use with the Electra service. In 1983, the same year that ExtraVision and NBC Teletext went live, Zenith teletext decoders appeared on the shelves of Cincinnati stores. They were plug-in modules for recent Zenith televisions, meaning that customers would likely also need to buy a whole new TV to use the service... but it was the only option, and seems to have remained that way for the life of US teletext. I believe that Taft's Electra was the first teletext service to achieve a regular broadcast license. Through the mid-1980s, Electra would expand to more television stations, reaching similar penetration to the videotex services.

In 1982, Keyfax (remember Keyfax? it was the one on WFLD in Chicago) had made the pivot from teletext to videotex as well, adopting the Prestel-derived technology from GTE. In 1984, Keyfax gave up on their broadcast television component and became a telephone modem service only. Electra jumped on the now-free VBI lines of WFLD and launched in Chicago. WTBS in Atlanta carried Electra, and then in the biggest expansion of teletext, Electra appeared on SPN---a satellite network that would later become CNBC. While major networks, and major companies like GTE and AT&T, pushed for the videotex NABTS, teletext continued to have its supporters among independent stations. Los Angeles's KTTV started its own teletext service in 1984, which combined locally-developed pages with national news syndicated from Electra. This seemed like the start of a promising model for teletext across independent stations, but it wasn't often repeated. Oh, and KSL? At some point, uncertain to me but before 1984, they switched to NABTS.

Let's stop for a moment and recap the situation. Between about 1978 and 1984, over a dozen major US television stations launched interactive TV offerings using four major protocols that fell into two general categories. One of those categories was one-way over television, while the other was two-way over telephone or one-way over television, with some operators offering both. Several TV stations switched between types. The largest telcos and TV networks favored one option, but it was significantly more expensive than the other, leading smaller operators to choose differently. The hardware situation was surprisingly straightforward in that, within teletext and videotex, consumers only had one option and it was very expensive. Oh, and that's just the technical aspects. The business arrangements could get even stranger. Teletext services were generally free, but videotex services often charged a service fee. This was universally true for videotex services offered over telephone and often, but not always, true for videotex services over cable. Were the videotex services over cable even videotex?
Doesn't that contradict the definition I gave earlier? Is that why NBC called their videotex service teletext? And isn't videotex over telephone barely differentiated from computer-based services like CompuServe and The Source that were gaining traction at the same time?

I think this all explains the failure of interactive TV in the 1980s. As you've seen, it's not that no one tried. It's that everyone tried, and they were all tripping over each other the entire time. Even in Canada, where the government had sponsored development of the Telidon system ground-up to be a nationwide standard, the influence of US teletext services created similar confusion. For consumers, there were so many options that they didn't know what to buy, and besides, the price of the hardware was difficult to justify with the few stations that offered teletext. The fact that teletext had been hyped as the "next big thing" by newspapers since 1978, and only reached the market in 1983 as a shambolic mess, surely did little for consumer confidence.

You might wonder: where was the FCC during this whole thing? In the US, we do not have a state broadcaster, but we do have state regulation of broadcast media that is really quite strict as to content and form. During the late '70s, under those first experimental licenses, the general perception seemed to be that the FCC was waiting for broadcasters to evaluate the different options before selecting a nationwide standard. Given that the FCC had previously dictated standards for television receivers, it didn't seem like that far of a stretch to think that a national-standard teletext decoder might become mandatory equipment on new televisions. Well, it was political. The long, odd experimental period from 1978 to 1983 was basically a result of FCC indecision. The commission wasn't prepared to approve anything as a national standard, but the lack of approval meant that broadcasters weren't really allowed to use anything outside of limited experimental programs. One assumes that they were being aggressively lobbied by every side of the debate, which no doubt factored into the FCC's 1981 decision that teletext content would be unregulated, and 1982 statements from commissioners suggesting that the FCC would not, in fact, adopt any technical standards for teletext.

There is another factor wrapped up in this whole story, another tumultuous effort to deliver text over television: closed captioning. PBS introduced closed captioning in 1980, transmitting text over line 21 of the VBI for decoding by a set-top box. There are meaningful technical similarities between closed captioning and teletext, to the extent that the two became competitors. Some broadcasters that added NABTS dropped closed captioning because of incompatibility between the equipment in use. This doesn't seem to have been a real technical constraint, and was perhaps more likely cover for a cost-savings decision, but it generated considerable controversy that led to the National Association of the Deaf organizing for closed captioning and against teletext. The topic of closed captioning continued to haunt interactive TV. TV networks tended to view teletext or videotex as the obvious replacements for line 21 closed captioning, due to their more sophisticated technical features. Of course, the problems that limited interactive TV adoption in general, high cost and fragmentation, made it unappealing to the deaf.
Closed captioning had only just barely become well-standardized in the mid-1980s and its users were not keen to give it up for another decade of frustration. While some deaf groups did support NABTS, the industry still set up a conflict between closed captioning and interactive TV that must have contributed to the FCC's cold feet. In April of 1983, at the dawn of US broadcast teletext, the FCC voted 6-1 to allow television networks and equipment manufacturers to support any teletext or videotex protocol of their choice. At the same time, they declined to require cable networks to carry teletext content from broadcast television stations, making it more difficult for any TV network to achieve widespread adoption [3]. The FCC adopted what was often termed a "market-based" solution to the question of interactive TV.

The market would not provide that solution. It had already failed. In November of 1983, Time ended their teletext service. That's right, Time used to have a TV network and it used to have teletext; it was actually one of the first on the market. It was also the first to fall, but they had company. CBS and NBC had significantly scaled back their NABTS programs, which were failing to make any money because of the lack of hardware that could decode the service. On the WST side of the industry, Taft reported poor adoption of Electra and Zenith reported that they had sold very few decoders, so few that they were considering ending the product line. Taft was having a hard time anyway, going through a rough reorganization in 1986 that seems to have eliminated most of the budget for Electra. Electra actually seems to have still been operational in 1992, an impressive lifespan, but it says something about the level of adoption that we have to speculate as to the time of death. Interactive TV services had so little adoption that they ended unnoticed, and by 1990, almost none remained.

Conflict with closed captioning still haunted teletext. There had been some efforts towards integrating teletext decoders into TV sets, by Zenith for example, but in 1990 line 21 closed caption decoding became mandatory. The added cost of a closed captioning decoder, and the similarity to teletext, seems to have been enough for the manufacturing industry to decide that teletext had lost the fight. Few, possibly no, teletext decoders were commercially available after that date.

In Canada, Telidon met a similar fate. Most Telidon services were gone by 1986, and it seems likely that none were ever profitable. On the other hand, the government-sponsored, open-standards nature of Telidon meant that it and descendants like NABTS saw a number of enduring niche uses. Environment Canada distributed weather data via a dedicated Telidon network, and Transport Canada installed Telidon terminals in airports to distribute real-time advisories. Overall, the Telidon project is widely considered a failure, but it has had enduring impact. The original vector drawing language, the idea that had started the whole thing, came to be known as NAPLPS, the North American Presentation Layer Protocol Syntax. NAPLPS had some conceptual similarities to HTML, as Telidon's concept of interlinking did to the World Wide Web. That similarity wasn't just theoretical: Prodigy, the second largest information service after CompuServe and the first to introduce a GUI, ran on NAPLPS. Prodigy is now viewed as an important precursor to the internet, but seen in a different light, it was just another videotex---one that actually found success.
I know that there are entire branches of North American teletext and videotex and interactive TV services that I did not address in this article, and I've become confused enough in the timeline and details that I'm sure at least one thing above is outright wrong. But that kind of makes the point, doesn't it? The thing about teletext here is that we tried, we really tried, but we badly fumbled it. Even if the internet hadn't happened, I'm skeptical that interactive television efforts would have gotten anywhere without a complete fresh start. And the internet did happen, so abruptly that it nearly killed the whole concept while television carriers were still tossing it around.

Nearly killed... but not quite. Even at the beginning of the internet age, televisions were still more widespread than computers. In fact, from a TV point of view, wasn't the internet a tremendous opportunity? Internet technology and more compact computers could enable more sophisticated interactive television services at lower prices. At least, that's what a lot of people thought. I've written before about [Cablesoft](https://computer.rip/2024-11-23-cablesoft.html) and it is just one small part of an entire 1990s renaissance of interactive TV. There's a few major 1980s-era services that I didn't get to here either. Stick around and you'll hear more.

You know what's sort of funny? Remember the AgVision, the first form of the TRS-80 Color Computer? It was built as a client for AGTEXT, a joint project of Kentucky Educational Television (who carried it on the VBI of their television network) and the Kentucky College of Agriculture. At some point, AGTEXT switched over to the line 21 closed captioning protocol and operated until 1998. It was almost the first teletext service, and it may very well have been the last.

[1] There's this weird thing going on where I keep tangentially bringing up Bonnevilles. I think it's just a coincidence of what order I picked topics off of my list, but maybe it reflects some underlying truth about the way my brain works. This Bonneville, Bonneville International, is a subsidiary of the LDS Church that owns television and radio stations. It is unrelated, except by being indirectly named after the same person, to the Bonneville Power Administration that operated an early large-area microwave communications network.

[2] There were of course large TV networks in the US, and they will factor into the story later, but they still relied on a network of independent but affiliated stations to reach their actual audience---which meant a degree of technical inconsistency that made it hard to roll out nationwide VBI services. Providing another hint at how centralization vs. decentralization affected these datacasting services, adoption of new datacasting technologies in the US has often been highest among PBS and NPR affiliates, our closest equivalents to something like the BBC or ITV.

[3] The regulatory relationship between broadcast TV stations, cable network TV stations, and cable carriers is a complex one. The FCC's role in refereeing the competition between these different parts of the television industry, which are all generally trying to kill each other off, has led to many odd details of US television regulation and some of the everyday weirdness of the American TV experience.
It's also another area where the US television industry stands in contrast to the European television industry, where state-owned or state-chartered broadcasting meant that the slate of channels available to a consumer was generally the same regardless of how they physically received them. Not so in the US! This whole thing will probably get its own article one day.
One of the most significant single advancements in telecommunications technology was the development of microwave radio. Essentially an evolution of radar, the first practical microwave telephone system appeared in the middle of the Second World War. By the time Japan surrendered, AT&T had largely abandoned their plan to build an extensive nationwide network of coaxial telephone cables. Microwave relay offered greater capacity at a lower cost. When Japan and the US signed their peace treaty in 1951, it was broadcast from coast to coast over what AT&T called the "skyway": the first transcontinental telephone lead made up entirely of radio waves.

The fact that live television coverage could be sent over the microwave system demonstrated its core advantage. The bandwidth of microwave links, their capacity, was truly enormous. Within the decade, a single microwave antenna could handle over 1,000 simultaneous calls. Microwave's great capacity, its chief advantage, comes from the high frequencies and large bandwidths involved. The design of microwave-frequency radio electronics was an engineering challenge that was aggressively attacked during the war because microwaves' short wavelengths made them especially suitable for radar. The cavity magnetron, one of the first practical microwave transmitters, was an invention of such import that it was the UK's key contribution to a technical partnership that led to the UK's access to US nuclear weapons research. Unlike the "peaceful atom," though, the "peaceful microwave" spread fast after the war. By the end of the 1950s, most long-distance telephone calls were carried over microwave. While coaxial long-distance carriers such as L-carrier saw continued use in especially congested areas, the supremacy of microwave for telephone communications would not fall until the adoption of fiber optics in the 1980s.

The high frequency, and short wavelength, of microwave radio is a limitation as well as an advantage. Historically, "microwave" was often used to refer to radio bands above VHF, including UHF. As RF technology improved, microwave shifted higher, and microwave telephone links operated mostly between 1 and 9 GHz. These frequencies are well beyond the limits of beyond-line-of-sight propagation mechanisms, and they penetrate and reflect only poorly. Microwave signals could be received over 40 or 50 miles in ideal conditions, but the two antennas needed to be within direct line of sight. Further complicating planning, microwave signals are especially vulnerable to interference from obstacles within the "Fresnel zone," the region around the direct line of sight through which most of the received RF energy passes.

Today, these problems have become relatively easy to overcome. Microwave relays, stations that receive signals and rebroadcast them further along a route, are located in positions of geographical advantage. We tend to think of mountain peaks and rocky ridges, but 1950s microwave equipment was large and required significant power and cooling, not to mention frequent attendance by a technician for inspection and adjustment. This was a tube-based technology, with analog and electromechanical control. Microwave stations ran over a thousand square feet, often of thick hardened concrete---in keeping with the post-war climate, and for more consistent temperature regulation, critical to keeping analog equipment in calibration. Where commercial power wasn't available, they consumed a constant supply of diesel fuel.
It simply wasn't practical to put microwave stations in remote locations. In the flatter regions of the country, locating microwave stations on hills gave them appreciably better range with few downsides. This strategy often stopped at the Rocky Mountains. In much of the American West, telephone construction had always been exceptionally difficult. Open-wire telephone leads had been installed through incredible terrain by the dedication and sacrifice of crews of men and horses. Wire strung over telephone poles proved able to handle steep inclines and rocky badlands, so long as the poles could be set---although inclement weather on the route could make calls difficult to understand. When the first transcontinental coaxial lead was installed, the route was carefully planned to follow flat valley floors whenever possible. This was an important requirement since it was installed mostly by mechanized equipment, heavy machines that were incapable of navigating the obstacles that the old pole and wire crews had crossed on foot.

The first installations of microwave adopted largely the same strategy. Despite the commanding views offered by mountains on both sides of the Rio Grande Valley, AT&T's microwave stations are often found on low mesas or even at the center of the valley floor. Later installations, and those in the especially mountainous states where level ground was scarce, became more ambitious. At Mt. Rose, in Nevada, an aerial tramway carried technicians up the slope to the roof of the microwave station---the only access during winter when snowpack reached high up the building's walls. Expansion in the 1960s involved increasing use of helicopters as the main access to stations, although roads still had to be graded for construction and electrical service.

These special arrangements for mountain locations were expensive, within the reach of the Long Lines department's monopoly-backed budget but difficult for anyone else, even Bell Operating Companies, to sustain. And the West---where these difficult conditions were encountered the most---also contained some of the least profitable telephone territory, areas where there was no interconnected phone service at all until government subsidy under the Rural Electrification Act. Independent telephone companies and telephone cooperatives, many of them scrappy operations that had expanded out from the manager's personal home, could scarcely afford a mountaintop fortress and a helilift operation to sustain it. For the telephone industry's many small players, and even the more rural Bell Operating Companies, another property of microwave became critical: with a little engineering, you can bounce it off of a mirror.

James Kreitzberg was, at least as the obituary reads, something of a wunderkind. Raised in Missoula, Montana, he earned his pilot's license at 15 and joined the Army Air Corps as soon as he was allowed. The Second World War came to a close shortly after, and so he went on to the University of Washington, where he studied aeronautical engineering, and then went back home to Montana, taking up work as an engineer at one of the state's largest electrical utilities. His brother, George, had taken a similar path: a stint in the Marine Corps and an aeronautical engineering degree from Oklahoma. While James worked at Montana Power in Butte, George moved to Salem, Oregon, where he started an aviation company that supplemented their cropdusting revenue by modifying Army-surplus aircraft for other uses.
Montana Power operated hydroelectric dams, coal mines, and power plants, a portfolio of facilities across a sparse and mountainous state that must have made communications a difficult problem. During the 1950s, James was involved in an effort to build a new private telephone system connecting the utility's facilities. It required negotiating some type of obstacle, perhaps a mountain pass. James proposed an idea: a mirror.

Because the wavelengths of microwaves are so short, say 30cm to 5cm (1GHz-6GHz), it's practical to build a flat metallic panel that spans multiple wavelengths. Such a panel will function like a reflector or mirror, redirecting microwave energy at an angle equal to the angle at which it arrived. Much like you can redirect a laser using reflectors, you can also redirect a microwave signal. Some early commentators referred to this technique as a "radio mirror," but by the 1950s the use of "active" microwave repeaters with receivers and transmitters had become well established, so by comparison reflectors came to be known as "passive repeaters."

James believed a passive repeater to be a practical solution, but Montana Power lacked the expertise to build one. For a passive repeater to work efficiently, its surface must be very flat and regular, even under varying temperature. Wind loading had to be accounted for, and the face had to be rigid enough not to flex under the wind. Of course, with his education in aeronautics, James knew that similar problems were encountered in aircraft: the need for lightweight metal structures with surfaces that kept an engineered shape. Wasn't he fortunate, then, that his brother owned a shop that repaired and modified aircraft.

I know very little about the original Montana Power installation, which is unfortunate, as it may very well be the first passive microwave repeater ever put into service. What I do know is that in the fall of 1955, James called his brother George and asked if his company, Kreitzberg Aviation, could fabricate a passive repeater for Montana Power. George, he later recounted, said that "I can build anything you can draw." The repeater was made in a hangar on the side of Salem's McNary Field, erected by the flightline as a test, and then shipped in parts to Montana for reassembly in the field. It worked. It worked so well, in fact, that as word of Montana Power's new telephone system spread, other utilities wrote to inquire about obtaining passive repeaters for their own telephone systems. In 1956, James Kreitzberg moved to Salem and the two brothers formed the Microflect Company. From the sidelines of McNary Field, Microflect built aluminum "billboards" that can still be found on mountain passes and forested slopes throughout the western United States, and in many other parts of the world where mountainous terrain, adverse weather, and limited utilities made the construction of active repeaters impractical.

Passive repeaters can be used in two basic configurations, defined by the angle at which the signal is reflected. In the first case, the reflection angle is around 90 degrees (the closer to this ideal angle, of course, the more efficiently the repeater performs). This situation is often encountered when there is an obstacle that the microwave path needs to "maneuver" around: for example, a ridge, or even a large structure like a building in between two sites. In the second case, the microwave signal must travel in something closer to a straight line---over a mountain pass between two towns, for example.
When the reflection angle is greater than 135 degrees, the use of a single passive repeater becomes inefficient or impossible, so Microflect recommends the use of two. Arranged like a dogleg or periscope, the two repeaters reflect the signal to the side and then onward in the intended direction. Microflect published an excellent engineering manual with many examples of passive repeater installations along with the signal calculations. You might think that passive repeaters would be so inefficient as to be impractical, especially when more than one was required, but this is surprisingly untrue. Flat aluminum panels are almost completely efficient reflectors of microwave, and somewhat counterintuitively, passive repeaters can even provide gain. In an active repeater, it's easy to see how gain is achieved: power is added. A receiver picks up a signal, and then a powered transmitter retransmits it, stronger than it was before. But passive repeaters require no power at all, one of their key advantages. How do they pull off this feat? The design manual explains with an ITU definition of gain that only an engineer could love, but in an article for "Electronics World," Microflect field engineer Ray Thrower provided a more intuitive explanation. A passive repeater, he writes, functions essentially identically to a parabolic antenna, or a telescope: Quite probably the difficulty many people have in understanding how the passive repeater, a flat surface, can have gain relates back to the common misconception about parabolic antennas. It is commonly believed that it is the focusing characteristics of the parabolic antenna that gives it its gain. Therefore, goes the faulty conclusion, how can the passive repeater have gain? The truth is, it isn't focusing that gives a parabola its gain; it is its larger projected aperture. The focusing is a convenient means of transition from a large aperture (the dish) to a small aperture (the feed device). And since it is projected aperture that provides gain, rather than focusing, the passive repeater with its larger aperture will provide high gain that can be calculated and measured reliably. A check of the method of determining antenna gain in any antenna engineering handbook will show that focusing does not enter into the basic gain calculation. We can also think of it this way: the beam of energy emitted by a microwave antenna expands in an arc as it travels, dissipating the "density" of the energy such that a dish antenna of the same size will receive a weaker and weaker signal as it moves further away (this is the major component of path loss, the "dilution" of the energy over space). A passive repeater employs a reflecting surface which is quite large, larger than practical antennas, and so it "collects" a large cross section of that energy for reemission. Projected aperture is the effective "window" of energy seen by the antenna at the active terminal as it views the passive repeater. The passive repeater also sees the antenna as a "window" of energy. If the two are far enough away from one another, they will appear to each other as essentially point sources. In practice, a passive repeater functions a bit like an active repeater that collects a signal with a large antenna and then reemits it with a smaller directional antenna. To be quite honest, I still find it a bit challenging to intuit this effect, but the mathematics bear it out as well. 
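If you want to see the mathematics for yourself, here is a minimal sketch using the textbook far-field approximation for a flat reflector, where the repeater's gain comes entirely from its projected aperture: G = 20*log10(4*pi*A/lambda^2). This is my own simplification, not Microflect's published method, and it ignores the corrections their tables include; it does, though, land within a decibel or so of the catalog figures quoted in the next paragraph.

```python
import math

def reflector_gain_db(area_sq_ft: float, freq_ghz: float,
                      reflection_angle_deg: float = 0.0) -> float:
    """Approximate far-field gain of a flat passive repeater.

    Textbook relation: G = 20*log10(4*pi*A_projected / wavelength^2),
    where the projected aperture shrinks with the cosine of half the
    angle through which the path is turned. An idealization only.
    """
    wavelength_m = 0.3 / freq_ghz                # c/f, with c ~ 3e8 m/s
    area_m2 = area_sq_ft * 0.09290304            # square feet -> square meters
    projected = area_m2 * math.cos(math.radians(reflection_angle_deg) / 2)
    return 20 * math.log10(4 * math.pi * projected / wavelength_m ** 2)

# The two catalog endpoints mentioned below, evaluated at 6.175 GHz and
# face-on incidence: roughly 92 dB and 121 dB, in the same ballpark as
# Microflect's quoted 90.95 dB and 120.48 dB.
for w_ft, h_ft in [(8, 10), (40, 60)]:
    print(f"{w_ft}'x{h_ft}': {reflector_gain_db(w_ft * h_ft, 6.175):.1f} dB")
```

The same relation also shows why near-straight-through paths are a problem: as the reflection angle grows, the projected aperture (and therefore the gain) collapses toward zero, which is why Microflect recommended a second panel instead.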
Interestingly, the effect only occurs when the passive repeater is far enough from either terminal so as to be usefully approximated as a point source. Microflect refers to this as the far field condition. When the passive repeater is very close to one of the active sites, within the near field, it is more effective to consider the passive reflector as part of the transmitting antenna itself, and disregard it for path loss calculations. This dichotomy between far field and near field behavior is actually quite common in antenna engineering (where an "antenna" is often multiple radiating and nonradiating elements within the near field of each other), but it's yet another of the things that gives antenna design the feeling of a dark art.

One of the most striking things about passive repeaters is their size. As a passive repeater becomes larger, it reflects a larger cross section of the RF energy and thus provides more gain. Much like with dish or horn antennas, the size of a passive repeater can be traded off against transmitter power (and the size of other antennas involved) to design an economical solution. Microflect offered standard sizes ranging from 8'x10' (gain at around 6.175GHz: 90.95 dB) to 40'x60' (120.48 dB, after a "rough estimate" reduction of 1 dB to account for the multipath interference effects possible when such a short wavelength reflects off of such a large panel). By comparison, a typical active microwave repeater site might provide a gain of around 140 dB---and we must bear in mind that dB is a logarithmic unit, so the difference between 121 and 140 is bigger than it sounds. Still, there's a reason that logarithms are used when discussing radio paths... in practice, it is orders of magnitude that make the difference in reliable reception. The reduction in gain from an active repeater to a passive repeater can be made up for with higher-gain terminal antennas and more powerful transmitters. Given that the terminal sites are often at far more convenient locations than the passive repeater, that tradeoff can be well worth it. Keep in mind that, as Microflect emphasizes, passive repeaters require no power and very little ("virtually no") maintenance.

Microflect passive repeaters were manufactured in sections that bolted together in the field, and the support structures provided for fine adjustment of the panel alignment after mounting. These features made it possible to install passive repeaters by helicopter onto simple site-built foundations, and many are found on mountainsides that are difficult to reach even on foot. Even in less difficult locations, these advantages made passive repeaters less expensive to install and operate than active repeaters. Even when the repeater site was readily accessible, passives were often selected simply for cost savings.

Let's consider some examples of passive repeater installations. Microflect was born of the power industry, and electrical generators and utilities remained one of their best customers. Even today, you can find passive repeaters at many hydroelectric dams. There is a practical need to communicate by telephone between a dispatch center (often at the utility's city headquarters) and the operators in the dam's powerhouse, but the powerhouse is at the base of the dam, often in a canyon where microwave signals are completely blocked. A passive repeater set on the canyon rim, at an angle downwards, solves the problem by redirecting the signal from horizontal to vertical.
Such an installation can be seen, for example, at the Hoover Dam. In some sense, these passive repeaters "relocate" the radio equipment from the canyon rim (where the desirable signal path is located) to a more convenient location with the other powerhouse equipment. Because of the short distance from the powerhouse to the repeater, these passives were usually small. This idea can be extended to relocating en-route repeaters to a more serviceable site. In Glacier National Park, Mountain States Telephone and Telegraph installed a telephone system to serve various small towns and National Park Service sites. Glacier is incredibly mountainous, with only narrow valleys and passes. The only points with long sight ranges tend to be very inaccessible. Mt. Furlong provided ideal line of sight to East Glacier and Essex along highway 2, but it would have been extremely challenging to install and maintain a microwave site on the steep peak. Instead, two passive repeaters were installed near the mountaintop, redirecting the signals from those two destinations to an active repeater installed downslope near the highway and railroad. This example raises another advantage of passive repeaters: their reduced environmental impact, something that Microflect emphasized as the environmental movement of the 1970s made agencies like the Forest Service (which controlled many of the most appealing mountaintop radio sites) less willing to grant permits that would lead to extensive environmental disruption. Construction by helicopter and the lack of a need for power meant that passive repeaters could be installed without extensive clearing of trees for roads and power line rights of way. They eliminated the persistent problem of leakage from standby generator fuel tanks. Despite their large size, passive repeaters could be camouflaged. Many in national forests were painted green to make them less conspicuous. And while they did have a large surface area, Microflect argued that since they could be installed on slopes rather than requiring a large leveled area, passive repeaters would often fall below the ridge or treeline behind them. This made them less visually conspicuous than a traditional active repeater site that would require a tower. Indeed, passive repeaters are only rarely found on towers, with most elevated off the ground only far enough for the bottom edge to be free of undergrowth and snow. Other passive repeater installations were less a result of exceptionally difficult terrain and more a simple cost optimization. In rural Nevada, Nevada Bell and a dozen independents and coops faced the challenge of connecting small towns with ridges between them. The need for an active repeater at the top of each ridge, even for short routes, made these rural lines excessively expensive. Instead, such towns were linked with dual passive repeaters on the ridge in a "straight through" configuration, allowing microwave antennas at the towns' existing telephone exchange buildings to reach each other. This was the case with the installation I photographed above Pioche. I have been frustratingly unable to confirm the original use of these repeaters, but from context they were likely installed by the Lincoln County Telephone System to link their "hub" microwave site at Mt. Wilson (with direct sight to several towns) to their site near Caliente. The Microflect manual describes, as an example, a very similar installation connecting Elko to Carlin. 
Two 20'x32' passive repeaters on a ridge between the two (unfortunately since demolished) provided a direct connection between the two telephone exchanges. As an example of a typical use, it might be interesting to look at the manual's calculations for this route. From Elko to the repeaters is 13.73 miles, the repeaters are close enough to each other as to be in near field (and so considered as a single antenna system), and from the repeaters to Carlin is 6.71 miles. The first repeater reflects the signal at a 68 degree angle, then the second reflects it back at a 45 degree angle, for a net change in direction of 23 degrees---a mostly straight route. The transmitter produces 33.0 dBm, both antennas provide a 34.5 dB gain, and the passive repeater assembly provides 88 dB of gain (this is calculated basically by consulting a table in the manual). That means there is 190 dB of gain in the total system. The 6.71 and 13.73 mile paths add up to 244 dB of free space path loss, and Microflect throws in a few more dB of loss to account for connectors and cables and the less than ideal performance of the double passive repeater. The net result is a received signal of -58 dBm, which is plenty acceptable for a 72-channel voice carrier system. This is all done at a significantly lower price than the construction of a full radio site on the ridge [1].

The combination of relocating radio equipment to a more convenient location and simply saving money leads to one of the iconic applications of passive repeaters, the "periscope" or "flyswatter" antenna. Microwave antennas of the 1960s were still quite large and heavy, and most were pressurized. You needed a sturdy tower to support one, and then a way to get up the tower for regular maintenance. This led to most AT&T microwave sites using short, squat square towers, often with surprisingly convenient staircases to access the antenna decks. In areas where a very tall tower was needed, it might just not be practical to build one strong enough. You could often dodge the problem by putting the site up a hill, but that wasn't always possible, and besides, good hilltop sites that weren't already taken became harder to find.

When Western Union built out their microwave network, they widely adopted the flyswatter antenna as an optimization. Here's how it works: the actual microwave antenna is installed directly on the roof of the equipment building, facing up. Only short waveguides are needed, weight isn't an issue, and technicians can conveniently service the antenna without even needing fall protection. Then, at the top of a tall guyed lattice tower similar to an AM mast, a passive repeater is installed at a 45 degree angle to the ground, redirecting the signal from the rooftop antenna to the horizontal. The passive repeater is much lighter than the antenna, allowing for a thinner tower, and will rarely if ever need service. Western Union often employed two side-by-side lattice towers with a "crossbar" between them at the top for convenient mounting of reflectors in each direction, and similar towers were used in some other installations such as the FAA's radar data links. Some of these towers are still in use, although generally with modern lightweight drum antennas replacing the reflectors.

Passive microwave repeaters experienced their peak popularity during the 1960s and 1970s, as the technology became mature and communications infrastructure proliferated.
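For the curious, here is the Elko-Carlin budget worked out in a few lines of Python. The manual excerpt doesn't state the operating frequency; the 244 dB free space figure works out if the route ran at roughly 2 GHz, so that is assumed here, along with about 3.5 dB of miscellaneous loss to land on Microflect's quoted result. Treat it as a sketch of the arithmetic, not a reproduction of the manual's exact method.

```python
import math

def fspl_db(distance_miles: float, freq_ghz: float) -> float:
    """Free space path loss: 96.6 + 20*log10(f_GHz) + 20*log10(d_miles)."""
    return 96.6 + 20 * math.log10(freq_ghz) + 20 * math.log10(distance_miles)

FREQ_GHZ = 2.0           # assumed; not given in the manual excerpt
TX_POWER_DBM = 33.0      # transmitter output
ANTENNA_GAIN_DB = 34.5   # each terminal antenna
REPEATER_GAIN_DB = 88.0  # double passive repeater, from the Microflect table
MISC_LOSS_DB = 3.5       # assumed: connectors, cables, double-passive penalty

path_loss = fspl_db(13.73, FREQ_GHZ) + fspl_db(6.71, FREQ_GHZ)  # cf. 244 dB above
received_dbm = (TX_POWER_DBM + 2 * ANTENNA_GAIN_DB + REPEATER_GAIN_DB
                - path_loss - MISC_LOSS_DB)

print(f"total path loss: {path_loss:.1f} dB")      # about 244.5 dB
print(f"received signal: {received_dbm:.1f} dBm")  # about -58 dBm
```

The "190 dB of gain" in the manual's framing is just the sum of the transmitter power and the three gain terms; everything after that is subtraction of losses, which is why link budgets are so pleasant to do in decibels.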
Microflect manufactured thousands of units from their new, larger warehouse across the street from their old hangar on McNary Field. Microflect's customer list grew to just about every entity in the Bell System, from Long Lines to Western Electric to nearly all of the BOCs. The list includes GTE, dozens of smaller independent telephone companies, most of the nation's major railroads, and electrical utilities from the original Montana Power to the Tennessee Valley Authority. Microflect repeaters were used by ITT Arctic Services and RCA Alascom in the far north, and overseas by oil companies and telecoms on islands and in mountainous northern Europe. In Hawaii, a single passive repeater dodged a mountain to connect Lanai City telephones to the Hawaii Telephone Company network at Tantalus on Oahu---nearly 70 miles in one jump. In Nevada, six passive repeaters joined two active sites to connect six substations to the Sierra Pacific Power Company's control center in Reno. Jamaica's first high-capacity telephone network involved 11 passive repeaters, one as large as 40'x60'. The Rocky Mountains are still dotted with passive repeaters, structures that are sometimes hard to spot but seem to loom over the forest once noticed. In Seligman, AZ, a sun-faded passive repeater looks over the cemetery. BC Telephone installed passive repeaters to phase out active sites that were inaccessible for maintenance during the winter.

Passive repeaters were, it turns out, quite common---and yet they are little known today. First, it cannot be ignored that passive repeaters are most common in areas where communications infrastructure was built post-1960 through difficult terrain. In North America, this means mostly the West [2], far away from the Eastern cities where we think of telephone history being concentrated. Second, the days of passive repeaters were relatively short. After widespread adoption in the '60s, fiber optics began to cut into microwave networks during the '80s and rendered microwave long-distance links largely obsolete by the late '90s. Considerable improvements in cable-laying equipment, not to mention lighter and more durable cables, made fiber optics easier to install in difficult terrain than coaxial had ever been. Besides, during the 1990s, more widespread electrical infrastructure, miniaturization of radio equipment, and practical photovoltaic solar systems all combined to make active repeaters easier to install. Today, active repeater systems installed by helicopter with independent power supplies are not that unusual, supporting cellular service in the Mojave Desert, for example. Most passive repeaters have been obsoleted by changes in communications networks and technologies. Satellite communications offer an even more cost effective option for the most difficult installations, and there really aren't that many places left where a small active microwave site can't be installed.

Moreover, little has been done to preserve the history of passive repeaters. In the wake of the 2015 Wired article on the Long Lines network, considerable enthusiasm has been directed towards former AT&T microwave stations, which were mostly preserved by their haphazard transfer to companies like American Tower. Passive repeaters, lacking even the minimal commercial potential of old AT&T sites, were mostly abandoned in place. Often found in national forests and other resource management areas, many have been demolished as part of site restoration.
In 2019, a historic resources report was written on the Bonneville Power Administration's extensive microwave network. It was prepared to address the responsibility that federal agencies have for historical preservation under the National Historic Preservation Act and National Environmental Policy Act, policies intended to ensure that at least the government takes measures to preserve history before demolishing artifacts. The report reads: "Due to their limited features, passive repeaters are not considered historic resources, and are not evaluated as part of this study."

In 1995, Valmont Industries acquired Microflect. Valmont is known mostly for their agricultural products, including center-pivot irrigation systems, but they had expanded their agricultural windmill business into a general infrastructure division that manufactured radio masts and communication towers. For a time, Valmont continued to manufacture passive repeaters as Valmont Microflect, but business seems to have dried up. Today, Valmont Structures manufactures modular telecom towers from their facility across the street from McNary Field in Salem, Oregon. A Salem local, descended from early Microflect employees, once shared a set of photos on Facebook: a beat-up hangar with a sign reading "Aircraft Repair Center," and in front of it, stacks of aluminum panel sections. Microflect workers erecting a passive repeater in front of a Douglas A-26. Rows of reflector sections beside a Shell aviation fuel station. George Kreitzberg died in 2004, James in 2017. As of 2025, Valmont no longer manufactures passive repeaters.

Postscript

If you are interested in the history of passive repeaters, there are a few useful tips I can give you. Nearly all passive repeaters in North America were built by Microflect, so they have a very consistent design. Locals sometimes confuse passive repeaters with old billboards or even drive-in theater screens; the clearest way to differentiate them is that passive repeaters have a face made up of aluminum modules with deep sidewalls for rigidity and flatness. Take a look at the Microflect manual for many photos.

Because passive repeaters are passive, they do not require a radio license proper. However, for site-based microwave licenses, the FCC does require that passive repeaters be included in paths (i.e. a license will be for an active site but with a passive repeater as the location at the other end of the path). These sites are almost always listed with a name ending in "PR".

I don't have any straight answer on whether or not any passive repeaters are still in use. It has likely become very rare, but there are probably still examples. Two sources suggest that Rachel, NV still relies on a passive repeater for telephone and DSL. I have not been able to confirm that, and the tendency of these systems to be abandoned in place means that people sometimes think they are in use long after they were retired. I can find documentation of a new utility SCADA system being installed, making use of existing passive repeaters, as recently as 2017.

[1] If you find these dB gain/loss calculations confusing, you are not alone. It is deceptively simple in a way that was hard for me to learn, and perhaps I will devote an article to it one day.

[2] Although not exclusively, with installations in places like Vermont and Newfoundland where similar constraints applied.
Alcatraz first operated as a prison in 1859, when the military fort first held convicted soldiers. The prison technology of the time was simple, consisting of little more than a basement room with a trap-door entrance. Only small numbers of prisoners were held in this period, but it established Alcatraz as a center of incarceration. Later, the Civil War triggered construction of a "political prison," a term with fewer negative connotations at the time, for Confederate sympathizers. This prison was more purpose-built (although actually a modification of an existing shop), but it was small and not designed for an especially high security level. It presaged, though, a much larger construction project to come. Alcatraz had several properties that made it an attractive prison. First, it had seen heavy military construction as a Civil War defensive facility, but just decades later improvements in artillery made its fortifications obsolete. That left Alcatraz surplus property, a complete military installation available for new use. Second, Alcatraz was formidable. The small island was made up of steep rock walls, and it was miles from shore in a bay known for its strong currents. Escape, even for prisoners who had seized control of the island, would be exceptionally difficult. These advantages were also limitations. Alcatraz was isolated and difficult to support, requiring a substantial roster of military personnel to ferry supplies back and forth. There were no connections to the mainland, requiring on-site power and water plants. Corrosive sea spray, sent over the island by the Bay's strong winds, laid perpetual siege to it. Buildings needed constant maintenance; rust covered everything. Alcatraz was not just a famous prison; it was a particularly complicated one. In 1909, Alcatraz lost its previous defensive role and pivoted entirely to the role of military prison. The Citadel, a hardened barracks building dating to the original fortifications, was partially demolished. On top of it, a new cellblock was built. This was a purpose-built prison, designed to house several hundred inmates under high security conditions. Unfortunately, few records seem to survive from the construction and operation of the cellblock as a disciplinary barracks. At some point, a manual telephone exchange was installed to provide service between buildings on the island. I only really know that because it was recorded as being removed later on. Communications to and from Alcatraz were a challenge. Radio and even light signals were used to convey messages between the island and other military installations on the bay. There was a constant struggle to maintain cables. Early efforts to lay cables in the bay were less about communications and more about triggering. Starting in 1883, the Army Corps of Engineers began the installation of "torpedoes" in the San Francisco Bay. These were different from what we think of as torpedoes today; they were essentially remotely operated mines. Each device floated in the water by its own buoyancy, anchored to the bottom by a cable that then ran to shore. An electrical signal sent down the cable detonated the torpedo. The system was intended primarily to protect the bay from submarines, a new threat that often required technically complex defenses. Submarines are, of course, difficult to spot. To make the torpedoes effective, the Army had to devise a targeting system. 
Observation posts on each side of the Golden Gate made sightings of possible submarines and reported them to a control post, where they were plotted on a map. With a threat confirmed, the control post would begin to detonate nearby torpedoes. A second set of observation posts, and a second line of torpedoes, were located further into the bay to address any submarines that made it through the first barrage. By 1891, there were three such control points in total: Fort Mason, Angel Island, and Yerba Buena. The rather florid San Francisco Examiner of the day described the control point at Fort Mason as a "chamber of death and destruction" in a tunnel twenty feet underground. The Army "death-dealers" that manned the plotting table in that bunker had access to a board that "greatly resemble[d] the switch board in the great operating rooms of the telephone companies." By cords and buttons, they could select chains of mines and send the signal to fire. NPS historians found that a torpedo control point had been planned at Alcatraz, and one of the fortifications modified to accommodate it, but it never seems to have been used. The 1891 article gives a hint of the reason, noting that the line from Alcatraz to Fort Mason was "favorable for a line of torpedoes" but that currents were so strong that it was difficult to keep them anchored. Perhaps this problem was discovered after construction was already underway. Somewhere around 1887-1888, the Army Signal Corps joined the cable-laying fray. A telegraph cable was constructed from the Presidio to Alcatraz, and provided good service except for the many times that it was dragged up by anchors and severed. This was a tremendous problem: in 1898, Gen. A. W. Greely of the Signal Corps called San Francisco the "worst bay in the country" for cable laying and said that no cable across the Golden Gate had lasted more than three years. The General attributed the problem mainly to the heavy shipping traffic, but I suspect that the notorious currents must have been a factor in just how many anchors were dragged through cables [1]. In 1889, a brand new Army telegraph cable was announced, one that would run from Alcatraz to Angel Island, and then from Angel Island to Marin County. An existing commercial cable crossed the Golden Gate, providing a connection all the way to the Presidio. The many failures of Alcatraz cables make it difficult to keep track. For example, a cable from Fort Mason to Alcatraz Island was apparently laid in 1891---but a few years later, it was lamented that Alcatraz's only cable connection to Fort Mason was indirect, via the 1889 Angel Island cable. Presumably the 1891 cable was damaged at some point and not replaced, but that event doesn't seem to have made the papers (or at least my search results!). In 1900, a Signal Corps officer on Angel Island made a routine check of the cable to Alcatraz, finding it in good working order---but noticing that a "four masted schooner... in direct line with the cable" seemed to be in trouble just off the island and was being assisted by a tug. That evening, the officer returned to the cable landing box to find the ship gone... along with the cable. A French ship, "Lamoriciere," had drifted from anchor overnight. A Signal Corps sergeant, apparently having spoken with harbor officials, reported that the ship would have run completely aground had the anchor not caught the Alcatraz cable and pulled it taut. 
Of course, the efforts of the tug to free Lamoriciere seem to have freed a little more than intended, and the cable was broken away from its landing. "Its end has been carried into the bay and probably quite a distance from land," the Signal Corps reported. This ongoing struggle, of laying new cables to Alcatraz and then seeing them dragged away a few years later, has dogged the island basically to the modern day---when we have finally just given up. Today, as during many points in its history, Alcatraz must generate its own power and communicate with the mainland via radio. When the Bureau of Prisons took control of Alcatraz in 1933, they installed entirely new radio systems. A marine AM radio was used to reach the Coast Guard, their main point of contact in any emergency. Another radio was used to contact "Alcatraz Landing" from which BOP ferries sailed, and over the years several radios were installed to permit direct communications with military installations and police departments around the Bay Area. At some point, equipment was made available to connect telephone calls to the island. I'm not sure if this was manual patching by BOP or Coast Guard radio operators, or if a contract was made with PT&T to provide telephone service by radio. Such an arrangement seems to have been in place by 1937, when an unexplained distress call from the island made the warden impossible to contact (by the press or Bureau of Prisons) because "all lines [were] tied up." Unfortunately I have not been able to find much on the radiotelephone arrangements. The BOP, no doubt concerned about security, did not follow the Army's habit of announcing new construction projects to the press. Fortunately, the BOP-era history of Alcatraz is much better covered by modern NPS documentation than the Army era (presumably because the more recent closure of the BOP prison meant that much of the original documentation was archived). Unfortunately, the NPS reports are mostly concerned with the history of the structures on the island and do not pay much attention to outside communications or the infrastructure that supported them. Internal arrangements on the island almost completely changed when the BOP took over. The Army had left Alcatraz in a degree of disrepair (discussions about closing it having started by at least 1913), and besides, the BOP intended to provide a much higher level of security than the Army had. Extensive renovations were made to the main cellblock and many supporting buildings from 1933 to about 1939. The 1930s had seen a great deal of innovation in technical security. Technologies like electrostatic and microwave motion sensors were available in early forms. On Alcatraz, though, the island was small and buildings tightly spaced. The prison staff, and in some cases their families, would be housed on the island just a stone's throw from the cellblock. That meant there would be quite a few people moving around exterior to the prison, ruling out motion sensors as a means of escape detection. Exterior security would instead be provided by guard and dog patrols. There was still some cutting-edge technical security when Alcatraz opened, including early metal detectors. At first, the BOP contracted the Teletouch Corporation of New York City. Teletouch, a manufacturer of burglar alarms and other electronic devices, was owned by or at least affiliated with famed electromagnetics inventor and Soviet spy Leon Theremin. 
Besides the instrument we remember him for today, Theremin had invented a number of devices for security applications, and the metal detectors were probably of his design. In practice, the Teletouch machines proved unsatisfactory. They were later replaced with machines made by Forewarn. I believe the metal detector on display today is one of the Forewarn products, although the NPS documents are a little unclear on this. Sensitive common areas like the mess hall, kitchen, and sallyport were fitted with electrically activated teargas canisters. Originally, the mess hall teargas was controlled by a set of toggle switches in a corner gun gallery, while the sallyport teargas was controlled from the armory. While the teargas system was never used, it was probably the most radical of Alcatraz's technical security measures. As more electronic systems were installed, the armory, with its hardened vault entrance and gun issue window, served as a de facto control center for Alcatraz's initial security systems. The Army's small manual telephone switchboard was considered unsuitable for the prison's use. The telephone system provided communication between the guards, making it a critical part of the overall security measures, and the BOP specified that all equipment and cabling needed to be better secured from any access by prisoners. Modifications to the cellblock building's entrance created a new room, just to the side of the sallyport, that housed a 100-line automatic exchange. The Automatic Electric telephones that appear throughout historic photos of the prison suggest that this exchange was built by AE. Besides providing dial service between prison offices and the many other structures on the island, the exchange was equipped with a conference circuit that included annunciator panels in each of the prison's main offices. Assuming this was the type provided by Automatic Electric, it provided an emergency communications system in which the guard telephones could ring all of the office and guard phones simultaneously, even interrupting calls already in progress. Annunciator panels in the armory and offices showed which phone had started the emergency conference, and which phones had picked up. From the armory, a siren on the building roof could be sounded to alert the entire island to any attempted escape. Some locations, including the armory and the warden's office, were also fitted with fire annunciators. I am less clear on this system. Fire circuits similar to the previously described conference circuit (and sometimes called "crash alarms" after their use on airfields) were an optional feature on telephone exchanges of the time. Crash alarms were usually activated by dedicated "hotline" phones, and mentions of "emergency phones" in various prison locations suggest that this system worked the same way. Indeed, 1950s and '60s photos show a red phone alongside other telephones in several prison locations. The fire annunciator panels probably would have indicated which of the emergency phones had been lifted to initiate the alarm. One of the most fascinating parts of Alcatraz, to a person like me, is the prison doors. Prison doors have a long history, one that is interrelated with but largely distinct from other forms of physical security. Take a look, for example, at the keys used in prisons. Prisons of the era, and even many today, rely on lever locks manufactured by specialty companies like Folger Adams and Sargent and Greenleaf. 
These locks are prized for their durability, and that extends to the keys, huge brass plates that could hold up to daily wear well beyond most locks. At Alcatraz, the first warden adopted a "sterile area" model in which areas accessible to prisoners should be kept as clear as possible of dangerous items like guns and keys. Guards on the cellblock carried no keys, and cell doors lacked traditional locks. Instead, the cell doors were operated by a central mechanical system designed by Stewart Iron Works. To let prisoners out of cells in the morning, a guard in the elevated gun gallery passed keys to a cellblock guard in a bucket or on a string. The guard unlocked the cabinet of a cell row's control system, revealing a set of large levers. The design is quite ingenious: by purely mechanical means, the guard could select individual cells or the entire row to be unlocked, and then by throwing the largest lever the guard could pull the cell doors open---after returning the necessary key to the gun gallery above. This 1934 system represents a major innovation in centralized access control, designed specifically for Alcatraz. Stewart Iron Works is still in business, although not building prison doors. Some years ago, the company assisted NPS's work to restore the locking system to its original function. The present-day CEO provided replicas of the original Stewart logo plate for the restored locking cabinets. Interviewing him about the restoration work, the San Francisco Chronicle wrote that "Alcatraz, he believes, is part of the American experience." The Stewart mechanical system seems to have remained in use on the B and C blocks until the prison closed, but the D block was either originally fitted, or later upgraded, with electrically locked cell doors. These were controlled from a set of switches in the gun gallery. In 1960, the BOP launched another wave of renovations on Alcatraz, mostly to bring its access and security arrangements up to modern standards. The telephone exchange was moved away from the sallyport to an upper floor of the administration building, freeing up its original space for a new control center. This is the modern sallyport control area that visitors look into through the ballistic windows; the old service windows and viewports into the armory anteroom that had been the de facto control center have since been removed. This control center is more typical of what you will see in modern prisons. Through large windows, guards observed the sallyport and visitor areas and controlled the electrically operated main gates. An electrical interlock prevented opening the full path from the cellblock to the outside, creating a mantrap in the visitor area through which the guards in the control room could identify everyone entering and leaving. Photos from the 1960 control room, and other parts of the prison around the same time, clearly show consoles for a Western Electric 507B PBX. The 507B is really a manual exchange, although it used keys rather than the more traditional plugboard for a more modern look. It dates back to about 1929---so I assume the 507B had been installed well before the 1960 renovation, and its appearance then is just a bias of more and better photos available from the prison's later days. Fortunately, the NPS Historic Furnishings Report for the cellblock building includes a complete copy of a 1960s memo describing the layout and requirements for the control center. 
We're fortunate to get such a detailed listing of the equipment:

- Four phones (these are Automatic Electric instruments, based on the photo). One is a fire reporting phone (presumably on the exchange's "crash alarm" circuit), one is the watch call reporting phone (detailed in a moment), one is a regular outgoing call telephone, and one is an "executive right of way" phone that I assume could disconnect other calls from the outgoing trunks.
- The 507B PBX switchboard
- An intercom for communication with each of the guard towers
- Controls for five electrically operated doors
- Intercoms to each of the electrically operated doors (many of these are right outside of the control center, but the glass is very thick and you would not otherwise be able to converse)
- An "annunciator panel for the interior telephone system," which presumably combines the conference circuit, fire circuit, and watch call annunciators
- An intercom to the visitor registration area
- A "paging intercom for group control purposes." I don't really know what that is; possibly it is for the public address speakers installed in many parts of the cellblock.
- A monitor speaker for the inmate radio system. This presumably allowed the control center to check the operation of the two-channel wired radio system installed in the cells.
- The "watch call answering device," discussed later
- An indicator panel that shows any open doors in the D cell block (which is the higher security unit and the only one equipped with electrically locking cell doors)
- Two-way radio remote console
- Tear gas controls

Many of these are things we are already familiar with, but the watch call telephone system deserves some more discussion. It was clearly present back in the 1930s, but it wasn't clear to me what it actually did. Fortunately this memo gives some details on the operation. Guards calling in to report their watch dialed extension 3331. This connects to the watch call answering device in the control center, which, when enabled, automatically answers the call during the first ring. The answering device then allows a guard anywhere in the control center to converse with the caller via a loudspeaker and microphone. So, the watch call system is essentially just a speakerphone. This approach is probably a holdover from the 1930s system (older documents mention a watch call phone as well), and that would have been the early days for speakerphones, making it a somewhat specialized device. Clearly it made these routine watch calls a lot more convenient for the control center, especially since the guard there didn't even have to do anything to answer. It might be useful to mention why this kind of system was used: I have never found any mention of two-way radios used on Alcatraz, and that's not surprising. Portable two-way radios were a nascent technology even in the 1960s---the handheld radio had basically been invented for the Second World War, and it took years for them to come down in size and price. If Alcatraz ever did issue radios to guards, it probably would have been in the last decade of operation. Instead, telephones were provided at enough places in the facility that guards could report their watch tour and any important events by finding a phone and calling the control center. 
Guards were probably required to report their location at various points as they patrolled, so the control center would receive quite a few calls that were just a guard saying where they were---to be written down in a log by a control room guard, who no doubt appreciated not having to walk to a phone to hear these reports. This provided both the functions of a "guard tour" system, ensuring that guards were actually performing their rounds, and improved the safety of guards by making it likely that the control center would notice fairly promptly that they had stopped reporting in. Alcatraz closed as a BOP prison in 1963, and after a surprising number of twists and turns ranging from plans to develop a shopping center to occupation by the Indians of All Tribes, Alcatraz opened to tourists. Most technology past this point might not be considered "historic," having been installed by NPS for operational purposes. I can't help but mention, though, that there were more attempts at a cable. For the NPS, operating the power plant at Alcatraz was a significant expense that they would much rather save. The idea of a buried power cable isn't new. I have seen references, although no solid documentation, that the BOP laid a power cable in 1934. They built a new power plant in 1939 and operated it for the rest of the life of the prison, so either that cable failed and was never replaced, or it never existed at all... I should take a moment here to mention that LLM-generated "AI slop" has become a pervasive and unavoidable problem around any "hot SEO topic" like tourism. Unfortunately the history of tourist sites like Alcatraz has become more and more difficult to learn as websites with well-researched history are displaced in search results by SEO spam---articles that often contain confident but unsourced and often incorrect information. This has always been a problem but it has increased by orders of magnitude over the last couple of years, and it seems that the LLM-generated articles are more likely to contain details that are outright made up than the older human-generated kind. It's really depressing. That's basically all I have to say about it. It seems that a power cable was installed to Alcatraz sometime in the 1960s but failed by about 1971. I'm a little skeptical of that because that was the era in which it was surplus GSA property, making such a large investment an odd choice, so maybe the 1980s article with that detail is wrong or confusing power with one of the several telephone cables that seem to have been laid (and failed) during BOP operations. In any case, in late 1980 or early 1981, Paul F. Pugh and Associates of Oakland designed a novel type of underwater power cable for the NPS. It was expected to provide power to Alcatraz at much reduced cost compared to more traditional underwater power cable technologies. It never even made it to day 1: after the cable was laid, but before commissioning, some failure caused a large span of it to float to the surface. The cable was evidently not repairable, and it was pulled back to shore. 'I don't know where we go from here,' William J. Whalen, superintendent of the Golden Gate National Recreation Area, said after the broken cable was hauled in. We do know now: where the NPS went from there was decades of operating two diesel generators on the island, until a 2017 DoE-sponsored project that installed solar panels on the cellblock building roof. 
The panels were intentionally installed such that they are not visible anywhere from the ground, preserving the historic integrity of the site. In aerial photos, though, they give Alcatraz a curiously modern look. The DoE describes the project, which incorporates battery storage and backup diesel generators, as "one of the largest microgrids in the United States." That is an interesting framing, one that emphasizes the modern valence of "microgrid," since Alcatraz had been a self-sufficient electrical system since the island's first electric lights. But what's old is, apparently, new again. I originally wrote much of this as part of a larger travelogue on my most recent trip to Alcatraz, which was coincidentally the same day as a visit by Pam Bondi and Doug Burgum to "survey" the prison for potential reopening. That piece became long and unwieldy, so I am breaking it up into more focused articles---this one on the technical history, a travelogue about the experience of visiting the island in this political context and its history as a symbol of justice and retribution, and probably a third piece on the way that the NPS interprets the site today. I am pitching the travelogue itself to other publications so it may not have a clear fate for a while, but if it doesn't appear here I'll let you know where. In any case there probably will be a loose part two to look forward to.

[1] Greely had a rather illustrious Army career. His term as chief of the Signal Corps was something of a retirement after he led several arctic expeditions, the topic of his numerous popular books and articles. He received the Medal of Honor shortly before his death in 1935.
A long time ago I wrote about secret government telephone numbers, and before that, secret military telephone buttons. I suppose this is becoming a series. To be clear, the "secret" here is a joke, but more charitably I could say that it refers to obscurity rather than any real effort to keep them secret. Actually, today's examples really make this point: they're specifically intended to be well known, but are still pretty obscure in practice. If you've been around for a while, you know how much I love telephone numbers. Here in North America, we have a system called the North American Numbering Plan (NANP) that has rigidly standardized telephone dialing practices since the middle of the 20th century. The US, Canada, and a number of Caribbean countries benefit from a very orderly system of area codes (more formally numbering plan areas or NPAs) followed by a subscriber number written in the format NXX-XXXX (this is a largely NANP-centric notation for describing phone number patterns: N represents the digits 2-9 and X any digit). All of these NANP numbers reside under the country code 1, allowing at least theoretically seamless international dialing within the NANP community. It's really a pretty elegant system. NANP is the way it is for many reasons, but it mostly reflects technical requirements of the telephone exchanges of the 1940s. This is more thoroughly explained in the link above, but one of the goals of NANP is to ensure that step-by-step (SxS) exchanges can process phone numbers digit by digit as they are dialed. In other words, it needs to be possible to navigate the decision tree of telephone routing using only the digits dialed so far. Readers with a computer science education might have some tidy way to describe this in terms of Chomsky or something, but I do not have a computer science education; I have an Information Technology education. That means I prefer flow charts to automata, and we can visualize a basic SxS exchange as a big tree. When you pick up your phone, you start at the root of the tree, and each digit dialed chooses the edge to follow. Eventually you get to a leaf that is hopefully someone's telephone, but at no point in the process does any node benefit from the context of digits you dial before, after, or how many total digits you dial. This creates all kinds of practical constraints, and is the reason, for example, that we tend to write ten-digit phone numbers with a "1" before them. That requirement was in some ways long-lived (the last SxS exchange on the public telephone network was retired in 1999), and in other ways not so long lived... "common control" telephone exchanges, which did store the entire number in electromechanical memory before making a routing decision, were already in use by the time the NANP scheme was adopted. They just weren't universal, and a common nationwide numbering scheme had to be designed to accommodate the lowest common denominator. This discussion so far is all applicable to the land-line telephone. There is a whole telephone network that is, these days, almost completely separate but interconnected: cellular phones. Early cellular phones (where "early" extends into CDMA and early GSM deployments) were much more closely attached to the "POTS" (Plain Old Telephone Service). AT&T and Verizon both operated traditional telephone exchanges, for example 5ESS, that routed calls to and from their customers. 
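To make that digit-by-digit idea a little more concrete, here is a minimal sketch in Python of an SxS exchange as a walk down a tree. The route table is entirely invented, not any real office's: the point is just that each dialed digit immediately picks the next branch, and routing completes the moment the walk hits a leaf, with no knowledge of the digits still to come.

    # Toy model of step-by-step (SxS) routing. The routes are made up for
    # illustration; the key property is that each digit is acted on as it
    # arrives, never in the context of the whole dialed number.
    ROUTES = {
        "0": "local operator",
        "2": {"3": {"4": "234 office; remaining digits select the subscriber"}},
        "9": {"1": {"1": "emergency bureau"}},
    }

    def dial(digits: str) -> str:
        node = ROUTES
        for i, d in enumerate(digits, start=1):
            node = node.get(d)
            if node is None:
                return f"reorder tone after digit {i} (no such route)"
            if isinstance(node, str):
                return f"routed after digit {i}: {node}"
        return "off hook, waiting for more digits"

    if __name__ == "__main__":
        for number in ["0", "911", "2345678", "555"]:
            print(number, "->", dial(number))

Note that "2345678" is committed to a route after only three digits; a common control exchange, by contrast, could buffer the whole string before deciding what to do with it.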
These telephone exchanges have become increasingly irrelevant to mobile telephony, and you won't find a T-Mobile ESS or DMS anywhere. All US cellular carriers have adopted the GSM technology stack, and GSM has its own definition of the switching element that can be, and often is, fulfilled by an AWS EC2 instance running RHEL 8. Calls between cell phones today, even between different carriers, are often connected completely over IP and never touch a traditional telephone exchange. The point is that not only is telephone number parsing less constrained on today's telephone network; in the case of cellular phones, it is outright required to be more flexible. GSM also defines the properties of phone numbers, and it is a very loose definition. Keep in mind that GSM is deeply European, and was built from the start to accommodate the wide variety of dialing practices found in Europe. This manifests in ways big and small; one of the notable small ways is that the European emergency number 112 works just as well as 911 on US cell phones because GSM requires special handling for emergency numbers and dictates that 112 is one of them. In fact, the definition of an "emergency call" on modern GSM networks is requesting a SIP URI of "urn:service:sos". This reveals that dialed number handling on cellular networks is fundamentally different. When you dial a number on your cellular phone, the phone collects the entire number and then applies a series of rules to determine what to do, often leading to a GSM call setup process where the entire number, along with various flags, is sent to the network. This is all software-defined. In the immortal words of our present predicament, "everything's computer." The bottom line is that, within certain regulatory boundaries and requirements set by GSM, cellular carriers can do pretty much whatever they want with phone numbers. Obviously numbers need to be NANP-compliant to be carried by the POTS, but many modern cellular calls aren't carried by the POTS; they are completed entirely within cellular carrier systems through their own interconnection agreements. This freedom allows all kinds of things like "HD voice" (cellular calls connected without the narrow filtering and companding used by the traditional network), and a lot of flexibility in dialing. Most people already know about some weird cellular phone numbers. For example, you can dial *#06# to display your phone's various serial numbers. This is an example of a GSM MMI (man-machine interface) code, phone numbers that are handled entirely within your device but nonetheless defined as dialable numbers by GSM for compatibility with even the most basic flip phones. GSM also defined numbers called USSD for unstructured supplementary service data, which set up connections to the network that can be used in any arbitrary way the network pleases. Older prepaid phone services used to implement balance check and top-up operations using USSD numbers, and they're also often used in ways similar to Vertical Service Codes (VSCs) on the landline network to control carrier features. USSDs also enabled the first forms of mobile data, which involved a "special telephone call" to a USSD in order to download a cut-down form of ESPN in a weird mobile-specific markup language. Now, put yourself in the shoes of an enterprising cellular network. The flexibility of processing phone numbers as you please opens up all kinds of possibilities. Innovative services! Customer convenience! Sell them for money! 
Oh my god, sell them for money! It seems like this started with customer service. It is an old practice, dating to the Bell operating companies, to have special short phone numbers to reach the telephone company itself. The details varied by company (often based on technical constraints in their switching system), but a common early setup was that dialing 114 got you the repair service operator to report a problem with your phone line. These numbers were usually listed in the front of the phone book, and for the phone company the fact that they were "special" or nonstandard was sort of a feature, since they could ensure that they were always routed within the same switch. The selection of "911" as the US emergency number seems rooted in this practice, as later on several major telcos used the "N11" numbers for their service lines. This became immortalized in the form of 611, which will get you customer service for most phone carriers. So cellular companies did the same, allocating themselves "special" numbers for various service lines. Verizon offers #PMT to make a payment. Naturally, there's also room for upsell services: #ROAD for roadside assistance on Verizon. The odd thing about these phone numbers is that there's really no standard involved, they're just the arbitrary practices of specific cellular companies. The term "mobile dial code" (MDC) is usually used to refer to them, although that term seems to have arisen organically rather than by intent. Remember, these aren't a real thing! The carriers just make them up, all on their own. The only real constraint on MDCs is that they need to not collide with any POTS number, which is most easily achieved by prefixing them with some combination of * and #, and usually not "*#" because it's referenced by the GSM standard for MMI. MDCs are available for purchase, but the terms don't seem to be public and you have to negotiate separately with each carrier. That's because there is no centralization. This is where MDCs stand in clear contrast to the better known SMS Short Code, or SMSSC. Those are the five or six-digit numbers widely used in advertising campaigns. SMSSCs are centrally managed by the SMS Short Code Registry, which is a function of industry association CTIA but contracted to iConectiv. iConectiv is sort of like the SAIC of the communications industry, a huge company that dates back to the Bell System (where it became Bellcore after divestiture) and that no one has heard of but nonetheless is a critically important part of the telephone system. Providers that want to have an SMSSC (typically on behalf of one of their customers) pay a fee, and usually recoup it from the end user. That fee is not cheap, typical end-user rates for an SMSSC run over $10k a year. But at least it's straightforward, and your SMS A2P or marketing company can make it happen for you. MDCs have no such centralization, no standardized registration process. You negotiate with each carrier individually. That means it's pretty difficult to put together "complete coverage" on an MDC by getting the same one assigned by every major carrier. And this is one of those areas where "good enough" is seldom good enough; people get pissed off when something you advertise doesn't work. Putting a phone number that only works for some people on a billboard can quickly turn into an expensive embarrassment, so companies will be wary of using an MDC in marketing if they don't feel really confident that it works for the vast majority of cellphone users. 
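As a rough illustration of why a leading * or # is all it takes to keep these codes clear of the POTS numbering space, here is a little sketch. The classification rules are my own invention, not anything from a GSM spec or any carrier's actual configuration, but they show the general shape of what a handset or switch can do once it has the whole dialed string in hand.

    # Illustrative only: invented rules showing how a complete dialed string
    # might be sorted into buckets. Real phones follow the GSM/3GPP specs and
    # carrier configuration, not this toy logic.
    import re

    EMERGENCY = {"911", "112"}  # GSM mandates emergency handling for 112; 911 is the NANP number

    def classify(dialed: str) -> str:
        if dialed in EMERGENCY:
            return "emergency call"
        if dialed == "*#06#":
            return "MMI code handled entirely on the phone (serial numbers)"
        if dialed.startswith(("*", "#")) and dialed.endswith("#"):
            return "USSD session with the network"
        if re.fullmatch(r"[*#]{1,2}[0-9A-Z]+", dialed):
            return "carrier-defined code (VSC or mobile dial code)"
        if re.fullmatch(r"1?[2-9]\d{2}[2-9]\d{6}", dialed):
            return "NANP number, ordinary call setup"
        return "something else (short code, international, etc.)"

    if __name__ == "__main__":
        for s in ["911", "*#06#", "*225#", "#250", "#PMT", "15055551234"]:
            print(s, "->", classify(s))

The prefix does all the work here: nothing that starts with * or # can ever collide with an NANP number, so what happens next is entirely up to the carrier.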
Because of this fragmentation, adoption of MDCs for marketing purposes has been very low. The only going concern I know of is #250, operated by a company called Mobile Direct Response. The premise of #250 is very simple: users dial #250 and are greeted by a simple IVR. They say a keyword, and they're either forwarded to the phone number of the business that paid for the keyword or they receive a text message response with more information. #250 is specifically oriented towards radio advertising, where asking people to remember a ten-digit phone number is, well, asking a lot. It's also made the jump to podcast advertising. #250 is priced in a very radio-centric way, by the keyword and the size of the market area in which the advertisement that gives the keyword is played. #250 was founded by Dave Robinett, who used to work on marketing at Sprint, presumably where he became aware that these MDCs were a possibility. He has negotiated for #250 to work across a substantial list of cellular carriers in the US and Canada, providing almost complete coverage. That wasn't easy; Robinett said in an interview that it took five years to get AT&T, T-Mobile, Verizon, and Sprint on board. #250 does not appear to be especially widely used. For one, the website is a little junky, with some broken links and other indications that it is not backed by a large communications department. Dave Robinett may be the entire company. They've been operating since at least 2017, and I've only ever heard it in an ad once---a podcast ad that ended with "Call #250 and say I need a dentist." One thing you quickly notice when you look into telephone marketing is that dentists are apparently about 80% of the market. He does mention success with shows like "Rush, Hannity, and Levin," so it's safe to say that my radio habits are a little different from Robinett's. That's not to say that #250 is a failure. In the same interview Robinett says that the company pays his mortgage and, well, that ain't too bad. But it's also nothing like the widespread adoption of SMSSCs. One wonders if the limitation of MDCs to one company that is so focused on radio marketing limits their potential. It might really open things up if some company created a registration service, and prenegotiated terms with carriers so that companies could pick up their own MDCs to use as they please. Well, yeah, someone's trying. Around 2006, a recently-founded mobile marketing company called Zoove announced StarStar dialing. I'm a little unclear on Zoove's history. It seems that they were originally founded as Teleractive in Rhode Island as an SMS short code keyword response service, and after an infusion of VC cash moved to Palo Alto and started looking for something bigger. In 2016, they were acquired by a call center technology company called Mindful. Or maybe Zoove sold the StarStar business to Mindful? Stick a pin in that. I don't love the name StarStar, which has shades of Spacestar Ordering. But it refers to their chosen MDC prefix, two stars. Well, even that point is a little odd: according to their marketing material you can also get numbers with a # prefix or * prefix, but all of the examples use **. I would say that, in general, StarStar has it a little less together than #250. Their website is kind of broken; it only loads intermittently and some of the images are missing. At one point it uses the term "CADC" to describe these numbers but I can't find that expanded anywhere. 
Plus the "About" page refers repeatedly to Virtual Hold Technologies, which renamed to VHT in 2018 and Mindful 2022. It really feels like the vestigial website of a dead company. I know about StarStar because, for a time, trucks from moving franchise All My Sons prominently bore the number MOVE on the side. Indeed, this is still one of the headline examples on the StarStar website, but it doesn't work. I just get a loud click and then the call ends. And it's not that StarStar doesn't work with my mobile carrier, because StarStar's own number MOBILE does connect to their IVR. That IVR promises that a representative will speak with me shortly, plays about five seconds of hold music, and then dumps me on a voicemail system. Despite StarStar numbers apparently basically working, I'm finding that most of the examples they give on their website won't even connect. Perhaps results will vary depending on the mobile network. Well, perhaps not that much is lost. StarStar was founded by Steve Doumar, a serial telephone marketing entrepreneur with a colorful past founding various inbound call center companies. Perhaps his most famous venture is R360, a "lead acquisition" service memorialized by headlines like "Drug treatment referral service took advantage of addictions to make a quick buck" from the Federal Trade Commission. He's one of those guys whose bio involves founding a new company every two years, which he has to spin as entrepreneurial dynamism rather than some combination of fleeing dissatisfied investors and fleeing angered regulators. Today he runs whisp.io, a "customer activation platform" that appears to be a glorified SMS advertising service featuring something ominously called "simplified opt-in." Whisp has a YouTube channel which features the 48-second gem "Fun Fact We Absolutely Love About Steve Doumar". Description: Our very own CEO, Steve Doumar is a kind and generous person who has given back to the community in many ways; this man is absolutely a man with a heart of gold. Do you want to know the fun fact? Yes you do! Here it is: "He is an incredible philanthropist. He loves helping other people. Every time I'm with him he comes up with new ways and new ideas to help other people. Which I think is amazing. And he doesn't brag about it, he doesn't talk about it a lot." Except he's got his CMO making a YouTube video about it? From Steve Doumar's blog: American entrepreneur Ray Kroc expressed the importance of persisting in a busy world where everyone wants a bite of success. This man is no exception. An entrepreneur. A family man. A visionary. These are the many names of a man that has made it possible for opt-ins to be safe, secure, and accurate; Steve Doumar. I love this stuff, you just can't make it up. I'm pretty sure what's going on here is just an SEO effort to outrank the FTC releases and other articles about the R360 case when you search for his name. It's only partially working, "FTC Hits R360 and its Owner With $3.8 Million Civil ..." still comes in at Google result #4 for "Steve Doumar," at least for me. But hey, #4 is better than #1. Well, to be fair to StarStar, I don't think Steve Doumar has been involved for some years, but also to be fair, some of their current situation clearly dates to past behavior that is maybe less than savory. Zoove originally styled itself as "The National StarStar Registry," clearly trying to draw parallels to CTIA/iConectiv's SMSSC registry. 
Their largest customer was evidently a company called Sumotext, which leased a number of StarStar numbers to offer an SMS and telephone marketing service. In 2016, Sumotext sued StarStar, Zoove, VHT (now Mindful), and a healthy list of other entities all involved in StarStar including the intriguingly named StarSteve LLC. I'm not alone in finding the corporate history a little baffling; in a footnote on one ruling the court expressed confusion about all the different names and opted to call them all Zoove. In any case, Sumotext alleged that Zoove, StarSteve, and VHT all merged as part of a scheme to illegally monopolize the StarStar market by undercutting the companies that had been leasing the numbers and effectively giving VHT (Mindful) an exclusive ability to offer marketing services with StarStar numbers. The case didn't end up going anywhere for Sumotext; the jury found that Sumotext hadn't established a relevant market, which is a key part of a Sherman Act case. An appeal was made all the way to the Supreme Court, but they didn't take it up. What the case did do was publicize some pretty sketchy-sounding details, like the seemingly uncontested accusation that VHT got Sumotext's customer list from the registry database and used it to convert them all into StarSteve customers. And yes, the Steve in StarSteve is Steve Doumar. As best I can tell, the story here is that Steve Doumar founded Zoove (or bought Teleractive and renamed it or something?) to establish the National StarStar Registry, then founded a marketing company called StarSteve that resold StarStar numbers, then merged StarSteve and the National StarStar Registry together and cut off all of the other resellers. Apparently not a Sherman Act violation but it sure is a bad look, and I wonder how much it contributed to the lack of adoption of the whole StarStar idea---especially given that Sumotext seems to have been responsible for most of that adoption, including the All My Sons deal for **MOVE. I wonder if All My Sons had to take **MOVE off of their trucks because of the whole StarSteve maneuver? That seems to be what happened. Look, ten-digit phone numbers are hard to remember, that much is true. But as is, the "MDC" industry doesn't seem stable enough for advertising applications where the number needs to continue to work into the future. I think the #250 service is probably here to stay, but confined to the niche of audio advertising. StarStar raised at least $30 million in capital in the 2010s, but seems to have shot itself in the foot. StarStar owner VHT/Mindful, now acquired by Medallia, doesn't even mention StarStar as a product offering. Hey, remember how Steve Doumar is such a great philanthropist? There are a lot of vestiges around of StarStar Inc., a nonprofit that made StarStar numbers available to charitable organizations. Their website, starstar.org, is now a Wix error page. You can find old articles about StarStar Me, also written **me, which sounds lewd but was a $3/mo offering that allowed customers to get a vanity short code (such as ** followed by their name)---the original form of StarStar, dating back to 2012 and the beginning of Zoove. In a press release announcing StarStar Me, Zoove CEO Joe Gillespie said: With two-thirds of smartphone users having downloaded social networking apps to their phones, there's a rapidly growing trend in today's on-the-go lifestyle to extend our personal communications and identity into the digital realm via our mobile phones. 
And somehow this leads to paying $3 to get StarStarred? I love it! It's so meaningless! And years later it would be StarStar Mobile formerly Zoove by VHT now known as Mindful a Medallia company. Truly an inspiring story of industry, and just one little corner of the vast tapestry of phone numbers.
Some time ago, via a certain orange website, I came across a report about a mission to recover nuclear material from a former Soviet test site. I don't know what you're doing here, go read that instead. But it brought up a topic that I have only known very little about: hydronuclear testing. One of the key reasons for the nonproliferation concern at Semipalatinsk was the presence of a large quantity of weapons-grade material. This created a substantial risk that someone would recover the material and either use it directly or sell it---either way giving a significant leg up on the construction of a nuclear weapon. That's a bit odd, though, isn't it? Material refined for use in weapons is scarce and valuable, and besides that, rather dangerous. It's uncommon to just leave it lying around, especially not hundreds of kilograms of it. This material was abandoned in place because the nature of the testing performed required that a lot of weapons-grade material be present, and made it very difficult to remove. As the Semipalatinsk document mentions in brief, similar tests were conducted in the US and led to a similar abandonment of special nuclear material at Los Alamos's TA-49. Today, I would like to give the background on hydronuclear testing---the what and why. Then we'll look specifically at LANL's TA-49 and the impact of the testing performed there. First we have to discuss the boosted fission weapon. Especially in the 21st century, we tend to talk about "nuclear weapons" as one big category. The distinction between an "A-bomb" and an "H-bomb," for example, or between a conventional nuclear weapon and a thermonuclear weapon, is mostly forgotten. That's no big surprise: thermonuclear weapons have been around since the 1950s, so it's no longer a great innovation or escalation in weapons design. The thermonuclear weapon was not the only post-WWII design innovation. At around the same time, Los Alamos developed a related concept: the boosted weapon. Boosted weapons were essentially an improvement in the efficiency of nuclear weapons. When the core of a weapon goes supercritical, the fission produces a powerful pulse of neutrons. Those neutrons cause more fission, the chain reaction that makes up the basic principle of the atomic bomb. The problem is that the whole process isn't fast enough: the energy produced blows the core apart before it's been sufficiently "saturated" with neutrons to completely fission. That leads to a lot of the fuel in the core being scattered, rather than actually contributing to the explosive energy. In boosted weapons, a material that will undergo fusion is added to the mix, typically tritium and deuterium gas. The immense heat of the beginning of the supercritical stage causes the gas to undergo fusion, and it emits far more neutrons than the fissioning fuel does alone. The additional neutrons cause more fission to occur, improving the efficiency of the weapon. Even better, despite the theoretical complexity of driving a gas into fusion, the mechanics of this mechanism are actually simpler than the techniques used to improve yield in non-boosted weapons (pushers and tampers). The result is that boosted weapons produce a more powerful yield in comparison to the amount of fuel, and the non-nuclear components can be made simpler and more compact as well. This was a pretty big advance in weapons design and boosting is now a ubiquitous technique. It came with some downsides, though. The big one is that whole property of making supercriticality easier to achieve. 
Early implosion weapons were remarkably difficult to detonate, requiring an extremely precisely timed detonation of the high explosive shell. While an inconvenience from an engineering perspective, the inherent difficulty of achieving a nuclear yield also provided a safety factor. If the high explosives detonated for some unintended reason, like being struck by cannon fire as a bomber was intercepted, or impacting the ground following an accidental release, it wouldn't "work right." Uneven detonation of the shell would scatter the core, rather than driving it into supercriticality. This property was referred to as "one point safety": a detonation at one point on the high explosive assembly should not produce a nuclear yield. While it has its limitations, it became one of the key safety principles of weapon design. The design of boosted weapons complicated this story. Just a small fission yield, from a small fragment of the core, could potentially start the fusion process and trigger the rest of the core to detonate as well. In other words, weapon designers became concerned that boosted weapons would not have one point safety. As it turns out, two-stage thermonuclear weapons, which were being fielded around the same time, posed a similar set of problems. The safety problems around more advanced weapon designs came to a head in the late '50s. Incidentally, so did something else: shifts in Soviet politics had given Khrushchev extensive power over Soviet military planning, and he was no fan of nuclear weapons. After some on-again, off-again dialog between the time's nuclear powers, the US and UK agreed to a voluntary moratorium on nuclear testing, which began in late 1958. For weapons designers this was, of course, a problem. They had planned to address the safety of advanced weapon designs through a testing campaign, and that was now off the table for the indefinite future. An alternative had to be developed, and quickly. In 1959, the Hydronuclear Safety Program was initiated. By reducing the amount of material in otherwise real weapon cores, physicists realized they could run a complete test of the high explosive system and observe its effects on the core without producing a meaningful nuclear yield. These tests were dubbed "hydronuclear" because of the desire to observe the behavior of the core as it flowed like water under the immense explosive force. While the test devices were in some ways real nuclear weapons, the nuclear yield would be vastly smaller than the high explosive yield, practically nil. Weapons designers seemed to agree that these experiments complied with the spirit of the moratorium, being far from actual nuclear tests, but there was enough concern that Los Alamos went to the AEC and President Eisenhower for approval. They evidently agreed, and work started immediately to identify a suitable site for hydronuclear testing. While hydronuclear tests do not create a nuclear yield, they do involve a lot of high explosives and radioactive material. The plan was to conduct the tests underground, where the materials cast off by the explosion would be trapped. This would solve the immediate problem of scattering nuclear material, but it would obviously be impractical to recover the dangerous material once it was mixed with unstable soil deep below the surface. The material would stay, and it had to stay put! 
The US Army Corps of Engineers, a center of expertise in hydrology because of their reclamation work, arrived in October 1959 to begin an extensive set of studies on the Frijoles Mesa site. This was an unused area near a good road but far on the east edge of the laboratory, well separated from the town of Los Alamos and pretty much anything else. More importantly, it was a classic example of northern New Mexican geology: high up on a mesa built of tuff and volcanic sediments, well-drained and extremely dry soil in an area that received little rain. One of the main migration paths for underground contaminants is their interaction with water, and specifically the tendency of many materials to dissolve into groundwater and flow with it towards aquifers. The Corps of Engineers drilled test wells, about 1,500' deep, and a series of 400' core samples. They found that on the Frijoles Mesa, groundwater was over 1,000' below the surface, and that everything above was far from saturation. That means no mobility of the water, which is trapped in the soil. It's just about the ideal situation for putting something underground and having it stay. Incidentally, this study would lead to the development of a series of new water wells for Los Alamos's domestic water supply. It also gave the green light for hydronuclear testing, and Frijoles Mesa was dubbed Technical Area 49 and subdivided into a set of test areas. Over the following three years, these test areas would see about 35 hydronuclear detonations carried out in the bottom of shafts that were about 200' deep and 3-6' wide. It seems that for most tests, the hole was excavated and lined with a ladder installed to reach the bottom. Technicians worked at the bottom of the hole to prepare the test device, which was connected by extensive cabling to instrumentation trailers on the surface. When the "shot" was ready, the hole was backfilled with sand and sealed at the top with a heavy plate. The material on top of the device held everything down, preventing migration of nuclear material to the surface. The high explosives did, of course, destroy the test device and the cabling, but not before the instrumentation trailers had recorded a vast amount of data. If you read these kinds of articles, you must know that the 1958 moratorium did not last. Soviet politics shifted again, France began nuclear testing, negotiations over a more formal test ban faltered. US intelligence suspected that the Soviet Union had operated their nuclear weapons program at full tilt during the test ban, and the military suspected clandestine tests, although there was no evidence they had violated the moratorium. Of course, it's guaranteed that they continued their research efforts; we did as well. Physicist Edward Teller, ever the nuclear weapons hawk, opposed the moratorium and pushed to resume testing. In 1961, the Soviet Union resumed testing, culminating in the test of the record-holding "Tsar Bomba," a 50-megaton device. The US resumed testing as well. The arms race was back on. US hydronuclear testing largely ended with the resumption of full-scale testing. The same safety studies could be completed on real weapons, and those tests would serve other purposes in weapons development as well. Although post-moratorium testing included atmospheric detonations, the focus had shifted towards underground tests and the 1963 Partial Test Ban Treaty restricted the US and USSR to underground tests only. 
One wonders about the relationship between hydronuclear testing at TA-49 and the full-scale underground tests extensively performed at the NTS. Underground testing began in 1951 with Buster-Jangle Uncle, a test to determine how big of a crater could be produced by a ground-penetrating weapon. Uncle wasn't really an underground test in the modern sense: the device was emplaced only 17 feet deep and still produced a huge cloud of fallout. It started a trend, though: a similar 1955 test was set 67 feet deep, producing a spectacular crater, before the 1957 Plumbbob Pascal-A was detonated at 486 feet and produced radically less fallout. 1957's Plumbbob Rainier was the first fully-contained underground test, set at the end of a tunnel excavated far into a hillside. This test emitted no fallout at all, proving the possibility of containment. Thus both the idea of emplacing a test device in a deep hole, and the fact that testing underground could contain all of the fallout, were known when the moratorium began. What's very interesting about the hydronuclear tests is the fact that technicians actually worked "downhole," at the bottom of the excavation. Later underground tests were prepared by assembling the test device at the surface, as part of a rocket-like "rack," and then lowering it to the bottom just before detonation. These techniques hadn't yet been developed in the '50s, thus the use of a horizontal tunnel for the first fully-contained test. Many of the racks used for underground testing were designed and built by LANL, but others (called "canisters" in an example of the tendency of the labs to not totally agree on things) were built by Lawrence Livermore. I'm not actually sure which of the two labs started building them first, a question for future research. It does seem likely that the hydronuclear testing at LANL advanced the state of the art in remote instrumentation and underground test design, facilitating the adoption of fully-contained underground tests in the following years. During the three years of hydronuclear testing, shafts were excavated in four testing areas. It's estimated that the test program at TA-49 left about 40kg of plutonium and 93kg of enriched uranium underground, along with 92kg of depleted uranium and 13kg of beryllium (both toxic contaminants). Because of the lack of a nuclear yield, these tests did not create the caverns associated with underground testing. Material from the weapons likely spread within just a 10-20' area, as holes were drilled on a 25' grid and contamination from previous neighboring tests was encountered only once. The tests also produced quite a bit of ancillary waste: things like laboratory equipment, handling gear, cables and tubing, that are not directly radioactive but were contaminated with radioactive or toxic materials. In the fashion typical of the time, this waste was buried on site, often as part of the backfilling of the test shafts. During the excavation of one of the test shafts, 2-M in December 1960, contamination was detected at the surface. It seems that the geology allowed plutonium from a previous test to spread through cracks into the area where 2-M was being drilled. The surface soil contaminated by drill cuttings was buried back in hole 2-M, but this incident made Area 2 the most heavily contaminated part of TA-49. When hydronuclear testing ended in 1961, Area 2 was covered by 6' of gravel and 4-6" of asphalt to better contain any contaminated soil.
Several support buildings on the surface were also contaminated, most notably a building used as a radiochemistry laboratory to support the tests. An underground calibration facility, which allowed test equipment to be exposed to a contained source in an underground chamber, was also built at TA-49 and was similarly contaminated by use with radioisotopes. The Corps of Engineers continued to monitor the hydrology of the site from 1961 to 1970, and test wells and soil samples showed no indication that any contamination was spreading. In 1971, LANL established a new environmental surveillance department that assumed responsibility for legacy sites like TA-49. That department continued to sample wells and soil, and added air sampling. Monitoring of stream sediment downhill from the site was added in the '70s, as many of the contaminants involved can bind to silt and travel with surface water. This monitoring has not found any spread either. That's not to say that everything is perfect. In 1975, a section of the asphalt pad over Area 2 collapsed, leaving a three-foot-deep depression. Rainwater pooled in the depression and then flowed through the gravel into hole 2-M itself, collecting in the bottom of the lining of the former experimental shaft. In 1976, the asphalt cover was replaced, but concerns remained about the water that had already entered 2-M. It could potentially travel out of the hole, continue downwards, and carry contamination into the aquifer around 800' below. Worse, a nearby core sample hole had picked up some water too, suggesting that the water was flowing out of 2-M through cracks and into nearby features. Since the core hole had a slotted liner, it would be easier for water to leave it and soak into the ground below. In 1980, the water that had accumulated in 2-M was removed by lifting about 24 gallons to the surface. While the water was plutonium-contaminated, it fell within acceptable levels for controlled laboratory areas. Further inspections through 1986 did not find additional water in the hole, suggesting that the asphalt pad was continuing to function correctly. Several other investigations were conducted, including the drilling of some additional sample wells and examination of other shafts in the area, to determine if there were other routes for water to enter the Area 2 shafts. Fortunately, no evidence of ongoing water ingress was found. In 1986, TA-49 was designated a hazardous waste site under the Resource Conservation and Recovery Act. Shortly after, the site was evaluated under CERCLA to prioritize remediation. Scoring using the Hazard Ranking System determined a fairly low risk for the site, due to the lack of spread of the contamination and evidence suggesting that it was well contained by the geology. Still, TA-49 remains an environmental remediation site and now falls under a permit granted by the New Mexico Environment Department. This permit requires ongoing monitoring and remediation of any problems with the containment. For example, in 1991 the asphalt cover of Area 2 was found to have cracked and allowed more water to enter the sample wells. The covering was repaired once again, and investigations were made every few years from 1991 to 2015 to check for further contamination. Ongoing monitoring continues today. So far, Area 2 has not been found to pose an unacceptable risk to human health or the environment.
NMED permitting also covers the former radiological laboratory and calibration facility, and infrastructure related to them like a leach field from drains. Sampling found some surface contamination, so the affected soil was removed and disposed of at a hazardous waste landfill where it will be better contained. TA-49 was reused for other purposes after hydronuclear testing. These activities included high explosive experiments contained in metal "bottles," carried out in a metal-lined pit under a small structure called the "bottle house." Part of the bottle house site was later reused to build a huge hydraulic ram used to test steel cables at their failure strength. I am not sure of the exact purpose of this "Cable Test Facility," but given the timeline of its use during the peak of underground testing and its design, I suspect LANL used it as a quality control measure for the cable assemblies used in lowering underground test racks into their shafts. No radioactive materials were involved in either of these activities, but high explosives and hydraulic oil can both be toxic, so both sites were investigated and received some surface soil cleanup. Finally, the NMED permit covers the actual test shafts. These have received numerous investigations over the sixty years since the original tests, and significant contamination is present as expected. However, that contamination does not seem to be spreading, and modeling suggests that it will stay that way. In 2022, the NMED issued Certificates of Completion releasing most of the TA-49 remediation sites without further environmental controls. The test shafts themselves, known to NMED by the punchy name of Solid Waste Management Unit 49-001(e), received a certificate of completion that requires ongoing controls to ensure that the land is used only for industrial purposes. Environmental monitoring of the TA-49 site continues under LANL's environmental management program and federal regulation, but TA-49 is no longer an active remediation project. The plutonium and uranium are just down there, and they'll have to stay.
To emphasize the educational and practical value of teletext (and no doubt attract another funding source), CBS partnered with Los Angeles PBS affiliate KCET, which carried its own teletext programming with a characteristic slant towards enrichment. Meanwhile, in Chicago, station WFLD introduced a teletext service called Keyfax, built on Ceefax technology as a joint venture with Honeywell and telecom company Centel. Despite the lack of consumer availability, teletext was becoming a crowded field---and for the sake of narrative simplicity I am leaving out a whole set of other North American ventures right now. In 1983, there were at least a half dozen stations broadcasting teletext based on British or French technology, and yet, there were zero teletext decoders on the US market. Besides the limits of their experimental licenses, the teletext pilot projects were constrained by the need for largely custom prototype decoders integrated into customers' television sets. Broadcast executives promised the price could come down to $25, but the modifications actually available continued to cost in the hundreds. The director of public affairs at KSL, asked about this odd conundrum of a nearly five-year-old service that you could not buy, pointed out that electronics manufacturers were hesitant to mass produce an inexpensive teletext decoder as long as it was unclear which of several standards would prevail. The reason that no one used teletext, then, was in part the sheer number of different teletext efforts underway. And, of course, things were looking pretty evenly split: CBS had fully endorsed the French-derived system, and was a major nationwide TV network. But most non-network stations with teletext projects had gone the British route. In terms of broadcast channels, it was looking about 50/50. Further complicating things, teletext proper was not the only contender. There was also videotex. The terminology has become somewhat confused, but I will stick to the nomenclature used in the 1980s: teletext services used a continuous one-way broadcast of every page, and decoders simply displayed the requested page when it came around in the loop. Videotex systems were two-way, with the customer using a phone line to request a specific page which was then sent on-demand. Videotex systems tended to operate over telephone lines rather than television cable, but were frequently integrated into television sets. Videotex is not as well remembered as teletext because it was a massive commercial failure, with the very notable exception of the French Minitel. But in the '80s they didn't know that yet, and the UK had its own videotex venture called Prestel. Prestel had the backing of the Post Office, because they ran the telephones and thus stood to make a lot of money off of it. For the exact same reason, US telephone company GTE bought the rights to the system in the US. Videotex is significantly closer to "the internet" in its concept than teletext, and GTE was entering a competitive market. By 1981, Radio Shack had already been selling a videotex terminal for some time: a machine originally developed as the "AgVision" for use with an experimental Kentucky agricultural videotex service and then offered nationwide. This creates an amusing irony: teletext services existed, but it was very difficult to obtain a decoder to use them. Radio Shack was selling a videotex client nationwide, but what service would you use it with?
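To make the distinction concrete, here's a minimal sketch in Python, purely illustrative and not based on any real decoder or head-end, with made-up page numbers: a teletext decoder waits for its page to come around in the broadcast carousel, while a videotex client asks for a specific page and gets it on demand.

```python
# Illustrative sketch only: a teletext "carousel" versus videotex on-demand retrieval.
# Page numbers and contents are invented for the example.

import itertools

PAGES = {n: f"page {n} contents" for n in range(100, 220)}  # a 120-page magazine, like KSL's pilot

def teletext_fetch(wanted: int) -> tuple[str, int]:
    """Wait for the wanted page to come around in the one-way broadcast loop.
    Returns the page and how many other pages were broadcast before it arrived."""
    waited = 0
    for number in itertools.cycle(sorted(PAGES)):
        if number == wanted:
            return PAGES[number], waited
        waited += 1

def videotex_fetch(wanted: int) -> str:
    """Ask the head-end for one page over the phone line; it is sent immediately."""
    return PAGES[wanted]

page, waited = teletext_fetch(215)
print(f"teletext: got {page!r} after letting {waited} other pages go by")
print(f"videotex: got {videotex_fetch(215)!r} on demand")
```

The sketch also hints at why bigger teletext magazines were slower to use: the worst-case wait grows with the number of pages in the loop, so an 800-page carousel was a very different experience from a 120-page one.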
In practice, the "TRS-80 Videotex" as the AgVision came to be known was used mostly as a client for CompuServe and Dow Jones. Neither of these were actually videotex services, using neither the videotex UX model nor the videotex-specific features of the machine. The TRS-80 Videotex was reduced to just a slightly weird terminal with a telephone modem, and never sold well until Radio Shack beefed it up into a complete microcomputer and relaunched it as the TRS-80 Color Computer. Radio Shack also sold a backend videotex system, and apparently some newspapers bought it in an effort to launch a "digital edition." The only one to achieve long-term success seems to have been StarText, a service of the Fort Worth Star-Telegram. It was popular enough to be remembered by many from the Fort Worth area, but there was little national impact. It was clearly not enough to float sales of the TRS-80 Videotex and the whole thing has been forgotten. Well, with such a promising market, GTE brought its US Prestel service to market in 1982. As the TRS-80 dropped its Videotex ambitions, Zenith launched a US television set with a built-in Prestel client. Prestel wasn't the only videotex operation, and GTE wasn't the only company marketing videotex in the US. If the British Post Office and GTE thought they could make money off of something, you know AT&T was somewhere around. They were, and in classic AT&T fashion. During the 1970s, the Canadian Communications Research Center developed a vector-based drawing system. Ontario manufacturer Norpak developed a consumer terminal that could request full-color pages from this system using a videotex-like protocol. Based on the model of Ceefax, the CRC designed a system called Telidon that worked over television (in a more teletext-like fashion) or phone lines (like videotex), with the capability of publishing far more detailed graphics than the simple box drawings of teletext. Telidon had several cool aspects, like the use of a pretty complicated vector-drawing terminal and a flexible protocol designed for interoperability between different communications media. That's the kind of thing AT&T loved, so they joined the effort. With CRC, AT&T developed NABTS, the North American Broadcast Teletext Specification---based on Telidon and intended for one-way broadcast over TV networks. NABTS was complex and expensive compared to Ceefax/ORACLE/WST based systems. A review of KSL's pilot notes how the $40,000 budget for their origination system compared to the cost quoted by AT&T for an NABTS headend: as much as $2 million. While KSL's estimates of $25 for a teletext decoder had not been achieved, the prototypes were still running cheaper than NABTS clients that ran into the hundreds. Still, the graphical capabilities of NABTS were immediately impressive compared to text-only services. Besides, the extensibility of NABTS onto telephone systems, where pages could be delivered on-demand, made it capable of far larger databases. When KSL first introduced teletext, they spoke of a scheme where a customer could call a phone number and, via DTMF menus, request an "extended" page beyond the 120 normally transmitted. They could then request that page on their teletext decoder and, at the end of the normal 120 page loop, it would be sent just for them. I'm not sure if that was ever implemented or just a concept. In any case, videotex systems could function this way natively, with pages requested and retrieved entirely by telephone modem, or using hybrid approaches. 
NABTS won the support of NBC, who launched a pilot NABTS service (confusingly called NBC Teletext) in 1981 and went into full service in 1983. CBS wasn't going to be left behind, and trialed and launched NABTS (as CBS ExtraVision) at the same time. That was an ignominious end for CBS's actual teletext pilot, which quietly shut down without ever having gone into full service. ExtraVision and NBC Teletext are probably the first US interactive TV services that consumers could actually buy and use. Teletext was not dead, though. In 1982, Cincinnati station WKRC ran test broadcasts for a WST-based teletext service called Electra. WKRC's parent company, Taft, partnered with Zenith to develop a real US-market consumer WST decoder for use with the Electra service. In 1983, the same year that ExtraVision and NBC Teletext went live, Zenith teletext decoders appeared on the shelves of Cincinnati stores. They were plug-in modules for recent Zenith televisions, meaning that customers would likely also need to buy a whole new TV to use the service... but it was the only option, and seems to have remained that way for the life of US teletext. I believe that Taft's Electra was the first teletext service to achieve a regular broadcast license. Through the mid-1980s, Electra would expand to more television stations, reaching similar penetration to the videotex services. In 1982, Keyfax (remember Keyfax? It was the one on WFLD in Chicago) had made the pivot from teletext to videotex as well, adopting the Prestel-derived technology from GTE. In 1984, Keyfax gave up on their broadcast television component and became a telephone modem service only. Electra jumped on the now-free VBI lines of WFLD and launched in Chicago. WTBS in Atlanta carried Electra, and then in the biggest expansion of teletext, Electra appeared on SPN---a satellite network that would later become CNBC. While major networks, and major companies like GTE and AT&T, pushed for the videotex NABTS, teletext continued to have its supporters among independent stations. Los Angeles's KTTV started its own teletext service in 1984, which combined locally-developed pages with national news syndicated from Electra. This seemed like the start of a promising model for teletext across independent stations, but it wasn't often repeated. Oh, and KSL? At some point, uncertain to me but before 1984, they switched to NABTS. Let's stop for a moment and recap the situation. Between about 1978 and 1984, over a dozen major US television stations launched interactive TV offerings using four major protocols that fell into two general categories. One of those categories was one-way over television; the other was two-way over telephone or one-way over television, with some operators offering both. Several TV stations switched between types. The largest telcos and TV networks favored one option, but it was significantly more expensive than the other, leading smaller operators to choose differently. The hardware situation was surprisingly straightforward in that, within both teletext and videotex, consumers only had one option and it was very expensive. Oh, and that's just the technical aspects. The business arrangements could get even stranger. Teletext services were generally free, but videotex services often charged a service fee. This was universally true for videotex services offered over telephone and often, but not always, true for videotex services over cable. Were the videotex services over cable even videotex?
Doesn't that contradict the definition I gave earlier? Is that why NBC called their videotex service teletext? And isn't videotex over telephone barely differentiated from computer-based services like CompuServe and The Source that were gaining traction at the same time? I think this all explains the failure of interactive TV in the 1980s. As you've seen, it's not that no one tried. It's that everyone tried, and they were all tripping over each other the entire time. Even in Canada, where the government had sponsored development of the Telidon system from the ground up to be a nationwide standard, the influence of US teletext services created similar confusion. For consumers, there were so many options that they didn't know what to buy, and besides, the price of the hardware was difficult to justify with the few stations that offered teletext. The fact that teletext had been hyped as the "next big thing" by newspapers since 1978, and only reached the market in 1983 as a shambolic mess, surely did little for consumer confidence. You might wonder: where was the FCC during this whole thing? In the US, we do not have a state broadcaster, but we do have state regulation of broadcast media that is really quite strict as to content and form. During the late '70s, under those first experimental licenses, the general perception seemed to be that the FCC was waiting for broadcasters to evaluate the different options before selecting a nationwide standard. Given that the FCC had previously dictated standards for television receivers, it didn't seem like much of a stretch to think that a national-standard teletext decoder might become mandatory equipment on new televisions. Well, it was political. The long, odd experimental period from 1978 to 1983 was basically a result of FCC indecision. The commission wasn't prepared to approve anything as a national standard, but the lack of approval meant that broadcasters weren't really allowed to use anything outside of limited experimental programs. One assumes that they were being aggressively lobbied by every side of the debate, which no doubt factored into the FCC's 1981 decision that teletext content would be unregulated, and 1982 statements from commissioners suggesting that the FCC would not, in fact, adopt any technical standards for teletext. There is another factor wrapped up in this whole story, another tumultuous effort to deliver text over television: closed captioning. PBS introduced closed captioning in 1980, transmitting text over line 21 of the VBI for decoding by a set-top box. There are meaningful technical similarities between closed captioning and teletext, to the extent that the two became competitors. Some broadcasters that added NABTS dropped closed captioning because of incompatibility between the equipment in use. This doesn't seem to have been a real technical constraint, and was perhaps more likely cover for a cost-savings decision, but it generated considerable controversy that led to the National Association of the Deaf organizing for closed captioning and against teletext. The topic of closed captioning continued to haunt interactive TV. TV networks tended to view teletext or videotex as the obvious replacements for line 21 closed captioning, due to their more sophisticated technical features. Of course, the problems that limited interactive TV adoption in general, high cost and fragmentation, made it unappealing to the deaf.
Closed captioning had only just barely become well-standardized in the mid-1980s, and its users were not keen to give it up for another decade of frustration. While some deaf groups did support NABTS, the industry still set up a conflict between closed captioning and interactive TV that must have contributed to the FCC's cold feet. In April of 1983, at the dawn of US broadcast teletext, the FCC voted 6-1 to allow television networks and equipment manufacturers to support any teletext or videotex protocol of their choice. At the same time, they declined to require cable networks to carry teletext content from broadcast television stations, making it more difficult for any TV network to achieve widespread adoption [3]. The FCC adopted what was often termed a "market-based" solution to the question of interactive TV. The market would not provide that solution. It had already failed. In November of 1983, Time ended their teletext service. That's right, Time used to have a TV network and it used to have teletext; it was actually one of the first on the market. It was also the first to fall, but they had company. CBS and NBC had significantly scaled back their NABTS programs, which were failing to make any money because of the lack of hardware that could decode the service. On the WST side of the industry, Taft reported poor adoption of Electra and Zenith reported that they had sold very few decoders, so few that they were considering ending the product line. Taft was having a hard time anyway, going through a rough reorganization in 1986 that seems to have eliminated most of the budget for Electra. Electra actually seems to have still been operational in 1992, an impressive lifespan, but it says something about the level of adoption that we have to speculate as to the time of death. Interactive TV services had so little adoption that they ended unnoticed, and by 1990, almost none remained. Conflict with closed captioning still haunted teletext. There had been some efforts towards integrating teletext decoders into TV sets, by Zenith for example, but the Television Decoder Circuitry Act of 1990 made line 21 closed caption decoding mandatory equipment in new televisions. The added cost of a closed captioning decoder, and the similarity to teletext, seem to have been enough for the manufacturing industry to decide that teletext had lost the fight. Few, possibly no, teletext decoders were commercially available after that date. In Canada, Telidon met a similar fate. Most Telidon services were gone by 1986, and it seems likely that none were ever profitable. On the other hand, the government-sponsored, open-standards nature of Telidon meant that it and descendants like NABTS saw a number of enduring niche uses. Environment Canada distributed weather data via a dedicated Telidon network, and Transport Canada installed Telidon terminals in airports to distribute real-time advisories. Overall, the Telidon project is widely considered a failure, but it has had enduring impact. The original vector drawing language, the idea that had started the whole thing, came to be known as NAPLPS, the North American Presentation Layer Protocol Syntax. NAPLPS had some conceptual similarities to HTML, as Telidon's concept of interlinking did to the World Wide Web. That similarity wasn't just theoretical: Prodigy, the second largest information service after CompuServe and the first to introduce a GUI, ran on NAPLPS. Prodigy is now viewed as an important precursor to the internet, but seen in a different light, it was just another videotex---but one that actually found success.
I know that there are entire branches of North American teletext and videotex and interactive TV services that I did not address in this article, and I've become confused enough in the timeline and details that I'm sure at least one thing above is outright wrong. But that kind of makes the point, doesn't it? The thing about teletext here is that we tried, we really tried, but we badly fumbled it. Even if the internet hadn't happened, I'm skeptical that interactive television efforts would have gotten anywhere without a complete fresh start. And the internet did happen, so abruptly that it nearly killed the whole concept while television carriers were still tossing it around. Nearly killed... but not quite. Even at the beginning of the internet age, televisions were still more widespread than computers. In fact, from a TV point of view, wasn't the internet a tremendous opportunity? Internet technology and more compact computers could enable more sophisticated interactive television services at lower prices. At least, that's what a lot of people thought. I've written before about [Cablesoft](https://computer.rip/2024-11-23-cablesoft.html) and it is just one small part of an entire 1990s renaissance of interactive TV. There are a few major 1980s-era services that I didn't get to here either. Stick around and you'll hear more. You know what's sort of funny? Remember the AgVision, the first form of what became the TRS-80 Color Computer? It was built as a client for AGTEXT, a joint project of Kentucky Educational Television (who carried it on the VBI of their television network) and the Kentucky College of Agriculture. At some point, AGTEXT switched over to the line 21 closed captioning protocol and operated until 1998. It was almost the first teletext service, and it may very well have been the last. [1] There's this weird thing going on where I keep tangentially bringing up Bonnevilles. I think it's just a coincidence of what order I picked topics off of my list but maybe it reflects some underlying truth about the way my brain works. This Bonneville, Bonneville International, is a subsidiary of the LDS Church that owns television and radio stations. It is unrelated, except by being indirectly named after the same person, to the Bonneville Power Administration that operated an early large-area microwave communications network. [2] There were of course large TV networks in the US, and they will factor into the story later, but they still relied on a network of independent but affiliated stations to reach their actual audience---which meant a degree of technical inconsistency that made it hard to roll out nationwide VBI services. Providing another hint at how centralization vs. decentralization affected these datacasting services, adoption of new datacasting technologies in the US has often been highest among PBS and NPR affiliates, our closest equivalents to something like the BBC or ITV. [3] The regulatory relationship between broadcast TV stations, cable network TV stations, and cable carriers is a complex one. The FCC's role in refereeing the competition between these different parts of the television industry, which are all generally trying to kill each other off, has led to many odd details of US television regulation and some of the everyday weirdness of the American TV experience.
It's also another area where the US television industry stands in contrast to the European television industry, where state-owned or state-chartered broadcasting meant that the slate of channels available to a consumer was generally the same regardless of how they physically received them. Not so in the US! This whole thing will probably get its own article one day.