The majority of US states have something called a "Department of Motor Vehicles," or DMV. Actually, the universality of the term "DMV" seems to be overstated. A more general term is "motor vehicle administrator," used for example by the American Association of Motor Vehicle Administrators to address the inconsistent terminology. Not happy with merely noting that I live in a state with an "MVD" rather than a "DMV," I did the kind of serious investigative journalism that you have come to expect from me. Of These Fifty United States plus six territories, I count 28 DMVs, 5 MVDs, 5 BMVs, 2 OMVs, 2 "Driver Services," and the remainder are hard to describe succinctly. In fact, there's a surprising amount of ambiguity across the board. A number of states don't seem to formally have an agency or division called the DMV, but nonetheless use the term "DMV" to describe something like the Office of Driver Licensing of the Department of Transportation. Indeed, the very topic of where the motor...


More from computers are bad

2025-08-16 passive microwave repeaters

One of the most significant single advancements in telecommunications technology was the development of microwave radio. Essentially an evolution of radar, the first practical microwave telephone system appeared in the middle of the Second World War. By the time Japan surrendered, AT&T had largely abandoned their plan to build an extensive nationwide network of coaxial telephone cables. Microwave relay offered greater capacity at a lower cost. When Japan and the US signed their peace treaty in 1951, it was broadcast from coast to coast over what AT&T called the "skyway": the first transcontinental telephone lead made up entirely of radio waves. The fact that live television coverage could be sent over the microwave system demonstrated its core advantage. The bandwidth of microwave links, their capacity, was truly enormous. Within the decade, a single microwave antenna could handle over 1,000 simultaneous calls.

Microwave's great capacity, its chief advantage, comes from the high frequencies and large bandwidths involved. The design of microwave-frequency radio electronics was an engineering challenge that was aggressively attacked during the war because microwave frequencies' short wavelengths made them especially suitable for radar. The cavity magnetron, one of the first practical microwave transmitters, was an invention of such import that it was the UK's key contribution to a technical partnership that led to the UK's access to US nuclear weapons research. Unlike the "peaceful atom," though, the "peaceful microwave" spread fast after the war. By the end of the 1950s, most long-distance telephone calls were carried over microwave. While coaxial long-distance carriers such as L-carrier saw continued use in especially congested areas, the supremacy of microwave for telephone communications would not fall until the adoption of fiber optics in the 1980s.

The high frequency, and short wavelength, of microwave radio is a limitation as well as an advantage. Historically, "microwave" was often used to refer to radio bands above VHF, including UHF. As RF technology improved, microwave shifted higher, and microwave telephone links operated mostly between 1 and 9 GHz. These frequencies are well beyond the limits of beyond-line-of-sight propagation mechanisms, and penetrate and reflect only poorly. Microwave signals could be received over 40 or 50 miles in ideal conditions, but the two antennas needed to be within direct line of sight. Further complicating planning, microwave signals are especially vulnerable to interference from obstacles within the "Fresnel zone," the region around the direct line of sight through which most of the received RF energy passes.

Today, these problems have become relatively easy to overcome. Microwave relays, stations that receive signals and rebroadcast them further along a route, are located in positions of geographical advantage. We tend to think of mountain peaks and rocky ridges, but 1950s microwave equipment was large and required significant power and cooling, not to mention frequent attendance by a technician for inspection and adjustment. This was a tube-based technology, with analog and electromechanical control. Microwave stations ran over a thousand square feet, often built of thick hardened concrete, both a reflection of the post-war climate and a way to maintain the consistent temperatures critical to keeping analog equipment in calibration. Where commercial power wasn't available, they consumed a constant supply of diesel fuel.
It simply wasn't practical to put microwave stations in remote locations. In the flatter regions of the country, locating microwave stations on hills gave them appreciably better range with few downsides. This strategy often stopped at the Rocky Mountains. In much of the American West, telephone construction had always been exceptionally difficult. Open-wire telephone leads had been installed through incredible terrain by the dedication and sacrifice of crews of men and horses. Wire strung over telephone poles proved able to handle steep inclines and rocky badlands, so long as the poles could be set---although inclement weather on the route could make calls difficult to understand. When the first transcontinental coaxial lead was installed, the route was carefully planned to follow flat valley floors whenever possible. This was an important requirement since it was installed mostly by mechanized equipment, heavy machines that were incapable of navigating the obstacles the old pole and wire crews had crossed on foot.

The first installations of microwave adopted largely the same strategy. Despite the commanding views offered by mountains on both sides of the Rio Grande Valley, AT&T's microwave stations are often found on low mesas or even at the center of the valley floor. Later installations, and those in the especially mountainous states where level ground was scarce, became more ambitious. At Mt. Rose, in Nevada, an aerial tramway carried technicians up the slope to the roof of the microwave station---the only access during winter, when snowpack reached high up the building's walls. Expansion in the 1960s involved increasing use of helicopters as the main access to stations, although roads still had to be graded for construction and electrical service.

These special arrangements for mountain locations were expensive, within the reach of the Long Lines department's monopoly-backed budget but difficult for anyone else, even Bell Operating Companies, to sustain. And the West---where these difficult conditions were encountered the most---also contained some of the least profitable telephone territory, areas where there was no interconnected phone service at all until government subsidy under the Rural Electrification Act. Independent telephone companies and telephone cooperatives, many of them scrappy operations that had expanded out from the manager's personal home, could scarcely afford a mountaintop fortress and a helilift operation to sustain it. For the telephone industry's many small players, and even the more rural Bell Operating Companies, another property of microwave became critical: with a little engineering, you can bounce it off of a mirror.

James Kreitzberg was, at least as the obituary reads, something of a wunderkind. Raised in Missoula, Montana, he earned his pilot's license at 15 and joined the Army Air Corps as soon as he was allowed. The Second World War came to a close shortly after, and so he went on to the University of Washington, where he studied aeronautical engineering, and then went back home to Montana, taking up work as an engineer at one of the state's largest electrical utilities. His brother, George, had taken a similar path: a stint in the Marine Corps and an aeronautical engineering degree from Oklahoma. While James worked at Montana Power in Butte, George moved to Salem, Oregon, where he started an aviation company that supplemented its cropdusting revenue by modifying Army-surplus aircraft for other uses.
Montana Power operated hydroelectric dams, coal mines, and power plants, a portfolio of facilities across a sparse and mountainous state that must have made communications a difficult problem. During the 1950s, James was involved in an effort to build a new private telephone system connecting the utility's facilities. It required negotiating some type of obstacle, perhaps a mountain pass. James proposed an idea: a mirror.

Because the wavelengths of microwaves are so short, say 30cm to 5cm (1GHz-6GHz), it's practical to build a flat metallic panel that spans multiple wavelengths. Such a panel will function like a reflector or mirror, redirecting microwave energy at an angle equal to the angle at which it arrived. Much like you can redirect a laser using reflectors, you can also redirect a microwave signal. Some early commentators referred to this technique as a "radio mirror," but by the 1950s the use of "active" microwave repeaters with receivers and transmitters had become well established, so by comparison reflectors came to be known as "passive repeaters."

James believed a passive repeater to be a practical solution, but Montana Power lacked the expertise to build one. For a passive repeater to work efficiently, its surface must be very flat and regular, even under varying temperature. Wind loading had to be accounted for, and the face had to be sufficiently rigid not to flex under the wind. Of course, with his education in aeronautics, James knew that similar problems were encountered in aircraft: the need for lightweight metal structures with surfaces that kept an engineered shape. Wasn't he fortunate, then, that his brother owned a shop that repaired and modified aircraft.

I know very little about the original Montana Power installation, which is unfortunate, as it may very well be the first passive microwave repeater ever put into service. What I do know is that in the fall of 1955, James called his brother George and asked if his company, Kreitzberg Aviation, could fabricate a passive repeater for Montana Power. George, he later recounted, said "I can build anything you can draw." The repeater was made in a hangar on the side of Salem's McNary Field, erected by the flightline as a test, and then shipped in parts to Montana for reassembly in the field. It worked. It worked so well, in fact, that as word of Montana Power's new telephone system spread, other utilities wrote to inquire about obtaining passive repeaters for their own telephone systems. In 1956, James Kreitzberg moved to Salem and the two brothers formed the Microflect Company. From the sidelines of McNary Field, Microflect built aluminum "billboards" that can still be found on mountain passes and forested slopes throughout the western United States, and in many other parts of the world where mountainous terrain, adverse weather, and limited utilities made the construction of active repeaters impractical.

Passive repeaters can be used in two basic configurations, defined by the angle at which the signal is reflected. In the first case, the reflection angle is around 90 degrees (the closer to this ideal angle, of course, the more efficiently the repeater performs). This situation is often encountered when there is an obstacle that the microwave path needs to "maneuver" around---a ridge, for example, or even a large structure like a building between the two sites. In the second case, the microwave signal must travel in something closer to a straight line---over a mountain pass between two towns, for example.
When the reflection angle is greater than 135 degrees, the use of a single passive repeater becomes inefficient or impossible, so Microflect recommends the use of two. Arranged like a dogleg or periscope, the two repeaters reflect the signal to the side and then onward in the intended direction. Microflect published an excellent engineering manual with many examples of passive repeater installations along with the signal calculations.

You might think that passive repeaters would be so inefficient as to be impractical, especially when more than one was required, but this is surprisingly untrue. Flat aluminum panels are almost completely efficient reflectors of microwave, and somewhat counterintuitively, passive repeaters can even provide gain. In an active repeater, it's easy to see how gain is achieved: power is added. A receiver picks up a signal, and then a powered transmitter retransmits it, stronger than it was before. But passive repeaters require no power at all, one of their key advantages. How do they pull off this feat? The design manual explains with an ITU definition of gain that only an engineer could love, but in an article for "Electronics World," Microflect field engineer Ray Thrower provided a more intuitive explanation. A passive repeater, he writes, functions essentially identically to a parabolic antenna, or a telescope:

Quite probably the difficulty many people have in understanding how the passive repeater, a flat surface, can have gain relates back to the common misconception about parabolic antennas. It is commonly believed that it is the focusing characteristics of the parabolic antenna that gives it its gain. Therefore, goes the faulty conclusion, how can the passive repeater have gain? The truth is, it isn't focusing that gives a parabola its gain; it is its larger projected aperture. The focusing is a convenient means of transition from a large aperture (the dish) to a small aperture (the feed device). And since it is projected aperture that provides gain, rather than focusing, the passive repeater with its larger aperture will provide high gain that can be calculated and measured reliably. A check of the method of determining antenna gain in any antenna engineering handbook will show that focusing does not enter into the basic gain calculation.

We can also think of it this way: the beam of energy emitted by a microwave antenna expands in an arc as it travels, dissipating the "density" of the energy such that a dish antenna of the same size will receive a weaker and weaker signal as it moves further away (this is the major component of path loss, the "dilution" of the energy over space). A passive repeater employs a reflecting surface which is quite large, larger than practical antennas, and so it "collects" a large cross section of that energy for reemission. Projected aperture is the effective "window" of energy seen by the antenna at the active terminal as it views the passive repeater. The passive repeater also sees the antenna as a "window" of energy. If the two are far enough away from one another, they will appear to each other as essentially point sources. In practice, a passive repeater functions a bit like an active repeater that collects a signal with a large antenna and then reemits it with a smaller directional antenna. To be quite honest, I still find it a bit challenging to intuit this effect, but the mathematics bear it out as well.
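To put rough numbers on that intuition, here is a minimal sketch of the standard aperture-gain relation for a flat "billboard" reflector. The formula and the metric conversions are mine, not Microflect's; their tables presumably fold in angle and efficiency corrections, which is roughly where the small differences from the catalog figures quoted in a moment come from.

    import math

    def passive_repeater_gain_db(width_ft, height_ft, freq_ghz, included_angle_deg=0.0):
        """Far-field gain (dB) of a flat reflector, counted once in a link budget."""
        area_m2 = (width_ft * 0.3048) * (height_ft * 0.3048)   # physical panel area
        wavelength_m = 0.299792458 / freq_ghz                  # lambda = c / f
        # "Projected aperture": the panel as seen along the bisector of the two paths.
        effective_area = area_m2 * math.cos(math.radians(included_angle_deg) / 2)
        # The panel acts as both a receiving and a re-radiating aperture,
        # hence 20*log10 rather than 10*log10.
        return 20 * math.log10(4 * math.pi * effective_area / wavelength_m ** 2)

    print(round(passive_repeater_gain_db(10, 8, 6.175), 1))    # ~92.0 dB for an 8'x10' panel
    print(round(passive_repeater_gain_db(60, 40, 6.175), 1))   # ~121.5 dB for a 40'x60' panel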
Interestingly, the effect only occurs when the passive repeater is far enough from either terminal so as to be usefully approximated as a point source. Microflect refers to this as the far field condition. When the passive repeater is very close to one of the active sites, within the near field, it is more effective to consider the passive reflector as part of the transmitting antenna itself, and disregard it for path loss calculations. This dichotomy between far field and near field behavior is actually quite common in antenna engineering (where an "antenna" is often multiple radiating and nonradiating elements within the near field of each other), but it's yet another of the things that gives antenna design the feeling of a dark art.

One of the most striking things about passive repeaters is their size. As a passive repeater becomes larger, it reflects a larger cross section of the RF energy and thus provides more gain. Much like with dish or horn antennas, the size of a passive repeater can be traded off with transmitter power (and the size of other antennas involved) to design an economical solution. Microflect offered standard sizes ranging from 8'x10' (gain at around 6.175GHz: 90.95 dB) to 40'x60' (120.48 dB, after a "rough estimate" reduction of 1 dB for the interference effects possible when such a short wavelength reflects off of such a large panel, invoking multipath effects). By comparison, a typical active microwave repeater site might provide a gain of around 140 dB---and we must bear in mind that dB is a logarithmic unit, so the difference between 121 and 140 is bigger than it sounds. Still, there's a reason that logarithms are used when discussing radio paths... in practice, it is orders of magnitude that make the difference in reliable reception. The reduction in gain from an active repeater to a passive repeater can be made up for with higher-gain terminal antennas and more powerful transmitters. Given that the terminal sites are often at far more convenient locations than the passive repeater, that tradeoff can be well worth it. Keep in mind that, as Microflect emphasizes, passive repeaters require no power and very little ("virtually no") maintenance.

Microflect passive repeaters were manufactured in sections that bolted together in the field, and the support structures provided for fine adjustment of the panel alignment after mounting. These features made it possible to install passive repeaters by helicopter onto simple site-built foundations, and many are found on mountainsides that are difficult to reach even on foot. Even in less difficult locations, these advantages made passive repeaters less expensive to install and operate than active repeaters. Even when the repeater site was readily accessible, passives were often selected simply for cost savings.

Let's consider some examples of passive repeater installations. Microflect was born of the power industry, and electrical generators and utilities remained among their best customers. Even today, you can find passive repeaters at many hydroelectric dams. There is a practical need to communicate by telephone between a dispatch center (often at the utility's city headquarters) and the operators in the dam's powerhouse, but the powerhouse is at the base of the dam, often in a canyon where microwave signals are completely blocked. A passive repeater set on the canyon rim, at an angle downwards, solves the problem by redirecting the signal from horizontal to vertical.
Such an installation can be seen, for example, at the Hoover Dam. In some sense, these passive repeaters "relocate" the radio equipment from the canyon rim (where the desirable signal path is located) to a more convenient location with the other powerhouse equipment. Because of the short distance from the powerhouse to the repeater, these passives were usually small.

This idea can be extended to relocating en-route repeaters to a more serviceable site. In Glacier National Park, Mountain States Telephone and Telegraph installed a telephone system to serve various small towns and National Park Service sites. Glacier is incredibly mountainous, with only narrow valleys and passes. The only points with long sight ranges tend to be very inaccessible. Mt. Furlong provided ideal line of sight to East Glacier and Essex along Highway 2, but it would have been extremely challenging to install and maintain a microwave site on the steep peak. Instead, two passive repeaters were installed near the mountaintop, redirecting the signals from those two destinations to an active repeater installed downslope near the highway and railroad.

This example raises another advantage of passive repeaters: their reduced environmental impact, something that Microflect emphasized as the environmental movement of the 1970s made agencies like the Forest Service (which controlled many of the most appealing mountaintop radio sites) less willing to grant permits that would lead to extensive environmental disruption. Construction by helicopter and the lack of a need for power meant that passive repeaters could be installed without extensive clearing of trees for roads and power line rights of way. They eliminated the persistent problem of leakage from standby generator fuel tanks. Despite their large size, passive repeaters could be camouflaged. Many in national forests were painted green to make them less conspicuous. And while they did have a large surface area, Microflect argued that since they could be installed on slopes rather than requiring a large leveled area, passive repeaters would often fall below the ridge or treeline behind them. This made them less visually conspicuous than a traditional active repeater site that would require a tower. Indeed, passive repeaters are only rarely found on towers, with most elevated off the ground only far enough for the bottom edge to be free of undergrowth and snow.

Other passive repeater installations were less a result of exceptionally difficult terrain and more a simple cost optimization. In rural Nevada, Nevada Bell and a dozen independents and coops faced the challenge of connecting small towns with ridges between them. The need for an active repeater at the top of each ridge, even for short routes, made these rural lines excessively expensive. Instead, such towns were linked with dual passive repeaters on the ridge in a "straight through" configuration, allowing microwave antennas at the towns' existing telephone exchange buildings to reach each other. This was the case with the installation I photographed above Pioche. I have been frustratingly unable to confirm the original use of these repeaters, but from context they were likely installed by the Lincoln County Telephone System to link their "hub" microwave site at Mt. Wilson (with direct sight to several towns) to their site near Caliente. The Microflect manual describes, as an example, a very similar installation connecting Elko to Carlin.
Two 20'x32' passive repeaters on a ridge between the two (unfortunately since demolished) provided a direct connection between the two telephone exchanges. As an example of a typical use, it might be interesting to look at the manual's calculations for this route. From Elko to the repeaters is 13.73 miles, the repeaters are close enough to each other as to be in near field (and so considered as a single antenna system), and from the repeaters to Carlin is 6.71 miles. The first repeater reflects the signal at a 68 degree angle, then the second reflects it back at a 45 degree angle, for a net change in direction of 23 degrees---a mostly straight route. The transmitter produces 33.0 dBm, both antennas provide a 34.5 dB gain, and the passive repeater assembly provides 88 dB gain (this calculated basically by consulting a table in the manual). That means there is 190 dB of gain in the total system. The 6.71 and 13.73 mile paths add up to 244 dB of free space path loss, and Microflect throws in a few more dB of loss to account for connectors and cables and the less than ideal performance of the double passive repeater. The net result is a received signal of -58 dBm, which is plenty acceptable for a 72-channel voice carrier system. This is all done at a significantly lower price than the construction of a full radio site on the ridge [1].

The combination of relocating radio equipment to a more convenient location and simply saving money leads to one of the iconic applications of passive repeaters, the "periscope" or "flyswatter" antenna. Microwave antennas of the 1960s were still quite large and heavy, and most were pressurized. You needed a sturdy tower to support one, and then a way to get up the tower for regular maintenance. This led to most AT&T microwave sites using short, squat square towers, often with surprisingly convenient staircases to access the antenna decks. In areas where a very tall tower was needed, it might just not be practical to build one strong enough. You could often dodge the problem by putting the site up a hill, but that wasn't always possible, and besides, good hilltop sites that weren't already taken became harder to find.

When Western Union built out their microwave network, they widely adopted the flyswatter antenna as an optimization. Here's how it works: the actual microwave antenna is installed directly on the roof of the equipment building, facing up. Only short waveguides are needed, weight isn't an issue, and technicians can conveniently service the antenna without even fall protection. Then, at the top of a tall guyed lattice tower similar to an AM mast, a passive repeater is installed at a 45 degree angle to the ground, redirecting the signal from the rooftop antenna to the horizontal. The passive repeater is much lighter than the antenna, allowing for a thinner tower, and will rarely if ever need service. Western Union often employed two side-by-side lattice towers with a "crossbar" between them at the top for convenient mounting of reflectors facing each direction, and similar towers were used in some other installations such as the FAA's radar data links. Some of these towers are still in use, although generally with modern lightweight drum antennas replacing the reflectors.

Passive microwave repeaters experienced their peak popularity during the 1960s and 1970s, as the technology became mature and communications infrastructure proliferated.
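Stepping back to the Elko-Carlin budget for a moment, the manual's arithmetic is easy to sanity-check in a few lines. This is a sketch only: the roughly 2 GHz carrier frequency and the 4 dB of miscellaneous losses are my assumptions, chosen because they reproduce the ~244 dB and -58 dBm figures above; the other numbers are taken directly from the text.

    import math

    def fspl_db(miles, freq_mhz):
        """Free-space path loss in dB for a path length in miles and a frequency in MHz."""
        return 36.58 + 20 * math.log10(miles) + 20 * math.log10(freq_mhz)

    freq_mhz = 2000.0                 # assumed light-route carrier around 2 GHz
    tx_dbm = 33.0                     # transmitter output
    antenna_gain_db = 34.5            # each terminal antenna
    passive_gain_db = 88.0            # double 20'x32' repeater, from the manual's table
    misc_loss_db = 4.0                # connectors, cables, double-passive inefficiency (assumed)

    path_loss_db = fspl_db(13.73, freq_mhz) + fspl_db(6.71, freq_mhz)
    rx_dbm = tx_dbm + 2 * antenna_gain_db + passive_gain_db - path_loss_db - misc_loss_db
    print(round(path_loss_db, 1), round(rx_dbm, 1))   # ~244.5 dB path loss, ~-58.5 dBm received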
Microflect manufactured thousands of units from their new, larger warehouse across the street from their old hangar on McNary Field. Microflect's customer list grew to just about every entity in the Bell System, from Long Lines to Western Electric to nearly all of the BOCs. The list includes GTE, dozens of smaller independent telephone companies, most of the nation's major railroads, and electrical utilities from the original Montana Power to the Tennessee Valley Authority. Microflect repeaters were used by ITT Arctic Services and RCA Alascom in the far north, and overseas by oil companies and telecoms on islands and in mountainous northern Europe. In Hawaii, a single passive repeater dodged a mountain to connect Lanai City telephones to the Hawaii Telephone Company network at Tantalus on Oahu---nearly 70 miles in one jump. In Nevada, six passive repeaters joined two active sites to connect six substations to the Sierra Pacific Power Company's control center in Reno. Jamaica's first high-capacity telephone network involved 11 passive repeaters, one as large as 40'x60'. The Rocky Mountains are still dotted with passive repeaters, structures that are sometimes hard to spot but seem to loom over the forest once noticed. In Seligman, AZ, a sun-faded passive repeater looks over the cemetery. BC Telephone installed passive repeaters to phase out active sites that were inaccessible for maintenance during the winter.

Passive repeaters were, it turns out, quite common---and yet they are little known today. First, it cannot be ignored that passive repeaters are most common in areas where communications infrastructure was built post-1960 through difficult terrain. In North America, this means mostly the West [2], far away from the Eastern cities where we think of telephone history being concentrated. Second, the days of passive repeaters were relatively short. After widespread adoption in the '60s, fiber optics began to cut into microwave networks during the '80s and rendered microwave long-distance links largely obsolete by the late '90s. Considerable improvements in cable-laying equipment, not to mention the lighter and more durable cables, made fiber optics easier to install in difficult terrain than coaxial had ever been. Besides, during the 1990s, more widespread electrical infrastructure, miniaturization of radio equipment, and practical photovoltaic solar systems all combined to make active repeaters easier to install. Today, active repeater systems installed by helicopter with independent power supplies are not that unusual, supporting cellular service in the Mojave Desert, for example. Most passive repeaters have been obsoleted by changes in communications networks and technologies. Satellite communications offer an even more cost effective option for the most difficult installations, and there really aren't that many places left that a small active microwave site can't be installed.

Moreover, little has been done to preserve the history of passive repeaters. In the wake of the 2015 Wired article on the Long Lines network, considerable enthusiasm has been directed towards former AT&T microwave stations, which were mostly preserved by their haphazard transfer to companies like American Tower. Passive repeaters, lacking even the minimal commercial potential of old AT&T sites, were mostly abandoned in place. Because many are found in national forests and other resource management areas, they have often been demolished as part of land restoration.
In 2019, a historic resources report was written on the Bonneville Power Administration's extensive microwave network. It was prepared to address the responsibility that federal agencies have for historical preservation under the National Historic Preservation Act and National Environmental Policy Act, policies intended to ensure that at least the government takes measures to preserve history before demolishing artifacts. The report reads: "Due to their limited features, passive repeaters are not considered historic resources, and are not evaluated as part of this study."

In 1995, Valmont Industries acquired Microflect. Valmont is known mostly for their agricultural products, including center-pivot irrigation systems, but they had expanded their agricultural windmill business into a general infrastructure division that manufactured radio masts and communication towers. For a time, Valmont continued to manufacture passive repeaters as Valmont Microflect, but business seems to have dried up. Today, Valmont Structures manufactures modular telecom towers from their facility across the street from McNary Field in Salem, Oregon. A Salem local, descended from early Microflect employees, once shared a set of photos on Facebook: a beat-up hangar with a sign reading "Aircraft Repair Center," and in front of it, stacks of aluminum panel sections. Microflect workers erecting a passive repeater in front of a Douglas A-26. Rows of reflector sections beside a Shell aviation fuel station. George Kreitzberg died in 2004, James in 2017. As of 2025, Valmont no longer manufactures passive repeaters.

Postscript

If you are interested in the history of passive repeaters, there are a few useful tips I can give you. Nearly all passive repeaters in North America were built by Microflect, so they have a very consistent design. Locals sometimes confuse passive repeaters with old billboards or even drive-in theater screens; the clearest way to differentiate them is that passive repeaters have a face made up of aluminum modules with deep sidewalls for rigidity and flatness. Take a look at the Microflect manual for many photos. Because passive repeaters are passive, they do not require a radio license proper. However, for site-based microwave licenses, the FCC does require that passive repeaters be included in paths (i.e., a license will be for an active site but with a passive repeater as the location at the other end of the path). These sites are almost always listed with a name ending in "PR".

I don't have any straight answer on whether or not any passive repeaters are still in use. It has likely become very rare, but there are probably still examples. Two sources suggest that Rachel, NV still relies on a passive repeater for telephone and DSL. I have not been able to confirm that, and the tendency of these systems to be abandoned in place means that people sometimes think they are in use long after they were retired. I can find documentation of a new utility SCADA system being installed, making use of existing passive repeaters, as recently as 2017.

[1] If you find these dB gain/loss calculations confusing, you are not alone. It is deceptively simple in a way that was hard for me to learn, and perhaps I will devote an article to it one day.

[2] Although not exclusively, with installations in places like Vermont and Newfoundland where similar constraints applied.

2025-07-27 a technical history of alcatraz

Alcatraz first operated as a prison in 1859, when the military fort held its first convicted soldiers. The prison technology of the time was simple, consisting of little more than a basement room with a trap-door entrance. Only small numbers of prisoners were held in this period, but it established Alcatraz as a center of incarceration. Later, the Civil War triggered construction of a "political prison," a term with fewer negative connotations at the time, for Confederate sympathizers. This prison was more purpose-built (although actually a modification of an existing shop), but it was small and not designed for an especially high security level. It presaged, though, a much larger construction project to come.

Alcatraz had several properties that made it an attractive prison. First, it had seen heavy military construction as a Civil War defensive facility, but just decades later improvements in artillery made its fortifications obsolete. That left Alcatraz surplus property, a complete military installation available for new use. Second, Alcatraz was formidable. The small island was made up of steep rock walls, and it was miles from shore in a bay known for its strong currents. Escape, even for prisoners who had seized control of the island, would be exceptionally difficult.

These advantages were also limitations. Alcatraz was isolated and difficult to support, requiring a substantial roster of military personnel to ferry supplies back and forth. There were no connections to the mainland, requiring on-site power and water plants. Corrosive sea spray, sent over the island by the Bay's strong winds, laid perpetual siege to the island. Buildings needed constant maintenance; rust covered everything. Alcatraz was not just a famous prison, it was a particularly complicated one.

In 1909, Alcatraz lost its previous defensive role and pivoted entirely to serving as a military prison. The Citadel, a hardened barracks building dating to the original fortifications, was partially demolished. On top of it, a new cellblock was built. This was a purpose-built prison, designed to house several hundred inmates under high security conditions. Unfortunately, few records seem to survive from the construction and operation of the cellblock as a disciplinary barracks. At some point, a manual telephone exchange was installed to provide service between buildings on the island. I only really know that because it was recorded as being removed later on.

Communications to and from Alcatraz were a challenge. Radio and even light signals were used to convey messages between the island and other military installations on the bay. There was a constant struggle to maintain cables. Early efforts to lay cables in the bay were less about communications and more about triggering. Starting in 1883, the Army Corps of Engineers began the installation of "torpedoes" in the San Francisco bay. These were different from what we think of as torpedoes today: they were essentially remotely-operated mines. Each device floated in the water by its own buoyancy, anchored to the bottom by a cable that then ran to shore. An electrical signal sent down the cable detonated the torpedo. The system was intended primarily to protect the bay from submarines, a new threat that often required technically complex defenses. Submarines are, of course, difficult to spot. To make the torpedoes effective, the Army had to devise a targeting system.
Observation posts on each side of the Golden Gate made sightings of possible submarines and reported them to a control post, where they were plotted on a map. With a threat confirmed, the control post would begin to detonate nearby torpedoes. A second set of observation posts, and a second line of torpedoes, were located further into the bay to address any submarines that made it through the first barrage. By 1891, there were three such control points in total: Fort Mason, Angel Island, and Yerba Buena. The rather florid San Francisco Examiner of the day described the control point at Fort Mason, a "chamber of death and destruction" in a tunnel twenty feet underground. The Army "death-dealers" that manned the plotting table in that bunker had access to a board that "greatly resemble[d] the switch board in the great operating rooms of the telephone companies." By cords and buttons, they could select chains of mines and send the signal to fire.

NPS historians found that a torpedo control point had been planned at Alcatraz, and one of the fortifications modified to accommodate it, but it never seems to have been used. The 1891 article gives a hint of the reason, noting that the line from Alcatraz to Fort Mason was "favorable for a line of torpedoes" but that currents were so strong that it was difficult to keep them anchored. Perhaps this problem was discovered after construction was already underway.

Somewhere around 1887-1888, the Army Signal Corps had joined the cable-laying fray. A telegraph cable was constructed from the Presidio to Alcatraz, and provided good service except for the many times that it was dragged up by anchors and severed. This was a tremendous problem: in 1898, Gen. A. W. Greely of the Signal Corps called San Francisco the "worst bay in the country" for cable laying and said that no cable across the Golden Gate had lasted more than three years. The General attributed the problem mainly to the heavy shipping traffic, but I suspect that the notorious currents must have been a factor in just how many anchors were dragged through cables [1].

In 1889, a brand new Army telegraph cable was announced, one that would run from Alcatraz to Angel Island, and then from Angel Island to Marin County. An existing commercial cable crossed the Golden Gate, providing a connection all the way to the Presidio. The many failures of Alcatraz cables make it difficult to keep track. For example, a cable from Fort Mason to Alcatraz Island was apparently laid in 1891---but a few years later, it was lamented that Alcatraz's only cable connection to Fort Mason was indirect, via the 1889 Angel Island cable. Presumably the 1891 cable was damaged at some point and not replaced, but that event doesn't seem to have made the papers (or at least my search results!).

In 1900, a Signal Corps officer on Angel Island made a routine check of the cable to Alcatraz, finding it in good working order---but noticing that a "four masted schooner... in direct line with the cable" seemed to be in trouble just off the island and was being assisted by a tug. That evening, the officer returned to the cable landing box to find the ship gone... along with the cable. A French ship, "Lamoriciere," had drifted from anchor overnight. A Signal Corps sergeant, apparently having spoken with harbor officials, reported that the ship would have run completely aground had the anchor not caught the Alcatraz cable and pulled it taut.
Of course, the efforts of the tug to free Lamoriciere seem to have freed a little more than intended, and the cable was broken away from its landing. "Its end has been carried into the bay and probably quite a distance from land," the Signal Corps reported. This ongoing struggle, of laying new cables to Alcatraz and then seeing them dragged away a few years later, has dogged the island basically to the modern day---when we have finally just given up. Today, as during many points in its history, Alcatraz must generate its own power and communicate with the mainland via radio.

When the Bureau of Prisons took control of Alcatraz in 1933, they installed entirely new radio systems. A marine AM radio was used to reach the Coast Guard, their main point of contact in any emergency. Another radio was used to contact "Alcatraz Landing," from which BOP ferries sailed, and over the years several radios were installed to permit direct communications with military installations and police departments around the Bay Area. At some point, equipment was made available to connect telephone calls to the island. I'm not sure if this was manual patching by BOP or Coast Guard radio operators, or if a contract was made with PT&T to provide telephone service by radio. Such an arrangement seems to have been in place by 1937, when an unexplained distress call from the island made the warden impossible to contact (by the press or Bureau of Prisons) because "all lines [were] tied up."

Unfortunately I have not been able to find much on the radiotelephone arrangements. The BOP, no doubt concerned about security, did not follow the Army's habit of announcing new construction projects to the press. Fortunately, the BOP-era history of Alcatraz is much better covered by modern NPS documentation than the Army era (presumably because the more recent closure of the BOP prison meant that much of the original documentation was archived). Unfortunately, the NPS reports are mostly concerned with the history of the structures on the island and do not pay much attention to outside communications or the infrastructure that supported it.

Internal arrangements on the island almost completely changed when the BOP took over. The Army had left Alcatraz in a degree of disrepair (discussions about closing it having started by at least 1913), and besides, the BOP intended to provide a much higher level of security than the Army had. Extensive renovations were made of the main cellblock and many supporting buildings from 1933 to about 1939. The 1930s had seen a great deal of innovation in technical security. Technologies like electrostatic and microwave motion sensors were available in early forms. On Alcatraz, though, the island was small and buildings tightly spaced. The prison staff, and in some cases their families, would be housed on the island just a stone's throw from the cellblock. That meant there would be quite a few people moving around exterior to the prison, ruling out motion sensors as a means of escape detection. Exterior security would instead be provided by guard and dog patrols.

There was still some cutting-edge technical security when Alcatraz opened, including early metal detectors. At first, the BOP contracted the Teletouch Corporation of New York City. Teletouch, a manufacturer of burglar alarms and other electronic devices, was owned by or at least affiliated with famed electromagnetics inventor and Soviet spy Leon Theremin.
Besides the instrument we remember him for today, Theremin had invented a number of devices for security applications, and the metal detectors were probably of his design. In practice, the Teletouch machines proved unsatisfactory. They were later replaced with machines made by Forewarn. I believe the metal detector on display today is one of the Forewarn products, although the NPS documents are a little unclear on this.

Sensitive common areas like the mess hall, kitchen, and sallyport were fitted with electrically-activated teargas canisters. Originally, the mess hall teargas was controlled by a set of toggle switches in a corner gun gallery, while the sallyport teargas was controlled from the armory. While the teargas system was never used, it was probably the most radical of Alcatraz's technical security measures. As more electronic systems were installed, the armory, with its hardened vault entrance and gun issue window, served as a de facto control center for Alcatraz's initial security systems.

The Army's small manual telephone switchboard was considered unsuitable for the prison's use. The telephone system provided communication between the guards, making it a critical part of the overall security measures, and the BOP specified that all equipment and cabling needed to be better secured from any access by prisoners. Modifications to the cellblock building's entrance created a new room, just to the side of the sallyport, that housed a 100-line automatic exchange. The Automatic Electric telephones that appear throughout historic photos of the prison suggest that this exchange had been built by AE. Besides providing dial service between prison offices and the many other structures on the island, the exchange was equipped with a conference circuit that included annunciator panels in each of the prison's main offices. Assuming this was the type provided by Automatic Electric, it served as an emergency communications system in which the guard telephones could ring all of the office and guard phones simultaneously, even interrupting calls already in progress. Annunciator panels in the armory and offices showed which phone had started the emergency conference, and which phones had picked up. From the armory, a siren on the building roof could be sounded to alert the entire island to any attempted escape.

Some locations, including the armory and the warden's office, were also fitted with fire annunciators. I am less clear on this system. Fire circuits similar to the previously described conference circuit (and sometimes called "crash alarms" after their use on airfields) were an optional feature on telephone exchanges of the time. Crash alarms were usually activated by dedicated "hotline" phones, and mentions of "emergency phones" in various prison locations suggest that this system worked the same way. Indeed, 1950s and '60s photos show a red phone alongside other telephones in several prison locations. The fire annunciator panels probably would have indicated which of the emergency phones had been lifted to initiate the alarm.

One of the most fascinating parts of Alcatraz, to a person like me, is the prison doors. Prison doors have a long history, one that is interrelated with but largely distinct from other forms of physical security. Take a look, for example, at the keys used in prisons. Prisons of the era, and even many today, rely on lever locks manufactured by specialty companies like Folger Adams and Sargent and Greenleaf.
These locks are prized for their durability, and that extends to the keys, huge brass plates that could hold up to daily wear well beyond most locks. At Alcatraz, the first warden adopted a "sterile area" model in which areas accessible to prisoners should be kept as clear as possible of dangerous items like guns and keys. Guards on the cellblock carried no keys, and cell doors lacked traditional locks. Instead, the cell doors were operated by a central mechanical system designed by Stewart Iron Works. To let prisoners out of cells in the morning, a guard in the elevated gun gallery passed keys to a cellblock guard in a bucket or on a string. The guard unlocked the cabinet of a cell row's control system, revealing a set of large levers. The design is quite ingenious: by purely mechanical means, the guard could select individual cells or the entire row to be unlocked, and then by throwing the largest lever the guard could pull the cell doors open---after returning the necessary key to the gun gallery above. This 1934 system represents a major innovation in centralized access control, designed specifically for Alcatraz.

Stewart Iron Works is still in business, although not building prison doors. Some years ago, the company assisted the NPS's work to restore the locking system to its original function. The present-day CEO provided replicas of the original Stewart logo plate for the restored locking cabinets. Interviewing him about the restoration work, the San Francisco Chronicle wrote that "Alcatraz, he believes, is part of the American experience." The Stewart mechanical system seems to have remained in use on the B and C blocks until the prison closed, but the D block was either originally fitted, or later upgraded, with electrically locked cell doors. These were controlled from a set of switches in the gun gallery.

In 1960, the BOP launched another wave of renovations on Alcatraz, mostly to bring its access and security arrangements up to modern standards. The telephone exchange was moved away from the sallyport to an upper floor of the administration building, freeing up its original space for a new control center. This is the modern sallyport control area that visitors look into through the ballistic windows; the old service windows and viewports into the armory anteroom that had been the de facto control center are now removed. This control center is more typical of what you will see in modern prisons. Through large windows, guards observed the sallyport and visitor areas and controlled the electrically operated main gates. An electrical interlock prevented opening the full path from the cellblock to the outside, creating a mantrap in the visitor area through which the guards in the control room could identify everyone entering and leaving.

Photos from the 1960 control room, and other parts of the prison around the same time, clearly show consoles for a Western Electric 507B PBX. The 507B is really a manual exchange, although it used keys rather than the more traditional plugboard for a more modern look. It dates back to about 1929---so I assume the 507B had been installed well before the 1960 renovation, and its prominence in photos from then is just a matter of more and better photos being available from the prison's later days. Fortunately, the NPS Historic Furnishings Report for the cellblock building includes a complete copy of a 1960s memo describing the layout and requirements for the control center.
We're fortunate to get such a detailed listing of the equipment:

- Four phones (these are Automatic Electric instruments, based on the photo). One is a fire reporting phone (presumably on the exchange's "crash alarm" circuit), one is the watch call reporting phone (detailed in a moment), one a regular outgoing call telephone, and one an "executive right of way" phone that I assume will disconnect other calls from the outgoing trunks.
- The 507B PBX switchboard
- An intercom for communication with each of the guard towers
- Controls for five electrically operated doors
- Intercoms to each of the electrically operated doors (many of these are right outside of the control center, but the glass is very thick and you would not otherwise be able to converse)
- An "annunciator panel for the interior telephone system," which presumably combines the conference circuit, fire circuit, and watch call annunciators
- An intercom to the visitor registration area
- A "paging intercom for group control purposes." I don't really know what that is; possibly it is for the public address speakers installed in many parts of the cellblock.
- A monitor speaker for the inmate radio system. This presumably allowed the control center to check the operation of the two-channel wired radio system installed in the cells.
- The "watch call answering device," discussed later
- An indicator panel that shows any open doors in the D cell block (which is the higher security unit and the only one equipped with electrically locking cell doors)
- A two-way radio remote console
- Tear gas controls

Many of these are things we are already familiar with, but the watch call telephone system deserves some more discussion. It was clearly present back in the 1930s, but it wasn't clear to me what it actually did. Fortunately this memo gives some details on the operation. Guards calling in to report their watch call extension 3331. This connects to the watch call answering device in the control center, which, when enabled, automatically answers the call during the first ring. The answering device then allows a guard anywhere in the control center to converse with the caller via a loudspeaker and microphone.

So, the watch call system is essentially just a speaker phone. This approach is probably a holdover from the 1930s system (older documents mention a watch call phone as well), and that would have been the early days for speakerphones, making it a somewhat specialized device. Clearly it made these routine watch calls a lot more convenient for the control center, especially since the guard there didn't even have to do anything to answer.

It might be useful to mention why this kind of system was used: I have never found any mention of two-way radios used on Alcatraz, and that's not surprising. Portable two-way radios were a nascent technology even in the 1960s---the handheld radio had basically been invented for the Second World War, and it took years for them to come down in size and price. If Alcatraz ever did issue radios to guards, it probably would have been in the last decade of operation. Instead, telephones were provided at enough places in the facility that guards could report their watch tour and any important events by finding a phone and calling the control center.
Guards were probably required to report their location at various points as they patrolled, so the control center would receive quite a few calls that were just a guard saying where they were---to be written down in a log by a control room guard, who no doubt appreciated not having to walk to a phone to hear these reports. This provided both the functions of a "guard tour" system, ensuring that guards were actually performing their rounds, and improved the safety of guards by making it likely that the control center would notice fairly promptly that they had stopped reporting in.

Alcatraz closed as a BOP prison in 1963, and after a surprising number of twists and turns ranging from plans to develop a shopping center to occupation by the Indians of All Tribes, Alcatraz opened to tourists. Most technology past this point might not be considered "historic," having been installed by NPS for operational purposes. I can't help but mention, though, that there were more attempts at a cable. For the NPS, operating the power plant at Alcatraz was a significant expense that they would much rather save. The idea of a buried power cable isn't new. I have seen references, although no solid documentation, that the BOP laid a power cable in 1934. They built a new power plant in 1939 and operated it for the rest of the life of the prison, so either that cable failed and was never replaced, or it never existed at all...

I should take a moment here to mention that LLM-generated "AI slop" has become a pervasive and unavoidable problem around any "hot SEO topic" like tourism. Unfortunately the history of tourist sites like Alcatraz has become more and more difficult to learn as websites with well-researched history are displaced in search results by SEO spam---articles that often contain confident but unsourced and often incorrect information. This has always been a problem, but it has increased by orders of magnitude over the last couple of years, and it seems that the LLM-generated articles are more likely to contain details that are outright made up than the older human-generated kind. It's really depressing. That's basically all I have to say about it.

It seems that a power cable was installed to Alcatraz sometime in the 1960s but failed by about 1971. I'm a little skeptical of that because that was the era in which it was surplus GSA property, making such a large investment an odd choice, so maybe the 1980s article with that detail is wrong or confusing power with one of the several telephone cables that seem to have been laid (and failed) during BOP operations. In any case, in late 1980 or early 1981, Paul F. Pugh and Associates of Oakland designed a novel type of underwater power cable for the NPS. It was expected to provide power to Alcatraz at much reduced cost compared to more traditional underwater power cable technologies. It never even made it to day 1: after the cable was laid, but before commissioning, some failure caused a large span of it to float to the surface. The cable was evidently not repairable, and it was pulled back to shore.

"I don't know where we go from here," William J. Whalen, superintendent of the Golden Gate National Recreation Area, said after the broken cable was hauled in. We do know now: where the NPS went from there was decades of operating two diesel generators on the island, until a 2017 DoE-sponsored project that installed solar panels on the cellblock building roof.
The panels were intentionally installed such that they are not visible anywhere from the ground, preserving the historic integrity of the site. In aerial photos, though, they give Alcatraz a curiously modern look. The DoE describes the project, which incorporates battery storage and backup diesel generators, as "one of the largest microgrids in the United States." That is an interesting framing, one that emphasizes the modern valence of "microgrid," given that Alcatraz had been a self-sufficient electrical system since the island's first electric lights. But what's old is, apparently, new again.

I originally wrote much of this as part of a larger travelogue on my most recent trip to Alcatraz, which was coincidentally the same day as a visit by Pam Bondi and Doug Burgum to "survey" the prison for potential reopening. That piece became long and unwieldy, so I am breaking it up into more focused articles---this one on the technical history, a travelogue about the experience of visiting the island in this political context and its history as a symbol of justice and retribution, and probably a third piece on the way that the NPS interprets the site today. I am pitching the travelogue itself to other publications so it may not have a clear fate for a while, but if it doesn't appear here I'll let you know where. In any case there probably will be a loose part two to look forward to.

[1] Greely had a rather illustrious Army career. His term as chief of the Signal Corps was something of a retirement after he led several arctic expeditions, the topic of his numerous popular books and articles. He received the Medal of Honor shortly before his death in 1935.

2025-07-06 secret cellular phone numbers

A long time ago I wrote about secret government telephone numbers, and before that, secret military telephone buttons. I suppose this is becoming a series. To be clear, the "secret" here is a joke, but more charitably I could say that it refers to obscurity rather than any real effort to keep them secret. Actually, today's examples really make this point: they're specifically intended to be well known, but are still pretty obscure in practice. If you've been around for a while, you know how much I love telephone numbers. Here in North America, we have a system called the North American Numbering Plan (NANP) that has rigidly standardized telephone dialing practices since the middle of the 20th century. The US, Canada, and a number of Central American countries benefit from a very orderly system of area codes (more formally numbering plan areas or NPAs) followed by a subscriber number written in the format NXX-XXXX (this is a largely NANP-centric notation for describing phone number patterns: N represents the digits 2-9 and X any digit). All of these NANP numbers reside under the country code 1, allowing at least theoretically seamless international dialing within the NANP community. It's really a pretty elegant system. NANP is the way it is for many reasons, but it mostly reflects technical requirements of the telephone exchanges of the 1940s. This is more thoroughly explained in the link above, but one of the goals of NANP is to ensure that step-by-step (SxS) exchanges can process phone numbers digit by digit as they are dialed. In other words, it needs to be possible to navigate the decision tree of telephone routing using only the digits dialed so far. Readers with a computer science education might have some tidy way to describe this in terms of Chomsky or something, but I do not have a computer science education; I have an Information Technology education. That means I prefer flow charts to automata, and we can visualize a basic SxS exchange as a big tree. When you pick up your phone, you start at the root of the tree, and each digit dialed chooses the edge to follow. Eventually you get to a leaf that is hopefully someone's telephone, but at no point in the process does any node benefit from the context of the digits dialed before, the digits still to come, or the total number of digits. (There's a toy code sketch of this idea below.) This creates all kinds of practical constraints, and is the reason, for example, that we tend to write ten-digit phone numbers with a "1" before them. That requirement was in some ways long-lived (the last SxS exchange on the public telephone network was retired in 1999), and in other ways not so long-lived... "common control" telephone exchanges, which did store the entire number in electromechanical memory before making a routing decision, were already in use by the time the NANP scheme was adopted. They just weren't universal, and a common nationwide numbering scheme had to be designed to accommodate the lowest common denominator. This discussion so far is all applicable to the land-line telephone. There is a whole telephone network that is, these days, almost completely separate but interconnected: cellular phones. Early cellular phones (where "early" extends into CDMA and early GSM deployments) were much more closely attached to the "POTS" (Plain Old Telephone Service). AT&T and Verizon both operated traditional telephone exchanges, for example 5ESS, that routed calls to and from their customers. 
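Here's that sketch: a toy model of digit-by-digit routing in Python. This is purely my own illustration of the idea, nothing to do with real switching equipment, but it shows the important property: each dialed digit immediately picks a branch, and no node ever gets to see what came before or what comes next.

    # Toy model of step-by-step routing: a trie where each dialed digit
    # picks the next branch, with no lookahead and no memory of the call.
    class Selector:
        def __init__(self):
            self.edges = {}    # digit -> next Selector stage
            self.line = None   # set when this node terminates at a subscriber

    def add_number(root, number, line):
        """Install a subscriber line, one digit (one switching stage) at a time."""
        node = root
        for digit in number:
            node = node.edges.setdefault(digit, Selector())
        node.line = line

    def dial(root, digits):
        """Route a call exactly as dialed; fail the moment a digit has no branch."""
        node = root
        for digit in digits:
            node = node.edges.get(digit)
            if node is None:
                return "reorder tone"       # no such route
        return node.line or "partial dial"  # the tree expects more digits

    exchange = Selector()
    add_number(exchange, "5551234", "subscriber A")
    print(dial(exchange, "5551234"))  # -> subscriber A
    print(dial(exchange, "5559999"))  # -> reorder tone

The point of the model is what's missing: nowhere does the routing get to ask how many digits have been dialed or what the rest of the number looks like, which is exactly the constraint NANP had to accommodate. Anyway, back to those exchanges.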
These telephone exchanges have become increasingly irrelevant to mobile telephony, and you won't find a T-Mobile ESS or DMS anywhere. All US cellular carriers have adopted the GSM technology stack, and GSM has its own definition of the switching element that can be, and often is, fulfilled by an AWS EC2 instance running RHEL 8. Calls between cell phones today, even between different carriers, are often connected completely over IP and never touch a traditional telephone exchange. The point is that not only is telephone number parsing less constrained on today's telephone network, in the case of cellular phones, it is outright required to be more flexible. GSM also defines the properties of phone numbers, and it is a very loose definition. Keep in mind that GSM is deeply European, and was built from the start to accommodate the wide variety of dialing practices found in Europe. This manifests in ways big and small; one of the notable small ways is that the European emergency number 112 works just as well as 911 on US cell phones because GSM dictates special handling for emergency numbers and dictates that 112 is one of those numbers. In fact, the definition of an "emergency call" on modern GSM networks is requesting a SIP URI of "urn:service:sos". This reveals that dialed number handling on cellular networks is fundamentally different. When you dial a number on your cellular phone, the phone collects the entire number and then applies a series of rules to determine what to do, often leading to a GSM call setup process where the entire number, along with various flags, is sent to the network. This is all software-defined. In the immortal words of our present predicament, "everything's computer." The bottom line is that, within certain regulatory boundaries and requirements set by GSM, cellular carriers can do pretty much whatever they want with phone numbers. Obviously numbers need to be NANP-compliant to be carried by the POTS, but many modern cellular calls aren't carried by the POTS, they are completed entirely within cellular carrier systems through their own interconnection agreements. This freedom allows all kinds of things like "HD voice" (cellular calls connected without the narrow filtering and companding used by the traditional network), and a lot of flexibility in dialing. Most people already know about some weird cellular phone numbers. For example, you can dial *#06# to display your phone's various serial numbers. This is an example of a GSM MMI (man-machine interface) code, phone numbers that are handled entirely within your device but nonetheless defined as dialable numbers by GSM for compatibility with even the most basic flip phones. GSM also defined numbers called USSD for unstructured supplementary service data, which set up connections to the network that can be used in any arbitrary way the network pleases. Older prepaid phone services used to implement balance check and top-up operations using USSD numbers, and they're also often used in ways similar to Vertical Service Codes (VSCs) on the landline network to control carrier features. USSDs also enabled the first forms of mobile data, which involved a "special telephone call" to a USSD in order to download a cut-down form of ESPN in a weird mobile-specific markup language. Now, put yourself in the shoes of an enterprising cellular network. The flexibility of processing phone numbers as you please opens up all kinds of possibilities. Innovative services! Customer convenience! Sell them for money! 
Oh my god, sell them for money! It seems like this started with customer service. It is an old practice, dating to the Bell operating companies, to have special short phone numbers to reach the telephone company itself. The details varied by company (often based on technical constraints in their switching system), but a common early setup was that dialing 114 got you the repair service operator to report a problem with your phone line. These numbers were usually listed in the front of the phone book, and for the phone company the fact that they were "special" or nonstandard was sort of a feature, since they could ensure that they were always routed within the same switch. The selection of "911" as the US emergency number seems rooted in this practice, as later on several major telcos used the "N11" numbers for their service lines. This became immortalized in the form of 611, which will get you customer service for most phone carriers. So cellular companies did the same, allocating themselves "special" numbers for various service lines. Verizon offers #PMT to make a payment. Naturally, there's also room for upsell services: #ROAD for roadside assistance on Verizon. The odd thing about these phone numbers is that there's really no standard involved, they're just the arbitrary practices of specific cellular companies. The term "mobile dial code" (MDC) is usually used to refer to them, although that term seems to have arisen organically rather than by intent. Remember, these aren't a real thing! The carriers just make them up, all on their own. The only real constraint on MDCs is that they need to not collide with any POTS number, which is most easily achieved by prefixing them with some combination of * and #, and usually not "*#" because it's referenced by the GSM standard for MMI. MDCs are available for purchase, but the terms don't seem to be public and you have to negotiate separately with each carrier. That's because there is no centralization. This is where MDCs stand in clear contrast to the better known SMS Short Code, or SMSSC. Those are the five or six-digit numbers widely used in advertising campaigns. SMSSCs are centrally managed by the SMS Short Code Registry, which is a function of industry association CTIA but contracted to iConectiv. iConectiv is sort of like the SAIC of the communications industry, a huge company that dates back to the Bell System (where it became Bellcore after divestiture) and that no one has heard of but nonetheless is a critically important part of the telephone system. Providers that want to have an SMSSC (typically on behalf of one of their customers) pay a fee, and usually recoup it from the end user. That fee is not cheap, typical end-user rates for an SMSSC run over $10k a year. But at least it's straightforward, and your SMS A2P or marketing company can make it happen for you. MDCs have no such centralization, no standardized registration process. You negotiate with each carrier individually. That means it's pretty difficult to put together "complete coverage" on an MDC by getting the same one assigned by every major carrier. And this is one of those areas where "good enough" is seldom good enough; people get pissed off when something you advertise doesn't work. Putting a phone number that only works for some people on a billboard can quickly turn into an expensive embarrassment, so companies will be wary of using an MDC in marketing if they don't feel really confident that it works for the vast majority of cellphone users. 
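To make that collision constraint concrete, here's a hedged little sketch in Python of how a dial string might get bucketed and how a candidate MDC stays clear of real numbers. The categories are the ones described above (emergency numbers, MMI codes, USSD and carrier codes, ordinary NANP dialing), but the specific patterns are simplified assumptions of mine, not anything a handset vendor or carrier actually publishes.

    import re

    EMERGENCY = {"911", "112"}  # GSM requires both to work on US handsets

    # Simplified shapes of dial strings the POTS could plausibly route.
    NANP_SHAPES = [
        r"[2-9]11",                # N11 service codes like 611 or 911
        r"[2-9]\d{2}[2-9]\d{6}",   # ten-digit dialing
        r"1[2-9]\d{2}[2-9]\d{6}",  # 1 + ten digits
    ]

    def classify(dial_string):
        """Rough post-dial rule pass, loosely in the spirit of a GSM handset."""
        if dial_string in EMERGENCY:
            return "emergency call"
        if re.fullmatch(r"\*#\d+#", dial_string):
            return "MMI code handled on the phone (like *#06#)"
        if dial_string.startswith(("*", "#")):
            return "USSD or carrier code (MDCs live here)"
        digits = re.sub(r"\D", "", dial_string)
        if any(re.fullmatch(p, digits) for p in NANP_SHAPES):
            return "ordinary call setup to a POTS-routable number"
        return "invalid or carrier-specific"

    def is_safe_mdc(candidate):
        """An MDC is safe if it can't be mistaken for a POTS-routable number."""
        return classify(candidate) == "USSD or carrier code (MDCs live here)"

    for n in ("112", "*#06#", "#250", "611", "505-555-0123"):
        print(n, "->", classify(n), "| safe MDC:", is_safe_mdc(n))

Of course, no amount of prefix hygiene solves the real problem, which is that there is no registry: the same code still has to be negotiated with every carrier, one at a time.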
Because of this fragmentation, adoption of MDCs for marketing purposes has been very low. The only going concern I know of is #250, operated by a company called Mobile Direct Response. The premise of #250 is very simple: users call 250 and are greeted by a simple IVR. They say a keyword, and they're either forwarded to the phone number of the business that paid for the keyword or they receive a text message response with more information. #250 is specifically oriented towards radio advertising, where asking people to remember a ten-digit phone number is, well, asking a lot. It's also made the jump to podcast advertising. #250 is priced in a very radio-centric way, by the keyword and the size of the market area in which the advertisement that gives the keyword is played. 250 was founded by Dave Robinett, who used to work on marketing at Sprint, presumably where he became aware that these MDCs were a possibility. He has negotiated for #250 to work across a substantial list of cellular carriers in the US and Canada, providing almost complete coverage. That wasn't easy, Robinett said in an interview that it took five years to get AT&T, T-Mobile, Verizon, and Sprint on board. 250 does not appear to be especially widely used. For one, the website is a little junky, with some broken links and other indications that it is not backed by a large communications department. Dave Robinett may be the entire company. They've been operating since at least 2017, and I've only ever heard it in an ad once---a podcast ad that ended with "Call #250 and say I need a dentist." One thing you quickly notice when you look into telephone marketing is that dentists are apparently about 80% of the market. He does mention success with shows like "Rush, Hannity, and Levin," so it's safe to say that my radio habits are a little different from Robinett's. That's not to say that #250 is a failure. In the same interview Robinett says that the company pays his mortgage and, well, that ain't too bad. But it's also nothing like the widespread adoption of SMSSCs. One wonders if the limitation of MDCs to one company that is so focused on radio marketing limits their potential. It might really open things up if some company created a registration service, and prenegotiated terms with carriers so that companies could pick up their own MDCs to use as they please. Well, yeah, someone's trying. Around 2006, a recently-founded mobile marketing company called Zoove announced StarStar dialing. I'm a little unclear on Zoove's history. It seems that they were originally founded as Teleractive in Rhode Island as an SMS short code keyword response service, and after an infusion of VC cash moved to Palo Alto and started looking for something bigger. In 2016, they were acquired by a call center technology company called Mindful. Or maybe Zoove sold the StarStar business to Mindful? Stick a pin in that. I don't love the name StarStar, which has shades of Spacestar Ordering. But it refers to their chosen MDC prefix, two stars. Well, that point is a little odd, according to their marketing material you can also get numbers with a # prefix or * prefix, but all of the examples use **. I would say that, in general, StarStar has it a little less together than #250. Their website is kind of broken, it only loads intermittently and some of the images are missing. At one point it uses the term "CADC" to describe these numbers but I can't find that expanded anywhere. 
Plus the "About" page refers repeatedly to Virtual Hold Technologies, which renamed itself to VHT in 2018 and to Mindful in 2022. It really feels like the vestigial website of a dead company. I know about StarStar because, for a time, trucks from moving franchise All My Sons prominently bore the number **MOVE on the side. Indeed, this is still one of the headline examples on the StarStar website, but it doesn't work. I just get a loud click and then the call ends. And it's not that StarStar doesn't work with my mobile carrier, because StarStar's own number **MOBILE does connect to their IVR. That IVR promises that a representative will speak with me shortly, plays about five seconds of hold music, and then dumps me on a voicemail system. Despite StarStar numbers apparently basically working, I'm finding that most of the examples they give on their website won't even connect. Perhaps results will vary depending on the mobile network. Well, perhaps not that much is lost. StarStar was founded by Steve Doumar, a serial telephone marketing entrepreneur with a colorful past founding various inbound call center companies. Perhaps his most famous venture is R360, a "lead acquisition" service memorialized by headlines like "Drug treatment referral service took advantage of addictions to make a quick buck" from the Federal Trade Commission. He's one of those guys whose bio involves founding a new company every two years, which he has to spin as entrepreneurial dynamism rather than some combination of fleeing dissatisfied investors and fleeing angered regulators. Today he runs whisp.io, a "customer activation platform" that appears to be a glorified SMS advertising service featuring something ominously called "simplified opt-in." Whisp has a YouTube channel which features the 48-second gem "Fun Fact We Absolutely Love About Steve Doumar". Description: Our very own CEO, Steve Doumar is a kind and generous person who has given back to the community in many ways; this man is absolutely a man with a heart of gold. Do you want to know the fun fact? Yes you do! Here it is: "He is an incredible philanthropist. He loves helping other people. Every time I'm with him he comes up with new ways and new ideas to help other people. Which I think is amazing. And he doesn't brag about it, he doesn't talk about it a lot." Except he's got his CMO making a YouTube video about it? From Steve Doumar's blog: American entrepreneur Ray Kroc expressed the importance of persisting in a busy world where everyone wants a bite of success. This man is no exception. An entrepreneur. A family man. A visionary. These are the many names of a man that has made it possible for opt-ins to be safe, secure, and accurate; Steve Doumar. I love this stuff, you just can't make it up. I'm pretty sure what's going on here is just an SEO effort to outrank the FTC releases and other articles about the R360 case when you search for his name. It's only partially working: "FTC Hits R360 and its Owner With $3.8 Million Civil ..." still comes in at Google result #4 for "Steve Doumar," at least for me. But hey, #4 is better than #1. Well, to be fair to StarStar, I don't think Steve Doumar has been involved for some years, but also to be fair, some of their current situation clearly dates to past behavior that is maybe less than savory. Zoove originally styled itself as "The National StarStar Registry," clearly trying to draw parallels to CTIA/iConectiv's SMSSC registry. 
Their largest customer was evidently a company called Sumotext, which leased a number of StarStar numbers to offer an SMS and telephone marketing service. In 2016, Sumotext sued StarStar, Zoove, VHT (now Mindful), and a healthy list of other entities all involved in StarStar including the intriguingly named StarSteve LLC. I'm not alone in finding the corporate history a little baffling; in a footnote on one ruling the court expressed confusion about all the different names and opted to call them all Zoove. In any case, Sumotext alleged that Zoove, StarSteve, and VHT all merged as part of a scheme to illegally monopolize the StarStar market by undercutting the companies that had been leasing the numbers and effectively giving VHT (Mindful) an exclusive ability to offer marketing services with StarStar numbers. The case didn't end up going anywhere for Sumotext: the jury found that Sumotext hadn't established a relevant market, which is a key part of a Sherman Act case. An appeal was made all the way to the Supreme Court, but they didn't take it up. What the case did do was publicize some pretty sketchy-sounding details, like the seemingly uncontested accusation that VHT got Sumotext's customer list from the registry database and used it to convert them all into StarSteve customers. And yes, the Steve in StarSteve is Steve Doumar. As best I can tell, the story here is that Steve Doumar founded Zoove (or bought Teleractive and renamed it or something?) to establish the National StarStar Registry, then founded a marketing company called StarSteve that resold StarStar numbers, then merged StarSteve and the National StarStar Registry together and cut off all of the other resellers. Apparently not a Sherman Act violation but it sure is a bad look, and I wonder how much it contributed to the lack of adoption of the whole StarStar idea---especially given that Sumotext seems to have been responsible for most of that adoption, including the All My Sons deal for **MOVE. I wonder if All My Sons had to take **MOVE off of their trucks because of the whole StarSteve maneuver? That seems to be what happened. Look, ten-digit phone numbers are hard to remember, that much is true. But as is, the "MDC" industry doesn't seem stable enough for advertising applications where the number needs to continue to work into the future. I think the #250 service is probably here to stay, but confined to the niche of audio advertising. StarStar raised at least $30 million in capital in the 2010s, but seems to have shot itself in the foot. StarStar owner VHT/Mindful, now acquired by Medallia, doesn't even mention StarStar as a product offering. Hey, remember how Steve Doumar is such a great philanthropist? There are a lot of vestiges around of StarStar Inc., a nonprofit that made StarStar numbers available to charitable organizations. Their website, starstar.org, is now a Wix error page. You can find old articles about StarStar Me, also written **me, which sounds lewd but was a $3/mo offering that allowed customers to get a vanity short code (such as ** followed by their name)---the original form of StarStar, dating back to 2012 and the beginning of Zoove. In a press release announcing StarStar Me, Zoove CEO Joe Gillespie said: With two-thirds of smartphone users having downloaded social networking apps to their phones, there’s a rapidly growing trend in today's on-the-go lifestyle to extend our personal communications and identity into the digital realm via our mobile phones. 
And somehow this leads to paying $3 to get StarStarred? I love it! It's so meaningless! And years later it would be StarStar Mobile formerly Zoove by VHT now known as Mindful a Medallia company. Truly an inspiring story of industry, and just one little corner of the vast tapestry of phone numbers.

2025-06-19 hydronuclear testing

Some time ago, via a certain orange website, I came across a report about a mission to recover nuclear material from a former Soviet test site. I don't know what you're doing here, go read that instead. But it brought up a topic that I have only known very little about: hydronuclear testing. One of the key reasons for the nonproliferation concern at Semipalatinsk was the presence of a large quantity of weapons-grade material. This created a substantial risk that someone would recover the material and either use it directly or sell it---either way giving a significant leg up on the construction of a nuclear weapon. That's a bit odd, though, isn't it? Material refined for use in weapons is scarce and valuable, and besides that, rather dangerous. It's uncommon to just leave it lying around, especially not hundreds of kilograms of it. This material was abandoned in place because the nature of the testing performed required that a lot of weapons-grade material be present, and made it very difficult to remove. As the Semipalatinsk document mentions in brief, similar tests were conducted in the US and led to a similar abandonment of special nuclear material at Los Alamos's TA-49. Today, I would like to give the background on hydronuclear testing---the what and why. Then we'll look specifically at LANL's TA-49 and the impact of the testing performed there. First we have to discuss the boosted fission weapon. Especially in the 21st century, we tend to talk about "nuclear weapons" as one big category. The distinction between an "A-bomb" and an "H-bomb," for example, or between a conventional nuclear weapon and a thermonuclear weapon, is mostly forgotten. That's no big surprise: thermonuclear weapons have been around since the 1950s, so it's no longer a great innovation or escalation in weapons design. The thermonuclear weapon was not the only post-WWII design innovation. At around the same time, Los Alamos developed a related concept: the boosted weapon. Boosted weapons were essentially an improvement in the efficiency of nuclear weapons. When the core of a weapon goes supercritical, the fission produces a powerful pulse of neutrons. Those neutrons cause more fission, the chain reaction that makes up the basic principle of the atomic bomb. The problem is that the whole process isn't fast enough: the energy produced blows the core apart before it's been sufficiently "saturated" with neutrons to completely fission. That leads to a lot of the fuel in the core being scattered, rather than actually contributing to the explosive energy. In boosted weapons, a material that will undergo fusion is added to the mix, typically tritium and deuterium gas. The immense heat of the beginning of the supercritical stage causes the gas to undergo fusion, and it emits far more neutrons than the fissioning fuel does alone. The additional neutrons cause more fission to occur, improving the efficiency of the weapon. Even better, despite the theoretical complexity of driving a gas into fusion, the mechanics of boosting are actually simpler than the techniques used to improve yield in non-boosted weapons (pushers and tampers). The result is that boosted weapons produce a more powerful yield in comparison to the amount of fuel, and the non-nuclear components can be made simpler and more compact as well. This was a pretty big advance in weapons design and boosting is now a ubiquitous technique. It came with some downsides, though. The big one is that whole property of making supercriticality easier to achieve. 
Early implosion weapons were remarkably difficult to detonate, requiring an extremely precisely timed detonation of the high explosive shell. While an inconvenience from an engineering perspective, the inherent difficulty of achieving a nuclear yield also provided a safety factor. If the high explosives detonated for some unintended reason, like being struck by cannon fire as a bomber was intercepted, or impacting the ground following an accidental release, it wouldn't "work right." Uneven detonation of the shell would scatter the core, rather than driving it into supercriticality. This property was referred to as "one point safety": a detonation at one point on the high explosive assembly should not produce a nuclear yield. While it has its limitations, it became one of the key safety principles of weapon design. The design of boosted weapons complicated this story. Just a small fission yield, from a small fragment of the core, could potentially start the fusion process and trigger the rest of the core to detonate as well. In other words, weapon designers became concerned that boosted weapons would not have one point safety. As it turns out, two-stage thermonuclear weapons, which were being fielded around the same time, posed a similar set of problems. The safety problems around more advanced weapon designs came to a head in the late '50s. Incidentally, so did something else: shifts in Soviet politics had given Khrushchev extensive power over Soviet military planning, and he was no fan of nuclear weapons. After some on-again, off-again dialog between the time's nuclear powers, the US, UK, and USSR agreed to a voluntary moratorium on nuclear testing, which began in late 1958. For weapons designers this was, of course, a problem. They had planned to address the safety of advanced weapon designs through a testing campaign, and that was now off the table for the indefinite future. An alternative had to be developed, and quickly. In 1959, the Hydronuclear Safety Program was initiated. By reducing the amount of material in otherwise real weapon cores, physicists realized they could run a complete test of the high explosive system and observe its effects on the core without producing a meaningful nuclear yield. These tests were dubbed "hydronuclear," because of the desire to observe the behavior of the core as it flowed like water under the immense explosive force. While the test devices were in some ways real nuclear weapons, the nuclear yield would be vastly smaller than the high explosive yield, practically nil. Weapons designers seemed to agree that these experiments complied with the spirit of the moratorium, being far from actual nuclear tests, but there was enough concern that Los Alamos went to the AEC and President Eisenhower for approval. They evidently agreed, and work started immediately to identify a suitable site for hydronuclear testing. While hydronuclear tests do not create a nuclear yield, they do involve a lot of high explosives and radioactive material. The plan was to conduct the tests underground, where the materials cast off by the explosion would be trapped. This would solve the immediate problem of scattering nuclear material, but it would obviously be impractical to recover the dangerous material once it was mixed with unstable soil deep below the surface. The material would stay, and it had to stay put! 
The US Army Corps of Engineers, a center of expertise in hydrology because of their reclamation work, arrived in October 1959 to begin an extensive set of studies on the Frijoles Mesa site. This was an unused area near a good road but far on the east edge of the laboratory, well separated from the town of Los Alamos and pretty much anything else. More importantly, it was a classic example of northern New Mexican geology: high up on a mesa built of tuff and volcanic sediments, well-drained and extremely dry soil in an area that received little rain. One of the main migration paths for underground contaminants is their interaction with water, and specifically the tendency of many materials to dissolve into groundwater and flow with it towards aquifers. The Corps of Engineers drilled test wells, about 1,500' deep, and a series of 400' core samples. They found that on the Frijoles Mesa, groundwater was over 1,000' below the surface, and that everything above was far from saturation. That means no mobility of the water, which is trapped in the soil. It's just about the ideal situation for putting something underground and having it stay. Incidentally, this study would lead to the development of a series of new water wells for Los Alamos's domestic water supply. It also gave the green light for hydronuclear testing, and Frijoles Mesa was dubbed Technical Area 49 and subdivided into a set of test areas. Over the following three years, these test areas would see about 35 hydronuclear detonations carried out in the bottom of shafts that were about 200' deep and 3-6' wide. It seems that for most tests, the hole was excavated and lined, with a ladder installed to reach the bottom. Technicians worked at the bottom of the hole to prepare the test device, which was connected by extensive cabling to instrumentation trailers on the surface. When the "shot" was ready, the hole was backfilled with sand and sealed at the top with a heavy plate. The material on top of the device held everything down, preventing migration of nuclear material to the surface. The high explosives did, of course, destroy the test device and the cabling, but not before the instrumentation trailers had recorded a vast amount of data. If you read these kinds of articles, you must know that the 1958 moratorium did not last. Soviet politics shifted again, France began nuclear testing, negotiations over a more formal test ban faltered. US intelligence suspected that the Soviet Union had operated their nuclear weapons program at full tilt during the test ban, and the military suspected clandestine tests, although there was no evidence they had violated the moratorium. Of course, that they continued their research efforts is guaranteed; we did as well. Physicist Edward Teller, ever the nuclear weapons hawk, opposed the moratorium and pushed to resume testing. In 1961, the Soviet Union resumed testing, culminating in the test of the record-holding "Tsar Bomba," a 50 megaton device. The US resumed testing as well. The arms race was back on. US hydronuclear testing largely ended with the resumption of full-scale testing. The same safety studies could be completed on real weapons, and those tests would serve other purposes in weapons development as well. Although post-moratorium testing included atmospheric detonations, the focus had shifted towards underground tests and the 1963 Partial Test Ban Treaty restricted the US and USSR to underground tests only. 
One wonders about the relationship between hydronuclear testing at TA-49 and the full-scale underground tests extensively performed at the NTS. Underground testing began in 1951 with Buster-Jangle Uncle, a test to determine how big of a crater could be produced by a ground-penetrating weapon. Uncle wasn't really an underground test in the modern sense: the device was emplaced only 17 feet deep and still produced a huge cloud of fallout. It started a trend, though: a similar 1955 test was set 67 feet deep, producing a spectacular crater, before the 1957 Plumbbob Pascal-A was detonated at 486 feet and produced radically less fallout. 1957's Plumbbob Rainier was the first fully-contained underground test, set at the end of a tunnel excavated far into a hillside. This test emitted no fallout at all, proving the possibility of containment. Thus both the idea of emplacing a test device in a deep hole, and the fact that testing underground could contain all of the fallout, were known when the moratorium began in late 1958. What's very interesting about the hydronuclear tests is the fact that technicians actually worked "downhole," at the bottom of the excavation. Later underground tests were prepared by assembling the test device at the surface, as part of a rocket-like "rack," and then lowering it to the bottom just before detonation. These techniques hadn't yet been developed in the '50s, thus the use of a horizontal tunnel for the first fully-contained test. Many of the racks used for underground testing were designed and built by LANL, but others (called "canisters" in an example of the tendency of the labs to not totally agree on things) were built by Lawrence Livermore. I'm not actually sure which of the two labs started building them first, a question for future research. It does seem likely that the hydronuclear testing at LANL advanced the state of the art in remote instrumentation and underground test design, facilitating the adoption of fully-contained underground tests in the following years. During the three years of hydronuclear testing, shafts were excavated in four testing areas. It's estimated that the test program at TA-49 left about 40kg of plutonium and 93kg of enriched uranium underground, along with 92kg of depleted uranium and 13kg of beryllium (both toxic contaminants). Because of the lack of a nuclear yield, these tests did not create the caverns associated with underground testing. Material from the weapons likely spread within just a 10-20' area, as holes were drilled on a 25' grid and contamination from previous neighboring tests was encountered only once. The tests also produced quite a bit of ancillary waste: things like laboratory equipment, handling gear, cables and tubing, that are not directly radioactive but were contaminated with radioactive or toxic materials. In the fashion typical of the time, this waste was buried on site, often as part of the backfilling of the test shafts. During the excavation of one of the test shafts, 2-M in December 1960, contamination was detected at the surface. It seems that the geology allowed plutonium from a previous test to spread through cracks into the area where 2-M was being drilled. The surface soil contaminated by drill cuttings was buried back in hole 2-M, but this incident made area 2 the most heavily contaminated part of TA-49. When hydronuclear testing ended in 1961, area 2 was covered by 6' of gravel and 4-6" of asphalt to better contain any contaminated soil. 
Several support buildings on the surface were also contaminated, most notably a building used as a radiochemistry laboratory to support the tests. An underground calibration facility that allowed for exposure of test equipment to a contained source in an underground chamber was also built at TA-49 and similarly contaminated by use with radioisotopes. The Corps of Engineers continued to monitor the hydrology of the site from 1961 to 1970, and test wells and soil samples showed no indication that any contamination was spreading. In 1971, LANL established a new environmental surveillance department that assumed responsibility for legacy sites like TA-49. That department continued to sample wells and soil, and added air sampling. Monitoring of stream sediment downhill from the site was added in the '70s, as many of the contaminants involved can bind to silt and travel with surface water. This monitoring has not found any spread either. That's not to say that everything is perfect. In 1975, a section of the asphalt pad over Area 2 collapsed, leaving a three-foot-deep depression. Rainwater pooled in the depression and then flowed through the gravel into hole 2-M itself, collecting in the bottom of the lining of the former experimental shaft. In 1976, the asphalt cover was replaced, but concerns remained about the water that had already entered 2-M. It could potentially travel out of the hole, continue downwards, and carry contamination into the aquifer around 800' below. Worse, a nearby core sample hole had picked up some water too, suggesting that the water was flowing out of 2-M through cracks and into nearby features. Since the core hole had a slotted liner, it would be easier for water to leave it and soak into the ground below. In 1980, the water that had accumulated in 2-M was removed by lifting about 24 gallons to the surface. While the water was plutonium-contaminated, it fell within acceptable levels for controlled laboratory areas. Further inspections through 1986 did not find additional water in the hole, suggesting that the asphalt pad was continuing to function correctly. Several other investigations were conducted, including the drilling of some additional sample wells and examination of other shafts in the area, to determine if there were other routes for water to enter the Area 2 shafts. Fortunately no evidence of ongoing water ingress was found. In 1986, TA-49 was designated a hazardous waste site under the Resource Conservation and Recovery Act. Shortly after, the site was evaluated under CERCLA to prioritize remediation. Scoring using the Hazard Ranking System determined a fairly low risk for the site, due to the lack of spread of the contamination and evidence suggesting that it was well contained by the geology. Still, TA-49 remains an environmental remediation site and now falls under a license granted by the New Mexico Environment Department. This license requires ongoing monitoring and remediation of any problems with the containment. For example, in 1991 the asphalt cover of Area 2 was found to have cracked and allowed more water to enter the sample wells. The covering was repaired once again, and investigations were made every few years from 1991 to 2015 to check for further contamination. Ongoing monitoring continues today. So far, Area 2 has not been found to pose an unacceptable risk to human health or a risk to the environment. 
NMED permitting also covers the former radiological laboratory and calibration facility, and infrastructure related to them like a leach field from drains. Sampling found some surface contamination, so the affected soil was removed and disposed of at a hazardous waste landfill where it will be better contained. TA-49 was reused for other purposes after hydronuclear testing. These activities included high explosive experiments contained in metal "bottles," carried out in a metal-lined pit under a small structure called the "bottle house." Part of the bottle house site was later reused to build a huge hydraulic ram used to test steel cables at their failure strength. I am not sure of the exact purpose of this "Cable Test Facility," but given the timeline of its use during the peak of underground testing and the design, I suspect LANL used it as a quality control measure for the cable assemblies used in lowering underground test racks into their shafts. No radioactive materials were involved in either of these activities, but high explosives and hydraulic oil can both be toxic, so both were investigated and received some surface soil cleanup. Finally, the NMED permit covers the actual test shafts. These have received numerous investigations over the sixty years since the original tests, and significant contamination is present as expected. However, that contamination does not seem to be spreading, and modeling suggests that it will stay that way. In 2022, the NMED issued Certificates of Completion releasing most of the TA-49 remediation sites without further environmental controls. The test shafts themselves, known to NMED by the punchy name of Solid Waste Management Unit 49-001(e), received a certificate of completion that requires ongoing controls to ensure that the land is used only for industrial purposes. Environmental monitoring of the TA-49 site continues under LANL's environmental management program and federal regulation, but TA-49 is no longer an active remediation project. The plutonium and uranium are just down there, and they'll have to stay.

2025-06-08 Omnimax

In a previous life, I worked for a location-based entertainment company, part of a huge team of people developing a location for Las Vegas, Nevada. It was COVID, a rough time for location-based anything, and things were delayed more than usual. Coworkers paid a lot of attention to another upcoming Las Vegas attraction, one with a vastly larger budget but still struggling to make schedule: the MSG (Madison Square Garden) Sphere. I will set aside jokes about it being a square sphere, but they were perhaps one of the reasons that it underwent a pre-launch rebranding to merely the Sphere. If you are not familiar, the Sphere is a theater and venue in Las Vegas. While it's known mostly for the video display on the outside, that's just marketing for the inside: a digital dome theater, with seating at a roughly 45 degree stadium layout facing a near hemisphere of video displays. It is a "near" hemisphere because the lower section is truncated to allow a flat floor, which serves as a stage for events but is also a practical architectural decision to avoid completely unsalable front rows. It might seem a little bit deceptive that an attraction called the Sphere does not quite pull off even a hemisphere of "payload," but the same compromise has been reached by most dome theaters. While the use of digital display technology is flashy, especially on the exterior, the Sphere is not quite the innovation that it presents itself as. It is just a continuation of a long tradition of dome theaters. Only time will tell, but the financial difficulties of the Sphere suggest that it follows the tradition faithfully: towards commercial failure. You could make an argument that the dome theater is hundreds of years old, but I will omit it. Things really started developing, at least in our modern tradition of domes, with the 1923 introduction of the Zeiss planetarium projector. Zeiss projectors and their siblings used a complex optical and mechanical design to project accurate representations of the night sky. Many auxiliary projectors, incorporated into the chassis and giving these projectors famously eccentric shapes, rendered planets and other celestial bodies. Rather than digital light modulators, the images from these projectors were formed by purely optical means: perforated metal plates, glass plates with etched metalized layers, and fiber optics. The large, precisely manufactured image elements and specialized optics created breathtaking images. While these projectors had considerable entertainment value, especially in the mid-century when they represented some of the most sophisticated projection technology yet developed, their greatest potential was obviously in education. While planetarium projectors were fantastically expensive (being hand-built in Germany with incredible component counts) [1], they were widely installed in science museums around the world. Most of us probably remember a dogbone-shaped Zeiss, or one of their later competitors like Spitz or Minolta, from our youths. Unfortunately, these marvels of artistic engineering were mostly retired as digital projection of near comparable quality became similarly priced in the 2000s. But we aren't talking about projectors, we're talking about theaters. Planetarium projectors were highly specialized for rendering the night sky, and everything about them was intrinsically spherical. For both a reasonable viewing experience, and for the projector to produce a geometrically correct image, the screen had to be a spherical section. 
Thus the planetarium itself: in its most traditional form, rings of heavily reclined seats below a hemispherical dome. The dome was rarely a full hemisphere, but was usually truncated at the horizon. This was mostly a practical decision but integrated well into the planetarium experience, given that sky viewing is usually poor near the horizon anyway. Many planetaria painted a city skyline or forest silhouette around the lower edge to make the transition from screen to wall more natural. Later, theatrical lighting often replaced the silhouette, reproducing twilight or the haze of city lights. Unsurprisingly, the application-specific design of these theaters also limits their potential. Despite many attempts, the collective science museum industry has struggled to find entertainment programming for planetaria much beyond Pink Floyd laser shows [2]. There just aren't that many things that you look up at. Over time, planetarium shows moved in more narrative directions. Film projection promised new flexibility---many planetaria with optical star projectors were also equipped with film projectors, which gave show producers exciting new options. Documentary video of space launches and animations of physical principles became natural parts of most science museum programs, but were a bit awkward on the traditional dome. You might project four copies of the image just above the horizon in the four cardinal directions, for example. It was very much a compromise. With time, the theater adapted to the projection once again: the domes began to tilt. By shifting the dome in one direction, and orienting the seating towards that direction, you could create a sort of compromise point between the traditional dome and traditional movie theater. The lower central area of the screen was a reasonable place to show conventional film, while the full size of the dome allowed the starfield to almost fill the audience's vision. The experience of the tilted dome is compared to "floating in space," as opposed to looking up at the sky. In true Cold War fashion, it was a pair of weapons engineers (one nuclear weapons, the other missiles) who designed the first tilted planetarium. In 1973, the planetarium of what is now called the Fleet Science Center in San Diego, California, opened to the public. Its dome was tilted 25 degrees to the horizon, with the seating installed on a similar plane and facing in one direction. It featured a novel type of planetarium projector developed by Spitz and called the Space Transit Simulator. The STS was not the first, but still an early mechanical projector to be controlled by a computer---a computer that also had simultaneous control of other projectors and lighting in the theater, what we now call a show control system. Even better, the STS's innovative optical design allowed it to warp or bend the starfield to simulate its appearance from locations other than earth. This was the "transit" feature: with a joystick connected to the control computer, the planetarium presenter could "fly" the theater through space in real time. The STS was installed in a well in the center of the seating area, and its compact chassis kept it low in the seating area, preserving the spherical geometry (with the projector at the center of the sphere) without blocking the view of audience members sitting behind it and facing forward. And yet my main reason for discussing the Fleet planetarium is not the planetarium projector at all. 
It is a second projector, an "auxiliary" one, installed in a second well behind the STS. The designers of the planetarium intended to show film as part of their presentations, but they were not content with a small image at the center viewpoint. The planetarium commissioned a few of the industry's leading film projection experts to design a film projection system that could fill the entire dome, just as the planetarium projector did. They knew that such a large dome would require an exceptionally sharp image. Planetarium projectors, with their large lithographed slides, offered excellent spatial resolution. They made stars appear as point sources, the same as in the night sky. 35mm film, spread across such a large screen, would be obviously blurred in comparison. They would need a very large film format. Fortuitously, almost simultaneously the Multiscreen Corporation was developing a "sideways" 70mm format. This 15-perf format used 70mm film but fed it through the projector sideways, making each frame much larger than typical 70mm film. In its debut, at a temporary installation at the 1970 Osaka Expo, it was dubbed IMAX. IMAX made an obvious basis for a high-resolution projection system, and so the then-named IMAX Corporation was added to the planetarium project. The Fleet's film projector ultimately consisted of an IMAX film transport with a custom-built compact, liquid-cooled lamphouse and spherical fisheye lens system. The large size of the projector, with its complex IMAX framing system and cooling equipment, made it difficult to conceal in the theater's projector well. Threading film into IMAX projectors is quite complex, with several checks the projectionist must make during a pre-show inspection. The projectionist needed room to handle the large film, and to route it to and from the enormous reels. The projector's position in the middle of the seating area left no room for any of this. We can speculate that it was, perhaps, one designer's missile experience that led to the solution: the projector was serviced in a large projection room beneath the theater's seating. Once it was prepared for each show, it rose on near-vertical rails until just the top emerged in the theater. Rollers guided the film as it ran from a platter, up the shaft to the projector, and back down to another platter. Cables and hoses hung below the projector, following it up and down like the traveling cable of an elevator. To advertise this system, probably the greatest advance in film projection since the IMAX format itself, the planetarium coined the term Omnimax. Omnimax was not an easy or economical format. Ideally, footage had to be taken in the same format, using a 70mm camera with a spherical lens system. These cameras were exceptionally large and heavy, and the huge film format limited cinematographers to short takes. The practical problems with Omnimax filming were big enough that the first Omnimax films faked it, projecting to the larger spherical format from much smaller conventional negatives. This was the case for "Voyage to the Outer Planets" and "Garden Isle," the premiere films at the Fleet planetarium. The history of both is somewhat obscure, the latter especially. "Voyage to the Outer Planets" was executive-produced by Preston Fleet, a founder of the Fleet center (which was ultimately named for his father, a WWII aviator). 
We have Fleet's sense of showmanship to thank for the invention of Omnimax: He was an accomplished business executive, particularly in the photography industry, and an aviation enthusiast who had his hands in more than one museum. Most tellingly, though, he had an eccentric hobby. He was a theater organist. I can't help but think that his passion for the theater organ, an instrument almost defined by the combination of many gizmos under electromechanical control, inspired "Voyage." The film, often called a "multimedia experience," used multiple projectors throughout the planetarium to depict a far-future journey of exploration. The Omnimax film depicted travel through space, with slide projectors filling in artist's renderings of the many wonders of space. The ten-minute Omnimax film was produced by Graphic Films Corporation, a brand that would become closely associated with Omnimax in the following decades. Graphic was founded in the midst of the Second World War by Lester Novros, a former Disney animator who found a niche creating training films for the military. Novros's fascination with motion and expertise in presenting complicated 3D scenes drew him to aerospace, and after the war he found much of his business in the newly formed Air Force and NASA. He was also an enthusiast of niche film formats, and Omnimax was not his first dome. For the 1964 New York World's Fair, Novros and Graphic Films had produced "To the Moon and Beyond," a speculative science film with thematic similarities to "Voyage" and more than just a little mechanical similarity. It was presented in Cinerama 360, a semi-spherical, dome-theater 70mm format presented in a special theater called the Moon Dome. "To the Moon and Beyond" was influential in many ways, leading to Graphic Films' involvement in "2001: A Space Odyssey" and its enduring expertise in domes. The Fleet planetarium would not remain the only Omnimax for long. In 1975, the city of Spokane, Washington struggled to find a new application for the pavilion built for Expo '74 [3]. A top contender: an Omnimax theater, in some ways a replacement for the temporary IMAX theater that had been constructed for the actual Expo. Alas, this project was not to be, but others came along: in 1978, the Detroit Science Center opened the second Omnimax theater ("the machine itself looks like and is the size of a front loader," the Detroit Free Press wrote). The Science Museum of Minnesota, in St. Paul, followed shortly after. The Carnegie Science Center, in Pittsburgh, rounded out the year's new launches. Omnimax hit prime time the next year, with the 1979 announcement of an Omnimax theater at Caesars Palace in Las Vegas, Nevada. Unlike the previous installations, this 380-seat theater was purely commercial. It opened with the 1976 IMAX film "To Fly!," which had been optically modified to fit the Omnimax format. This choice of first film is illuminating. "To Fly!" is a 27 minute documentary on the history of aviation in the United States, originally produced for the IMAX theater at the National Air and Space Museum [4]. It doesn't exactly seem like casino fare. The IMAX format, the flat-screen one, was born of world's fairs. It premiered at an Expo, reappeared a couple of years later at another one, and for the first years of the format most of the IMAX theaters built were associated with either a major festival or an educational institution. 
This noncommercial history is a bit hard to square with the modern IMAX brand, closely associated with major theater chains and the Marvel Cinematic Universe. Well, IMAX took off, and in many ways it sold out. Over the decades since the 1970 Expo, IMAX has met widespread success with commercial films and theater owners. Simultaneously, the definition or criteria for IMAX theaters have relaxed, with smaller screens made permissible until, ultimately, the transition to digital projection eliminated the 70mm film and more or less reduced IMAX to just another ticket surcharge brand. It competes directly with Cinemark xD, for example. To the theater enthusiast, this is a pretty sad turn of events, a Westinghouse-esque zombification of a brand that once heralded the field's most impressive technical achievements. The same never happened to Omnimax. The Caesars Omnimax theater was an odd exception; the vast majority of Omnimax theaters were built by science museums and the vast majority of Omnimax films were science documentaries. Quite a few of those films had been specifically commissioned by science museums, often on the occasion of their Omnimax theater opening. The Omnimax community was fairly tight, and so the same names recur. The Graphic Films Corporation, which had been around since the beginning, remained so closely tied to the IMAX brand that they practically shared identities. Most Omnimax theaters, and some IMAX theaters, used to open with a vanity card often known as "the wormhole." It might be hard to describe beyond "if you know you know," but it certainly made an impression on everyone I know who grew up near a theater that used it. There are some videos, although unfortunately none of them are very good. I have spent more hours of my life than I am proud to admit trying to untangle the history of this clip. Over time, it has appeared in many theaters with many different logos at the end, and several variations of the audio track. This is in part informed speculation, but here is what I believe to be true: the "wormhole" was originally created by Graphic Films for the Fleet planetarium specifically, and ran before "Voyage to the Outer Planets" and its double-feature companion "Garden Isle," both of which Graphic Films had worked on. This original version ended with the name Graphic Films, accompanied by an odd sketchy drawing that was also used as an early logo of the IMAX Corporation. Later, the same animation was re-edited to end with an IMAX logo. This version ran in both Omnimax and conventional IMAX theaters, probably as a result of the extensive "cross-pollination" of films between the two formats. Many Omnimax films through the life of the format had actually been filmed for IMAX, with conventional lenses, and then optically modified to fit the Omnimax dome after the fact. You could usually tell: the reprojection process created an unusual warp in the image, and more tellingly, these pseudo-Omnimax films almost always centered the action at the middle of the IMAX frame, which was too high to be quite comfortable in an Omnimax theater (where the "frame center" was well above the "front center" point of the theater). Graphic Films had been involved in a lot of these as well, perhaps explaining the animation reuse, but it's just as likely that they had sold it outright to the IMAX Corporation, which used it as they pleased. For some reason, this version also received new audio that is mostly the same but slightly different. 
I don't have a definitive explanation, but I think there may have been an audio format change between the very early Omnimax theaters and later IMAX/Omnimax systems, which might have required remastering. Later, as Omnimax domes proliferated at science museums, the IMAX Corporation (which very actively promoted Omnimax to education) gave many of these theaters custom versions of the vanity card that ended with the science museum's own logo. I have personally seen two of these, so I feel pretty confident that they exist and weren't all that rare (basically 2 out of 2 Omnimax theaters I've visited used one), but I cannot find any preserved copies. Another recurring name in the world of IMAX and Omnimax is MacGillivray Freeman Films. MacGillivray and Freeman were a pair of teenage friends from Laguna Beach who dropped out of school in the '60s to make skateboard and surf films. This is, of course, a rather cliché start for documentary filmmakers, but we must allow that it was the '60s and they were pretty much the ones creating the cliché. Their early films are hard to find in anything better than VHS rip quality, but worth watching: Wikipedia notes their significance in pioneering "action cameras," mounting 16mm cinema cameras to skateboards and surfboards, but I would say that their cinematography was innovative in more ways than just one. The 1970 "Catch the Joy," about sandrails, has some incredible shots that I struggle to explain. There's at least one where they definitely cut the shot just a couple of frames before a drifting sandrail flung their camera all the way down the dune. For some reason, I would speculate due to their reputation for exciting cinematography, the National Air and Space Museum chose MacGillivray and Freeman for "To Fly!". While not the first science museum IMAX documentary by any means (that was, presumably, "Voyage to the Outer Planets" given the different subject matter of the various Expo films), "To Fly!" might be called the first modern one. It set the pattern that decades of science museum films followed: a film initially written by science educators, punched up by producers, and filmed with the very best technology of the time. Fearing that the film's history content would be dry, they pivoted more towards entertainment, adding jokes and action sequences. "To Fly!" was a hit, running in just about every science museum with an IMAX theater, including Omnimax. Sadly, Jim Freeman died in a helicopter crash shortly after production. Nonetheless, MacGillivray Freeman Films went on. Over the following decades, few IMAX science documentaries were made that didn't involve them somehow. Besides the films they produced, the company consulted on action sequences in most of the format's popular features. I had hoped to present here a thorough history of the films that were actually produced in the Omnimax format. Unfortunately, this has proven very difficult: the fact that most of them were distributed only to science museums means that they are very spottily remembered, and besides, so many of the films that ran in Omnimax theaters were converted from IMAX presentations that it's hard to tell the two apart. I'm disappointed that this part of cinema history isn't better recorded, and I'll continue to put time into the effort. Science museum documentaries don't get a lot of attention, but many of them have involved formidable technical efforts. Consider, for example, the cameras: befitting the large film, IMAX cameras themselves are very large. 
When filming "To Fly!", MacGillivray and Freeman complained that the technically very basic 80-pound cameras required a lot of maintenance, were complex to operate, and wouldn't fit into the "action cam" mounting positions they were used to. The cameras were so expensive, and so rare, that they had to be far more conservative than their usual approach out of fear of damaging a camera they would not be able to replace. It turns out that they had it easy. Later IMAX science documentaries would be filmed in space ("The Dream is Alive" among others) and deep underwater ("Deep Sea 3D" among others). These IMAX cameras, modified for simpler operation and housed for such difficult environments, weighed over 1,000 pounds. Astronauts had to be trained to operate the cameras; mission specialists on Hubble service missions counted wrangling a 70-pound handheld IMAX camera around the cabin, and developing its film in a darkroom bag, among their duties. There was a lot of film to handle: as a rule of thumb, one mile of IMAX film is good for roughly fifteen minutes of runtime. I grew up in Portland, Oregon, and so we will make things a bit more approachable by focusing on one example: the Omnimax theater of the Oregon Museum of Science and Industry, which opened as part of the museum's new waterfront location in 1992. This 330-seat theater boasted a 10,000 sq ft dome and 15 kW of sound. The premiere feature was "Ring of Fire," a volcano documentary originally commissioned by the Fleet, the Fort Worth Museum of Science and History, and the Science Museum of Minnesota. By the 1990s, the later era of Omnimax, the dome format was all but abandoned as a commercial concept. There were, an announcement article notes, around 90 total IMAX theaters (including Omnimax) and 80 Omnimax films (including those converted from IMAX) in '92. Considering the heavy bias towards science museums among these theaters, it was very common for the films to be funded by consortia of those museums. Given the high cost of filming in IMAX, a lot of the documentaries had a sort of "mashup" feel. They would combine footage taken in different times and places, often originally for other projects, into a new narrative. "Ring of Fire" was no exception, consisting of a series of sections that were sometimes only loosely connected to the theme. The 1989 Loma Prieta earthquake was a focus, as were the eruption of Mt. St. Helens and lava flows in Hawaii. Perhaps one of the reasons it's hard to catalog IMAX films is this mashup quality: many of the titles carried at science museums were something along the lines of "another ocean one." I don't mean this as a criticism; many of the IMAX documentaries were excellent, but they were necessarily composed from painstakingly gathered fragments and had to cover wide topics. Given that I have an announcement feature piece in front of me, let's also use the example of OMSI to discuss the technical aspects. OMSI's projector cost about $2 million and weighed about two tons. To avoid dust damaging the expensive prints, the "projection room" under the seating was a positive-pressure cleanroom. This was especially important since the paucity of Omnimax content meant that many films ran regularly for years. The 15 kW water-cooled lamp required replacement at 800 to 1,000 hours, but unfortunately, the price is not noted. By the 1990s, Omnimax had become a rare enough system that the projection technology was a major part of the appeal. 
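That film-handling rule of thumb is easy to sanity-check. Below is a back-of-envelope sketch assuming the standard published 15/70 figures (24 frames per second, 15 perforations per frame, and the usual 70mm perforation pitch of about 4.75 mm); the numbers are mine, not from the announcement piece.

# Rough film-consumption math for 15/70 IMAX, using standard published figures.
PERF_PITCH_MM = 4.75       # KS perforation pitch, about 0.187 in
PERFS_PER_FRAME = 15       # the 15/70 format advances 15 perforations per frame
FPS = 24                   # standard frame rate

mm_per_second = PERF_PITCH_MM * PERFS_PER_FRAME * FPS    # ~1710 mm/s
feet_per_minute = mm_per_second * 60 / 304.8             # ~337 ft/min
minutes_per_mile = 5280 / feet_per_minute                # ~15.7 minutes

print(f"{feet_per_minute:.0f} ft/min; one mile of film runs for {minutes_per_mile:.1f} minutes")
# A three-hour feature works out to roughly 11.5 miles of film, which lines up
# with the Oppenheimer figures quoted later in this article.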
OMSI's installation, like most later Omnimax theaters, had the audience queue below the seating, separated from the projection room by a glass wall. The high cost of these theaters meant that they operated on high turnover, so patrons would wait in line to enter immediately after the previous showing had exited. While they waited, they could watch the projectionist prepare the next show while a museum docent explained the equipment. I have written before about multi-channel audio formats, and Omnimax gives us some more to consider. The conventional audio format for much of Omnimax's life was six-channel: left rear, left screen, center screen, right screen, right rear, and top. Each channel had an independent bass cabinet (in one theater, a "caravan-sized" enclosure with eight JBL 2245H 46cm woofers), and a crossover network fed the lowest end of all six channels to a "sub-bass" array at screen bottom. The original Fleet installation also had sub-bass speakers located beneath the audience seating, although that doesn't seem to have become common. IMAX titles of the '70s and '80s delivered audio on eight-track magnetic tape, with the additional tracks used for synchronization to the film. By the '90s, IMAX had switched to distributing digital audio on three CDs (one for each pair of channels). OMSI's theater was equipped for both, and the announcement amusingly notes the availability of cassette decks. A semi-custom audio processor made for IMAX, the Sonics TAC-86, managed synchronization with film playback and applied equalization curves individually calibrated to the theater. IMAX domes used perforated aluminum screens (also the norm in later planetaria), so the speakers were placed behind the screen in the scaffold-like superstructure that supported it. When I was young, OMSI used to start presentations with a demo program that explained the large size of IMAX film before illuminating work lights behind the screen to make the speakers visible. Much of this was the work of the surprisingly sophisticated show control system employed by Omnimax theaters, a descendant of the PDP-15 originally installed in the Fleet. Despite Omnimax's almost complete consignment to science museums, there were some efforts at bringing in commercial films. Titles like Disney's "Fantasia" and "Star Wars: Episode III" were distributed to Omnimax theaters via optical reprojection, sometimes even from 35mm originals. Unfortunately, the quality of these adaptations was rarely satisfactory, and the short runs (and marketing and exclusivity deals) typical of major commercial releases did not always work well with science museum schedules. Still, the cost of converting an existing film to dome format is pretty low, so the practice continues today. "Star Wars: The Force Awakens," for example, ran on at least one science museum dome. This trickle of blockbusters was not enough to make commercial Omnimax theaters viable. Caesars Palace closed, and then demolished, their Omnimax theater in 2000. The turn of the 21st century was very much the beginning of the end for the dome theater. IMAX was moving away from their film system and towards digital projection, but digital projection systems suitable for large domes were still a nascent technology and extremely expensive. 
The end of aggressive support from IMAX meant that filming costs became impractical for documentaries, so while some significant IMAX science museum films were made in the 2000s, the volume began to taper off and the overall industry moved away from IMAX in general and Omnimax especially. It's surprising how unforeseen this was, at least by some: a ten-screen commercial theater in Duluth opened an Omnimax theater in 1996! Perhaps due to the sunk cost, it ran until 2010, not a bad closing date for an Omnimax theater. Science museums, with their relatively tight budgets and less competitive nature, did tend to hold over existing Omnimax installations well past their prime. Unfortunately, many didn't: OMSI, for example, closed its Omnimax theater in 2013 for replacement with a conventional digital theater that has a large screen but is not IMAX branded. Fortunately, some operators hung onto their increasingly costly Omnimax domes long enough for modernization to become practical. The IMAX Corporation abandoned the Omnimax name as more of the theaters closed, but continued to support "IMAX Dome" with the introduction of a digital laser projector with spherical optics. There are only ten examples of this system. Others, including Omnimax's flagship at the Fleet Science Center, have been replaced by custom dome projection systems built by competitors like Sony. Few Omnimax projectors remain. The Fleet, to their credit, installed the modern laser projectors in front of the projector well so that the original film projector could remain in place. It's still functional and used for reprises of Omnimax-era documentaries. IMAX projectors in general are a dying breed: a number of them have been preserved, but their complex, specialized design and the end of vendor support mean that it may become infeasible to keep them operating. We are, of course, well into the digital era. While far from inexpensive, digital projection systems are now able to match the quality of Omnimax projection. The newest dome theaters, like the Sphere, dispense with projection entirely. Instead, they use LED display panels capable of far brighter and more vivid images than projection, and with none of the complexity of water-cooled arc lamps. Still, something has been lost. There was once a parallel theater industry, a world with none of the glamor of Hollywood but for which James Cameron hauled a camera to the depths of the ocean and Leonardo DiCaprio narrated repairs to the Hubble. In a good few dozen science museums, two-ton behemoths rose from beneath the seats, the zenith of film projection technology. After decades of documentaries, I think people forgot how remarkable these theaters were. Science museums stopped promoting them as aggressively, and much of the showmanship faded away. Sometime in the 2000s, OMSI stopped running the pre-show demonstration, instead starting the film directly. They stopped explaining the projectionist's work in preparing the show, and as they shifted their schedule towards direct repetition of one feature, there was less for the projectionist to do anyway. It became just another museum theater, so it's no wonder that they replaced it with just another museum theater: a generic big-screen setup with the exceptionally dull name of "Empirical Theater." From time to time, there have been whispers of a resurgence of 70mm film. Oppenheimer, for example, was distributed to a small number of theaters in this giant of film formats: 53 reels, 11 miles, 600 pounds of film. 
Even conventional IMAX is too costly for the modern theater industry, though. Omnimax has fallen completely by the wayside, with the few remaining dome operators doomed to recycling the same films with a sprinkling of newer reformatted features. It is hard to imagine a collective of science museums sending another film camera to space. Omnimax poses a preservation challenge in more ways than one. Besides the lack of documentation on Omnimax theaters and films, there are precious few photographs of Omnimax theaters and even fewer videos of their presentations. Of course, the historian suffers where Madison Square Garden hopes to succeed: the dome theater is perhaps the ultimate in location-based entertainment. Photos and videos, represented on a flat screen, cannot reproduce the experience of the Omnimax theater. The 180 horizontal degrees of screen, the sound that was always a little too loud, in no small part to mask the sound of the projector that made its own racket in the middle of the seating. You had to be there. IMAGES: Omnimax projection room at OMSI, Flickr user truk. Omnimax dome with work lights on at MSI Chicago, Wikimedia Commons user GualdimG. Omnimax projector at St. Louis Science Center, Flickr user pasa47. [1] I don't have extensive information on pricing, but I know that in the 1960s an "economy" Spitz came in over $30,000 (~10x that much today). [2] Pink Floyd's landmark album The Dark Side of the Moon debuted in a release event held at the London Planetarium. This connection between Pink Floyd and planetaria, apparently much disliked by the band itself, has persisted to the present day. Several generations of Pink Floyd laser shows have been licensed by science museums around the world, and must represent by far the largest success of fixed-installation laser projection. [3] Are you starting to detect a theme with these Expos? The World's Fairs, including in their various forms as Expos, were long one of the main markets for niche film formats. For any given weird projection format you run into, there's a decent chance that it was originally developed for some short film for an Expo. Keep in mind that it's the nature of niche projection formats that they cannot easily be shown in conventional theaters, so they end up coupled to these crowd events where a custom venue can be built. [4] The Smithsonian Institution started looking for an exciting new theater in 1970. As an example of the various niche film formats at the time, the Smithsonian considered a dome (presumably Omnimax), Cinerama (a three-projector ultrawide system), and Circle-Vision 360 (known mostly for the few surviving Expo films at Disney World's EPCOT) before settling on IMAX. The Smithsonian theater, first planned for the Smithsonian Museum of Natural History before being integrated into the new National Air and Space Museum, was tremendously influential on the broader world of science museum films. That is perhaps an understatement: it is sometimes credited with popularizing IMAX in general, and the newspaper coverage the new theater received throughout North America lends credence to the idea. It is interesting, then, to imagine how different our world would be if they had chosen Circle-Vision. "Captain America: Brave New World" in Circle-Vision 360.


More in technology

2025-08-16 passive microwave repeaters

One of the most significant single advancements in telecommunications technology was the development of microwave radio. Essentially an evolution of radar, the middle of the Second World War saw the first practical microwave telephone system. By the time Japan surrendered, AT&T had largely abandoned their plan to build an extensive nationwide network of coaxial telephone cables. Microwave relay offered greater capacity at a lower cost. When Japan and the US signed their peace treaty in 1951, it was broadcast from coast to coast over what AT&T called the "skyway": the first transcontinental telephone lead made up entirely of radio waves. The fact that live television coverage could be sent over the microwave system demonstrated its core advantage. The bandwidth of microwave links, their capacity, was truly enormous. Within the decade, a single microwave antenna could handle over 1,000 simultaneous calls. Microwave's great capacity, its chief advantage, comes from the high frequencies and large bandwidths involved. The design of microwave-frequency radio electronics was an engineering challenge that was aggressively attacked during the war because microwave frequency's short wavelengths made them especially suitable for radar. The cavity magnetron, one of the first practical microwave transmitters, was an invention of such import that it was the UK's key contribution to a technical partnership that lead to the UK's access to US nuclear weapons research. Unlike the "peaceful atom," though, the "peaceful microwave" spread fast after the war. By the end of the 1950s, most long-distance telephone calls were carried over microwave. While coaxial long-distance carriers such as L-carrier saw continued use in especially congested areas, the supremacy of microwave for telephone communications would not fall until adoption of fiber optics in the 1980s. The high frequency, and short wavelength, of microwave radio is a limitation as well as an advantage. Historically, "microwave" was often used to refer to radio bands above VHF, including UHF. As RF technology improved, microwave shifted higher, and microwave telephone links operated mostly between 1 and 9 GHz. These frequencies are well beyond the limits of beyond-line-of-sight propagation mechanisms, and penetrate and reflect only poorly. Microwave signals could be received over 40 or 50 miles in ideal conditions, but the two antennas needed to be within direct line of sight. Further complicating planning, microwave signals are especially vulnerable to interference due to obstacles within the "fresnel zone," the region around the direct line of sight through which most of the received RF energy passes. Today, these problems have become relatively easy to overcome. Microwave relays, stations that receive signals and rebroadcast them further along a route, are located in positions of geographical advantage. We tend to think of mountain peaks and rocky ridges, but 1950s microwave equipment was large and required significant power and cooling, not to mention frequent attendance by a technician for inspection and adjustment. This was a tube-based technology, with analog and electromechanical control. Microwave stations ran over a thousand square feet, often of thick hardened concrete in the post-war climate and for more consistent temperature regulation, critical to keeping analog equipment on calibration. Where commercial power wasn't available they consumed a constant supply of diesel fuel. 
It simply wasn't practical to put microwave stations in remote locations. In the flatter regions of the country, locating microwave stations on hills gave them appreciably better range with few downsides. This strategy often stopped at the Rocky Mountains. In much of the American West, telephone construction had always been exceptionally difficult. Open-wire telephone leads had been installed through incredible terrain by the dedication and sacrifice of crews of men and horses. Wire strung over telephone poles proved able to handle steep inclines and rocky badlands, so long as the poles could be set---although inclement weather on the route could make calls difficult to understand. When the first transcontinental coaxial lead was installed, the route was carefully planned to follow flat valley floors whenever possible. This was an important requirement since it was installed mostly by mechanized equipment, heavy machines, which were incapable of navigating the obstacles that the old pole and wire crews had managed on foot. The first installations of microwave adopted largely the same strategy. Despite the commanding views offered by mountains on both sides of the Rio Grande Valley, AT&T's microwave stations are often found on low mesas or even at the center of the valley floor. Later installations, and those in the especially mountainous states where level ground was scarce, became more ambitious. At Mt. Rose, in Nevada, an aerial tramway carried technicians up the slope to the roof of the microwave station---the only access during winter when snowpack reached high up the building's walls. Expansion in the 1960s involved increasing use of helicopters as the main access to stations, although roads still had to be graded for construction and electrical service. These special arrangements for mountain locations were expensive, within the reach of the Long Lines department's monopoly-backed budget but difficult for anyone else, even Bell Operating Companies, to sustain. And the West---where these difficult conditions were encountered the most---also contained some of the least profitable telephone territory, areas where there was no interconnected phone service at all until government subsidy under the Rural Electrification Act. Independent telephone companies and telephone cooperatives, many of them scrappy operations that had expanded out from the manager's personal home, could scarcely afford a mountaintop fortress and a helilift operation to sustain it. For the telephone industry's many small players, and even the more rural Bell Operating Companies, another property of microwave became critical: with a little engineering, you can bounce it off of a mirror. James Kreitzberg was, at least as the obituary reads, something of a wunderkind. Raised in Missoula, Montana, he earned his pilot's license at 15 and joined the Army Air Corps as soon as he was allowed. The Second World War came to a close shortly after, and so he went on to the University of Washington, where he studied aeronautical engineering, and then went back home to Montana, taking up work as an engineer at one of the state's largest electrical utilities. His brother, George, had taken a similar path: a stint in the Marine Corps and an aeronautical engineering degree from Oklahoma. While James worked at Montana Power in Butte, George moved to Salem, Oregon, where he started an aviation company that supplemented their cropdusting revenue by modifying Army-surplus aircraft for other uses. 
Montana Power operated hydroelectric dams, coal mines, and power plants, a portfolio of facilities across a sparse and mountainous state that must have made communications a difficult problem. During the 1950s, James was involved in an effort to build a new private telephone system connecting the utility's facilities. It required negotiating some type of obstacle, perhaps a mountain pass. James proposed an idea: a mirror. Because the wavelengths of microwaves are so short, say 30cm to 5cm (1GHz-6GHz), it's practical to build a flat metallic panel that spans multiple wavelengths. Such a panel will function like a reflector or mirror, redirecting microwave energy at an angle equal to the angle at which it arrived. Much like you can redirect a laser using reflectors, you can also redirect a microwave signal. Some early commentators referred to this technique as a "radio mirror," but by the 1950s the use of "active" microwave repeaters with receivers and transmitters had become well established, so by comparison reflectors came to be known as "passive repeaters." James believed a passive repeater to be a practical solution, but Montana Power lacked the expertise to build one. For a passive repeater to work efficiently, its surface must be very flat and regular, even under varying temperature. Wind loading had to be accounted for, and the face had to be sufficiently rigid not to flex under the wind. Of course, with his education in aeronautics, James knew that similar problems were encountered in aircraft: the need for lightweight metal structures with surfaces that kept an engineered shape. Wasn't he fortunate, then, that his brother owned a shop that repaired and modified aircraft. I know very little about the original Montana Power installation, which is unfortunate, as it may very well be the first passive microwave repeater ever put into service. What I do know is that in the fall of 1955, James called his brother George and asked if his company, Kreitzberg Aviation, could fabricate a passive repeater for Montana Power. George, he later recounted, said that "I can build anything you can draw." The repeater was made in a hangar on the side of Salem's McNary Field, erected by the flightline as a test, and then shipped in parts to Montana for reassembly in the field. It worked. It worked so well, in fact, that as word of Montana Power's new telephone system spread, other utilities wrote to inquire about obtaining passive repeaters for their own telephone systems. In 1956, James Kreitzberg moved to Salem and the two brothers formed the Microflect Company. From the sidelines of McNary Field, Microflect built aluminum "billboards" that can still be found on mountain passes and forested slopes throughout the western United States, and in many other parts of the world where mountainous terrain, adverse weather, and limited utilities made the construction of active repeaters impractical. Passive repeaters can be used in two basic configurations, defined by the angle at which the signal is reflected. In the first case, the reflection angle is around 90 degrees (the closer to this ideal angle, of course, the more efficiently the repeater performs). This situation is often encountered when there is an obstacle that the microwave path needs to "maneuver" around. For example, a ridge or even a large structure like a building in between two sites. In the second case, the microwave signal must travel in something closer to a straight line---over a mountain pass between two towns, for example. 
When the reflection angle is greater than 135 degrees, the use of a single passive repeater becomes inefficient or impossible, so Microflect recommends the use of two. Arranged like a dogleg or periscope, the two repeaters reflect the signal to the side and then onward in the intended direction. Microflect published an excellent engineering manual with many examples of passive repeater installations along with the signal calculations. You might think that passive repeaters would be so inefficient as to be impractical, especially when more than one was required, but this is surprisingly untrue. Flat aluminum panels are almost completely efficient reflectors of microwave, and somewhat counterintuitively, passive repeaters can even provide gain. In an active repeater, it's easy to see how gain is achieved: power is added. A receiver picks up a signal, and then a powered transmitter retransmits it, stronger than it was before. But passive repeaters require no power at all, one of their key advantages. How do they pull off this feat? The design manual explains with an ITU definition of gain that only an engineer could love, but in an article for "Electronics World," Microflect field engineer Ray Thrower provided a more intuitive explanation. A passive repeater, he writes, functions essentially identically to a parabolic antenna, or a telescope: Quite probably the difficulty many people have in understanding how the passive repeater, a flat surface, can have gain relates back to the common misconception about parabolic antennas. It is commonly believed that it is the focusing characteristics of the parabolic antenna that gives it its gain. Therefore, goes the faulty conclusion, how can the passive repeater have gain? The truth is, it isn't focusing that gives a parabola its gain; it is its larger projected aperture. The focusing is a convenient means of transition from a large aperture (the dish) to a small aperture (the feed device). And since it is projected aperture that provides gain, rather than focusing, the passive repeater with its larger aperture will provide high gain that can be calculated and measured reliably. A check of the method of determining antenna gain in any antenna engineering handbook will show that focusing does not enter into the basic gain calculation. We can also think of it this way: the beam of energy emitted by a microwave antenna expands in an arc as it travels, dissipating the "density" of the energy such that a dish antenna of the same size will receive a weaker and weaker signal as it moves further away (this is the major component of path loss, the "dilution" of the energy over space). A passive repeater employs a reflecting surface which is quite large, larger than practical antennas, and so it "collects" a large cross section of that energy for reemission. Projected aperture is the effective "window" of energy seen by the antenna at the active terminal as it views the passive repeater. The passive repeater also sees the antenna as a "window" of energy. If the two are far enough away from one another, they will appear to each other as essentially point sources. In practice, a passive repeater functions a bit like an active repeater that collects a signal with a large antenna and then reemits it with a smaller directional antenna. To be quite honest, I still find it a bit challenging to intuit this effect, but the mathematics bear it out as well. 
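To make that concrete, here is a minimal sketch of the math, using the standard flat-reflector treatment of the panel as an aperture: two-way gain is roughly 20*log10(4*pi*A*cos(alpha)/lambda^2), where A is the panel area and alpha is half the included angle between the two paths. The 60-degree included angle below is my own illustrative assumption; the angle behind Microflect's catalog figures isn't stated here.

from math import cos, log10, pi, radians

def passive_repeater_gain_db(width_ft, height_ft, freq_ghz, included_angle_deg):
    # Two-way gain of a flat "billboard" reflector treated as an aperture:
    # G = 20*log10(4*pi*A*cos(alpha)/lambda^2), alpha = half the included angle.
    area_m2 = (width_ft * 0.3048) * (height_ft * 0.3048)
    wavelength_m = 0.299792458 / freq_ghz
    alpha = radians(included_angle_deg / 2)
    return 20 * log10(4 * pi * area_m2 * cos(alpha) / wavelength_m ** 2)

# An 8'x10' panel at 6.175 GHz with a 60-degree included angle:
print(round(passive_repeater_gain_db(8, 10, 6.175, 60), 1))   # ~90.7 dB

That lands within a decibel or so of the catalog figure for Microflect's smallest standard panel, quoted below.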
Interestingly, the effect only occurs when the passive repeater is far enough from either terminal so as to be usefully approximated as a point source. Microflect refers to this as the far field condition. When the passive repeater is very close to one of the active sites, within the near field, it is more effective to consider the passive reflector as part of the transmitting antenna itself, and disregard it for path loss calculations. This dichotomy between far field and near field behavior is actually quite common in antenna engineering (where an "antenna" is often multiple radiating and nonradiating elements within the near field of each other), but it's yet another of the things that gives antenna design the feeling of a dark art. One of the most striking things about passive repeaters is their size. As a passive repeater becomes larger, it reflects a larger cross section of the RF energy and thus provides more gain. Much like with dish or horn antennas, the size of a passive repeater can be traded off with transmitter power (and the size of other antennas involved) to design an economical solution. Microflect offered standard sizes ranging from 8'x10' (gain at around 6.175GHz: 90.95 dB) to 40'x60' (120.48 dB, after a "rough estimate" reduction of 1 dB for the interference effects possible when such a short wavelength reflects off of such a large panel and invokes multipath effects). By comparison, a typical active microwave repeater site might provide a gain of around 140dB---and we must bear in mind that dB is a logarithmic unit, so the difference between 121 and 140 is bigger than it sounds. Still, there's a reason that logarithms are used when discussing radio paths... in practice, it is orders of magnitude that make the difference in reliable reception. The reduction in gain from an active repeater to a passive repeater can be made up for with higher-gain terminal antennas and more powerful transmitters. Given that the terminal sites are often at far more convenient locations than the passive repeater, that tradeoff can be well worth it. Keep in mind that, as Microflect emphasizes, passive repeaters require no power and very little ("virtually no") maintenance. Microflect passive repeaters were manufactured in sections that bolted together in the field, and the support structures provided for fine adjustment of the panel alignment after mounting. These features made it possible to install passive repeaters by helicopter onto simple site-built foundations, and many are found on mountainsides that are difficult to reach even on foot. Even in less difficult locations, these advantages made passive repeaters less expensive to install and operate than active repeaters. Even when the repeater site was readily accessible, passives were often selected simply for cost savings. Let's consider some examples of passive repeater installations. Microflect was born of the power industry, and electrical generators and utilities remained one of their best customers. Even today, you can find passive repeaters at many hydroelectric dams. There is a practical need to communicate by telephone between a dispatch center (often at the utility's city headquarters) and the operators in the dam's powerhouse, but the powerhouse is at the base of the dam, often in a canyon where microwave signals are completely blocked. A passive repeater set on the canyon rim, at an angle downwards, solves the problem by redirecting the signal from horizontal to vertical. 
Such an installation can be seen, for example, at the Hoover Dam. In some sense, these passive repeaters "relocate" the radio equipment from the canyon rim (where the desirable signal path is located) to a more convenient location with the other powerhouse equipment. Because of the short distance from the powerhouse to the repeater, these passives were usually small. This idea can be extended to relocating en-route repeaters to a more serviceable site. In Glacier National Park, Mountain States Telephone and Telegraph installed a telephone system to serve various small towns and National Park Service sites. Glacier is incredibly mountainous, with only narrow valleys and passes. The only points with long sight ranges tend to be very inaccessible. Mt. Furlong provided ideal line of sight to East Glacier and Essex along highway 2, but it would have been extremely challenging to install and maintain a microwave site on the steep peak. Instead, two passive repeaters were installed near the mountaintop, redirecting the signals from those two destinations to an active repeater installed downslope near the highway and railroad. This example raises another advantage of passive repeaters: their reduced environmental impact, something that Microflect emphasized as the environmental movement of the 1970s made agencies like the Forest Service (which controlled many of the most appealing mountaintop radio sites) less willing to grant permits that would lead to extensive environmental disruption. Construction by helicopter and the lack of a need for power meant that passive repeaters could be installed without extensive clearing of trees for roads and power line rights of way. They eliminated the persistent problem of leakage from standby generator fuel tanks. Despite their large size, passive repeaters could be camouflaged. Many in national forests were painted green to make them less conspicuous. And while they did have a large surface area, Microflect argued that since they could be installed on slopes rather than requiring a large leveled area, passive repeaters would often fall below the ridge or treeline behind them. This made them less visually conspicuous than a traditional active repeater site that would require a tower. Indeed, passive repeaters are only rarely found on towers, with most elevated off the ground only far enough for the bottom edge to be free of undergrowth and snow. Other passive repeater installations were less a result of exceptionally difficult terrain and more a simple cost optimization. In rural Nevada, Nevada Bell and a dozen independents and coops faced the challenge of connecting small towns with ridges between them. The need for an active repeater at the top of each ridge, even for short routes, made these rural lines excessively expensive. Instead, such towns were linked with dual passive repeaters on the ridge in a "straight through" configuration, allowing microwave antennas at the towns' existing telephone exchange buildings to reach each other. This was the case with the installation I photographed above Pioche. I have been frustratingly unable to confirm the original use of these repeaters, but from context they were likely installed by the Lincoln County Telephone System to link their "hub" microwave site at Mt. Wilson (with direct sight to several towns) to their site near Caliente. The Microflect manual describes, as an example, a very similar installation connecting Elko to Carlin. 
Two 20'x32' passive repeaters on a ridge between the two (unfortunately since demolished) provided a direct connection between the two telephone exchanges. As an example of a typical use, it might be interesting to look at the manual's calculations for this route. From Elko to the repeaters is 13.73 miles, the repeaters are close enough to each other as to be in near field (and so considered as a single antenna system), and from the repeaters to Carlin is 6.71 miles. The first repeater reflects the signal at a 68 degree angle, then the second reflects it back at a 45 degree angle, for a net change in direction of 23 degrees---a mostly straight route. The transmitter produces 33.0 dBm, both antennas provide a 34.5 dB gain, and the passive repeater assembly provides 88 dB gain (this is calculated basically by consulting a table in the manual). That means there is 190 dB of gain in the total system. The 6.71 and 13.73 mile paths add up to 244 dB of free space path loss, and Microflect throws in a few more dB of loss to account for connectors and cables and the less than ideal performance of the double passive repeater. The net result is a received signal of -58 dBm, which is plenty acceptable for a 72-channel voice carrier system. This is all done at a significantly lower price than the construction of a full radio site on the ridge [1]. The combination of relocating radio equipment to a more convenient location and simply saving money leads to one of the iconic applications of passive repeaters, the "periscope" or "flyswatter" antenna. Microwave antennas of the 1960s were still quite large and heavy, and most were pressurized. You needed a sturdy tower to support one, and then a way to get up the tower for regular maintenance. This led to most AT&T microwave sites using short, squat square towers, often with surprisingly convenient staircases to access the antenna decks. In areas where a very tall tower was needed, it might just not be practical to build one strong enough. You could often dodge the problem by putting the site up a hill, but that wasn't always possible, and besides, good hilltop sites that weren't already taken became harder to find. When Western Union built out their microwave network, they widely adopted the flyswatter antenna as an optimization. Here's how it works: the actual microwave antenna is installed directly on the roof of the equipment building facing up. Only short waveguides are needed, weight isn't an issue, and technicians can conveniently service the antenna without even needing fall protection. Then, at the top of a tall guyed lattice tower similar to an AM mast, a passive repeater is installed at a 45 degree angle to the ground, redirecting the signal from the rooftop antenna to the horizontal. The passive repeater is much lighter than the antenna, allowing for a thinner tower, and will rarely if ever need service. Western Union often employed two side-by-side lattice towers with a "crossbar" between them at the top for convenient mounting of reflectors in each direction, and similar towers were used in some other installations such as the FAA's radar data links. Some of these towers are still in use, although generally with modern lightweight drum antennas replacing the reflectors. Passive microwave repeaters experienced their peak popularity during the 1960s and 1970s, as the technology became mature and communications infrastructure proliferated. 
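Returning to the Elko-Carlin numbers for a moment, the same link budget can be written out as a short script. The manual excerpt above doesn't state the operating frequency; roughly 2 GHz reproduces the quoted 244 dB of free-space loss, so that is assumed here, and the 4 dB of miscellaneous loss is my own rough stand-in for the "few more dB" Microflect adds.

from math import log10

def fspl_db(distance_miles, freq_mhz):
    # Free-space path loss with distance in miles and frequency in MHz.
    return 36.58 + 20 * log10(distance_miles) + 20 * log10(freq_mhz)

freq_mhz = 2000          # assumed; not stated in the excerpt above
tx_dbm = 33.0
antenna_gain_db = 34.5   # each terminal antenna
passive_gain_db = 88.0   # double passive assembly, from the Microflect table
misc_losses_db = 4.0     # connectors, cables, double-passive inefficiency (rough)

loss_db = fspl_db(13.73, freq_mhz) + fspl_db(6.71, freq_mhz)
rx_dbm = tx_dbm + 2 * antenna_gain_db + passive_gain_db - loss_db - misc_losses_db
print(f"path loss {loss_db:.0f} dB, received {rx_dbm:.0f} dBm")   # ~244 dB, ~-58 dBm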
Microflect manufactured thousands of units from their new, larger warehouse, across the street from their old hangar on McNary Field. Microflect's customer list grew to just about every entity in the Bell System, from Long Lines to Western Electric to nearly all of the BOCs. The list includes GTE, dozens of smaller independent telephone companies, most of the nation's major railroads, and electrical utilities from the original Montana Power to the Tennessee Valley Authority. Microflect repeaters were used by ITT Arctic Services and RCA Alascom in the far north, and overseas by oil companies and telecoms on islands and in mountainous northern Europe. In Hawaii, a single passive repeater dodged a mountain to connect Lanai City telephones to the Hawaii Telephone Company network at Tantalus on Oahu---nearly 70 miles in one jump. In Nevada, six passive repeaters joined two active sites to connect six substations to the Sierra Pacific Power Company's control center in Reno. Jamaica's first high-capacity telephone network involved 11 passive repeaters, one as large as 40'x60'. The Rocky Mountains are still dotted with passive repeaters, structures that are sometimes hard to spot but seem to loom over the forest once noticed. In Seligman, AZ, a sun-faded passive repeater looks over the cemetery. BC Telephone installed passive repeaters to phase out active sites that were inaccessible for maintenance during the winter. Passive repeaters were, it turns out, quite common---and yet they are little known today. First, it cannot be ignored that passive repeaters are most common in areas where communications infrastructure was built post-1960 through difficult terrain. In North America, this means mostly the West [2], far away from the Eastern cities where we think of telephone history being concentrated. Second, the days of passive repeaters were relatively short. After widespread adoption in the '60s, fiber optics began to cut into microwave networks during the '80s and rendered microwave long-distance links largely obsolete by the late '90s. Considerable improvements in cable-laying equipment, not to mention the lighter and more durable cables, made fiber optics easier to install in difficult terrain than coaxial had ever been. Besides, during the 1990s, more widespread electrical infrastructure, miniaturization of radio equipment, and practical photovoltaic solar systems all combined to make active repeaters easier to install. Today, active repeater systems installed by helicopter with independent power supplies are not that unusual, supporting cellular service in the Mojave Desert, for example. Most passive repeaters have been obsoleted by changes in communications networks and technologies. Satellite communications offer an even more cost effective option for the most difficult installations, and there really aren't that many places left that a small active microwave site can't be installed. Moreover, little has been done to preserve the history of passive repeaters. In the wake of the 2015 Wired article on the Long Lines network, considerable enthusiasm has been directed towards former AT&T microwave stations, which have been mostly preserved by their haphazard transfer to companies like American Tower. Passive repeaters, lacking even the minimal commercial potential of old AT&T sites, were mostly abandoned in place. Because they are often found in national forests and other resource management areas, many have been demolished as part of restoration efforts. 
In 2019, a historic resources report was written on the Bonneville Power Administration's extensive microwave network. It was prepared to address the responsibility that federal agencies have for historical preservation under the National Historic Preservation Act and National Environmental Policy Act, policies intended to ensure that at least the government takes measures to preserve history before demolishing artifacts. The report reads: "Due to their limited features, passive repeaters are not considered historic resources, and are not evaluated as part of this study." In 1995, Valmont Industries acquired Microflect. Valmont is known mostly for their agricultural products, including center-pivot irrigation systems, but they had expanded their agricultural windmill business into a general infrastructure division that manufactured radio masts and communication towers. For a time, Valmont continued to manufacture passive repeaters as Valmont Microflect, but business seems to have dried up. Today, Valmont Structures manufactures modular telecom towers from their facility across the street from McNary Field in Salem, Oregon. A Salem local, descended from early Microflect employees, once shared a set of photos on Facebook: a beat-up hangar with a sign reading "Aircraft Repair Center," and in front of it, stacks of aluminum panel sections. Microflect workers erecting a passive repeater in front of a Douglas A-26. Rows of reflector sections beside a Shell aviation fuel station. George Kreitzberg died in 2004, James in 2017. As of 2025, Valmont no longer manufactures passive repeaters. Postscript If you are interested in the history of passive repeaters, there are a few useful tips I can give you. Nearly all passive repeaters in North America were built by Microflect, so they have a very consistent design. Locals sometimes confuse passive repeaters with old billboards or even drive-in theater screens; the clearest way to differentiate them is that passive repeaters have a face made up of aluminum modules with deep sidewalls for rigidity and flatness. Take a look at the Microflect manual for many photos. Because passive repeaters are passive, they do not require a radio license proper. However, for site-based microwave licenses, the FCC does require that passive repeaters be included in paths (i.e. a license will be for an active site but with a passive repeater as the location at the other end of the path). These sites are almost always listed with a name ending in "PR". I don't have any straight answer on whether or not any passive repeaters are still in use. It has likely become very rare, but there are probably still examples. Two sources suggest that Rachel, NV still relies on a passive repeater for telephone and DSL. I have not been able to confirm that, and the tendency of these systems to be abandoned in place means that people sometimes think they are in use long after they were retired. I can find documentation of a new utility SCADA system being installed, making use of existing passive repeaters, as recently as 2017. [1] If you find these dB gain/loss calculations confusing, you are not alone. It is deceptively simple in a way that was hard for me to learn, and perhaps I will devote an article to it one day. [2] Although not exclusively, with installations in places like Vermont and Newfoundland where similar constraints applied.

Learn how to make a 2D capacitive touch sensor with ElectroBOOM

Mehdi Sadaghdar, better known as ElectroBOOM, created a name for himself with shocking content on YouTube full of explosive antics. But once you get past the meme-worthy shenanigans, he is a genuinely smart guy that provides useful and accessible lessons on many electrical engineering principles. If you like your learning with a dash of over-the-top […]

The 'politsei' problem, or how filtering unwanted content is still an issue in 2025

A long time ago, there was a small Estonian website called “Mängukoobas” (literal translation from Estonian is “game cave”). It started out as a place for people to share various links to browser games, mostly built with Flash or Shockwave. It had a decent moderation system, randomized treasure chests that could appear on any part of the website, and a lot more.1 What it also had was a basic filtering system. As a good chunk of the audience was children (myself included), there was a need to filter out all the naughty Estonian words, such as “kurat”, “türa”, “lits” and many more colorful ones. The filtering was very basic, however, and some took it upon themselves to demonstrate how flawed the system was by intentionally using phrases like “politsei”, which is Estonian for “police”. It would end up being filtered to “po****ei” as it also contained the word “lits”, which translates to “slut”2. Of course, you could easily overcome the filter by using a healthy dose of period characters, leading to many cases of “po.l.i.t.sei” being used. With the ZIRP phenomenon we got a lot of companies wanting to get into the “platform” business where they bring together buyers and sellers, or service providers and clients. A lot of these platforms rely on transactions taking place only on their platform and nowhere else, so they end up doing their best to keep the two parties from being in contact off-platform and paying out of band, as that would directly cut into their revenue. As a result, they scan private messages and public content for common patterns, such as e-mails and phone numbers, and block or filter them. As you can predict, this can backfire in a very annoying way. I was looking for a cheap mini PC on a local buy-sell website and stumbled on one decent offer. I looked at the details, was going over the CPU model, and found the following: CPU: Intel i*-**** Oh. Well, maybe it was an error; I will ask the seller for additional details with a public question. The response? Hello, the CPU model is Intel i*-****. Damn it. I never ended up buying that machine because I don’t really want to gamble with Intel CPU model numbers, and a few days later it was gone. It’s 2025, I’m nearing my mandatory mid-life crisis, and the Scunthorpe problem is alive and well. fun tangent: the site ended up being like a tiny social network, eventually incorporating things like a cheap rate.ee knock-off where children were allowed to share pictures of themselves. As you can imagine, this was a horrible, horrible idea, as it attracted the exact type of person that would be interested in that type of content. I got lucky by being so poor that I did not have a webcam or a digital camera to take any pictures with, and I remember that fondly because someone on MSN Messenger was very insistent that I take some pictures of myself. Don’t leave children with unmonitored internet access! ↩︎ “slut” is also an actual word in Swedish which translates to “final”. I think. I’m not a Swedish expert, actually. ↩︎
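To illustrate the failure mode described above, here is a minimal sketch of that kind of naive substring filter (the banned list is just an example, not Mängukoobas's actual word list):

import re

BANNED = ["lits"]  # illustrative subset; a real deployment would have a much longer list

def censor(text: str) -> str:
    # Naively mask any banned substring, with no regard for word boundaries.
    for word in BANNED:
        text = re.sub(re.escape(word), "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(censor("politsei"))      # -> po****ei  (the Scunthorpe problem)
print(censor("po.l.i.t.sei"))  # unchanged: trivially evaded with punctuation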

What Interviews Should I Look For?

Help point me in the right direction.

Repairing an HP 5370A Time Interval Counter

Introduction I bought an HP 5370A time interval counter at the Silicon Valley Electronics Flea Market for a cheap $40. The 5370A is a pretty popular device among time nuts: it has a precision of 20ps for single-shot time interval measurements, amazing for a device that was released in 1978, and even compared to contemporary time interval counters it still offers decent performance. The 74LS chips in mine have a 1981 date code, which makes the unit a whopping 44 years old. But after I plugged it in and pressed the power button, smoke and a horrible smell came out after a few minutes. I had just purchased myself hours of entertainment! Inside the HP 5370A It's trivial to open the 5370A: remove the 4 feet in the back by removing the Phillips screws inside them, then remove a screw to release the top or bottom cover. Once inside, you can see an extremely modular build: the center consists of a motherboard with 10 plug-in PCBs, 4 on the left for an embedded computer that's based on an MC6800 CPU, 6 on the right for the time acquisition. The top has plug-in PCBs as well, with the power supply on the left and reference clock circuitry on the right. My unit uses the well known HP 10811-60111 high-stability OCXO as its 10MHz clock reference. The bottom doesn't have plug-in PCBs. It has PCBs for trigger logic and the front panel. This kind of modular build probably adds significant cost, but it's a dream for servicing and tracking down faults. To make things even easier, the vertical PCBs have a plastic ring or levers to pull them out of their slot! There are also plenty of generously sized test pins and some status LEDs. High Stability Reference Clock with an HP 10811-60111 OCXO Since the unit has the high stability option, I now have yet another piece of test equipment with an HP 10811-60111. OCXOs are supposed to be powered on at all times: environmental changes tend to stress them out and result in a deviation of their clock speed, which is why there's a "24 hour warm-up" sticker on top of the case. It can indeed take a while for an OCXO to relax and settle back into its normal behavior, though 24 hours seems a bit excessive. The 5370A has a separate always-on power supply just for the oven of the OCXO to keep the crystal at a constant temperature even when the power switch on the front is not in the ON position. Luckily, the fan is powered off when the front switch is set to stand-by.1 In the image above, from top to bottom, you can see the main power supply control PCB, the HP 10811-60111 OCXO (with the main power relay to the right of it), the OCXO oven power supply, and the reference frequency buffer PCB. These are the items that will play the biggest role during the repair. RIFA Capacitors in the Corcom F2058 Power Entry Module? 
Spoiler: probably not… After plugging in the 5370A the first time, magic smoke came out of it along with a pretty disgusting chemical smell, one that I already knew from some work that I did on my HP 8656A RF signal generator. I unplugged the power, opened up the case, and looked for burnt components but couldn't find any. After a while, I decided to power the unit back on and… nothing. No smoke, no additional foul smell, but also no display. One of the common failure modes of test equipment from way back when is RIFA capacitors that sit right next to the mains power input, before any kind of power switch. Their primary function is to filter out high frequency noise that's coming from the device and reduce EMI. RIFAs have a well known tendency to crack over time and eventually catch fire. A couple of years ago, I replaced the RIFA capacitors of my HP 3457A, but the general advice is to inspect all old equipment for these gold-colored capacitors. However, no such discrete capacitors could be found. But that doesn't mean they are not there: like a lot of older HP test equipment, the 5370A uses a Corcom F2058 line power module that has capacitors embedded inside. Below is the schematic of the Corcom F2058 (HP part number 0960-0443). The capacitors are marked in red. You can also see a fuse F1, a transformer and, on the right, a selector that can be used to configure the device for 100V, 115V/120V, 220V and 230V/240V operation. There was a bad smell lingering around the Corcom module, so I removed it to check it out. There are metal clips on the left and right side that you need to push in to get the module out. It takes a bit of wiggling, but it works out eventually. Once removed, however, the Corcom didn't really have a strong smell at all. I couldn't find any strong evidence online that these modules have RIFAs inside them, so for now, my conclusion is that they don't have them and that there's no need to replace them. Module replacement In the unlikely case that you want to replace the Corcom module, you can use this $20 AC Power Entry Module from Mouser. One reason why you might want to do this is that the new module has a built-in power switch. If you use an external 10 MHz clock reference instead of the 10811 OCXO, then there's really no need to keep the 5370A connected to the mains all the time. There are two caveats, however: while it has the same dimensions as the Corcom F2058, the power terminals are located at the very back, not in an indented space. This is not a problem for the 5370A, which still has enough room for both, but it doesn't work for most other HP devices that don't have an oversized case. You can see that in the picture below: Unlike the Corcom F2058, the replacement only feeds through the line, neutral and ground that are fed into it. You'd have to choose one configuration, 120V in my case, and wire a bunch of wires together to drive the transformer correctly. If you do this wrong, the input voltage to the power regulator will either be too low, and it won't work, or too high, and you might blow up the power regulation transistors. It's not super complicated, but you need to know what you're doing. 15V Rail Issues After powering the unit back up, it still didn't work, but thanks to the 4 power rail status LEDs, it was immediately obvious that the +15V power rail had issues. A close-by PCB is the reference frequency buffer PCB. 
It has a "10 MHz present" status LED that didn't light up either, suggesting an issue with the 10811 OCXO, but I soon figured out that this status LED relies on the presence of the 15V rail.

Power Supply Architecture

The 5370A was first released in 1978, decades before HP decided to stop including detailed schematics in their service manuals. Until Keysight, the Company Formerly Known as HP, decides to change its name again, you can download the operating and service manual here. If you need a higher quality scan, you can also purchase the manual for $10 from ArtekManuals.[2] The diagrams below were copied from the Keysight version.

The power supply architecture is straightforward: the line transformer has 5 separate windings, 4 for the main power supply and 1 for the always-on OCXO power supply. A relay is used to disconnect the 4 unregulated DC rails from the power regulators when the front power button is in the stand-by position, but the diode rectification bridge and the gigantic smoothing capacitors are located before the relay.[3] For each of the 4 main power rails, a discrete linear voltage regulator is built around a power transistor, an LM307AN opamp, a smaller transistor for over-current protection, and a fuse. The 4 regulators share a 10V voltage reference. The opamps and the voltage reference are powered by a simple +16.2V power rail built out of a resistor and a Zener diode.

The power regulators for the +5V and -5.2V rails have a current sense resistor of 0.07 Ohm. The sense resistors for the +15V and -15V rails have a value of 0.4 Ohm. When the voltage across these resistors exceeds the 0.7V base-emitter potential of the bipolar transistors across them, the transistors start to conduct and pull down the base-emitter voltage of the power transistor, thus shutting it off. In the red rectangle of the schematic above, the +15V power transistor is on the right, the current control transistor on the left, and current sense resistor R4 is right next to the +15V label. Using the values of 0.4 Ohm, 0.07 Ohm and 0.7V, we can estimate that the power regulators enter current control (and reduce the output voltage) when the current exceeds 10A for the +5/-5.2V rails and about 1.75A for the +15/-15V rails. This more or less matches the value of the fuses, which are rated at 7A and 1.5A respectively.

Power loss in these high-current linear regulators is significant, and the heat sinks in the back become pretty hot. Some people have installed an external fan to cool them down a bit.

Fault Isolation - It's the Reference Frequency Buffer PCB!

I measured a voltage of 8V instead of 15V. I would have preferred to measure no voltage at all, because a lower-than-expected voltage suggests that the power regulator is in current control instead of voltage control mode. In other words: there's a short somewhere which results in a current that exceeds what's expected under normal working conditions. Such a short can be located anywhere.

But this is where the modular design of the 5370A shines: you can unplug all the PCBs, check the 15V rail, and if it's fine, add back PCBs until it's dead again. And, indeed, with all the PCBs removed, the 15V rail worked fine. I first added the CPU-related PCBs, then the time acquisition PCBs, and the 15V stayed healthy. But after plugging in the reference frequency buffer PCB, the 15V LED went off and I measured 8V again. Of all the PCBs, this one is the easiest one to understand.
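That 8V reading ties right back to the current-limit arithmetic from the power supply section: a shorted board only has to pull the +15V rail past its modest current threshold for the output to fold back. As a quick sanity check, here is a minimal Python sketch that reproduces those numbers; it uses only the sense resistor and base-emitter values quoted above, nothing measured from my unit.

```python
# Minimal sketch (not from the service manual): estimate where each
# regulator's over-current protection kicks in, using the sense resistor
# values and the ~0.7 V base-emitter turn-on voltage quoted in the text.
V_BE_ON = 0.7  # approximate BJT base-emitter turn-on voltage (V)

sense_resistors = {
    "+5V / -5.2V": 0.07,  # Ohm
    "+15V / -15V": 0.40,  # Ohm
}

for rails, r_sense in sense_resistors.items():
    # Current at which the protection transistor starts conducting and
    # robs the power transistor of base drive.
    i_limit = V_BE_ON / r_sense
    print(f"{rails}: current limiting starts around {i_limit:.2f} A")

# Prints roughly 10 A for the 5 V rails and 1.75 A for the 15 V rails,
# in line with the 7 A and 1.5 A fuses.
```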
The Reference Frequency Buffer Board

The reference frequency buffer board has the following functionality:

- Convert the internally generated 10 MHz clock to emitter-coupled logic (ECL) signaling. The 5370A came either with the OCXO or with a lower-performance crystal oscillator; the cheaper units were usually deployed in labs that already had an external reference clock network.
- Receive an external reference clock of 5 MHz or 10 MHz, multiply it by 2 in the case of 5 MHz, and apply a 10 MHz filter. Convert to ECL as well.
- Select between the internal and external clock to create the final reference clock.
- Send the final reference clock out as ECL (time measurement logic), TTL (CPU) and a sine wave (reference-out connector on the back panel).

During PCB swapping, the front-panel display had remained off even when all CPU boards were plugged in. Unlike later HP test equipment such as the HP 5334A universal counter, the CPU clock of the 5370A is derived from the 10 MHz clock that comes out of this reference frequency buffer PCB,[4] so if this board is broken, nothing works.

When we zoom down from the block diagram to the schematic, we get this:

Leaving aside the debug process for a moment, I thought the 5 MHz/10 MHz to 10 MHz circuit was intriguing. I assumed that it worked by creating some second harmonic and filtering out the base frequency, and that's kind of how it works. There are 3 LC tanks with an inductance of 1 uH and a capacitance of 250 pF, good for a natural resonance frequency of \(f = \frac{1}{2 \pi \sqrt{LC}}\) = 10.066 MHz. The first 2 LC tanks are each part of a class C amplifier, and the 3rd LC tank is an additional filter. The incoming 5 MHz or 10 MHz signal periodically inserts a bit of energy into the LC tank and nudges it into sync. This circuit deserves a blog post of its own.

Fixing the Internal Reference Clock

When you take a closer look at the schematic, there are 2 points that you can take advantage of:

- The only part on the path from the internal clock input to the various internal outputs that depends on the 15V rail is the ECL-to-TTL conversion circuit, and that part of the 15V rail is only connected to the 3 kOhm resistor R4.
- Immediately after the connector, the 15V first goes through an L/C/R/C circuit.

In the process of debugging, I noticed the following: the arrow points to capacitor C17, which looks suspiciously black. I had found the magic smoke generator. This was the plan of attack:

- Replace C17 with a new 10 uF capacitor.
- Remove resistor R16 to decouple the internal 15V rail from the external one.
- Disconnect the top side of R4 from the internal 15V and wire it up straight to the connector's 15V rail.

It's an ugly bodge, but after these 3 fixes, I had a nice 10 MHz ECL clock signal on the output clock test pin. The 5370A was alive and working fine!

Fixing the External Reference Clock

I usually connect my test equipment to my GT300 frequency standard, so I really wanted to fix that part of the board as well. This took way longer than it should have…

I started by replacing the burnt capacitor with a 10 uF electrolytic capacitor and reinstalling R16. That didn't go well: this time, the resistor went up in smoke. My theory is that, with the shorted capacitor C17 removed, there was still another short, and now the current path had to go through this resistor. Before burning up, this 10 Ohm resistor measured only 4 Ohms. I then removed the board and created a stand-alone setup to debug the board in isolation.
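A quick aside before picking the debug back up: the 10.066 MHz tank resonance quoted earlier is easy to verify. The sketch below is just a numeric check using the 1 uH and 250 pF values mentioned in the text, not anything measured on my board.

```python
# Natural resonance of the 1 uH / 250 pF LC tanks in the external
# reference path: f = 1 / (2*pi*sqrt(L*C)).
from math import pi, sqrt

L = 1e-6      # 1 uH
C = 250e-12   # 250 pF

f_res = 1 / (2 * pi * sqrt(L * C))
print(f"LC tank resonance: {f_res / 1e6:.3f} MHz")  # ~10.066 MHz

# A 5 MHz input only injects energy into the tank every other 10 MHz
# cycle, but the tank keeps ringing at its own ~10 MHz resonance in
# between, which is why the same circuit accepts both 5 and 10 MHz.
```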
Back to the stand-alone setup: with that burnt-up R16 removed again, 15V applied to the internal 15V rail, and a 10 MHz signal at the external input, the full circuit was working fine. I removed capacitor C16 and checked it with an LCR tester, and the values were nicely in spec. Unable to find any real issues, I finally put in a new 10 Ohm resistor, put in a new 10 uF capacitor for C16 as well, plugged in the board and… now the external clock input was working fine too?! So the board is fixed now and I can use both the internal and external clock, but I still don't know why R16 burnt up after the first capacitor was replaced.

Future work

The HP 5370A is working very well now. Once I have another Digikey order going out, I want to add 2 tantalum capacitors and install those instead of the electrolytics that I used for the repair.

I can't find it anymore, but on the time-nuts email list, 2 easy modifications were suggested:

- Drill a hole through the case right above the HP 10811-60111 to get access to the frequency adjust screw. An OCXO is supposed to be immune to external temperature variations, but when you're measuring picoseconds, a difference in ambient temperature can still have a minor impact. With this hole, you can keep the case closed while calibrating the internal oscillator.
- Disconnect the "10 MHz present" status LED on the reference clock buffer PCB. Apparently, this circuit creates some frequency spurs that can introduce additional jitter on the reference clock.

If you're really hardcore:

- Replace the entire CPU system with a modern CPU board. More than 10 years ago, the HP5370 Processor Replacement Project reverse engineered the entire embedded software stack and created a PCB, based on a BeagleBoard, with new firmware. The PCBs are not available anymore, but one could easily have a new one made for much cheaper than what it would have cost back then.

Footnotes

1. My HP 8656A RF signal generator has an OCXO as well. But its fan keeps running even when it's in stand-by mode, and the default fan is very loud too!
2. Don't expect to be able to cut-and-paste text from the ArtekManuals scans, because they have some obnoxious rights management that prevents this.
3. Each smoothing capacitor has a bleeder resistor in parallel to discharge the capacitors when the power cable is unplugged. But these resistors will leak power even when the unit is switched off. Energy Star regulations clearly weren't a thing back in 1978.
4. The CPU runs at 1.25 MHz, the 10 MHz clock divided by 8.
