One of the most significant single advancements in telecommunications technology was the development of microwave radio. Essentially an evolution of radar, the first practical microwave telephone system appeared in the middle of the Second World War. By the time Japan surrendered, AT&T had largely abandoned their plan to build an extensive nationwide network of coaxial telephone cables. Microwave relay offered greater capacity at a lower cost. When Japan and the US signed their peace treaty in 1951, it was broadcast from coast to coast over what AT&T called the "skyway": the first transcontinental telephone lead made up entirely of radio waves.

The fact that live television coverage could be sent over the microwave system demonstrated its core advantage: the bandwidth of microwave links, their capacity, was truly enormous. Within the decade, a single microwave antenna could handle over 1,000 simultaneous calls.

Microwave's great capacity, its chief advantage, comes from the high frequencies and large bandwidths involved. The design of microwave-frequency radio electronics was an engineering challenge that was aggressively attacked during the war, because microwave frequencies' short wavelengths made them especially suitable for radar. The cavity magnetron, one of the first practical microwave transmitters, was an invention of such import that it was the UK's key contribution to a technical partnership that led to the UK's access to US nuclear weapons research. Unlike the "peaceful atom," though, the "peaceful microwave" spread fast after the war. By the end of the 1950s, most long-distance telephone calls were carried over microwave. While coaxial long-distance carriers such as L-carrier saw continued use in especially congested areas, the supremacy of microwave for telephone communications would not fall until the adoption of fiber optics in the 1980s.

The high frequency, and short wavelength, of microwave radio is a limitation as well as an advantage. Historically, "microwave" was often used to refer to radio bands above VHF, including UHF. As RF technology improved, microwave shifted higher, and microwave telephone links operated mostly between 1 and 9 GHz. These frequencies are well beyond the limits of beyond-line-of-sight propagation mechanisms, and they penetrate and reflect only poorly. Microwave signals could be received over 40 or 50 miles in ideal conditions, but the two antennas needed to be within direct line of sight. Further complicating planning, microwave signals are especially vulnerable to interference from obstacles within the "Fresnel zone," the region around the direct line of sight through which most of the received RF energy passes.

Today, these problems have become relatively easy to overcome. Microwave relays, stations that receive signals and rebroadcast them further along a route, are located in positions of geographical advantage. We tend to think of mountain peaks and rocky ridges, but 1950s microwave equipment was large and required significant power and cooling, not to mention frequent attendance by a technician for inspection and adjustment. This was a tube-based technology, with analog and electromechanical control. Microwave stations ran over a thousand square feet, often of thick hardened concrete, both a product of the post-war climate and a help with the consistent temperature regulation critical to keeping analog equipment in calibration. Where commercial power wasn't available, they consumed a constant supply of diesel fuel.
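As an aside before moving on: the Fresnel zone mentioned above is easy to put numbers on, and doing so shows why microwave paths took real engineering rather than just a clear sight line. Here's a minimal sketch using the standard handbook formula; the function and example figures are my own illustration, not from any period planning tool.

    # First Fresnel zone radius: r = sqrt(n * wavelength * d1 * d2 / (d1 + d2)),
    # where d1 and d2 are the distances from the point of interest to the two
    # antennas. Obstacles intruding into this ellipsoid cause interference
    # even when they don't block the direct line of sight.
    import math

    C = 299_792_458  # speed of light, m/s

    def fresnel_radius_m(freq_hz, d1_m, d2_m, n=1):
        wavelength = C / freq_hz
        return math.sqrt(n * wavelength * d1_m * d2_m / (d1_m + d2_m))

    # Midpoint of a 40-mile hop at 4 GHz, plausible numbers for a 1950s route:
    d_m = 40 * 1609.344
    print(round(fresnel_radius_m(4e9, d_m / 2, d_m / 2), 1))  # ~34.7 meters

A path has to clear not just the sight line but a corridor tens of meters across at midpath, which is part of why relay siting was done so carefully.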
It simply wasn't practical to put microwave stations in remote locations. In the flatter regions of the country, locating microwave stations on hills gave them appreciably better range with few downsides. This strategy often stopped at the Rocky Mountains.

In much of the American West, telephone construction had always been exceptionally difficult. Open-wire telephone leads had been installed through incredible terrain by the dedication and sacrifice of crews of men and horses. Wire strung over telephone poles proved able to handle steep inclines and rocky badlands, so long as the poles could be set---although inclement weather on the route could make calls difficult to understand. When the first transcontinental coaxial lead was installed, the route was carefully planned to follow flat valley floors whenever possible. This was an important requirement, since it was installed mostly by mechanized equipment, heavy machines, which were incapable of navigating the obstacles that the old pole and wire crews had crossed on foot.

The first installations of microwave adopted largely the same strategy. Despite the commanding views offered by mountains on both sides of the Rio Grande Valley, AT&T's microwave stations are often found on low mesas or even at the center of the valley floor. Later installations, and those in the especially mountainous states where level ground was scarce, became more ambitious. At Mt. Rose, in Nevada, an aerial tramway carried technicians up the slope to the roof of the microwave station---the only access during winter, when snowpack reached high up the building's walls. Expansion in the 1960s involved increasing use of helicopters as the main access to stations, although roads still had to be graded for construction and electrical service.

These special arrangements for mountain locations were expensive, within the reach of the Long Lines department's monopoly-backed budget but difficult for anyone else, even Bell Operating Companies, to sustain. And the West---where these difficult conditions were encountered the most---also contained some of the least profitable telephone territory, areas where there was no interconnected phone service at all until government subsidy under the Rural Electrification Act. Independent telephone companies and telephone cooperatives, many of them scrappy operations that had expanded out from the manager's personal home, could scarcely afford a mountaintop fortress and a helilift operation to sustain it. For the telephone industry's many small players, and even the more rural Bell Operating Companies, another property of microwave became critical: with a little engineering, you can bounce it off of a mirror.

James Kreitzberg was, at least as the obituary reads, something of a wunderkind. Raised in Missoula, Montana, he earned his pilot's license at 15 and joined the Army Air Corps as soon as he was allowed. The Second World War came to a close shortly after, and so he went on to the University of Washington, where he studied aeronautical engineering, and then went back home to Montana, taking up work as an engineer at one of the state's largest electrical utilities. His brother, George, had taken a similar path: a stint in the Marine Corps and an aeronautical engineering degree from Oklahoma. While James worked at Montana Power in Butte, George moved to Salem, Oregon, where he started an aviation company that supplemented its cropdusting revenue by modifying Army-surplus aircraft for other uses.
Montana Power operated hydroelectric dams, coal mines, and power plants, a portfolio of facilities across a sparse and mountainous state that must have made communications a difficult problem. During the 1950s, James was involved in an effort to build a new private telephone system connecting the utility's facilities. It required negotiating some type of obstacle, perhaps a mountain pass. James proposed an idea: a mirror.

Because the wavelengths of microwaves are so short, say 30 cm to 5 cm (1 GHz to 6 GHz), it's practical to build a flat metallic panel that spans multiple wavelengths. Such a panel will function like a reflector or mirror, redirecting microwave energy at an angle equal to the angle at which it arrived. Much like you can redirect a laser using reflectors, you can also redirect a microwave signal. Some early commentators referred to this technique as a "radio mirror," but by the 1950s the use of "active" microwave repeaters with receivers and transmitters had become well established, so by comparison reflectors came to be known as "passive repeaters."

James believed a passive repeater to be a practical solution, but Montana Power lacked the expertise to build one. For a passive repeater to work efficiently, its surface must be very flat and regular, even under varying temperature. Wind loading had to be accounted for, and the face made sufficiently rigid not to flex under the wind. Of course, with his education in aeronautics, James knew that similar problems were encountered in aircraft: the need for lightweight metal structures with surfaces that kept an engineered shape. Wasn't he fortunate, then, that his brother owned a shop that repaired and modified aircraft.

I know very little about the original Montana Power installation, which is unfortunate, as it may very well be the first passive microwave repeater ever put into service. What I do know is that in the fall of 1955, James called his brother George and asked if his company, Kreitzberg Aviation, could fabricate a passive repeater for Montana Power. George, he later recounted, replied that "I can build anything you can draw." The repeater was made in a hangar on the side of Salem's McNary Field, erected by the flightline as a test, and then shipped in parts to Montana for reassembly in the field. It worked.

It worked so well, in fact, that as word of Montana Power's new telephone system spread, other utilities wrote to inquire about obtaining passive repeaters for their own telephone systems. In 1956, James Kreitzberg moved to Salem and the two brothers formed the Microflect Company. From the sidelines of McNary Field, Microflect built aluminum "billboards" that can still be found on mountain passes and forested slopes throughout the western United States, and in many other parts of the world where mountainous terrain, adverse weather, and limited utilities made the construction of active repeaters impractical.

Passive repeaters can be used in two basic configurations, defined by the angle at which the signal is reflected. In the first case, the reflection angle is around 90 degrees (the closer to this ideal angle, of course, the more efficiently the repeater performs). This situation is often encountered when there is an obstacle that the microwave path needs to "maneuver" around: a ridge, for example, or even a large structure like a building in between two sites. In the second case, the microwave signal must travel in something closer to a straight line---over a mountain pass between two towns, for example.
When the reflection angle is greater than 135 degrees, the use of a single passive repeater becomes inefficient or impossible, so Microflect recommended the use of two. Arranged like a dogleg or periscope, the two repeaters reflect the signal to the side and then onward in the intended direction. Microflect published an excellent engineering manual with many examples of passive repeater installations along with the signal calculations.

You might think that passive repeaters would be so inefficient as to be impractical, especially when more than one was required, but this is surprisingly untrue. Flat aluminum panels are almost completely efficient reflectors of microwave, and somewhat counterintuitively, passive repeaters can even provide gain. In an active repeater, it's easy to see how gain is achieved: power is added. A receiver picks up a signal, and then a powered transmitter retransmits it, stronger than it was before. But passive repeaters require no power at all, one of their key advantages. How do they pull off this feat? The design manual explains with an ITU definition of gain that only an engineer could love, but in an article for "Electronics World," Microflect field engineer Ray Thrower provided a more intuitive explanation. A passive repeater, he writes, functions essentially identically to a parabolic antenna, or a telescope:

"Quite probably the difficulty many people have in understanding how the passive repeater, a flat surface, can have gain relates back to the common misconception about parabolic antennas. It is commonly believed that it is the focusing characteristics of the parabolic antenna that gives it its gain. Therefore, goes the faulty conclusion, how can the passive repeater have gain? The truth is, it isn't focusing that gives a parabola its gain; it is its larger projected aperture. The focusing is a convenient means of transition from a large aperture (the dish) to a small aperture (the feed device). And since it is projected aperture that provides gain, rather than focusing, the passive repeater with its larger aperture will provide high gain that can be calculated and measured reliably. A check of the method of determining antenna gain in any antenna engineering handbook will show that focusing does not enter into the basic gain calculation."

We can also think of it this way: the beam of energy emitted by a microwave antenna expands in an arc as it travels, dissipating the "density" of the energy, such that a dish antenna of a given size will receive a weaker and weaker signal as it moves further away (this is the major component of path loss, the "dilution" of the energy over space). A passive repeater employs a reflecting surface which is quite large, larger than practical antennas, and so it "collects" a large cross section of that energy for reemission. Projected aperture is the effective "window" of energy seen by the antenna at the active terminal as it views the passive repeater. The passive repeater also sees the antenna as a "window" of energy. If the two are far enough away from one another, they will appear to each other as essentially point sources.

In practice, a passive repeater functions a bit like an active repeater that collects a signal with a large antenna and then reemits it with a smaller directional antenna. To be quite honest, I still find it a bit challenging to intuit this effect, but the mathematics bear it out as well.
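In fact, the handbook formula is compact enough to check for ourselves. The two-way gain of a flat reflector of area A is 20 log10(4 pi A cos(psi) / lambda^2), where psi is half the included angle of the reflection. A minimal sketch follows; this is the generic handbook calculation, not Microflect's actual published procedure, though their catalog figures (quoted just below) track it closely.

    # Two-way gain of a flat passive repeater, per the standard handbook
    # formula. The catalog comparisons in the comments refer to the Microflect
    # figures quoted in the text; the ~1 dB differences are consistent with
    # their "rough estimate" corrections and non-zero reflection angles.
    import math

    C = 299_792_458       # speed of light, m/s
    SQFT_TO_M2 = 0.09290304

    def passive_gain_db(area_sqft, freq_hz, included_angle_deg=0.0):
        wavelength = C / freq_hz
        area_m2 = area_sqft * SQFT_TO_M2
        psi = math.radians(included_angle_deg / 2)
        return 20 * math.log10(4 * math.pi * area_m2 * math.cos(psi) / wavelength**2)

    print(round(passive_gain_db(8 * 10, 6.175e9), 2))   # ~91.96 dB (catalog: 90.95)
    print(round(passive_gain_db(40 * 60, 6.175e9), 2))  # ~121.49 dB (catalog: 120.48)

Note how the gain falls off with the cosine of half the included angle: a 90 degree reflection costs only about 3 dB relative to head-on, while a nearly straight-through path drives the projected aperture toward zero, which is exactly why the dogleg arrangement of two panels is used there.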
Interestingly, the effect only occurs when the passive repeater is far enough from either terminal to be usefully approximated as a point source. Microflect refers to this as the far field condition. When the passive repeater is very close to one of the active sites, within the near field, it is more effective to consider the passive reflector as part of the transmitting antenna itself, and disregard it for path loss calculations. This dichotomy between far field and near field behavior is actually quite common in antenna engineering (where an "antenna" is often multiple radiating and nonradiating elements within the near field of each other), but it's yet another of the things that gives antenna design the feeling of a dark art.

One of the most striking things about passive repeaters is their size. As a passive repeater becomes larger, it reflects a larger cross section of the RF energy and thus provides more gain. Much like with dish or horn antennas, the size of a passive repeater can be traded off against transmitter power (and the size of other antennas involved) to design an economical solution. Microflect offered standard sizes ranging from 8'x10' (gain at around 6.175 GHz: 90.95 dB) to 40'x60' (120.48 dB, after a "rough estimate" reduction of 1 dB for the multipath-like interference effects of such a short wavelength reflecting off of such a large panel). By comparison, a typical active microwave repeater site might provide a gain of around 140 dB---and we must bear in mind that dB is a logarithmic unit, so the difference between 121 and 140 is bigger than it sounds. Still, there's a reason that logarithms are used when discussing radio paths... in practice, it is orders of magnitude that make the difference in reliable reception.

The reduction in gain from an active repeater to a passive repeater can be made up for with higher-gain terminal antennas and more powerful transmitters. Given that the terminal sites are often at far more convenient locations than the passive repeater, that tradeoff can be well worth it. Keep in mind that, as Microflect emphasizes, passive repeaters require no power and very little ("virtually no") maintenance.

Microflect passive repeaters were manufactured in sections that bolted together in the field, and the support structures provided for fine adjustment of the panel alignment after mounting. These features made it possible to install passive repeaters by helicopter onto simple site-built foundations, and many are found on mountainsides that are difficult to reach even on foot. Even in less difficult locations, these advantages made passive repeaters less expensive to install and operate than active repeaters. Even when the repeater site was readily accessible, passives were often selected simply for cost savings.

Let's consider some examples of passive repeater installations. Microflect was born of the power industry, and electrical generators and utilities remained among their best customers. Even today, you can find passive repeaters at many hydroelectric dams. There is a practical need to communicate by telephone between a dispatch center (often at the utility's city headquarters) and the operators in the dam's powerhouse, but the powerhouse is at the base of the dam, often in a canyon where microwave signals are completely blocked. A passive repeater set on the canyon rim, at an angle downwards, solves the problem by redirecting the signal from horizontal to vertical.
Such an installation can be seen, for example, at the Hoover Dam. In some sense, these passive repeaters "relocate" the radio equipment from the canyon rim (where the desirable signal path is located) to a more convenient location with the other powerhouse equipment. Because of the short distance from the powerhouse to the repeater, these passives were usually small.

This idea can be extended to relocating en-route repeaters to a more serviceable site. In Glacier National Park, Mountain States Telephone and Telegraph installed a telephone system to serve various small towns and National Park Service sites. Glacier is incredibly mountainous, with only narrow valleys and passes. The only points with long sight ranges tend to be very inaccessible. Mt. Furlong provided ideal line of sight to East Glacier and Essex along Highway 2, but it would have been extremely challenging to install and maintain a microwave site on the steep peak. Instead, two passive repeaters were installed near the mountaintop, redirecting the signals from those two destinations to an active repeater installed downslope near the highway and railroad.

This example raises another advantage of passive repeaters: their reduced environmental impact, something that Microflect emphasized as the environmental movement of the 1970s made agencies like the Forest Service (which controlled many of the most appealing mountaintop radio sites) less willing to grant permits that would lead to extensive environmental disruption. Construction by helicopter and the lack of a need for power meant that passive repeaters could be installed without extensive clearing of trees for roads and power line rights of way. They eliminated the persistent problem of leakage from standby generator fuel tanks. Despite their large size, passive repeaters could be camouflaged; many in national forests were painted green to make them less conspicuous. And while they did have a large surface area, Microflect argued that since they could be installed on slopes rather than requiring a large leveled area, passive repeaters would often fall below the ridge or treeline behind them. This made them less visually conspicuous than a traditional active repeater site, which would require a tower. Indeed, passive repeaters are only rarely found on towers, with most elevated off the ground only far enough for the bottom edge to be free of undergrowth and snow.

Other passive repeater installations were less a result of exceptionally difficult terrain and more a simple cost optimization. In rural Nevada, Nevada Bell and a dozen independents and coops faced the challenge of connecting small towns with ridges between them. The need for an active repeater at the top of each ridge, even for short routes, made these rural lines excessively expensive. Instead, such towns were linked with dual passive repeaters on the ridge in a "straight through" configuration, allowing microwave antennas at the towns' existing telephone exchange buildings to reach each other. This was the case with the installation I photographed above Pioche. I have been frustratingly unable to confirm the original use of these repeaters, but from context they were likely installed by the Lincoln County Telephone System to link their "hub" microwave site at Mt. Wilson (with direct sight to several towns) to their site near Caliente. The Microflect manual describes, as an example, a very similar installation connecting Elko to Carlin.
Two 20'x32' passive repeaters on a ridge between the two (unfortunately since demolished) provided a direct connection between the two telephone exchanges. As an example of a typical use, it might be interesting to look at the manual's calculations for this route. From Elko to the repeaters is 13.73 miles; the repeaters are close enough to each other as to be in near field (and so considered as a single antenna system); and from the repeaters to Carlin is 6.71 miles. The first repeater reflects the signal at a 68 degree angle, then the second reflects it back at a 45 degree angle, for a net change in direction of 23 degrees---a mostly straight route. The transmitter produces 33.0 dBm, both antennas provide a 34.5 dB gain, and the passive repeater assembly provides 88 dB gain (calculated basically by consulting a table in the manual). Counting the transmitter's output, that means there is 190 dB of gain in the total system. The 6.71 and 13.73 mile paths add up to 244 dB of free space path loss, and Microflect throws in a few more dB of loss to account for connectors and cables and the less than ideal performance of the double passive repeater. The net result is a received signal of -58 dBm, which is plenty acceptable for a 72-channel voice carrier system. This is all done at a significantly lower price than the construction of a full radio site on the ridge [1]. (A sketch a few paragraphs below reproduces this arithmetic.)

The combination of relocating radio equipment to a more convenient location and simply saving money leads to one of the iconic applications of passive repeaters, the "periscope" or "flyswatter" antenna. Microwave antennas of the 1960s were still quite large and heavy, and most were pressurized. You needed a sturdy tower to support one, and then a way to get up the tower for regular maintenance. This led to most AT&T microwave sites using short, squat square towers, often with surprisingly convenient staircases to access the antenna decks. In areas where a very tall tower was needed, it might just not be practical to build one strong enough. You could often dodge the problem by putting the site up a hill, but that wasn't always possible, and besides, good hilltop sites that weren't already taken became harder to find.

When Western Union built out their microwave network, they widely adopted the flyswatter antenna as an optimization. Here's how it works: the actual microwave antenna is installed directly on the roof of the equipment building, facing up. Only short waveguides are needed, weight isn't an issue, and technicians can conveniently service the antenna without even fall protection. Then, at the top of a tall guyed lattice tower similar to an AM mast, a passive repeater is installed at a 45 degree angle to the ground, redirecting the signal from the rooftop antenna to the horizontal. The passive repeater is much lighter than the antenna, allowing for a thinner tower, and will rarely if ever need service. Western Union often employed two side-by-side lattice towers with a "crossbar" between them at the top for convenient mounting of reflectors in each direction, and similar towers were used in some other installations such as the FAA's radar data links. Some of these towers are still in use, although generally with modern lightweight drum antennas replacing the reflectors.
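As promised, here is the Elko-Carlin arithmetic as a minimal sketch. The free space path loss formula is the textbook one; the manual excerpt doesn't state the operating frequency or itemize the miscellaneous losses, so the 2 GHz figure and the 3.5 dB are assumptions of mine that happen to make the totals come out right.

    # Link budget for the double-passive-repeater hop described above.
    # FSPL(dB) = 36.58 + 20*log10(distance in miles) + 20*log10(freq in MHz)
    import math

    def fspl_db(miles, mhz):
        return 36.58 + 20 * math.log10(miles) + 20 * math.log10(mhz)

    freq_mhz = 2000.0   # assumed; the manual excerpt doesn't say
    tx_dbm = 33.0       # transmitter output
    ant_db = 34.5       # each terminal antenna
    passive_db = 88.0   # double passive repeater, from the manual's table

    path_loss = fspl_db(13.73, freq_mhz) + fspl_db(6.71, freq_mhz)
    misc_db = 3.5       # connectors, cables, double-passive shortfall (assumed)

    rx_dbm = tx_dbm + 2 * ant_db + passive_db - path_loss - misc_db
    print(round(path_loss, 1), round(rx_dbm, 1))  # ~244.5 dB, ~-58.0 dBm

Run the same arithmetic without the repeaters' 88 dB and the signal arrives around -146 dBm, hopelessly below any receiver's noise floor; the panels are what make the path work at all.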
Passive microwave repeaters experienced their peak popularity during the 1960s and 1970s, as the technology became mature and communications infrastructure proliferated. Microflect manufactured thousands of units from their new, larger warehouse across the street from their old hangar on McNary Field. Microflect's customer list grew to just about every entity in the Bell System, from Long Lines to Western Electric to nearly all of the BOCs. The list includes GTE, dozens of smaller independent telephone companies, most of the nation's major railroads, and electrical utilities from the original Montana Power to the Tennessee Valley Authority. Microflect repeaters were used by ITT Arctic Services and RCA Alascom in the far north, and overseas by oil companies and telecoms on islands and in mountainous northern Europe.

In Hawaii, a single passive repeater dodged a mountain to connect Lanai City telephones to the Hawaii Telephone Company network at Tantalus on Oahu---nearly 70 miles in one jump. In Nevada, six passive repeaters joined two active sites to connect six substations to the Sierra Pacific Power Company's control center in Reno. Jamaica's first high-capacity telephone network involved 11 passive repeaters, one as large as 40'x60'. The Rocky Mountains are still dotted with passive repeaters, structures that are sometimes hard to spot but seem to loom over the forest once noticed. In Seligman, AZ, a sun-faded passive repeater looks over the cemetery. BC Telephone installed passive repeaters to phase out active sites that were inaccessible for maintenance during the winter.

Passive repeaters were, it turns out, quite common---and yet they are little known today. First, it cannot be ignored that passive repeaters are most common in areas where communications infrastructure was built post-1960 through difficult terrain. In North America, this means mostly the West [2], far away from the Eastern cities where we think of telephone history being concentrated. Second, the days of passive repeaters were relatively short. After widespread adoption in the '60s, fiber optics began to cut into microwave networks during the '80s and rendered microwave long-distance links largely obsolete by the late '90s. Considerable improvements in cable-laying equipment, not to mention lighter and more durable cables, made fiber optics easier to install in difficult terrain than coaxial had ever been. Besides, during the 1990s, more widespread electrical infrastructure, miniaturization of radio equipment, and practical photovoltaic solar systems all combined to make active repeaters easier to install. Today, active repeater systems installed by helicopter with independent power supplies are not that unusual, supporting cellular service in the Mojave Desert, for example. Most passive repeaters have been obsoleted by changes in communications networks and technologies. Satellite communications offer an even more cost effective option for the most difficult installations, and there really aren't that many places left that a small active microwave site can't be installed.

Moreover, little has been done to preserve the history of passive repeaters. In the wake of the 2015 Wired article on the Long Lines network, considerable enthusiasm has been directed towards former AT&T microwave stations, which were mostly preserved by their haphazard transfer to companies like American Tower. Passive repeaters, lacking even the minimal commercial potential of old AT&T sites, were mostly abandoned in place. Often found in national forests and other resource management areas, many have been demolished as part of land restoration.
In 2019, a historic resources report was written on the Bonneville Power Administration's extensive microwave network. It was prepared to address the responsibility that federal agencies have for historical preservation under the National Historic Preservation Act and National Environmental Policy Act, policies intended to ensure that at least the government takes measures to preserve history before demolishing artifacts. The report reads: "Due to their limited features, passive repeaters are not considered historic resources, and are not evaluated as part of this study."

In 1995, Valmont Industries acquired Microflect. Valmont is known mostly for their agricultural products, including center-pivot irrigation systems, but they had expanded their agricultural windmill business into a general infrastructure division that manufactured radio masts and communication towers. For a time, Valmont continued to manufacture passive repeaters as Valmont Microflect, but business seems to have dried up. Today, Valmont Structures manufactures modular telecom towers from their facility across the street from McNary Field in Salem, Oregon.

A Salem local, descended from early Microflect employees, once shared a set of photos on Facebook: a beat-up hangar with a sign reading "Aircraft Repair Center," and in front of it, stacks of aluminum panel sections. Microflect workers erecting a passive repeater in front of a Douglas A-26. Rows of reflector sections beside a Shell aviation fuel station. George Kreitzberg died in 2004, James in 2017. As of 2025, Valmont no longer manufactures passive repeaters.

Postscript

If you are interested in the history of passive repeaters, there are a few useful tips I can give you. Nearly all passive repeaters in North America were built by Microflect, so they have a very consistent design. Locals sometimes confuse passive repeaters with old billboards or even drive-in theater screens; the clearest way to differentiate them is that passive repeaters have a face made up of aluminum modules with deep sidewalls for rigidity and flatness. Take a look at the Microflect manual for many photos.

Because passive repeaters are passive, they do not require a radio license proper. However, for site-based microwave licenses, the FCC does require that passive repeaters be included in paths (i.e., a license will be for an active site but with a passive repeater as the location at the other end of the path). These sites are almost always listed with a name ending in "PR".

I don't have any straight answer on whether or not any passive repeaters are still in use. It has likely become very rare, but there are probably still examples. Two sources suggest that Rachel, NV still relies on a passive repeater for telephone and DSL. I have not been able to confirm that, and the tendency of these systems to be abandoned in place means that people sometimes think they are in use long after they were retired. I can find documentation of a new utility SCADA system being installed, making use of existing passive repeaters, as recently as 2017.

[1] If you find these dB gain/loss calculations confusing, you are not alone. It is deceptively simple in a way that was hard for me to learn, and perhaps I will devote an article to it one day.

[2] Although not exclusively, with installations in places like Vermont and Newfoundland where similar constraints applied.
Alcatraz first operated as a prison in 1859, when the military fort first held convicted soldiers. The prison technology of the time was simple, consisting of little more than a basement room with a trap-door entrance. Only small numbers of prisoners were held in this period, but it established Alcatraz as a center of incarceration. Later, the Civil War triggered construction of a "political prison," a term with fewer negative connotations at the time, for Confederate sympathizers. This prison was more purpose-built (although actually a modification of an existing shop), but it was small and not designed for an especially high security level. It presaged, though, a much larger construction project to come.

Alcatraz had several properties that made it an attractive prison. First, it had seen heavy military construction as a Civil War defensive facility, but just decades later improvements in artillery made its fortifications obsolete. That left Alcatraz surplus property, a complete military installation available for new use. Second, Alcatraz was formidable. The small island was made up of steep rock walls, and it was more than a mile from shore in a bay known for its strong currents. Escape, even for prisoners who had seized control of the island, would be exceptionally difficult.

These advantages were also limitations. Alcatraz was isolated and difficult to support, requiring a substantial roster of military personnel to ferry supplies back and forth. There were no connections to the mainland, requiring on-site power and water plants. Corrosive sea spray, sent over the island by the Bay's strong winds, laid perpetual siege to the island. Buildings needed constant maintenance; rust covered everything. Alcatraz was not just a famous prison, it was a particularly complicated one.

In 1909, Alcatraz lost its previous defensive role and pivoted entirely to military prison. The Citadel, a hardened barracks building dating to the original fortifications, was partially demolished, and on top of it a new cellblock was built. This was a purpose-built prison, designed to house several hundred inmates under high security conditions. Unfortunately, few records seem to survive from the construction and operation of the cellblock as a disciplinary barracks. At some point, a manual telephone exchange was installed to provide service between buildings on the island; I only really know that because it was recorded as being removed later on.

Communications to and from Alcatraz were a challenge. Radio and even light signals were used to convey messages between the island and other military installations on the bay. There was a constant struggle to maintain cables.

Early efforts to lay cables in the bay were less about communications and more about triggering. Starting in 1883, the Army Corps of Engineers began the installation of "torpedoes" in the San Francisco Bay. These were different from what we think of as torpedoes today: they were essentially remotely-operated mines. Each device floated in the water by its own buoyancy, anchored to the bottom by a cable that then ran to shore. An electrical signal sent down the cable detonated the torpedo. The system was intended primarily to protect the bay from submarines, a new threat that often required technically complex defenses. Submarines are, of course, difficult to spot. To make the torpedoes effective, the Army had to devise a targeting system.
Observation posts on each side of the Golden Gate made sightings of possible submarines and reported them to a control post, where they were plotted on the map. With a threat confirmed, the control post would begin to detonate nearby torpedoes. A second set of observation posts, and a second line of torpedoes, were located further into the bay to address any submarines that made it through the first barrage. By 1891, there were three such control points in total: Fort Mason, Angel Island, and Yerba Buena.

The rather florid San Francisco Examiner of the day described the control point at Fort Mason, a "chamber of death and destruction" in a tunnel twenty feet underground. The Army "death-dealers" that manned the plotting table in that bunker had access to a board that "greatly resemble[d] the switch board in the great operating rooms of the telephone companies." By cords and buttons, they could select chains of mines and send the signal to fire.

NPS historians found that a torpedo control point had been planned at Alcatraz, and one of the fortifications modified to accommodate it, but it never seems to have been used. The 1891 article gives a hint of the reason, noting that the line from Alcatraz to Fort Mason was "favorable for a line of torpedoes" but that currents were so strong that it was difficult to keep them anchored. Perhaps this problem was discovered after construction was already underway.

Somewhere around 1887-1888, the Army Signal Corps had joined the cable-laying fray. A telegraph cable was constructed from the Presidio to Alcatraz, and provided good service except for the many times that it was dragged up by anchors and severed. This was a tremendous problem: in 1898, Gen. A. W. Greely of the Signal Corps called San Francisco the "worst bay in the country" for cable laying and said that no cable across the Golden Gate had lasted more than three years. The General attributed the problem mainly to the heavy shipping traffic, but I suspect that the notorious currents must have been a factor in just how many anchors were dragged through cables [1].

In 1889, a brand new Army telegraph cable was announced, one that would run from Alcatraz to Angel Island, and then from Angel Island to Marin County. An existing commercial cable crossed the Golden Gate, providing a connection all the way to the Presidio. The many failures of Alcatraz cables make it difficult to keep track. For example, a cable from Fort Mason to Alcatraz Island was apparently laid in 1891---but a few years later, it was lamented that Alcatraz's only cable connection to Fort Mason was indirect, via the 1889 Angel Island cable. Presumably the 1891 cable was damaged at some point and not replaced, but that event doesn't seem to have made the papers (or at least my search results!).

In 1900, a Signal Corps officer on Angel Island made a routine check of the cable to Alcatraz, finding it in good working order---but noticing that a "four masted schooner... in direct line with the cable" seemed to be in trouble just off the island and was being assisted by a tug. That evening, the officer returned to the cable landing box to find the ship gone... along with the cable. A French ship, "Lamoriciere," had drifted from anchor overnight. A Signal Corps sergeant, apparently having spoken with harbor officials, reported that the ship would have run completely aground had the anchor not caught the Alcatraz cable and pulled it taut.
Of course, the efforts of the tug to free Lamoriciere seem to have freed a little more than intended, and the cable was broken away from its landing. "Its end has been carried into the bay and probably quite a distance from land," the Signal Corps reported.

This ongoing struggle, of laying new cables to Alcatraz and then seeing them dragged away a few years later, has dogged the island basically to the modern day---when we have finally just given up. Today, as during many points in its history, Alcatraz must generate its own power and communicate with the mainland via radio.

When the Bureau of Prisons took control of Alcatraz in 1933, they installed entirely new radio systems. A marine AM radio was used to reach the Coast Guard, their main point of contact in any emergency. Another radio was used to contact "Alcatraz Landing," from which BOP ferries sailed, and over the years several radios were installed to permit direct communications with military installations and police departments around the Bay Area.

At some point, equipment was made available to connect telephone calls to the island. I'm not sure if this was manual patching by BOP or Coast Guard radio operators, or if a contract was made with PT&T to provide telephone service by radio. Such an arrangement seems to have been in place by 1937, when an unexplained distress call from the island made the warden impossible to contact (by the press or Bureau of Prisons) because "all lines [were] tied up." Unfortunately, I have not been able to find much on the radiotelephone arrangements. The BOP, no doubt concerned about security, did not follow the Army's habit of announcing new construction projects to the press. Fortunately, the BOP-era history of Alcatraz is much better covered by modern NPS documentation than the Army era (presumably because the more recent closure of the BOP prison meant that much of the original documentation was archived). Unfortunately, the NPS reports are mostly concerned with the history of the structures on the island and do not pay much attention to outside communications or the infrastructure that supported it.

Internal arrangements on the island almost completely changed when the BOP took over. The Army had left Alcatraz in a degree of disrepair (discussions about closing it having started by at least 1913), and besides, the BOP intended to provide a much higher level of security than the Army had. Extensive renovations were made to the main cellblock and many supporting buildings from 1933 to about 1939.

The 1930s had seen a great deal of innovation in technical security. Technologies like electrostatic and microwave motion sensors were available in early forms. On Alcatraz, though, the island was small and buildings tightly spaced. The prison staff, and in some cases their families, would be housed on the island just a stone's throw from the cellblock. That meant there would be quite a few people moving around exterior to the prison, ruling out motion sensors as a means of escape detection. Exterior security would instead be provided by guard and dog patrols.

There was still some cutting-edge technical security when Alcatraz opened, including early metal detectors. At first, the BOP contracted the Teletouch Corporation of New York City. Teletouch, a manufacturer of burglar alarms and other electronic devices, was owned by, or at least affiliated with, famed electromagnetics inventor and Soviet spy Leon Theremin.
Besides the instrument we remember him for today, Theremin had invented a number of devices for security applications, and the metal detectors were probably of his design. In practice, the Teletouch machines proved unsatisfactory. They were later replaced with machines made by Forewarn. I believe the metal detector on display today is one of the Forewarn products, although the NPS documents are a little unclear on this.

Sensitive common areas like the mess hall, kitchen, and sallyport were fitted with electrically-activated teargas canisters. Originally, the mess hall teargas was controlled by a set of toggle switches in a corner gun gallery, while the sallyport teargas was controlled from the armory. While the teargas system was never used, it was probably the most radical of Alcatraz's technical security measures. As more electronic systems were installed, the armory, with its hardened vault entrance and gun issue window, served as a de facto control center for Alcatraz's initial security systems.

The Army's small manual telephone switchboard was considered unsuitable for the prison's use. The telephone system provided communication between the guards, making it a critical part of the overall security measures, and the BOP specified that all equipment and cabling needed to be better secured from any access by prisoners. Modifications to the cellblock building's entrance created a new room, just to the side of the sallyport, that housed a 100-line automatic exchange. The Automatic Electric telephones that appear throughout historic photos of the prison suggest that this exchange was built by AE.

Besides providing dial service between prison offices and the many other structures on the island, the exchange was equipped with a conference circuit that included annunciator panels in each of the prison's main offices. Assuming this was the type provided by Automatic Electric, it provided an emergency communications system in which the guard telephones could ring all of the office and guard phones simultaneously, even interrupting calls already in progress. Annunciator panels in the armory and offices showed which phone had started the emergency conference, and which phones had picked up. From the armory, a siren on the building roof could be sounded to alert the entire island to any attempted escape.

Some locations, including the armory and the warden's office, were also fitted with fire annunciators. I am less clear on this system. Fire circuits similar to the previously described conference circuit (and sometimes called "crash alarms" after their use on airfields) were an optional feature on telephone exchanges of the time. Crash alarms were usually activated by dedicated "hotline" phones, and mentions of "emergency phones" in various prison locations suggest that this system worked the same way. Indeed, 1950s and '60s photos show a red phone alongside other telephones in several prison locations. The fire annunciator panels probably would have indicated which of the emergency phones had been lifted to initiate the alarm.

One of the most fascinating parts of Alcatraz, to a person like me, is the prison doors. Prison doors have a long history, one that is interrelated with but largely distinct from other forms of physical security. Take a look, for example, at the keys used in prisons. Prisons of the era, and even many today, rely on lever locks manufactured by specialty companies like Folger Adams and Sargent and Greenleaf.
These locks are prized for their durability, and that extends to the keys, huge brass plates that could hold up to daily wear well beyond most locks. At Alcatraz, the first warden adopted a "sterile area" model in which areas accessible to prisoners were to be kept as clear as possible of dangerous items like guns and keys. Guards on the cellblock carried no keys, and cell doors lacked traditional locks. Instead, the cell doors were operated by a central mechanical system designed by Stewart Iron Works.

To let prisoners out of cells in the morning, a guard in the elevated gun gallery passed keys to a cellblock guard in a bucket or on a string. The guard unlocked the cabinet of a cell row's control system, revealing a set of large levers. The design is quite ingenious: by purely mechanical means, the guard could select individual cells or the entire row to be unlocked, and then, by throwing the largest lever, pull the cell doors open---after returning the necessary key to the gun gallery above. This 1934 system represents a major innovation in centralized access control, designed specifically for Alcatraz.

Stewart Iron Works is still in business, although not building prison doors. Some years ago, the company assisted NPS's work to restore the locking system to its original function. The present-day CEO provided replicas of the original Stewart logo plate for the restored locking cabinets. Interviewing him about the restoration work, the San Francisco Chronicle wrote that "Alcatraz, he believes, is part of the American experience."

The Stewart mechanical system seems to have remained in use on the B and C blocks until the prison closed, but the D block was either originally fitted, or later upgraded, with electrically locked cell doors. These were controlled from a set of switches in the gun gallery.

In 1960, the BOP launched another wave of renovations on Alcatraz, mostly to bring its access and security arrangements up to modern standards. The telephone exchange was moved away from the sallyport to an upper floor of the administration building, freeing up its original space for a new control center. This is the modern sallyport control area that visitors look into through the ballistic windows; the old service windows and viewports into the armory anteroom that had been the de facto control center are now removed.

This control center is more typical of what you will see in modern prisons. Through large windows, guards observed the sallyport and visitor areas and controlled the electrically operated main gates. An electrical interlock prevented opening the full path from the cellblock to the outside, creating a mantrap in the visitor area through which the guards in the control room could identify everyone entering and leaving.

Photos from the 1960 control room, and other parts of the prison around the same time, clearly show consoles for a Western Electric 507B PBX. The 507B is really a manual exchange, although it used keys rather than the more traditional plugboard for a more modern look. It dates back to about 1929---so I assume the 507B had been installed well before the 1960 renovation, and its appearance then is just an artifact of the more and better photos available from the prison's later days.

Fortunately, the NPS Historic Furnishings Report for the cellblock building includes a complete copy of a 1960s memo describing the layout and requirements for the control center.
We're fortunate to get such a detailed listing of the equipment:

- Four phones (these are Automatic Electric instruments, based on the photo). One is a fire reporting phone (presumably on the exchange's "crash alarm" circuit), one is the watch call reporting phone (detailed in a moment), one is a regular outgoing call telephone, and one is an "executive right of way" phone that I assume could disconnect other calls from the outgoing trunks.
- The 507B PBX switchboard.
- An intercom for communication with each of the guard towers.
- Controls for five electrically operated doors.
- Intercoms to each of the electrically operated doors (many of these are right outside of the control center, but the glass is very thick and you would not otherwise be able to converse).
- An "annunciator panel for the interior telephone system," which presumably combines the conference circuit, fire circuit, and watch call annunciators.
- An intercom to the visitor registration area.
- A "paging intercom for group control purposes." I don't really know what that is; possibly it is for the public address speakers installed in many parts of the cellblock.
- A monitor speaker for the inmate radio system. This presumably allowed the control center to check the operation of the two-channel wired radio system installed in the cells.
- The "watch call answering device," discussed later.
- An indicator panel that shows any open doors in the D cell block (which is the higher security unit and the only one equipped with electrically locking cell doors).
- A two-way radio remote console.
- Tear gas controls.

Many of these are things we are already familiar with, but the watch call telephone system deserves some more discussion. It was clearly present back in the 1930s, but it wasn't clear to me what it actually did. Fortunately, this memo gives some details on the operation. Guards calling in to report their watch dial extension 3331. This connects to the watch call answering device in the control center, which, when enabled, automatically answers the call during the first ring. The answering device then allows a guard anywhere in the control center to converse with the caller via a loudspeaker and microphone. So, the watch call system is essentially just a speakerphone. This approach is probably a holdover from the 1930s system (older documents mention a watch call phone as well), and that would have been the early days for speakerphones, making it a somewhat specialized device. Clearly it made these routine watch calls a lot more convenient for the control center, especially since the guard there didn't even have to do anything to answer.

It might be useful to mention why this kind of system was used: I have never found any mention of two-way radios used on Alcatraz, and that's not surprising. Portable two-way radios were a nascent technology even in the 1960s---the handheld radio had basically been invented for the Second World War, and it took years for them to come down in size and price. If Alcatraz ever did issue radios to guards, it probably would have been in the last decade of operation. Instead, telephones were provided at enough places in the facility that guards could report their watch tour and any important events by finding a phone and calling the control center.
Guards were probably required to report their location at various points as they patrolled, so the control center would receive quite a few calls that were just a guard saying where they were---to be written down in a log by a control room guard, who no doubt appreciated not having to walk to a phone to hear these reports. This provided both the functions of a "guard tour" system, ensuring that guards were actually performing their rounds, and improved the safety of guards by making it likely that the control center would notice fairly promptly if they stopped reporting in.

Alcatraz closed as a BOP prison in 1963, and after a surprising number of twists and turns, ranging from plans to develop a shopping center to occupation by the Indians of All Tribes, Alcatraz opened to tourists. Most technology past this point might not be considered "historic," having been installed by NPS for operational purposes. I can't help but mention, though, that there were more attempts at a cable.

For the NPS, operating the power plant at Alcatraz was a significant expense that they would much rather save. The idea of a buried power cable isn't new. I have seen references, although no solid documentation, that the BOP laid a power cable in 1934. They built a new power plant in 1939 and operated it for the rest of the life of the prison, so either that cable failed and was never replaced, or it never existed at all...

I should take a moment here to mention that LLM-generated "AI slop" has become a pervasive and unavoidable problem around any "hot SEO topic" like tourism. Unfortunately, the history of tourist sites like Alcatraz has become more and more difficult to learn as websites with well-researched history are displaced in search results by SEO spam---articles that often contain confident but unsourced, and often incorrect, information. This has always been a problem, but it has increased by orders of magnitude over the last couple of years, and it seems that the LLM-generated articles are more likely to contain details that are outright made up than the older human-generated kind. It's really depressing. That's basically all I have to say about it.

It seems that a power cable was installed to Alcatraz sometime in the 1960s but failed by about 1971. I'm a little skeptical of that, because that was the era in which it was surplus GSA property, making such a large investment an odd choice, so maybe the 1980s article with that detail is wrong, or is confusing power with one of the several telephone cables that seem to have been laid (and failed) during BOP operations.

In any case, in late 1980 or early 1981, Paul F. Pugh and Associates of Oakland designed a novel type of underwater power cable for the NPS. It was expected to provide power to Alcatraz at much reduced cost compared to more traditional underwater power cable technologies. It never even made it to day 1: after the cable was laid, but before commissioning, some failure caused a large span of it to float to the surface. The cable was evidently not repairable, and it was pulled back to shore.

'I don't know where we go from here,' William J. Whalen, superintendent of the Golden Gate National Recreation Area, said after the broken cable was hauled in. We do know now: where the NPS went from there was decades of operating two diesel generators on the island, until a 2017 DoE-sponsored project that installed solar panels on the cellblock building roof.
The panels were intentionally installed such that they are not visible from anywhere on the ground, preserving the historic integrity of the site. In aerial photos, though, they give Alcatraz a curiously modern look. The DoE calls the project, which incorporates battery storage and backup diesel generators, "one of the largest microgrids in the United States." That is an interesting framing, one that emphasizes the modern valence of "microgrid," since Alcatraz had been a self-sufficient electrical system since the island's first electric lights. But what's old is, apparently, new again.

I originally wrote much of this as part of a larger travelogue on my most recent trip to Alcatraz, which was coincidentally the same day as a visit by Pam Bondi and Doug Burgum to "survey" the prison for potential reopening. That piece became long and unwieldy, so I am breaking it up into more focused articles---this one on the technical history, a travelogue about the experience of visiting the island in this political context and its history as a symbol of justice and retribution, and probably a third piece on the way that the NPS interprets the site today. I am pitching the travelogue itself to other publications, so it may not have a clear fate for a while, but if it doesn't appear here I'll let you know where. In any case, there probably will be a loose part two to look forward to.

[1] Greely had a rather illustrious Army career. His term as chief of the Signal Corps was something of a retirement after he led several arctic expeditions, the topic of his numerous popular books and articles. He received the Medal of Honor shortly before his death in 1935.
A long time ago I wrote about secret government telephone numbers, and before that, secret military telephone buttons. I suppose this is becoming a series. To be clear, the "secret" here is a joke, but more charitably I could say that it refers to obscurity rather than any real effort to keep them secret. Actually, today's examples really make this point: they're specifically intended to be well known, but are still pretty obscure in practice. If you've been around for a while, you know how much I love telephone numbers. Here in North America, we have a system called the North American Numbering Plan (NANP) that has rigidly standardized telephone dialing practices since the middle of the 20th century. The US, Canada, and a number of Central American countries benefit from a very orderly system of area codes (more formally, numbering plan areas or NPAs) followed by a subscriber number written in the format NXX-XXXX (this is a largely NANP-centric notation for describing phone number patterns: N represents the digits 2-9, and X any digit). All of these NANP numbers reside under the country code 1, allowing at least theoretically seamless international dialing within the NANP community. It's really a pretty elegant system. NANP is the way it is for many reasons, but it mostly reflects technical requirements of the telephone exchanges of the 1940s. This is more thoroughly explained in the link above, but one of the goals of NANP is to ensure that step-by-step (SxS) exchanges can process phone numbers digit by digit, as they are dialed. In other words, it needs to be possible to navigate the decision tree of telephone routing using only the digits dialed so far. Readers with a computer science education might have some tidy way to describe this in terms of Chomsky or something, but I do not have a computer science education; I have an Information Technology education. That means I prefer flow charts to automata, and we can visualize a basic SxS exchange as a big tree. When you pick up your phone, you start at the root of the tree, and each digit dialed chooses the edge to follow. Eventually you get to a leaf that is hopefully someone's telephone, but at no point in the process does any node have the context of the digits dialed before or after, or of how many total digits you dial. This creates all kinds of practical constraints, and is the reason, for example, that we tend to write ten-digit phone numbers with a "1" before them. That requirement was in some ways long-lived (the last SxS exchange on the public telephone network was retired in 1999), and in other ways not so long-lived... "common control" telephone exchanges, which did store the entire number in electromechanical memory before making a routing decision, were already in use by the time the NANP scheme was adopted. They just weren't universal, and a common nationwide numbering scheme had to be designed to accommodate the lowest common denominator.
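To make the tree concrete, here is a toy sketch in Perl (mine, not anything from an actual switch; the numbers and routes are invented) of the property that matters: each stage sees only the current digit, never the whole number.

    #!/usr/bin/perl -w
    # Toy step-by-step "exchange": walk a routing tree one dialed digit
    # at a time, with no lookahead and no memory beyond the current node.
    use strict;

    my $routes = {
        '0' => 'operator',
        # a leading 1 selects a long-distance trunk, then an area code
        '1' => { '5' => { '0' => { '5' => 'LD trunk toward NPA 505' } } },
        # digits 2-9 start a local number; here, office code 555
        '5' => { '5' => { '5' => 'local office 555; collect 4 more digits' } },
    };

    sub dial {
        my $node = $routes;
        foreach my $d (split //, shift) {
            # each stage can only examine the single digit it was handed;
            # if no selector matches, the call fails right here, mid-dial
            return 'reorder tone' unless ref($node) eq 'HASH' && exists $node->{$d};
            $node = $node->{$d};
            return $node unless ref $node;    # reached a leaf
        }
        return 'still dialing...';
    }

    print dial('0'), "\n";        # operator
    print dial('1505'), "\n";     # LD trunk toward NPA 505
    print dial('5551234'), "\n";  # hits the office leaf after 3 digits

A scheme like this only works if the digits dialed so far always determine the next step, which is why "1" and "0" could be reserved up front for long distance and the operator, and why NPAs and office codes had to avoid starting with them.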
This discussion so far is all applicable to the land-line telephone. There is a whole telephone network that is, these days, almost completely separate but interconnected: cellular phones. Early cellular phones (where "early" extends into CDMA and early GSM deployments) were much more closely attached to the "POTS" (Plain Old Telephone System). AT&T and Verizon both operated traditional telephone exchanges, for example 5ESS, that routed calls to and from their customers. These telephone exchanges have become increasingly irrelevant to mobile telephony, and you won't find a T-Mobile ESS or DMS anywhere. All US cellular carriers have adopted the GSM technology stack, and GSM has its own definition of the switching element that can be, and often is, fulfilled by an AWS EC2 instance running RHEL 8. Calls between cell phones today, even between different carriers, are often connected completely over IP and never touch a traditional telephone exchange. The point is that not only is telephone number parsing less constrained on today's telephone network; in the case of cellular phones, it is outright required to be more flexible. GSM also defines the properties of phone numbers, and it is a very loose definition. Keep in mind that GSM is deeply European, and was built from the start to accommodate the wide variety of dialing practices found in Europe. This manifests in ways big and small; one of the notable small ways is that the European emergency number 112 works just as well as 911 on US cell phones, because GSM dictates special handling for emergency numbers and dictates that 112 is one of those numbers. In fact, the definition of an "emergency call" on modern GSM networks is requesting a SIP URI of "urn:service:sos". This reveals that dialed-number handling on cellular networks is fundamentally different. When you dial a number on your cellular phone, the phone collects the entire number and then applies a series of rules to determine what to do, often leading to a GSM call setup process in which the entire number, along with various flags, is sent to the network. This is all software-defined. In the immortal words of our present predicament, "everything's computer." The bottom line is that, within certain regulatory boundaries and requirements set by GSM, cellular carriers can do pretty much whatever they want with phone numbers. Obviously numbers need to be NANP-compliant to be carried by the POTS, but many modern cellular calls aren't carried by the POTS; they are completed entirely within cellular carrier systems through their own interconnection agreements. This freedom allows all kinds of things, like "HD voice" (cellular calls connected without the narrow filtering and companding used by the traditional network), and a lot of flexibility in dialing. Most people already know about some weird cellular phone numbers. For example, you can dial *#06# to display your phone's various serial numbers. This is an example of a GSM MMI (man-machine interface) code: phone numbers that are handled entirely within your device but nonetheless defined as dialable numbers by GSM for compatibility with even the most basic flip phones. GSM also defines USSD codes, for "unstructured supplementary service data," which set up connections to the network that can be used in any arbitrary way the network pleases. Older prepaid phone services used to implement balance check and top-up operations using USSD numbers, and they're also often used in ways similar to Vertical Service Codes (VSCs) on the landline network to control carrier features. USSDs also enabled the first forms of mobile data, which involved a "special telephone call" to a USSD number in order to download a cut-down form of ESPN in a weird mobile-specific markup language. Now, put yourself in the shoes of an enterprising cellular network. The flexibility of processing phone numbers as you please opens up all kinds of possibilities. Innovative services! Customer convenience! Sell them for money!
Oh my god, sell them for money! It seems like this started with customer service. It is an old practice, dating to the Bell operating companies, to have special short phone numbers to reach the telephone company itself. The details varied by company (often based on technical constraints in their switching system), but a common early setup was that dialing 114 got you the repair service operator, to report a problem with your phone line. These numbers were usually listed in the front of the phone book, and for the phone company the fact that they were "special" or nonstandard was sort of a feature, since they could ensure that the numbers were always routed within the same switch. The selection of "911" as the US emergency number seems rooted in this practice, as later on several major telcos used the "N11" numbers for their service lines. This became immortalized in the form of 611, which will get you customer service for most phone carriers. So cellular companies did the same, allocating themselves "special" numbers for various service lines. Verizon offers #PMT to make a payment. Naturally, there's also room for upsell services: #ROAD for roadside assistance on Verizon. The odd thing about these phone numbers is that there's really no standard involved: they're just the arbitrary practices of specific cellular companies. The term "mobile dial code" (MDC) is usually used to refer to them, although that term seems to have arisen organically rather than by intent. Remember, these aren't a real thing! The carriers just make them up, all on their own. The only real constraint on MDCs is that they need to not collide with any POTS number, which is most easily achieved by prefixing them with some combination of * and #---usually not "*#", because that combination is referenced by the GSM standard for MMI codes. MDCs are available for purchase, but the terms don't seem to be public, and you have to negotiate separately with each carrier. That's because there is no centralization. This is where MDCs stand in clear contrast to the better-known SMS Short Code, or SMSSC. Those are the five or six-digit numbers widely used in advertising campaigns. SMSSCs are centrally managed by the SMS Short Code Registry, which is a function of industry association CTIA but contracted to iConectiv. iConectiv is sort of like the SAIC of the communications industry: a huge company that dates back to the Bell System (where it became Bellcore after divestiture) and that no one has heard of, but that is nonetheless a critically important part of the telephone system. Providers that want to have an SMSSC (typically on behalf of one of their customers) pay a fee, and usually recoup it from the end user. That fee is not cheap: typical end-user rates for an SMSSC run over $10k a year. But at least it's straightforward, and your SMS A2P or marketing company can make it happen for you. MDCs have no such centralization, no standardized registration process. You negotiate with each carrier individually. That means it's pretty difficult to put together "complete coverage" on an MDC by getting the same one assigned by every major carrier. And this is one of those areas where "good enough" is seldom good enough; people get pissed off when something you advertise doesn't work. Putting a phone number that only works for some people on a billboard can quickly turn into an expensive embarrassment, so companies will be wary of using an MDC in marketing if they don't feel really confident that it works for the vast majority of cellphone users.
Because of this fragmentation, adoption of MDCs for marketing purposes has been very low. The only going concern I know of is #250, operated by a company called Mobile Direct Response. The premise of #250 is very simple: users call #250 and are greeted by a simple IVR. They say a keyword, and they're either forwarded to the phone number of the business that paid for the keyword, or they receive a text message response with more information. #250 is specifically oriented towards radio advertising, where asking people to remember a ten-digit phone number is, well, asking a lot. It's also made the jump to podcast advertising. #250 is priced in a very radio-centric way: by the keyword, and by the size of the market area in which the advertisement that gives the keyword is played. #250 was founded by Dave Robinett, who used to work on marketing at Sprint, presumably where he became aware that these MDCs were a possibility. He has negotiated for #250 to work across a substantial list of cellular carriers in the US and Canada, providing almost complete coverage. That wasn't easy: Robinett said in an interview that it took five years to get AT&T, T-Mobile, Verizon, and Sprint on board. #250 does not appear to be especially widely used. For one, the website is a little junky, with some broken links and other indications that it is not backed by a large communications department. Dave Robinett may be the entire company. They've been operating since at least 2017, and I've only ever heard it in an ad once---a podcast ad that ended with "Call #250 and say I need a dentist." One thing you quickly notice when you look into telephone marketing is that dentists are apparently about 80% of the market. He does mention success with shows like "Rush, Hannity, and Levin," so it's safe to say that my radio habits are a little different from Robinett's. That's not to say that #250 is a failure. In the same interview Robinett says that the company pays his mortgage and, well, that ain't too bad. But it's also nothing like the widespread adoption of SMSSCs. One wonders whether MDCs being confined to one company, and one so focused on radio marketing, limits their potential. It might really open things up if some company created a registration service and prenegotiated terms with carriers, so that companies could pick up their own MDCs to use as they please. Well, yeah, someone's trying. Around 2006, a recently-founded mobile marketing company called Zoove announced StarStar dialing. I'm a little unclear on Zoove's history. It seems that they were originally founded as Teleractive in Rhode Island, as an SMS short code keyword response service, and after an infusion of VC cash moved to Palo Alto and started looking for something bigger. In 2016, they were acquired by a call center technology company called Mindful. Or maybe Zoove sold the StarStar business to Mindful? Stick a pin in that. I don't love the name StarStar, which has shades of Spacestar Ordering. But it refers to their chosen MDC prefix, two stars. Well, even that point is a little odd: according to their marketing material you can also get numbers with a # prefix or a single * prefix, but all of the examples use **. I would say that, in general, StarStar has it a little less together than #250. Their website is kind of broken: it only loads intermittently, and some of the images are missing. At one point it uses the term "CADC" to describe these numbers, but I can't find that expanded anywhere.
Plus the "About" page refers repeatedly to Virtual Hold Technologies, which renamed to VHT in 2018 and Mindful 2022. It really feels like the vestigial website of a dead company. I know about StarStar because, for a time, trucks from moving franchise All My Sons prominently bore the number MOVE on the side. Indeed, this is still one of the headline examples on the StarStar website, but it doesn't work. I just get a loud click and then the call ends. And it's not that StarStar doesn't work with my mobile carrier, because StarStar's own number MOBILE does connect to their IVR. That IVR promises that a representative will speak with me shortly, plays about five seconds of hold music, and then dumps me on a voicemail system. Despite StarStar numbers apparently basically working, I'm finding that most of the examples they give on their website won't even connect. Perhaps results will vary depending on the mobile network. Well, perhaps not that much is lost. StarStar was founded by Steve Doumar, a serial telephone marketing entrepreneur with a colorful past founding various inbound call center companies. Perhaps his most famous venture is R360, a "lead acquisition" service memorialized by headlines like "Drug treatment referral service took advantage of addictions to make a quick buck" from the Federal Trade Commission. He's one of those guys whose bio involves founding a new company every two years, which he has to spin as entrepreneurial dynamism rather than some combination of fleeing dissatisfied investors and fleeing angered regulators. Today he runs whisp.io, a "customer activation platform" that appears to be a glorified SMS advertising service featuring something ominously called "simplified opt-in." Whisp has a YouTube channel which features the 48-second gem "Fun Fact We Absolutely Love About Steve Doumar". Description: Our very own CEO, Steve Doumar is a kind and generous person who has given back to the community in many ways; this man is absolutely a man with a heart of gold. Do you want to know the fun fact? Yes you do! Here it is: "He is an incredible philanthropist. He loves helping other people. Every time I'm with him he comes up with new ways and new ideas to help other people. Which I think is amazing. And he doesn't brag about it, he doesn't talk about it a lot." Except he's got his CMO making a YouTube video about it? From Steve Doumar's blog: American entrepreneur Ray Kroc expressed the importance of persisting in a busy world where everyone wants a bite of success. This man is no exception. An entrepreneur. A family man. A visionary. These are the many names of a man that has made it possible for opt-ins to be safe, secure, and accurate; Steve Doumar. I love this stuff, you just can't make it up. I'm pretty sure what's going on here is just an SEO effort to outrank the FTC releases and other articles about the R360 case when you search for his name. It's only partially working, "FTC Hits R360 and its Owner With $3.8 Million Civil ..." still comes in at Google result #4 for "Steve Doumar," at least for me. But hey, #4 is better than #1. Well, to be fair to StarStar, I don't think Steve Doumar has been involved for some years, but also to be fair, some of their current situation clearly dates to past behavior that is maybe less than savory. Zoove originally styled itself as "The National StarStar Registry," clearly trying to draw parallels to CTIA/iConectiv's SMSSC registry. 
Their largest customer was evidently a company called Sumotext, which leased a number of StarStar numbers to offer an SMS and telephone marketing service. In 2016, Sumotext sued StarStar, Zoove, VHT (now Mindful), and a healthy list of other entities all involved in StarStar, including the intriguingly named StarSteve LLC. I'm not alone in finding the corporate history a little baffling; in a footnote on one ruling, the court expressed confusion about all the different names and opted to call them all Zoove. In any case, Sumotext alleged that Zoove, StarSteve, and VHT all merged as part of a scheme to illegally monopolize the StarStar market by undercutting the companies that had been leasing the numbers, effectively giving VHT (Mindful) an exclusive ability to offer marketing services with StarStar numbers. The case didn't end up going anywhere for Sumotext: the jury found that Sumotext hadn't established a relevant market, which is a key element of a Sherman Act case. An appeal was made all the way to the Supreme Court, but it wasn't taken up. What the case did do was publicize some pretty sketchy-sounding details, like the seemingly uncontested accusation that VHT got Sumotext's customer list from the registry database and used it to convert them all into StarSteve customers. And yes, the Steve in StarSteve is Steve Doumar. As best I can tell, the story here is that Steve Doumar founded Zoove (or bought Teleractive and renamed it, or something) to establish the National StarStar Registry, then founded a marketing company called StarSteve that resold StarStar numbers, then merged StarSteve and the National StarStar Registry together and cut off all of the other resellers. Apparently not a Sherman Act violation, but it sure is a bad look, and I wonder how much it contributed to the lack of adoption of the whole StarStar idea---especially given that Sumotext seems to have been responsible for most of that adoption, including the All My Sons deal for **MOVE. I wonder if All My Sons had to take **MOVE off of their trucks because of the whole StarSteve maneuver? That seems to be what happened. Look, ten-digit phone numbers are hard to remember, that much is true. But as is, the "MDC" industry doesn't seem stable enough for advertising applications where the number needs to continue to work into the future. I think the #250 service is probably here to stay, but confined to the niche of audio advertising. StarStar raised at least $30 million in capital in the 2010s, but seems to have shot itself in the foot. StarStar owner VHT/Mindful, now acquired by Medallia, doesn't even mention StarStar as a product offering. Hey, remember how Steve Doumar is such a great philanthropist? There are a lot of vestiges around of StarStar Inc., a nonprofit that made StarStar numbers available to charitable organizations. Their website, starstar.org, is now a Wix error page. You can find old articles about StarStar Me, also written **me, which sounds lewd but was a $3/mo offering that allowed customers to get a vanity short code (such as ** followed by their name)---the original form of StarStar, dating back to 2012 and the beginning of Zoove. In a press release announcing StarStar Me, Zoove CEO Joe Gillespie said: With two-thirds of smartphone users having downloaded social networking apps to their phones, there's a rapidly growing trend in today's on-the-go lifestyle to extend our personal communications and identity into the digital realm via our mobile phones.
And somehow this leads to paying $3 a month to get StarStarred? I love it! It's so meaningless! And years later it would be StarStar Mobile formerly Zoove by VHT now known as Mindful a Medallia company. Truly an inspiring story of industry, and just one little corner of the vast tapestry of phone numbers.
Some time ago, via a certain orange website, I came across a report about a mission to recover nuclear material from a former Soviet test site. If you haven't read it, I don't know what you're doing here; go read that instead. But it brought up a topic that I have only known very little about: hydronuclear testing. One of the key reasons for the nonproliferation concern at Semipalatinsk was the presence of a large quantity of weapons-grade material. This created a substantial risk that someone would recover the material and either use it directly or sell it---either way giving a significant leg up on the construction of a nuclear weapon. That's a bit odd, though, isn't it? Material refined for use in weapons is scarce and valuable, and besides that, rather dangerous. It's uncommon to just leave it lying around, especially not hundreds of kilograms of it. This material was abandoned in place because the nature of the testing performed required that a lot of weapons-grade material be present, and made it very difficult to remove. As the Semipalatinsk document mentions in brief, similar tests were conducted in the US and led to a similar abandonment of special nuclear material at Los Alamos's TA-49. Today, I would like to give the background on hydronuclear testing---the what and why. Then we'll look specifically at LANL's TA-49 and the impact of the testing performed there. First we have to discuss the boosted fission weapon. Especially in the 21st century, we tend to talk about "nuclear weapons" as one big category. The distinction between an "A-bomb" and an "H-bomb," for example, or between a conventional nuclear weapon and a thermonuclear weapon, is mostly forgotten. That's no big surprise: thermonuclear weapons have been around since the 1950s, so it's no longer a great innovation or escalation in weapons design. The thermonuclear weapon was not the only post-WWII design innovation. At around the same time, Los Alamos developed a related concept: the boosted weapon. Boosted weapons were essentially an improvement in the efficiency of nuclear weapons. When the core of a weapon goes supercritical, the fission produces a powerful pulse of neutrons. Those neutrons cause more fission, the chain reaction that makes up the basic principle of the atomic bomb. The problem is that the whole process isn't fast enough: the energy produced blows the core apart before it's been sufficiently "saturated" with neutrons to completely fission. That leads to a lot of the fuel in the core being scattered, rather than actually contributing to the explosive energy. In boosted weapons, a material that will undergo fusion is added to the mix, typically tritium and deuterium gas. The immense heat of the beginning of the supercritical stage causes the gas to undergo fusion, and it emits far more neutrons than the fissioning fuel does alone. The additional neutrons cause more fission to occur, improving the efficiency of the weapon. Even better, despite the theoretical complexity of driving a gas into fusion, the mechanics of this mechanism are actually simpler than the techniques used to improve yield in non-boosted weapons (pushers and tampers). The result is that boosted weapons produce a more powerful yield in comparison to the amount of fuel, and the non-nuclear components can be made simpler and more compact as well. This was a pretty big advance in weapons design, and boosting is now a ubiquitous technique. It came with some downsides, though. The big one is exactly that property of making a nuclear yield easier to achieve.
Early implosion weapons were remarkably difficult to detonate, requiring an extremely precisely timed detonation of the high explosive shell. While an inconvenience from an engineering perspective, the inherent difficulty of achieving a nuclear yield also provided a safety factor. If the high explosives detonated for some unintended reason, like being struck by cannon fire as a bomber was intercepted, or impacting the ground following an accidental release, it wouldn't "work right." Uneven detonation of the shell would scatter the core, rather than driving it into supercriticality. This property was referred to as "one point safety": a detonation at one point on the high explosive assembly should not produce a nuclear yield. While it has its limitations, it became one of the key safety principles of weapon design. The design of boosted weapons complicated this story. Just a small fission yield, from a small fragment of the core, could potentially start the fusion process and trigger the rest of the core to detonate as well. In other words, weapon designers became concerned that boosted weapons would not have one point safety. As it turns out, two-stage thermonuclear weapons, which were being fielded around the same time, posed a similar set of problems. The safety problems around more advanced weapon designs came to a head in the late '50s. Incidentally, so did something else: shifts in Soviet politics had given Khrushchev extensive power over Soviet military planning, and he was no fan of nuclear weapons. After some on-again, off-again dialog between the time's nuclear powers, the US and UK agreed to a voluntary moratorium on nuclear testing, which began in late 1958. For weapons designers this was, of course, a problem. They had planned to address the safety of advanced weapon designs through a testing campaign, and that was now off the table for the indefinite future. An alternative had to be developed, and quickly. In 1959, the Hydronuclear Safety Program was initiated. By reducing the amount of material in otherwise real weapon cores, physicists realized, they could run a complete test of the high explosive system and observe its effects on the core without producing a meaningful nuclear yield. These tests were dubbed "hydronuclear" because of the desire to observe the behavior of the core as it flowed like water under the immense explosive force. While the test devices were in some ways real nuclear weapons, the nuclear yield would be vastly smaller than the high explosive yield---practically nil. Weapons designers seemed to agree that these experiments complied with the spirit of the moratorium, being far from actual nuclear tests, but there was enough concern that Los Alamos went to the AEC and President Eisenhower for approval. They evidently agreed, and work started immediately to identify a suitable site for hydronuclear testing. While hydronuclear tests do not create a nuclear yield, they do involve a lot of high explosives and radioactive material. The plan was to conduct the tests underground, where the materials cast off by the explosion would be trapped. This would solve the immediate problem of scattering nuclear material, but it would obviously be impractical to recover the dangerous material once it was mixed with unstable soil deep below the surface. The material would stay, and it had to stay put!
The US Army Corps of Engineers, a center of expertise in hydrology because of their reclamation work, arrived in October 1959 to begin an extensive set of studies of the Frijoles Mesa site. This was an unused area near a good road but far on the east edge of the laboratory, well separated from the town of Los Alamos and pretty much anything else. More importantly, it was a classic example of northern New Mexican geology: high up on a mesa built of tuff and volcanic sediments, well-drained and extremely dry soil in an area that received little rain. One of the main migration paths for underground contaminants is their interaction with water, and specifically the tendency of many materials to dissolve into groundwater and flow with it towards aquifers. The Corps of Engineers drilled test wells, about 1,500' deep, and a series of 400' core samples. They found that on Frijoles Mesa, ground water was over 1,000' below the surface, and that everything above was far from saturation. That means no mobility of the water, which is trapped in the soil. It's just about the ideal situation for putting something underground and having it stay. Incidentally, this study would lead to the development of a series of new water wells for Los Alamos's domestic water supply. It also gave the green light for hydronuclear testing, and Frijoles Mesa was dubbed Technical Area 49 and subdivided into a set of test areas. Over the following three years, these test areas would see about 35 hydronuclear detonations, carried out in the bottom of shafts that were about 200' deep and 3-6' wide. It seems that for most tests, the hole was excavated and lined, with a ladder installed to reach the bottom. Technicians worked at the bottom of the hole to prepare the test device, which was connected by extensive cabling to instrumentation trailers on the surface. When the "shot" was ready, the hole was backfilled with sand and sealed at the top with a heavy plate. The material on top of the device held everything down, preventing migration of nuclear material to the surface. The high explosives did, of course, destroy the test device and the cabling, but not before the instrumentation trailers had recorded a vast amount of data. If you read these kinds of articles, you must know that the 1958 moratorium did not last. Soviet politics shifted again, France began nuclear testing, and negotiations over a more formal test ban faltered. US intelligence suspected that the Soviet Union had operated their nuclear weapons program at full tilt during the test ban, and the military suspected clandestine tests, although there was no evidence that the moratorium had been violated. That the Soviets continued their research efforts is guaranteed, of course; we did as well. Physicist Edward Teller, ever the nuclear weapons hawk, opposed the moratorium and pushed to resume testing. In 1961, the Soviet Union resumed testing, culminating in the test of the record-holding "Tsar Bomba," a 50 megaton device. The US resumed testing as well. The arms race was back on. US hydronuclear testing largely ended with the resumption of full-scale testing. The same safety studies could be completed on real weapons, and those tests would serve other purposes in weapons development as well. Although post-moratorium testing included atmospheric detonations, the focus had shifted towards underground tests, and the 1963 Partial Test Ban Treaty restricted the US and USSR to underground tests only.
One wonders about the relationship between hydronuclear testing at TA-49 and the full-scale underground tests extensively performed at the NTS. Underground testing began in 1951 with Buster-Jangle Uncle, a test to determine how big of a crater could be produced by a ground-penetrating weapon. Uncle wasn't really an underground test in the modern sense; the device was emplaced only 17 feet deep and still produced a huge cloud of fallout. It started a trend, though: a similar 1955 test was set 67 feet deep, producing a spectacular crater, before the 1957 Plumbbob Pascal-A was detonated at 486 feet and produced radically less fallout. 1957's Plumbbob Rainier was the first fully-contained underground test, set at the end of a tunnel excavated far into a hillside. This test emitted no fallout at all, proving the possibility of containment. Thus both the idea of emplacing a test device in a deep hole, and the fact that testing underground could contain all of the fallout, were known when the moratorium began in late 1958. What's very interesting about the hydronuclear tests is the fact that technicians actually worked "downhole," at the bottom of the excavation. Later underground tests were prepared by assembling the test device at the surface, as part of a rocket-like "rack," and then lowering it to the bottom just before detonation. These techniques hadn't yet been developed in the '50s, thus the use of a horizontal tunnel for the first fully-contained test. Many of the racks used for underground testing were designed and built by LANL, but others (called "canisters," in an example of the tendency of the labs to not totally agree on things) were built by Lawrence Livermore. I'm not actually sure which of the two labs started building them first; a question for future research. It does seem likely that the hydronuclear testing at LANL advanced the state of the art in remote instrumentation and underground test design, facilitating the adoption of fully-contained underground tests in the following years. During the three years of hydronuclear testing, shafts were excavated in four testing areas. It's estimated that the test program at TA-49 left about 40kg of plutonium and 93kg of enriched uranium underground, along with 92kg of depleted uranium and 13kg of beryllium (both toxic contaminants). Because of the lack of a nuclear yield, these tests did not create the caverns associated with underground testing. Material from the weapons likely spread within just a 10-20' area, as holes were drilled on a 25' grid and contamination from previous neighboring tests was encountered only once. The tests also produced quite a bit of ancillary waste: things like laboratory equipment, handling gear, cables and tubing, that are not directly radioactive but were contaminated with radioactive or toxic materials. In the fashion typical of the time, this waste was buried on site, often as part of the backfilling of the test shafts. During the excavation of one of the test shafts, 2-M, in December 1960, contamination was detected at the surface. It seems that the geology allowed plutonium from a previous test to spread through cracks into the area where 2-M was being drilled. The surface soil contaminated by drill cuttings was buried back in hole 2-M, but this incident made area 2 the most heavily contaminated part of TA-49. When hydronuclear testing ended in 1961, area 2 was covered by 6' of gravel and 4-6" of asphalt to better contain any contaminated soil.
Several support buildings on the surface were also contaminated, most notably a building used as a radiochemistry laboratory to support the tests. An underground calibration facility, which allowed test equipment to be exposed to a contained source in an underground chamber, was also built at TA-49 and similarly contaminated by use with radioisotopes. The Corps of Engineers continued to monitor the hydrology of the site from 1961 to 1970, and test wells and soil samples showed no indication that any contamination was spreading. In 1971, LANL established a new environmental surveillance department that assumed responsibility for legacy sites like TA-49. That department continued to sample wells and soil, and added air sampling. Monitoring of stream sediment downhill from the site was added in the '70s, as many of the contaminants involved can bind to silt and travel with surface water. This monitoring has not found any spread either. That's not to say that everything is perfect. In 1975, a section of the asphalt pad over Area 2 collapsed, leaving a three-foot-deep depression. Rainwater pooled in the depression and then flowed through the gravel into hole 2-M itself, collecting in the bottom of the lining of the former experimental shaft. In 1976, the asphalt cover was replaced, but concerns remained about the water that had already entered 2-M. It could potentially travel out of the hole, continue downwards, and carry contamination into the aquifer around 800' below. Worse, a nearby core sample hole had picked up some water too, suggesting that the water was flowing out of 2-M through cracks and into nearby features. Since the core hole had a slotted liner, it would be easier for water to leave it and soak into the ground below. In 1980, the water that had accumulated in 2-M was removed, by lifting about 24 gallons to the surface. While the water was plutonium-contaminated, it fell within acceptable levels for controlled laboratory areas. Further inspections through 1986 did not find additional water in the hole, suggesting that the asphalt pad was continuing to function correctly. Several other investigations were conducted, including the drilling of some additional sample wells and examination of other shafts in the area, to determine if there were other routes for water to enter the Area 2 shafts. Fortunately, no evidence of ongoing water ingress was found. In 1986, TA-49 was designated a hazardous waste site under the Resource Conservation and Recovery Act. Shortly after, the site was evaluated under CERCLA to prioritize remediation. Scoring using the Hazard Ranking System determined a fairly low risk for the site, due to the lack of spread of the contamination and evidence suggesting that it was well contained by the geology. Still, TA-49 remains an environmental remediation site and now falls under a license granted by the New Mexico Environment Department. This license requires ongoing monitoring, and remediation of any problems with the containment. For example, in 1991 the asphalt cover of Area 2 was found to have cracked, allowing more water to enter the sample wells. The covering was repaired once again, and investigations were made every few years from 1991 to 2015 to check for further contamination. Ongoing monitoring continues today. So far, Area 2 has not been found to pose an unacceptable risk to human health or a risk to the environment.
NMED permitting also covers the former radiochemistry laboratory and calibration facility, and infrastructure related to them, like a leach field from drains. Sampling found some surface contamination, so the affected soil was removed and disposed of at a hazardous waste landfill where it will be better contained. TA-49 was reused for other purposes after hydronuclear testing. These activities included high explosive experiments contained in metal "bottles," carried out in a metal-lined pit under a small structure called the "bottle house." Part of the bottle house site was later reused to build a huge hydraulic ram used to test steel cables to their failure strength. I am not sure of the exact purpose of this "Cable Test Facility," but given the design, and the timeline of its use during the peak of underground testing, I suspect LANL used it as a quality control measure for the cable assemblies used to lower underground test racks into their shafts. No radioactive materials were involved in either of these activities, but high explosives and hydraulic oil can both be toxic, so both sites were investigated and received some surface soil cleanup. Finally, the NMED permit covers the actual test shafts. These have received numerous investigations over the sixty years since the original tests, and significant contamination is present, as expected. However, that contamination does not seem to be spreading, and modeling suggests that it will stay that way. In 2022, the NMED issued Certificates of Completion releasing most of the TA-49 remediation sites without further environmental controls. The test shafts themselves, known to NMED by the punchy name of Solid Waste Management Unit 49-001(e), received a certificate of completion that requires ongoing controls to ensure that the land is used only for industrial purposes. Environmental monitoring of the TA-49 site continues under LANL's environmental management program and federal regulation, but TA-49 is no longer an active remediation project. The plutonium and uranium are just down there, and they'll have to stay.
My other irredeemably nerdy habit is roadgeeking: exploring and mapping highways both old and new. It turns out that 8-bit roadgeeking on ordinary home computers was absolutely possible. For computers of this class, devising an optimal highway route becomes an exercise not only in how to encode sufficient map data to a floppy disk, but also in performing efficient graph traversal with limited hardware. Today we'll explore Roadsearch-Plus, one of the (if not the) earliest such software packages — primarily on the Commodore 64, but originating on the Apple II — and at the end "drive" all the way from southern California to British Columbia along US Highway 395, my first long haul expedition, but as it was in 1985. Buckle up while we crack the program's runtime library, extract its database, and (working code included) dive deeply into the quickest ways to go from A to B using a contemporary home computer. Although this article assumes a little bit of familiarity with the United States highway system, I'll provide a 30-second version. The top-tier national highway network is the 1956 Eisenhower Interstate System (abbreviated I-, such as I-95), named for President Dwight D. Eisenhower who promulgated it, and signed with red, white and blue shields. Nearly all of its alignments, which is to say the physical roads composing it, are grade-separated full freeway. It has come to eclipse the 1926 United States Numbered Highway System (abbreviated US, such as US 395), a nationally-numbered grid system of highways maintained by the states, albeit frequently with federal funding. Signed using a horned white shield, these roads vary from two-lane highway all the way to full freeway, and may be multiplexed (i.e., multiply signed) with other US highways or Interstates in many areas. While they are no longer the highest class of U.S. national road, they nevertheless remain very important for regional links, especially in those areas that Interstates don't serve. States and counties maintain their own locally allocated highway systems in parallel. Here is a glossary of these and other roadgeek terms. Geographic information systems (GIS) started appearing in the 1960s, after Waldo Tobler's 1959 "Automation and Cartography" paper about his experience with the military Semi-Automatic Ground Environment (SAGE) system. SAGE OA-1008 displays relied on map transparencies developed manually but printed with computers like the IBM 704. Initially such systems contained only geographic features like terrain and coastlines and specific points of interest, but support for highways as a layer or integral component was rapidly implemented for land use applications, and such support became part of most, if not all, mainframe and minicomputer GIS systems by the late 1970s. However, these systems generally only handled highways as one of many resource or entity types; rarely were there specific means of using them for navigational purposes. The first practical automotive navigation system was 1981's Honda Electro-Gyrocator. Because Global Positioning System (GPS) satellite data was not then available for civilian applications, and other radio navigational systems like LORAN were hard to reliably receive in canyons or tunnels, it relied on its own internal gas gyroscope to detect rotation and movement, aided by a servo in the car's transmission. The Electro-Gyrocator used a Texas Instruments TMS9980 (a derivative of the 16-bit TMS9900 in the TI-99/4A but with an 8-bit data bus) as its CPU and a sidecar TMS9901 for I/O.
It had 10K of ROM, 1K of SRAM and 16K of DRAM, hopeless for storing map data of any consequence, so the actual maps were transparencies too; the 9980 integrated sensor data provided by the 9901 from the gyroscope and servo to plot the car's course on a small 6" CRT behind the map overlay. The user was expected to set the starting location on the map before driving, and there was likewise no provision for routing. It was only made available that year for ¥300,000 (about US$2900 in 2025 dollars at current exchange rates) on the JDM Honda Accord and Honda Vigor, and discontinued by 1982. There were also a few early roadgeek-oriented computer and console games, which I distinguish from more typical circuit or cross-country racers by an attempt to base them on real (not fictional) roads with actual geography. One I remember vividly was Imagic's Truckin', a Mattel Intellivision-exclusive game from 1983 which we played on our Tandyvision One. It predates Juggernaut for the ZX Spectrum by at least two years, and Juggernaut uses a completely fictitious game world instead. Besides very terse prompts and a heavily compressed internal routing table, the game makes all highway numbers unique and has no multiplexes or three-digit auxiliary Interstates, and while you can drive into Canada you can't drive into Mexico (admittedly it was pre-NAFTA). Additionally, for gameplay reasons every highway junction is a named "city," introducing irregularities like Interstate 70 ending in Provo, UT when it really ends about 130 miles south, or Interstate 15 ending in El Cajon, CA, and many cities and various two-digit primary Interstates are simply not included (e.g., I-12, I-19, I-30, I-85, I-93, etc.). As a result of these constraints, among other inaccuracies, Interstate 90 terminates in Madison, WI at I-94 (not Boston), I-94 terminates in Milwaukee at I-55 (not Port Huron, MI), and I-40 is extended west from its true terminus in Barstow, CA along real-world California State Highway 58 to intersect I-5 "in Bakersfield" (real Interstate 5 is around twenty miles away). Still, it contains an extensive network of real highways with their lengths and control cities, managing it all on an early console platform with limited memory, while simultaneously supporting full 1983-level gameplay. The 8-bit home computer was ascendant during the same time, and at least one company perceived a market for computer-computed routes for vacations and business trips. I can't find any references to an earlier software package of this kind for this class of computer, at least in the United States, so we'll call it the first. A listing in The Software Writer's Marketplace from 1984 gives a contact name for Columbia Software. If this is the same person, he later appears in a 2001 document as the Ground and Mission Systems Manager at the NASA Goddard Space Flight Center in Greenbelt, Maryland, about a half hour's drive away. Columbia Software does not appear in any magazines prior to 1982, nor does it appear after 1985, and no business record under that name is known to either the state of Maryland or the state of Delaware. The disk contains files named T%(), N%() and B%(), which sound like Applesoft BASIC integer arrays. (The file MPH just contains the MPG and MPH targets in text.) In fact, there are actually two compiled programs present, ROADSEARCH (the main executable) and DBA (the distance editor), and using these "variable" files allows both programs to keep the memory image of the arrays, no doubt the representation of its map database, consistent between them.
You can start either compiled program by BRUNning them, as HELLO does. Exactly which compiler was used we can also make a fairly educated guess about: there were only a few practical choices in 1981-82 when it was probably written, and most of them we can immediately eliminate due to what they (don't) support or what they (don't) generate. For example, the program being almost certainly Applesoft BASIC obviously eliminates any Integer BASIC compiler. It also can't be On-Line Systems' Expediter II, because that generates "Applesoft BASIC" programs with a single CALL statement instead of binaries that are BRUN, and it probably isn't Southwestern Data Systems' Speed Star, because of the apparent length of the program and that particular compiler's limited capacity. That leaves the two major compilers of this era: Microsoft's TASC ("The Applesoft Compiler"), which was famously compiled with itself, and Hayden Book Company's Applesoft Compiler. TASC uses a separate runtime that must be BLOADed before BRUNning the main executable, which HELLO doesn't do. Hayden's compiler is thus the most likely tool, and this theory is buoyed by the fact that this compiler does support variable sharing between modules. If we run strings on the DOS 3.3 .dsk image, and account for the fact that the disk interleave will not necessarily put lines in sequential order, we can pick out the strings of the program as well as a number of DOS 3.3 file manager commands, sometimes in pieces, such as B%(),A30733,L5448, which is probably part of a BSAVE command, or BLOAD B%(),A30733. The disk also has examples of both kinds of DOS 3.3 text files, both sequentially accessed (such as OPEN T%(), READ T%() and CLOSE T%()) but also the less commonly encountered random access (with explicitly specified record numbers and lengths such as OPEN ROADS,L12 and OPEN ROADMAP CITIES,L19, then READ ROADS,R and READ ROADMAP CITIES,R, which would be followed by the record number). For these random access files, given a record length and record number, the DOS track-and-sector list is walked to where that record would be, and only the necessary sector(s) are read to construct and return the record. We can see the contents with a quick Perl one-liner to strip off the high bit, feeding the result to strings; a sketch of the idea follows this paragraph. Again note that the order is affected by the disk interleave, but the file is stored alphabetically (we'll extract this file properly in a moment). Another interesting string I found this way was "TIABSRAB WS AIBMULOC", which is COLUMBIA SW BARSBAIT backwards. Perhaps someone can explain this reference. In a hex editor the city database shows a regularly repeating 19-byte record format for the names. Remember that the characters are stored with the high bit set. This is not a very efficient means of storage, especially considering DOS 3.3 only had 124K free per disk side (after DOS itself and filesystem overhead), but it would be a lot easier for an Applesoft BASIC program to handle, since the record lookup work could be shunted off to DOS 3.3 and performed quickly. Also, while you can list the cities and junctions from the menu, they are not indexed by state, only by first letter. To get a usable copy, I printed ROADMAP CITIES to the emulator's virtual printer. We know its record length because it was stored as a string in the compiled BASIC text we scanned previously. That, in turn, gives us a text file we can read on the Mac side. It is, as expected, 406 lines long. The same process gets us the ROADS list, which is 171 lines long.
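Something along these lines works; the filename is hypothetical and the exact one-liner is my sketch, not necessarily the original:

    perl -pe 'tr/\x80-\xff/\x00-\x7f/' roadsearch.dsk | strings -n 6

The tr maps every high-bit character down to its 7-bit ASCII equivalent, after which strings can pick out the readable text.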
Keep in mind that's just a list of the names of the roads; it does not contain information on the actual highway segments between points. Dumping the T%() file reveals it to be an index, where for each letter index 1-26, the value is the first record number for all cities starting with that letter. As we saw previously, the cities are already sorted on disk to facilitate this. There are no cities starting with Z, so that letter just goes to the end of the file (non-existent record 407) and terminates the index. That still leaves B%() and N%(), but we'll solve that problem a little later on. Roadsearch was subsequently ported to the Commodore 64, with a 1984 edition and a 1985 edition that added more cities and updated routing information. However, this release appears to be specific to the Commodore port — if there were a 1985 map update for the Apple II, it has yet to surface — and I can find no subsequent release of Roadsearch after 1985 on any platform. Both C64 versions came in the same Roadsearch and Roadsearch-Plus variants for the same prices, so the iteration we'll explore here is Roadsearch-Plus with the presumed final 1985 database. For purposes of emulation we'll use VICE's SuperCPU spin with the 65816 enabled, as some number crunching is involved, but I also tested it on my real Commodore SX-64 and 128DCR for veracity. (Running xscpu64 in warp mode will absolutely tear through any routing task, but I strongly recommend setting location 650 to 64 in the VICE monitor to disable key repeat.) It's worth noting that the various circulating 1541 disk images of the 1984 and 1985 editions were modified to add entries by their previous owners, though we'll point out how these can be detected and reversed. For much of this article I'll be running a 1985 disk that I manually cleared in a hex editor to what I believe was its original contents. The disk contains the main program (ROADSEARCH+), the editor (IDBA) and two RELative files for the CITIES and ROADS. RELative files are functionally equivalent to DOS 3.3 random access files, being a record-based file format with a fixed size and an index (made up of "side sectors"). They are an uncommon sight in commercial software due to their idiosyncrasies and a few outright bugs, and they don't work well for binary data or support variable record sizes, which is why Berkeley Softworks came up with VLIR for GEOS instead. On the other hand, they do just fine for text strings, and the lookup can be very fast. Dumping the raw sectors of the cities file from the .d64 disk image, the directory entry indicates a file with 31-byte records, the first side sector at track 19 sector 10, and the first data sector at track 19 sector 0. Other than the obvious typo in Abilene, TX, it is the same basic format as the Apple version and also sorted, ending each string with a carriage return. As a method of what I assume prevents trivially dumping its contents, a naive read of each record won't yield anything useful, because every record starts with an $ff and a whole bunch of nulls, which Commodore DOS interprets as the end. The actual string doesn't start until offset 12. The same basic idea holds for the roads file: the same offset trick is used, but here the records are 24 bytes, since route names are shorter. Again, this isn't a particularly efficient storage mechanism, but we have over 165K available on a formatted disk, and RELative file access to any arbitrary record is quite quick. Despite the presence of side sectors, the actual records of a RELative file are still sequentially stored on disk with the usual forward track and sector pointers. As such, we don't need to grab the side sectors to simply extract its contents.
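As an aside, that first-letter index is trivial to rebuild from the extracted list, which makes a nice check on our understanding. A sketch (mine; cities.txt is assumed to be the alphabetized city list we extracted, one name per line):

    #!/usr/bin/perl -w
    # Rebuild a T%()-style first-letter index: for each letter A-Z, the
    # 1-based record number of the first city starting with that letter.
    # Letters with no cities point past their range (Z ends up pointing
    # one past the last record, matching the behavior described above).
    use strict;

    open(my $fh, '<', 'cities.txt') or die "cities.txt: $!";
    chomp(my @cities = <$fh>);
    close($fh);

    my @index = (0) x 26;
    for my $rec (1 .. scalar(@cities)) {
        my $l = ord(uc(substr($cities[$rec - 1], 0, 1))) - ord('A');
        next if $l < 0 || $l > 25;
        $index[$l] ||= $rec;            # keep only the first occurrence
    }
    my $next = scalar(@cities) + 1;     # one past the final record
    for my $l (reverse 0 .. 25) {       # fill empty letters backwards
        $index[$l] = $next unless $index[$l];
        $next = $index[$l];
    }
    # cities for a letter then live in records index[l] .. index[l+1]-1
    printf("%s => %d\n", chr($_ + ord('A')), $index[$_]) for 0 .. 25;

Run against the 406-line Apple list, the Z entry should come out as 407, just as in the file.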
For some period of time the c1541 tool from the VICE suite would not copy REL files, and this was only recently fixed, so I threw together a Perl script to iterate over a D64 disk image and transfer a file to standard output, either by name or, if you specify them, from a starting track and sector; a sketch of the approach appears after this paragraph. Because this yanks the files "raw," we then need to strip them down: feed the extracted REL records through a filter that drops each record's prefix, and you get a text file. You'll notice that both the cities and roads lists are, or at least start out, sorted. All C64 Roadsearch images I've seen circulating so far have been altered by their previous owners to add their own local roads and cities, and their changes can be easily distinguished at the point where the lists abruptly go out of alphabetical order. There is also a pair of SEQuential files, one named MPH, serving the same function of storing the preferred speed and fuel economy, and a second one simply named M. This file contains four ASCII numbers, one obviously recognizable as our serial number, though this is only one of the two places the program checks. The others are the number of cities (406, or 487 in the 1985 version), the number of roads (215 in 1985), and a third we don't yet know the purpose of. You can confirm the first two by checking them against the number of lines in the files you extracted. What we don't see are files for the arrays we spotted on the Apple disk. The only file left big enough to account for those is MATS. To figure out how that works, we should start digging into the program. The first clue is RTL-64, a runtime library and the telltale sign of a program compiled with DTL BASIC. DTL BASIC, formally DTL-BASIC 64 Jetpack, became extremely popular with developers not just due to its good performance and compatibility, but also because it required no royalties on programs compiled with it as long as credit was given. An optional "protector" version can obfuscate the program and/or require a hardware dongle, though this version is rarer due to its expense (and fortunately was not used here). The runtime library slots into the RAM under the BASIC ROM, so there is no obvious loss of free memory. DTL stood for "Drive Technology Ltd."; the compiler was written by David Hughes in the UK, first for the PET, and is notable for being compiled with itself, like Microsoft TASC, and for using the same "RTL" runtime library and protection system (and, obnoxiously, a dongle) as the object code it generates. The 64 tape version is substantially less capable than the disk one. DTL BASIC compiles to its own bespoke P-code which is executed by the RTL. It achieves its speed through a greater degree of precalculation and preparsing (e.g., pre-resolving values and line numbers, removal of comments, etc.), a custom garbage collection routine, and also, where possible, the use of true signed 16-bit integer math. This is a substantial speed-up over most Microsoft-derived BASICs, in which Microsoft Binary Format floating point is the native system for all calculations, to the point where integer variables must first be converted to floating point, the computation performed, and then converted back. Ordinarily this double conversion would make integer variables useful only for smaller arrays in memory, because it makes them slower than regular floating point variables. However, DTL BASIC does perform true integer math without conversion, first for all variables explicitly declared as integer, and even autoconverting other variables at compile time with a directive (pragma).
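Here's the core of the idea as a compact sketch (the real script also parses the directory to find files by name; this one just takes an image filename plus the starting track and sector of the chain, which you can get from a directory dump):

    #!/usr/bin/perl -w
    # Walk a file's track/sector chain out of a 35-track D64 image and
    # write the raw contents to stdout. REL records are stored in the
    # same chained data sectors, so the side sectors aren't needed just
    # to read them in order. Usage: d64cat.pl image.d64 track sector
    use strict;

    my ($image, $track, $sector) = @ARGV;
    open(my $fh, '<:raw', $image) or die "$image: $!";

    sub offset {                # byte offset of a track/sector in a D64
        my ($t, $s) = @_;
        my $blocks = 0;
        foreach my $i (1 .. $t - 1) {
            $blocks += ($i <= 17) ? 21 : ($i <= 24) ? 19
                     : ($i <= 30) ? 18 : 17;
        }
        return ($blocks + $s) * 256;
    }

    while ($track) {
        seek($fh, offset($track, $sector), 0) or die "seek: $!";
        read($fh, my $block, 256) == 256 or die "short read";
        # bytes 0-1 chain to the next block; in the last block, byte 1
        # is instead the index of the final valid byte in this sector
        my ($nt, $ns) = unpack('CC', $block);
        print substr($block, 2, $nt ? 254 : $ns - 1);
        ($track, $sector) = ($nt, $ns);
    }

Stripping the records down is then a one-liner; this version assumes the layout described above (31-byte records for cities, 24 for roads, text starting at offset 12 and ending with a carriage return), and the filenames are mine:

    perl -0777 -ne 'print map { (split /\r/, substr($_, 12))[0], "\n" }
        unpack("(a31)*", $_)' cities.raw > cities.txt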
Although some private work on a decompiler reportedly exists, to the best of my knowledge it remains unfinished and unavailable. Interestingly, what I presume is the earlier 1984 disk image has some ghost directory entries, two each for the source BASIC programs and their "symbol tables" used by ERROR LOCATE for runtime errors: Sadly these source files were overwritten and cannot be resurrected, and even the ghost entries were purged on the second 1984 image. However, two aspects of the compiler make it possible to recover at least a portion of the original program text by hand.

The first is that not all statements of the original program are fully converted to P-code: in particular, READ/DATA, OPEN, INPUT/INPUT#, DIM, PRINT/PRINT# and possibly others are preserved nearly in their original BASIC form, including literal strings and, most usefully, variable names. For example, if we pull the compiled ROADSEARCH+ off the disk image and run it through strings, we see text not unlike a BASIC program: DTL-compiled programs always start with a SYS 2073 to jump into a small machine-code subroutine linked into the object. This section of code loads the RTL (the filename follows) and has other minor housekeeping functions, including one we'll explain shortly when we break into the program while it's running. It resides here so that it can bank BASIC ROM in or out as necessary without crashing the computer.

Following that is an incomplete machine language subroutine in a consolidated DATA statement. Disassembly of the fragment shows it's clearly intended to run from $c000, but at least part of it is missing. Of the portion we can see here, however, there are calls for what appears to be a Kernal load, so if we drop a breakpoint on $ffba in VICE (i.e., the Kernal SETLFS routine) we can run the 6502's call stack back to see the full routine and the filename (at $c0e1) it's trying to access: MATS. It loads it straight into memory in the middle of the available BASIC range. After that, now we see the arrays we saw in the Apple II version, but more importantly they're clearly part of DIM statements, so we can also see their dimensions. We already knew T%() was likely to be only 26 (27 counting the zero index) integers long from dumping the Apple version's contents, but the N%() array has up to 757 entries of five fields each, and B%() is even bigger, with 1081 records of four fields each. This is obviously where our map data is stored, and MATS is the file it seems to be loading to populate them.

This brings us to the second aspect of DTL Jetpack that helps to partially decipher the program: to facilitate some reuse of the BASIC ROM, the generated code still creates and maintains BASIC-compatible variables which we can locate in RAM. So that we know what we're dealing with, we need to figure out some way to stop the program while it's running to examine its state; the obvious candidate is the RUN/STOP key, but naturally the compiler offers a means to defeat this. Here is the startup stub from the object code, disassembled:

; warm start (2061)
.C:080d 20 52 08 JSR $0852
.C:0810 4C 2B A9 JMP $A92B

; not called?
.C:0813 20 52 08 JSR $0852
.C:0816 4C 11 A9 JMP $A911

; cold start (2073)
; load the RTL if flag not set, then start P-code execution
.C:0819 20 52 08 JSR $0852
.C:081c AD FF 02 LDA $02FF
.C:081f C9 64 CMP #$64
.C:0821 F0 17 BEQ $083A
.C:0823 A9 3F LDA #$3F
.C:0825 85 BB STA $BB
.C:0827 A9 08 LDA #$08
.C:0829 85 BC STA $BC
.C:082b A9 06 LDA #$06
.C:082d 85 B7 STA $B7
.C:082f A9 00 LDA #$00
.C:0831 85 B9 STA $B9
.C:0833 A2 00 LDX #$00
.C:0835 A0 A0 LDY #$A0
.C:0837 20 D5 FF JSR $FFD5
.C:083a 68 PLA
.C:083b 68 PLA
.C:083c 4C 48 A0 JMP $A048
.C:083f .asc "RTL-64"

; seems to get called on a crash or error
.C:0845 20 52 08 JSR $0852
.C:0848 20 2F A0 JSR $A02F ; jumps to $a948
.C:084b 20 5F 08 JSR $085F
.C:084e 60 RTS

; dummy STOP routine
.C:084f A2 01 LDX #$01
.C:0851 60 RTS

; bank out BASIC ROM
.C:0852 A9 03 LDA #$03
.C:0854 05 00 ORA $00
.C:0856 85 00 STA $00
.C:0858 A9 FE LDA #$FE
.C:085a 25 01 AND $01
.C:085c 85 01 STA $01
.C:085e 60 RTS

; bank in BASIC ROM
.C:085f 48 PHA
.C:0860 A9 03 LDA #$03
.C:0862 05 00 ORA $00
.C:0864 85 00 STA $00
.C:0866 A5 01 LDA $01
.C:0868 09 01 ORA #$01
.C:086a 85 01 STA $01
.C:086c 68 PLA
.C:086d 60 RTS

; execute next statement (using BASIC ROM)
.C:086e 20 5F 08 JSR $085F
.C:0871 20 ED A7 JSR $A7ED
.C:0874 20 52 08 JSR $0852
.C:0877 60 RTS

The compiler provides DS and ES directives, which disable and enable RUN/STOP respectively. If we scan the RTL for code that modifies $0328 and $0329, where the STOP routine is vectored, we find this segment: $f6ed is the normal value for the STOP vector, so we can assume that the routine at $b7c9 (which calls the routine at $b7da to set it) enables RUN/STOP, and thus the routine at $b7e5 disables it. Both routines twiddle a byte at $a822, part of this section: By default the byte at $a822 is $ea, a 6502 NOP. This falls through to checking $91 for the state of the STOP key at the last time the keyboard matrix was scanned, and branching accordingly. When the STOP routine is revectored, the byte at $a822 is changed at the same time to $60 RTS so that the STOP key check is never performed. (The RTL picks up some additional speed here by only doing this check on NEXT and IF statements even when the check is enabled.) The simplest way to deal with this is to alter RTL-64 with a hex editor and turn everything from $b7e5 to $b7ec inclusive into NOPs. This turns the DS directive into a no-op as well, and now we can break out of the program by mashing RUN/STOP, though we'll do this after MATS is loaded. Parenthetically, CONT won't resume the compiled program; use SYS 2061 (the warm start) rather than SYS 2073.

We'll look first at T%(), since we've assumed it has the same purpose in the Commodore port, and it does (once again there are no cities starting with Z, so the final letter points to non-existent record 488, beyond Yuma, AZ as record 487). We then use the BASIC routine at $b08b (SYS 45195) to look up the address of the first variable in the array (note there is no comma between the SYS and the variable reference), which is deposited as a 16-bit address in location $47 (71). In this case the pointer is to $6b8a (27530), the actual zeroth data element, so if we rewind a few bytes we'll also get the array descriptor as well as its entire contents:

>C:6b83 d4 80 3d 00 01 00 1b 00 01 00 01 00 0f 00 2d 00
>C:6b93 52 00 60 00 69 00 7c 00 8b 00 97 00 ed 00 f6 00
>C:6ba3 fe 01 16 01 36 01 43 01 4e 01 62 01 63 01 70 01
>C:6bb3 a3 01 b6 01 c8 01 cf 01 e4 01 e4 01 e8

Note that the dimensions and values are stored big-endian: the dimensions themselves (a single big-endian short specified as 27, i.e.
26 plus the zeroth element), and then each value. In multidimensional arrays, each dimension is run out fully before moving to the next. We can easily pick out the values we saw for T%() in the above dump, but more importantly we now have a long, unambiguous 61-byte key we can search for. The entire sequence shows up in MATS, demonstrating that the file is in fact nothing more than an in-place memory dump of the program arrays' contents. Rather than manually dump the other two arrays from BASIC, we can simply walk MATS and pull the array values directly. This Perl script walks a memory dump of arrays (currently only two-dimensional integer arrays are implemented, because that's all we need for this project). It skips the starting address and then writes out tab-separated values into files named by the arrays it finds. Unlike their storage in memory, where values go (0,0), (1,0), (2,0) ... (0,1), (1,1) and so on, this pivots the arrays horizontally so you get (0,0), (0,1), (0,2), etc., grouped into lines as "rows."

B%() is the easiest to grok. Ignoring index 0, here is an extract from the file: B%() is referenced directly in one PRINT statement when displaying outgoing roads from a particular city: Let's take city record number 1, which is the misspelled (but only in the 1985 version) ABILINE TX (that is, Abilene, Texas). The develop-a-route feature will list all connected cities; for Abilene, there are five in the database, each PRINTed in turn. We know from the PRINTs above that B%(I,3) represents the mile length of the current connecting segment indexed by I (here 39), which would make B%() the array containing the connecting roads between cities and junctions. We also know it gets R$ and C$ from the RELative files ROADS and CITIES on each row. Helpfully, it tells us that FORT WORTH TX is city number 115 (it is), which we can see is B%(I,1). B%(I,2) is 1, which must be us, leaving B%(I,0) as the route number, which must be the record number for Interstate 20 in ROADS. If we grab line 39 from out-B% (the value of index I), we do indeed see these same values. However, we can now do the search ourselves by just hunting for any record containing city 1 in columns 1 or 2 of our dumped array file (^I is a TAB character): Or, to show the correspondence more plainly: That takes care of B%().

N%(), on the other hand, is a little harder to crack. Other than its apparent DIM statement at the beginning (as shown above), it is never displayed directly to the user and never appears again in plaintext in the compiled object. Here are the first few entries of the array: The first record (index 0) is the second place where the serial number is stored, though only the map editor references it. Not counting index 0, the file has exactly one record for every city or junction, so it must correspond to them somehow. Otherwise, column 3 (except for index 0) is always the unusual value 32,766, near the positive maximum for a 16-bit integer variable, and columns 4 and 5 are always zero (note from the future: this is not always true for routes which are added in the editor). Additionally, columns 1 and 2 have an odd statistical distribution, where the first is always between 4499 and 8915 and the second between 11839 and 21522. There are no negative values anywhere in the file, and the first three columns are never zero (other than index 0). Whatever they are, they are certainly not random. The meaning of the values in this array managed to successfully evade my understanding for a while, so in the meantime I turned to figuring out how Roadsearch does its routing.
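As for that MATS walker, its heart can be sketched briefly. This is an illustrative reconstruction rather than the script actually used, with the record layout inferred from the T%() dump above (in particular, I'm assuming the 2-byte field after the array name is a little-endian length covering the whole record):

#!/usr/bin/perl
# sketch: walk a MATS-style dump, assumed to be a 2-byte load address
# followed by back-to-back BASIC array records: 2-byte name (high bits
# set for integer arrays), 2-byte little-endian record length, 1-byte
# dimension count, a big-endian 2-byte size per dimension, then
# big-endian 2-byte signed values, first dimension varying fastest.
open(M, $ARGV[0] || "MATS") || die("MATS: $!\n");
binmode(M);
undef $/;
$data = <M>;
close(M);
$p = 2;                                 # skip the load address
while ($p + 7 <= length($data)) {
    $start = $p;
    ($n1, $n2, $len, $ndim) = unpack("CCvC", substr($data, $p, 5));
    $name = chr($n1 & 0x7f) . "%";      # $n2 would hold a second name character
    $p += 5;
    @dims = ();
    for (1 .. $ndim) {
        push(@dims, unpack("n", substr($data, $p, 2)));
        $p += 2;
    }
    print "$name(", join(",", map { $_ - 1 } @dims), "):\n";
    while ($p < $start + $len) {
        $v = unpack("n", substr($data, $p, 2));
        $v -= 65536 if ($v > 32767);    # sign-extend to a Perl integer
        print "$v\n";
        $p += 2;
    }
    $p = $start + $len;                 # next array record
}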
The theorists reading this have already internalized this part as an exercise in graph traversal, specifically finding the shortest path. Abstractly, the cities and junctions can be considered the nodes or vertices of a graph. These nodes are enumerated by name in CITIES, with additional, currently opaque, metadata in N%(). The nodes are then connected with edges, which are weighted by their mile length. These edges are the highway alignments listed with termini and length in B%() and their names in ROADS. The program appears to universally treat all edges as bi-directional, going both to and from a destination, which also makes the graph generally undirected. Because of the way the database (and indeed the nature of the American highway network) is constructed, all nodes are eventually reachable from any other node.

For a first analysis I presented Roadsearch with a drive I know well, having done it as part of longer trips many times in both directions: Bishop, California (city number 31) to Pendleton, Oregon (city number 338). This run can be traveled on a single highway number, namely US Highway 395, and it is also the shortest route. I have not indicated any roads that should be removed, so it will use everything in its database. The whole route took about five seconds to generate. No, warp mode was not on. Take note of the progress display showing miles traveled. This is the first milepoint that appears. If we compare the edges (highway alignments) directly leaving from Bishop and Pendleton, an edge with a weight (length) of 198 miles only shows up leaving Pendleton, suggesting the program works the routing backwards.

The classic solution to this class of problem is Edsger Dijkstra's algorithm, conceived in 1956 and published in 1959, an independent rediscovery of Vojtěch Jarník's 1930 minimum spanning tree algorithm that was also separately rediscovered and published by Robert C. Prim in 1957. In broad strokes, the algorithm works by building a tree out of the available nodes, putting them all into a queue. It keeps an array of costs for each node, initially set to an "infinite" value for all nodes except the first node to be examined, traditionally the start point. It then repeatedly iterates over the queue: of all current nodes in the queue the lowest cost node is pulled out (the first one pulled will therefore be the start point) and its edges are examined, selecting and marking any edge where the sum of the current node's cost and the cost of the edge (the mile length) is less than the current cost of the node it connects to (which, at first, will be "infinite"). The nodes connected by these marked edges take the current node as their parent, constructing the tree, and store the new lower cost value. Once the queue is empty, the algorithm halts and the tree is walked backwards from the target to the start point, accumulating the optimal route using the parent pointers.

This Perl script accepts two city numbers and will generate the optimal route between them from the Roadsearch database using Dijkstra's algorithm. It expects cities, roads (both converted to text, not the raw RELative files) and out-B% to be in the same directory. We build arrays of @cities and @roads, and turn B%() into @edges and @a (for arcs) for expedience. We then walk down the nodes and build the tree in @s, noting which arc/edge was used in @prou so that we can look it up later, and then at the end walk the parent pointers back and do all the dereferencing to generate a human-readable route. As a check we also dump @s, which indexes @edges.
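That script isn't reproduced here, but since it differs from the A-star listing below mainly in its queue handling, the core loop is worth a sketch (illustrative only, using the same adjacency structure the later listing builds from out-B%):

# sketch of the Dijkstra core as described above, assuming @a holds
# adjacency lists of [ road, other city, miles, edge ] entries and
# 99999 serves as "infinity," as in the A-star listing below.
@queue = (1 .. scalar(@cities) - 1);    # every node starts in the queue
@dist = (99999) x scalar(@cities);
$dist[$v1] = 0;
while (scalar(@queue)) {
    # pull the cheapest node still in the queue; this linear scan is
    # the part that would be expensive on a 6502
    @queue = sort { $dist[$a] <=> $dist[$b] } @queue;
    $current = shift(@queue);
    foreach $n (@{ $a[$current] }) {
        $ni = $n->[1];
        if ($dist[$current] + $n->[2] < $dist[$ni]) {
            $dist[$ni] = $dist[$current] + $n->[2]; # cheaper path found
            $parent[$ni] = $current;                # grow the tree
            $routefrom[$ni] = $n->[3];              # remember the edge used
        }
    }
}
# then walk $parent[] back from $v2 to $v1 to recover the route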
Here's what it computes for Bishop to Pendleton, forward and reverse: This is the same answer, and Dijkstra's algorithm further gives us a clue about one of the columns in N%(): the one which is invariably 32766. We already know from the plaintext portion of the compiled program that there are no other arrays, so the routing must be built in place in the arrays that are present. We need a starting "infinite" value for each node within the range of a 16-bit signed integer, and no optimal path in continental North America would ever exceed that mileage, so this is the "infinite" value used to build the route into N%(). (It gets reset by reloading MATS if you select the option to start over after route generation.)

That said, it's almost certain that the program is not using an implementation of Dijkstra's algorithm. The queue starts out with all (in this case 487) nodes in it, making finding the lowest cost node in the queue on each iteration quite expensive on a little ~1.02MHz 6502. The usual solution in later implementations is a min-priority queue to speed up the search, but we already know there are no other arrays in memory to construct a faster heap or tree from, so any efficiency gained would likely be unimpressive. Furthermore, it's not clear the author would have known about this approach (the earliest work on it dates to the late 1970s, such as Johnson [1977]), and even if he did, it still doesn't explain the meaning of the first two columns in N%(), which aren't even used by this algorithm. After all, they're not there for nothing.

The answer lies with the Graph Traverser, an earlier search program that used an evaluation function E(x) specific to the problem being examined to walk from a source to a target node, terminating when the target was reached. In the process it created a set of partial trees from the source to the target, from which the lowest cumulative value of E(x) would ultimately be selected, starting with individual nodes where E(x) to some next immediate destination was locally smaller. Additionally, since it was now no longer required to construct a complete tree touching every possible node, the initial set of nodes considered could simply be the starting point. How the evaluation function was to be implemented was not specified, but a simple straight-line distance to the goal node would have been one logical means. Raphael instead suggested that the function to be minimized be the sum of the distance travelled so far and the evaluation function, which was reworked as a heuristic. The heuristic thus provided a hint to the algorithm, ideally attracting it to the solution sooner by considering a smaller set of nodes. If the heuristic function was properly admissible (what Hart defined as never overstating the actual cost to get to the target), then this new algorithm could be proven to always find the lowest-cost path and greatly reduce comparisons if the solution converged quickly. Considering Graph Traverser as algorithm "A," this more informed, goal-directed algorithm was dubbed "A*" (say it "A-star").

If Roadsearch really is using A-star, at this point we don't know what the heuristic function is. (Note from the future: stay tuned.) Fortunately, a heuristic function that returns zero for all values is absolutely admissible, because it will never overstate the actual cost.
Although as a degenerate case this effectively becomes Dijkstra's algorithm, we'll still observe the runtime benefit of not having an initial queue potentially containing all possible nodes, and of being able to terminate early when the target is reached (which is always possible with Roadsearch's database). To make the number of nodes more manageable for study, since we will also want to observe how the program behaves for comparison, we'll now consider a smaller routing problem that will be easier to reason about.

#!/usr/bin/perl

die("usage: $0 point_no_1 point_no_2\n") if (scalar(@ARGV) != 2);
$p1 = $ARGV[0];
$p2 = $ARGV[1];
die("wherever you go there you are\n") if ($p1 == $p2);

open(K, "cities") || open(K, "cities.txt") || die("cities: $!\n");
@cities = ( undef );
while(<K>) {
    $ln++;
    if ($ln==$p1) { $v1=$p1; print; }
    if ($ln==$p2) { $v2=$p2; print; }
    chomp;
    push(@cities, $_);

    # default gscore and fscore
    $gscore[$ln] = 99999;
    $fscore[$ln] = 99999;
}
die("both cities must be valid\n") if (!$v1 || !$v2);
close(K);

open(R, "roads") || open(R, "roads.txt") || die("roads: $!\n");
open(B, "out-B%") || die("out-B%: $!\n");
@roads = ( undef );
while(<R>) { chomp; push(@roads, $_); }
close(R);

$ee = 0;
while(<B>) {
    chomp;
    ($rn, $c1, $c2, $d) = split(/\t/, $_);
    $rn += 0; $c1 += 0; $c2 += 0; $d += 0;
    next if (!$d || !$c1 || !$c2 || !$rn);
    push(@edges, [ $rn, $c1, $c2, $d ]);
    push(@{ $a[$c1] }, [ $rn, $c2, $d, $ee ]);
    push(@{ $a[$c2] }, [ $rn, $c1, $d, $ee++ ]);
}
close(B);

@camefrom = ();
@openset = ( $v1 );
$gscore[$v1] = 0;
$fscore[$v1] = 0; # heuristic of distance is 0 for the start
while(scalar(@openset)) {
    @openset = sort { $fscore[$a] <=> $fscore[$b] } @openset;
    print join(", ", @openset), "\n";
    $current = shift(@openset);
    last if ($current == $v2);
    foreach $n (@{ $a[$current] }) {
        $ni = $n->[1];
        $tgscore = $gscore[$current] + $n->[2];
        if ($tgscore < $gscore[$ni]) {
            $camefrom[$ni] = $current;
            $routefrom[$ni] = $n->[3];
            $gscore[$ni] = $tgscore;
            $fscore[$ni] = $tgscore + 0; # "heuristic"
            unless (scalar(grep { $_ == $ni } @openset)) {
                push(@openset, $ni);
            }
        }
    }
}

@s = ( );
while(defined($camefrom[$current])) {
    $route = $routefrom[$current];
    $current = $camefrom[$current];
    unshift(@s, $route);
}
print join(' - ', @s), "\n";
$miles = 0;
foreach(@s) {
    print $roads[$edges[$_]->[0]], "($edges[$_]->[0]) - ",
        $cities[$edges[$_]->[2]], "($edges[$_]->[2]) - ",
        $cities[$edges[$_]->[1]], "($edges[$_]->[1]) ",
        $edges[$_]->[3], " miles\n";
    $miles += $edges[$_]->[3];
}
print "total $miles miles\n";

The script tracks two scores per node, f(x) (realized here as @fscore) and g(x) (@gscore). The G-score for a given node is the currently known cost of the cheapest path from the start to that node, which we build from the mile length of each edge. The node's F-score is its G-score plus the value of the heuristic function for that node, representing our best guess as to how cheap the overall path could be if the path from start to finish goes through it. In this case, the F-score and G-score will be identical, because the heuristic function in this implementation always equals zero. Also, because we're interested in knowing how many fewer nodes we've considered, we dump the open set on every iteration.
% route-dij 375 311
NEEDLES CA
SAN BERNARDINO CA
237 - 236 - 63 - 719
I 15 - SAN BERNARDINO CA - US395/I15 CA 27 miles
I 15 - US395/I15 CA - BARSTOW CA 44 miles
I 40 - BARSTOW CA - I40/US95 CA 134 miles
I 40 - I40/US95 CA - NEEDLES CA 12 miles
total 217 miles

% route-astar 375 311
NEEDLES CA
SAN BERNARDINO CA
375
443, 335, 274, 376
335, 274, 442, 18, 376
274, 442, 18, 376, 34
442, 18, 376, 171, 380, 34
18, 376, 171, 380, 15, 34, 31
376, 171, 380, 15, 34, 169, 262, 31
424, 171, 380, 15, 34, 169, 262, 31, 487
171, 380, 15, 34, 169, 262, 31, 487
380, 15, 34, 169, 275, 262, 31, 487
15, 34, 169, 275, 262, 31, 379, 487
34, 45, 169, 275, 262, 31, 379, 487
45, 169, 275, 262, 31, 379, 311, 487, 342
169, 275, 262, 31, 379, 311, 119, 487, 342
275, 311, 262, 31, 379, 119, 487, 342
311, 262, 31, 379, 336, 119, 487, 170, 342
237 - 236 - 63 - 719
I 15 (31) - SAN BERNARDINO CA (375) - US395/I15 CA (443) 27 miles
I 15 (31) - US395/I15 CA (443) - BARSTOW CA (18) 44 miles
I 40 (60) - BARSTOW CA (18) - I40/US95 CA (169) 134 miles
I 40 (60) - I40/US95 CA (169) - NEEDLES CA (311) 12 miles
total 217 miles

% route-astar 311 375
NEEDLES CA
SAN BERNARDINO CA
311
169, 251, 34
251, 34, 262, 18
168, 34, 262, 18
34, 262, 18, 475, 110
262, 18, 475, 110, 335, 342
454, 18, 475, 110, 335, 342, 427
18, 475, 110, 335, 342, 53, 427, 446
442, 443, 475, 110, 335, 342, 53, 427, 446
443, 475, 110, 335, 342, 15, 53, 427, 31, 446
475, 110, 375, 335, 342, 15, 53, 427, 31, 446
110, 375, 335, 342, 15, 53, 427, 31, 446
375, 335, 342, 15, 53, 427, 31, 446, 124, 247
719 - 63 - 236 - 237
I 40 (60) - I40/US95 CA (169) - NEEDLES CA (311) 12 miles
I 40 (60) - BARSTOW CA (18) - I40/US95 CA (169) 134 miles
I 15 (31) - US395/I15 CA (443) - BARSTOW CA (18) 44 miles
I 15 (31) - SAN BERNARDINO CA (375) - US395/I15 CA (443) 27 miles
total 217 miles

To watch the program build the route into N%(), we'll instruct VICE to trap each integer array write to the range covered by MATS. (Trapping reads would also be handy, but we'd go mad with the amount of data that generates.) We can get the value being written from $64/$65 (using the BASIC #1 floating point accumulator as temporary space) based on this code in the RTL: On each store we'll duly log the value in $64/$65 (remember it's big-endian) and the address it's being stored to. I wrote a one-off script to turn this string of writes into array offsets so we can understand how they relate to N%(), and then look them up in the tables so that we know which node and edge is under consideration. Remember, Roadsearch works this problem backwards, starting with Needles. From the above you can see where the program marks the order of each node and the accumulated mileage. Our running totals, most likely the F-score and G-score, for a given node x are in N%(x,1) and N%(x,2), the length of the candidate edge is in N%(x,1), the optimal edge for the node is in N%(x,4), and the iteration it was marked in is recorded in N%(x,3). (We count an iteration as any loop in which a candidate node is marked, which in this simple example will occur on every run through the open set.) N%(x,0) also looks like a distance, but it doesn't correlate to a highway distance. To construct the itinerary at the end, it starts with San Bernardino and then repeatedly walks the selected edge to the next node until it reaches Needles.
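For those following along, the trap itself can be set with VICE monitor watchpoints, something along these lines; the address range here is only a placeholder for wherever the arrays actually sit in RAM on your image, and the attached command just dumps the value being stored from $64/$65:

watch store 6b83 9fff
command 1 "m 0064 0065"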
It's painfully obvious that compared to our models Roadsearch is considering a much smaller number of nodes (18, 34, 169, 251, 262, 311, 375, 442 and 443, counting the five in the optimal solution), and it duly converges on the optimal routing in just four iterations compared to twelve in our best case. I did look at its read pattern and found that N%(x,0) and N%(x,1) lit up a lot. These values are clearly important to computing whatever heuristic it's using, so I pulled out a few from across North America. I stared at them for a while until it dawned on me what the numbers are. Do you see what I saw? Here, let me plot these locations on a Mercator projection for you (using the United States' territorial boundaries): They're coordinates. The first column increases going north, and the second column increases going west. Indeed, in the very paper written by Hart et al., they suggest that the straight-line distance from the current node to the goal would make a dandy heuristic, and now we can compute it!

The thing we have to watch for here is that the scales of the mileage and the coordinates are not identical, and if we use a simple distance calculation we'll end up clobbering the cumulative mileage with it, which would not only make it non-admissible as a heuristic but also give us the wrong answer. To avoid that, we'll compute a fudge factor to yield miles from "map units" and keep the heuristic function at the same scale. Let's take San Diego, California to Bangor, Maine as a reference standard, for which computing the geodesic distance using WGS 84 yields 2,696.83 miles from city centre to city centre as the crow flies. If we compute the straight-line distance between them using the coordinates above (the square root of the sum of the squared coordinate differences), we get 8703.63 "map units," a ratio of 3.23:1. To wit, we now have implemented an h(x) that for a given node x returns its straight-line distance from the target node, scaled down by that ratio (sketched below). Let's try it out.

Remembering that Roadsearch works it backwards, we converge on the same solution examining the same nodes in the same number of iterations (four). Further proof comes from dumping the program's array writes for the opposite direction: this routing proceeds in six iterations, just as ours does, and once again the nodes we end up considering in our new A-star model are the same. It also explains N%(x,0): this is the straight-line distance and thus our heuristic, calculated (it's backwards) as the distance to the "start." For example, Palm Springs is indeed roughly 135 miles from Needles as the crow flies, again depending on your exact termini, whereas the western US 95/Interstate 40 junction is only about 11 miles away.

It should also be obvious that this "fudge" divisor has a direct effect on the efficiency of the routine. While we're purportedly using it as a means to scale down the heuristic, doing so is actually just a backhanded way of deciding how strongly we want the heuristic weighted. However, we can't really appreciate its magnitude in a problem space this small, so now we'll throw it a big one: drive from San Diego (376) to Bangor (17). (I did myself drive from Bangor to San Diego in 2006, but via Georgia to visit relatives.) This route requires a lot more computation and will also generate multiple useless cycles in which no node is sufficiently profitable, so I added code to our heuristic A-star router to explicitly count iterations only when a candidate node is marked. With our computed initial fudge factor of 3.23, we get this (again, worked backwards): And now for the real thing, which is no less quick.
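As promised, here is roughly what that h(x) looks like; a minimal sketch, assuming the two coordinate columns dumped from N%() have been loaded into hypothetical @north and @west arrays and that $v2 is the target node as in the earlier listing:

# sketch: straight-line distance in "map units" from node $x to the
# target $v2, divided by the fudge factor to bring it to roughly the
# same scale as the mileage. @north and @west are hypothetical arrays
# holding the two coordinate columns dumped from N%().
sub h {
    my ($x) = @_;
    my $dn = $north[$x] - $north[$v2];
    my $dw = $west[$x] - $west[$v2];
    return sqrt($dn * $dn + $dw * $dw) / 3.23;
}
# the zero heuristic in the main loop then becomes:
#   $fscore[$ni] = $tgscore + h($ni);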
Both our simulation and the real program agree on the optimal route, as demonstrated by the cumulative mileage. You might expect us to also agree on the iteration count recorded in N%(x,3). Interestingly, we do not: our simulation converged on the optimal route in 328 iterations, but the Commodore got it in 307. However, if we tune our simulation's fudge factor to 3.03, we also get it in 307.

Is there an optimal fudge divisor? We know the optimal route, so we'll run a whole bunch of simulations over the entire interval, rejecting ones that get the wrong answer and noting the iterations that were required for the right ones. In fact, we can do this generally for any routing by using the longhand Dijkstra method to get the correct answer and then running a whole bunch of tweaked A-stars compared with its routing after that. In our simulation, stepping the fudge divisor over the interval from 0 inclusive to 4 by 0.001 increments, I ran six long haul drives in total: San Diego (376) to Bangor (17) and back, Bellingham, WA (24) to Miami, FL (291) and back, and then San Francisco, CA (377) to Washington, DC (465) and back. I then plotted them all out as curves, along with the h(x)=0 heuristic from above, which is always accurate and the least goal-directed (and consequently requires the most iterations).

First off, there is no single fudge divisor that always yields the most optimal route for all of our test cases. Notice how much of the graph yields absolutely wrong answers (so nothing is plotted), and even in the interval between around 2.2 and 2.8 or so not all of the curves are valid. They all become valid around 3, which I enlarged in the inset on the left, and each one's iteration count slowly rises after that with the zero heuristic as more or less the upper asymptote. Except for the purple curve, however, 3 is not generally their lower bound. Second, there is no single fudge divisor that always corresponds to the program's iteration count, which is 311 (17 to 376), 307 (376 to 17), 183 (291 to 24), 264 (24 to 291), 261 (377 to 465) and 220 (465 to 377). With the value of 3.03 from above, however, our simulation generally does better than the real thing, which is both gratifying for the purposes of pathfinding and frustrating for the purposes of modeling.

Incidentally, asking the program to avoid a particular road appears simply to replace that edge's mile length (B%(x,3)) with a large value. That assures it will always have an insuperably high cost, yet be unlikely to overflow if it's involved in any calculations, and thus it will never be pulled from the open set for evaluation. The in-memory database is always reloaded from disk after the route is disposed of.

As our final stop in this article, nearly as long as some of the drives we've analysed, we'll do my first 2005 long haul expedition along US Highway 395. This is a useful way to show how to add your own roads and what the program does with that, since not all of its segments are in the database. If you first want to undo a previous owner's additions, you can simply restore the original counts in M, which will then cause the program to ignore any extraneous ones. Style points are added if you clear the records in the RELative files and turn them into nulls, though this isn't required. For the 1985 release, set M to 20 34 38 37 20 0D 20 37 37 35 20 0D 20 32 31 35 20 0D 20 39 39 39 39 20 0D, which is (in PETSCII) 487 cities, 775 road segments, 215 road names and serial number 9999, or as you like. (Note this will not change the serial number in the editor, but with this patch you're only using the main program in any case.) However, if you try to use a disk modified this way to add routes or cities, it will correctly recognize the new ones as new but erroneously find their old links in the arrays.
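Since those M bytes are just space-padded, CR-terminated digits, generating a fresh copy is trivial; a sketch (you would still need to copy the result back onto the disk image afterwards, e.g. with c1541):

#!/usr/bin/perl
# sketch: emit the 1985-era M file bytes quoted above, i.e. four
# space-padded, CR-terminated numbers (cities, road segments, road
# names, serial number).
open(M, ">M") || die("M: $!\n");
binmode(M);
print M " 487 \r 775 \r 215 \r 9999 \r";
close(M);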
That will not only get you unexpected residual road links to any new cities, but the links' presence can also confuse the program to such an extent that it ends up in an infinite loop. In this case you'll need to delete these hanging links by patching MATS to null out the entries in B%() and N%() referring to city numbers greater than 487 and road numbers greater than 215 (remember that there are three arrays in that file and that multi-dimensional arrays run out each dimension before going to the next, meaning you can't just truncate it and stuff nulls on the end). This is simple to understand in concept but a little tedious to do, so I have left it as an exercise for the reader. Should I come up with a clean MATS myself, I will put it up somewhere if there is interest; for now we'll do our work on a well-loved copy, which means the city numbers you see here may not necessarily be the ones you get if you're following along at home.

The old routing of US 395 proceeded north to the Columbia River gorge and intersected US 730 further west, then went east with it as before to US 12. This intermediate state is shown on this 1976 Donnelley atlas page, with the relevant portion highlighted. It was not until around 1986-7, when Interstate 82 was completed, that US 395 was moved to I-82 as its present-day routing, leaving only a tiny residual overlap with US 730 east of Umatilla, Oregon. The problem is that none of these locations have waypoints we can easily connect to. Notably, the program doesn't write M or MATS back to disk until this point, so that if the process gets aborted in the middle, the database remains internally consistent except for some harmless unreferenced RELative file records that can be overwritten later.

Recall that the program uses the coordinates in N%() as its heuristic, but we were never asked for anything like latitude or longitude. How, then, can it get this information for the new cities we just created? At this point I dumped the RELative files and the new MATS to find out, and it so happens that the new city gets the same coordinates as the one it was connected to: both Wallula Junction and Walla Walla get coordinates 8062 and 20802, which are Walla Walla's coordinates. As we are connecting a couple of new cities together along US 12, they also get the same coordinates, up to Yakima, which retains its own. If we had extended from both sides into the middle, the database would probably have ended up slightly more accurate, at the cost of some inconvenient dancing around. That would be a good approach if you needed to reconstruct or chop up a particularly long segment, for example. I had also made an earlier call-forward that cities added from the editor sometimes don't have columns 4 or 5 set to zero. I can see this for the entries the previous owner entered, but all of mine ended up zero, so I'm not sure under what conditions that occurred. It doesn't seem to matter to the program in any case.

The fact that the circulating .d64 disk images for this program all show new routes had been added suggests they got non-trivial usage from their owners, which really speaks to the strengths of the program and what it could accomplish. While it's hardly a sexy interface, it's functional and straightforward, and clearly a significant amount of time was spent getting the map data in and tuning up the routing algorithm so that it could perform decently on an entry-level home computer. Unfortunately, I suspect it didn't sell a lot of copies: there are few advertisements for Columbia Software in any magazine of the time and no evidence that the Commodore version sold any better than the Apple II release did.
That's a shame, because I think its technical achievements merited a better market reception then, and it certainly deserves historical recognition today. We take things like Google Maps almost for granted, but a program that could route you across a highway map on your 1985 Apple or Commodore truly was fascinating. As for me, I look forward to another long haul road trip in the future, maybe US 95 or US 101, or possibly reacquainting myself with US 6, which I drove in 2006 from Bishop to Cape Cod, Massachusetts. Regardless of where we end up going, though, this time we won't be taking the SX-64 with us: my wife has insisted on getting her seat back.
An affordable, AI-assisted, wearable robotic arm? That's not science fiction – it's RedSnapper, an open-source prosthesis built by PAC Tech, a team of high school students from Istituto Maria Immacolata in Gorgonzola, Italy. Powered entirely by Arduino boards, their project won the national "Robot Arm Makers" title at the 2025 RomeCup.
Introduction
The SR620
Repairing the SR620
Replacing the Backup 3V Lithium Battery
Switching to an External Reference Clock
Running Auto-Calibration
Oscilloscope Display Mode
References
Footnotes

Introduction

A little over a year ago, I found a Stanford Research Systems SR620 universal time interval counter at the Silicon Valley Electronics Flea Market. It had a big sticker "Passes Self-Test" and "Tested 3/9/24" (the day before the flea market) on it, so I took the gamble and spent an ungodly $400 [1] on it. Luckily, it did work fine, initially at least, but I soon discovered that it sometimes got into some weird behavior after pressing the power-on switch.

The SR620

The SR620 was designed sometime in the mid-1980s. Mine has a rev C PCB with a date of July 1988: 37 years old! The manual lists 1989, 2006, 2019 and 2025 revisions. I don't know if there were any major changes along the way, but I doubt it. It's still for sale on the SRS website, starting at $5150. The specifications are still pretty decent, especially for a hobbyist:

25 ps single shot time resolution
1.3 GHz frequency range
11-digit resolution over a 1 s measurement interval

The SR620 is not perfect; one notable issue is its thermal design. It simply doesn't have enough ventilation holes, the heat-generating power regulators are located close to the high precision time-to-analog converters, and the temperature sensor for the fan is inexplicably placed right next to the fan, which is not close at all to the power regulators. The Signal Path has an SR620 repair video that talks about this.

Repairing the SR620

You can see the power-on behavior in the video below: Of note is that lightly touching the power button changes the behavior and sometimes makes it get all the way through the power-on sequence. This made me hopeful that the switch itself was bad, something that should be easy to fix. Unlike my still broken SRS DG535, another flea market buy with the most cursed assembly, the SR620 is a dream to work on: 4 side screws is all it takes to remove the top of the case and have access to all the components from the top. Another 4 screws to remove the bottom panel and you have access to the solder side of the PCB. You can desolder components without lifting the PCB out of the enclosure.

Like my HP 5370A, the power switch of the SR620 selects between power-on and standby mode. The SR620 enables the 15V rail at all times to keep a local TCXO or OCXO warmed up. The power switch is located at the right of the front panel. It has 2 black and 2 red wires. When the unit is powered on, the 2 black wires and the 2 red wires are connected to each other. To make sure that the switch itself was the problem, I soldered the wires together to create a permanent connection: After this, the SR620 worked totally fine! Let's replace the switch.

Unscrew 4 more screws and pull the knobs off the 3 front potentiometers and power switch to get rid of the front panel: A handful of additional screws to remove the front PCB from the chassis, and you have access to the switch: The switch is an ITT Schadow NE15 T70. Unsurprisingly, these are not produced anymore, but you can still find them on eBay. I paid $7.50 + shipping; the price increased to $9.50 immediately after that. According to this EEVblog forum post, this switch on Digikey is a suitable replacement, but I didn't try it. The old switch (bottom) has 6 contact points vs only 4 on the new one (top), but that wasn't an issue since only 4 were used.
Both switches also have a metal screw plate, but they were oriented differently. However, you can easily reconfigure the screw plate by straightening 4 metal prongs. If you buy the new switch from Digikey and it doesn't come with the metal screw plate, you should be able to transplant the plate from the broken switch to the new one just the same. To get the switch through the narrow hole of the case, you need to cut off the pins on one side of the switch and bend the contact points a bit. After soldering the wires back in place, the SR620 powered on reliably. Switch replacement completed!

Replacing the Backup 3V Lithium Battery

The SR620 has a simple microcontroller system consisting of a Z8800 CPU, 64 KB of EPROM and 32 KB of SRAM. In addition to program data, the SRAM also contains calibration data and settings, kept alive by a backup 3V lithium battery; I replaced one such battery in my HP 3478A multimeter. These batteries last almost forever, but mine had a 1987 date code, and 38 years is really pushing things, so I replaced it with this new one from Digikey. The 1987 version of this battery had 1 pin on each side; on the new ones, the + side has 2 pins, so you need to cut one of those pins off and install the battery slightly crooked back onto the PCB.

When you first power up the SR620 after replacing the battery, you might see "Test Error 3" on the display. According to the manual: Test error 3 is usually "self-healing". The instrument settings will be returned to their default values and factory calibration data will be recalled from ROM. Test Error 3 will recur if the Lithium battery or RAM is defective. After power cycling the device again, the test error was gone and everything worked, but with a precision that was slightly lower than before: before the battery replacement, when feeding the 10 MHz output reference clock into channel A and measuring frequency with a 1 s gate time, I'd get a read-out of 10,000,000.000N Hz. In other words: around a milli-Hz accuracy. After the replacement, the accuracy was about an order of magnitude worse. That's just not acceptable! The reason for this loss in accuracy is that the auto-calibration parameters were lost. Luckily, this is easy to fix.

Switching to an External Reference Clock

My SR620 has the cheaper TCXO option, which gives frequency measurement results that are about one order of magnitude less accurate than using an external OCXO-based reference clock. So I always switch to an external reference clock. The SR620 doesn't do that automatically; you need to manually change it in the settings, as follows:

SET -> "ctrl cal out scn"
SEL -> "ctrl cal out scn"
SET -> "auto cal"
SET -> "cloc source int"
Scale Down arrow -> "cloc source rear"
SET -> "cloc Fr 10000000"
SET

If you have a 5 MHz reference clock, use the down or up arrow to switch between 10000000 and 5000000.

Running Auto-Calibration

You can rerun auto-calibration manually from the front panel, without opening up the device, with this sequence:

SET -> "ctrl cal out scn"
SEL -> "ctrl cal out scn"
SET -> "auto cal"
START

The auto-calibration takes around 2 minutes. Only run it once the device has been running for a while, to make sure all components have warmed up and are at a stable temperature. The manual recommends a 30 minute warm-up time. After doing auto-calibration, feeding the reference clock back into channel A and measuring frequency with a 1 s gate time gave me a result that oscillated around 10 MHz, with the mHz digits always 000 or 999 [2]. It's possible to fine-tune the SR620 beyond the auto-calibration settings.
One reason why one might want to do this is to correct for drift of the internal oscillator. To enable this kind of tuning, you need to move a jumper inside the case. The time-nuts email list has a couple of discussions about this; here is one such post. Page 69 of the SR620 manual has detailed calibration instructions.

Oscilloscope Display Mode

When the 16 7-segment LEDs on the front panel are just not enough, the SR620 has an interesting way of (ab)using an oscilloscope as a general display: it uses XY mode to paint the data. I had tried this mode in the past with my Siglent digital oscilloscope, but the result was unreadable: for this kind of rendering, having a CRT beam that lights up all the phosphor from one point to the next is a feature, not a bug. This time, I tried it with an old school analog oscilloscope [3]: (Click to enlarge) The result is much better on the analog scope, but still very hard to read. When you really need all the data you can get from the SR620, just use the GPIB or RS232 interface.

References

The Signal Path - TNP #41 - Stanford Research SR620 Universal Time Interval Counter Teardown, Repair & Experiments

Some calibration info about the SR620: Fast High Precision Set-up of SR 620 Counter. The rest of that page has a bunch of other interesting SR620-related comments.

Time-Nuts topics: The SR620 is mentioned in tons of threads on the time-nuts email list. Here are just a few interesting posts:

This post talks about some thermal design mistakes in the SR620, e.g. that the linear regulators and heat sink are placed right next to the TCXO. It also talks about the location of the thermistor inside the fan path, resulting in unstable behavior. This is something Shahriar of The Signal Path fixed by moving the thermistor.

This comment mentions that while the TCXO stays powered on in standby, the DAC that sets the control voltage does not, which results in an additional settling time after powering up. The general recommendation is to use an external 10 MHz clock reference.

This comment talks about the warm-up time needed depending on the desired accuracy. It also has some graphs.

Footnotes

1. This time, the gamble paid off, and the going rate of a good second-hand SR620 is quite a bit higher. But I don't think I'll ever do this again! ↩

2. In other words, when fed with the same 10 MHz as the reference clock, the display always shows a number that is either 10,000,000.000x or 9,999,999.99x. ↩

3. I find it amazing that this scope was calibrated as recently as April 2023. ↩