A long time ago I wrote about secret government telephone numbers, and before that, secret military telephone buttons. I suppose this is becoming a series. To be clear, the "secret" here is a joke, but more charitably I could say that it refers to obscurity rather than any real effort to keep them secret. Actually, today's examples really make this point: they're specifically intended to be well known, but are still pretty obscure in practice.

If you've been around for a while, you know how much I love telephone numbers. Here in North America, we have a system called the North American Numbering Plan (NANP) that has rigidly standardized telephone dialing practices since the middle of the 20th century. The US, Canada, and a number of Caribbean countries benefit from a very orderly system of area codes (more formally numbering plan areas or NPAs) followed by a subscriber number written in the format NXX-XXXX (this is a largely NANP-centric notation for describing phone number patterns: N represents the digits 2-9 and X any digit). All of these NANP numbers reside under the country code 1, allowing at least theoretically seamless international dialing within the NANP community. It's really a pretty elegant system.

NANP is the way it is for many reasons, but it mostly reflects technical requirements of the telephone exchanges of the 1940s. This is more thoroughly explained in the link above, but one of the goals of NANP is to ensure that step-by-step (SxS) exchanges can process phone numbers digit by digit as they are dialed. In other words, it needs to be possible to navigate the decision tree of telephone routing using only the digits dialed so far.

Readers with a computer science education might have some tidy way to describe this in terms of Chomsky or something, but I do not have a computer science education; I have an Information Technology education. That means I prefer flow charts to automata, and we can visualize a basic SxS exchange as a big tree. When you pick up your phone, you start at the root of the tree, and each digit dialed chooses the edge to follow. Eventually you get to a leaf that is hopefully someone's telephone, but at no point in the process does any node benefit from the context of the digits you dialed before, the digits you dial after, or how many total digits you dial.

This creates all kinds of practical constraints, and is the reason, for example, that we tend to write ten-digit phone numbers with a "1" before them. That requirement was in some ways long-lived (the last SxS exchange on the public telephone network was retired in 1999), and in other ways not so long lived... "common control" telephone exchanges, which did store the entire number in electromechanical memory before making a routing decision, were already in use by the time the NANP scheme was adopted. They just weren't universal, and a common nationwide numbering scheme had to be designed to accommodate the lowest common denominator.

This discussion so far is all applicable to the land-line telephone. There is a whole telephone network that is, these days, almost completely separate but interconnected: cellular phones. Early cellular phones (where "early" extends into CDMA and early GSM deployments) were much more closely attached to the "POTS" (Plain Old Telephone System). AT&T and Verizon both operated traditional telephone exchanges, for example 5ESS, that routed calls to and from their customers.
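To make the digit-by-digit constraint concrete before moving on, here is a minimal sketch in Python. It's my own illustration, with invented routing entries rather than anything from a real switch: the first function checks a complete dial string against the NANP pattern the way a stored-program system can, while the second walks a toy decision tree one digit at a time, the way a train of SxS selectors has to.

```python
import re

# The NANP pattern described above as a regex: N is any digit 2-9, X is 0-9.
# A full number is NPA + NXX + XXXX, optionally preceded by a leading 1.
NANP = re.compile(r"^1?([2-9]\d{2})([2-9]\d{2})(\d{4})$")

def parse_nanp(digits: str):
    """Split a dialed string into (NPA, exchange, line), or None if it doesn't fit."""
    m = NANP.match(digits)
    return m.groups() if m else None

# An SxS office can't apply a rule like that, because it never sees the whole
# string: it commits to a routing path one digit at a time. A toy decision
# tree (the entries here are invented for illustration, not real translations):
ROUTES = {
    "0": "operator",
    "4": {"1": {"1": "directory assistance"}},              # 411
    "1": {"5": {"0": {"5": "toll trunk toward NPA 505"}}},  # 1 + area code...
    "2": "local subscriber: keep collecting digits on this selector",
}

def route(dialed: str):
    node = ROUTES
    for digit in dialed:
        if not isinstance(node, dict):
            return node              # already committed to a destination
        if digit not in node:
            return "reorder tone"    # nowhere to go with the digits so far
        node = node[digit]
    return node

print(parse_nanp("15055551234"))  # ('505', '555', '1234')
print(route("411"))               # directory assistance
```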
Those traditional telephone exchanges have become increasingly irrelevant to mobile telephony, and you won't find a T-Mobile ESS or DMS anywhere. All US cellular carriers have adopted the GSM technology stack, and GSM has its own definition of the switching element that can be, and often is, fulfilled by an AWS EC2 instance running RHEL 8. Calls between cell phones today, even between different carriers, are often connected completely over IP and never touch a traditional telephone exchange.

The point is that not only is telephone number parsing less constrained on today's telephone network; in the case of cellular phones, it is outright required to be more flexible. GSM also defines the properties of phone numbers, and it is a very loose definition. Keep in mind that GSM is deeply European, and was built from the start to accommodate the wide variety of dialing practices found in Europe. This manifests in ways big and small; one of the notable small ways is that the European emergency number 112 works just as well as 911 on US cell phones, because GSM dictates special handling for emergency numbers and dictates that 112 is one of those numbers. In fact, the definition of an "emergency call" on modern GSM networks is requesting a SIP URI of "urn:service:sos".

This reveals that dialed number handling on cellular networks is fundamentally different. When you dial a number on your cellular phone, the phone collects the entire number and then applies a series of rules to determine what to do, often leading to a GSM call setup process in which the entire number, along with various flags, is sent to the network. This is all software-defined. In the immortal words of our present predicament, "everything's computer."

The bottom line is that, within certain regulatory boundaries and requirements set by GSM, cellular carriers can do pretty much whatever they want with phone numbers. Obviously numbers need to be NANP-compliant to be carried by the POTS, but many modern cellular calls aren't carried by the POTS; they are completed entirely within cellular carrier systems through their own interconnection agreements. This freedom allows all kinds of things, like "HD voice" (cellular calls connected without the narrow filtering and companding used by the traditional network), and a lot of flexibility in dialing.

Most people already know about some weird cellular phone numbers. For example, you can dial *#06# to display your phone's various serial numbers. This is an example of a GSM MMI (man-machine interface) code: phone numbers that are handled entirely within your device but nonetheless defined as dialable numbers by GSM for compatibility with even the most basic flip phones. GSM also defines USSD codes, for unstructured supplementary service data, which set up connections to the network that can be used in any arbitrary way the network pleases. Older prepaid phone services used to implement balance check and top-up operations using USSD numbers, and they're also often used in ways similar to Vertical Service Codes (VSCs) on the landline network to control carrier features. USSD also enabled the first forms of mobile data, which involved a "special telephone call" to a USSD number in order to download a cut-down form of ESPN in a weird mobile-specific markup language.

Now, put yourself in the shoes of an enterprising cellular network. The flexibility of processing phone numbers as you please opens up all kinds of possibilities. Innovative services! Customer convenience! Sell them for money!
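As a rough sketch of that "collect the whole string, then decide" model, here is another short Python illustration. The categories are the ones described above, but the matching rules are my own loose simplification, not the actual GSM parsing logic, and the short codes in the example are placeholders.

```python
# Roughly how a phone might classify a completed dial string before deciding
# what to do with it. Simplified for illustration: the real rules live in the
# GSM specs and in carrier configuration, not in a little function like this.
EMERGENCY_NUMBERS = {"112", "911"}   # GSM mandates these regardless of region

def classify_dial_string(s: str) -> str:
    if s in EMERGENCY_NUMBERS:
        return "emergency call: set up a session toward urn:service:sos"
    if s == "*#06#":
        return "MMI code: show the IMEI locally, no network involved"
    if s.startswith(("*", "#")) and s.endswith("#"):
        return "USSD: open a session and hand the whole string to the carrier"
    if s.startswith(("*", "#")):
        return "carrier-defined code (a VSC or mobile dial code)"
    return "ordinary call setup, sending the full number plus flags"

# The short codes below are made-up examples, not real carrier assignments.
for example in ("112", "*#06#", "*225#", "#123", "15055551234"):
    print(example, "->", classify_dial_string(example))
```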
Oh my god, sell them for money! It seems like this started with customer service. It is an old practice, dating to the Bell operating companies, to have special short phone numbers to reach the telephone company itself. The details varied by company (often based on technical constraints in their switching system), but a common early setup was that dialing 114 got you the repair service operator to report a problem with your phone line. These numbers were usually listed in the front of the phone book, and for the phone company the fact that they were "special" or nonstandard was sort of a feature, since they could ensure that they were always routed within the same switch.

The selection of "911" as the US emergency number seems rooted in this practice, as later on several major telcos used the "N11" numbers for their service lines. This became immortalized in the form of 611, which will get you customer service for most phone carriers. So cellular companies did the same, allocating themselves "special" numbers for various service lines. Verizon offers #PMT to make a payment. Naturally, there's also room for upsell services: #ROAD for roadside assistance on Verizon.

The odd thing about these phone numbers is that there's really no standard involved; they're just the arbitrary practices of specific cellular companies. The term "mobile dial code" (MDC) is usually used to refer to them, although that term seems to have arisen organically rather than by intent. Remember, these aren't a real thing! The carriers just make them up, all on their own. The only real constraint on MDCs is that they need to not collide with any POTS number, which is most easily achieved by prefixing them with some combination of * and #, and usually not "*#" because it's referenced by the GSM standard for MMI.

MDCs are available for purchase, but the terms don't seem to be public and you have to negotiate separately with each carrier. That's because there is no centralization. This is where MDCs stand in clear contrast to the better known SMS Short Code, or SMSSC. Those are the five or six-digit numbers widely used in advertising campaigns. SMSSCs are centrally managed by the SMS Short Code Registry, which is a function of industry association CTIA but contracted to iConectiv. iConectiv is sort of like the SAIC of the communications industry, a huge company that dates back to the Bell System (where it became Bellcore after divestiture) and that no one has heard of but nonetheless is a critically important part of the telephone system.

Providers that want to have an SMSSC (typically on behalf of one of their customers) pay a fee, and usually recoup it from the end user. That fee is not cheap: typical end-user rates for an SMSSC run over $10k a year. But at least it's straightforward, and your SMS A2P or marketing company can make it happen for you.

MDCs have no such centralization, no standardized registration process. You negotiate with each carrier individually. That means it's pretty difficult to put together "complete coverage" on an MDC by getting the same one assigned by every major carrier. And this is one of those areas where "good enough" is seldom good enough; people get pissed off when something you advertise doesn't work. Putting a phone number that only works for some people on a billboard can quickly turn into an expensive embarrassment, so companies will be wary of using an MDC in marketing if they don't feel really confident that it works for the vast majority of cellphone users.
Because of this fragmentation, adoption of MDCs for marketing purposes has been very low. The only going concern I know of is #250, operated by a company called Mobile Direct Response. The premise of #250 is very simple: users call #250 and are greeted by a simple IVR. They say a keyword, and they're either forwarded to the phone number of the business that paid for the keyword or they receive a text message response with more information. #250 is specifically oriented towards radio advertising, where asking people to remember a ten-digit phone number is, well, asking a lot. It's also made the jump to podcast advertising. #250 is priced in a very radio-centric way, by the keyword and the size of the market area in which the advertisement that gives the keyword is played.

#250 was founded by Dave Robinett, who used to work on marketing at Sprint, presumably where he became aware that these MDCs were a possibility. He has negotiated for #250 to work across a substantial list of cellular carriers in the US and Canada, providing almost complete coverage. That wasn't easy; Robinett said in an interview that it took five years to get AT&T, T-Mobile, Verizon, and Sprint on board.

#250 does not appear to be especially widely used. For one, the website is a little junky, with some broken links and other indications that it is not backed by a large communications department. Dave Robinett may be the entire company. They've been operating since at least 2017, and I've only ever heard it in an ad once---a podcast ad that ended with "Call #250 and say I need a dentist." One thing you quickly notice when you look into telephone marketing is that dentists are apparently about 80% of the market. He does mention success with shows like "Rush, Hannity, and Levin," so it's safe to say that my radio habits are a little different from Robinett's.

That's not to say that #250 is a failure. In the same interview Robinett says that the company pays his mortgage and, well, that ain't too bad. But it's also nothing like the widespread adoption of SMSSCs. One wonders if the limitation of MDCs to one company that is so focused on radio marketing limits their potential. It might really open things up if some company created a registration service, and prenegotiated terms with carriers so that companies could pick up their own MDCs to use as they please.

Well, yeah, someone's trying. Around 2006, a recently-founded mobile marketing company called Zoove announced StarStar dialing. I'm a little unclear on Zoove's history. It seems that they were originally founded as Teleractive in Rhode Island as an SMS short code keyword response service, and after an infusion of VC cash moved to Palo Alto and started looking for something bigger. In 2016, they were acquired by a call center technology company called Mindful. Or maybe Zoove sold the StarStar business to Mindful? Stick a pin in that.

I don't love the name StarStar, which has shades of Spacestar Ordering. But it refers to their chosen MDC prefix, two stars. Well, that point is a little odd: according to their marketing material you can also get numbers with a # prefix or * prefix, but all of the examples use **. I would say that, in general, StarStar has it a little less together than #250. Their website is kind of broken; it only loads intermittently and some of the images are missing. At one point it uses the term "CADC" to describe these numbers but I can't find that expanded anywhere.
Plus the "About" page refers repeatedly to Virtual Hold Technologies, which renamed to VHT in 2018 and Mindful 2022. It really feels like the vestigial website of a dead company. I know about StarStar because, for a time, trucks from moving franchise All My Sons prominently bore the number MOVE on the side. Indeed, this is still one of the headline examples on the StarStar website, but it doesn't work. I just get a loud click and then the call ends. And it's not that StarStar doesn't work with my mobile carrier, because StarStar's own number MOBILE does connect to their IVR. That IVR promises that a representative will speak with me shortly, plays about five seconds of hold music, and then dumps me on a voicemail system. Despite StarStar numbers apparently basically working, I'm finding that most of the examples they give on their website won't even connect. Perhaps results will vary depending on the mobile network. Well, perhaps not that much is lost. StarStar was founded by Steve Doumar, a serial telephone marketing entrepreneur with a colorful past founding various inbound call center companies. Perhaps his most famous venture is R360, a "lead acquisition" service memorialized by headlines like "Drug treatment referral service took advantage of addictions to make a quick buck" from the Federal Trade Commission. He's one of those guys whose bio involves founding a new company every two years, which he has to spin as entrepreneurial dynamism rather than some combination of fleeing dissatisfied investors and fleeing angered regulators. Today he runs whisp.io, a "customer activation platform" that appears to be a glorified SMS advertising service featuring something ominously called "simplified opt-in." Whisp has a YouTube channel which features the 48-second gem "Fun Fact We Absolutely Love About Steve Doumar". Description: Our very own CEO, Steve Doumar is a kind and generous person who has given back to the community in many ways; this man is absolutely a man with a heart of gold. Do you want to know the fun fact? Yes you do! Here it is: "He is an incredible philanthropist. He loves helping other people. Every time I'm with him he comes up with new ways and new ideas to help other people. Which I think is amazing. And he doesn't brag about it, he doesn't talk about it a lot." Except he's got his CMO making a YouTube video about it? From Steve Doumar's blog: American entrepreneur Ray Kroc expressed the importance of persisting in a busy world where everyone wants a bite of success. This man is no exception. An entrepreneur. A family man. A visionary. These are the many names of a man that has made it possible for opt-ins to be safe, secure, and accurate; Steve Doumar. I love this stuff, you just can't make it up. I'm pretty sure what's going on here is just an SEO effort to outrank the FTC releases and other articles about the R360 case when you search for his name. It's only partially working, "FTC Hits R360 and its Owner With $3.8 Million Civil ..." still comes in at Google result #4 for "Steve Doumar," at least for me. But hey, #4 is better than #1. Well, to be fair to StarStar, I don't think Steve Doumar has been involved for some years, but also to be fair, some of their current situation clearly dates to past behavior that is maybe less than savory. Zoove originally styled itself as "The National StarStar Registry," clearly trying to draw parallels to CTIA/iConectiv's SMSSC registry. 
Their largest customer was evidently a company called Sumotext, which leased a number of StarStar numbers to offer an SMS and telephone marketing service. In 2016, Sumotext sued StarStar, Zoove, VHT (now Mindful), and a healthy list of other entities all involved in StarStar, including the intriguingly named StarSteve LLC. I'm not alone in finding the corporate history a little baffling; in a footnote on one ruling the court expressed confusion about all the different names and opted to call them all Zoove.

In any case, Sumotext alleged that Zoove, StarSteve, and VHT all merged as part of a scheme to illegally monopolize the StarStar market by undercutting the companies that had been leasing the numbers and effectively giving VHT (Mindful) an exclusive ability to offer marketing services with StarStar numbers. The case didn't end up going anywhere for Sumotext; the jury found that Sumotext hadn't established a relevant market, which is a key part of a Sherman Act case. An appeal was made all the way to the Supreme Court, but they didn't take it up.

What the case did do was publicize some pretty sketchy-sounding details, like the seemingly uncontested accusation that VHT got Sumotext's customer list from the registry database and used it to convert them all into StarSteve customers. And yes, the Steve in StarSteve is Steve Doumar.

As best I can tell, the story here is that Steve Doumar founded Zoove (or bought Teleractive and renamed it or something?) to establish the National StarStar Registry, then founded a marketing company called StarSteve that resold StarStar numbers, then merged StarSteve and the National StarStar Registry together and cut off all of the other resellers. Apparently not a Sherman Act violation, but it sure is a bad look, and I wonder how much it contributed to the lack of adoption of the whole StarStar idea---especially given that Sumotext seems to have been responsible for most of that adoption, including the All My Sons deal for **MOVE. I wonder if All My Sons had to take **MOVE off of their trucks because of the whole StarSteve maneuver? That seems to be what happened.

Look, ten-digit phone numbers are hard to remember, that much is true. But as is, the "MDC" industry doesn't seem stable enough for advertising applications where the number needs to continue to work into the future. I think the #250 service is probably here to stay, but confined to the niche of audio advertising. StarStar raised at least $30 million in capital in the 2010s, but seems to have shot itself in the foot. StarStar owner VHT/Mindful, now acquired by Medallia, doesn't even mention StarStar as a product offering.

Hey, remember how Steve Doumar is such a great philanthropist? There are a lot of vestiges around of StarStar Inc., a nonprofit that made StarStar numbers available to charitable organizations. Their website, starstar.org, is now a Wix error page. You can find old articles about StarStar Me, also written **me, which sounds lewd but was a $3/mo offering that allowed customers to get a vanity short code (such as ** followed by their name)---the original form of StarStar, dating back to 2012 and the beginning of Zoove. In a press release announcing StarStar Me, Zoove CEO Joe Gillespie said:

With two-thirds of smartphone users having downloaded social networking apps to their phones, there’s a rapidly growing trend in today's on-the-go lifestyle to extend our personal communications and identity into the digital realm via our mobile phones.
And somehow this leads to paying $3 to get StarStarred? I love it! It's so meaningless! And years later it would be StarStar Mobile formerly Zoove by VHT now known as Mindful a Medallia company. Truly an inspiring story of industry, and just one little corner of the vast tapestry of phone numbers.
Some time ago, via a certain orange website, I came across a report about a mission to recover nuclear material from a former Soviet test site. I don't know what you're doing here; go read that instead. But it brought up a topic that I have only known very little about: hydronuclear testing.

One of the key reasons for the nonproliferation concern at Semipalatinsk was the presence of a large quantity of weapons-grade material. This created a substantial risk that someone would recover the material and either use it directly or sell it---either way giving a significant leg up on the construction of a nuclear weapon.

That's a bit odd, though, isn't it? Material refined for use in weapons is scarce and valuable, and besides that, rather dangerous. It's uncommon to just leave it lying around, especially not hundreds of kilograms of it. This material was abandoned in place because the nature of the testing performed required that a lot of weapons-grade material be present, and made it very difficult to remove. As the Semipalatinsk document mentions in brief, similar tests were conducted in the US and led to a similar abandonment of special nuclear material at Los Alamos's TA-49.

Today, I would like to give the background on hydronuclear testing---the what and why. Then we'll look specifically at LANL's TA-49 and the impact of the testing performed there.

First we have to discuss the boosted fission weapon. Especially in the 21st century, we tend to talk about "nuclear weapons" as one big category. The distinction between an "A-bomb" and an "H-bomb," for example, or between a conventional nuclear weapon and a thermonuclear weapon, is mostly forgotten. That's no big surprise: thermonuclear weapons have been around since the 1950s, so it's no longer a great innovation or escalation in weapons design.

The thermonuclear weapon was not the only post-WWII design innovation. At around the same time, Los Alamos developed a related concept: the boosted weapon. Boosted weapons were essentially an improvement in the efficiency of nuclear weapons. When the core of a weapon goes supercritical, the fission produces a powerful pulse of neutrons. Those neutrons cause more fission, the chain reaction that makes up the basic principle of the atomic bomb. The problem is that the whole process isn't fast enough: the energy produced blows the core apart before it's been sufficiently "saturated" with neutrons to completely fission. That leads to a lot of the fuel in the core being scattered, rather than actually contributing to the explosive energy.

In boosted weapons, a material that will undergo fusion is added to the mix, typically tritium and deuterium gas. The immense heat of the beginning of the supercritical stage causes the gas to undergo fusion, and it emits far more neutrons than the fissioning fuel does alone. The additional neutrons cause more fission to occur, improving the efficiency of the weapon. Even better, despite the theoretical complexity of driving a gas into fusion, the mechanics of this approach are actually simpler than the techniques used to improve yield in non-boosted weapons (pushers and tampers). The result is that boosted weapons produce a more powerful yield in comparison to the amount of fuel, and the non-nuclear components can be made simpler and more compact as well.

This was a pretty big advance in weapons design, and boosting is now a ubiquitous technique. It came with some downsides, though. The big one is that whole property of making supercriticality easier to achieve.
Early implosion weapons were remarkably difficult to detonate, requiring an extremely precisely timed detonation of the high explosive shell. While an inconvenience from an engineering perspective, the inherent difficulty of achieving a nuclear yield also provided a safety factor. If the high explosives detonated for some unintended reason, like being struck by cannon fire as a bomber was intercepted, or impacting the ground following an accidental release, it wouldn't "work right." Uneven detonation of the shell would scatter the core, rather than driving it into supercriticality. This property was referred to as "one point safety": a detonation at one point on the high explosive assembly should not produce a nuclear yield. While it has its limitations, it became one of the key safety principles of weapon design.

The design of boosted weapons complicated this story. Just a small fission yield, from a small fragment of the core, could potentially start the fusion process and trigger the rest of the core to detonate as well. In other words, weapon designers became concerned that boosted weapons would not have one point safety. As it turns out, two-stage thermonuclear weapons, which were being fielded around the same time, posed a similar set of problems.

The safety problems around more advanced weapon designs came to a head in the late '50s. Incidentally, so did something else: shifts in Soviet politics had given Khrushchev extensive power over Soviet military planning, and he was no fan of nuclear weapons. After some on-again, off-again dialog between the time's nuclear powers, the US and UK agreed to a voluntary moratorium on nuclear testing which began in late 1958.

For weapons designers this was, of course, a problem. They had planned to address the safety of advanced weapon designs through a testing campaign, and that was now off the table for the indefinite future. An alternative had to be developed, and quickly.

In 1959, the Hydronuclear Safety Program was initiated. By reducing the amount of material in otherwise real weapon cores, physicists realized they could run a complete test of the high explosive system and observe its effects on the core without producing a meaningful nuclear yield. These tests were dubbed "hydronuclear" because of the desire to observe the behavior of the core as it flowed like water under the immense explosive force. While the test devices were in some ways real nuclear weapons, the nuclear yield would be vastly smaller than the high explosive yield, practically nil. Weapons designers seemed to agree that these experiments complied with the spirit of the moratorium, being far from actual nuclear tests, but there was enough concern that Los Alamos went to the AEC and President Eisenhower for approval. They evidently agreed, and work started immediately to identify a suitable site for hydronuclear testing.

While hydronuclear tests do not create a nuclear yield, they do involve a lot of high explosives and radioactive material. The plan was to conduct the tests underground, where the materials cast off by the explosion would be trapped. This would solve the immediate problem of scattering nuclear material, but it would obviously be impractical to recover the dangerous material once it was mixed with unstable soil deep below the surface. The material would stay, and it had to stay put!
The US Army Corps of Engineers, a center of expertise in hydrology because of their reclamation work, arrived in October 1959 to begin an extensive set of studies on the Frijoles Mesa site. This was an unused area near a good road but far on the east edge of the laboratory, well separated from the town of Los Alamos and pretty much anything else. More importantly, it was a classic example of northern New Mexican geology: high up on a mesa built of tuff and volcanic sediments, well-drained and extremely dry soil in an area that received little rain. One of the main migration paths for underground contaminants is their interaction with water, and specifically the tendency of many materials to dissolve into groundwater and flow with it towards aquifers.

The Corps of Engineers drilled test wells, about 1,500' deep, and a series of 400' core samples. They found that on the Frijoles Mesa, groundwater was over 1,000' below the surface, and that everything above was far from saturation. That means no mobility of the water, which is trapped in the soil. It's just about the ideal situation for putting something underground and having it stay. Incidentally, this study would lead to the development of a series of new water wells for Los Alamos's domestic water supply.

It also gave the green light for hydronuclear testing, and Frijoles Mesa was dubbed Technical Area 49 and subdivided into a set of test areas. Over the following three years, these test areas would see about 35 hydronuclear detonations carried out in the bottom of shafts that were about 200' deep and 3-6' wide.

It seems that for most tests, the hole was excavated and lined, with a ladder installed to reach the bottom. Technicians worked at the bottom of the hole to prepare the test device, which was connected by extensive cabling to instrumentation trailers on the surface. When the "shot" was ready, the hole was backfilled with sand and sealed at the top with a heavy plate. The material on top of the device held everything down, preventing migration of nuclear material to the surface. The high explosives did, of course, destroy the test device and the cabling, but not before the instrumentation trailers had recorded a vast amount of data.

If you read these kinds of articles, you must know that the 1958 moratorium did not last. Soviet politics shifted again, France began nuclear testing, negotiations over a more formal test ban faltered. US intelligence suspected that the Soviet Union had operated their nuclear weapons program at full tilt during the test ban, and the military suspected clandestine tests, although there was no evidence they had violated the moratorium. Of course, that they continued their research efforts is guaranteed; we did as well. Physicist Edward Teller, ever the nuclear weapons hawk, opposed the moratorium and pushed to resume testing. In 1961, the Soviet Union resumed testing, culminating in the test of the record-holding "Tsar Bomba," a 50 megaton device. The US resumed testing as well. The arms race was back on.

US hydronuclear testing largely ended with the resumption of full-scale testing. The same safety studies could be completed on real weapons, and those tests would serve other purposes in weapons development as well. Although post-moratorium testing included atmospheric detonations, the focus had shifted towards underground tests, and the 1963 Partial Test Ban Treaty restricted the US and USSR to underground tests only.
One wonders about the relationship between hydronuclear testing at TA-49 and the full-scale underground tests extensively performed at the NTS. Underground testing began in 1951 with Buster-Jangle Uncle, a test to determine how big of a crater could be produced by a ground-penetrating weapon. Uncle wasn't really an underground test in the modern sense: the device was emplaced only 17 feet deep and still produced a huge cloud of fallout. It started a trend, though: a similar 1955 test was set 67 feet deep, producing a spectacular crater, before the 1957 Plumbbob Pascal-A was detonated at 486 feet and produced radically less fallout. 1957's Plumbbob Rainier was the first fully-contained underground test, set at the end of a tunnel excavated far into a hillside. This test emitted no fallout at all, proving the possibility of containment. Thus both the idea of emplacing a test device in a deep hole, and the fact that testing underground could contain all of the fallout, were known when the moratorium began in late 1958.

What's very interesting about the hydronuclear tests is the fact that technicians actually worked "downhole," at the bottom of the excavation. Later underground tests were prepared by assembling the test device at the surface, as part of a rocket-like "rack," and then lowering it to the bottom just before detonation. These techniques hadn't yet been developed in the '50s, thus the use of a horizontal tunnel for the first fully-contained test. Many of the racks used for underground testing were designed and built by LANL, but others (called "canisters" in an example of the tendency of the labs to not totally agree on things) were built by Lawrence Livermore. I'm not actually sure which of the two labs started building them first, a question for future research. It does seem likely that the hydronuclear testing at LANL advanced the state of the art in remote instrumentation and underground test design, facilitating the adoption of fully-contained underground tests in the following years.

During the three years of hydronuclear testing, shafts were excavated in four testing areas. It's estimated that the test program at TA-49 left about 40kg of plutonium and 93kg of enriched uranium underground, along with 92kg of depleted uranium and 13kg of beryllium (both toxic contaminants). Because of the lack of a nuclear yield, these tests did not create the caverns associated with underground testing. Material from the weapons likely spread within just a 10-20' area, as holes were drilled on a 25' grid and contamination from previous neighboring tests was encountered only once. The tests also produced quite a bit of ancillary waste: things like laboratory equipment, handling gear, cables and tubing, that are not directly radioactive but were contaminated with radioactive or toxic materials. In the fashion typical of the time, this waste was buried on site, often as part of the backfilling of the test shafts.

During the excavation of one of the test shafts, 2-M, in December 1960, contamination was detected at the surface. It seems that the geology allowed plutonium from a previous test to spread through cracks into the area where 2-M was being drilled. The surface soil contaminated by drill cuttings was buried back in hole 2-M, but this incident made Area 2 the most heavily contaminated part of TA-49. When hydronuclear testing ended in 1961, Area 2 was covered with 6' of gravel and 4-6" of asphalt to better contain any contaminated soil.
Several support buildings on the surface were also contaminated, most notably a building used as a radiochemistry laboratory to support the tests. An underground calibration facility that allowed for exposure of test equipment to a contained source in an underground chamber was also built at TA-49 and similarly contaminated by use with radioisotopes.

The Corps of Engineers continued to monitor the hydrology of the site from 1961 to 1970, and test wells and soil samples showed no indication that any contamination was spreading. In 1971, LANL established a new environmental surveillance department that assumed responsibility for legacy sites like TA-49. That department continued to sample wells and soil, and added air sampling. Monitoring of stream sediment downhill from the site was added in the '70s, as many of the contaminants involved can bind to silt and travel with surface water. This monitoring has not found any spread either.

That's not to say that everything is perfect. In 1975, a section of the asphalt pad over Area 2 collapsed, leaving a three-foot-deep depression. Rainwater pooled in the depression and then flowed through the gravel into hole 2-M itself, collecting in the bottom of the lining of the former experimental shaft. In 1976, the asphalt cover was replaced, but concerns remained about the water that had already entered 2-M. It could potentially travel out of the hole, continue downwards, and carry contamination into the aquifer around 800' below. Worse, a nearby core sample hole had picked up some water too, suggesting that the water was flowing out of 2-M through cracks and into nearby features. Since the core hole had a slotted liner, it would be easier for water to leave it and soak into the ground below.

In 1980, the water that had accumulated in 2-M was removed by lifting about 24 gallons to the surface. While the water was plutonium-contaminated, it fell within acceptable levels for controlled laboratory areas. Further inspections through 1986 did not find additional water in the hole, suggesting that the asphalt pad was continuing to function correctly. Several other investigations were conducted, including the drilling of some additional sample wells and examination of other shafts in the area, to determine if there were other routes for water to enter the Area 2 shafts. Fortunately no evidence of ongoing water ingress was found.

In 1986, TA-49 was designated a hazardous waste site under the Resource Conservation and Recovery Act. Shortly after, the site was evaluated under CERCLA to prioritize remediation. Scoring using the Hazard Ranking System determined a fairly low risk for the site, due to the lack of spread of the contamination and evidence suggesting that it was well contained by the geology. Still, TA-49 remains an environmental remediation site and now falls under a license granted by the New Mexico Environment Department. This license requires ongoing monitoring and remediation of any problems with the containment. For example, in 1991 the asphalt cover of Area 2 was found to have cracked and allowed more water to enter the sample wells. The covering was repaired once again, and investigations were made every few years from 1991 to 2015 to check for further contamination. Monitoring continues today. So far, Area 2 has not been found to pose an unacceptable risk to human health or a risk to the environment.
NMED permitting also covers the former radiological laboratory and calibration facility, and infrastructure related to them like a leach field from drains. Sampling found some surface contamination, so the affected soil was removed and disposed of at a hazardous waste landfill where it will be better contained.

TA-49 was reused for other purposes after hydronuclear testing. These activities included high explosive experiments contained in metal "bottles," carried out in a metal-lined pit under a small structure called the "bottle house." Part of the bottle house site was later reused to build a huge hydraulic ram used to test steel cables at their failure strength. I am not sure of the exact purpose of this "Cable Test Facility," but given the timeline of its use during the peak of underground testing and the design, I suspect LANL used it as a quality control measure for the cable assemblies used in lowering underground test racks into their shafts. No radioactive materials were involved in either of these activities, but high explosives and hydraulic oil can both be toxic, so both were investigated and received some surface soil cleanup.

Finally, the NMED permit covers the actual test shafts. These have received numerous investigations over the sixty years since the original tests, and significant contamination is present as expected. However, that contamination does not seem to be spreading, and modeling suggests that it will stay that way.

In 2022, the NMED issued Certificates of Completion releasing most of the TA-49 remediation sites without further environmental controls. The test shafts themselves, known to NMED by the punchy name of Solid Waste Management Unit 49-001(e), received a certificate of completion that requires ongoing controls to ensure that the land is used only for industrial purposes. Environmental monitoring of the TA-49 site continues under LANL's environmental management program and federal regulation, but TA-49 is no longer an active remediation project. The plutonium and uranium are just down there, and they'll have to stay.
In a previous life, I worked for a location-based entertainment company, part of a huge team of people developing a location for Las Vegas, Nevada. It was COVID, a rough time for location-based anything, and things were delayed more than usual. Coworkers paid a lot of attention to another upcoming Las Vegas attraction, one with a vastly larger budget but still struggling to make schedule: the MSG (Madison Square Garden) Sphere. I will set aside jokes about it being a square sphere, but they were perhaps one of the reasons that it underwent a pre-launch rebranding to merely the Sphere.

If you are not familiar, the Sphere is a theater and venue in Las Vegas. While it's known mostly for the video display on the outside, that's just marketing for the inside: a digital dome theater, with seating at a roughly 45 degree stadium layout facing a near hemisphere of video displays. It is a "near" hemisphere because the lower section is truncated to allow a flat floor, which serves as a stage for events but is also a practical architectural decision to avoid completely unsalable front rows. It might seem a little bit deceptive that an attraction called the Sphere does not quite pull off even a hemisphere of "payload," but the same compromise has been reached by most dome theaters.

While the use of digital display technology is flashy, especially on the exterior, the Sphere is not quite the innovation that it presents itself as. It is just a continuation of a long tradition of dome theaters. Only time will tell, but the financial difficulties of the Sphere suggest that it follows the tradition faithfully: towards commercial failure.

You could make an argument that the dome theater is hundreds of years old, but I will omit it. Things really started developing, at least in our modern tradition of domes, with the 1923 introduction of the Zeiss planetarium projector. Zeiss projectors and their siblings used a complex optical and mechanical design to project accurate representations of the night sky. Many auxiliary projectors, incorporated into the chassis and giving these projectors famously eccentric shapes, rendered planets and other celestial bodies. Rather than digital light modulators, the images from these projectors were formed by purely optical means: perforated metal plates, glass plates with etched metalized layers, and fiber optics. The large, precisely manufactured image elements and specialized optics created breathtaking images.

While these projectors had considerable entertainment value, especially in the mid-century when they represented some of the most sophisticated projection technology yet developed, their greatest potential was obviously in education. Though planetarium projectors were fantastically expensive (being hand-built in Germany with incredible component counts) [1], they were widely installed in science museums around the world. Most of us probably remember a dogbone-shaped Zeiss, or one of their later competitors like Spitz or Minolta, from our youths. Unfortunately, these marvels of artistic engineering were mostly retired as digital projection of near comparable quality became similarly priced in the 2000s.

But we aren't talking about projectors; we're talking about theaters. Planetarium projectors were highly specialized to rendering the night sky, and everything about them was intrinsically spherical. Both for a reasonable viewing experience and for the projector to produce a geometrically correct image, the screen had to be a spherical section.
Thus the planetarium itself: in its most traditional form, rings of heavily reclined seats below a hemispherical dome. The dome was rarely a full hemisphere, but was usually truncated at the horizon. This was mostly a practical decision but integrated well into the planetarium experience, given that sky viewing is usually poor near the horizon anyway. Many planetaria painted a city skyline or forest silhouette around the lower edge to make the transition from screen to wall more natural. Later, theatrical lighting often replaced the silhouette, reproducing twilight or the haze of city lights.

Unsurprisingly, the application-specific design of these theaters also limits their potential. Despite many attempts, the collective science museum industry has struggled to find entertainment programming for planetaria much beyond Pink Floyd laser shows [1]. There just aren't that many things that you look up at.

Over time, planetarium shows moved in more narrative directions. Film projection promised new flexibility---many planetaria with optical star projectors were also equipped with film projectors, which gave show producers exciting new options. Documentary video of space launches and animations of physical principles became natural parts of most science museum programs, but were a bit awkward on the traditional dome. You might project four copies of the image just above the horizon in the four cardinal directions, for example. It was very much a compromise.

With time, the theater adapted to the projection once again: the domes began to tilt. By shifting the dome in one direction, and orienting the seating towards that direction, you could create a sort of compromise point between the traditional dome and the traditional movie theater. The lower central area of the screen was a reasonable place to show conventional film, while the full size of the dome allowed the starfield to almost fill the audience's vision. The experience of the tilted dome is compared to "floating in space," as opposed to looking up at the sky.

In true Cold War fashion, it was a pair of weapons engineers (one nuclear weapons, the other missiles) who designed the first tilted planetarium. In 1973, the planetarium of what is now called the Fleet Science Center in San Diego, California opened to the public. Its dome was tilted 25 degrees to the horizon, with the seating installed on a similar plane and facing in one direction. It featured a novel type of planetarium projector developed by Spitz and called the Space Transit Simulator.

The STS was not the first, but it was an early mechanical projector to be controlled by a computer---a computer that also had simultaneous control of other projectors and lighting in the theater, what we now call a show control system. Even better, the STS's innovative optical design allowed it to warp or bend the starfield to simulate its appearance from locations other than Earth. This was the "transit" feature: with a joystick connected to the control computer, the planetarium presenter could "fly" the theater through space in real time. The STS was installed in a well in the center of the seating area, and its compact chassis kept it low, preserving the spherical geometry (with the projector at the center of the sphere) without blocking the view of audience members sitting behind it and facing forward.

And yet my main reason for discussing the Fleet planetarium is not the planetarium projector at all.
It is a second projector, an "auxiliary" one, installed in a second well behind the STS. The designers of the planetarium intended to show film as part of their presentations, but they were not content with a small image at the center viewpoint. The planetarium commissioned a few of the industry's leading film projection experts to design a film projection system that could fill the entire dome, just as the planetarium projector did. They knew that such a large dome would require an exceptionally sharp image. Planetarium projectors, with their large lithographed slides, offered excellent spatial resolution. They made stars appear as point sources, the same as in the night sky. 35mm film, spread across such a large screen, would be obviously blurred in comparison. They would need a very large film format.

Fortuitously, the Multiscreen Corporation was almost simultaneously developing a "sideways" 70mm format. This 15-perf format used 70mm film but fed it through the projector sideways, making each frame much larger than typical 70mm film. In its debut, at a temporary installation at the 1970 Expo in Osaka, it was dubbed IMAX. IMAX made an obvious basis for a high-resolution projection system, and so the then-named IMAX Corporation was added to the planetarium project.

The Fleet's film projector ultimately consisted of an IMAX film transport with a custom-built compact, liquid-cooled lamphouse and spherical fisheye lens system. The large size of the projector, with its complex IMAX framing system and cooling equipment, made it difficult to conceal in the theater's projector well. Threading film into IMAX projectors is quite complex, with several checks the projectionist must make during a pre-show inspection. The projectionist needed room to handle the large film, and to route it to and from the enormous reels. The projector's position in the middle of the seating area left no room for any of this.

We can speculate that it was, perhaps, one of the designers' missile experience that led to the solution: the projector was serviced in a large projection room beneath the theater's seating. Once it was prepared for each show, it rose on near-vertical rails until just the top emerged in the theater. Rollers guided the film as it ran from a platter, up the shaft to the projector, and back down to another platter. Cables and hoses hung below the projector, following it up and down like the traveling cable of an elevator. To advertise this system, probably the greatest advance in film projection since the IMAX format itself, the planetarium coined the term Omnimax.

Omnimax was not an easy or economical format. Ideally, footage had to be taken in the same format, using a 70mm camera with a spherical lens system. These cameras were exceptionally large and heavy, and the huge film format limited cinematographers to short takes. The practical problems with Omnimax filming were big enough that the first Omnimax films faked it, projecting to the larger spherical format from much smaller conventional negatives. This was the case for "Voyage to the Outer Planets" and "Garden Isle," the premiere films at the Fleet planetarium. The history of both is somewhat obscure, the latter especially. "Voyage to the Outer Planets" was executive-produced by Preston Fleet, a founder of the Fleet center (which was ultimately named for his father, a WWII aviator).
We have Fleet's sense of showmanship to thank for the invention of Omnimax: he was an accomplished business executive, particularly in the photography industry, and an aviation enthusiast who had his hands in more than one museum. Most tellingly, though, he had an eccentric hobby. He was a theater organist. I can't help but think that his passion for the theater organ, an instrument almost defined by the combination of many gizmos under electromechanical control, inspired "Voyage." The film, often called a "multimedia experience," used multiple projectors throughout the planetarium to depict a far-future journey of exploration. The Omnimax film depicted travel through space, with slide projectors filling in artists' renderings of the many wonders of space.

The ten-minute Omnimax film was produced by Graphic Films Corporation, a brand that would become closely associated with Omnimax in the following decades. Graphic was founded in the midst of the Second World War by Lester Novros, a former Disney animator who found a niche creating training films for the military. Novros's fascination with motion and expertise in presenting complicated 3D scenes drew him to aerospace, and after the war he found much of his business in the newly formed Air Force and NASA.

He was also an enthusiast of niche film formats, and Omnimax was not his first dome. For the 1964 New York World's Fair, Novros and Graphic Films had produced "To the Moon and Beyond," a speculative science film with thematic similarities to "Voyage" and more than just a little mechanical similarity. It was presented in Cinerama 360, a semi-spherical, dome-theater 70mm format shown in a special theater called the Moon Dome. "To the Moon and Beyond" was influential in many ways, leading to Graphic Films' involvement in "2001: A Space Odyssey" and its enduring expertise in domes.

The Fleet planetarium would not remain the only Omnimax for long. In 1975, the city of Spokane, Washington struggled to find a new application for the pavilion built for Expo '74 [3]. A top contender: an Omnimax theater, in some ways a replacement for the temporary IMAX theater that had been constructed for the actual Expo. Alas, this project was not to be, but others came along: in 1978, the Detroit Science Center opened the second Omnimax theater ("the machine itself looks like and is the size of a front loader," the Detroit Free Press wrote). The Science Museum of Minnesota, in St. Paul, followed shortly after. The Carnegie Science Center, in Pittsburgh, rounded out the year's new launches.

Omnimax hit prime time the next year, with the 1979 announcement of an Omnimax theater at Caesars Palace in Las Vegas, Nevada. Unlike the previous installations, this 380-seat theater was purely commercial. It opened with the 1976 IMAX film "To Fly!," which had been optically modified to fit the Omnimax format. This choice of first film is illuminating. "To Fly!" is a 27 minute documentary on the history of aviation in the United States, originally produced for the IMAX theater at the National Air and Space Museum [4]. It doesn't exactly seem like casino fare.

The IMAX format, the flat-screen one, was born of world's fairs. It premiered at an Expo, reappeared a couple of years later at another one, and for the first years of the format most of the IMAX theaters built were associated with either a major festival or an educational institution.
This noncommercial history is a bit hard to square with the modern IMAX brand, closely associated with major theater chains and the Marvel Cinematic Universe. Well, IMAX took off, and in many ways it sold out. Over the decades since the 1970 Expo, IMAX has met widespread success with commercial films and theater owners. Simultaneously, the definition or criteria for IMAX theaters have relaxed, with smaller screens made permissible until, ultimately, the transition to digital projection eliminated the 70mm film and more or less reduced IMAX to just another ticket surcharge brand. It competes directly with Cinemark xD, for example. To the theater enthusiast, this is a pretty sad turn of events, a Westinghouse-esque zombification of a brand that once heralded the field's most impressive technical achievements.

The same never happened to Omnimax. The Caesars Omnimax theater was an odd exception; the vast majority of Omnimax theaters were built by science museums and the vast majority of Omnimax films were science documentaries. Quite a few of those films had been specifically commissioned by science museums, often on the occasion of their Omnimax theater opening.

The Omnimax community was fairly tight, and so the same names recur. The Graphic Films Corporation, which had been around since the beginning, remained so closely tied to the IMAX brand that they practically shared identities. Most Omnimax theaters, and some IMAX theaters, used to open with a vanity card often known as "the wormhole." It might be hard to describe beyond "if you know you know," but it certainly made an impression on everyone I know who grew up near a theater that used it. There are some videos, although unfortunately none of them are very good.

I have spent more hours of my life than I am proud to admit trying to untangle the history of this clip. Over time, it has appeared in many theaters with many different logos at the end, and several variations of the audio track. This is in part informed speculation, but here is what I believe to be true: the "wormhole" was originally created by Graphic Films for the Fleet planetarium specifically, and ran before "Voyage to the Outer Planets" and its double-feature companion "Garden Isle," both of which Graphic Films had worked on. This original version ended with the name Graphic Films, accompanied by an odd sketchy drawing that was also used as an early logo of the IMAX Corporation.

Later, the same animation was re-edited to end with an IMAX logo. This version ran in both Omnimax and conventional IMAX theaters, probably as a result of the extensive "cross-pollination" of films between the two formats. Many Omnimax films through the life of the format had actually been filmed for IMAX, with conventional lenses, and then optically modified to fit the Omnimax dome after the fact. You could usually tell: the reprojection process created an unusual warp in the image, and more tellingly, these pseudo-Omnimax films almost always centered the action at the middle of the IMAX frame, which was too high to be quite comfortable in an Omnimax theater (where the "frame center" was well above the "front center" point of the theater). Graphic Films had been involved in a lot of these as well, perhaps explaining the animation reuse, but it's just as likely that they had sold it outright to the IMAX Corporation, which used it as it pleased. For some reason, this version also received new audio that is mostly the same but slightly different.
I don't have a definitive explanation, but I think there may have been an audio format change between the very early Omnimax theaters and later IMAX/Omnimax systems, which might have required remastering. Later, as Omnimax domes proliferated at science museums, the IMAX Corporation (which very actively promoted Omnimax to education) gave many of these theaters custom versions of the vanity card that ended with the science museum's own logo. I have personally seen two of these, so I feel pretty confident that they exist and weren't all that rare (basically 2 out of 2 Omnimax theaters I've visited used one), but I cannot find any preserved copies. Another recurring name in the world of IMAX and Omnimax is MacGillivray Freeman Films. MacGillivray and Freeman were a pair of teenage friends from Laguna Beach who dropped out of school in the '60s to make skateboard and surf films. This is, of course, a rather cliché start for documentary filmmakers, but we must allow that it was the '60s and they were pretty much the ones creating the cliché. Their early films are hard to find in anything better than VHS rip quality, but worth watching: Wikipedia notes their significance in pioneering "action cameras," mounting 16mm cinema cameras to skateboards and surfboards, but I would say that their cinematography was innovative in more ways than just one. The 1970 "Catch the Joy," about sandrails, has some incredible shots that I struggle to explain. There's at least one where they definitely cut the shot just a couple of frames before a drifting sandrail flung their camera all the way down the dune. For some reason (I would speculate their reputation for exciting cinematography), the National Air and Space Museum chose MacGillivray and Freeman for "To Fly!". While not the first science museum IMAX documentary by any means (that was, presumably, "Voyage to the Outer Planets" given the different subject matter of the various Expo films), "To Fly!" might be called the first modern one. It set the pattern that decades of science museum films followed: a film initially written by science educators, punched up by producers, and filmed with the very best technology of the time. Fearing that the film's history content would be dry, they pivoted more towards entertainment, adding jokes and action sequences. "To Fly!" was a hit, running in just about every science museum with an IMAX theater, including Omnimax. Sadly, Jim Freeman died in a helicopter crash shortly after production. Nonetheless, MacGillivray Freeman Films went on. Over the following decades, few IMAX science documentaries were made that didn't involve them somehow. Besides the films they produced, the company consulted on action sequences in most of the format's popular features. I had hoped to present here a thorough history of the films that were actually produced in the Omnimax format. Unfortunately, this has proven very difficult: the fact that most of them were distributed only to science museums means that they are very spottily remembered, and besides, so many of the films that ran in Omnimax theaters were converted from IMAX presentations that it's hard to tell the two apart. I'm disappointed that this part of cinema history isn't better recorded, and I'll continue to put time into the effort. Science museum documentaries don't get a lot of attention, but many of them have involved formidable technical efforts. Consider, for example, the cameras: befitting the large film, IMAX cameras themselves are very large. 
When filming "To Fly!", MacGillivray and Freeman complained that the technically very basic 80-pound cameras required a lot of maintenance, were complex to operate, and wouldn't fit into the "action cam" mounting positions they were used to. The cameras were so expensive, and so rare, that they had to be far more conservative than their usual approach out of fear of damaging a camera they would not be able to replace. It turns out that they had it easy. Later IMAX science documentaries would be filmed in space ("The Dream is Alive" among others) and deep underwater ("Deep Sea 3D" among others). These IMAX cameras, modified for simpler operation and housed for such difficult environments, weighed over 1,000 pounds. Astronauts had to be trained to operate the cameras; mission specialists on Hubble service missions had, among their duties, wrangling a 70-pound handheld IMAX camera around the cabin and developing its film in a darkroom bag. There was a lot of film to handle: as a rule of thumb, one mile of IMAX film is good for eight and a half minutes. I grew up in Portland, Oregon, and so we will make things a bit more approachable by focusing on one example: the Omnimax theater of the Oregon Museum of Science and Industry, which opened as part of the museum's new waterfront location in 1992. This 330-seat theater boasted a 10,000 sq ft dome and 15 kW of sound. The premiere feature was "Ring of Fire," a volcano documentary originally commissioned by the Fleet, the Fort Worth Museum of Science and Industry, and the Science Museum of Minnesota. By the 1990s, the later era of Omnimax, the dome format was all but abandoned as a commercial concept. There were, an announcement article notes, around 90 total IMAX theaters (including Omnimax) and 80 Omnimax films (including those converted from IMAX) in '92. Considering the heavy bias towards science museums among these theaters, it was very common for the films to be funded by consortia of those museums. Considering the high cost of filming in IMAX, a lot of the documentaries had a sort of "mashup" feel. They would combine footage taken in different times and places, often originally for other projects, into a new narrative. "Ring of Fire" was no exception, consisting of a series of sections that were sometimes more loosely connected to the theme. The 1989 Loma Prieta earthquake was a focus, and the eruption of Mt. St. Helens, and lava flows in Hawaii. Perhaps one of the reasons it's hard to catalog IMAX films is this mashup quality: many of the titles carried at science museums were something along the lines of "another ocean one." I don't mean this as a criticism (many of the IMAX documentaries were excellent), but they were necessarily composed from painstakingly gathered fragments and had to cover wide topics. Given that I have an announcement feature piece in front of me, let's also use the example of OMSI to discuss the technical aspects. OMSI's projector cost about $2 million and weighed about two tons. To avoid dust damaging the expensive prints, the "projection room" under the seating was a positive-pressure cleanroom. This was especially important since the paucity of Omnimax content meant that many films ran regularly for years. The 15 kW water-cooled lamp required replacement at 800 to 1,000 hours, but unfortunately, the price is not noted. By the 1990s, Omnimax had become a rare enough system that the projection technology was a major part of the appeal. 
OMSI's installation, like most later Omnimax theaters, had the audience queue below the seating, separated from the projection room by a glass wall. The high cost of these theaters meant that they operated on high turnover, so patrons would wait in line to enter immediately after the previous showing had exited. While they waited, they could watch the projectionist prepare the next show while a museum docent explained the equipment. I have written before about multi-channel audio formats, and Omnimax gives us some more to consider. The conventional audio format for much of Omnimax's life was six-channel: left rear, left screen, center screen, right screen, right rear, and top. Each channel had an independent bass cabinet (in one theater, a "caravan-sized" enclosure with eight JBL 2245H 46cm woofers), and a crossover network fed the lowest end of all six channels to a "sub-bass" array at screen bottom. The original Fleet installation also had sub-bass speakers located beneath the audience seating, although that doesn't seem to have become common. IMAX titles of the '70s and '80s delivered audio on eight-track magnetic tape, with the additional tracks used for synchronization to the film. By the '90s, IMAX had switched to distributing digital audio on three CDs (one for each pair of channels). OMSI's theater was equipped for both, and the announcement amusingly notes the availability of cassette decks. A semi-custom audio processor made for IMAX, the Sonics TAC-86, managed synchronization with film playback and applied equalization curves individually calibrated to the theater. IMAX domes used perforated aluminum screens (also the norm in later planetaria), so the speakers were placed behind the screen in the scaffold-like superstructure that supported it. When I was young, OMSI used to start presentations with a demo program that explained the large size of IMAX film before illuminating work lights behind the screen to make the speakers visible. Much of this was the work of the surprisingly sophisticated show control system employed by Omnimax theaters, a descendant of the PDP-15 originally installed in the Fleet. Despite Omnimax's almost complete consignment to science museums, there were some efforts at bringing commercial films to the format. Titles like Disney's "Fantasia" and "Star Wars: Episode III" were distributed to Omnimax theaters via optical reprojection, sometimes even from 35mm originals. Unfortunately, the quality of these adaptations was rarely satisfactory, and the short runtimes (and marketing and exclusivity deals) typical of major commercial releases did not always work well with science museum schedules. Still, the cost of converting an existing film to dome format is pretty low, so the practice continues today. "Star Wars: The Force Awakens," for example, ran on at least one science museum dome. This trickle of blockbusters was not enough to make commercial Omnimax theaters viable. Caesars Palace closed, and then demolished, their Omnimax theater in 2000. The turn of the 21st century was very much the beginning of the end for the dome theater. IMAX was moving away from their film system and towards digital projection, but digital projection systems suitable for large domes were still a nascent technology and extremely expensive. 
The end of aggressive support from IMAX meant that filming costs became impractical for documentaries, so while some significant IMAX science museum films were made in the 2000s, the volume definitely began to taper off and the overall industry moved away from IMAX in general and Omnimax especially. It's surprising how unforeseen this was, at least by some. A ten-screen commercial multiplex in Duluth opened an Omnimax theater in 1996! Perhaps due to the sunk cost, it ran until 2010, not a bad closing date for an Omnimax theater. Science museums, with their relatively tight budgets and less competitive nature, did tend to hold over existing Omnimax installations well past their prime. Unfortunately, many didn't: OMSI, for example, closed its Omnimax theater in 2013 for replacement with a conventional digital theater that has a large screen but is not IMAX branded. Fortunately, some operators hung onto their increasingly costly Omnimax domes long enough for modernization to become practical. The IMAX Corporation abandoned the Omnimax name as more of the theaters closed, but continued to support "IMAX Dome" with the introduction of a digital laser projector with spherical optics. There are only ten examples of this system. Others, including Omnimax's flagship at the Fleet Science Center, have been replaced by custom dome projection systems built by competitors like Sony. Few Omnimax projectors remain. The Fleet, to their credit, installed the modern laser projectors in front of the projector well so that the original film projector could remain in place. It's still functional and used for reprises of Omnimax-era documentaries. IMAX projectors in general are a dying breed; a number of them have been preserved, but their complex, specialized design and the end of vendor support mean that it may become infeasible to keep them operating. We are, of course, well into the digital era. While far from inexpensive, digital projection systems are now able to match the quality of Omnimax projection. The newest dome theaters, like the Sphere, dispense with projection entirely. Instead, they use LED display panels capable of far brighter and more vivid images than projection, and with none of the complexity of water-cooled arc lamps. Still, something has been lost. There was once a parallel theater industry, a world with none of the glamor of Hollywood but one for which James Cameron hauled a camera to the depths of the ocean and Leonardo DiCaprio narrated repairs to the Hubble. In a good few dozen science museums, two-ton behemoths rose from beneath the seats, the zenith of film projection technology. After decades of documentaries, I think people forgot how remarkable these theaters were. Science museums stopped promoting them as aggressively, and much of the showmanship faded away. Sometime in the 2000s, OMSI stopped running the pre-show demonstration, instead starting the film directly. They stopped explaining the projectionist's work in preparing the show, and as they shifted their schedule towards direct repetition of one feature, there was less for the projectionist to do anyway. It became just another museum theater, so it's no wonder that they replaced it with just another museum theater: a generic big-screen setup with the exceptionally dull name of "Empirical Theater." From time to time, there have been whispers of a resurgence of 70mm film. Oppenheimer, for example, was distributed to a small number of theaters in this giant of film formats: 53 reels, 11 miles, 600 pounds of film. 
Even conventional IMAX is too costly for the modern theater industry, though. Omnimax has fallen completely by the wayside, with the few remaining dome operators doomed to recycling the same films with a sprinkling of newer reformatted features. It is hard to imagine a collective of science museums sending another film camera to space. Omnimax poses a preservation challenge in more ways than one. Besides the lack of documentation on Omnimax theaters and films, there are precious few photographs of Omnimax theaters and even fewer videos of their presentations. Of course, the historian suffers where Madison Square Garden hopes to succeed: the dome theater is perhaps the ultimate in location-based entertainment. Photos and videos, represented on a flat screen, cannot reproduce the experience of the Omnimax theater. The 180 horizontal degrees of screen, the sound that was always a little too loud, in no small part to mask the sound of the projector that made its own racket in the middle of the seating. You had to be there.

IMAGES: Omnimax projection room at OMSI, Flickr user truk. Omnimax dome with work lights on at MSI Chicago, Wikimedia Commons user GualdimG. Omnimax projector at St. Louis Science Center, Flickr user pasa47.

[1] I don't have extensive information on pricing, but I know that in the 1960s an "economy" Spitz came in at over $30,000 (~10x that much today).

[2] Pink Floyd's landmark album The Dark Side of the Moon debuted at a release event held at the London Planetarium. This connection between Pink Floyd and planetaria, apparently much disliked by the band itself, has persisted to the present day. Several generations of Pink Floyd laser shows have been licensed by science museums around the world, and must represent by far the largest success of fixed-installation laser projection.

[3] Are you starting to detect a theme with these Expos? The World's Fairs, including in their various forms as Expos, were long one of the main markets for niche film formats. For any given weird projection format you run into, there's a decent chance that it was originally developed for some short film for an Expo. Keep in mind that it's the nature of niche projection formats that they cannot easily be shown in conventional theaters, so they end up coupled to these crowd events where a custom venue can be built.

[4] The Smithsonian Institution started looking for an exciting new theater in 1970. As an example of the various niche film formats at the time, the Smithsonian considered a dome (presumably Omnimax), Cinerama (a three-projector ultrawide system), and Circle-Vision 360 (known mostly for the few surviving Expo films at Disney World's EPCOT) before settling on IMAX. The Smithsonian theater, first planned for the Smithsonian Museum of Natural History before being integrated into the new National Air and Space Museum, was tremendously influential on the broader world of science museum films. That is perhaps an understatement: it is sometimes credited with popularizing IMAX in general, and the newspaper coverage the new theater received throughout North America lends credence to the idea. It is interesting, then, to imagine how different our world would be if they had chosen Circle-Vision. "Captain America: Brave New World" in Circle-Vision 360.
Sometimes I think I should pivot my career to home automation critic, because I have many opinions on the state of the home automation industry---and they're pretty much all critical. Virtually every time I bring up home automation, someone says something about the superiority of the light switch. Controlling lights is one of the most obvious applications of home automation, and there is a roughly century long history of developments in light control---yet, paradoxically, it is an area where consumer home automation continues to struggle. An analysis of how and why billion-dollar tech companies fail to master the simple toggling of lights in response to human input will have to wait for a future article, because I will have a hard time writing one without descending into incoherent sobbing about the principles of scene control and the interests of capital. Instead, I want to just dip a toe into the troubled waters of "smart lighting" by looking at one of its earliest precedents: low-voltage lighting control. A source I generally trust, the venerable "old internet" website Inspectapedia, says that low-voltage lighting control systems date back to about 1946. The earliest conclusive evidence I can find of these systems is a newspaper ad from 1948, but let's be honest, it's a holiday and I'm only making a half effort on the research. In any case, the post-war timing is not a coincidence. The late 1940s were a period of both rapid (sub)urban expansion and high copper prices, and the original impetus for relay systems seems to have been the confluence of these two. But let's step back and explain what a relay or low-voltage lighting control system is. First, I am not referring to "low voltage lighting" meaning lights that run on 12 or 24 volts DC or AC, as was common in landscape lighting and is increasingly common today for integrated LED lighting. Low-voltage lighting control systems are used for conventional 120VAC lights. In the most traditional construction, e.g. in the 1940s, lights would be served by a "hot" wire that passed through a wall box containing a switch. In many cases the neutral (likely shared with other fixtures) went directly from the light back to the panel, bypassing the switch... running both the hot and neutral through the switch box did not become conventional until fairly recently, to the chagrin of anyone installing switches that require a neutral for their own power, like timers or "smart" switches. The problem with this is that it lengthens the wiring runs. If you have a ceiling fixture with two different switches in a three-way arrangement, say in a hallway in a larger house, you could be adding nearly 100' in additional wire to get the hot to the switches and the runner between them. The cost of that wiring, in the mid-century, was quite substantial. Considering how difficult it is to find an employee to unlock the Romex cage at Lowes these days, I'm not sure that's changed that much. There are different ways of dealing with this. In the UK, the "ring main" served in part to reduce the gauge (and thus cost) of outlet wiring, but we never picked up that particular eccentricity in the US (for good reason). In commercial buildings, it's not unusual for lighting to run on 240v for similar reasons, but 240v is discouraged in US residential wiring. Besides, the mid-century was an age of optimism and ambition in electrical technology, the days of Total Electric Living. Perhaps the technology of the relay, refined by so many innovations of WWII, could offer a solution. 
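To put that wiring argument in rough numbers, here is a quick back-of-the-envelope sketch in Python. Every distance and the per-foot price below are invented round figures for illustration only; they are not measurements from the article or period pricing data.

```python
# Rough arithmetic behind the "nearly 100 feet of extra wire" claim for a
# three-way circuit. All distances and the price per foot are hypothetical
# round numbers chosen for illustration, not period-accurate figures.
runs_ft = {
    "panel down and over to the first switch (hot)": 35,
    "three-way runner between the two switches": 40,
    "switched hot back up to the ceiling fixture": 25,
}

extra_ft = sum(runs_ft.values())
assumed_price_per_ft = 0.04  # hypothetical mid-century price, dollars per foot
print(f"{extra_ft} ft of extra wire, roughly ${extra_ft * assumed_price_per_ft:.2f}")
```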
Switch wiring also had to run through wall cavities, an irritating requirement in single-floor houses where much of the lighting wiring could be contained to the attic. The wiring of four-way and other multi-switch arrangements could become complex and require a lot more wall runs, discouraging builders from providing switches in the most convenient places. What if relays also made multiple switches significantly easier to install and relocate? You probably get the idea. In a typical low-voltage lighting control system, a transformer provides a low voltage like 24VAC, much the same as used by doorbells. The light switches simply toggle the 24VAC control power to the coils of relays. Some (generally older) systems powered the relay continuously, but most used latching relays. In this case, all light switches are momentary, with an "on" side and an "off" side. This could be a paddle that you push up or down (much like a conventional light switch), a bar that you push the left or right sides of, or a pair of push buttons. In most installations, all of the relays were installed together in a single enclosure, usually in the attic where the high-voltage wiring to the actual lights would be fairly short. The 24VAC cabling to the switches was much smaller gauge, and depending on the jurisdiction might not require any sort of license to install. Many systems had enclosures with separate high voltage and low voltage components, or mounted the relays on the outside of an enclosure such that the high voltage wiring was inside and low voltage outside. Both arrangements helped to meet code requirements for isolating high and low voltage systems and provided a margin of safety in the low voltage wiring. That provided additional cost savings as well; low voltage wiring was usually installed without any kind of conduit or sheathed cable. By 1950, relay lighting controls were making common appearances in real estate listings. A feature piece on the "Melody House," a builder's model home, in the Tacoma News Tribune reads thus: "Newest features in the house are the low voltage touch plate and relay system lighting controls, with wide plates instead of snap buttons---operated like the stops of a pipe organ, with the merest flick of a finger." The comparison to a pipe organ is interesting, first in its assumption that many readers were familiar with typical organ stops. Pipe organs were, increasingly, one of the technological marvels of the era: while the concept of the pipe organ is very old, this same era saw electrical control systems (replete with relays!) significantly reduce the cost and complexity of organ consoles. What's more, the tonewheel electric organ had become well-developed and started to find its way into homes. The comparison is also interesting because of its deficiencies. The Touch-Plate system described used wide bars, which you pressed the left or right side of---you could call them momentary SPDT rocker switches if you wanted. There were organs with similar rocker stops, but I do not think they were common in 1950. My experience is that such rocker switch stops usually indicate a fully digital control system, where they make momentary action unobtrusive and avoid state synchronization problems. I am far from an expert on organs, though, which is why I haven't yet written about them. If you have a guess at which type of pipe organ console our journalist was familiar with, do let me know. 
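Setting the organ comparison aside, here is a minimal sketch, in Python, of the latching-relay arrangement described above, along with the diode-isolated "scene" trick the article gets to a little later. The relay and circuit names are invented for illustration, and this models no particular manufacturer's hardware.

```python
# A toy model of a low-voltage relay lighting system: momentary wall switches
# pulse the "on" or "off" coil of a latching relay in the attic enclosure, and
# the relay's contacts switch the actual 120 VAC lighting circuit.
class LatchingRelay:
    def __init__(self, name: str):
        self.name = name
        self.closed = False  # True = light on

    def pulse_on(self):      # momentary 24 VAC pulse on the "on" coil
        self.closed = True

    def pulse_off(self):     # momentary 24 VAC pulse on the "off" coil
        self.closed = False


relays = {name: LatchingRelay(name) for name in ["entry", "hall", "kitchen", "porch"]}

# Any number of momentary switches can pulse the same relay, which is why
# generous n-way switching was cheap: each extra switch is just more thin wire.
relays["hall"].pulse_on()   # paddle pressed "on" at the front door
relays["hall"].pulse_on()   # pressing "on" again at the far end is harmless
relays["hall"].pulse_off()  # "off" side pressed at the bedroom door

# The "scene" trick described later in the article: with DC control power, one
# button can feed several coils through blocking diodes, and the diodes keep
# current from backfeeding out of a shared coil into another button's wiring.
scene_goodnight = {"entry": "off", "hall": "off", "kitchen": "off", "porch": "on"}

def press_scene(scene):
    for name, side in scene.items():
        if side == "on":
            relays[name].pulse_on()
        else:
            relays[name].pulse_off()

press_scene(scene_goodnight)
print({name: r.closed for name, r in relays.items()})
# {'entry': False, 'hall': False, 'kitchen': False, 'porch': True}
```

The point of the model is only that the state lives in the relay rather than in the switch; in the real systems, the "pulse" is simply control power applied to a coil for as long as the button is held.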
Touch-Plate seems to have been one of the first manufacturers of these systems, although I can't say for sure that they invented them. Interestingly, Touch-Plate is still around today, but their badly broken WordPress site ("Welcome to the new touch-plate.com" despite it actually being touchplate.com) suggests they may not do much business. After a few pageloads their WordPress plugin WAF blocked me for "exceed[ing] the maximum number of page not found errors per minute for humans." This might be related to my frustration that none of the product images load. It seems that the Touch-Plate company has mostly pivoted to reselling imported LED lighting (touchplateled.com), so I suppose the controls business is withering on the vine. The 1950s saw a proliferation of relay lighting control brands, with GE introducing a particularly popular system with several generations of fixtures. Kyle Switch Plates, who sell replacement switch plates (what else?), list options for Remcon, Sierra, Bryant, Pyramid, Douglas, and Enercon systems in addition to the two brands we have met so far. As someone who pays a little too much attention to light switches, I have personally seen four of these brands, three of them still in use and one apparently abandoned in place. Now, you might be thinking that simply economizing wiring by relocating the switches does not constitute "home automation," but there are other features to consider. For one, low-voltage light control systems made it feasible to install a lot more switches. Houses originally built with them often go a little wild with the n-way switching, every room providing lightswitches at every door. But there is also the possibility of relay logic. From the same article: The necessary switches are found in every room, but in the master bedroom there is a master control panel above the bed, from where the house and yard may be flooded with instant light in case of night emergency. Such "master control panels" were a big attraction for relay lighting, and the finest homes of the 1950s and 1960s often displayed either a grid of buttons near the head of the master bed, or even better, a GE "Master Selector" with a curious system of rotary switches. On later systems, timers often served as auxiliary switches, so you could schedule exterior lights. With a creative installer, "scenes" were even possible by wiring switches to arbitrary sets of relays (this required DC or half-wave rectified control power and diodes to isolate the switches from each other). Many of these relay control systems are still in use today. While they are quite outdated in a certain sense, the design is robust and the simple components mean that it's usually not difficult to find replacement parts when something does fail. The most popular system is the one offered by GE, using their RR series relays (RR3, RR4, etc., to the modern RR9). That said, GE suggests a modernization path to their LightSweep system, which is really a 0-10v analog dimming controller that has the add-on ability to operate relays. The failure modes are mostly what you would expect: low voltage wiring can chafe and short, or the switches can become stuck. This tends to cause the lights to stick on or off, and the continuous current through the relay coil often burns it out. The fix requires finding the stuck switch or short and correcting it, and then replacing the relay. One upside of these systems that persists today is density: the low voltage switches are small, so with most systems you can fit 3 per gang. 
Another is that they still make N-way switching easier. There is arguably a safety benefit, considering the reduction in mains-voltage wire runs. Yet we rarely see such a thing installed in homes newer than around the '80s. I don't know that I can give a definitive explanation of the decline of relay lighting control, but reduced prices for copper wiring were probably a main factor. The relays added a failure point, which might lead to a perception of unreliability, and the declining familiarity of electricians means that installing a relay system could be expensive and frustrating today. What really interests me about relay systems is that they weren't really replaced... the idea just went away. It's not like modern homes are providing a master control panel in the bedroom using some alternative technology. I mean, some do, those with prices in the eight digits, but you'll hardly ever see it. That gets us to the tension between residential lighting and architectural lighting control systems. In higher-end commercial buildings, and in environments like conference rooms and lecture halls, there's a well established industry building digital lighting control systems. Today, DALI is a common standard for the actual lighting control, but if you look at a range of existing buildings you will find everything from completely proprietary digital distributed dimming to 0-10v analog dimming to central dimmer racks (similar to traditional theatrical lighting). Relay lighting systems were, in a way, a nascent version of residential architectural lighting control. And the architectural lighting control industry continues to evolve. If there is a modern equivalent to relay lighting, it's something like Lutron QSX. That's a proprietary digital lighting (and shade) control system, marketed for both residential and commercial use. QSX offers a wide range of attractive wall controls, tight integration to Lutron's HomeSense home automation platform, and a price tag that'll make your eyes water. Lutron has produced many generations of these systems, and you could make an argument that they trace their heritage back to the relay systems of the 1940s. But they're just priced way beyond the middle-class home. And, well, I suppose that requires an argument based on economics. Prices have gone up. Despite tract construction being a much older idea than people often realize, it seems clear that today's new construction homes have been "value engineered" to significantly lower feature and quality levels than those of the mid-century---but they're a lot bigger. There is a sort of maxim that today's home buyers don't care about anything but square footage, and if you've seen what Pulte or D. R. Horton are putting up... well, I never knew that 3,000 sq ft could come so cheap, and look it too. Modern new-construction homes just don't come with the gizmos that older ones did, especially in the '60s and '70s. Looking at the sales brochure for a new development in my own Albuquerque ("Estates at La Cuentista"), besides 21st century suburbanization (Gated Community! "East Access to Paseo del Norte" as if that's a good thing!) most of the advertised features are "big." I'm serious! 
If you look at the "More Innovation Built In" section, the "innovations" are a home office (more square footage), storage (more square footage), indoor and outdoor gathering spaces (to be fair, only the indoor ones are square footage), "dedicated learning areas" for kids (more square footage), and a "basement or bigger garage" for a home gym (more square footage). The only thing in the entire innovation section that I would call a "technical" feature is water filtration. You can scroll down for more details, and you get to things like "space for a movie room" and a finished basement described eight different ways. Things were different during the peak of relay lighting in the '60s. A house might only be 1,600 sq ft, but the builder would deck it out with an intercom (including multi-room audio of a primitive sort), burglar alarm, and yes, relay lighting. All of these technologies were a lot newer and people were more excited about them; I bring up Total Electric Living a lot because of an aesthetic obsession, but it was a large-scale advertising and partnership campaign by the electrical industry (particularly Westinghouse) that gave builders additional cross-promotion if they included all of these bells and whistles. Remember, that was when people were watching those old videos about the "kitchen of the future." What would a 2025 "Kitchen of the Future" promotional film emphasize? An island bigger than my living room and a nook for every meal, I assume. Features like intercoms and even burglar alarms have become far less common in new construction, and even if they were present I don't think most buyers would use them. But that might seem a little odd, right, given the push towards home automation? Well, built-in home automation options have existed for longer than any of today's consumer solutions, but "built in" is a liability for a technology product. There are practical reasons, in that built-in equipment is harder to replace, but there's also a lamer commercial reason. Consumer technology companies want to sell their products like consumer technology, so they've recontextualized lighting control as "IoT" and "smart" and "AI" rather than something an electrician would hook up. While I was looking into relay lighting control systems, I ran into an interesting example: the Lutron Lu Master Lumi 5. What a name! Lutron loves naming things like this. The Lumi 5 is a 1980s-era product with essentially the same features as a relay system, but architected in a much stranger way. It is, essentially, five three-way switches in a box with remote controls. That means that each of the actual light switches in the house (which could also be dimmers) needs mains-voltage wiring, including a runner, back to the Lumi 5 "interface." Pressing a button on one of the Lutron wall panels toggles the state of the relay in the "interface" cabinet, toggling the light. But, since it's all wired as a three-way switch, toggling the physical switch at the light does the same thing. As is typical when combining n-way switches and dimming, the Lumi 5 has no control over dimmers. You can only dim a light up or down at the actual local control; the Lumi 5 can just toggle the dimmer on and off using the 3-way runner. The architecture also means that you have two fundamentally different types of wall panels in your house: local switches or dimmers wired to each light, and the Lu Master panels with their five buttons for the five circuits, along with "all on" and "all off." 
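Here is a minimal sketch of that wiring arrangement, assuming each circuit behaves like an ordinary three-way loop: the light is on whenever the local switch and the interface relay disagree, so flipping either end toggles it. The class and method names are mine; only the five-circuit count and the toggle-only behavior come from the description above.

```python
# A toy model of one Lu Master-style circuit: a standard three-way loop where
# the relay in the "interface" cabinet and the local switch (or dimmer) at the
# light are the two ends of the runner. The light is on when the two disagree,
# so toggling either end toggles the light.
class ThreeWayCircuit:
    def __init__(self, name: str):
        self.name = name
        self.local_switch = False     # the switch or dimmer wired at the light
        self.interface_relay = False  # the relay in the "interface" cabinet

    @property
    def light_on(self) -> bool:
        return self.local_switch != self.interface_relay  # classic 3-way XOR

    def press_panel_button(self):     # button on the five-circuit wall panel
        self.interface_relay = not self.interface_relay

    def flip_local_switch(self):      # someone uses the switch at the light
        self.local_switch = not self.local_switch


circuits = [ThreeWayCircuit(f"circuit {i + 1}") for i in range(5)]

circuits[0].flip_local_switch()    # turned on at the light itself
print(circuits[0].light_on)        # True
circuits[0].press_panel_button()   # and back off again from the wall panel
print(circuits[0].light_on)        # False
```

The XOR structure is also the reason the interface can only toggle a dimmer on and off through the runner rather than set its level.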
The Lumi 5 "interface" uses simple relay logic to implement a few more features. Five mains-voltage-level inputs can be wired to time clocks, so that you can schedule any combination(s) of the circuits to turn on and off. The manual recommends suitable time clock models, including one with an astronomical clock for sunrise/sunset. An additional input causes all five circuits to turn on; it's suggested for connection to an auxiliary relay on a burglar alarm to turn all of the lights on should the alarm be triggered. The whole thing is strange and fascinating. It is basically a relay lighting control system, like so many before it, but using a distinctly different wiring convention. I think the main reason for the odd wiring was to accommodate dimmers, an increasingly popular option in the 1980s that relay systems could never really contend with. It doesn't have the cost advantages of relay systems at all; it will definitely be more expensive! But it adds some features over the fancy Lutron switches and dimmers you were going to install anyway. The Lu Master is the transitional stage between relay lighting systems and later architectural lighting controls, and it also straddled the end of relay light control in homes. It gives an idea of where relay light control in homes might have evolved, had the whole technology not been doomed to the niche zone of conference centers and universities. If you think about it, the Lu Master fills the most fundamental roles of home automation in lighting: control over multiple lights in a convenient place, scheduling and triggers, and an emergency function. It only lacks scenes, which I think we can excuse considering that the simple technology it uses does not allow it to adjust dimmers. And all of that with no Node-RED in sight! Maybe that conveys what most frustrates me about the "home automation" industry: it is constantly reinventing the wheel, an oligopoly of tech companies trying to drag people's homes into their "ecosystem." They do so by leveraging the buzzword of the moment, from IoT to voice assistants to, I guess, AI now, to solve a basic set of problems that were pretty well solved at least as early as 1948. That's not to deny that modern home automation platforms have features that old ones don't. They are capable of incredibly sophisticated things! But realistically, most of their users want only very basic functionality: control in convenient places, basic automation, scenes. It wouldn't sting so much if all these whiz-bang general-purpose computers were good at those tasks, but they aren't. For the very most basic tasks, things like turning on and off a group of lights, major tech ecosystems like HomeKit provide a user experience that is significantly worse than the model home of 1950. You could install a Lutron system, and it would solve those fundamental tasks much better... for a much higher price. But it's not like Lutron uses all that money to be an absolute technical powerhouse, a center of innovation at the cutting edge. No, even the latest Lutron products are really very simple, technically. The technical leaders here, Google and Apple, are the companies that can't figure out how to make a damn light switch. The problem with modern home automation platforms is that they are too ambitious. They are trying to apply enormously complex systems to very simple tasks, and thus contaminating the simplest of electrical systems with all the convenience and ease of a Smart TV. 
Sometimes that's what it feels like this whole industry is doing: adding complexity while the core decays. From automatic programming to AI coding agents, video terminals to Electron, the scope of the possible expands while the fundamentals become more and more irritating. But back to the real point: I hope you learned about some cool light switches. Check out the Kyle Switch Plates reference and you'll start seeing these systems in buildings and homes, at least if you live in an area that built up during the era when they were common (the 1950s to the 1970s).
More in technology
We’re excited to welcome a new member to the Arduino Nano family – the Nano R4. Powered by the same RA4M1 microcontroller that’s at the core of the popular UNO R4 boards, this tiny-yet-mighty module is here to help you take your projects from prototype to product, smoothly and efficiently. If you’re already prototyping with […] The post Introducing the Arduino Nano R4: small in size, big on possibilities appeared first on Arduino Blog.
The cost effective solution to your computer needs for only £1,450
IoT (Internet of Things) devices can be very useful, but they do, by definition, require internet access. That’s easy enough when Wi-Fi® is available, and it is even possible to rely on LoRa® and cellular data connections to transmit data outside of urban areas. However, deploying an IoT device to a truly remote location has […] The post Dive into satellite IoT with the new Arduino-compatible Iridium Certus 9704 Development Kit appeared first on Arduino Blog.
Today I learned that Kagi uses Yandex as part of its search infrastructure, making up about 2% of their costs, and their CEO has confirmed that they do not plan to change that. To quote: "Yandex represents about 2% of our total costs and is only one of dozens of sources we use. To put this in perspective: removing any single source would degrade search quality for all users while having minimal economic impact on any particular region. The world doesn’t need another politicized search engine. It needs one that works exceptionally well, regardless of the political climate. That’s what we’re building." That is unfortunate, as I found Kagi to be a good product with an interesting take on utilizing LLMs with search that is kind of useful, but I cannot in good conscience continue to support it while they unapologetically finance a major company that has ties to the Russian government, the same country that is actively waging a war against Ukraine, a European country, for over 11 years, during which they’ve committed countless war crimes against civilians and military personnel. Kagi has the freedom to decide how they build the best search engine, and I have the freedom to use something else. Please send all your whataboutisms to /dev/null.