In a previous life, I worked for a location-based entertainment company, part of a huge team of people developing a location for Las Vegas, Nevada. It was COVID, a rough time for location-based anything, and things were delayed more than usual. Coworkers paid a lot of attention to another upcoming Las Vegas attraction, one with a vastly larger budget but still struggling to make schedule: the MSG (Madison Square Garden) Sphere. I will set aside jokes about it being a square sphere, but they were perhaps one of the reasons that it underwent a pre-launch rebranding to merely the Sphere. If you are not familiar, the Sphere is a theater and venue in Las Vegas. While it's known mostly for the video display on the outside, that's just marketing for the inside: a digital dome theater, with seating in a roughly 45-degree stadium layout facing a near hemisphere of video displays. It is a "near" hemisphere because the lower section is truncated to allow a flat floor, which serves as a stage for...


More from computers are bad

2025-07-06 secret cellular phone numbers

A long time ago I wrote about secret government telephone numbers, and before that, secret military telephone buttons. I suppose this is becoming a series. To be clear, the "secret" here is a joke, but more charitably I could say that it refers to obscurity rather than any real effort to keep them secret. Actually, today's examples really make this point: they're specifically intended to be well known, but are still pretty obscure in practice.

If you've been around for a while, you know how much I love telephone numbers. Here in North America, we have a system called the North American Numbering Plan (NANP) that has rigidly standardized telephone dialing practices since the middle of the 20th century. The US, Canada, and a number of Central American countries benefit from a very orderly system of area codes (more formally, numbering plan areas or NPAs) followed by a subscriber number written in the format NXX-XXXX (this is a largely NANP-centric notation for describing phone number patterns: N represents the digits 2-9, and X any digit). All of these NANP numbers reside under the country code 1, allowing at least theoretically seamless international dialing within the NANP community. It's really a pretty elegant system.

NANP is the way it is for many reasons, but it mostly reflects technical requirements of the telephone exchanges of the 1940s. This is more thoroughly explained in the link above, but one of the goals of NANP is to ensure that step-by-step (SxS) exchanges can process phone numbers digit by digit as they are dialed. In other words, it needs to be possible to navigate the decision tree of telephone routing using only the digits dialed so far. Readers with a computer science education might have some tidy way to describe this in terms of Chomsky or something, but I do not have a computer science education; I have an Information Technology education. That means I prefer flow charts to automata, and we can visualize a basic SxS exchange as a big tree. When you pick up your phone, you start at the root of the tree, and each digit dialed chooses the edge to follow. Eventually you get to a leaf that is hopefully someone's telephone, but at no point in the process does any node benefit from the context of digits dialed before or after, or of how many total digits you dial. This creates all kinds of practical constraints, and is the reason, for example, that we tend to write ten-digit phone numbers with a "1" before them.

That requirement was in some ways long-lived (the last SxS exchange on the public telephone network was retired in 1999), and in other ways not so long-lived... "common control" telephone exchanges, which did store the entire number in electromechanical memory before making a routing decision, were already in use by the time the NANP scheme was adopted. They just weren't universal, and a common nationwide numbering scheme had to be designed to accommodate the lowest common denominator.

This discussion so far is all applicable to the land-line telephone. There is a whole telephone network that is, these days, almost completely separate but interconnected: cellular phones. Early cellular phones (where "early" extends into CDMA and early GSM deployments) were much more closely attached to the "POTS" (Plain Old Telephone System). AT&T and Verizon both operated traditional telephone exchanges, for example 5ESS, that routed calls to and from their customers.
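
To make the constraint concrete, here is a minimal sketch in Python (with made-up prefixes and office names, not real NANP routes) of what digit-by-digit routing demands: the switch must be able to commit to a route from the digits seen so far, with no lookahead, so the numbering plan has to be prefix-free.

```python
# Toy model of step-by-step (SxS) routing. A real SxS office is a train of
# electromechanical selectors; a dict of prefixes is enough to show the
# constraint that no route may be a prefix of another.

class SxSExchange:
    def __init__(self):
        self.routes = {}  # dialed-digit prefix -> destination

    def add_route(self, prefix, destination):
        for i in range(1, len(prefix)):
            if prefix[:i] in self.routes:
                raise ValueError(f"{prefix} unreachable: {prefix[:i]} already routes")
        for existing in self.routes:
            if existing.startswith(prefix):
                raise ValueError(f"{prefix} would shadow existing route {existing}")
        self.routes[prefix] = destination

    def dial(self, number):
        dialed = ""
        for digit in number:
            dialed += digit
            if dialed in self.routes:
                # The switch commits as soon as a route matches; later digits
                # are passed downstream, never reconsidered.
                return self.routes[dialed]
        return None  # no selector absorbed the digits: number unroutable

exchange = SxSExchange()
exchange.add_route("1", "toll selector")      # leading 1 claims the first digit...
exchange.add_route("555", "local office 555")
print(exchange.dial("5551234"))               # -> "local office 555"
# exchange.add_route("15", ...) would raise: "1" already consumed that path
```

This is also why N excludes 0 and 1 in the NXX notation: a local number beginning with 1 would collide with the toll prefix, and one beginning with 0 with the operator.
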
These telephone exchanges have become increasingly irrelevant to mobile telephony, and you won't find a T-Mobile ESS or DMS anywhere. All US cellular carriers have adopted the GSM technology stack, and GSM has its own definition of the switching element, one that can be, and often is, fulfilled by an AWS EC2 instance running RHEL 8. Calls between cell phones today, even between different carriers, are often connected completely over IP and never touch a traditional telephone exchange. The point is that not only is telephone number parsing less constrained on today's telephone network; in the case of cellular phones, it is outright required to be more flexible.

GSM also defines the properties of phone numbers, and it is a very loose definition. Keep in mind that GSM is deeply European, and was built from the start to accommodate the wide variety of dialing practices found in Europe. This manifests in ways big and small; one of the notable small ways is that the European emergency number 112 works just as well as 911 on US cell phones, because GSM dictates special handling for emergency numbers and dictates that 112 is one of those numbers. In fact, the definition of an "emergency call" on modern GSM networks is a call requesting the SIP URI "urn:service:sos".

This reveals that dialed number handling on cellular networks is fundamentally different. When you dial a number on your cellular phone, the phone collects the entire number and then applies a series of rules to determine what to do, often leading to a GSM call setup process in which the entire number, along with various flags, is sent to the network. This is all software-defined. In the immortal words of our present predicament, "everything's computer." The bottom line is that, within certain regulatory boundaries and requirements set by GSM, cellular carriers can do pretty much whatever they want with phone numbers. Obviously numbers need to be NANP-compliant to be carried by the POTS, but many modern cellular calls aren't carried by the POTS; they are completed entirely within cellular carrier systems through their own interconnection agreements. This freedom allows all kinds of things, like "HD voice" (cellular calls connected without the narrow filtering and companding used by the traditional network), and a lot of flexibility in dialing.

Most people already know about some weird cellular phone numbers. For example, you can dial *#06# to display your phone's various serial numbers. This is an example of a GSM MMI (man-machine interface) code: phone numbers that are handled entirely within your device but nonetheless defined as dialable numbers by GSM, for compatibility with even the most basic flip phones. GSM also defines USSD numbers, for unstructured supplementary service data, which set up connections to the network that can be used in any arbitrary way the network pleases. Older prepaid phone services used to implement balance check and top-up operations using USSD numbers, and they're also often used in ways similar to Vertical Service Codes (VSCs) on the landline network, to control carrier features. USSDs also enabled the first forms of mobile data, which involved a "special telephone call" to a USSD number in order to download a cut-down form of ESPN in a weird mobile-specific markup language.

Now, put yourself in the shoes of an enterprising cellular network. The flexibility of processing phone numbers as you please opens up all kinds of possibilities. Innovative services! Customer convenience! Sell them for money!
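
As a rough sketch of how different this is from digit-by-digit routing, consider how a handset might classify a complete dialed string before doing anything with it. The categories below are the GSM concepts just described; the exact patterns are simplified illustrations, not the actual 3GPP matching rules.

```python
import re

# Loose sketch: a GSM handset collects the whole dialed string, then decides
# what kind of thing it is. Patterns here are simplified for illustration.

EMERGENCY_NUMBERS = {"112", "911"}  # GSM mandates these regardless of carrier

def classify(dialed: str) -> str:
    if dialed in EMERGENCY_NUMBERS:
        return "emergency call (urn:service:sos on modern networks)"
    if dialed == "*#06#":
        return "MMI code handled on the device (show IMEI/serials)"
    if re.fullmatch(r"[*#]+\d+(\*\d+)*#", dialed):
        return "USSD session with the network"
    return "ordinary call setup (full number plus flags sent to the network)"

for n in ["112", "*#06#", "*123#", "15055551234"]:
    print(n, "->", classify(n))
```

Note that the decision happens all at once, in software, after dialing is complete; nothing about it has to fit a prefix-free tree.
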
Oh my god, sell them for money! It seems like this started with customer service. It is an old practice, dating to the Bell operating companies, to have special short phone numbers to reach the telephone company itself. The details varied by company (often based on technical constraints in their switching system), but a common early setup was that dialing 114 got you the repair service operator, to report a problem with your phone line. These numbers were usually listed in the front of the phone book, and for the phone company, the fact that they were "special" or nonstandard was sort of a feature, since they could ensure that they were always routed within the same switch. The selection of "911" as the US emergency number seems rooted in this practice, as later on several major telcos used the "N11" numbers for their service lines. This became immortalized in the form of 611, which will get you customer service for most phone carriers.

So cellular companies did the same, allocating themselves "special" numbers for various service lines. Verizon offers #PMT to make a payment. Naturally, there's also room for upsell services: #ROAD for roadside assistance on Verizon. The odd thing about these phone numbers is that there's really no standard involved; they're just the arbitrary practices of specific cellular companies. The term "mobile dial code" (MDC) is usually used to refer to them, although that term seems to have arisen organically rather than by intent. Remember, these aren't a real thing! The carriers just make them up, all on their own. The only real constraint on MDCs is that they need to not collide with any POTS number, which is most easily achieved by prefixing them with some combination of * and #, and usually not "*#" because it's referenced by the GSM standard for MMI.

MDCs are available for purchase, but the terms don't seem to be public, and you have to negotiate separately with each carrier. That's because there is no centralization. This is where MDCs stand in clear contrast to the better known SMS Short Code, or SMSSC. Those are the five or six-digit numbers widely used in advertising campaigns. SMSSCs are centrally managed by the SMS Short Code Registry, which is a function of the industry association CTIA but contracted to iConectiv. iConectiv is sort of like the SAIC of the communications industry: a huge company that dates back to the Bell System (where it became Bellcore after divestiture), that no one has heard of, but that is nonetheless a critically important part of the telephone system. Providers that want to have an SMSSC (typically on behalf of one of their customers) pay a fee, and usually recoup it from the end user. That fee is not cheap; typical end-user rates for an SMSSC run over $10k a year. But at least it's straightforward, and your SMS A2P or marketing company can make it happen for you.

MDCs have no such centralization, no standardized registration process. You negotiate with each carrier individually. That means it's pretty difficult to put together "complete coverage" on an MDC by getting the same one assigned by every major carrier. And this is one of those areas where "good enough" is seldom good enough; people get pissed off when something you advertise doesn't work. Putting a phone number that only works for some people on a billboard can quickly turn into an expensive embarrassment, so companies will be wary of using an MDC in marketing if they don't feel really confident that it works for the vast majority of cellphone users.

Because of this fragmentation, adoption of MDCs for marketing purposes has been very low. The only going concern I know of is #250, operated by a company called Mobile Direct Response. The premise of #250 is very simple: users call #250 and are greeted by a simple IVR. They say a keyword, and they're either forwarded to the phone number of the business that paid for the keyword, or they receive a text message response with more information. #250 is specifically oriented towards radio advertising, where asking people to remember a ten-digit phone number is, well, asking a lot. It's also made the jump to podcast advertising. #250 is priced in a very radio-centric way: by the keyword, and by the size of the market area in which the advertisement that gives the keyword is played.

#250 was founded by Dave Robinett, who used to work on marketing at Sprint, presumably where he became aware that these MDCs were a possibility. He has negotiated for #250 to work across a substantial list of cellular carriers in the US and Canada, providing almost complete coverage. That wasn't easy; Robinett said in an interview that it took five years to get AT&T, T-Mobile, Verizon, and Sprint on board.

#250 does not appear to be especially widely used. For one, the website is a little junky, with some broken links and other indications that it is not backed by a large communications department. Dave Robinett may be the entire company. They've been operating since at least 2017, and I've only ever heard it in an ad once---a podcast ad that ended with "Call #250 and say I need a dentist." One thing you quickly notice when you look into telephone marketing is that dentists are apparently about 80% of the market. He does mention success with shows like "Rush, Hannity, and Levin," so it's safe to say that my radio habits are a little different from Robinett's.

That's not to say that #250 is a failure. In the same interview, Robinett says that the company pays his mortgage and, well, that ain't too bad. But it's also nothing like the widespread adoption of SMSSCs. One wonders if the limitation of MDCs to one company, one so focused on radio marketing, limits their potential. It might really open things up if some company created a registration service and prenegotiated terms with carriers, so that companies could pick up their own MDCs to use as they please.

Well, yeah, someone's trying. Around 2006, a recently-founded mobile marketing company called Zoove announced StarStar dialing. I'm a little unclear on Zoove's history. It seems that they were originally founded as Teleractive in Rhode Island, as an SMS short code keyword response service, and after an infusion of VC cash moved to Palo Alto and started looking for something bigger. In 2016, they were acquired by a call center technology company called Mindful. Or maybe Zoove sold the StarStar business to Mindful? Stick a pin in that.

I don't love the name StarStar, which has shades of Spacestar Ordering. But it refers to their chosen MDC prefix, two stars. Well, that point is a little odd: according to their marketing material you can also get numbers with a # prefix or a single * prefix, but all of the examples use **. I would say that, in general, StarStar has it a little less together than #250. Their website is kind of broken; it only loads intermittently, and some of the images are missing. At one point it uses the term "CADC" to describe these numbers, but I can't find that expanded anywhere.

Plus, the "About" page refers repeatedly to Virtual Hold Technologies, which was renamed VHT in 2018 and Mindful in 2022. It really feels like the vestigial website of a dead company.

I know about StarStar because, for a time, trucks from the moving franchise All My Sons prominently bore the number **MOVE on the side. Indeed, this is still one of the headline examples on the StarStar website, but it doesn't work. I just get a loud click and then the call ends. And it's not that StarStar doesn't work with my mobile carrier, because StarStar's own number **MOBILE does connect to their IVR. That IVR promises that a representative will speak with me shortly, plays about five seconds of hold music, and then dumps me on a voicemail system. Despite StarStar numbers apparently basically working, I'm finding that most of the examples they give on their website won't even connect. Perhaps results will vary depending on the mobile network.

Well, perhaps not that much is lost. StarStar was founded by Steve Doumar, a serial telephone marketing entrepreneur with a colorful past founding various inbound call center companies. Perhaps his most famous venture is R360, a "lead acquisition" service memorialized by headlines like "Drug treatment referral service took advantage of addictions to make a quick buck" from the Federal Trade Commission. He's one of those guys whose bio involves founding a new company every two years, which he has to spin as entrepreneurial dynamism rather than some combination of fleeing dissatisfied investors and fleeing angered regulators. Today he runs whisp.io, a "customer activation platform" that appears to be a glorified SMS advertising service featuring something ominously called "simplified opt-in."

Whisp has a YouTube channel, which features the 48-second gem "Fun Fact We Absolutely Love About Steve Doumar." Description: "Our very own CEO, Steve Doumar is a kind and generous person who has given back to the community in many ways; this man is absolutely a man with a heart of gold." Do you want to know the fun fact? Yes you do! Here it is: "He is an incredible philanthropist. He loves helping other people. Every time I'm with him he comes up with new ways and new ideas to help other people. Which I think is amazing. And he doesn't brag about it, he doesn't talk about it a lot." Except he's got his CMO making a YouTube video about it?

From Steve Doumar's blog: "American entrepreneur Ray Kroc expressed the importance of persisting in a busy world where everyone wants a bite of success. This man is no exception. An entrepreneur. A family man. A visionary. These are the many names of a man that has made it possible for opt-ins to be safe, secure, and accurate; Steve Doumar." I love this stuff, you just can't make it up.

I'm pretty sure what's going on here is just an SEO effort to outrank the FTC releases and other articles about the R360 case when you search for his name. It's only partially working; "FTC Hits R360 and its Owner With $3.8 Million Civil ..." still comes in at Google result #4 for "Steve Doumar," at least for me. But hey, #4 is better than #1.

Well, to be fair to StarStar, I don't think Steve Doumar has been involved for some years. But also to be fair, some of their current situation clearly dates to past behavior that is maybe less than savory. Zoove originally styled itself as "The National StarStar Registry," clearly trying to draw parallels to CTIA/iConectiv's SMSSC registry.

Their largest customer was evidently a company called Sumotext, which leased a number of StarStar numbers to offer an SMS and telephone marketing service. In 2016, Sumotext sued StarStar, Zoove, VHT (now Mindful), and a healthy list of other entities all involved in StarStar, including the intriguingly named StarSteve LLC. I'm not alone in finding the corporate history a little baffling; in a footnote on one ruling, the court expressed confusion about all the different names and opted to call them all Zoove.

In any case, Sumotext alleged that Zoove, StarSteve, and VHT all merged as part of a scheme to illegally monopolize the StarStar market by undercutting the companies that had been leasing the numbers, effectively giving VHT (Mindful) an exclusive ability to offer marketing services with StarStar numbers. The case didn't end up going anywhere for Sumotext; the jury found that Sumotext hadn't established a relevant market, which is a key element of a Sherman Act case. An appeal was made all the way to the Supreme Court, but they didn't take it up. What the case did do was publicize some pretty sketchy-sounding details, like the seemingly uncontested accusation that VHT got Sumotext's customer list from the registry database and used it to convert them all into StarSteve customers.

And yes, the Steve in StarSteve is Steve Doumar. As best I can tell, the story here is that Steve Doumar founded Zoove (or bought Teleractive and renamed it, or something?) to establish the National StarStar Registry, then founded a marketing company called StarSteve that resold StarStar numbers, then merged StarSteve and the National StarStar Registry together and cut off all of the other resellers. Apparently not a Sherman Act violation, but it sure is a bad look, and I wonder how much it contributed to the lack of adoption of the whole StarStar idea---especially given that Sumotext seems to have been responsible for most of that adoption, including the All My Sons deal for **MOVE. I wonder if All My Sons had to take **MOVE off of their trucks because of the whole StarSteve maneuver? That seems to be what happened.

Look, ten-digit phone numbers are hard to remember, that much is true. But as is, the "MDC" industry doesn't seem stable enough for advertising applications where the number needs to continue to work into the future. I think the #250 service is probably here to stay, but confined to the niche of audio advertising. StarStar raised at least $30 million in capital in the 2010s, but seems to have shot itself in the foot. StarStar owner VHT/Mindful, now acquired by Medallia, doesn't even mention StarStar as a product offering.

Hey, remember how Steve Doumar is such a great philanthropist? There are a lot of vestiges around of StarStar Inc., a nonprofit that made StarStar numbers available to charitable organizations. Their website, starstar.org, is now a Wix error page. You can find old articles about StarStar Me, also written **me, which sounds lewd but was a $3/mo offering that allowed customers to get a vanity short code (such as ** followed by their name)---the original form of StarStar, dating back to 2012 and the beginning of Zoove. In a press release announcing StarStar Me, Zoove CEO Joe Gillespie said: "With two-thirds of smartphone users having downloaded social networking apps to their phones, there's a rapidly growing trend in today's on-the-go lifestyle to extend our personal communications and identity into the digital realm via our mobile phones."

And somehow this leads to paying $3 to get StarStarred? I love it! It's so meaningless! And years later it would be StarStar Mobile, formerly Zoove, by VHT, now known as Mindful, a Medallia company. Truly an inspiring story of industry, and just one little corner of the vast tapestry of phone numbers.

2025-06-19 hydronuclear testing

Some time ago, via a certain orange website, I came across a report about a mission to recover nuclear material from a former Soviet test site. I don't know what you're doing here; go read that instead. But it brought up a topic that I have only known very little about: hydronuclear testing.

One of the key reasons for the nonproliferation concern at Semipalatinsk was the presence of a large quantity of weapons-grade material. This created a substantial risk that someone would recover the material and either use it directly or sell it---either way giving a significant leg up on the construction of a nuclear weapon. That's a bit odd, though, isn't it? Material refined for use in weapons is scarce and valuable, and besides that, rather dangerous. It's uncommon to just leave it lying around, especially not hundreds of kilograms of it. This material was abandoned in place because the nature of the testing performed required that a lot of weapons-grade material be present, and made it very difficult to remove. As the Semipalatinsk document mentions in brief, similar tests were conducted in the US and led to a similar abandonment of special nuclear material at Los Alamos's TA-49. Today, I would like to give the background on hydronuclear testing---the what and why. Then we'll look specifically at LANL's TA-49 and the impact of the testing performed there.

First we have to discuss the boosted fission weapon. Especially in the 21st century, we tend to talk about "nuclear weapons" as one big category. The distinction between an "A-bomb" and an "H-bomb," for example, or between a conventional nuclear weapon and a thermonuclear weapon, is mostly forgotten. That's no big surprise: thermonuclear weapons have been around since the 1950s, so it's no longer a great innovation or escalation in weapons design.

The thermonuclear weapon was not the only post-WWII design innovation. At around the same time, Los Alamos developed a related concept: the boosted weapon. Boosted weapons were essentially an improvement in the efficiency of nuclear weapons. When the core of a weapon goes supercritical, the fission produces a powerful pulse of neutrons. Those neutrons cause more fission, the chain reaction that makes up the basic principle of the atomic bomb. The problem is that the whole process isn't fast enough: the energy produced blows the core apart before it's been sufficiently "saturated" with neutrons to completely fission. That leads to a lot of the fuel in the core being scattered, rather than actually contributing to the explosive energy.

In boosted weapons, a material that will undergo fusion is added to the mix, typically tritium and deuterium gas. The immense heat at the beginning of the supercritical stage causes the gas to undergo fusion, and it emits far more neutrons than the fissioning fuel does alone. The additional neutrons cause more fission to occur, improving the efficiency of the weapon. Even better, despite the theoretical complexity of driving a gas into fusion, the mechanics of this mechanism are actually simpler than the techniques used to improve yield in non-boosted weapons (pushers and tampers). The result is that boosted weapons produce a more powerful yield in comparison to the amount of fuel, and the non-nuclear components can be made simpler and more compact as well. This was a pretty big advance in weapons design, and boosting is now a ubiquitous technique. It came with some downsides, though. The big one is that very property of making supercriticality easier to achieve.

Early implosion weapons were remarkably difficult to detonate, requiring an extremely precisely timed detonation of the high explosive shell. While an inconvenience from an engineering perspective, the inherent difficulty of achieving a nuclear yield also provided a safety factor. If the high explosives detonated for some unintended reason, like being struck by cannon fire as a bomber was intercepted, or impacting the ground following an accidental release, it wouldn't "work right." Uneven detonation of the shell would scatter the core, rather than driving it into supercriticality. This property was referred to as "one point safety": a detonation at one point on the high explosive assembly should not produce a nuclear yield. While it has its limitations, it became one of the key safety principles of weapon design.

The design of boosted weapons complicated this story. Just a small fission yield, from a small fragment of the core, could potentially start the fusion process and trigger the rest of the core to detonate as well. In other words, weapon designers became concerned that boosted weapons would not have one point safety. As it turns out, two-stage thermonuclear weapons, which were being fielded around the same time, posed a similar set of problems.

The safety problems around more advanced weapon designs came to a head in the late '50s. Incidentally, so did something else: shifts in Soviet politics had given Khrushchev extensive power over Soviet military planning, and he was no fan of nuclear weapons. After some on-again, off-again dialog between the time's nuclear powers, the US and UK agreed to a voluntary moratorium on nuclear testing, which began in late 1958. For weapons designers this was, of course, a problem. They had planned to address the safety of advanced weapon designs through a testing campaign, and that was now off the table for the indefinite future. An alternative had to be developed, and quickly.

In 1959, the Hydronuclear Safety Program was initiated. By reducing the amount of material in otherwise real weapon cores, physicists realized, they could run a complete test of the high explosive system and observe its effects on the core without producing a meaningful nuclear yield. These tests were dubbed "hydronuclear," because of the desire to observe the behavior of the core as it flowed like water under the immense explosive force. While the test devices were in some ways real nuclear weapons, the nuclear yield would be vastly smaller than the high explosive yield: practically nil.

Weapons designers seemed to agree that these experiments complied with the spirit of the moratorium, being far from actual nuclear tests, but there was enough concern that Los Alamos went to the AEC and President Eisenhower for approval. They evidently agreed, and work started immediately to identify a suitable site for hydronuclear testing. While hydronuclear tests do not create a nuclear yield, they do involve a lot of high explosives and radioactive material. The plan was to conduct the tests underground, where the materials cast off by the explosion would be trapped. This would solve the immediate problem of scattering nuclear material, but it would obviously be impractical to recover the dangerous material once it was mixed with unstable soil deep below the surface. The material would stay, and it had to stay put!

The US Army Corps of Engineers, a center of expertise in hydrology because of their reclamation work, arrived in October 1959 to begin an extensive set of studies of the Frijoles Mesa site. This was an unused area near a good road but far on the east edge of the laboratory, well separated from the town of Los Alamos and pretty much anything else. More importantly, it was a classic example of northern New Mexican geology: high up on a mesa built of tuff and volcanic sediments, well-drained and extremely dry soil in an area that received little rain. One of the main migration paths for underground contaminants is their interaction with water, and specifically the tendency of many materials to dissolve into groundwater and flow with it towards aquifers. The Corps of Engineers drilled test wells, about 1,500' deep, and a series of 400' core samples. They found that on Frijoles Mesa, groundwater was over 1,000' below the surface, and that everything above was far from saturation. That means no mobility of the water, which is trapped in the soil. It's just about the ideal situation for putting something underground and having it stay.

Incidentally, this study would lead to the development of a series of new water wells for Los Alamos's domestic water supply. It also gave the green light for hydronuclear testing, and Frijoles Mesa was dubbed Technical Area 49 and subdivided into a set of test areas. Over the following three years, these test areas would see about 35 hydronuclear detonations, carried out at the bottom of shafts that were about 200' deep and 3-6' wide.

It seems that for most tests, the hole was excavated and lined, with a ladder installed to reach the bottom. Technicians worked at the bottom of the hole to prepare the test device, which was connected by extensive cabling to instrumentation trailers on the surface. When the "shot" was ready, the hole was backfilled with sand and sealed at the top with a heavy plate. The material on top of the device held everything down, preventing migration of nuclear material to the surface. The high explosives did, of course, destroy the test device and the cabling, but not before the instrumentation trailers had recorded a vast amount of data.

If you read these kinds of articles, you must know that the 1958 moratorium did not last. Soviet politics shifted again, France began nuclear testing, and negotiations over a more formal test ban faltered. US intelligence suspected that the Soviet Union had operated their nuclear weapons program at full tilt during the test ban, and the military suspected clandestine tests, although there was no evidence they had violated the treaty. That they continued their research efforts is guaranteed, of course; we did as well. Physicist Edward Teller, ever the nuclear weapons hawk, opposed the moratorium and pushed to resume testing. In 1961, the Soviet Union resumed testing, culminating in the test of the record-holding "Tsar Bomba," a 50-megaton device. The US resumed testing as well. The arms race was back on.

US hydronuclear testing largely ended with the resumption of full-scale testing. The same safety studies could be completed on real weapons, and those tests would serve other purposes in weapons development as well. Although post-moratorium testing included atmospheric detonations, the focus had shifted towards underground tests, and the 1963 Partial Test Ban Treaty restricted the US and USSR to underground tests only.

One wonders about the relationship between hydronuclear testing at TA-49 and the full-scale underground tests extensively performed at the NTS. Underground testing began in 1951 with Buster-Jangle Uncle, a test to determine how big a crater could be produced by a ground-penetrating weapon. Uncle wasn't really an underground test in the modern sense; the device was emplaced only 17 feet deep and still produced a huge cloud of fallout. It started a trend, though: a similar 1955 test was set 67 feet deep, producing a spectacular crater, before the 1957 Plumbbob Pascal-A was detonated at 486 feet and produced radically less fallout. 1957's Plumbbob Rainier was the first fully-contained underground test, set at the end of a tunnel excavated far into a hillside. This test emitted no fallout at all, proving the possibility of containment. Thus both the idea of emplacing a test device in a deep hole, and the fact that testing underground could contain all of the fallout, were known when the moratorium began in late 1958.

What's very interesting about the hydronuclear tests is the fact that technicians actually worked "downhole," at the bottom of the excavation. Later underground tests were prepared by assembling the test device at the surface, as part of a rocket-like "rack," and then lowering it to the bottom just before detonation. These techniques hadn't yet been developed in the '50s, thus the use of a horizontal tunnel for the first fully-contained test. Many of the racks used for underground testing were designed and built by LANL, but others (called "canisters," in an example of the tendency of the labs to not totally agree on things) were built by Lawrence Livermore. I'm not actually sure which of the two labs started building them first; a question for future research. It does seem likely that the hydronuclear testing at LANL advanced the state of the art in remote instrumentation and underground test design, facilitating the adoption of fully-contained underground tests in the following years.

During the three years of hydronuclear testing, shafts were excavated in four testing areas. It's estimated that the test program at TA-49 left about 40kg of plutonium and 93kg of enriched uranium underground, along with 92kg of depleted uranium and 13kg of beryllium (both toxic contaminants). Because of the lack of a nuclear yield, these tests did not create the caverns associated with underground testing. Material from the weapons likely spread within just a 10-20' area, as holes were drilled on a 25' grid and contamination from previous neighboring tests was encountered only once. The tests also produced quite a bit of ancillary waste: things like laboratory equipment, handling gear, cables, and tubing that are not directly radioactive but were contaminated with radioactive or toxic materials. In the fashion typical of the time, this waste was buried on site, often as part of the backfilling of the test shafts.

During the excavation of one of the test shafts, 2-M, in December 1960, contamination was detected at the surface. It seems that the geology allowed plutonium from a previous test to spread through cracks into the area where 2-M was being drilled. The surface soil contaminated by drill cuttings was buried back in hole 2-M, but this incident made Area 2 the most heavily contaminated part of TA-49. When hydronuclear testing ended in 1961, Area 2 was covered with 6' of gravel and 4-6" of asphalt to better contain any contaminated soil.

Several support buildings on the surface were also contaminated, most notably a building used as a radiochemistry laboratory to support the tests. An underground calibration facility, which allowed test equipment to be exposed to a contained source in an underground chamber, was also built at TA-49 and similarly contaminated by use with radioisotopes.

The Corps of Engineers continued to monitor the hydrology of the site from 1961 to 1970, and test wells and soil samples showed no indication that any contamination was spreading. In 1971, LANL established a new environmental surveillance department that assumed responsibility for legacy sites like TA-49. That department continued to sample wells and soil, and added air sampling. Monitoring of stream sediment downhill from the site was added in the '70s, as many of the contaminants involved can bind to silt and travel with surface water. This monitoring has not found any spread either.

That's not to say that everything is perfect. In 1975, a section of the asphalt pad over Area 2 collapsed, leaving a three-foot-deep depression. Rainwater pooled in the depression and then flowed through the gravel into hole 2-M itself, collecting in the bottom of the lining of the former experimental shaft. In 1976, the asphalt cover was replaced, but concerns remained about the water that had already entered 2-M. It could potentially travel out of the hole, continue downwards, and carry contamination into the aquifer around 800' below. Worse, a nearby core sample hole had picked up some water too, suggesting that the water was flowing out of 2-M through cracks and into nearby features. Since the core hole had a slotted liner, it would be easier for water to leave it and soak into the ground below.

In 1980, the water that had accumulated in 2-M was removed, by lifting about 24 gallons to the surface. While the water was plutonium-contaminated, it fell within acceptable levels for controlled laboratory areas. Further inspections through 1986 did not find additional water in the hole, suggesting that the asphalt pad was continuing to function correctly. Several other investigations were conducted, including the drilling of some additional sample wells and examination of other shafts in the area, to determine if there were other routes for water to enter the Area 2 shafts. Fortunately, no evidence of ongoing water ingress was found.

In 1986, TA-49 was designated a hazardous waste site under the Resource Conservation and Recovery Act. Shortly after, the site was evaluated under CERCLA to prioritize remediation. Scoring using the Hazard Ranking System determined a fairly low risk for the site, due to the lack of spread of the contamination and evidence suggesting that it was well contained by the geology. Still, TA-49 remains an environmental remediation site and now falls under a license granted by the New Mexico Environment Department. This license requires ongoing monitoring and remediation of any problems with the containment. For example, in 1991 the asphalt cover of Area 2 was found to have cracked and allowed more water to enter the sample wells. The covering was repaired once again, and investigations were made every few years from 1991 to 2015 to check for further contamination. Ongoing monitoring continues today. So far, Area 2 has not been found to pose an unacceptable risk to human health or a risk to the environment.

NMED permitting also covers the former radiological laboratory and calibration facility, and infrastructure related to them, like a leach field from drains. Sampling found some surface contamination, so the affected soil was removed and disposed of at a hazardous waste landfill where it will be better contained.

TA-49 was reused for other purposes after hydronuclear testing. These activities included high explosive experiments contained in metal "bottles," carried out in a metal-lined pit under a small structure called the "bottle house." Part of the bottle house site was later reused to build a huge hydraulic ram used to test steel cables to their failure strength. I am not sure of the exact purpose of this "Cable Test Facility," but given the timeline of its use, during the peak of underground testing, and its design, I suspect LANL used it as a quality control measure for the cable assemblies used to lower underground test racks into their shafts. No radioactive materials were involved in either of these activities, but high explosives and hydraulic oil can both be toxic, so both sites were investigated and received some surface soil cleanup.

Finally, the NMED permit covers the actual test shafts. These have received numerous investigations over the sixty years since the original tests, and significant contamination is present, as expected. However, that contamination does not seem to be spreading, and modeling suggests that it will stay that way. In 2022, the NMED issued Certificates of Completion releasing most of the TA-49 remediation sites without further environmental controls. The test shafts themselves, known to NMED by the punchy name of Solid Waste Management Unit 49-001(e), received a certificate of completion that requires ongoing controls to ensure that the land is used only for industrial purposes. Environmental monitoring of the TA-49 site continues under LANL's environmental management program and federal regulation, but TA-49 is no longer an active remediation project. The plutonium and uranium are just down there, and they'll have to stay.

2025-05-27 the first smart homes

Sometimes I think I should pivot my career to home automation critic, because I have many opinions on the state of the home automation industry---and they're pretty much all critical. Virtually every time I bring up home automation, someone says something about the superiority of the light switch. Controlling lights is one of the most obvious applications of home automation, and there is a roughly century-long history of developments in light control---yet, paradoxically, it is an area where consumer home automation continues to struggle. An analysis of how and why billion-dollar tech companies fail to master the simple toggling of lights in response to human input will have to wait for a future article, because I will have a hard time writing one without descending into incoherent sobbing about the principles of scene control and the interests of capital. Instead, I want to just dip a toe into the troubled waters of "smart lighting" by looking at one of its earliest precedents: low-voltage lighting control.

A source I generally trust, the venerable "old internet" website Inspectapedia, says that low-voltage lighting control systems date back to about 1946. The earliest conclusive evidence I can find of these systems is a newspaper ad from 1948, but let's be honest, it's a holiday and I'm only making a half effort on the research. In any case, the post-war timing is not a coincidence. The late 1940s were a period of both rapid (sub)urban expansion and high copper prices, and the original impetus for relay systems seems to have been the confluence of the two.

But let's step back and explain what a relay or low-voltage lighting control system is. First, I am not referring to "low voltage lighting," meaning lights that run on 12 or 24 volts DC or AC, as was common in landscape lighting and is increasingly common today for integrated LED lighting. Low-voltage lighting control systems are used for conventional 120VAC lights. In the most traditional construction, e.g. in the 1940s, lights would be served by a "hot" wire that passed through a wall box containing a switch. In many cases the neutral (likely shared with other fixtures) went directly from the light back to the panel, bypassing the switch... running both the hot and neutral through the switch box did not become conventional until fairly recently, to the chagrin of anyone installing switches that require a neutral for their own power, like timers or "smart" switches.

The problem with this is that it lengthens the wiring runs. If you have a ceiling fixture with two different switches in a three-way arrangement, say in a hallway in a larger house, you could be adding nearly 100' in additional wire to get the hot to the switches and the runner between them. The cost of that wiring, in the mid-century, was quite substantial. Considering how difficult it is to find an employee to unlock the Romex cage at Lowes these days, I'm not sure that's changed all that much.

There are different ways of dealing with this. In the UK, the "ring main" served in part to reduce the gauge (and thus cost) of outlet wiring, but we never picked up that particular eccentricity in the US (for good reason). In commercial buildings, it's not unusual for lighting to run on 240V for similar reasons, but 240V is discouraged in US residential wiring. Besides, the mid-century was an age of optimism and ambition in electrical technology, the days of Total Electric Living. Perhaps the technology of the relay, refined by so many innovations of WWII, could offer a solution.

Switch wiring also had to run through wall cavities, an irritating requirement in single-floor houses where much of the lighting wiring could be contained to the attic. The wiring of four-way and other multi-switch arrangements could become complex and require a lot more wall runs, discouraging builders from providing switches in the most convenient places. What if relays also made multiple switches significantly easier to install and relocate?

You probably get the idea. In a typical low-voltage lighting control system, a transformer provides a low voltage like 24VAC, much the same as used by doorbells. The light switches simply toggle the 24VAC control power to the coils of relays. Some (generally older) systems powered the relay continuously, but most used latching relays. In this case, all light switches are momentary, with an "on" side and an "off" side. This could be a paddle that you push up or down (much like a conventional light switch), a bar that you push the left or right side of, or a pair of two push buttons.

In most installations, all of the relays were installed together in a single enclosure, usually in the attic, where the high-voltage wiring to the actual lights would be fairly short. The 24VAC cabling to the switches was much smaller gauge, and depending on the jurisdiction might not require any sort of license to install. Many systems had enclosures with separate high voltage and low voltage components, or mounted the relays on the outside of an enclosure such that the high voltage wiring was inside and the low voltage outside. Both arrangements helped to meet code requirements for isolating high and low voltage systems and provided a margin of safety in the low voltage wiring. That provided additional cost savings as well; low voltage wiring was usually installed without any kind of conduit or sheathed cable.

By 1950, relay lighting controls were making common appearances in real estate listings. A feature piece on the "Melody House," a builder's model home, in the Tacoma News Tribune reads thus: "Newest features in the house are the low voltage touch plate and relay system lighting controls, with wide plates instead of snap buttons---operated like the stops of a pipe organ, with the merest flick of a finger."

The comparison to a pipe organ is interesting, first in its assumption that many readers were familiar with typical organ stops. Pipe organs were, increasingly, one of the technological marvels of the era: while the concept of the pipe organ is very old, this same era saw electrical control systems (replete with relays!) significantly reduce the cost and complexity of organ consoles. What's more, the tonewheel electric organ had become well-developed and started to find its way into homes.

The comparison is also interesting because of its deficiencies. The Touch-Plate system described used wide bars, which you pressed the left or right side of---you could call them momentary SPDT rocker switches if you wanted. There were organs with similar rocker stops, but I do not think they were common in 1950. My experience is that such rocker switch stops usually indicate a fully digital control system, where they make momentary action unobtrusive and avoid state synchronization problems. I am far from an expert on organs, though, which is why I haven't yet written about them. If you have a guess at which type of pipe organ console our journalist was familiar with, do let me know.

Touch-Plate seems to have been one of the first manufacturers of these systems, although I can't say for sure that they invented them. Interestingly, Touch-Plate is still around today, but their badly broken WordPress site ("Welcome to the new touch-plate.com," despite it actually being touchplate.com) suggests they may not do much business. After a few pageloads, their WordPress plugin WAF blocked me for "exceed[ing] the maximum number of page not found errors per minute for humans." This might be related to my frustration that none of the product images load. It seems that the Touch-Plate company has mostly pivoted to reselling imported LED lighting (touchplateled.com), so I suppose the controls business is withering on the vine.

The 1950s saw a proliferation of relay lighting control brands, with GE introducing a particularly popular system with several generations of fixtures. Kyle Switch Plates, who sell replacement switch plates (what else?), list options for Remcon, Sierra, Bryant, Pyramid, Douglas, and Enercon systems in addition to the two brands we have met so far. As someone who pays a little too much attention to light switches, I have personally seen four of these brands, three of them still in use and one apparently abandoned in place.

Now, you might be thinking that simply economizing on wiring by relocating the switches does not constitute "home automation," but there are other features to consider. For one, low-voltage light control systems made it feasible to install a lot more switches. Houses originally built with them often go a little wild with the n-way switching, every room providing light switches at every door. But there is also the possibility of relay logic. From the same article: "The necessary switches are found in every room, but in the master bedroom there is a master control panel above the bed, from where the house and yard may be flooded with instant light in case of night emergency."

Such "master control panels" were a big attraction for relay lighting, and the finest homes of the 1950s and 1960s often displayed either a grid of buttons near the head of the master bed or, even better, a GE "Master Selector" with a curious system of rotary switches. On later systems, timers often served as auxiliary switches, so you could schedule exterior lights. With a creative installer, "scenes" were even possible, by wiring switches to arbitrary sets of relays (this required DC or half-wave rectified control power, and diodes to isolate the switches from each other).

Many of these relay control systems are still in use today. While they are quite outdated in a certain sense, the design is robust, and the simple components mean that it's usually not difficult to find replacement parts when something does fail. The most popular system is the one offered by GE, using their RR series relays (RR3, RR4, etc., up to the modern RR9). That said, GE suggests a modernization path to their LightSweep system, which is really a 0-10V analog dimming controller that has the add-on ability to operate relays. The failure modes are mostly what you would expect: low voltage wiring can chafe and short, or the switches can become stuck. This tends to cause the lights to stick on or off, and the continuous current through the relay coil often burns it out. The fix requires finding the stuck switch or short and correcting it, and then replacing the relay. One upside of these systems that persists today is density: the low voltage switches are small, so with most systems you can fit three per gang.
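
The control logic is simple enough to sketch in a few lines. Here is a minimal model in Python (room names invented for illustration): wall switches are momentary and just pulse a latching relay's "on" or "off" coil, and a master panel is nothing more than one switch wired to many coils at once.

```python
# Sketch of latching-relay lighting control as described above.
# A latching relay holds its state with no coil power; a momentary wall
# switch pulses one coil; a master panel fans one pulse out to every relay.

class LatchingRelay:
    def __init__(self):
        self.closed = False  # latched state persists between pulses

    def pulse(self, side):
        self.closed = (side == "on")

relays = {room: LatchingRelay() for room in ("hall", "kitchen", "porch")}

def wall_switch(room, side):
    relays[room].pulse(side)  # an ordinary momentary switch at a doorway

def master_panel(side):
    for relay in relays.values():  # the bedside panel: every coil at once
        relay.pulse(side)

wall_switch("porch", "on")
master_panel("off")  # the "night emergency" feature, run in reverse
print({room: r.closed for room, r in relays.items()})
# -> {'hall': False, 'kitchen': False, 'porch': False}
```

Scene control worked the same way: a "scene" switch was simply wired, through isolating diodes, to whichever particular set of coils it was supposed to pulse.
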
Another is that they still make N-way switching easier. There is arguably a safety benefit, considering the reduction in mains-voltage wire runs. Yet we rarely see such a thing installed in homes newer than around the '80s. I don't know that I can give a definitive explanation of the decline of relay lighting control, but reduced prices for copper wiring were probably a main factor. The relays added a failure point, which might lead to a perception of unreliability, and electricians' declining familiarity with the systems means that installing one could be expensive and frustrating today.

What really interests me about relay systems is that they weren't really replaced... the idea just went away. It's not like modern homes provide a master control panel in the bedroom using some alternative technology. I mean, some do, those with prices in the eight digits, but you'll hardly ever see it. That gets us to the tension between residential lighting and architectural lighting control systems. In higher-end commercial buildings, and in environments like conference rooms and lecture halls, there's a well established industry building digital lighting control systems. Today, DALI is a common standard for the actual lighting control, but if you look at a range of existing buildings you will find everything from completely proprietary digital distributed dimming to 0-10V analog dimming to central dimmer racks (similar to traditional theatrical lighting).

Relay lighting systems were, in a way, a nascent version of residential architectural lighting control. And the architectural lighting control industry continues to evolve. If there is a modern equivalent to relay lighting, it's something like Lutron QSX. That's a proprietary digital lighting (and shade) control system, marketed for both residential and commercial use. QSX offers a wide range of attractive wall controls, tight integration with Lutron's HomeSense home automation platform, and a price tag that'll make your eyes water. Lutron has produced many generations of these systems, and you could make an argument that they trace their heritage back to the relay systems of the 1940s. But they're just priced way beyond the middle-class home.

And, well, I suppose that requires an argument based on economics. Prices have gone up. Despite tract construction being a much older idea than people often realize, it seems clear that today's new-construction homes have been "value engineered" to significantly lower feature and quality levels than those of the mid-century---but they're a lot bigger. There is a sort of maxim that today's home buyers don't care about anything but square footage, and if you've seen what Pulte or D. R. Horton are putting up... well, I never knew that 3,000 sq ft could come so cheap, and look it too. Modern new-construction homes just don't come with the gizmos that older ones did, especially in the '60s and '70s. Looking at the sales brochure for a new development in my own Albuquerque ("Estates at La Cuentista"), besides 21st-century suburbanization (Gated Community! "East Access to Paseo del Norte," as if that's a good thing!), most of the advertised features are "big." I'm serious!

If you look at the "More Innovation Built In" section, the "innovations" are a home office (more square footage), storage (more square footage), indoor and outdoor gathering spaces (to be fair, only the indoor ones are square footage), "dedicated learning areas" for kids (more square footage), and a "basement or bigger garage" for a home gym (more square footage). The only thing in the entire innovation section that I would call a "technical" feature is water filtration. You can scroll down for more details, and you get to things like "space for a movie room" and a finished basement described eight different ways.

Things were different during the peak of relay lighting in the '60s. A house might only be 1,600 sq ft, but the builder would deck it out with an intercom (including multi-room audio of a primitive sort), a burglar alarm, and yes, relay lighting. All of these technologies were a lot newer, and people were more excited about them; I bring up Total Electric Living a lot because of an aesthetic obsession, but it was a large-scale advertising and partnership campaign by the electrical industry (particularly Westinghouse) that gave builders additional cross-promotion if they included all of these bells and whistles. Remember, that was when people were watching those old videos about the "kitchen of the future." What would a 2025 "Kitchen of the Future" promotional film emphasize? An island bigger than my living room and a nook for every meal, I assume.

Features like intercoms and even burglar alarms have become far less common in new construction, and even if they were present, I don't think most buyers would use them. But that might seem a little odd, right, given the push towards home automation? Well, built-in home automation options have existed for longer than any of today's consumer solutions, but "built in" is a liability for a technology product. There are practical reasons, in that built-in equipment is harder to replace, but there's also a lamer commercial reason. Consumer technology companies want to sell their products like consumer technology, so they've recontextualized lighting control as "IoT" and "smart" and "AI" rather than something an electrician would hook up.

While I was looking into relay lighting control systems, I ran into an interesting example: the Lutron Lu Master Lumi 5. What a name! Lutron loves naming things like this. The Lumi 5 is a 1980s-era product with essentially the same features as a relay system, but architected in a much stranger way. It is, essentially, five three-way switches in a box with remote controls. That means that each of the actual light switches in the house (which could also be dimmers) needs mains-voltage wiring, including a runner, back to the Lumi 5 "interface." Pressing a button on one of the Lutron wall panels toggles the state of the relay in the "interface" cabinet, toggling the light. But, since it's all wired as a three-way switch, toggling the physical switch at the light does the same thing.

As is typical when combining n-way switches and dimming, the Lumi 5 has no control over dimmers. You can only dim a light up or down at the actual local control; the Lumi 5 can just toggle the dimmer on and off using the 3-way runner. The architecture also means that you have two fundamentally different types of wall panels in your house: local switches or dimmers wired to each light, and the Lu Master panels with their five buttons for the five circuits, along with "all on" and "all off."
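
The trick is just ordinary three-way switch logic. Here is a toy model (an illustration, not Lutron documentation): each circuit is a normal 3-way loop with the local switch or dimmer on one end and a relay in the "interface" cabinet on the other, so toggling either pole toggles the light.

```python
# Toy model of the Lu Master wiring described above: five 3-way circuits,
# each shared between a local wall switch and a relay in the cabinet.

class ThreeWayCircuit:
    def __init__(self):
        self.local_pole = False  # the switch or dimmer at the light
        self.relay_pole = False  # the relay contact in the interface cabinet

    @property
    def lit(self):
        # A 3-way circuit completes when the two poles disagree (or agree,
        # depending on how the travelers are landed; the parity is arbitrary).
        return self.local_pole != self.relay_pole

circuits = [ThreeWayCircuit() for _ in range(5)]  # the Lumi 5 handles five

def lu_master_button(i):
    circuits[i].relay_pole = not circuits[i].relay_pole  # wall panel press

def local_switch(i):
    circuits[i].local_pole = not circuits[i].local_pole  # switch at the light

local_switch(2)      # light 2 turned on at the local switch
lu_master_button(2)  # same light turned back off from the panel
print([c.lit for c in circuits])  # -> [False, False, False, False, False]
```

Either control always works, with no communication between them: each one just flips its own pole, and the light's state is a function of the two.
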
The Lumi 5 "interface" uses simple relay logic to implement a few more features. Five mains-voltage-level inputs can be wired to time clocks, so that you can schedule any combination(s) of the circuits to turn on and off. The manual recommends models including one with an astronomical clock for sunrise/sunset. An additional input causes all five circuits to turn on; it's suggested for connection to an auxiliary relay on a burglar alarm to turn all of the lights on should the alarm be triggered.

The whole thing is strange and fascinating. It is basically a relay lighting control system, like so many before it, but using a distinctly different wiring convention. I think the main reason for the odd wiring was to accommodate dimmers, an increasingly popular option in the 1980s that relay systems could never really contend with. It doesn't have the cost advantages of relay systems at all: it will definitely be more expensive! But it adds some features over the fancy Lutron switches and dimmers you were going to install anyway. The Lu Master is the transitional stage between relay lighting systems and later architectural lighting controls, and it also straddles the end of relay lighting control in homes. It gives an idea of where relay lighting control in homes would have evolved, had the whole technology not been doomed to the niche zone of conference centers and universities.

If you think about it, the Lu Master fills the most fundamental roles of home automation in lighting: control over multiple lights in a convenient place, scheduling and triggers, and an emergency function. It only lacks scenes, which I think we can excuse considering that the simple technology it uses does not allow it to adjust dimmers. And all of that with no Node-RED in sight!

Maybe that conveys what most frustrates me about the "home automation" industry: it is constantly reinventing the wheel, an oligopoly of tech companies trying to drag people's homes into their "ecosystems." They do so by leveraging the buzzword of the moment (IoT, then voice assistants, now I guess AI?) to solve a basic set of problems that were pretty well solved at least as early as 1948. That's not to deny that modern home automation platforms have features that old ones don't. They are capable of incredibly sophisticated things! But realistically, most of their users want only very basic functionality: control in convenient places, basic automation, scenes. It wouldn't sting so much if all these whiz-bang general-purpose computers were good at those tasks, but they aren't. For the most basic tasks, things like turning on and off a group of lights, major tech ecosystems like HomeKit provide a user experience that is significantly worse than the model home of 1950. You could install a Lutron system, and it would solve those fundamental tasks much better... for a much higher price. But it's not like Lutron uses all that money to be an absolute technical powerhouse, a center of innovation at the cutting edge. No, even the latest Lutron products are really very simple, technically. The technical leaders here, Google and Apple, are the companies that can't figure out how to make a damn light switch. The problem with modern home automation platforms is that they are too ambitious: they are trying to apply enormously complex systems to very simple tasks, and thus contaminating the simplest of electrical systems with all the convenience and ease of a Smart TV.
Sometimes it feels like that's what this whole industry is doing: adding complexity while the core decays. From automatic programming to AI coding agents, video terminals to Electron, the scope of the possible expands while the fundamentals become more and more irritating. But back to the real point: I hope you learned about some cool light switches. Check out the Kyle Switch Plates reference and you'll start seeing them in buildings and homes, at least if you live in an area that built up during the era that they were common (the 1950s to the 1970s).

2025-05-11 air traffic control

Air traffic control has been in the news lately, on account of my country's declining ability to do it. Well, that's a long-term trend, resulting from decades of under-investment, severe capture by our increasingly incompetent defense-industrial complex, no small degree of management incompetence in the FAA, and the long-lasting effects of Reagan crushing the PATCO strike. But that's just my opinion, you know, maybe airplanes got too woke. In any case, it's an interesting time to consider how weird parts of air traffic control are. The technical, administrative, and social aspects of ATC all seem two notches more complicated than you would expect. ATC is heavily influenced by its peculiar and often accidental development, a product of necessity that perpetually trails behind the need, and a beneficiary of hand-me-down military practices and technology.

Aviation Radio

In the early days of aviation, there was little need for ATC---there just weren't many planes, and technology didn't allow ground-based controllers to do much of value. There was some use of flags and signal lights to clear aircraft to land, but for the most part ATC had to wait for the development of aviation radio. The impetus for that work came mostly from the First World War. Here we have to note that the history of aviation is very closely intertwined with the history of warfare. Aviation technology has always rapidly advanced during major conflicts, and as we will see, ATC is no exception. By 1913, the US Army Signal Corps was experimenting with the use of radio to communicate with aircraft. This was pretty early in radio technology, and the aircraft radios were huge and awkward to operate, but it was also early in aviation and "huge and awkward to operate" could be similarly applied to the aircraft of the day. Even so, radio had obvious potential in aviation. The first military application for aircraft was reconnaissance. Pilots could fly past the front to find artillery positions and otherwise provide useful information, and then return with maps. Well, even better than returning with a map was providing the information in real time, and by the end of the war medium-frequency AM radios were well developed for aircraft. Radios in aircraft led naturally to another wartime innovation: ground control. Military personnel on the ground used radio to coordinate the schedules and routes of reconnaissance planes, and later to report the positions of fighters and other enemy assets. Without any real way to know where the planes were, this was all pretty primitive, but it set the basic pattern that people on the ground could keep track of aircraft and provide useful information.

Post-war, civil aviation rapidly advanced. The early 1920s saw numerous commercial airlines adopting radio, mostly for business purposes like schedule coordination. Once you were in contact with someone on the ground, though, it was only logical to ask about weather and conditions. Many of our modern practices like weather briefings, flight plans, and route clearances originated as more or less formal practices within individual airlines.

Air Mail

The government was not left out of the action. The Post Office operated what may have been the largest commercial aviation operation in the world during the early 1920s, in the form of Air Mail. The Post Office itself did not have any aircraft; all of the flying was contracted out---initially to the Army Air Service, and later to a long list of regional airlines.
Air Mail was considered a high priority by the Post Office and proved very popular with the public. When the transcontinental route began proper operation in 1920, it became possible to get a letter from New York City to San Francisco in just 33 hours by transferring it between airplanes in a nearly non-stop relay race. The Post Office's largesse in contracting the service to private operators provided not only the funding but the very motivation for much of our modern aviation industry. Air travel was not very popular at the time, being loud and uncomfortable, but the mail didn't complain. The many contract mail carriers of the 1920s grew and consolidated into what are now some of the United States' largest companies. For around a decade, the Post Office almost singlehandedly bankrolled civil aviation, and passengers were a side hustle [1].

Air mail's ambitions were not only of economic benefit. Air mail routes were often longer and more challenging than commercial passenger routes. Transcontinental service required regular flights through sparsely populated parts of the interior, challenging the navigation technology of the time and making rescue of downed pilots a major concern. Notably, air mail operators did far more nighttime flying than any other commercial aviation in the 1920s. The Post Office became the government's de facto technical leader in civil aviation. Besides the network of beacons and markers built to guide air mail between cities, the Post Office built 17 Air Mail Radio Stations along the transcontinental route. The Air Mail Radio Stations were the company radio system for the entire air mail enterprise, and the closest thing to a nationwide, public air traffic control service to then exist. They did not, however, provide what we would now call control. Their role was mainly to provide pilots with information (including, critically, weather reports) and to keep loose tabs on air mail flights so that a disappearance would be noticed in time to send search and rescue.

In 1926, the Air Commerce Act created the Aeronautics Branch of the Department of Commerce. The Aeronautics Branch assumed a number of responsibilities, but one of them was the maintenance of the Air Mail routes. Similarly, the Air Mail Radio Stations became Aeronautics Branch facilities, and took on the new name of Flight Service Stations. No longer just for the contract mail carriers, the Flight Service Stations made up a nationwide network of government-provided services to aviators. They were the first edifices in what we now call the National Airspace System (NAS): a complex combination of physical facilities, technologies, and operating practices that enable safe aviation.

In 1935, the first en-route air traffic control center opened, a facility in Newark owned by a group of airlines. The Aeronautics Branch, since renamed the Bureau of Air Commerce, supported the airlines in developing this new concept of en-route control that used radio communications and paperwork to track which aircraft were in which airways. The rising number of commercial aircraft made mid-air collisions a bigger problem, so the Newark control center was quickly followed by more facilities built on the same pattern. In 1936, the Bureau of Air Commerce took ownership of these centers, and ATC became a government function alongside the advisory and safety services provided by the flight service stations.
En route center controllers worked off of position reports from pilots via radio, but needed a way to visualize and track aircraft positions and their intended flight paths. Several techniques helped. First, airlines shared their flight planning paperwork with the control centers, establishing "flight plans" that corresponded to each aircraft in the sky. Controllers adopted a work aid called a "flight strip," a small piece of paper with the key information about an aircraft's identity and flight plan that could easily be handed between stations. By arranging the flight strips on display boards full of slots, controllers could visualize the ordering of aircraft in terms of altitude and airway. Second, each center was equipped with a large plotting table map where controllers pushed markers around to correspond to the position reports from aircraft. A small flag on each marker gave the flight number, so it could easily be correlated to a flight strip on one of the boards mounted around the plotting table. This basic concept of air traffic control, of a flight strip and a position marker, is still in use today.

Radar

The Second World War changed aviation more than any other event in history. Among the many advancements were two British inventions of particular significance: first, the jet engine, which would make modern passenger airliners practical; second, radar, and more specifically the magnetron. This was a development of such significance that the British government treated it as a secret akin to nuclear weapons; indeed, the UK effectively traded radar technology to the US in exchange for participation in US nuclear weapons research. Radar created radical new possibilities for air defense, and complemented previous air defense development in Britain. During WWI, the organization tasked with defending London from aerial attack had developed a method called "ground-controlled interception," or GCI. Under GCI, ground-based observers identify possible targets and then direct attack aircraft towards them via radio. The advent of radar made GCI tremendously more powerful, allowing a relatively small number of radar-assisted air defense centers to monitor for inbound attack and then direct defenders with real-time vectors.

In the first implementation, radar stations reported contacts via telephone to "filter centers" that correlated tracks from separate radars to create a unified view of the airspace---drawn in grease pencil on a preprinted map. Filter center staff took radar and visual reports and updated the map by moving the marks. This consolidated information was then provided to air defense bases, once again by telephone. Later technical developments in the UK made the process more automated. The invention of the "plan position indicator," or PPI, the type of radar scope we are all familiar with today, made the radar far easier to operate and interpret. Radar sets that automatically swept over 360 degrees allowed each radar station to see all activity in its area, rather than just aircraft passing through a defensive line. These new capabilities eliminated the need for much of the manual work: radar stations could see attacking aircraft and defending aircraft on one PPI, and communicated directly with defenders by radio. It became routine for a radar operator to give a pilot navigation vectors by radio, based on real-time observation of the pilot's position and heading. A controller took strategic command of the airspace, effectively steering the aircraft from a top-down view.
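The arithmetic behind those vectors is simple enough to sketch. This toy version (the function name and flat-map simplification are mine; a real intercept would also lead the target and correct for magnetic variation) just turns two radar-derived positions into a compass heading and a distance:

    import math

    def intercept_vector(defender, target):
        # Positions as (east, north) in miles on a flat map.
        de = target[0] - defender[0]
        dn = target[1] - defender[1]
        # atan2 measured from north, clockwise, like a compass heading.
        heading = math.degrees(math.atan2(de, dn)) % 360
        return heading, math.hypot(de, dn)

    heading, distance = intercept_vector((0, 0), (30, 40))
    print(f"vector {heading:03.0f}, {distance:.0f} miles")  # vector 037, 50 miles

In the real system, of course, this was a human reading a scope and talking on the radio, not a computer.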
The ease and efficiency of this workflow was a significant factor in the end of the Battle of Britain, and its remarkable efficacy was noticed in the US, where changes were already afoot. WWII was tremendously disruptive to civil aviation; while aviation technology rapidly advanced due to wartime needs, those same pressing demands led to a slowdown in nonmilitary activity. A heavy volume of military logistics flights and flight training, as well as growing concerns about defending the US from an invasion, meant that ATC was still a priority. A reorganization of the Bureau of Air Commerce replaced it with the Civil Aeronautics Authority, or CAA. The CAA's role greatly expanded as it assumed responsibility for airport control towers and commissioned new en route centers. As WWII came to a close, CAA en route control centers began to adopt GCI techniques. By 1955, the name Air Route Traffic Control Center (ARTCC) had been adopted for en route centers and the first air surveillance radars were installed. In a radar-equipped ARTCC, the map where controllers pushed markers around was replaced with a large tabletop PPI built to a Navy design. The controllers still pushed markers around to track the identities of aircraft, but they moved them based on their corresponding radar "blips" instead of radio position reports.

Air Defense

After WWII, post-war prosperity and wartime technology like the jet engine led to huge growth in commercial aviation. During the 1950s, radar was adopted by more and more ATC facilities (both "terminal" at airports and "en route" at ARTCCs), but there were few major changes in ATC procedure. With more and more planes in the air, tracking flight plans and their corresponding positions became labor-intensive and error-prone. A particular problem was the increasing range and speed of aircraft, and the correspondingly longer passenger flights, which meant that many aircraft passed from the territory of one ARTCC into another. This required that controllers "hand off" the aircraft, informing the "next" ARTCC of the flight plan and the position at which the aircraft would enter their airspace.

In 1956, 128 people died in a mid-air collision of two commercial airliners over the Grand Canyon. In 1958, 49 people died when a military fighter struck a commercial airliner over Nevada. These were not the only such incidents in the mid-1950s, and public trust in aviation started to decline. Something had to be done. First, in 1958 the CAA gave way to the Federal Aviation Agency (renamed the Federal Aviation Administration in 1967). This was more than just a name change: the FAA's authority was greatly increased compared to the CAA's, most notably by granting it authority over military aviation. This is a difficult topic to explain succinctly, so I will only give broad strokes. Prior to 1958, military aviation was completely distinct from civil aviation, with no coordination and often no communication at all between the two. This was, of course, a factor in the 1958 collision. Further, the 1956 collision, while it did not involve the military, did result in part from communications issues between separate, distinct CAA facilities and the airline's own control facilities. After 1958, ATC was completely unified into one organization, the FAA, which assumed the work of the military controllers of the time and some of the role of the airlines.
The military continues to have its own air controllers to this day, and military aircraft continue to enjoy privileges such as (practical but not legal) exemption from transponder requirements, but military flights over the US are still beholden to the same ATC as civil flights. Some exceptions apply, void where prohibited, etc. The FAA's suddenly increased scope only made the practical challenges of ATC more difficult, and commercial aviation numbers continued to rise. As soon as the FAA was formed, it was understood that there needed to be major investments in improving the National Airspace System. While the first couple of years were dominated by the transition, the FAA's second administrator (Najeeb Halaby) prepared two lengthy reports examining the situation and recommending improvements. One of these, the Beacon report (also called Project Beacon), specifically addressed ATC. The Beacon report's recommendations included massive expansion of radar-based control (called "positive control" because of the controller's access to real-time feedback on aircraft movements) and new control procedures for airways and airports. Even better, for our purposes, it recommended the adoption of general-purpose computers and software to automate ATC functions.

Meanwhile, the Cold War was heating up. US air defense, a minor concern in the few short years after WWII, became a higher priority than ever before. The Soviet Union had long-range aircraft capable of reaching the United States, and nuclear weapons meant that only a few such aircraft had to make it through to cause massive destruction. The vast size of the United States (and, considering the new unified air defense command between the United States and Canada, all of North America) made this a formidable challenge. During the 1950s, the newly minted Air Force worked closely with MIT's Lincoln Laboratory (an important center of radar research) and IBM to design a computerized, integrated, networked system for GCI. When the Air Force committed to purchasing the system, it was christened the Semi-Automatic Ground Environment, or SAGE.

SAGE is a critical juncture in the history of the computer and computer communications, the first system to demonstrate many parts of modern computer technology and, moreover, perhaps the first large-scale computer system of any kind. SAGE is an expansive topic that I will not take on here; I'm sure it will be the focus of a future article, but it's a pretty well-known and well-covered topic. I have not so far felt like I had much new to contribute, despite it being the first item on my "list of topics" for the last five years. But one of the things I want to tell you about SAGE, one that is perhaps not so well known, is that SAGE was not used for ATC. SAGE was a purely military system. It was commissioned by the Air Force, and its numerous operating facilities (called "direction centers") were located on Air Force bases along with the interceptor forces they would direct. However, there was obvious overlap between the functionality of SAGE and the needs of ATC. SAGE direction centers continuously received tracks from remote data sites using modems over leased telephone lines, and automatically correlated multiple radar tracks to a single aircraft. Once an operator entered information about an aircraft, SAGE stored that information for retrieval by other radar operators.
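To make that data-sharing concrete, here is a minimal sketch of the idea: a central track store that radar sites update, operators annotate, and neighboring centers receive handoffs from (as described next). The structure and field names are my invention for illustration, not SAGE's actual data format:

    from dataclasses import dataclass, field

    @dataclass
    class Track:
        track_id: str
        position: tuple                # (x, y) on the center's grid
        identity: str = "unknown"      # e.g. a flight number, once entered
        remarks: dict = field(default_factory=dict)

    class DirectionCenter:
        def __init__(self, name):
            self.name = name
            self.tracks = {}

        def radar_report(self, track_id, position):
            # Plots arrive continuously from remote radar sites; a position
            # update must not clobber operator-entered annotations.
            if track_id in self.tracks:
                self.tracks[track_id].position = position
            else:
                self.tracks[track_id] = Track(track_id, position)

        def annotate(self, track_id, identity, **remarks):
            # One operator enters the data; every other console sees it.
            t = self.tracks[track_id]
            t.identity = identity
            t.remarks.update(remarks)

        def handoff(self, track_id, other):
            # At a boundary crossing, the whole record moves to the next
            # center (over leased telephone lines, in the real system).
            other.tracks[track_id] = self.tracks.pop(track_id)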
When an aircraft with associated data passed from the territory of one direction center to another, the aircraft's position and related information were automatically transmitted to the next direction center by modem. One of the key demands of air defense is the identification of aircraft---any unknown track might be routine commercial activity, or it could be an inbound attack. The air defense command received flight plan data on commercial flights (and more broadly all flights entering North America) from the FAA and entered them into SAGE, allowing radar operators to retrieve "flight strip" data on any aircraft on their scope. Recognizing this interconnection with ATC, as soon as SAGE direction centers were being installed, the Air Force started work on an upgrade called SAGE Air Traffic Integration, or SATIN. SATIN would extend SAGE to serve the ATC use case as well, providing SAGE consoles directly in ARTCCs and enhancing SAGE to perform non-military safety functions like conflict warning and forward projection of flight plans for scheduling. Flight strips would be replaced by teletype output, and in general made less necessary by the computer's ability to filter the radar scope. Experimental trial installations were made, and the FAA participated readily in the research efforts. Enhancement of SAGE to meet ATC requirements seemed likely to meet the Beacon report's recommendations and radically improve ARTCC operations, sooner and cheaper than development of an FAA-specific system.

As it happened, well, it didn't happen. SATIN became interconnected with another planned SAGE upgrade, the Super Combat Centers (SCC): deep underground combat command centers with greatly enhanced SAGE computer equipment. SATIN and SCC planners were so confident that the last three Air Defense Sectors scheduled for SAGE installation, including my own Albuquerque, were delayed under the assumption that the improved SATIN/SCC equipment should be installed instead of the soon-obsolete original system. SCC cost estimates ballooned, and the program's ambitions were reduced month by month until it was canceled entirely in 1960. Albuquerque never got a SAGE installation, and the Albuquerque air defense sector was eliminated by reorganization later in 1960 anyway.

Flight Service Stations

Remember those Flight Service Stations, the ones that were originally built by the Post Office? One of the oddities of ATC is that they never went away. FSS were transferred to the Bureau of Air Commerce, then to the CAA, and then to the FAA. During the 1930s and 1940s many more were built, expanding coverage across much of the country. Throughout the development of ATC, the FSS remained responsible for non-control functions like weather briefing and flight plan management. Because aircraft operating under instrument flight rules must closely comply with ATC, the involvement of FSS in IFR flights is very limited, and FSS mostly serve VFR traffic. As ATC became common, the FSS gained a new and somewhat odd role: playing go-between for ATC. FSS were more numerous and often located in sparser areas between cities (while ATC facilities tended to be in cities), so especially in the mid-century, pilots were more likely to be able to reach an FSS than ATC. It was, for a time, routine for FSS to relay instructions between pilots and controllers. This is still done today, although improved communications have made the need much less common.
As weather dissemination improved (another topic for a future post), FSS gained access to extensive weather conditions and forecasting information from the Weather Service. This connectivity is bidirectional; during the mid-century, FSS not only received weather forecasts by teletype but transmitted pilot reports of weather conditions back to the Weather Service. Today these communications have, of course, been computerized, although the legacy teletype format doggedly persists.

There has always been an odd schism between the FSS and ATC: they are operated by different departments, out of different facilities, with different functions and operating practices. In 2005, the FAA cut costs by privatizing the FSS function entirely. Flight service is now operated by Leidos, one of the largest government contractors. All FSS operations have been centralized to one facility that communicates via remote radio sites. While flight service is still available, increasing automation has made the stations far less important, and the general perception is that flight service is in its last years. Last I looked, Leidos was not hiring for flight service, and the expectation was that they would never hire again, retiring the service along with its staff. Flight service does maintain one of my favorite internet phenomena, the phone-number domain name: 1800wxbrief.com.

One of the odd manifestations of the FSS/ATC schism and the FAA's very partial privatization is that Leidos maintains an online aviation weather portal that is separate from, and competes with, the Weather Service's aviationweather.gov. Since Flight Service traditionally has the responsibility for weather briefings, it is honestly unclear to what extent Leidos vs. the National Weather Service should be investing in aviation weather information services. For its part, the FAA seems to consider aviationweather.gov the official source, while it pays for 1800wxbrief.com. There's also weathercams.faa.gov, which duplicates a very large portion (maybe all?) of the weather information on Leidos's portal and some of the NWS's. It's just one of those things. Or three of those things, rather. Speaking of duplication due to poor planning...

The National Airspace System

Left in the lurch by the Air Force, the FAA launched its own program for ATC automation. While the Air Force was deploying SAGE, the FAA had mostly been waiting, and various ARTCCs had adopted a hodgepodge of methods ranging from one-off computer systems to completely paper-based tracking. By 1960 radar was ubiquitous, but different radar systems were used at different facilities, and correlation between radar contacts and flight plans was completely manual. The FAA needed something better, and with growing congressional support for ATC modernization, they had the money to fund what they called National Airspace System En Route Stage A. Further bolstering historical confusion between SAGE and ATC, the FAA decided on a practical, if ironic, solution: buy their own SAGE. In an upcoming article, we'll learn about the FAA's first fully integrated computerized air traffic control system. While the failed detour through SATIN delayed the development of this system, the nearly decade-long delay between the design of SAGE and the FAA's contract allowed significant technical improvements.
This "New SAGE," while directly based on SAGE at a functional level, used later off-the-shelf computer equipment including the IBM System/360, giving it far more resemblance to our modern world of computing than SAGE with its enormous, bespoke AN/FSQ-7. And we're still dealing with the consequences today! [1] It also laid the groundwork for the consolidation of the industry, with a 1930 decision that took air mail contracts away from most of the smaller companies and awarded them instead to the precursors of United, TWA, and American Airlines.


More in technology

You should repaste your MacBook (but don't)

My favorite memory of my M1 Pro MacBook Pro was the whole sensation of “holy crap, you never hear the fans in this thing,” which was very novel in 2021. Four years later, this MacBook Pro is still a delight. It’s the longest I’ve ever owned a laptop, and while I’d love to pick up the new M4 goodness, this dang thing still seems to just shrug at basically anything I throw at it. Video editing, code compiling, CAD models, the works. (My desire to update is helped though by the fact I got the 2TB SSD, 32GB RAM option, and upgrading to those on new MacBooks is still eye-wateringly expensive.) But my MacBook is starting to show its age in one area: it’s not quiet anymore. If you’re doing anything too intensive, like compiling code for a while or converting something in Handbrake, the age of the fans being quiet is long past. The fans are properly loud. (And despite having two cats, it’s not them! I clean out the fans pretty regularly.)

Enter the thermal paste

Everyone online seems to point toward one thing: the thermal paste on computers tends to dry up over the years. What the heck is thermal paste? Well, components in your computer that generate a lot of heat are normally made to touch something like a copper heatsink that is really good at pulling that heat away. The issue is, when you press these two metal surfaces against each other, even the best machining isn’t perfect, and there are microscopic gaps between them, meaning there’s just air at those parts, and air is a terrible conductor of heat. The solution is to put a little bit of thermal paste (basically a special grey toothpaste gunk that is really good at transferring heat) between them, and it fills in any of those microscopic gaps. The problem with this solution is that after hundreds and hundreds of days of intense heat, the paste can dry up into something closer to a powder, and it’s not nearly as good at filling in those gaps.

Replacement time

MacBook thermal paste isn’t anything crazy (for the most part, see below); custom PC builders use thermal paste all the time, so incredibly performant options are available online. I grabbed a tube of Noctua NT-H2 for about $10 and set to taking apart my MacBook to swap out the aging thermal paste. And thankfully, iFixit has a tremendous, in-depth guide on the disassembly required, so I got to it. Indeed, that grey thermal paste looked quite old, but above and below it (on the RAM chips) I also noticed something that didn’t quite seem like thermal paste; it was far more… grainy, almost? It turns out, ending with my generation of MacBooks (lucky me!), Apple used a very special kind of thermal compound often called “Carbon Black,” which is basically designed to bridge an even thicker gap than traditional thermal paste. I thought about replacing it, but it seems really hard to come across that special thermal compound (and do not do it with normal thermal paste), and my RAM temperatures always seemed fine (65°C is fine… right?), so I just made sure to not touch that. For the regular grey thermal paste, I used some cotton swabs and isopropyl alcohol to remove the dried-up existing thermal paste, then painted on a bit of the new stuff.

Disaster

To get to the underside of the CPU, you basically need to disassemble the entire MacBook. It’s honestly not that hard, but iFixit warned that the fan cables (which also need to be unclipped) are incredibly delicate.
And they’re not wrong; seriously, they have the structural integrity of the half-ply toilet paper available at gas stations. So, wouldn’t you know it, I moved the left fan’s cable a bit too hard and it completely tore in half. Gah. I found a replacement fan online (yeah, you can’t just buy the cable; you need a whole new fan) and in the meantime I just kept an eye on my CPU thermals. As long as I wasn’t doing anything too intensive, it honestly always stayed around 65°C, which was warm, but not terrifying (MacBook Airs completely lack a fan, after all).

Take two

A few days later, the fans arrived, and I basically had to redo the entire disassembly process to get to the fans. At least I was a lot faster this time. The fan was incredibly easy to swap out (hats off there, Apple!) and I screwed everything back together and began reconnecting all the little connectors. Until I saw it: the tiny (made of the same half-ply material as the fan cable) Touch ID sensor cable was inexplicably torn in half, the top half just hanging out. I didn’t even have to touch this thing really, and I hadn’t even gotten to the stage of reconnecting it (I was about to!); it comes from underneath the logic board, and I guess just the movement of sliding the logic board back in sheared it in half. Bah. I looked up if I could just grab another replacement cable here, and sure enough you can… but the Touch ID chip is cryptographically paired to your MacBook, so you’d have to take it into an Apple Store. Estimates seemed to be in the hundreds of dollars, so if anyone has any experience there let me know, but for now I’m just going to live happily without a Touch ID sensor… or the button, because the button also does not work. RIP, little buddy. (And yeah, I’m 99.9% sure I can’t solder this back together; there’s a bunch of tiny traces that make up the cable that you would need experience with proper micro-soldering to fix.) Honestly, the disassembly process for my MacBook was surprisingly friendly and not very difficult, I just really wish they beefed up some of the cables even slightly so they weren’t so delicate.

The results

I was going to cackle if I went through all that just to have the same temperatures as before, but I’m very happy to say they actually improved a fair bit. I ran a Cinebench test before disassembling the MacBook the very first time to establish a baseline:

- Max CPU temperature: 102°C
- Max fan speed: 6,300 RPM
- Cinebench score: 12,252

After the new thermal paste (and the left fan being new):

- Max CPU temperature: 96°C
- Max fan speed: 4,700 RPM
- Cinebench score: 12,316

Now, just looking at those scores you might be like… so? But let me tell you, dropping 1,600 RPM on the fan is a noticeable change; it goes from “Oh my god this is annoyingly loud” to “Oh look, the fans kicked in,” and despite slower fan speeds there was still a decent drop in CPU temperature! And a 0.5% higher Cinebench score! But where I also really notice it is in idling: just writing this blog post my CPU was right at 46°C the whole time, where previously my computer idled right around 60°C. The whole computer just feels a bit healthier.

So… should you do it? Honestly, unless you’re very used to working on small, delicate electronics, probably not. But if you do have that experience and are very careful, or have a local repair shop that can do it for a reasonable fee (and your MacBook is a few years old so as to warrant it), it’s honestly a really nice tweak that I feel will hopefully at least get me to the M5 generation. I do miss Touch ID, though.
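For what it’s worth, the relative changes are easy to compute from those before/after numbers; a quick sketch (the figures are just copied from the lists above):

    # Before/after numbers reported above.
    before = {"temp_c": 102, "fan_rpm": 6300, "cinebench": 12252}
    after  = {"temp_c": 96,  "fan_rpm": 4700, "cinebench": 12316}

    fan_drop = before["fan_rpm"] - after["fan_rpm"]
    print(f"fan: -{fan_drop} RPM ({100 * fan_drop / before['fan_rpm']:.0f}% slower at max)")
    print(f"temp: -{before['temp_c'] - after['temp_c']} °C at max")
    print(f"score: +{100 * (after['cinebench'] - before['cinebench']) / before['cinebench']:.1f}%")
    # fan: -1600 RPM (25% slower at max); temp: -6 °C; score: +0.5%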

Six Game Devs Speak to Computer Games Mag (1984)

Meet the Creators of Choplifter, Wizardry, Castle Wolfenstein, Zaxxon, Canyon Climber, and the Arcade Machine

New AWS x Arduino Opta Workshop: Connect your PLC to the Cloud in just a few steps

We’re excited to invite you to a brand-new workshop created in collaboration with Amazon Web Services (AWS). Whether you’re modernizing factory operations or tinkering with your first industrial project, this hands-on workshop is your gateway to building cloud-connected PLCs that ship data – fast. At Arduino, we believe in making advanced technology more accessible. That’s […]

The History of Acer

A Shy Kid Builds the Taiwanese Tech Industry

Concept Bytes’ coffee table tracks people and walks itself across a room when called

The term “mmWave” refers to radio waves with wavelengths on the millimeter scale. When it comes to wireless communications technology, like 5G, mmWave allows for very fast data transfer — though that comes at the expense of range. But mmWave technology also has some very useful sensing and scanning applications, which you may have experienced […]
