I have a rough list of topics for future articles, a scratchpad of two-word ideas that I sometimes struggle to interpret. Some items have been on that list for years now. Sometimes, ideas languish because I'm not really interested in them enough to devote the time. Others have the opposite problem: chapters of communications history with which I'm so fascinated that I can't decide where to start and end. They seem almost too big to take on. One of these stories starts in another vast frontier: northeastern Canada. It was a time, rather unlike our own, of relative unity between Canada and the United States. Both countries had spent the later part of World War II planning around the possibility of an Axis attack on North America, and a ragtag set of radar stations had been built to detect inbound bombers. The US had built a series of stations along the border, and the Canadians had built a few north of Ontario and Quebec to extend coverage north of those population centers. Then the war...


More from computers are bad

2025-05-04 iBeacons

You know sometimes a technology just sort of... comes and goes? Without leaving much of an impression? And then gets lodged in your brain for the next decade? Let's talk about one of those: the iBeacon. I think the reason that iBeacons loom so large in my memory is that the technology was announced at WWDC in 2013. Picture yourself in 2013: Steve Jobs had only died a couple of years before, Apple was still widely viewed as a visionary leader in consumer technology, and WWDC was still happening. Back then, pretty much anything announced at an Apple event was a Big Deal that got Big Coverage. Even, it turns out, if it was a minor development for a niche application. That's the iBeacon, a specific solution to a specific problem. It's not really that interesting, but the valence of its Apple origin makes it seem cool?

iBeacon Technology

Let's start out with what iBeacon is, as it's so simple as to be underwhelming. Way back in the '00s, a group of vendors developed a sort of "Diet Bluetooth": a wireless protocol that was directly based on Bluetooth but simplified and optimized for low-power, low-data-rate devices. This went through an unfortunate series of names, including the delightful Wibree, but eventually settled on Bluetooth Low Energy (BLE). BLE is not just lower-power, but also easier to implement, so it shows up in all kinds of smart devices today. Back in 2011, it was quite new, and Apple was one of the first vendors to adopt it. BLE is far less connection-oriented than regular Bluetooth; you may have noticed that BLE devices are often used entirely without conventional "pairing." A lot of typical BLE profiles involve just broadcasting some data into the void for any device that cares (and is in short range) to receive, which is pretty similar to ANT+ and unsurprisingly appears in ANT+-like applications like fitness monitors and other sensors.
Of course, despite the simpler association model, BLE applications need some way to find devices, so BLE provides an advertising mechanism in which devices transmit their identifying info at regular intervals. And that's all iBeacon really is: a standard for very simple BLE devices that do nothing but transmit advertisements with a unique ID as the payload. Add a type field on the advertising packet to specify that the device is trying to be an iBeacon and you're done. You interact with an iBeacon by receiving its advertisements, so you know that you are near it. Any BLE device with advertisements enabled could be used this way, but iBeacons are built only for this purpose. The applications for iBeacon are pretty much defined by its implementation in iOS; there's not much of a standard, if only because there's not much to put in a standard. It's all obvious.

iOS provides two principal APIs for working with iBeacons: the region monitoring API allows an app to determine if it is near an iBeacon, including registering the region so that the app will be started when the iBeacon comes into range. This is useful for apps that want to do something in response to the user being in a specific location. The ranging API allows an app to get a list of all of the nearby iBeacons and a rough range from the device to each iBeacon. iBeacons can actually operate at substantial ranges---up to hundreds of meters for more powerful beacons with external power---so ranging mode can potentially be used as sort of a lightweight local positioning system to estimate the location of the user within a larger space.

iBeacon IDs are in the format of a UUID, followed by a "major" number and a "minor" number. There are different ways that these get used, especially if you are buying cheap iBeacons and not reconfiguring them, but the general idea is roughly that the UUID identifies the operator, the major a deployment, and the minor a beacon within the deployment.
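To show just how little there is to the format: the whole iBeacon "standard" fits in a 25-byte manufacturer-specific data field. Here's a sketch in Python of unpacking one. The layout (Apple's company ID 0x004C, then type 0x02, length 0x15, the 16-byte proximity UUID, big-endian major and minor, and a calibrated 1-meter RSSI) is the well-known wire format, but the example UUID and values below are arbitrary, not from any real deployment.

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse a BLE manufacturer-specific data payload as an iBeacon frame.

    Layout: 2-byte company ID (0x004C, Apple, little-endian), iBeacon
    type (0x02) and length (0x15), 16-byte proximity UUID, big-endian
    major and minor, and the calibrated RSSI at 1 m as a signed byte.
    Returns None if the payload is not an iBeacon advertisement.
    """
    if len(mfg_data) != 25:
        return None
    company, btype, blen = struct.unpack_from("<HBB", mfg_data, 0)
    if company != 0x004C or btype != 0x02 or blen != 0x15:
        return None
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
    return {"uuid": proximity_uuid, "major": major, "minor": minor,
            "tx_power": tx_power}

# An example frame, assembled by hand:
frame = bytes.fromhex(
    "4c000215"                          # 0x004C, type 0x02, length 0x15
    "f7826da64fa24e988024bc5b71e0893e"  # proximity UUID (arbitrary)
    "0001"                              # major = 1 (a deployment)
    "000a"                              # minor = 10 (a beacon)
    "c5")                               # measured power = -59 dBm
print(parse_ibeacon(frame))
```

That's the entire protocol; everything else (what the UUID means, what to do when you see it) is left to the app.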
In practice this might be less common than just every beacon having its own UUID due to how they're sourced. It would be interesting to survey iBeacon applications to see which they do.

Promoted Applications

So where do you actually use these? Retail! Apple seems to have designed the iBeacon pretty much exclusively for "proximity marketing" applications in the retail environment. It goes something like this: when you're in a store and open that store's app, the app will know what beacons you are nearby and display relevant content. For example, in a grocery store, the grocer's app might offer e-coupons for cosmetics when you are in the cosmetics section. That's, uhh, kind of the whole thing? The imagined universe of applications around the launch of iBeacon was pretty underwhelming to me, even at the time, and it still seems that way. That's presumably why iBeacon had so little success in consumer-facing applications.

You might wonder, who actually used iBeacons? Well, Apple did, obviously. During 2013 and into 2014 iBeacons were installed in all US Apple stores, and prompted the Apple Store app to send notifications about upgrade offers and other in-store deals. Unsurprisingly, this Apple Store implementation was considered the flagship deployment. It generated a fair amount of press, including speculation as to whether or not it would prove the concept for other buyers. Around the same time, Apple penned a deal with Major League Baseball that would see iBeacons installed in MLB stadiums. For the 2014 season, MLB Advanced Media, a joint venture of team owners, had installed iBeacon technology in 20 stadiums:

Baseball fans will be able to utilize iBeacon technology within MLB.com At The Ballpark when the award-winning app's 2014 update is released for Opening Day. Complete details on new features being developed by MLBAM for At The Ballpark, including iBeacon capabilities, will be available in March.

What's the point?
The iBeacons "enable the At The Ballpark app to play specific videos or offer coupons." This exact story repeats for other retail companies that have picked the technology up at various points, including giants like Target and Walmart. The iBeacons are simply a way to target advertising based on location, with better indoor precision and lower power consumption than GPS. Aiding these applications along, Apple integrated iBeacon support into the iOS location framework and further blurred the lines between iBeacon and other positioning services by introducing location-based-advertising features that operated on geofencing alone.

Some creative thinkers did develop more complex applications for the iBeacon. One of the early adopters was a company called Exact Editions, which prepared the Apple Newsstand version of a number of major magazines back when "readable on iPad" was thought to be the future of print media. Exact Editions explored a "read for free" feature where partner magazines would be freely accessible to users at partnering locations like coffee shops and book stores. This does not seem to have been a success, but using the proximity of an iBeacon to unlock some paywalled media is at least a little creative, if probably ill-advised given the security issues we'll discuss later.

The world of applications raises interesting questions about the other half of the mobile ecosystem: how did this all work on Android? iOS has built-in support for iBeacons: an operating system service scans for iBeacons and dispatches notifications to apps as appropriate. On Android, there has never been this type of OS-level support, but Android apps have access to relatively rich low-level Bluetooth functionality and can easily scan for iBeacons themselves. Several popular libraries exist for this purpose, and it's not unusual for them to be used to give ported cross-platform apps more or less equivalent functionality.
These apps do need to run in the background if they're to notify the user proactively, but especially back in 2013 Android was far more generous about background work than iOS. iBeacons found expanded success through ShopKick, a retail loyalty platform that installed iBeacons in locations of some major retailers like American Eagle. These powered location-based advertising and offers in the ShopKick app as well as retailer-specific apps, which is kind of the start of a larger, more seamless network, but it doesn't seem to have caught on. Honestly, consumers just don't seem to want location-based advertising that much. Maybe because, when you're standing in an American Eagle, getting ads for products carried in the American Eagle is inane and irritating. iBeacons sort of foresaw cooler screens in this regard.

To be completely honest, I'm skeptical that anyone ever really believed in the location-based advertising thing. I mean, I don't know, the advertising industry is pretty good at self-deception, but I don't think there were ever any real signs of hyper-local smartphone-based advertising taking off. I think the play was always data collection, and advertising and special offers just provided a convenient cover story.

Real Applications

iBeacons are one of those technologies that feels like a flop from a consumer perspective but has, in actuality, enjoyed surprisingly widespread deployment. The reason, of course, is data mining. To Apple's credit, they took a set of precautions in the design of the iBeacon iOS features that probably felt sufficient in 2013. Despite the fact that a lot of journalists described iBeacons as being used to "notify a user to install an app," that was never actually a capability (a very similar-seeming iOS feature attached to Siri actually used conventional geofencing rather than iBeacons).
iBeacons only did anything if the user already had an app installed that either scanned for iBeacons when in the foreground or registered for region notifications. In theory, this limited iBeacons to companies with which consumers already had some kind of relationship. What Apple may not have foreseen, or perhaps simply accepted, is the incredible willingness of your typical consumer brand to sell that relationship to anyone who would pay. iBeacons became, in practice, just another major advancement in pervasive consumer surveillance. The New York Times reported in 2019 that popular applications were including SDKs that reported iBeacon contacts to third-party consumer data brokers. This data became one of several streams used to sell consumer location history to advertisers.

It's a little difficult to assign blame and credit here. Apple, to their credit, kept iBeacon features in iOS relatively locked down, which suggests that they weren't trying to facilitate massive location surveillance. That said, Apple always marketed iBeacon to developers based on exactly this kind of consumer tracking and micro-targeting; they just intended for it to be done under the auspices of a single brand. That the industry would form data exchanges and recruit random apps into reporting everything in your proximity isn't surprising, but maybe Apple failed to foresee it. They certainly weren't the worst offender.

Apple's promotion of iBeacon opened the floodgates for everyone else to do the same thing. During 2014 and 2015, Facebook started offering Bluetooth beacons to businesses that were ostensibly supposed to facilitate in-app special offers (though I'm not sure that those ever really materialized) but were pretty transparently just a location data collection play. Google jumped into the fray in signature Google style, with an offering that was confusing, semi-secret, incoherently marketed, and short-lived.
Google's Project Beacon, or Google My Business, also shipped free Bluetooth beacons out to businesses to give Android location services a boost. Google My Business seems to have been the source of a fair amount of confusion even at the time, and we can virtually guarantee that (as reporters speculated at the time) Google was intentionally vague and evasive about the system to avoid negative attention from privacy advocates. In the case of Facebook, well, they don't have the level of opsec that Google does, so things are a little better documented: leaked documents show that Facebook worried that users would 'freak out' and spread 'negative memes' about the program. The company recently removed the Facebook Bluetooth beacons section from their website.

The real deployment of iBeacons and closely related third-party iBeacon-like products [1] occurred at massive scale, but largely in secret. It became yet another dark project of the advertising-industrial complex, perhaps the most successful yet of a long-running series of retail consumer surveillance systems.

Payments

One interesting thing about iBeacon is how it was compared to NFC. The two really aren't that similar, especially considering the vast difference in usable ranges, but NFC was the first radio technology to be adopted for "location marketing" applications. "Tap your phone to see our menu," kinds of things. Back in 2013, Apple had rather notably not implemented NFC in its products, despite its increasing adoption on Android.

But, there is much more to this story than learning about new iPads and getting a surprise notification that you are eligible for a subsidized iPhone upgrade. What we're seeing is Apple pioneering the way mobile devices can be utilized to make shopping a better experience for consumers. What we're seeing is Apple putting its money where its mouth is when it decided not to support NFC.
(MacObserver)

Some commentators viewed iBeacon as Apple's response to NFC, and I think there's more to that than you might expect. In early marketing, Apple kept positioning iBeacon for payments. That's a little weird, right, because iBeacons are a purely one-way broadcast system. Still, part of Apple's flagship iBeacon implementation was a payment system. Here's how he describes the purchase he made there, using his iPhone and the EasyPay system:

"We started by using the iPhone to scan the product barcode and then we had to enter our Apple ID, pretty much the way we would for any online Apple purchase [using the credit card data on file with one's Apple account]. The one key difference was that this transaction ended with a digital receipt, one that we could show to a clerk if anyone stopped us on the way out."

Apple Wallet only kinda-sorta existed at the time, although Apple was clearly already midway into a project to expand into consumer payments. It says a lot about this point in time in phone-based payments that several reporters talk about iBeacon payments as a feature of iTunes, since Apple was mostly implementing general-purpose billing by bolting it onto iTunes accounts. It seems like what happened is that Apple committed to developing a pay-by-phone solution, but decided against NFC. To be competitive with other entrants in the pay-by-phone market, they had to come up with some kind of technical solution to interact with retail POS, and iBeacon was their choice.

From a modern perspective this seems outright insane; like, Bluetooth broadcasts are obviously not the right way to initiate a payment flow, and besides, there's a whole industry-standard stack dedicated to that purpose... built on NFC. But remember, this was 2013! EMV was not yet in meaningful use in the US; several major banks and payment networks had just committed to rolling it out in 2012, and every American can tell you that the process was long and torturous.
Because of the stringent security standards around EMV, Android devices did not implement EMV until ARM secure enclaves became widely available. EMVCo, the industry body behind EMV, did not have a certification program for smartphones until 2016. Android phones offered several "tap-to-pay" solutions, from Google's frequently rebranded Google Wallet^w^wAndroid Pay^w^wGoogle Wallet to Verizon's embarrassingly rebranded ISIS^wSoftcard, and Samsung Pay. All of these initially relied on proprietary NFC protocols with bespoke payment terminal implementations. This was sketchy enough, and few enough phones actually had NFC, that the most successful US pay-by-phone implementations like Walmart's and Starbucks' used barcodes for communication. It would take almost a decade before things really settled down and smartphones all just implemented EMV.

So, in that context, Apple's decision isn't so odd. They must have figured that iBeacon could solve the same "initial handshake" problem as Walmart's QR codes, but more conveniently and using radio hardware that they already included in their phones. iBeacon-based payment flows used the iBeacon only to inform the phone of what payment devices were nearby; everything else happened via interaction with a cloud service or whatever mechanism the payment vendor chose to implement. Apple used their proprietary payments system through what would become your Apple Account, PayPal slapped together an iBeacon-based fast path to PayPal transfers, etc.

I don't think that Apple's iBeacon-based payments solution ever really shipped. It did get some use, most notably by Apple, but these all seem to have been early-stage implementations, and the complete end-to-end SDK that a lot of developers expected never landed. You might remember that this was a very chaotic time in phone-based payments; solutions were coming and going. When Apple Pay was properly announced a year after iBeacons, there was little mention of Bluetooth.
By the time in-store Apple Pay became common, Apple had given up and adopted NFC.

Limitations

One of the great weaknesses of iBeacon was the security design, or lack thereof. iBeacon advertisements were sent in plaintext with no authentication of any type. This did, of course, radically simplify implementation, but it also made iBeacon untrustworthy for any important purpose. It is quite trivial, with a device like an Android phone, to "clone" any iBeacon and transmit its identifiers wherever you want. This problem might have killed off the whole location-based-paywall-unlocking concept had market forces not already done so. It also opens the door to a lot of nuisance attacks on iBeacon-based location marketing, which may have limited the depth of iBeacon features in major apps.

iBeacon was also positioned as a sort of local positioning system, but it really wasn't. iBeacon offers no actual time-of-flight measurements, only RSSI-based estimation of range. Even with correct on-site calibration (which can be aided by adjusting a fixed RSSI-range bias value included in some iBeacon advertisements), this type of estimation is very inaccurate, and in my little experiments with a Bluetooth beacon location library I can see swings from 30m to 70m estimated range based only on how I hold my phone. iBeacon positioning has never been accurate enough to do more than assert whether or not a phone is "near" the beacon, and "near" can take on different values depending on the beacon's transmit power.

Developers have long looked towards Bluetooth as a potential local positioning solution, and it's never quite delivered. The industry is now turning towards Ultra-Wideband or UWB technology, which combines a high-rate, high-bandwidth radio signal with a time-of-flight radio ranging protocol to provide very accurate distance measurements. Apple is, once again, a technical leader in this field, and UWB radios have been integrated into the iPhone 11 and later.
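To make the imprecision concrete, here's a minimal sketch of the kind of RSSI-based range estimate that beacon ranging relies on, assuming the standard log-distance path loss model; the calibration constant and exponent here are illustrative defaults, not values from any particular implementation.

```python
def estimate_range(rssi: float, measured_power: float = -59.0,
                   path_loss_exponent: float = 2.0) -> float:
    """Estimate beacon distance in meters from a single RSSI sample.

    measured_power is the calibrated RSSI at 1 m that the beacon includes
    in its advertisement; path_loss_exponent is ~2 in free space and
    higher indoors. Model: d = 10 ** ((measured_power - rssi) / (10 * n)).
    """
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exponent))

# At the calibration point the estimate is exactly 1 m...
print(estimate_range(-59.0))            # 1.0
# ...but with n = 2, every 6 dB of fading doubles the estimated range,
# so a little multipath or a hand around the phone swings the number:
print(round(estimate_range(-85.0), 1))  # 20.0
print(round(estimate_range(-91.0), 1))  # 39.8
```

Since a hand, a body, or a shelf can easily cost 6 dB, this is why beacon ranging is best read as coarse "immediate/near/far" buckets rather than meters.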
Senescence

iBeacon arrived to some fanfare, quietly proliferated in the shadows of the advertising industry, and then faded away. The Wikipedia article on iBeacons hasn't really been updated since support on Windows Phone was relevant. Apple doesn't much talk about iBeacons any more, and their compatriots Facebook and Google both sunset their beacon programs years ago.

Part of the problem is, well, the pervasive surveillance thing. The idea of Bluetooth beacons cooperating with your phone to track your every move proved unpopular with the public, and so progressively tighter privacy restrictions in mobile operating systems and app stores have clamped down on every grocery store app selling location data to whatever broker bids the most. I mean, they still do, but it's gotten harder to use Bluetooth as an aid. Even Android, the platform of "do whatever you want in the background, battery be damned," strongly discourages Bluetooth scanning by non-foreground apps.

Still, the basic technology remains in widespread use. BLE beacons have absolutely proliferated; there are plenty of apps you can use to list nearby beacons, and there almost certainly are nearby beacons. One of my cars has, like, four separate BLE beacons going on all the time, related to a phone-based keyless entry system that I don't think the automaker even supports any more. Bluetooth beacons, as a basic primitive, are so useful that they get thrown into all kinds of applications. My earbuds are a BLE beacon, which the (terrible, miserable, no-good) Bose app uses to detect their proximity when they're paired to another device. A lot of smart home devices like light bulbs are beacons. The irony, perhaps, of iBeacon-based location tracking is that it's a victim of its own success.
There is so much "background" BLE beacon activity that you scarcely need to add purpose-built beacons to track users, and only privacy measures in mobile operating systems and the beacons themselves (some of which rotate IDs) save us. Apple is no exception to the widespread use of Bluetooth beacons: iBeacon lives on in virtually every Apple device. If you do try out a Bluetooth beacon scanning app, you'll discover pretty much every Apple product in a 30 meter radius. From MacBooks Pro to AirPods, almost all Apple products transmit iBeacon advertisements to their surroundings. These are used for the initial handshake process of peer-to-peer features like AirDrop, and Find My/AirTag technology seems to be derived from the iBeacon protocol (in the sense that anything can be derived from such a straightforward design). Of course, pretty much all of these applications now randomize identifiers to prevent passive use of device advertisements for long-term tracking.

Here's some good news: iBeacons are readily available in a variety of form factors, and they are very cheap. Lots of libraries exist for working with them. If you've ever wanted some sort of location-based behavior for something like home automation, iBeacons might offer a good solution. They're neat, in an old technology way. Retrotech from the different world of 2013.

It's retro in more ways than one. It's funny, and a bit quaint, to read the contemporary privacy concerns around iBeacon. If only they had known how bad things would get! Bluetooth beacons were the least of our concerns.

[1] Things can be a little confusing here because the iBeacon is such a straightforward concept, and Apple's implementation is so simple. We could define "iBeacon" as including only officially endorsed products from Apple affiliates, or as including any device that behaves the same as official products (e.g.
by using the iBeacon BLE advertisement type codes), or as any device that is performing substantially the same function (but using a different advertising format). I usually mean the last of these three, as there isn't really much difference between an iBeacon and ten million other BLE beacons that are doing the same thing with a slightly different identifier format. Facebook and Google's efforts fall into this camp.

2025-04-18 white alice

When we last talked about Troposcatter, it was Pole Vault. Pole Vault was the first troposcatter communications network, on the east coast of Canada. It would not be alone for long. By the time the first Pole Vault stations were complete, work was already underway on a similar network for Alaska: the White Alice Communications System, WACS.

Alaska has long posed a challenge for communications. In the 1860s, Western Union wanted to extend their telegraph network from the United States into Europe. Although the technology would be demonstrated shortly after, undersea telegraph cables were still notional, and it seemed that a route that minimized the ocean crossing would be preferable---of course, that route maximized the length on land, stretching through present-day Alaska and Siberia on each side of the Bering Strait. This task proved more formidable than Western Union had imagined, and the first transatlantic telegraph cable (on a much further south crossing) was completed before the arctic segments of the overland route. The "Western Union Telegraph Expedition" abandoned its work, leaving a telegraph line well into British Columbia that would serve as one of the principal communications assets in the region for decades after.

This ill-fated telegraph line failed to link San Francisco to Moscow, but its aftermath included a much larger impact on Russian interests in North America: the purchase of Alaska in 1867. Shortly after, the US military began its expansion into the new frontier. The Army Signal Corps, mostly to fulfill its function in observing the weather, built and staffed small installations that stretched further and further west. Later, in the 1890s, a gold rush brought a sudden influx of American settlers to Alaska's rugged terrain. The sudden economic importance of the Klondike, and the rather colorful personalities of the prospectors looking to exploit it, created a much larger need for military presence.
Fortuitously, many of the forts present had been built by the Signal Corps, which had already started on lines of communication. Construction was difficult, though, and without Alaskan communications as a major priority there was only minimal coverage. Things changed in 1900, when Congress appropriated a substantial budget to the Washington-Alaska Military Cable and Telegraph System. The Signal Corps set on Alaska like, well, like an army, and extensive telegraph and later telephone lines were built to link the various military outposts. Later renamed the Alaska Communications System, these cables brought the first telecommunication to much of Alaska. The arrival of the telegraph was quite revolutionary for remote towns, which could now receive news in real time that had previously been delayed by as much as a year [1]. Telegraphy was important to civilians as well, something that Congress had anticipated: the original act authorizing the Alaska Communications System dictated that it would carry commercial traffic as well. The military had an unusual role in Alaska, and one aspect of it was telecommunications provider.

In 1925, an outbreak of diphtheria began to claim the lives of children in Nome, a town in far western Alaska on the Seward Peninsula. The daring winter delivery of antidiphtheria serum by dog sled is widely remembered due to its tangential connection to the Iditarod, but there were two sides of the "serum run." The message from Nome's sole doctor requesting the urgent shipment was transmitted from Nome to the Public Health Service in DC over the Alaska Communications System. It gives us some perspective on the importance of the telegraph in Alaska that the 600-mile route to Nome took five days and many feats of heroism---but at the same time could be crossed instantaneously by telegrams. The Alaska Communications System included some use of radio from the beginning.
A pair of HF radio stations specifically handled traffic for Nome, covering a 100-mile stretch too difficult for even the intrepid Signal Corps. While not a totally new technology to the military, radio was quite new to the telegraph business, and the ACS link to Nome was probably the first commercial radiotelegraph system on the continent. By the 1930s, the condition of the Alaskan telegraph cables had decayed while demand for telephony had increased. Much of ACS was upgraded and modernized to medium-frequency radiotelephone links. In towns small and large, even in Anchorage itself, the sole telephone connection to the contiguous United States was an ACS telephone installed in the general store.

Alaskan communications became an even greater focus of the military with the onset of the Second World War. In June 1942, six months after Pearl Harbor, the Japanese attacked Fort Mears in the Aleutian Islands. Fort Mears had no telecommunications connections, so despite the proximity of other airbases support was slow to come. The lack of a telegraph or telephone line contributed to 43 deaths and focused attention on the ACS. By 1944, the Army Signal Corps had a workforce of 2,000 dedicated to Alaska. WWII brought more than one kind of attention to Alaska. Several Japanese assaults on the Aleutian Islands represented the largest threats to American soil outside of Pearl Harbor, showing both Alaska's vulnerability and the strategic importance given to it by its relative proximity to Eurasia.

WWII ended but, in 1949, the USSR demonstrated an atomic weapon. A combination of Soviet expansionism and the new specter of nuclear war turned military planners towards air defense. Like the Canadian Maritimes in the East, Alaska covered a huge swath of the airspace through which Soviet bombers might approach the US. Alaska was, once again, a battleground. The early Cold War military buildup of Alaska was particularly heavy on air defense.
During the late '40s and early '50s, more than a dozen new radar and control sites were built. The doctrine of ground-controlled interception requires real-time communication between radar centers, stressing the limited number of voice channels available on the ACS. As early as 1948, the Signal Corps had begun experiments to choose an upgrade path. Canadian early-warning radar networks, including the Distant Early Warning Line, were on the drawing board and would require many communications channels in particularly remote parts of Alaska. Initially, point-to-point microwave was used in relatively favorable terrain (where the construction of relay stations about every 50 miles was practical). For the more difficult segments, the Signal Corps found that VHF radio could provide useful communications at ranges over 100 miles. VHF radiotelephones were installed at air defense radar stations, but there was a big problem: the airspace surveillance radar of the 1950s also operated in the VHF band, and caused so much interference with the radiotelephones that they were difficult to use. The radar stations were probably the most important users of the network, so VHF would have to be abandoned. In 1954, a military study group was formed to evaluate options for the ACS. That group, in turn, requested a proposal from AT&T. Bell Laboratories had been involved in the design and evaluation of Pole Vault, the first sites of which had been completed two years before, so they naturally positioned troposcatter as the best option. It is worth mentioning the unusual relationship AT&T had with Alaska, or rather, the lack of one. While the Bell System enjoyed a monopoly on telephony in most of the United States [2], they had never expanded into Alaska. Alaska was only a territory, after all, and a very sparsely populated one at that. 
The paucity of long-distance leads to or from Alaska (only one connected to Anchorage, for example) limited the potential for integration of Alaska into the broader Bell System anyway. Long-distance telecommunications in Alaska were a military project, and AT&T was involved only as a vendor. Because of the high cost of troposcatter stations, proven during Pole Vault construction, a hybrid was proposed: microwave stations could be spaced every 50 miles along the road network, while troposcatter would cover the long stretches without roads. In 1955, the Signal Corps awarded Western Electric a contract for the White Alice Communications System. The Corps of Engineers surveyed the locations of 31 sites, verifying each by constructing a temporary antenna tower. The Corps of Engineers led construction of the first 11 sites, and the final 20 were built on contract by Western Electric itself. All sites used radio equipment furnished by Western Electric and were built to Western Electric designs. Construction was far from straightforward. Difficult conditions delayed completion of the original network until 1959, two years later than intended. A much larger issue, though, was the budget. The original WACS was expected to cost $38 million. By the time the first 31 sites were complete, the bill totaled $113 million---equivalent to over a billion dollars today. Western Electric had underestimated not only the complexity of the sites but the difficulty of their construction. A WECo report read: On numerous occasions, the men were forced to surrender before the onslaught of cold, wind and snow and were immobilized for days, even weeks. 
This ordeal of waiting was ofttimes made doubly galling by the knowledge that supplies and parts needed for the job were only a few miles distant but inaccessible because the white wall of winter had become impenetrable. WACS' initial capability included 31 stations, of which 22 were troposcatter and the remainder microwave only (using Western Electric's TD-2). A few stations were equipped with both troposcatter and microwave, serving as relays between the two carriers. In 1958, construction started on the Ballistic Missile Early Warning System, or BMEWS. BMEWS was a long-range radar system intended to provide early warning of a Soviet missile attack. BMEWS would provide as little as 15 minutes of warning, requiring that alerts reach NORAD in Colorado as quickly as possible. One BMEWS set was installed in Greenland, where the Pole Vault system was expanded to provide communications. Similarly, the BMEWS set at Clear Missile Early Warning Station in central Alaska relied on White Alice. Planners were concerned about the ability of the Soviet Union to suppress an alert by destroying infrastructure, so two redundant chains of microwave sites were added to White Alice. One stretched from Clear to Ketchikan, where it connected to an undersea cable to Seattle. The other went east, towards Canada, where it met existing telephone cables on the Alaska Highway. A further expansion of White Alice started the next year, in 1959. Troposcatter sites were extended through the Aleutian Islands in "Project Stretchout" to serve new DEW Line stations. During the 1960s, existing WACS sites were expanded and new antennas were installed at Air Force installations. These were generally microwave links connecting the airbases to existing troposcatter stations. In total, WACS reached 71 sites. Four large sites served as key switching points with multiple radio links and telephone exchanges. 
Pedro Dome, for example, had a 15,000-square-foot communications building with dormitories, a power plant, and extensive equipment rooms. Support facilities included a vehicle maintenance building, storage warehouse, and extensive fuel tanks. A few WACS sites even had tramways for access between the "lower camp" (where equipment and personnel were housed) and the "upper camp" (where the antennas were located)... although they apparently did not fare well in the Alaskan conditions. While Western Electric had initially planned for six people and 25 kW of power at each station, the final requirements were typically 20 people and 120-180 kW of generator capacity. Some sites stored over half a million gallons of fuel---conditions often meant that resupply was only possible during the summer. Besides troposcatter and microwave radios, the equipment included tandem telephone exchanges. These are described in a couple of documents as "ATSS-4A," ATSS standing for Alaska Telephone Switching System. Based on the naming and some circumstantial evidence, I believe these were Western Electric 4A crossbar exchanges. They were later incorporated into AUTOVON, but also handled commercial long-distance traffic between Alaskan towns. With troposcatter come large antennas, and depending on path lengths, WACS troposcatter antennas ranged from 30' dishes to 120' "billboard" antennas similar to those seen at Pole Vault sites. The larger antennas handled up to 50 kW of transmit power. Some 60' and 120' antennas included their own fuel tanks and steam plants that heated the antennas through winter to minimize snow accumulation. WACS represented an enormous improvement in Alaskan communications. The entire system was multi-channel with redundancy in many key parts of the network. Outside of the larger cities, WACS often brought the first usable long-distance telephone service. Even in Anchorage, WACS provided the only multi-channel connection. 
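Some rough arithmetic shows why the fuel stores were so enormous. These are my own illustrative assumptions (only the 120-180 kW generator range and half-million-gallon storage figure come from the records above): a station running in the middle of that range burns hundreds of gallons of diesel a day for power alone, before antenna heating, and with resupply limited to the summer, years of reserve was only prudent.

```python
# Hypothetical figures for illustration; only the 150 kW load (middle of
# the documented 120-180 kW range) and 500,000 gallon storage figure come
# from the WACS accounts above.
load_kw = 150                 # continuous generator load, assumed
kwh_per_gallon = 14           # rough diesel-generator yield, assumed
storage_gal = 500_000         # largest documented on-site fuel storage

gal_per_day = load_kw * 24 / kwh_per_gallon
print(round(gal_per_day))     # roughly 257 gallons per day for power alone

days = storage_gal / gal_per_day
print(round(days / 365, 1))   # several years of reserve at that burn rate
```

Even under these generous assumptions, a half-million-gallon tank farm looks less like extravagance than like insurance against a missed summer barge.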
Despite these achievements, WACS was set for much the same fate as other troposcatter systems: obsolescence after the invention of communications satellites. The experimental satellites Telstar 1 and 2 launched in the early 1960s, and the military began a shift towards satellite communications shortly after. Besides, the formidable cost of WACS had become a political issue. Maintenance of the system overran estimates by just as much as construction, and placing this cost on taxpayers was controversial since much of the traffic carried by the system consisted of regular commercial telephone calls. Moreover, a general reluctance to allocate money to WACS had led to a general decay of the system. WACS capacity was insufficient for the rapidly increasing long-distance telephone traffic of the '60s, and due to decreased maintenance funding, reliability was beginning to decline. The retirement of a Cold War communications system is not unusual, but the particular fate of WACS is. It entered a long second life. After acting as the sole long-distance provider for 60 years, the military began its retreat. In 1969, Congress passed the Alaska Communications Disposal Act. It called for complete divestment of the Alaska Communications System and WACS, to a private owner determined by a bidding process. Several large independent communications companies bid, but the winner was RCA. Committing to a $28.5 million purchase price followed by $30 million in upgrades, RCA reorganized the Alaska Communications System as RCA Alascom. Transfer of the many ACS assets from the military to RCA took 13 years, involving both outright transfer of property and complex lease agreements on sites colocated with military installations. RCA's interest in Alaskan communications was closely connected to the coming satellite revolution: RCA had just built the Bartlett Earth Station, the first satellite ground station in Alaska. 
While Bartlett was originally an ACS asset owned by the Signal Corps, it became just the first of multiple ground stations that RCA would build for Alascom. Several of the new ground stations were colocated with WACS sites, establishing satellite as an alternative to the troposcatter links. Alascom appears to have been the first domestic satellite voice network in commercial use, initially relying on a Canadian communications satellite [3]. In 1974, SATCOM 1 and 2 launched. These were not the first commercial communications satellites, but they represented a significant increase in capacity over previous commercial designs and are sometimes thought of as the true beginning of the satellite communications era. Both were built and owned by RCA, and Alascom took advantage of the new transponders. At the same time, Alascom launched a modernization effort. Twenty-two of the former WACS stations were converted to satellite ground stations, a project that took much of the '70s as Alascom struggled with the same conditions that had made WACS so challenging to begin with. Modernization also included the installation of DMS-10 telephone switches and conversion of some connections to digital. A series of regulatory and business changes in the 1970s led RCA to step away from the domestic communications industry. In 1979, RCA sold Alascom to Pacific Power and Light, this time for $200 million and $90 million in debt. PP&L continued on much the same trajectory, expanding the Alascom system to over 200 ground stations and launching the satellite Aurora I---the first of a small series of satellites that gave Alaska the distinction of being the only state with its own satellite communications network. For much of the '70s to the '00s, large parts of Alaska relied on satellite relay for calls between towns. 
In a slight twist of irony considering its long lack of interest in the state, AT&T purchased parts of Alascom from PP&L in 1995, forming AT&T Alascom, which has since faded away as an independent brand. Other parts of the former ACS network, generally non-toll (or non-long-distance) operations, were split off into then-PP&L subsidiary CenturyTel. While CenturyTel has since merged into CenturyLink, the Alaskan assets were first sold to Alaska Communications. Alaska Communications considers itself the successor of the ACS heritage, giving them a claim to over 100 years of communications history. As electronics technology has continued to improve, penetration of microwave relays into inland Alaska has increased. Fewer towns rely on satellite today than in the 1970s, and the half-second latency to geosynchronous orbit is probably not missed. Alaska communications have also become more competitive, with long-distance connectivity available from General Communications (GCI) as well as AT&T and Alaska Communications. Still, the legacy of Alaska's complex and expensive long-distance infrastructure echoes in our telephone bills. State and federal regulators have allowed for extra fees on telephone service in Alaska and calls into Alaska, both intended to offset the high cost of infrastructure. Alaska is generally the most expensive long-distance calling destination in the United States, even when considering the territories. But what of White Alice? The history of the Alaska Communications System's transition to private ownership is complex and not especially well documented. While RCA's winning bid following the Alaska Communications Disposal Act set the big picture, the actual details of the transition were established by many individual negotiations spanning over a decade. Depending on the station, WACS troposcatter sites generally conveyed to RCA in 1973 or 1974. Some, colocated with active military installations, were leased rather than included in the sale. 
RCA generally decommissioned each WACS site once a satellite ground station was ready to replace it, either on-site or nearby. For some WACS sites, this meant the troposcatter equipment was shut down in 1973. Others remained in use later. The Boswell Bay troposcatter station seems to have been the last turned down, in 1985. The 1980s were decidedly the end of WACS. Alascom's sale to PP&L cemented the plan to shut down all troposcatter operations, and the 1980 Comprehensive Environmental Response, Compensation, and Liability Act led to the establishment of the Formerly Used Defense Sites (FUDS) program within DoD. Under FUDS, the Corps of Engineers surveyed the disused WACS sites and found nearly all had significant contamination by asbestos (used in seemingly every building material in the '50s and '60s) and leaked fuel oil. As a result, most White Alice sites were demolished between 1986 and 1999. The cost of demolition and remediation in such remote locations was sometimes greater than the original construction. No WACS sites remain intact today. Postscript: A 1988 Corps of Engineers historical inventory of WACS, prepared due to the demolition of many of the stations, mentions that meteor burst communications might replace troposcatter. Meteor burst is a fascinating communications mode, similar in many ways to troposcatter but with the twist that the reflecting surface is not the troposphere but the ionized trail of meteors entering the atmosphere. Meteor burst connections only work when there is a meteor actively vaporizing in the upper atmosphere, but atmospheric entry of small meteors is common enough that meteor burst communications are practical for low-rate packetized communications. For example, meteor burst has been used for large weather and agricultural telemetry systems. The Alaska Meteor Burst Communications System was implemented in 1977 by several federal agencies, and was used primarily for automated environmental telemetry. 
Unlike most meteor burst systems, though, it seems to have been used for real-time communications by the BLM and FAA. I can't find much information, but they seem to have built portable teleprinter terminals for this use. Even more interesting, the Air Force's Alaskan Air Command built its own meteor burst network around the same time. This network was entirely for real-time use, and demonstrated the successful transmission of radar track data from radar stations across the state to Elmendorf Air Force Base. Even better, the Air Force experimented with the use of meteor burst for intercept control by fitting aircraft with a small speech synthesizer that translated coded messages into short phrases. The Air Force experimented with several meteor burst systems during the Cold War, anticipating that it might be a survivable communications system in wartime. More details on these will have to fill a future article. [1] Crews of the Western Union Telegraph Expedition reportedly continued work for a full year after the completion of the transatlantic telegraph cable, because news of it hadn't reached them yet. [2] Eliding here some complexities like GTE and their relationship to the Bell System. [3] Perhaps owing to the large size of the country and many geographical challenges to cable laying, Canada has often led North America in satellite communications technology.

2025-04-06 Airfone

We've talked before about carphones, and certainly one of the only ways to make phones even more interesting is to put them in modes of transportation. Installing telephones in cars made a lot of sense when radiotelephones were big and required a lot of power, and they faded away as cellphones became small enough to have a carphone even outside of your car. There is one mode of transportation where the personal cellphone is pretty useless, though: air travel. Most readers are probably well aware that the use of cellular networks while aboard an airliner is prohibited by FCC regulations. There are a lot of urban legends and popular misconceptions about this rule, and fully explaining it would probably require its own article. The short version is that it has to do with the way cellular devices are certified and cellular networks are planned. The technical problems are not impossible to overcome, but honestly, there hasn't been a lot of pressure to make changes. One line of argument that used to make an appearance in cellphones-on-airplanes discourse is the idea that airlines or the telecom industry supported the cellphone ban because it created a captive market for in-flight telephone services. Wait, in-flight telephone services? That theory has never had much to back it up, but with the benefit of hindsight we can soundly rule it out: not only has the rule persisted well past the decline and disappearance of in-flight telephones, in-flight telephones were never commercially successful to begin with. Let's start with John Goeken. A 1984 Washington Post article tells us that "Goeken is what is called, predictably enough, an 'idea man.'" Being the "idea person" must not have had quite the same connotations back then; it was a good time for Goeken. In the 1960s, conversations with customers at his two-way radio shop near Chicago gave him an idea for a repeater network to allow truckers to reach their company offices via CB radio. 
This was the first falling domino in a series that led to the founding of MCI and the end of AT&T's long-distance monopoly. Goeken seems to have been the type who grew bored with success, and he left MCI to take on a series of new ventures. These included an emergency medicine messaging service, electrically illuminated high-viz clothing, and a system called the Mercury Network that built much of the inertia behind the surprisingly advanced computerization of florists [1]. "Goeken's ideas have a way of turning into dollars, millions of them," the Washington Post continued. That was certainly true of MCI, but every ideas guy has their misses. One of the impressive things about Goeken was his ability to execute with speed and determination, though, so even his failures left their mark. This was especially true of one of his ideas that, in the abstract, seemed so solid: what if there were payphones on commercial flights? Goeken's experience with MCI and two-way radios proved valuable, and starting in the mid-1970s he developed prototype air-ground radiotelephones. In its first iteration, "Airfone" consisted of a base unit installed on an aircraft bulkhead that accepted a credit card and released a cordless phone. When the phone was returned to the base station, the credit card was returned to the customer. This equipment was simple enough, but it would require an extensive ground network to connect callers to the telephone system. The infrastructure part of the scheme fell into place when long-distance communications giant Western Union signed on with Goeken Communications to launch a 50/50 joint venture under the name Airfone, Inc. 
Airfone was not the first to attempt air-ground telephony---AT&T had pursued the same concept in the 1970s, but abandoned it after resistance from the FCC (unconvinced the need was great enough to justify frequency allocations) and the airline industry (which had formed a pact, blessed by the government, that prohibited the installation of telephones on aircraft until such time as a mature technology was available to all airlines). Goeken's hard-headed attitude, exemplified in the six-year legal battle he fought against AT&T to create MCI, must have helped to defeat this resistance. Goeken brought technical advances, as well. By 1980, there actually was an air-ground radiotelephone service in general use. The "General Aviation Air-Ground Radiotelephone Service" allocated 12 channels (of duplex pairs) for radiotelephony from general aviation aircraft to the ground, and a company called Wulfsberg had found great success selling equipment for this service under the FliteFone name. Wulfsberg FliteFones were common equipment on business aircraft, where they let executives shout "buy" and "sell" from the air. Goeken referred to this service as evidence of the concept's appeal, but it was inherently limited by the 12 allocated channels. General Aviation Air-Ground Radiotelephone Service, which I will call AGRAS (this is confusing in a way I will discuss shortly), operated at about 450MHz. This UHF band is decidedly line-of-sight, but airplanes are very high up and thus can see a very long ways. The reception radius of an AGRAS transmission, used by the FCC for planning purposes, was 220 miles. This required assigning specific channels to specific cities, and there the limits became quite severe. Albuquerque had exactly one AGRAS channel available. New York City got three. Miami, a busy aviation area but no doubt benefiting from its relative geographical isolation, scored a record-setting four AGRAS channels. 
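That 220-mile planning radius is about what simple line-of-sight geometry predicts. As a rough illustration (my own numbers, not drawn from any FCC filing), the common radio-horizon approximation d ≈ 1.23√h, with d in statute miles and h the antenna height in feet under the usual 4/3-earth refraction assumption, lands right around 220 miles for an airliner at cruise altitude:

```python
import math

def radio_horizon_miles(height_ft: float) -> float:
    """Approximate radio horizon in statute miles for an antenna at
    height_ft feet, using the common 4/3-earth refraction model
    (d ~= 1.23 * sqrt(h))."""
    return 1.23 * math.sqrt(height_ft)

# An airliner at an assumed 32,000 ft cruise altitude:
print(round(radio_horizon_miles(32_000)))  # roughly 220 miles
```

A ground station effectively hears every suitably equipped aircraft out to that horizon, which is why channels had to be assigned city-by-city across such enormous regions.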
That meant AGRAS could only handle four simultaneous calls within a large region... if you were lucky enough for that to be the Miami region; otherwise capacity was even more limited. Back in the 1970s, AT&T had figured that in-flight telephones would be very popular. In a somewhat hand-wavy economic analysis, they figured that about a million people flew on a given day, and about a third of them would want to make telephone calls. That's over 300,000 calls a day, clearly more than the limited AGRAS channels could handle... leading to the FCC's objection that a great deal of spectrum would have to be allocated to make in-flight telephony work. Goeken had a better idea: single-sideband. SSB is a radio modulation technique that allows a radio transmission to fit within a very narrow bandwidth (basically by suppressing the carrier and one of the two redundant sidebands), at the cost of a somewhat more fiddly tuning process for reception. SSB was mostly used down in the HF bands, where the low frequencies meant that bandwidth was acutely limited. Up in the UHF world, bandwidth seemed so plentiful that there was little need for careful modulation techniques... until Goeken found himself asking the FCC for 10 blocks of 29 channels each, a lavish request that wouldn't really fit anywhere in the popular UHF spectrum. The use of UHF SSB, pioneered by Airfone, allowed far more efficient use of the allocation. In 1983, the FCC held hearings on Airfone's request for an experimental license to operate their SSB air-ground radiotelephone system in two allocations (separate air-ground and ground-air ranges) around 850MHz and 895MHz. The total spectrum allocated was about 1.5MHz in each of the two directions. The FCC assented and issued the experimental license in 1984, and Airfone was in business. Airfone initially planned 52 ground stations for the system, although I'm not sure how many were ultimately built---certainly 37 were in progress in 1984, at a cost of about $50 million. 
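Back-of-the-envelope arithmetic (mine, using only the figures above) shows what SSB bought Airfone: 10 blocks of 29 channels is 290 channels, and fitting them into roughly 1.5 MHz implies per-channel spacing on the order of 5 kHz, far narrower than a conventional 25 kHz FM channel plan could manage:

```python
blocks, channels_per_block = 10, 29
total_channels = blocks * channels_per_block  # 290 channels requested

allocation_hz = 1.5e6                         # ~1.5 MHz in each direction
spacing_hz = allocation_hz / total_channels
print(total_channels, round(spacing_hz))      # 290 channels at ~5 kHz each

# The same 1.5 MHz carved into conventional 25 kHz FM channels:
print(int(allocation_hz / 25e3))              # only 60 channels
```

Nearly five times the channel count from the same allocation, which is presumably what made the FCC's modest 1.5 MHz grant workable at all.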
By 1987, the network had reportedly grown to 68. Airfone launched on six national airlines (a true sign of how much airline consolidation has happened in recent decades---there were six national airlines?), typically with four cordless payphones on a 727 or similar aircraft. The airlines received a commission on the calling rates, and Airfone installed the equipment at their own expense. Still, it was expected to be profitable... Airfone projected that 20-30% of passengers would have calls to make. I wish I could share more detail on these ground stations, in part because I assume there was at least some reuse of existing Western Union facilities (WU operated a microwave network at the time and had even dabbled in cellular service in the 1980s). I can't find much info, though. The antennas for the 800MHz band would have been quite small, but the 1980s multiplexing and control equipment probably took a fair share of floorspace. Airfone was off to a strong start, at least in terms of installation base and press coverage. I can't say now how many users it actually had, but things looked good enough that in 1986 Western Union sold their share of the company to GTE. Within a couple of years, Goeken sold his share to GTE as well, reportedly as a result of disagreements with GTE's business strategy. Airfone's SSB innovation was actually quite significant. At the same time, in the 1980s, a competitor called Skytel was trying to get a similar idea off the ground with the existing AGRAS allocation. It doesn't seem to have gone anywhere; I don't think the FCC ever approved it. Despite being an obvious concept, Airfone pretty much launched as a monopoly, operating under an experimental license that named them alone. Unsurprisingly there was some upset over this apparent show of favoritism by the FCC, including from AT&T, which vigorously opposed the experimental license. 
As it happened, the situation would be resolved by going the other way: in 1990, the FCC established the "commercial aviation air-ground service," which normalized the 800 MHz spectrum and made licenses available to other operators. That was six years after Airfone started their build-out, though, giving them a head start that severely limited competition. Still, AT&T was back. AT&T introduced a competing service called AirOne. AirOne was never as widely installed as Airfone but did score some customers, including Southwest Airlines, which only briefly installed AirOne handsets on their fleet. "Only briefly" describes most aspects of AirOne, but we'll get to that in a moment. The suddenly competitive market probably gave GTE Airfone reason to innovate, and besides, a lot had changed in communications technology since Airfone was designed. One of Airfone's biggest limitations was its lack of true roaming: an Airfone call could only last as long as the aircraft was within range of the same ground station. Airfone called this "30 minutes," but you can imagine that people sometimes started their call near the end of this window, and the problem was reportedly much worse. Dropped calls were common, adding insult to the injury that Airfone was decidedly expensive. GTE moved towards digital technology and automation. 1991 saw the launch of Airfone GenStar, which used QAM digital modulation to achieve better call quality and tighter utilization within the existing bandwidth. Further, a new computerized network allowed calls to be handed off from one ground station to another. Capitalizing on the new capacity and reliability, the aircraft equipment was upgraded as well. The payphone-like cordless stations were gone, replaced by handsets installed in seatbacks. First class cabins often got a dedicated handset for every seat; economy might have one handset on each side of a row. The new handsets offered RJ11 jacks, allowing the use of laptop modems while in-flight. 
Truly, it was the future. During the 1990s, satellites were added to the Airfone network as well, improving coverage generally and making telephone calls possible on overseas flights. Of course, the rise of satellite communications also sowed the seeds of Airfone's demise. A company called Aircell, which started out using the cellular network to connect calls to aircraft, rebranded to Gogo and pivoted to satellite-based telephone services. By the late '90s, they were taking market share from Airfone, a trend that would only continue. Besides, for all of its fanfare, Airfone was not exactly a smash hit. Rates were very high, $5 a minute in the late '90s, giving Airfone a reputation as a ripoff that must have cut a great deal into that "20-30% of fliers" they hoped to serve. With the rise of cellphones, many preferred to wait until the aircraft was on the ground to use their own cellphone at a much lower rate. GTE does not seem to have released much in the way of numbers for Airfone, but it probably wasn't making them rich. Goeken, returning to the industry, inadvertently proved this point. He aggressively lobbied the FCC to issue competitive licenses, and ultimately succeeded. His second company in the space, In-Flight Phone Inc., became one of the new competitors to his old company. In-Flight Phone did not last for long. Neither did AT&T AirOne. A 2005 FCC ruling paints a grim picture: Current 800 MHz Air-Ground Radiotelephone Service rules contemplate six competing licensees providing voice and low-speed data services. Six entities were originally licensed under these rules, which required all systems to conform to detailed technical specifications to enable shared use of the air-ground channels. Only three of the six licensees built systems and provided service, and two of those failed for business reasons. In 2002, AT&T pulled out, and Airfone was the only in-flight phone left. By then, GTE had become Verizon, and GTE Airfone was Verizon Airfone. 
Far from a third of passengers, the CEO of Airfone admitted in an interview that a typical flight only saw 2-3 phone calls. Considering the minimum five-figure capital investment in each aircraft, it's hard to imagine that Airfone was profitable---even at $5 a minute. Airfone more or less faded into obscurity, but not without a detour into the press via the events of 9/11. Flight 93, which crashed in Pennsylvania, was equipped with Airfone and passengers made numerous calls. Many of the events on board this aircraft were reconstructed with the assistance of Airfone records, and Claircom (the name of the operator of AT&T AirOne, which never seems to have been well marketed) also produced records related to other aircraft involved in the attacks. Most notably, Flight 93 passenger Todd Beamer had a series of lengthy calls with Airfone operator Lisa Jefferson, through which he relayed many of the events taking place on the plane in real time. During these calls, Beamer seems to have coordinated the effort by passengers to retake control of the plane. The significance of Airfone and Claircom records to 9/11 investigations is such that 9/11 conspiracy theories may be one of the most enduring legacies of Claircom in particular. In an odd acknowledgment of their aggressive pricing, Airfone decided not to bill for any calls made on 9/11, and temporarily introduced steep discounts (to $0.99 a minute) in the weeks after. This rather meager show of generosity did little to reverse the company's fortunes, though, and it was already well into a backslide. In 2006, the FCC auctioned the majority of Airfone's spectrum to new users. The poor utilization of Airfone was a factor in the decision, as well as Airfone's relative lack of innovation compared to newer cellular and satellite systems. In fact, a large portion of the bandwidth was purchased by Gogo, who years later would use it to deliver in-flight WiFi. 
Another portion went to a subsidiary of JetBlue that provided in-flight television. Verizon announced the end of Airfone in 2006, pending an acquisition by JetBlue, and while the acquisition did complete, JetBlue does not seem to have continued Airfone's passenger airline service. A few years later, Gogo bought out JetBlue's communications branch, making them the new monopoly in 800MHz air ground radiotelephony. Gogo only offered telephone service for general aviation aircraft; passenger aircraft telephones had gone the way of the carphone. It's interesting to contrast the fate of Airfone to its sibling, AGRAS. Depending on who you ask, AGRAS refers to the radio service or to the Air Ground Radiotelephone Automated Service operated by Mid-America Computer Corporation. What an incredible set of names. This was a situation a bit like ARINC, the semi-private company that for some time held a monopoly on aviation radio services. MACC had a practical monopoly on general aviation telephone service throughout the US, by operating the billing system for calls. MACC still exists today as a vendor of telecom billing software and this always seems to have been their focus---while I'm not sure, I don't believe that MACC ever operated ground stations, instead distributing rate payments to private companies that operated a handful of ground stations each. Unfortunately the history of this service is quite obscure and I'm not sure how MACC came to operate the system, but I couldn't resist the urge to mention the Mid-America Computer Corporation. AGRAS probably didn't make anyone rich, but it seems to have been generally successful. Wulfsberg FliteFones operating on the AGRAS network gave way to Gogo's business aviation phone service, itself a direct descendant of Airfone technology. 
The former AGRAS allocation at 450MHz somehow came under the control of a company called AURA Network Systems, which for some years has used a temporary FCC waiver of AGRAS rules to operate data services. This year, the FCC began rulemaking to formally reallocate the 450MHz air ground allocation to data services for Advanced Air Mobility, a catch-all term for UAS and air taxi services that everyone expects to radically change the airspace system in coming years. New uses of the band will include command and control for long-range UAS, clearance and collision avoidance for air taxis, and ground and air-based "see and avoid" communications for UAS. This pattern, of issuing a temporary authority to one company and later performing rulemaking to allow other companies to enter, is not unusual for the FCC but does make an interesting recurring theme in aviation radio. It's typical for no real competition to occur, the incumbent provider having been given such a big advantage. When reading about these legacy services, it's always interesting to look at the licenses. ULS has only nine licenses on record for the original 800 MHz air ground service, all expired and originally issued to Airfone (under both GTE and Verizon names), Claircom (operating company for AT&T AirOne), and Skyway Aircraft---this one an oddity, a Florida-based company that seems to have planned to introduce in-flight WiFi but not gotten all the way there. Later rulemaking to open up the 800MHz allocation to more users created a technically separate radio service with two active licenses, both held by AC BidCo. This is an intriguing mystery until you discover that AC BidCo is obviously a front company for Gogo, something they make no effort to hide---the legalities of FCC bidding processes are such that it's very common to use shell companies to hold FCC licenses, and we could speculate that AC BidCo is the Aircraft Communications Bidding Company, created by Gogo for the purpose of the 2006-2008 auctions. 
These two licenses are active for the former Airfone band, and Gogo reportedly continues to use some of the original Airfone ground stations. Gogo's air-ground network, which operates at 800MHz as well as in a 3GHz band allocated specifically to Gogo, was originally based on CDMA cellular technology. The ground stations were essentially cellular stations pointed upwards. It's not clear to me if this CDMA-derived system is still in use, but Gogo relies much more heavily on their Ku-band satellite network today. The 450MHz licenses are fascinating. AURA is the only company to hold current licenses, but the 246 expired and cancelled licenses reveal the scale of the AGRAS business. Airground of Idaho, Inc., until 1999, held a license for an AGRAS ground station on Brundage Mountain near McCall, Idaho. The Arlington Telephone Company, until a 2004 cancellation, held a license for an AGRAS ground station atop their small telephone exchange in Arlington, Nebraska. AGRAS ground stations seem to have been a cottage industry, with multiple licenses to small rural telephone companies and even sole proprietorships. Some of the ground stations appear to have been the roofs of strip mall two-way radio installers. In another life, maybe I would be putting a 450MHz antenna on my roof to make a few dollars. Still, there were incumbents: numerous licenses belonged to SkyTel, which after the decline of AGRAS seems to have refocused on paging and, then, gone the same direction as most paging companies: an eternal twilight as American Messaging ("The Dependable Choice"), promoting innovation in the form of longer-range restaurant coaster pagers. In another life, I'd probably be doing that too. [1] This is probably a topic for a future article, but the Mercury Network was a computerized system that Goeken built for a company called Florist's Telegraph Delivery (FTD).
It was an evolution of FTD's telegraph system that allowed a florist in one city to place an order to be delivered by a florist in another city, thus enabling the long-distance gifting of flowers. There were multiple such networks and they had an enduring influence on the florist industry and broader business telecommunications.

2025-03-01 the cold glow of tritium

I have been slowly working on a book. Don't get too excited, it is on a very niche topic and I will probably eventually barely finish it and then post it here. But in the meantime, I will recount some stories which are related, but don't quite fit in. Today, we'll learn a bit about the self-illumination industry. At the turn of the 20th century, it was discovered that the newfangled element radium could be combined with a phosphor to create a paint that glowed. This was pretty much as cool as it sounds, and commercial radioluminescent paints like Undark went through periods of mass popularity. The most significant application, though, was in the military: radioluminescent paints were applied first to aircraft instruments and later to watches and gunsights. The low light output of radioluminescent paints had a tactical advantage (being very difficult to see from a distance), while the self-powering nature of radioisotopes made them very reliable. The First World War was thus the "killer app" for radioluminescence. Military demand for self-illuminating devices fed a "radium rush" that built mines, processing plants, and manufacturing operations across the country. It also fed, in a sense much too literal, the tragedy of the "Radium Girls." Several self-luminous dial manufacturers knowingly subjected their women painters to shockingly irresponsible conditions, leading inevitably to radium poisoning that disfigured, debilitated, and ultimately killed them. Today, this is a fairly well-known story, a cautionary tale about the nuclear excess and labor exploitation of the 1920s. That the situation persisted into the 1940s is often omitted, perhaps too inconvenient to the narrative that a series of lawsuits, and what was essentially the invention of occupational medicine, headed off the problem in the late 1920s. What did happen after the Radium Girls? What was the fate of the luminous radium industry?
A significant lull in military demand after WWI was hard on the radium business, to say nothing of a series of costly settlements to radium painters despite aggressive efforts to avoid liability. At the same time, significant radium reserves were discovered overseas, triggering a price collapse that closed most of the mines. The two largest manufacturers of radium dials, Radium Dial Company (part of Standard Chemical, which owned most radium mines) and US Radium Corporation (USRC), both went through lean times. Fortunately for them, the advent of the Second World War reignited demand for radioluminescence. The story of Radium Dial and USRC doesn't end in the 1920s---of course it doesn't, luminous paints having had a major 1970s second wind. Both companies survived, in various forms, into the current century. In this article, I will focus on the post-WWII story of radioactive self-illumination and the legacy that we live with today. During its 1920s financial difficulties, the USRC closed the Orange, New Jersey plant famously associated with Radium Girls and opened a new facility in Brooklyn. In 1948, perhaps looking to manage expenses during yet another post-war slump, USRC relocated again to Bloomsburg, Pennsylvania. The Bloomsburg facility, originally a toy factory, operated through a series of generational shifts in self-illuminating technology. The use of radium, with some occasional polonium, for radioluminescence declined in the 1950s and ended entirely in the 1970s. The alpha radiation emitted by those elements is very effective in exciting phosphors but so energetic that it damages them. A longer overall lifespan, and somewhat better safety properties, could be obtained by the use of a beta emitter like strontium or tritium. While strontium was widely used in military applications, civilian products shifted towards tritium, which offered an attractive balance of price and half-life.
USRC handled almost a dozen radioisotopes in Bloomsburg, many of them due to diversified operations during the 1950s that included calibration sources, ionizers, and luminous products built to various specific military requirements. The construction of a metal plating plant enabled further diversification, including foil sources used in research, but eventually became an opportunity for vertical integration. By 1968, USRC had consolidated to only tritium products, with an emphasis on clocks and watches. Radioluminescent clocks were a huge hit, in part because of their practicality, but fashion was definitely a factor. Millions of radioluminescent clocks were sold during the '60s and '70s, many of them by Westclox. Westclox started out as a typical clock company (the United Clock Company in 1885), but joined the atomic age through a long-lived partnership with the Radium Dial Company. The two companies were so close that they became physically so: Radium Dial's occupational health tragedy played out in Ottawa, Illinois, a town Radium Dial had chosen as its headquarters due to its proximity to Westclox in nearby Peru [1]. Westclox sold clocks with radioluminescent dials from the 1920s to probably the 1970s, but one of the interesting things about this corner of atomic history is just how poorly documented it is. Westclox may have switched from radium to tritium at some point, and definitely abandoned radioisotopes entirely at some point. Clock and watch collectors, a rather avid bunch, struggle to tell when. Many consumer radioisotope products are like this: it's surprisingly hard to know whether they even are radioactive. Now, the Radium Dial Company itself folded entirely under a series of radium poisoning lawsuits in the 1930s. Simply being found guilty of one of the most malevolent labor abuses of the era would not stop free enterprise, though, and Radium Dial's president founded a legally distinct company called Luminous Processes just down the street.
Luminous Processes is particularly notable for having continued the production of radium-based clock faces until 1978, making them the last manufacturer of commercial radioluminescent radium products. This also presents compelling circumstantial evidence that Westclox continued to use radium paint until sometime around 1978, which lines up with the general impressions of luminous dial collectors. While the late '70s were the end of Radium Dial, USRC was just beginning its corporate transformation. From 1980 to 1982, a confusing series of spinoffs and mergers led to USR Industries, parent company of Metreal, parent company of Safety Light Corporation, which manufactured products to be marketed and distributed by Isolite. All of these companies were ultimately part of USR Industries, the former USRC, but the org chart sure did get more complex. The Nuclear Regulatory Commission expressed some irritation in their observation, decades later, that they weren't told about any of this restructuring until they noticed it on their own. Safety Light, as the name suggests, focused on a new application for tritium radioluminescence: safety signage, mostly self-powered illuminated exit signs and evacuation signage for aircraft. Safety Light continued to manufacture tritium exit signs until 2007, when they shut down following some tough interactions with the NRC and the EPA. They had been, in the fashion typical of early nuclear industry, disposing of their waste by putting it in a hole in the ground. They had persisted in doing this much longer than was socially acceptable, and ultimately seem to have been bankrupted by their environmental obligations... obligations which then had to be assumed by the Superfund program. The specific form of illumination used in these exit signs, and by far the most common type of radioluminescence today, is the Gaseous Tritium Light Source or GTLS.
GTLS are small glass tubes or vials, usually made with borosilicate glass, containing tritium gas and an internal coating of phosphor. GTLS are simple, robust, and due to the very small amount of tritium required, fairly inexpensive. They can be made large enough to illuminate a letter in an exit sign, or small enough to be embedded into a watch hand. Major applications include watch faces, gun sights, and the keychains of "EDC" enthusiasts. Plenty of GTLS manufacturers have come and gone over the years. In the UK, defense contractor Saunders-Roe got into the GTLS business during WWII. Their GTLS product line moved to Brandhurst Inc., which had a major American subsidiary. It is an interesting observation that the US always seems to have been the biggest market for GTLS, but their manufacture has increasingly shifted overseas. Brandhurst is no longer even British, having gone the way of so much of the nuclear world by becoming Canadian. A merger with Canadian company SRB created SRB Technologies in Pembroke, Ontario, which continues to manufacture GTLS today. Other Canadian GTLS manufacturers have not fared as well. Shield Source Inc., of Peterborough, Ontario, began filling GTLS vials in 1987. I can't find a whole lot of information on Shield Source's early days, but they seem to have mostly made tubes for exit signs, and perhaps some other self-powered signage. In 2012, the Canadian Nuclear Safety Commission (CNSC) detected a discrepancy in Shield Source's tritium emissions monitoring. I am not sure of the exact details, because CNSC seems to make less information public in general than the US NRC [2]. Here's what appears to have happened: tritium is a gas, which makes it tricky to safely handle. Fortunately, the activity of tritium is relatively low and its half life is relatively short. 
This means that it's acceptable to manage everyday leakage (for example when connecting and disconnecting things) in a tritium workspace by ventilating it to a stack, releasing it to the atmosphere for dilution and decay. The license of a tritium facility will specify a limit for how much radioactivity can be released this way, and monitoring systems (usually several layers of monitoring systems) have to be used to ensure that the permit limit is not exceeded. In the case of Shield Source, some kind of configuration error with the tritium ventilation monitoring system combined with a failure to adequately test and audit it. The CNSC discovered that during 2010 and 2011, the facility had undercounted their tritium emissions, and in fact exceeded the limits of their license. Air samplers located around the facility, some of which were also validated by an independent laboratory, did not detect tritium in excess of the environmental limits. This suggests that the excess releases probably did not have an adverse impact on human health or the environment. Still, exceeding license terms and then failing to report and correct the problem for two years is a very serious failure by a licensee. In 2012, when the problem was discovered, CNSC ordered Shield Source's license modified to prohibit actual tritium handling. This can seem like an odd maneuver but something similar can happen in the US. Just having radioisotope-contaminated equipment, storing test sources, and managing radioactive waste requires a license. By modifying Shield Source's license to prohibit tritium vial filling, the CNSC effectively shut the plant down while allowing Shield Source to continue their radiological protection and waste management functions. This is the same reason that long-defunct radiological facilities often still hold licenses from NRC in the US: they retain the licenses to allow them to store and process waste and contaminated materials during decommissioning. 
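
The license-limit accounting described above is, arithmetically, trivial bookkeeping; what failed at Shield Source was the monitoring that feeds it. Here is a minimal sketch with entirely hypothetical numbers (the `ANNUAL_LIMIT_GBQ` figure and the weekly release rates are illustrative assumptions, not Shield Source's actual license terms):

```python
# Hypothetical annual stack-release limit for a tritium facility, in GBq.
# Real limits vary by facility and are set by the regulator in the license.
ANNUAL_LIMIT_GBQ = 1000.0

def check_releases(weekly_releases_gbq):
    """Sum a year of weekly stack measurements and compare to the limit."""
    total = sum(weekly_releases_gbq)
    return total, total <= ANNUAL_LIMIT_GBQ

# A steady (hypothetical) release rate stays under the limit...
total, ok = check_releases([18.5] * 52)
print(f"{total:.0f} GBq released, within limit: {ok}")
# ...but a misconfigured monitor that under-reports its measurements would
# make an over-limit year look exactly like this compliant one.
```

The comparison, of course, is only as good as the measurements going into it, which is why licenses require layered and independently audited monitoring.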
In the case of Shield Source, while the violation was serious, CNSC does not seem to have anticipated a permanent shutdown. The terms agreed in 2012 were that Shield Source could regain a license to manufacture GTLS if it produced for CNSC a satisfactory report on the root cause of the failure and actions taken to prevent a recurrence. Shield Source did produce such a report, and CNSC seems to have mostly accepted it with some comments requesting further work (the actual report does not appear to be public). Still, in early 2013, Shield Source informed CNSC that it did not intend to resume manufacturing. The license was converted to a one-year license to facilitate decommissioning. Tritium filling and ventilation equipment, which had been contaminated by long-term exposure to tritium, was "packaged" and disposed of. This typically consists of breaking things down into parts small enough to fit into 55-gallon drums, "overpacking" those drums into 65-gallon drums for extra protection, and then coordinating with transportation authorities to ship the materials in a suitable way to a facility licensed to dispose of them. This is mostly done by burying them in the ground in an area where the geology makes groundwater interaction exceedingly unlikely, like a certain landfill on the Texas-New Mexico border near Eunice. Keep in mind that tritium's short half-life means this is not a long-term geological repository situation; the waste needs to be safely contained for only, say, fifty years to get down to levels not much different from background. I don't know where the Shield Source waste went; CNSC only says it went to a licensed facility. Once the contaminated equipment was removed, drywall and ceiling and floor finishes were removed in the tritium handling area and everything left was thoroughly cleaned.
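
To put a number on that "say, fifty years" figure: with tritium's commonly cited half-life of about 12.3 years (the exact value here is an assumption for illustration), the time to decay to a given fraction of initial activity follows directly from the half-life. A quick sketch:

```python
import math

def years_to_fraction(target_fraction: float, half_life_y: float) -> float:
    """Years until a radioisotope decays to `target_fraction` of its
    initial activity: t = half_life * log2(1 / target_fraction)."""
    return half_life_y * math.log2(1 / target_fraction)

# Decaying to 5% of initial activity takes a bit over four half-lives:
print(round(years_to_fraction(0.05, 12.3)))  # roughly 53 years
```

For comparison, radium-226's 1,600-year half-life pushes the same 5% threshold out to nearly 7,000 years, which is why radium waste is a genuinely long-term problem and tritium waste is not.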
A survey confirmed that remaining tritium contamination was below CNSC-determined limits (for example, in-air concentrations that would lead to a dose of less than 0.01 mSv/year for 9-5 occupational exposure). At that point, the Shield Source building was released to the landlord they had leased it from, presumably to be occupied by some other company. Fortunately, tritium cleanup isn't all that complex. You might wonder why Shield Source abruptly closed down. I assume there was some back-and-forth with CNSC before they decided to throw in the towel, but it is kind of odd that they folded entirely during the response to an incident that CNSC seems to have fully expected them to survive. I suspect that a full year of lost revenue was just too much for Shield Source: by 2012, when all of this was playing out, the radioluminescence market had seriously declined. There are a lot of reasons. For one, the regulatory approach to tritium has become more and more strict over time. Radium is entirely prohibited in consumer goods, and the limit on tritium activity is very low. Even self-illuminating exit signs now require NRC oversight in the US, as discussed shortly. Besides, public sentiment has increasingly turned against the Friendly Atom in consumer contexts, and you can imagine that people are especially sensitive to the use of tritium in classic institutional contexts for self-powered exit signs: schools and healthcare facilities. At the same time, alternatives have emerged. Non-radioactive luminescent materials, the kinds of things we tend to call "glow in the dark," have greatly improved since WWII. Strontium aluminate is a typical choice today---the inclusion of strontium might suggest otherwise, but strontium aluminate uses the stable natural isotope of strontium, Sr-88, and is not radioactive.
Strontium aluminate has mostly displaced radioluminescence in safety applications, and for example the FAA has long allowed it for safety signage and path illumination on aircraft. Keep in mind that these luminescent materials are not self-powered. They must be "charged" by exposure to light. Minor adaptations are required, for example a requirement that the cabin lights in airliners be turned on for a certain period of time before takeoff, but in practice these limitations are considered preferable to the complexity and risks involved in the use of radioisotopes. You are probably already thinking that improving electronics have also made radioluminescence less relevant. Compact, cool-running, energy-efficient LEDs and a wide variety of packages and form factors mean that a lot of traditional applications of radioluminescence are now simply electric. Here's just a small example: in the early days of LCD digital watches, it was not unusual for higher-end models to use a radioluminescent source as a backlight. Today that's just nonsensical, a digital watch needs a power source anyway and in even the cheapest Casios a single LED offers a reasonable alternative. Radioluminescent digital watches were very short lived. Now that we've learned about a few historic radioluminescent manufacturers, you might have a couple of questions. Where were the radioisotopes actually sourced? And why does Ontario come up twice? These are related. From the 1910s to the 1950s, radioluminescent products were mostly using radium sourced from Standard Chemical, who extracted it from mines in the Southwest. 
The domestic radium mining industry collapsed by 1955 due to a combination of factors: declining demand after WWII, cheaper radium imported from Brazil, and a broadly changing attitude towards radium that led the NRC to note in the '90s that we might never again find the need to extract radium. Radium has a very long half-life that makes it considerably more difficult to manage than strontium or tritium. Today, you could say that the price of radium has gone negative, in that you are far more likely to pay an environmental management company to take it away (at rather high prices) than to buy more. But what about tritium? Tritium is not really naturally occurring; there technically is some natural tritium but it's at extremely low concentrations and very hard to get at. But, as it happens, irradiating water produces a bit of tritium, and nuclear reactors incidentally irradiate a lot of water. With suitable modifications, the tritium produced as a byproduct of civilian reactors can be concentrated and sold. Ontario Hydro has long had facilities to perform this extraction, and recently built a new plant at the Darlington Nuclear Station that processes heavy water shipped from CANDU reactors throughout Ontario. The primary purpose of this plant is to reduce environmental exposure from the release of "tritiated" heavy water; it produces more tritium than can reasonably be sold, so much of it is stored for decay. The result is that tritium is fairly abundant and cheap in Ontario. Besides SRB Technologies, which packages tritium from Ontario Hydro into GTLS, another major manufacturer of GTLS is the Swiss company mb-microtec. mb-microtec is the parent of watch brand Traser and GTLS brand Trigalight, and seems to be one of the largest sources of consumer GTLS overall. Many of the tritium keychains you can buy, for example, use tritium vials manufactured by mb-microtec.
NRC documents suggest that mb-microtec contracts a lot of their finished product manufacturing to a company in Hong Kong and that some of the finished products you see using their GTLS (like watches and fobs) are in fact white-labeled from that plant, but unfortunately don't make the original source of the tritium clear. mb-microtec has the distinction of operating the only recycling plant for tritium gas, and press releases surrounding the new recycling operation say they purchase the rest of their tritium supply, I assume from the civilian nuclear power industry in Switzerland, which has several major reactors operating. A number of other manufacturers produce GTLS primarily for military applications, with some safety signage side business. And then there is, of course, the nuclear weapons program, which consumes the largest volume of tritium in the US. The US's tritium production facility for much of the Cold War actually shut down in 1988, one of the factors in most GTLS manufacturers being overseas. In the interim period, the sole domestic tritium supply was recycling of tritium in dismantled weapons and other surplus equipment. Since tritium has such a short half-life, this situation cannot persist indefinitely, and tritium production was resumed in 2004 at the Tennessee Valley Authority's Watts Bar nuclear generating station. Tritium extracted from that plant is currently used solely by the Department of Energy, primarily for the weapons program. Finally, let's discuss the modern state of radioluminescence. GTLS, based on tritium, are the only type of radioluminescence available to consumers. All importation and distribution of GTLS requires an NRC license, although companies that only distribute products that have been manufactured and tested by another licensee fall under a license exemption category that still requires NRC reporting but greatly simplifies the process. Consumers that purchase these items have no obligations to the NRC.
Major categories of devices under these rules include smoke detectors, detection instruments and small calibration sources, and self-luminous products using tritium, krypton, or promethium. You might wonder, "how big of a device can I buy under these rules?" The answer to that question is a bit complicated, so let me explain my understanding of the rules using a specific example. Let's say you buy a GTLS keychain from massdrop or wherever people get EDC baubles these days [3]. The business you ordered it from almost certainly did not make it, and is acting as an NRC exempt distributor of a product. In NRC terms, your purchase of the product is not the "initial sale or distribution"---that already happened when the company you got it from ordered it from their supplier. Their supplier, or possibly someone further up in the chain, does need to hold a license: an NRC specific license is required to manufacture, process, produce, or initially transfer or sell tritium products. This is the reason that overseas companies like SRB and mb-microtec hold NRC licenses; this is the only way for consumers to legally receive their products. It is important to note the word "specific" in "NRC specific license." These licenses are very specific; the NRC approves each individual product including the design of the containment and labeling. When a license is issued, the individual products are added to a registry maintained by the NRC. When evaluating license applications, the NRC considers a set of safety objectives rather than specific criteria.
For example, and if you want to read along, we're in 10 CFR 32.23: "In normal use and disposal of a single exempt unit, it is unlikely that the external radiation dose in any one year, or the dose commitment resulting from the intake of radioactive material in any one year, to a suitable sample of the group of individuals expected to be most highly exposed to radiation or radioactive material from the product will exceed the dose to the appropriate organ as specified in Column I of the table in § 32.24 of this part." So the rules are a bit soft, in that a licensee can argue back and forth with the NRC over means of calculating dose risk and so on. It is, ultimately, the NRC's discretion as to whether or not a device complies. It's surprisingly hard to track down original licensing paperwork for these products because of how frequently they are rebranded, and resellers never seem to provide detailed specifications. I suspect this is intentional, as I've found some cases of NRC applications that request trade secret confidentiality on details. Still, from the license paperwork I've found with hard numbers, it seems like manufacturers keep the total activity of GTLS products (e.g. a single GTLS sold alone, or the total of the GTLS in a watch) under 25 millicurie. There do exist larger devices, of which exit signs are the largest category. Self-powered exit signs are also manufactured under NRC specific licenses, but their activity and resulting risk are too high to qualify for exemption at the distribution and use stage. Instead, all users of self-powered safety signs operate under a general license issued by the NRC (a general license meaning that it is implicitly issued to all such users). The general license is found in 10 CFR 31. Owners of tritium exit signs are required to designate a person to track and maintain the signs, to inform the NRC of that person's contact information and any changes in that person, and to inform the NRC of any lost, stolen, or damaged signs.
General licensees are not allowed to sell or otherwise transfer tritium signs, unless they are remaining in the same location (e.g. when a building is sold), in which case they must notify the NRC and disclose NRC requirements to the transferee. When tritium exit signs reach the end of their lifespan, they must be disposed of by transfer to an NRC license holder who can recycle them. The general licensee has to notify the NRC of that transfer. Overall, the intent of the general license regulations is to ensure that they are properly disposed of: reporting transfers and events to the NRC, along with serial numbers, allows the NRC to audit for signs that have "disappeared." Missing tritium exit signs are a common source of NRC event reports. It should also be said that, partly for these reasons, tritium exit signs are pretty expensive. Roughly $300 for a new one, and $150 to dispose of an old one. Other radioluminescent devices you will find are mostly antiques. Radium dials are reasonably common, anything with a luminescent dial made before, say, 1960 is probably radium, and specifically Westclox products to 1978 likely use radium. The half-life of radium-226 is 1,600 years, so these radium dials have the distinction of often still working, although the paints have usually held up more poorly than the isotopes they contain. These items should be handled with caution, since the failure of the paint creates the possibility of inhaling or ingesting radium. They also emit radon as a decay product, which becomes hazardous in confined spaces, so radium dials should be stored in a well-ventilated environment. Strontium-90 has a half-life of 29 years, and tritium 12 years, so vintage radioluminescent products using either have usually decayed to the extent that they no longer shine brightly or even at all. 
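
The half-lives quoted above tell the whole story. As a sketch, here is the remaining activity in dials painted 65 years ago, using commonly cited half-life values (the numbers are assumptions for illustration; the article gives tritium's half-life as roughly 12 years):

```python
# Commonly cited half-lives, in years.
HALF_LIVES_Y = {"radium-226": 1600, "strontium-90": 29, "tritium": 12.3}

def remaining_fraction(years: float, half_life_y: float) -> float:
    """Fraction of initial activity remaining after `years`."""
    return 0.5 ** (years / half_life_y)

# A radium dial from 1960 is still at nearly full strength; a tritium
# dial from the same year is effectively dead.
for isotope, half_life in HALF_LIVES_Y.items():
    print(f"{isotope}: {remaining_fraction(65, half_life):.1%}")
```

Radium comes out around 97%, strontium-90 around 21%, and tritium under 3%, matching the collector experience described above.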
The phosphors used for these products will usually still fluoresce brightly under UV light and might even photoluminesce for a time after light exposure, but they will no longer stay lit in a dark environment. Fortunately, the decay that makes them not work also makes them much safer to handle. Tritium decays to helium-3 which is quite safe, strontium-90 to yttrium-90 which quickly decays to zirconium-90. Zirconium-90 is stable and only about as toxic as any other heavy metal. You can see why these radioisotopes are now much preferred over radium. And that's the modern story of radioluminescence. Sometime soon, probably tomorrow, I will be sending out my supporter's newsletter, EYES ONLY, with some more detail on environmental remediation at historic processing facilities for radioluminescent products. You can learn a bit more about how US Radium was putting their waste in a hole in the ground, and also into a river, and sort of wherever else. You know Radium Dial Company was up to similar abuses. [1] The assertion that Ottawa is conveniently close to Peru is one of those oddities of naming places after bigger, more famous places. [2] CNSC's whole final report on Shield Source is only 25 pages. A similar decommissioning process in the US would produce thousands of pages of public record typically culminating in EPA Five Year Reviews which would be, themselves, perhaps a hundred pages depending on the amount of post-closure monitoring. I'm not familiar with the actual law but it seems like most of the difference is that CNSC does not normally publish technical documentation or original data (although one document does suggest that original data is available on request). It's an interesting difference... the 25-page report, really only 20 pages after front matter, is a lot more approachable for the public than a 400 page set of close-out reports. 
Much of the standard documentation in the US comes from NEPA requirements, and NEPA is infamous in some circles for requiring exhaustive reports that don't necessarily do anything useful. But from my perspective it is weird for the formal, published documentation on closure of a radiological site to not include hydrology discussion, demographics, maps, and fifty pages of data tables as appendices. Ideally a bunch of one-sentence acceptance emails stapled to the end for good measure. When it comes to describing the actual problem, CNSC only gives you a couple of paragraphs of background. [3] Really channeling Guy Debord with my contempt for keychains here. During the writing of this article, I bought myself a tritium EDC bauble, so we're all in the mud together.

