In the 1450s, German inventor Johannes Gutenberg designed the movable-type printing press, the first practical method of mass-duplicating text. After various other projects, he applied his press to the production of the Bible, yielding over one hundred copies of a text that previously had to be laboriously hand-copied. His Bible was a tremendous cultural success, triggering revolutions not only in printed matter but also in religion. It was not a financial success: Gutenberg had apparently misspent the funds loaned to him for the project. Gutenberg lost a lawsuit and, as a result of the judgment, lost his workshop. He had made printing vastly cheaper, but it remained costly in volume. Sustaining the revolution of the printing press evidently required careful accounting. For as long as there have been documents, there has been a need to copy. The printing press revolutionized printed matter, but setting up plates was a labor-intensive process, and a large number of copies needed to be...

2025-05-27 the first smart homes

Sometimes I think I should pivot my career to home automation critic, because I have many opinions on the state of the home automation industry---and they're pretty much all critical. Virtually every time I bring up home automation, someone says something about the superiority of the light switch. Controlling lights is one of the most obvious applications of home automation, and there is a roughly century-long history of developments in light control---yet, paradoxically, it is an area where consumer home automation continues to struggle. An analysis of how and why billion-dollar tech companies fail to master the simple toggling of lights in response to human input will have to wait for a future article, because I will have a hard time writing one without descending into incoherent sobbing about the principles of scene control and the interests of capital. Instead, I want to just dip a toe into the troubled waters of "smart lighting" by looking at one of its earliest precedents: low-voltage lighting control.

A source I generally trust, the venerable "old internet" website Inspectapedia, says that low-voltage lighting control systems date back to about 1946. The earliest conclusive evidence I can find of these systems is a newspaper ad from 1948, but let's be honest, it's a holiday and I'm only making a half effort on the research. In any case, the post-war timing is not a coincidence. The late 1940s were a period of both rapid (sub)urban expansion and high copper prices, and the original impetus for relay systems seems to have been the confluence of these two.

But let's step back and explain what a relay or low-voltage lighting control system is. First, I am not referring to "low voltage lighting," meaning lights that run on 12 or 24 volts DC or AC, as was common in landscape lighting and is increasingly common today for integrated LED lighting. Low-voltage lighting control systems are used for conventional 120VAC lights. In the most traditional construction, e.g. in the 1940s, lights would be served by a "hot" wire that passed through a wall box containing a switch. In many cases the neutral (likely shared with other fixtures) went directly from the light back to the panel, bypassing the switch... running both the hot and neutral through the switch box did not become conventional until fairly recently, to the chagrin of anyone installing switches that require a neutral for their own power, like timers or "smart" switches.

The problem with this is that it lengthens the wiring runs. If you have a ceiling fixture with two different switches in a three-way arrangement, say in a hallway in a larger house, you could be adding nearly 100' in additional wire to get the hot to the switches and the runner between them. The cost of that wiring, in the mid-century, was quite substantial. Considering how difficult it is to find an employee to unlock the Romex cage at Lowe's these days, I'm not sure that's changed that much. There are different ways of dealing with this. In the UK, the "ring main" served in part to reduce the gauge (and thus cost) of outlet wiring, but we never picked up that particular eccentricity in the US (for good reason). In commercial buildings, it's not unusual for lighting to run on 240v for similar reasons, but 240v is discouraged in US residential wiring. Besides, the mid-century was an age of optimism and ambition in electrical technology, the days of Total Electric Living. Perhaps the technology of the relay, refined by so many innovations of WWII, could offer a solution.
Switch wiring also had to run through wall cavities, an irritating requirement in single-floor houses where much of the lighting wiring could be contained to the attic. The wiring of four-way and other multi-switch arrangements could become complex and require a lot more wall runs, discouraging builders from providing switches in the most convenient places. What if relays also made multiple switches significantly easier to install and relocate?

You probably get the idea. In a typical low-voltage lighting control system, a transformer provides a low voltage like 24VAC, much the same as used by doorbells. The light switches simply toggle the 24VAC control power to the coils of relays. Some (generally older) systems powered the relay continuously, but most used latching relays. In this case, all light switches are momentary, with an "on" side and an "off" side. This could be a paddle that you push up or down (much like a conventional light switch), a bar that you push the left or right sides of, or a pair of two push buttons.

In most installations, all of the relays were installed together in a single enclosure, usually in the attic where the high-voltage wiring to the actual lights would be fairly short. The 24VAC cabling to the switches was much smaller gauge, and depending on the jurisdiction might not require any sort of license to install. Many systems had enclosures with separate high voltage and low voltage components, or mounted the relays on the outside of an enclosure such that the high voltage wiring was inside and low voltage outside. Both arrangements helped to meet code requirements for isolating high and low voltage systems and provided a margin of safety in the low voltage wiring. That provided additional cost savings as well; low voltage wiring was usually installed without any kind of conduit or sheathed cable.

By 1950, relay lighting controls were making common appearances in real estate listings. A feature piece on the "Melody House," a builder's model home, in the Tacoma News Tribune reads thus:

Newest features in the house are the low voltage touch plate and relay system lighting controls, with wide plates instead of snap buttons---operated like the stops of a pipe organ, with the merest flick of a finger.

The comparison to a pipe organ is interesting, first in its assumption that many readers were familiar with typical organ stops. Pipe organs were, increasingly, one of the technological marvels of the era: while the concept of the pipe organ is very old, this same era saw electrical control systems (replete with relays!) significantly reduce the cost and complexity of organ consoles. What's more, the tonewheel electric organ had become well-developed and started to find its way into homes. The comparison is also interesting because of its deficiencies. The Touch-Plate system described used wide bars, which you pressed the left or right side of---you could call them momentary SPDT rocker switches if you wanted. There were organs with similar rocker stops but I do not think they were common in 1950. My experience is that such rocker switch stops usually indicate a fully digital control system, where they make momentary action unobtrusive and avoid state synchronization problems. I am far from an expert on organs, though, which is why I haven't yet written about them. If you have a guess at which type of pipe organ console our journalist was familiar with, do let me know.
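To make the control scheme concrete, here is a minimal sketch of the latching-relay behavior described above (in Python, purely illustrative---no vendor's actual logic, and the class and method names are my own invention): every wall switch is just a momentary pulse to a relay coil, and the "master control panels" that come up below are nothing more than additional momentary contacts wired in parallel to several relays at once.

    class LatchingRelay:
        """One relay in the attic enclosure, switching a 120VAC lighting circuit."""
        def __init__(self, name):
            self.name = name
            self.contacts_closed = False  # latched state of the mains contacts

        def pulse_on(self):
            # A momentary 24VAC pulse on the "on" coil latches the contacts closed.
            self.contacts_closed = True

        def pulse_off(self):
            # A pulse on the "off" coil latches them open again.
            self.contacts_closed = False

    class MasterPanel:
        """A bedside panel: momentary buttons wired in parallel to many relays."""
        def __init__(self, relays):
            self.relays = relays

        def all_on(self):
            for relay in self.relays:
                relay.pulse_on()

        def all_off(self):
            for relay in self.relays:
                relay.pulse_off()

    # A hallway light with switches at both ends: both simply pulse the same relay,
    # so adding another n-way switch is just another cheap low-voltage run.
    hallway = LatchingRelay("hallway")
    porch = LatchingRelay("porch")
    bedside = MasterPanel([hallway, porch])

    hallway.pulse_on()   # press the "on" side of any hallway switch
    bedside.all_off()    # one press at the bedside opens every relay

The same parallel-wiring trick is what made the timer inputs and diode-isolated "scene" arrangements described below such cheap add-ons: each is just another set of low-voltage contacts pulsing the right coils.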
Touch-Plate seems to have been one of the first manufacturers of these systems, although I can't say for sure that they invented them. Interestingly, Touch-Plate is still around today, but their badly broken WordPress site ("Welcome to the new touch-plate.com" despite it actually being touchplate.com) suggests they may not do much business. After a few pageloads their WordPress plugin WAF blocked me for "exceed[ing] the maximum number of page not found errors per minute for humans." This might be related to my frustration that none of the product images load. It seems that the Touch-Plate company has mostly pivoted to reselling imported LED lighting (touchplateled.com), so I suppose the controls business is withering on the vine.

The 1950s saw a proliferation of relay lighting control brands, with GE introducing a particularly popular system with several generations of fixtures. Kyle Switch Plates, who sell replacement switch plates (what else?), list options for Remcon, Sierra, Bryant, Pyramid, Douglas, and Enercon systems in addition to the two brands we have met so far. As someone who pays a little too much attention to light switches, I have personally seen four of these brands, three of them still in use and one apparently abandoned in place.

Now, you might be thinking that simply economizing wiring by relocating the switches does not constitute "home automation," but there are other features to consider. For one, low-voltage light control systems made it feasible to install a lot more switches. Houses originally built with them often go a little wild with the n-way switching, every room providing lightswitches at every door. But there is also the possibility of relay logic. From the same article:

The necessary switches are found in every room, but in the master bedroom there is a master control panel above the bed, from where the house and yard may be flooded with instant light in case of night emergency.

Such "master control panels" were a big attraction for relay lighting, and the finest homes of the 1950s and 1960s often displayed either a grid of buttons near the head of the master bed, or even better, a GE "Master Selector" with a curious system of rotary switches. On later systems, timers often served as auxiliary switches, so you could schedule exterior lights. With a creative installer, "scenes" were even possible by wiring switches to arbitrary sets of relays (this required DC or half-wave rectified control power and diodes to isolate the switches from each other).

Many of these relay control systems are still in use today. While they are quite outdated in a certain sense, the design is robust and the simple components mean that it's usually not difficult to find replacement parts when something does fail. The most popular system is the one offered by GE, using their RR series relays (RR3, RR4, etc., to the modern RR9). That said, GE suggests a modernization path to their LightSweep system, which is really a 0-10v analog dimming controller that has the add-on ability to operate relays. The failure modes are mostly what you would expect: low voltage wiring can chafe and short, or the switches can become stuck. This tends to cause the lights to stick on or off, and the continuous current through the relay coil often burns it out. The fix requires finding the stuck switch or short and correcting it, and then replacing the relay. One upside of these systems that persists today is density: the low voltage switches are small, so with most systems you can fit 3 per gang.
Another is that they still make N-way switching easier. There is arguably a safety benefit, considering the reduction in mains-voltage wire runs. Yet we rarely see such a thing installed in homes newer than around the '80s. I don't know that I can give a definitive explanation of the decline of relay lighting control, but reduced prices for copper wiring were probably a main factor. The relays added a failure point, which might lead to a perception of unreliability, and the declining familiarity of electricians means that installing a relay system could be expensive and frustrating today.

What really interests me about relay systems is that they weren't really replaced... the idea just went away. It's not like modern homes are providing a master control panel in the bedroom using some alternative technology. I mean, some do, those with prices in the eight digits, but you'll hardly ever see it. That gets us to the tension between residential lighting and architectural lighting control systems. In higher-end commercial buildings, and in environments like conference rooms and lecture halls, there's a well established industry building digital lighting control systems. Today, DALI is a common standard for the actual lighting control, but if you look at a range of existing buildings you will find everything from completely proprietary digital distributed dimming to 0-10v analog dimming to central dimmer racks (similar to traditional theatrical lighting). Relay lighting systems were, in a way, a nascent version of residential architectural lighting control.

And the architectural lighting control industry continues to evolve. If there is a modern equivalent to relay lighting, it's something like Lutron QSX. That's a proprietary digital lighting (and shade) control system, marketed for both residential and commercial use. QSX offers a wide range of attractive wall controls, tight integration to Lutron's HomeSense home automation platform, and a price tag that'll make your eyes water. Lutron has produced many generations of these systems, and you could make an argument that they trace their heritage back to the relay systems of the 1940s. But they're just priced way beyond the middle-class home.

And, well, I suppose that requires an argument based on economics. Prices have gone up. Despite tract construction being a much older idea than people often realize, it seems clear that today's new construction homes have been "value engineered" to significantly lower feature and quality levels than those of the mid-century---but they're a lot bigger. There is a sort of maxim that today's home buyers don't care about anything but square footage, and if you've seen what Pulte or D. R. Horton are putting up... well, I never knew that 3,000 sq ft could come so cheap, and look it too. Modern new-construction homes just don't come with the gizmos that older ones did, especially in the '60s and '70s. Looking at the sales brochure for a new development in my own Albuquerque ("Estates at La Cuentista"), besides 21st century suburbanization (Gated Community! "East Access to Paseo del Norte" as if that's a good thing!) most of the advertised features are "big." I'm serious!
If you look at the "More Innovation Built In" section, the "innovations" are a home office (more square footage), storage (more square footage), indoor and outdoor gathering spaces (to be fair, only the indoor ones are square footage), "dedicated learning areas" for kids (more square footage), and a "basement or bigger garage" for a home gym (more square footage). The only thing in the entire innovation section that I would call a "technical" feature is water filtration. You can scroll down for more details, and you get to things like "space for a movie room" and a finished basement described eight different ways.

Things were different during the peak of relay lighting in the '60s. A house might only be 1,600 sq ft, but the builder would deck it out with an intercom (including multi-room audio of a primitive sort), burglar alarm, and yes, relay lighting. All of these technologies were a lot newer and people were more excited about them; I bring up Total Electric Living a lot because of an aesthetic obsession, but it was a large-scale advertising and partnership campaign by the electrical industry (particularly Westinghouse) that gave builders additional cross-promotion if they included all of these bells and whistles. Remember, that was when people were watching those old videos about the "kitchen of the future." What would a 2025 "Kitchen of the Future" promotional film emphasize? An island bigger than my living room and a nook for every meal, I assume.

Features like intercoms and even burglar alarms have become far less common in new construction, and even if they were present I don't think most buyers would use them. But that might seem a little odd, right, given the push towards home automation? Well, built-in home automation options have existed for longer than any of today's consumer solutions, but "built in" is a liability for a technology product. There are practical reasons, in that built-in equipment is harder to replace, but there's also a lamer commercial reason. Consumer technology companies want to sell their products like consumer technology, so they've recontextualized lighting control as "IoT" and "smart" and "AI" rather than something an electrician would hook up.

While I was looking into relay lighting control systems, I ran into an interesting example: the Lutron Lu Master Lumi 5. What a name! Lutron loves naming things like this. The Lumi 5 is a 1980s-era product with essentially the same features as a relay system, but architected in a much stranger way. It is, essentially, five three-way switches in a box with remote controls. That means that each of the actual light switches in the house (which could also be dimmers) needs mains-voltage wiring, including runner, back to the Lumi 5 "interface." Pressing a button on one of the Lutron wall panels toggles the state of the relay in the "interface" cabinet, toggling the light. But, since it's all wired as a three-way switch, toggling the physical switch at the light does the same thing. As is typical when combining n-way switches and dimming, the Lumi 5 has no control over dimmers. You can only dim a light up or down at the actual local control; the Lumi 5 can just toggle the dimmer on and off using the 3-way runner. The architecture also means that you have two fundamentally different types of wall panels in your house: local switches or dimmers wired to each light, and the Lu Master panels with their five buttons for the five circuits, along with "all on" and "all off."
The Lumi 5 "interface" uses simple relay logic to implement a few more features. Five mains-voltage-level inputs can be wired to time clocks, so that you can schedule any combination(s) of the circuits to turn on and off. The manual recommends models including one with an astronomical clock for sunrise/sunset. An additional input causes all five circuits to turn on; it's suggested for connection to an auxiliary relay on a burglar alarm to turn all of the lights on should the alarm be triggered. The whole thing is strange and fascinating. It is basically a relay lighting control system, like so many before it, but using a distinctly different wiring convention. I think the main reason for the odd wiring was to accommodate dimmers, an increasingly popular option in the 1980s that relay systems could never really contend with. It doesn't have the cost advantages of relay systems at all, it will definitely be more expensive! But it adds some features over the fancy Lutron switches and dimmers you were going to install anyway. The Lu Master is the transitional stage between relay lighting systems and later architectural lighting controls, and it straddled too the end of relay light control in homes. It gives an idea of where relay light control in homes would have evolved, had the whole technology not been doomed to the niche zone of conference centers and universities. If you think about it, the Lu Master fills the most fundamental roles of home automation in lighting: control over multiple lights in a convenient place, scheduling and triggers, and an emergency function. It only lacks scenes, which I think we can excuse considering that the simple technology it uses does not allow it to adjust dimmers. And all of that with no Node-RED in sight! Maybe that conveys what most frustrates me about the "home automation" industry: it is constantly reinventing the wheel, an oligopoly of tech companies trying to drag people's homes into their "ecosystem." They do so by leveraging the buzzword of the moment, IoT to voice assistants to, I guess now AI?, to solve a basic set of problems that were pretty well solved at least as early as 1948. That's not to deny that modern home automation platforms have features that old ones don't. They are capable of incredibly sophisticated things! But realistically, most of their users want only very basic functionality: control in convenient places, basic automation, scenes. It wouldn't sting so much if all these whiz-bang general purpose computers were good at those tasks, but they aren't. For the very most basic tasks, things like turning on and off a group of lights, major tech ecosystems like HomeKit provide a user experience that is significantly worse than the model home of 1950. You could install a Lutron system, and it would solve those fundamental tasks much better... for a much higher price. But it's not like Lutron uses all that money to be an absolute technical powerhouse, a center of innovation at the cutting edge. No, even the latest Lutron products are really very simple, technically. The technical leaders here, Google, Apple, are the companies that can't figure out how to make a damn light switch. The problem with modern home automation platforms is that they are too ambitious. They are trying to apply enormously complex systems to very simple tasks, and thus contaminating the simplest of electrical systems with all the convenience and ease of a Smart TV. 
Sometimes that's what it feels like this whole industry is doing: adding complexity while the core decays. From automatic programming to AI coding agents, video terminals to Electron, the scope of the possible expands while the fundamentals become more and more irritating.

But back to the real point: I hope you learned about some cool light switches. Check out the Kyle Switch Plates reference and you'll start seeing these in buildings and homes, at least if you live in an area that built up during the era when they were common (the 1950s to the 1970s).

2025-05-11 air traffic control

Air traffic control has been in the news lately, on account of my country's declining ability to do it. Well, that's a long-term trend, resulting from decades of under-investment, severe capture by our increasingly incompetent defense-industrial complex, no small degree of management incompetence in the FAA, and long-lasting effects of Reagan crushing the PATCO strike. But that's just my opinion, you know, maybe airplanes got too woke. In any case, it's an interesting time to consider how weird parts of air traffic control are. The technical, administrative, and social aspects of ATC all seem two notches more complicated than you would expect. ATC is heavily influenced by its peculiar and often accidental development, a product of necessity that perpetually trails behind the need, and a beneficiary of hand-me-down military practices and technology.

Aviation Radio

In the early days of aviation, there was little need for ATC---there just weren't many planes, and technology didn't allow ground-based controllers to do much of value. There was some use of flags and signal lights to clear aircraft to land, but for the most part ATC had to wait for the development of aviation radio. The impetus for that work came mostly from the First World War. Here we have to note that the history of aviation is very closely intertwined with the history of warfare. Aviation technology has always rapidly advanced during major conflicts, and as we will see, ATC is no exception.

By 1913, the US Army Signal Corps was experimenting with the use of radio to communicate with aircraft. This was pretty early in radio technology, and the aircraft radios were huge and awkward to operate, but it was also early in aviation and "huge and awkward to operate" could be similarly applied to the aircraft of the day. Even so, radio had obvious potential in aviation. The first military application for aircraft was reconnaissance. Pilots could fly past the front to find artillery positions and otherwise provide useful information, and then return with maps. Well, even better than returning with a map was providing the information in real time, and by the end of the war medium-frequency AM radios were well developed for aircraft.

Radios in aircraft led naturally to another wartime innovation: ground control. Military personnel on the ground used radio to coordinate the schedules and routes of reconnaissance planes, and later to inform on the positions of fighters and other enemy assets. Without any real way to know where the planes were, this was all pretty primitive, but it set the basic pattern that people on the ground could keep track of aircraft and provide useful information.

Post-war, civil aviation rapidly advanced. The early 1920s saw numerous commercial airlines adopting radio, mostly for business purposes like schedule coordination. Once you were in contact with someone on the ground, though, it was only logical to ask about weather and conditions. Many of our modern practices like weather briefings, flight plans, and route clearances originated as more or less formal practices within individual airlines.

Air Mail

The government was not left out of the action. The Post Office operated what may have been the largest commercial aviation operation in the world during the early 1920s, in the form of Air Mail. The Post Office itself did not have any aircraft; all of the flying was contracted out---initially to the Army Air Service, and later to a long list of regional airlines.
Air Mail was considered a high priority by the Post Office and proved very popular with the public. When the transcontinental route began proper operation in 1920, it became possible to get a letter from New York City to San Francisco in just 33 hours by transferring it between airplanes in a nearly non-stop relay race. The Post Office's largesse in contracting the service to private operators provided not only the funding but the very motivation for much of our modern aviation industry. Air travel was not very popular at the time, being loud and uncomfortable, but the mail didn't complain. The many contract mail carriers of the 1920s grew and consolidated into what are now some of the United States' largest companies. For around a decade, the Post Office almost singlehandedly bankrolled civil aviation, and passengers were a side hustle [1].

Air mail's ambition was not only of economic benefit. Air mail routes were often longer and more challenging than commercial passenger routes. Transcontinental service required regular flights through sparsely populated parts of the interior, challenging the navigation technology of the time and making rescue of downed pilots a major concern. Notably, air mail operators did far more nighttime flying than any other commercial aviation in the 1920s. The Post Office became the government's de facto technical leader in civil aviation. Besides the network of beacons and markers built to guide air mail between cities, the Post Office built 17 Air Mail Radio Stations along the transcontinental route. The Air Mail Radio Stations were the company radio system for the entire air mail enterprise, and the closest thing to a nationwide, public air traffic control service to then exist. They did not, however, provide what we would now call control. Their role was mainly to provide pilots with information (including, critically, weather reports) and to keep loose tabs on air mail flights so that a disappearance would be noticed in time to send search and rescue.

In 1926, the Watres Act created the Aeronautics Branch of the Department of Commerce. The Aeronautics Branch assumed a number of responsibilities, but one of them was the maintenance of the Air Mail routes. Similarly, the Air Mail Radio Stations became Aeronautics Branch facilities, and took on the new name of Flight Service Stations. No longer just for the contract mail carriers, the Flight Service Stations made up a nationwide network of government-provided services to aviators. They were the first edifices in what we now call the National Airspace System (NAS): a complex combination of physical facilities, technologies, and operating practices that enable safe aviation.

In 1935, the first en-route air traffic control center opened, a facility in Newark owned by a group of airlines. The Aeronautics Branch, since renamed the Bureau of Air Commerce, supported the airlines in developing this new concept of en-route control that used radio communications and paperwork to track which aircraft were in which airways. The rising number of commercial aircraft made in-air collisions a bigger problem, so the Newark control center was quickly followed by more facilities built on the same pattern. In 1936, the Bureau of Air Commerce took ownership of these centers, and ATC became a government function alongside the advisory and safety services provided by the flight service stations.
En route center controllers worked off of position reports from pilots via radio, but needed a way to visualize and track aircraft's positions and their intended flight paths. Several techniques helped: first, airlines shared their flight planning paperwork with the control centers, establishing "flight plans" that corresponded to each aircraft in the sky. Controllers adopted a work aid called a "flight strip," a small piece of paper with the key information about an aircraft's identity and flight plan that could easily be handed between stations. By arranging the flight strips on display boards full of slots, controllers could visualize the ordering of aircraft in terms of altitude and airway. Second, each center was equipped with a large plotting table map where controllers pushed markers around to correspond to the position reports from aircraft. A small flag on each marker gave the flight number, so it could easily be correlated to a flight strip on one of the boards mounted around the plotting table. This basic concept of air traffic control, of a flight strip and a position marker, is still in use today.

Radar

The Second World War changed aviation more than any other event of history. Among the many advancements were two British inventions of particular significance: first, the jet engine, which would make modern passenger airliners practical. Second, the radar, and more specifically the magnetron. This was a development of such significance that the British government treated it as a secret akin to nuclear weapons; indeed, the UK effectively traded radar technology to the US in exchange for participation in US nuclear weapons research.

Radar created radical new possibilities for air defense, and complemented previous air defense development in Britain. During WWI, the organization tasked with defending London from aerial attack had developed a method called "ground-controlled interception" or GCI. Under GCI, ground-based observers identify possible targets and then direct attack aircraft towards them via radio. The advent of radar made GCI tremendously more powerful, allowing a relatively small number of radar-assisted air defense centers to monitor for inbound attack and then direct defenders with real-time vectors.

In the first implementation, radar stations reported contacts via telephone to "filter centers" that correlated tracks from separate radars to create a unified view of the airspace---drawn in grease pencil on a preprinted map. Filter center staff took radar and visual reports and updated the map by moving the marks. This consolidated information was then provided to air defense bases, once again by telephone.

Later technical developments in the UK made the process more automated. The invention of the "plan position indicator" or PPI, the type of radar scope we are all familiar with today, made the radar far easier to operate and interpret. Radar sets that automatically swept over 360 degrees allowed each radar station to see all activity in its area, rather than just aircraft passing through a defensive line. These new capabilities eliminated the need for much of the manual work: radar stations could see attacking aircraft and defending aircraft on one PPI, and communicated directly with defenders by radio. It became routine for a radar operator to give a pilot navigation vectors by radio, based on real-time observation of the pilot's position and heading. A controller took strategic command of the airspace, effectively steering the aircraft from a top-down view.
The ease and efficiency of this workflow was a significant factor in the end of the Battle of Britain, and its remarkable efficacy was noticed in the US as well. At the same time, changes were afoot in the US. WWII was tremendously disruptive to civil aviation; while aviation technology rapidly advanced due to wartime needs, those same pressing demands led to a slowdown in nonmilitary activity. A heavy volume of military logistics flights and flight training, as well as growing concerns about defending the US from an invasion, meant that ATC was still a priority. A reorganization of the Bureau of Air Commerce replaced it with the Civil Aeronautics Authority, or CAA. The CAA's role greatly expanded as it assumed responsibility for airport control towers and commissioned new en route centers.

As WWII came to a close, CAA en route control centers began to adopt GCI techniques. By 1955, the name Air Route Traffic Control Center (ARTCC) had been adopted for en route centers and the first air surveillance radars were installed. In a radar-equipped ARTCC, the map where controllers pushed markers around was replaced with a large tabletop PPI built to a Navy design. The controllers still pushed markers around to track the identities of aircraft, but they moved them based on their corresponding radar "blips" instead of radio position reports.

Air Defense

After WWII, post-war prosperity and wartime technology like the jet engine led to huge growth in commercial aviation. During the 1950s, radar was adopted by more and more ATC facilities (both "terminal" at airports and "en route" at ARTCCs), but there were few major changes in ATC procedure. With more and more planes in the air, tracking flight plans and their corresponding positions became labor intensive and error-prone. A particular problem was the increasing range and speed of aircraft, and corresponding longer passenger flights, which meant that many aircraft passed from the territory of one ARTCC into another. This required that controllers "hand off" the aircraft, informing the "next" ARTCC of the flight plan and position at which the aircraft would enter their airspace.

In 1956, 128 people died in a mid-air collision of two commercial airliners over the Grand Canyon. In 1958, 49 people died when a military fighter struck a commercial airliner over Nevada. These were not the only such incidents in the mid-1950s, and public trust in aviation started to decline. Something had to be done.

First, in 1958 the CAA gave way to the Federal Aviation Administration. This was more than just a name change: the FAA's authority was greatly increased compared to the CAA, most notably by granting it authority over military aviation. This is a difficult topic to explain succinctly, so I will only give broad strokes. Prior to 1958, military aviation was completely distinct from civil aviation, with no coordination and often no communication at all between the two. This was, of course, a factor in the 1958 collision. Further, the 1956 collision, while it did not involve the military, did result in part from communications issues between distinct CAA facilities and the airline's own control facilities. After 1958, ATC was completely unified into one organization, the FAA, which assumed the work of the military controllers of the time and some of the role of the airlines.
The military continues to have its own air controllers to this day, and military aircraft continue to include privileges such as (practical but not legal) exemption from transponder requirements, but military flights over the US are still beholden to the same ATC as civil flights. Some exceptions apply, void where prohibited, etc.

The FAA's suddenly increased scope only made the practical challenges of ATC more difficult, and commercial aviation numbers continued to rise. As soon as the FAA was formed, it was understood that there needed to be major investments in improving the National Airspace System. While the first couple of years were dominated by the transition, the FAA's second director (Najeeb Halaby) prepared two lengthy reports examining the situation and recommending improvements. One of these, the Beacon report (also called Project Beacon), specifically addressed ATC. The Beacon report's recommendations included massive expansion of radar-based control (called "positive control" because of the controller's access to real-time feedback on aircraft movements) and new control procedures for airways and airports. Even better, for our purposes, it recommended the adoption of general-purpose computers and software to automate ATC functions.

Meanwhile, the Cold War was heating up. US air defense, a minor concern in the few short years after WWII, became a higher priority than ever before. The Soviet Union had long-range aircraft capable of reaching the United States, and nuclear weapons meant that only a few such aircraft had to make it through to cause massive destruction. The vast size of the United States (and, considering the new unified air defense command between the United States and Canada, all of North America) made this a formidable challenge.

During the 1950s, the newly minted Air Force worked closely with MIT's Lincoln Laboratory (an important center of radar research) and IBM to design a computerized, integrated, networked system for GCI. When the Air Force committed to purchasing the system, it was christened the Semi-Automatic Ground Environment, or SAGE. SAGE is a critical juncture in the history of the computer and computer communications, the first system to demonstrate many parts of modern computer technology and, moreover, perhaps the first large-scale computer system of any kind. SAGE is an expansive topic that I will not take on here; I'm sure it will be the focus of a future article, but it's a pretty well-known and well-covered topic. I have not so far felt like I had much new to contribute, despite it being the first item on my "list of topics" for the last five years.

But one of the things I want to tell you about SAGE, that is perhaps not so well known, is that SAGE was not used for ATC. SAGE was a purely military system. It was commissioned by the Air Force, and its numerous operating facilities (called "direction centers") were located on Air Force bases along with the interceptor forces they would direct. However, there was obvious overlap between the functionality of SAGE and the needs of ATC. SAGE direction centers continuously received tracks from remote data sites using modems over leased telephone lines, and automatically correlated multiple radar tracks to a single aircraft. Once an operator entered information about an aircraft, SAGE stored that information for retrieval by other radar operators.
When an aircraft with associated data passed from the territory of one direction center to another, the aircraft's position and related information were automatically transmitted to the next direction center by modem. One of the key demands of air defense is the identification of aircraft---any unknown track might be routine commercial activity, or it could be an inbound attack. The air defense command received flight plan data on commercial flights (and more broadly all flights entering North America) from the FAA and entered them into SAGE, allowing radar operators to retrieve "flight strip" data on any aircraft on their scope.

Recognizing this interconnection with ATC, as soon as SAGE direction centers were being installed the Air Force started work on an upgrade called SAGE Air Traffic Integration, or SATIN. SATIN would extend SAGE to serve the ATC use-case as well, providing SAGE consoles directly in ARTCCs and enhancing SAGE to perform non-military safety functions like conflict warning and forward projection of flight plans for scheduling. Flight strips would be replaced by teletype output, and in general made less necessary by the computer's ability to filter the radar scope. Experimental trial installations were made, and the FAA participated readily in the research efforts. Enhancement of SAGE to meet ATC requirements seemed likely to meet the Beacon report's recommendations and radically improve ARTCC operations, sooner and cheaper than development of an FAA-specific system.

As it happened, well, it didn't happen. SATIN became interconnected with another planned SAGE upgrade to the Super Combat Centers (SCC), deep underground combat command centers with greatly enhanced SAGE computer equipment. SATIN and SCC planners were so confident that the last three Air Defense Sectors scheduled for SAGE installation, including my own Albuquerque, were delayed under the assumption that the improved SATIN/SCC equipment should be installed instead of the soon-obsolete original system. SCC cost estimates ballooned, and the program's ambitions were reduced month by month until it was canceled entirely in 1960. Albuquerque never got a SAGE installation, and the Albuquerque air defense sector was eliminated by reorganization later in 1960 anyway.

Flight Service Stations

Remember those Flight Service Stations, the ones that were originally built by the Post Office? One of the oddities of ATC is that they never went away. FSS were transferred to the CAB, to the CAA, and then to the FAA. During the 1930s and 1940s many more were built, expanding coverage across much of the country. Throughout the development of ATC, the FSS remained responsible for non-control functions like weather briefing and flight plan management. Because aircraft operating under instrument flight rules must closely comply with ATC, the involvement of FSS in IFR flights is very limited, and FSS mostly serve VFR traffic.

As ATC became common, the FSS gained a new and somewhat odd role: playing go-between for ATC. FSS were more numerous and often located in sparser areas between cities (while ATC facilities tended to be in cities), so especially in the mid-century, pilots were more likely to be able to reach an FSS than ATC. It was, for a time, routine for FSS to relay instructions between pilots and controllers. This is still done today, although improved communications have made the need much less common.
As weather dissemination improved (another topic for a future post), FSS gained access to extensive weather conditions and forecasting information from the Weather Service. This connectivity is bidirectional; during the midcentury FSS not only received weather forecasts by teletype but transmitted pilot reports of weather conditions back to the Weather Service. Today these communications have, of course, been computerized, although the legacy teletype format doggedly persists.

There has always been an odd schism between the FSS and ATC: they are operated by different departments, out of different facilities, with different functions and operating practices. In 2005, the FAA cut costs by privatizing the FSS function entirely. Flight service is now operated by Leidos, one of the largest government contractors. All FSS operations have been centralized to one facility that communicates via remote radio sites. While flight service is still available, increasing automation has made the stations far less important, and the general perception is that flight service is in its last years. Last I looked, Leidos was not hiring for flight service and the expectation was that they would never hire again, retiring the service along with its staff.

Flight service does maintain one of my favorite internet phenomena, the phone number domain name: 1800wxbrief.com. One of the odd manifestations of the FSS/ATC schism and the FAA's very partial privatization is that Leidos maintains an online aviation weather portal that is separate from, and competes with, the Weather Service's aviationweather.gov. Since Flight Service traditionally has the responsibility for weather briefings, it is honestly unclear to what extent Leidos vs. the National Weather Service should be investing in aviation weather information services. For its part, the FAA seems to consider aviationweather.gov the official source, while it pays for 1800wxbrief.com. There's also weathercams.faa.gov, which duplicates a very large portion (maybe all?) of the weather information on Leidos's portal and some of the NWS's. It's just one of those things. Or three of those things, rather. Speaking of duplication due to poor planning...

The National Airspace System

Left in the lurch by the Air Force, the FAA launched its own program for ATC automation. While the Air Force was deploying SAGE, the FAA had mostly been waiting, and various ARTCCs had adopted a hodgepodge of methods ranging from one-off computer systems to completely paper-based tracking. By 1960 radar was ubiquitous, but different radar systems were used at different facilities, and correlation between radar contacts and flight plans was completely manual. The FAA needed something better, and with growing congressional support for ATC modernization, they had the money to fund what they called National Airspace System En Route Stage A.

Further bolstering historical confusion between SAGE and ATC, the FAA decided on a practical, if ironic, solution: buy their own SAGE. In an upcoming article, we'll learn about the FAA's first fully integrated computerized air traffic control system. While the failed detour through SATIN delayed the development of this system, the nearly decade-long delay between the design of SAGE and the FAA's contract allowed significant technical improvements.
This "New SAGE," while directly based on SAGE at a functional level, used later off-the-shelf computer equipment including the IBM System/360, giving it far more resemblance to our modern world of computing than SAGE with its enormous, bespoke AN/FSQ-7. And we're still dealing with the consequences today! [1] It also laid the groundwork for the consolidation of the industry, with a 1930 decision that took air mail contracts away from most of the smaller companies and awarded them instead to the precursors of United, TWA, and American Airlines.

2025-05-04 iBeacons

You know sometimes a technology just sort of... comes and goes? Without leaving much of an impression? And then gets lodged in your brain for the next decade? Let's talk about one of those: the iBeacon.

I think the reason that iBeacons loom so large in my memory is that the technology was announced at WWDC in 2013. Picture yourself in 2013: Steve Jobs had only died a couple of years before, Apple was still widely viewed as a visionary leader in consumer technology, and WWDC was still happening. Back then, pretty much anything announced at an Apple event was a Big Deal that got Big Coverage. Even, it turns out, if it was a minor development for a niche application. That's the iBeacon, a specific solution to a specific problem. It's not really that interesting, but the valence of its Apple origin makes it seem cool?

iBeacon Technology

Let's start out with what iBeacon is, as it's so simple as to be underwhelming. Way back in the '00s, a group of vendors developed a sort of "Diet Bluetooth": a wireless protocol that was directly based on Bluetooth but simplified and optimized for low-power, low-data-rate devices. This went through an unfortunate series of names, including the delightful Wibree, but eventually settled on Bluetooth Low Energy (BLE). BLE is not just lower-power, but also easier to implement, so it shows up in all kinds of smart devices today. Back in 2011, it was quite new, and Apple was one of the first vendors to adopt it.

BLE is far less connection-oriented than regular Bluetooth; you may have noticed that BLE devices are often used entirely without conventional "pairing." A lot of typical BLE profiles involve just broadcasting some data into the void for any device that cares (and is in short range) to receive, which is pretty similar to ANT+ and unsurprisingly appears in ANT+-like applications of fitness monitors and other sensors. Of course, despite the simpler association model, BLE applications need some way to find devices, so BLE provides an advertising mechanism in which devices transmit their identifying info at regular intervals.

And that's all iBeacon really is: a standard for very simple BLE devices that do nothing but transmit advertisements with a unique ID as the payload. Add a type field on the advertising packet to specify that the device is trying to be an iBeacon and you're done. You interact with an iBeacon by receiving its advertisements, so you know that you are near it. Any BLE device with advertisements enabled could be used this way, but iBeacons are built only for this purpose.

The applications for iBeacon are pretty much defined by its implementation in iOS; there's not much of a standard, even if only for the reason that there's not much to put in a standard. It's all obvious. iOS provides two principal APIs for working with iBeacons: the region monitoring API allows an app to determine if it is near an iBeacon, including registering the region so that the app will be started when the iBeacon enters range. This is useful for apps that want to do something in response to the user being in a specific location. The ranging API allows an app to get a list of all of the nearby iBeacons and a rough range from the device to the iBeacon. iBeacons can actually operate at substantial ranges---up to hundreds of meters for more powerful beacons with external power---so ranging mode can potentially be used as sort of a lightweight local positioning system to estimate the location of the user within a larger space.
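As an aside, the advertisement itself is simple enough to show. Below is a short, illustrative Python sketch (my own, not Apple's reference code) that parses the widely documented iBeacon layout out of a BLE manufacturer-specific data field: Apple's company ID, a type and length byte, the proximity UUID and the major and minor numbers described in the next paragraph, and a calibrated transmit-power byte used for rough ranging. The example payload and UUID are made up.

    import struct
    import uuid

    APPLE_COMPANY_ID = 0x004C  # Bluetooth SIG company identifier assigned to Apple

    def parse_ibeacon(mfg_data: bytes):
        """Parse a BLE manufacturer-specific data field as an iBeacon frame.

        Returns (proximity_uuid, major, minor, tx_power), or None if the field
        is not an iBeacon advertisement.
        """
        # Layout: company ID (2 bytes, little-endian), type 0x02, length 0x15,
        # 16-byte proximity UUID, 2-byte major, 2-byte minor (big-endian),
        # and a signed "measured power at 1 m" byte.
        if len(mfg_data) != 25:
            return None
        company, beacon_type, beacon_len = struct.unpack_from("<HBB", mfg_data, 0)
        if company != APPLE_COMPANY_ID or beacon_type != 0x02 or beacon_len != 0x15:
            return None
        proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
        major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
        return proximity_uuid, major, minor, tx_power

    # A made-up advertisement: one operator UUID, major 1, minor 42, -59 dBm at 1 m.
    example = (bytes.fromhex("4c000215")
               + uuid.UUID("12345678-1234-1234-1234-123456789abc").bytes
               + struct.pack(">HHb", 1, 42, -59))
    print(parse_ibeacon(example))

A receiver that knows the calibrated transmit power can compare it against the observed signal strength to produce the kind of rough distance estimate the ranging API exposes, which is all the "lightweight local positioning" above really amounts to.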
iBeacon IDs are in the format of a UUID, followed by a "major" number and a "minor" number. There are different ways that these get used, especially if you are buying cheap iBeacons and not reconfiguring them, but the general idea is roughly that the UUID identifies the operator, the major a deployment, and the minor a beacon within the deployment. In practice this might be less common than just every beacon having its own UUID, due to how they're sourced. It would be interesting to survey iBeacon applications to see which they do.

Promoted Applications

So where do you actually use these? Retail! Apple seems to have designed the iBeacon pretty much exclusively for "proximity marketing" applications in the retail environment. It goes something like this: when you're in a store and open that store's app, the app will know what beacons you are nearby and display relevant content. For example, in a grocery store, the grocer's app might offer e-coupons for cosmetics when you are in the cosmetics section. That's, uhh, kind of the whole thing? The imagined universe of applications around the launch of iBeacon was pretty underwhelming to me, even at the time, and it still seems that way. That's presumably why iBeacon had so little success in consumer-facing applications.

You might wonder, who actually used iBeacons? Well, Apple did, obviously. During 2013 and into 2014 iBeacons were installed in all US Apple stores, and prompted the Apple Store app to send notifications about upgrade offers and other in-store deals. Unsurprisingly, this Apple Store implementation was considered the flagship deployment. It generated a fair amount of press, including speculation as to whether or not it would prove the concept for other buyers. Around the same time, Apple penned a deal with Major League Baseball that would see iBeacons installed in MLB stadiums. For the 2014 season, MLB Advanced Media, a joint venture of team owners, had installed iBeacon technology in 20 stadiums:

Baseball fans will be able to utilize iBeacon technology within MLB.com At The Ballpark when the award-winning app's 2014 update is released for Opening Day. Complete details on new features being developed by MLBAM for At The Ballpark, including iBeacon capabilities, will be available in March.

What's the point? The iBeacons "enable the At The Ballpark app to play specific videos or offer coupons." This exact story repeats for other retail companies that have picked the technology up at various points, including giants like Target and Walmart. The iBeacons are simply a way to target advertising based on location, with better indoor precision and lower power consumption than GPS. Aiding these applications along, Apple integrated iBeacon support into the iOS location framework and further blurred the lines between iBeacon and other positioning services by introducing location-based-advertising features that operated on geofencing alone.

Some creative thinkers did develop more complex applications for the iBeacon. One of the early adopters was a company called Exact Editions, which prepared the Apple Newsstand version of a number of major magazines back when "readable on iPad" was thought to be the future of print media. Exact Editions explored a "read for free" feature where partner magazines would be freely accessible to users at partnering locations like coffee shops and book stores.
This does not seem to have been a success, but using the proximity of an iBeacon to unlock some paywalled media is at least a little creative, if probably ill-advised considering security considerations we'll discuss later.

The world of applications raises interesting questions about the other half of the mobile ecosystem: how did this all work on Android? iOS has built-in support for iBeacons. An operating system service scans for iBeacons and dispatches notifications to apps as appropriate. On Android, there has never been this type of OS-level support, but Android apps have access to relatively rich low-level Bluetooth functionality and can easily scan for iBeacons themselves. Several popular libraries exist for this purpose, and it's not unusual for them to be used to give ported cross-platform apps more or less equivalent functionality. These apps do need to run in the background if they're to notify the user proactively, but especially back in 2013, Android was far more generous about background work than iOS.

iBeacons found expanded success through ShopKick, a retail loyalty platform that installed iBeacons in locations of some major retailers like American Eagle. These powered location-based advertising and offers in the ShopKick app as well as retailer-specific apps, which is kind of the start of a larger, more seamless network, but it doesn't seem to have caught on. Honestly, consumers just don't seem to want location-based advertising that much. Maybe because, when you're standing in an American Eagle, getting ads for products carried in the American Eagle is inane and irritating. iBeacons sort of foresaw cooler screens in this regard.

To be completely honest, I'm skeptical that anyone ever really believed in the location-based advertising thing. I mean, I don't know, the advertising industry is pretty good at self-deception, but I don't think there were ever any real signs of hyper-local smartphone-based advertising taking off. I think the play was always data collection, and advertising and special offers just provided a convenient cover story.

Real Applications

iBeacons are one of those technologies that feels like a flop from a consumer perspective but has, in actuality, enjoyed surprisingly widespread deployments. The reason, of course, is data mining. To Apple's credit, they took a set of precautions in the design of the iBeacon iOS features that probably felt sufficient in 2013. Despite the fact that a lot of journalists described iBeacons as being used to "notify a user to install an app," that was never actually a capability (a very similar-seeming iOS feature attached to Siri actually used conventional geofencing rather than iBeacons). iBeacons only did anything if the user already had an app installed that either scanned for iBeacons when in the foreground or registered for region notifications. In theory, this limited iBeacons to companies with which consumers already had some kind of relationship.

What Apple may not have foreseen, or perhaps simply accepted, is the incredible willingness of your typical consumer brand to sell that relationship to anyone who would pay. iBeacons became, in practice, just another major advancement in pervasive consumer surveillance. The New York Times reported in 2019 that popular applications were including SDKs that reported iBeacon contacts to third-party consumer data brokers. This data became one of several streams that was used to sell consumer location history to advertisers.
It's a little difficult to assign blame and credit here. Apple, to their credit, kept the iBeacon features in iOS relatively locked down, which suggests that they weren't trying to facilitate massive location surveillance. That said, Apple always marketed iBeacon to developers based on exactly this kind of consumer tracking and micro-targeting; they just intended for it to be done under the auspices of a single brand. That the advertising industry would form data exchanges and recruit random apps into reporting everything in your proximity isn't surprising, but maybe Apple failed to foresee it. They certainly weren't the worst offender.

Apple's promotion of iBeacon opened the floodgates for everyone else to do the same thing. During 2014 and 2015, Facebook started offering Bluetooth beacons to businesses that were ostensibly supposed to facilitate in-app special offers (though I'm not sure that those ever really materialized) but were pretty transparently just a location data collection play. Google jumped into the fray in signature Google style, with an offering that was confusing, semi-secret, incoherently marketed, and short-lived. Google's Project Beacon, or Google My Business, also shipped free Bluetooth beacons out to businesses to give Android location services a boost. Google My Business seems to have been the source of a fair amount of confusion even at the time, and we can virtually guarantee that (as reporters speculated at the time) Google was intentionally vague and evasive about the system to avoid negative attention from privacy advocates.

In the case of Facebook, well, they don't have the level of opsec that Google does, so things are a little better documented: leaked documents show that Facebook worried that users would 'freak out' and spread 'negative memes' about the program. The company recently removed the Facebook Bluetooth beacons section from their website.

The real deployment of iBeacons and closely related third-party iBeacon-like products [1] occurred at massive scale but largely in secret. It became yet another dark project of the advertising-industrial complex, perhaps the most successful yet of a long-running series of retail consumer surveillance systems.

Payments

One interesting thing about iBeacon is how it was compared to NFC. The two really aren't that similar, especially considering the vast difference in usable ranges, but NFC was the first radio technology to be adopted for "location marketing" applications: "tap your phone to see our menu," that kind of thing. Back in 2013, Apple had rather notably not implemented NFC in its products, despite its increasing adoption on Android.

But, there is much more to this story than learning about new iPads and getting a surprise notification that you are eligible for a subsidized iPhone upgrade. What we're seeing is Apple pioneering the way mobile devices can be utilized to make shopping a better experience for consumers. What we're seeing is Apple putting its money where its mouth is when it decided not to support NFC. (MacObserver)

Some commentators viewed iBeacon as Apple's response to NFC, and I think there's more to that than you might think. In early marketing, Apple kept positioning iBeacon for payments. That's a little weird, right? iBeacons are a purely one-way broadcast system.
Still, part of Apple's flagship iBeacon implementation was a payment system. Here's how one reporter describes the purchase he made there, using his iPhone and the EasyPay system:

"We started by using the iPhone to scan the product barcode and then we had to enter our Apple ID, pretty much the way we would for any online Apple purchase [using the credit card data on file with one's Apple account]. The one key difference was that this transaction ended with a digital receipt, one that we could show to a clerk if anyone stopped us on the way out."

Apple Wallet only kinda-sorta existed at the time, although Apple was clearly already midway into a project to expand into consumer payments. It says a lot about this point in time in phone-based payments that several reporters talk about iBeacon payments as a feature of iTunes, since Apple was mostly implementing general-purpose billing by bolting it onto iTunes accounts.

It seems like what happened is that Apple committed to developing a pay-by-phone solution, but decided against NFC. To be competitive with other entrants in the pay-by-phone market, they had to come up with some kind of technical solution to interact with retail POS, and iBeacon was their choice. From a modern perspective this seems outright insane; like, Bluetooth broadcasts are obviously not the right way to initiate a payment flow, and besides, there's a whole industry-standard stack dedicated to that purpose... built on NFC.

But remember, this was 2013! EMV was not yet in meaningful use in the US; several major banks and payment networks had just committed to rolling it out in 2012, and every American can tell you that the process was long and torturous. Because of the stringent security standards around EMV, Android devices did not implement EMV until ARM secure enclaves became widely available. EMVCo, the industry body behind EMV, did not have a certification program for smartphones until 2016. Android phones offered several "tap-to-pay" solutions, from Google's frequently rebranded Google Wallet^w^wAndroid Pay^w^wGoogle Wallet to Verizon's embarrassingly rebranded ISIS^wSoftcard and Samsung Pay. All of these initially relied on proprietary NFC protocols with bespoke payment terminal implementations. This was sketchy enough, and few enough phones actually had NFC, that the most successful US pay-by-phone implementations like Walmart's and Starbucks' used barcodes for communication. It would take almost a decade before things really settled down and smartphones all just implemented EMV.

So, in that context, Apple's decision isn't so odd. They must have figured that iBeacon could solve the same "initial handshake" problem as Walmart's QR codes, but more conveniently and using radio hardware that they already included in their phones. iBeacon-based payment flows used the iBeacon only to inform the phone of what payment devices were nearby; everything else happened via interaction with a cloud service or whatever mechanism the payment vendor chose to implement. Apple used their proprietary payments system through what would become your Apple Account, PayPal slapped together an iBeacon-based fast path to PayPal transfers, etc.

I don't think that Apple's iBeacon-based payments solution ever really shipped. It did get some use, most notably by Apple itself, but these all seem to have been early-stage implementations, and the complete end-to-end SDK that a lot of developers expected never landed.
You might remember that this was a very chaotic time in phone-based payments; solutions were coming and going. When Apple Pay was properly announced a year after iBeacons, there was little mention of Bluetooth. By the time in-store Apple Pay became common, Apple had given up and adopted NFC.

Limitations

One of the great weaknesses of iBeacon was the security design, or lack thereof. iBeacon advertisements were sent in plaintext with no authentication of any type. This did, of course, radically simplify implementation, but it also made iBeacon untrustworthy for any important purpose. It is quite trivial, with a device like an Android phone, to "clone" any iBeacon and transmit its identifiers wherever you want. This problem might have killed off the whole location-based-paywall-unlocking concept had market forces not already done so. It also opens the door to a lot of nuisance attacks on iBeacon-based location marketing, which may have limited the depth of iBeacon features in major apps.

iBeacon was also positioned as a sort of local positioning system, but it really wasn't. iBeacon offers no actual time-of-flight measurements, only RSSI-based estimation of range. Even with correct on-site calibration (which can be aided by adjusting a fixed RSSI-range bias value included in some iBeacon advertisements), this type of estimation is very inaccurate, and in my little experiments with a Bluetooth beacon location library I can see swings from 30m to 70m estimated range based only on how I hold my phone. iBeacon positioning has never been accurate enough to do more than assert whether or not a phone is "near" the beacon, and "near" can take on different values depending on the beacon's transmit power.

Developers have long looked towards Bluetooth as a potential local positioning solution, and it's never quite delivered. The industry is now turning towards Ultra-Wideband or UWB technology, which combines a high-rate, high-bandwidth radio signal with a time-of-flight radio ranging protocol to provide very accurate distance measurements. Apple is, once again, a technical leader in this field, and UWB radios have been integrated into the iPhone 11 and later.

Senescence

iBeacon arrived to some fanfare, quietly proliferated in the shadows of the advertising industry, and then faded away. The Wikipedia article on iBeacons hasn't really been updated since support on Windows Phone was relevant. Apple doesn't much talk about iBeacons any more, and their compatriots Facebook and Google both sunset their beacon programs years ago.

Part of the problem is, well, the pervasive surveillance thing. The idea of Bluetooth beacons cooperating with your phone to track your every move proved unpopular with the public, and so progressively tighter privacy restrictions in mobile operating systems and app stores have clamped down on every grocery store app selling location data to whatever broker bids the most. I mean, they still do, but it's gotten harder to use Bluetooth as an aid. Even Android, the platform of "do whatever you want in the background, battery be damned," strongly discourages Bluetooth scanning by non-foreground apps.

Still, the basic technology remains in widespread use. BLE beacons have absolutely proliferated; there are plenty of apps you can use to list nearby beacons, and there almost certainly are nearby beacons. One of my cars has, like, four separate BLE beacons going on all the time, related to a phone-based keyless entry system that I don't think the automaker even supports any more.
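Since the RSSI-based estimation of range mentioned above is really just a log-distance path loss calculation, here is roughly the arithmetic involved. This is a sketch, not Apple's implementation; the path loss exponent n in particular is a guess that varies wildly with the environment.

    def estimate_range_m(rssi_dbm: float, measured_power_dbm: float, n: float = 2.5) -> float:
        """Rough log-distance path loss estimate of beacon range in meters.

        measured_power_dbm is the calibrated RSSI at 1 meter (the value carried
        in the beacon advertisement); n is an environment fudge factor, around
        2 in free space and higher indoors.
        """
        return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * n))

    # A 10 dB swing in received signal strength, easily caused by how you hold
    # the phone, moves the estimate by more than a factor of two:
    print(round(estimate_range_m(-75, -59), 1))  # about 4.4 m
    print(round(estimate_range_m(-85, -59), 1))  # about 11.0 m

Which is why the iOS API only really commits to coarse proximity buckets like "immediate," "near," and "far."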
Bluetooth beacons, as a basic primitive, are so useful that they get thrown into all kinds of applications. My earbuds are a BLE beacon, which the (terrible, miserable, no-good) Bose app uses to detect their proximity when they're paired to another device. A lot of smart home devices like light bulbs are beacons. The irony, perhaps, of iBeacon-based location tracking is that it's a victim of its own success. There is so much "background" BLE beacon activity that you scarcely need to add purpose-built beacons to track users, and only privacy measures in mobile operating systems and the beacons themselves (some of which rotate IDs) save us.

Apple is no exception to the widespread use of Bluetooth beacons: iBeacon lives on in virtually every Apple device. If you do try out a Bluetooth beacon scanning app, you'll discover pretty much every Apple product in a 30 meter radius. From MacBooks Pro to AirPods, almost all Apple products transmit iBeacon advertisements to their surroundings. These are used for the initial handshake process of peer-to-peer features like AirDrop, and Find My/AirTag technology seems to be derived from the iBeacon protocol (in the sense that anything can be derived from such a straightforward design). Of course, pretty much all of these applications now randomize identifiers to prevent passive use of device advertisements for long-term tracking.

Here's some good news: iBeacons are readily available in a variety of form factors, and they are very cheap. Lots of libraries exist for working with them. If you've ever wanted some sort of location-based behavior for something like home automation, iBeacons might offer a good solution. They're neat, in an old technology way. Retrotech from the different world of 2013.

It's retro in more ways than one. It's funny, and a bit quaint, to read the contemporary privacy concerns around iBeacon. If only they had known how bad things would get! Bluetooth beacons were the least of our concerns.

[1] Things can be a little confusing here because the iBeacon is such a straightforward concept, and Apple's implementation is so simple. We could define "iBeacon" as including only officially endorsed products from Apple affiliates, or as including any device that behaves the same as official products (e.g. by using the iBeacon BLE advertisement type codes), or as any device that is performing substantially the same function (but using a different advertising format). I usually mean the last of these three, as there isn't really much difference between an iBeacon and ten million other BLE beacons that are doing the same thing with a slightly different identifier format. Facebook and Google's efforts fall into this camp.
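As an aside to the home automation suggestion above, here is roughly what watching for a specific beacon looks like with the third-party bleak library (not anything Apple ships). The UUID, the 30-second scan window, and the print statement are placeholders, and bleak's callback details have shifted a bit between versions, so treat this as a sketch.

    import asyncio
    from bleak import BleakScanner  # third-party: pip install bleak

    APPLE_COMPANY_ID = 0x004C
    MY_BEACON_UUID = "f7826da64fa24e988024bc5b71e0893e"  # placeholder for your beacon's UUID

    def on_advertisement(device, adv):
        data = adv.manufacturer_data.get(APPLE_COMPANY_ID)
        if not data or len(data) < 23 or data[0] != 0x02 or data[1] != 0x15:
            return  # not an iBeacon frame
        if data[2:18].hex() == MY_BEACON_UUID:
            # React however you like: flip a presence flag, hit a home
            # automation webhook, turn on the porch light, etc.
            print(f"saw my beacon at {adv.rssi} dBm")

    async def main():
        scanner = BleakScanner(detection_callback=on_advertisement)
        await scanner.start()
        await asyncio.sleep(30)  # listen for 30 seconds
        await scanner.stop()

    asyncio.run(main())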

2025-04-18 white alice

When we last talked about Troposcatter, it was Pole Vault. Pole Vault was the first troposcatter communications network, on the east coast of Canada. It would not be alone for long. By the time the first Pole Vault stations were complete, work was already underway on a similar network for Alaska: the White Alice Communications System, WACS.

Alaska has long posed a challenge for communications. In the 1860s, Western Union wanted to extend their telegraph network from the United States into Europe. Although the technology would be demonstrated shortly after, undersea telegraph cables were still notional, and it seemed that a route that minimized the ocean crossing would be preferable---of course, that route maximized the length on land, stretching through present-day Alaska and Siberia on each side of the Bering Strait. This task proved more formidable than Western Union had imagined, and the first transatlantic telegraph cable (on a much further south crossing) was completed before the arctic segments of the overland route. The "Western Union Telegraph Expedition" abandoned its work, leaving a telegraph line well into British Columbia that would serve as one of the principal communications assets in the region for decades after.

This ill-fated telegraph line failed to link San Francisco to Moscow, but its aftermath included a much larger impact on Russian interests in North America: the purchase of Alaska in 1867. Shortly after, the US military began its expansion into the new frontier. The Army Signal Corps, mostly to fulfill its function in observing the weather, built and staffed small installations that stretched further and further west.

Later, in the 1890s, a gold rush brought a sudden influx of American settlers to Alaska's rugged terrain. The sudden economic importance of the Klondike, and the rather colorful personalities of the prospectors looking to exploit it, created a much larger need for military presence. Fortuitously, many of the forts present had been built by the Signal Corps, which had already started on lines of communication. Construction was difficult, though, and without Alaskan communications as a major priority there was only minimal coverage.

Things changed in 1900, when Congress appropriated a substantial budget to the Washington-Alaska Military Cable and Telegraph System. The Signal Corps set on Alaska like, well, like an army, and extensive telegraph and later telephone lines were built to link the various military outposts. Later renamed the Alaska Communications System, these cables brought the first telecommunication to much of Alaska. The arrival of the telegraph was quite revolutionary for remote towns, which could now receive news in real time that had previously been delayed by as much as a year [1]. Telegraphy was important to civilians as well, something that Congress had anticipated: the original act authorizing the Alaska Communications System dictated that it would carry commercial traffic too. The military had an unusual role in Alaska, and one aspect of it was telecommunications provider.

In 1925, an outbreak of diphtheria began to claim the lives of children in Nome, a town in far western Alaska on the Seward Peninsula. The daring winter delivery of antidiphtheria serum by dog sled is widely remembered due to its tangential connection to the Iditarod, but there were two sides to the "serum run."
The message from Nome's sole doctor requesting the urgent shipment was transmitted from Nome to the Public Health Service in DC over the Alaska Communications System. It gives us some perspective on the importance of the telegraph in Alaska that the 600-mile route to Nome took five days and many feats of heroism---but at the same time could be crossed instantaneously by telegrams.

The Alaska Communications System included some use of radio from the beginning. A pair of HF radio stations specifically handled traffic for Nome, covering a 100-mile stretch too difficult for even the intrepid Signal Corps. While not a totally new technology to the military, radio was quite new to the telegraph business, and the ACS link to Nome was probably the first commercial radiotelegraph system on the continent.

By the 1930s, the condition of the Alaskan telegraph cables had decayed while demand for telephony had increased. Much of the ACS was upgraded and modernized to medium-frequency radiotelephone links. In towns small and large, even in Anchorage itself, the sole telephone connection to the contiguous United States was an ACS telephone installed in the general store.

Alaskan communications became an even greater focus of the military with the onset of the Second World War. A few months after Pearl Harbor, the Japanese attacked Fort Mears in the Aleutian Islands. Fort Mears had no telecommunications connections, so despite the proximity of other airbases, support was slow to come. The lack of a telegraph or telephone line contributed to 43 deaths and focused attention on the ACS. By 1944, the Army Signal Corps had a workforce of 2,000 dedicated to Alaska.

WWII brought more than one kind of attention to Alaska. Several Japanese assaults on the Aleutian Islands represented the largest threats to American soil outside of Pearl Harbor, showing both Alaska's vulnerability and the strategic importance given to it by its relative proximity to Eurasia. WWII ended but, in 1949, the USSR demonstrated an atomic weapon. A combination of Soviet expansionism and the new specter of nuclear war turned military planners towards air defense. Like the Canadian Maritimes in the East, Alaska covered a huge swath of the airspace through which Soviet bombers might approach the US. Alaska was, once again, a battleground.

The early Cold War military buildup of Alaska was particularly heavy on air defense. During the late '40s and early '50s, more than a dozen new radar and control sites were built. The doctrine of ground-controlled interception requires real-time communication between radar centers, stressing the limited number of voice channels available on the ACS. As early as 1948, the Signal Corps had begun experiments to choose an upgrade path. Canadian early-warning radar networks, including the Distant Early Warning Line, were on the drawing board and would require many communications channels in particularly remote parts of Alaska.

Initially, point-to-point microwave was used in relatively favorable terrain (where the construction of relay stations about every 50 miles was practical). For the more difficult segments, the Signal Corps found that VHF radio could provide useful communications at ranges over 100 miles. VHF radiotelephones were installed at air defense radar stations, but there was a big problem: the airspace surveillance radars of the 1950s also operated in the VHF band, and caused so much interference with the radiotelephones that they were difficult to use.
The radar stations were probably the most important users of the network, so VHF would have to be abandoned. In 1954, a military study group was formed to evaluate options for the ACS. That group, in turn, requested a proposal from AT&T. Bell Laboratories had been involved in the design and evaluation of Pole Vault, the first sites of which had been completed two years before, so they naturally positioned troposcatter as the best option.

It is worth mentioning the unusual relationship AT&T had with Alaska, or rather, the lack of one. While the Bell System enjoyed a monopoly on telephony in most of the United States [2], they had never expanded into Alaska. Alaska was only a territory, after all, and a very sparsely populated one at that. The paucity of long-distance leads to or from Alaska (only one connected to Anchorage, for example) limited the potential for integration of Alaska into the broader Bell System anyway. Long-distance telecommunications in Alaska were a military project, and AT&T was involved only as a vendor.

Because of the high cost of troposcatter stations, proven during Pole Vault construction, a hybrid was proposed: microwave stations could be spaced every 50 miles along the road network, while troposcatter would cover the long stretches without roads. In 1955, the Signal Corps awarded Western Electric a contract for the White Alice Communications System. The Corps of Engineers surveyed the locations of 31 sites, verifying each by constructing a temporary antenna tower. The Corps of Engineers led construction of the first 11 sites, and the final 20 were built on contract by Western Electric itself. All sites used radio equipment furnished by Western Electric and were built to Western Electric designs.

Construction was far from straightforward. Difficult conditions delayed completion of the original network until 1959, two years later than intended. A much larger issue, though, was the budget. The original WACS was expected to cost $38 million. By the time the first 31 sites were complete, the bill totaled $113 million---equivalent to over a billion dollars today. Western Electric had underestimated not only the complexity of the sites but the difficulty of their construction. A WECo report read:

On numerous occasions, the men were forced to surrender before the onslaught of cold, wind and snow and were immobilized for days, even weeks. This ordeal of waiting was ofttimes made doubly galling by the knowledge that supplies and parts needed for the job were only a few miles distant but inaccessible because the white wall of winter had become impenetrable.

The initial WACS capability included 31 stations, of which 22 were troposcatter and the remainder microwave only (using Western Electric's TD-2). A few stations were equipped with both troposcatter and microwave, serving as relays between the two carriers.

In 1958, construction started on the Ballistic Missile Early Warning System, or BMEWS. BMEWS was a long-range radar system intended to provide early warning of a Soviet attack. BMEWS would provide as little as 15 minutes of warning, requiring that alerts reach NORAD in Colorado as quickly as possible. One BMEWS set was installed in Greenland, where the Pole Vault system was expanded to provide communications. Similarly, the BMEWS set at Clear Missile Early Warning Station in central Alaska relied on White Alice.
Planners were concerned about the ability of the Soviet Union to suppress an alert by destroying infrastructure, so two redundant chains of microwave sites were added to White Alice. One stretched from Clear to Ketchikan, where it connected to an undersea cable to Seattle. The other went east, towards Canada, where it met existing telephone cables on the Alaska Highway.

A further expansion of White Alice started the next year, in 1959. Troposcatter sites were extended through the Aleutian Islands in "Project Stretchout" to serve new DEW Line stations. During the 1960s, existing WACS sites were expanded and new antennas were installed at Air Force installations. These were generally microwave links connecting the airbases to existing troposcatter stations. In total, WACS reached 71 sites.

Four large sites served as key switching points with multiple radio links and telephone exchanges. Pedro Dome, for example, had a 15,000-square-foot communications building with dormitories, a power plant, and extensive equipment rooms. Support facilities included a vehicle maintenance building, a storage warehouse, and extensive fuel tanks. A few WACS sites even had tramways for access between the "lower camp" (where equipment and personnel were housed) and the "upper camp" (where the antennas were located)... although they apparently did not fare well in the Alaskan conditions. While Western Electric had initially planned for six people and 25 kW of power at each station, the final requirements were typically 20 people and 120-180 kW of generator capacity. Some sites stored over half a million gallons of fuel---conditions often meant that resupply was only possible during the summer.

Besides troposcatter and microwave radios, the equipment included tandem telephone exchanges. These are described in a couple of documents as "ATSS-4A," ATSS standing for Alaska Telephone Switching System. Based on the naming and some circumstantial evidence, I believe these were Western Electric 4A crossbar exchanges. They were later incorporated into AUTOVON, but also handled commercial long-distance traffic between Alaskan towns.

With troposcatter come large antennas, and depending on connection lengths, WACS troposcatter antennas ranged from 30' dishes to 120' "billboard" antennas similar to those seen at Pole Vault sites. The larger antennas handled up to 50 kW of transmit power. Some 60' and 120' antennas included their own fuel tanks and steam plants that heated the antennas through winter to minimize snow accumulation.

WACS represented an enormous improvement in Alaskan communications. The entire system was multi-channel, with redundancy in many key parts of the network. Outside of the larger cities, WACS often brought the first usable long-distance telephone service. Even in Anchorage, WACS provided the only multi-channel connection.

Despite these achievements, WACS was set for much the same fate as other troposcatter systems: obsolescence after the invention of communications satellites. The experimental satellites Telstar 1 and 2 launched in the early 1960s, and the military began a shift towards satellite communications shortly after. Besides, the formidable cost of WACS had become a political issue. Maintenance of the system overran estimates by just as much as construction had, and placing this cost on taxpayers was controversial since much of the traffic carried by the system consisted of regular commercial telephone calls.
Besides, a general reluctance to allocate money to WACS had led to a gradual decay of the system. WACS capacity was insufficient for the rapidly increasing long-distance telephone traffic of the '60s, and due to decreased maintenance funding, reliability was beginning to decline.

The retirement of a Cold War communications system is not unusual, but the particular fate of WACS is: it entered a long second life. After acting as the sole long-distance provider for 60 years, the military began its retreat. In 1969, Congress passed the Alaska Communications Disposal Act. It called for complete divestment of the Alaska Communications System and WACS to a private owner determined by a bidding process. Several large independent communications companies bid, but the winner was RCA. Committing to a $28.5 million purchase price followed by $30 million in upgrades, RCA reorganized the Alaska Communications System as RCA Alascom. Transfer of the many ACS assets from the military to RCA took 13 years, involving both outright transfer of property and complex lease agreements on sites colocated with military installations.

RCA's interest in Alaskan communications was closely connected to the coming satellite revolution: RCA had just built the Bartlett Earth Station, the first satellite ground station in Alaska. While Bartlett was originally an ACS asset owned by the Signal Corps, it became just the first of multiple ground stations that RCA would build for Alascom. Several of the new ground stations were colocated with WACS sites, establishing satellite as an alternative to the troposcatter links. Alascom appears to have been the first domestic satellite voice network in commercial use, initially relying on a Canadian communications satellite [3].

In 1974, SATCOM 1 and 2 launched. These were not the first commercial communications satellites, but they represented a significant increase in capacity over previous commercial designs and are sometimes thought of as the true beginning of the satellite communications era. Both were built and owned by RCA, and Alascom took advantage of the new transponders. At the same time, Alascom launched a modernization effort. 22 of the former WACS stations were converted to satellite ground stations, a project that took much of the '70s as Alascom struggled with the same conditions that had made WACS so challenging to begin with. Modernization also included the installation of DMS-10 telephone switches and conversion of some connections to digital.

A series of regulatory and business changes in the 1970s led RCA to step away from the domestic communications industry. In 1979, Alascom was sold to Pacific Power and Light, this time for $200 million and $90 million in debt. PP&L continued on much the same trajectory, expanding the Alascom system to over 200 ground stations and launching the satellite Aurora I---the first of a small series of satellites that gave Alaska the distinction of being the only state with its own satellite communications network. For much of the '70s to the '00s, large parts of Alaska relied on satellite relay for calls between towns.

In a slight twist of irony considering its long lack of interest in the state, AT&T purchased parts of Alascom from PP&L in 1995, forming AT&T Alascom, which has since faded away as an independent brand. Other parts of the former ACS network, generally non-toll (or non-long-distance) operations, were split off into then PP&L subsidiary CenturyTel.
While CenturyTel has since merged into CenturyLink, the Alaskan assets were first sold to Alaska Communications. Alaska Communications considers itself the successor of the ACS heritage, giving them a claim to over 100 years of communications history.

As electronics technology has continued to improve, penetration of microwave relays into inland Alaska has increased. Fewer towns rely on satellite today than in the 1970s, and the half-second latency to geosynchronous orbit is probably not missed. Alaska communications have also become more competitive, with long-distance connectivity available from General Communications (GCI) as well as AT&T and Alaska Communications. Still, the legacy of Alaska's complex and expensive long-distance infrastructure echoes in our telephone bills. State and federal regulators have allowed for extra fees on telephone service in Alaska and calls into Alaska, both intended to offset the high cost of infrastructure. Alaska is generally the most expensive long-distance calling destination in the United States, even when considering the territories.

But what of White Alice? The history of the Alaska Communications System's transition to private ownership is complex and not especially well documented. While RCA's winning bid following the Alaska Communications Disposal Act set the big picture, the actual details of the transition were established by many individual negotiations spanning over a decade. Depending on the station, WACS troposcatter sites generally conveyed to RCA in 1973 or 1974. Some, colocated with active military installations, were leased rather than included in the sale. RCA generally decommissioned each WACS site once a satellite ground station was ready to replace it, either on-site or nearby. For some WACS sites, this meant the troposcatter equipment was shut down in 1973. Others remained in use later. The Boswell Bay troposcatter station seems to have been the last turned down, in 1985.

The 1980s were decidedly the end of WACS. Alascom's sale to PP&L cemented the plan to shut down all troposcatter operations, and the 1980 Comprehensive Environmental Response, Compensation, and Liability Act led to the establishment of the Formerly Used Defense Sites (FUDS) program within DoD. Under FUDS, the Corps of Engineers surveyed the disused WACS sites and found nearly all had significant contamination from asbestos (used in seemingly every building material in the '50s and '60s) and leaked fuel oil. As a result, most White Alice sites were demolished between 1986 and 1999. The cost of demolition and remediation in such remote locations was sometimes greater than the original construction. No WACS sites remain intact today.

Postscript: A 1988 Corps of Engineers historical inventory of WACS, prepared due to the demolition of many of the stations, mentions that meteor burst communications might replace troposcatter. Meteor burst is a fascinating communications mode, similar in many ways to troposcatter but with the twist that the reflecting surface is not the troposphere but the ionized trails of meteors entering the atmosphere. Meteor burst connections only work when there is a meteor actively vaporizing in the upper atmosphere, but atmospheric entry of small meteors is common enough that meteor burst communications are practical for low-rate packetized communications. For example, meteor burst has been used for large weather and agricultural telemetry systems.
The Alaska Meteor Burst Communications System was implemented in 1977 by several federal agencies and was used primarily for automated environmental telemetry. Unlike most meteor burst systems, though, it seems to have been used for real-time communications by the BLM and FAA. I can't find much information, but they seem to have built portable teleprinter terminals for this use.

Even more interesting, the Air Force's Alaskan Air Command built its own meteor burst network around the same time. This network was entirely for real-time use, and demonstrated the successful transmission of radar track data from radar stations across the state to Elmendorf Air Force Base. Even better, the Air Force experimented with the use of meteor burst for intercept control by fitting aircraft with a small speech synthesizer that translated coded messages into short phrases. The Air Force experimented with several meteor burst systems during the Cold War, anticipating that it might be a survivable communications system in wartime. More details on these will have to fill a future article.

[1] Crews of the Western Union Telegraph Expedition reportedly continued work for a full year after the completion of the transatlantic telegraph cable, because news of it hadn't reached them yet.

[2] Eliding here some complexities like GTE and their relationship to the Bell System.

[3] Perhaps owing to the large size of the country and the many geographical challenges to cable laying, Canada has often led North America in satellite communications technology.

2025-04-06 Airfone

We've talked before about carphones, and certainly one of the only ways to make phones even more interesting is to put them in modes of transportation. Installing telephones in cars made a lot of sense when radiotelephones were big and required a lot of power, and they faded away as cellphones became small enough to have a carphone even outside of your car. There is one mode of transportation where the personal cellphone is pretty useless, though: air travel.

Most readers are probably well aware that the use of cellular networks while aboard an airliner is prohibited by FCC regulations. There are a lot of urban legends and popular misconceptions about this rule, and fully explaining it would probably require its own article. The short version is that it has to do with the way cellular devices are certified and cellular networks are planned. The technical problems are not impossible to overcome, but honestly, there hasn't been a lot of pressure to make changes.

One line of argument that used to make an appearance in cellphones-on-airplanes discourse is the idea that airlines or the telecom industry supported the cellphone ban because it created a captive market for in-flight telephone services. Wait, in-flight telephone services? That theory has never had much to back it up, but with the benefit of hindsight we can soundly rule it out: not only has the rule persisted well past the decline and disappearance of in-flight telephones, in-flight telephones were never commercially successful to begin with.

Let's start with John Goeken. A 1984 Washington Post article tells us that "Goeken is what is called, predictably enough, an 'idea man.'" Being the "idea person" must not have had quite the same connotations back then; it was a good time for Goeken. In the 1960s, conversations with customers at his two-way radio shop near Chicago gave him an idea for a repeater network to allow truckers to reach their company offices via CB radio. This was the first falling domino in a series that led to the founding of MCI and the end of AT&T's long-distance monopoly.

Goeken seems to have been the type who grew bored with success, and he left MCI to take on a series of new ventures. These included an emergency medicine messaging service, electrically illuminated high-viz clothing, and a system called the Mercury Network that built much of the inertia behind the surprisingly advanced computerization of florists [1].

"Goeken's ideas have a way of turning into dollars, millions of them," the Washington Post continued. That was certainly true of MCI, but every ideas guy has their misses. One of the impressive things about Goeken was his ability to execute with speed and determination, though, so even his failures left their mark. This was especially true of one of his ideas that, in the abstract, seemed so solid: what if there were payphones on commercial flights?

Goeken's experience with MCI and two-way radios proved valuable, and starting in the mid-1970s he developed prototype air-ground radiotelephones. In its first iteration, "Airfone" consisted of a base unit installed on an aircraft bulkhead that accepted a credit card and released a cordless phone. When the phone was returned to the base station, the credit card was returned to the customer. This equipment was simple enough, but it would require an extensive ground network to connect callers to the telephone system.
The infrastructure part of the scheme fell into place when long-distance communications giant Western Union signed on with Goeken Communications to launch a 50/50 joint venture under the name Airfone, Inc.

Airfone was not the first to attempt air-ground telephony---AT&T had pursued the same concept in the 1970s, but abandoned it after resistance from the FCC (unconvinced the need was great enough to justify frequency allocations) and the airline industry (which had formed a pact, blessed by the government, that prohibited the installation of telephones on aircraft until such time as a mature technology was available to all airlines). Goeken's hard-headed attitude, exemplified in the six-year legal battle he fought against AT&T to create MCI, must have helped to defeat this resistance.

Goeken brought technical advances, as well. By 1980, there actually was an air-ground radiotelephone service in general use. The "General Aviation Air-Ground Radiotelephone Service" allocated 12 channels (duplex pairs) for radiotelephony from general aviation aircraft to the ground, and a company called Wulfsberg had found great success selling equipment for this service under the FliteFone name. Wulfsberg FliteFones were common equipment on business aircraft, where they let executives shout "buy" and "sell" from the air. Goeken referred to this service as evidence of the concept's appeal, but it was inherently limited by the 12 allocated channels.

The General Aviation Air-Ground Radiotelephone Service, which I will call AGRAS (this is confusing in a way I will discuss shortly), operated at about 450 MHz. This UHF band is decidedly line-of-sight, but airplanes are very high up and thus can see a very long way. The reception radius of an AGRAS transmission, used by the FCC for planning purposes, was 220 miles. This required assigning specific channels to specific cities, and there the limits became quite severe. Albuquerque had exactly one AGRAS channel available. New York City got three. Miami, a busy aviation area but no doubt benefiting from its relative geographical isolation, scored a record-setting four AGRAS channels. That meant AGRAS could only handle four simultaneous calls within a large region... if you were lucky enough for that to be the Miami region; otherwise capacity was even more limited.

Back in the 1970s, AT&T had figured that in-flight telephones would be very popular. In a somewhat hand-wavy economic analysis, they figured that about a million people flew on a given day, and about a third of them would want to make telephone calls. That's over 300,000 calls a day, clearly more than the limited AGRAS channels could handle... leading to the FCC's objection that a great deal of spectrum would have to be allocated to make in-flight telephony work.

Goeken had a better idea: single-sideband. SSB is a radio modulation technique that allows a transmission to fit within a very narrow bandwidth (basically by suppressing the carrier and one of the two redundant sidebands), at the cost of a somewhat more fiddly tuning process for reception. SSB was mostly used down in the HF bands, where the low frequencies meant that bandwidth was acutely limited. Up in the UHF world, bandwidth seemed so plentiful that there was little need for careful modulation techniques... until Goeken found himself asking the FCC for 10 blocks of 29 channels each, a lavish request that wouldn't really fit anywhere in the popular UHF spectrum. The use of UHF SSB, pioneered by Airfone, allowed far more efficient use of the allocation.
In 1983, the FCC held hearings on Airfone's request for an experimental license to operate their SSB air-ground radiotelephone system in two allocations (separate air-ground and ground-air ranges) around 850 MHz and 895 MHz. The total spectrum allocated was about 1.5 MHz in each of the two directions. The FCC assented and issued the experimental license in 1984, and Airfone was in business.

Airfone initially planned 52 ground stations for the system, although I'm not sure how many were ultimately built---certainly 37 were in progress in 1984, at a cost of about $50 million. By 1987, the network had reportedly grown to 68. Airfone launched on six national airlines (a true sign of how much airline consolidation has happened in recent decades---there were six national airlines?), typically with four cordless payphones on a 727 or similar aircraft. The airlines received a commission on the calling rates, and Airfone installed the equipment at their own expense. Still, it was expected to be profitable... Airfone projected that 20-30% of passengers would have calls to make.

I wish I could share more detail on these ground stations, in part because I assume there was at least some reuse of existing Western Union facilities (WU operated a microwave network at the time and had even dabbled in cellular service in the 1980s). I can't find much info, though. The antennas for the 800 MHz band would have been quite small, but the 1980s multiplexing and control equipment probably took a fair share of floorspace.

Airfone was off to a strong start, at least in terms of installation base and press coverage. I can't say now how many users it actually had, but things looked good enough that in 1986 Western Union sold their share of the company to GTE. Within a couple of years, Goeken sold his share to GTE as well, reportedly as a result of disagreements with GTE's business strategy.

Airfone's SSB innovation was actually quite significant. At the same time, in the 1980s, a competitor called Skytel was trying to get a similar idea off the ground with the existing AGRAS allocation. It doesn't seem to have gone anywhere; I don't think the FCC ever approved it. Despite an obvious concept, Airfone pretty much launched as a monopoly, operating under an experimental license that named them alone. Unsurprisingly there was some upset over this apparent show of favoritism by the FCC, including from AT&T, which vigorously opposed the experimental license. As it happened, the situation would be resolved by going the other way: in 1990, the FCC established the "commercial aviation air-ground service," which normalized the 800 MHz spectrum and made licenses available to other operators. That was six years after Airfone started their build-out, though, giving them a head start that severely limited competition.

Still, AT&T was back. AT&T introduced a competing service called AirOne. AirOne was never as widely installed as Airfone but did score some customers, including Southwest Airlines, which only briefly installed AirOne handsets on their fleet. "Only briefly" describes most aspects of AirOne, but we'll get to that in a moment.

The suddenly competitive market probably gave GTE Airfone reason to innovate, and besides, a lot had changed in communications technology since Airfone was designed. One of Airfone's biggest limitations was its lack of true roaming: an Airfone call could only last as long as the aircraft was within range of the same ground station.
Airfone called this "30 minutes," but you can imagine that people sometimes started their call near the end of this window, and in practice the problem was reportedly much worse. Dropped calls were common, adding insult to the injury that Airfone was decidedly expensive.

GTE moved towards digital technology and automation. 1991 saw the launch of Airfone GenStar, which used QAM digital modulation to achieve better call quality and tighter utilization within the existing bandwidth. Further, a new computerized network allowed calls to be handed off from one ground station to another. Capitalizing on the new capacity and reliability, the aircraft equipment was upgraded as well. The payphone-like cordless stations were gone, replaced by handsets installed in seatbacks. First class cabins often got a dedicated handset for every seat; economy might have one handset on each side of a row. The new handsets offered RJ11 jacks, allowing the use of laptop modems while in-flight. Truly, it was the future.

During the 1990s, satellites were added to the Airfone network as well, improving coverage generally and making telephone calls possible on overseas flights. Of course, the rise of satellite communications also sowed the seeds of Airfone's demise. A company called Aircell, which started out using the cellular network to connect calls to aircraft, rebranded to Gogo and pivoted to satellite-based telephone services. By the late '90s, they were taking market share from Airfone, a trend that would only continue.

Besides, for all of its fanfare, Airfone was not exactly a smash hit. Rates were very high, $5 a minute in the late '90s, giving Airfone a reputation as a ripoff that must have cut a great deal into that "20-30% of fliers" they hoped to serve. With the rise of cellphones, many preferred to wait until the aircraft was on the ground to use their own cellphone at a much lower rate. GTE does not seem to have released much in the way of numbers for Airfone, but it probably wasn't making them rich.

Goeken, returning to the industry, inadvertently proved this point. He aggressively lobbied the FCC to issue competitive licenses, and ultimately succeeded. His second company in the space, In-Flight Phone Inc., became one of the new competitors to his old company. In-Flight Phone did not last for long. Neither did AT&T AirOne. A 2005 FCC ruling paints a grim picture:

Current 800 MHz Air-Ground Radiotelephone Service rules contemplate six competing licensees providing voice and low-speed data services. Six entities were originally licensed under these rules, which required all systems to conform to detailed technical specifications to enable shared use of the air-ground channels. Only three of the six licensees built systems and provided service, and two of those failed for business reasons.

In 2002, AT&T pulled out, and Airfone was the only in-flight phone left. By then, GTE had become Verizon, and GTE Airfone was Verizon Airfone. Far from a third of passengers, the CEO of Airfone admitted in an interview that a typical flight only saw 2-3 phone calls. Considering the minimum five-figure capital investment in each aircraft, it's hard to imagine that Airfone was profitable---even at $5 a minute.

Airfone more or less faded into obscurity, but not without a detour into the press via the events of 9/11. Flight 93, which crashed in Pennsylvania, was equipped with Airfone, and passengers made numerous calls.
Many of the events on board this aircraft were reconstructed with the assistance of Airfone records, and Claircom (the name of the operator of AT&T AirOne, which never seems to have been well marketed) also produced records related to other aircraft involved in the attacks. Most notably, flight 93 passenger Todd Beamer had a series of lengthy calls with Airfone operator Lisa Jefferson, through which he relayed many of the events taking place on the plane in real time. During these calls, Beamer seems to have coordinated the effort by passengers to retake control of the plane. The significance of Airfone and Claircom records to 9/11 investigations is such that 9/11 conspiracy theories may be one of the most enduring legacies of Claircom in particular.

In an odd acknowledgment of their aggressive pricing, Airfone decided not to bill for any calls made on 9/11, and temporarily introduced steep discounts (to $0.99 a minute) in the weeks after. This rather meager show of generosity did little to reverse the company's fortunes, though, and it was already well into a backslide.

In 2006, the FCC auctioned the majority of Airfone's spectrum to new users. The poor utilization of Airfone was a factor in the decision, as well as Airfone's relative lack of innovation compared to newer cellular and satellite systems. In fact, a large portion of the bandwidth was purchased by Gogo, who years later would use it to deliver in-flight WiFi. Another portion went to a subsidiary of JetBlue that provided in-flight television. Verizon announced the end of Airfone in 2006, pending an acquisition by JetBlue, and while the acquisition did complete, JetBlue does not seem to have continued Airfone's passenger airline service. A few years later, Gogo bought out JetBlue's communications branch, making them the new monopoly in 800 MHz air-ground radiotelephony. Gogo only offered telephone service for general aviation aircraft; passenger aircraft telephones had gone the way of the carphone.

It's interesting to contrast the fate of Airfone with that of its sibling, AGRAS. Depending on who you ask, AGRAS refers to the radio service or to the Air Ground Radiotelephone Automated Service operated by the Mid-America Computer Corporation. What an incredible set of names. This was a situation a bit like ARINC, the semi-private company that for some time held a monopoly on aviation radio services. MACC had a practical monopoly on general aviation telephone service throughout the US, by operating the billing system for calls. MACC still exists today as a vendor of telecom billing software, and this always seems to have been their focus---while I'm not sure, I don't believe that MACC ever operated ground stations, instead distributing rate payments to private companies that operated a handful of ground stations each. Unfortunately the history of this service is quite obscure and I'm not sure how MACC came to operate the system, but I couldn't resist the urge to mention the Mid-America Computer Corporation.

AGRAS probably didn't make anyone rich, but it seems to have been generally successful. Wulfsberg FliteFones operating on the AGRAS network gave way to Gogo's business aviation phone service, itself a direct descendant of Airfone technology. The former AGRAS allocation at 450 MHz somehow came under the control of a company called AURA Network Systems, which for some years has used a temporary FCC waiver of AGRAS rules to operate data services.
This year, the FCC began rulemaking to formally reallocate the 450 MHz air-ground allocation to data services for Advanced Air Mobility, a catch-all term for UAS and air taxi services that everyone expects to radically change the airspace system in coming years. New uses of the band will include command and control for long-range UAS, clearance and collision avoidance for air taxis, and ground- and air-based "see and avoid" communications for UAS. This pattern, of issuing a temporary authority to one company and later performing rulemaking to allow other companies to enter, is not unusual for the FCC but does make an interesting recurring theme in aviation radio. It's typical for no real competition to occur, the incumbent provider having been given such a big advantage.

When reading about these legacy services, it's always interesting to look at the licenses. ULS has only nine licenses on record for the original 800 MHz air-ground service, all expired and originally issued to Airfone (under both the GTE and Verizon names), Claircom (operating company for AT&T AirOne), and Skyway Aircraft---this one an oddity, a Florida-based company that seems to have planned to introduce in-flight WiFi but not gotten all the way there. Later rulemaking to open up the 800 MHz allocation to more users created a technically separate radio service with two active licenses, both held by AC BidCo. This is an intriguing mystery until you discover that AC BidCo is obviously a front company for Gogo, something they make no effort to hide---the legalities of FCC bidding processes are such that it's very common to use shell companies to hold FCC licenses, and we could speculate that AC BidCo is the Aircraft Communications Bidding Company, created by Gogo for the purpose of the 2006-2008 auctions. These two licenses are active for the former Airfone band, and Gogo reportedly continues to use some of the original Airfone ground stations. Gogo's air-ground network, which operates at 800 MHz as well as in a 3 GHz band allocated specifically to Gogo, was originally based on CDMA cellular technology. The ground stations were essentially cellular stations pointed upwards. It's not clear to me if this CDMA-derived system is still in use, but Gogo relies much more heavily on their Ku-band satellite network today.

The 450 MHz licenses are fascinating. AURA is the only company to hold current licenses, but the 246 expired licenses reveal the scale of the AGRAS business. Airground of Idaho, Inc., until 1999, held a license for an AGRAS ground station on Brundage Mountain near McCall, Idaho. The Arlington Telephone Company, until a 2004 cancellation, held a license for an AGRAS ground station atop their small telephone exchange in Arlington, Nebraska. AGRAS ground stations seem to have been a cottage industry, with multiple licenses to small rural telephone companies and even sole proprietorships. Some of the ground stations appear to have been the roofs of strip mall two-way radio installers. In another life, maybe I would be putting a 450 MHz antenna on my roof to make a few dollars.

Still, there were incumbents: numerous licenses belonged to SkyTel, which after the decline of AGRAS seems to have refocused on paging and, then, gone the same direction as most paging companies: an eternal twilight as American Messaging ("The Dependable Choice"), promoting innovation in the form of longer-range restaurant coaster pagers. In another life, I'd probably be doing that too.
[1] This is probably a topic for a future article, but the Mercury Network was a computerized system that Goeken built for a company called Florists' Telegraph Delivery (FTD). It was an evolution of FTD's telegraph system that allowed a florist in one city to place an order to be delivered by a florist in another city, thus enabling the long-distance gifting of flowers. There were multiple such networks and they had an enduring influence on the florist industry and broader business telecommunications.

2 months ago 19 votes

More in technology

This DIY standing desk controller provides luxury car-style memory settings

One of the best features you’ll find on a fancy luxury car is seat position memory. Typically, there are at least two profiles that “save” the position of the seat. When switching drivers, the new seat occupant can simply push the button for their profile and the seat will automatically move to their saved position. […] The post This DIY standing desk controller provides luxury car-style memory settings appeared first on Arduino Blog.

2 days ago 2 votes
The Tandy Corporation, Part 2

Trash-80s get Colorful, and Trash-80s get into your pockets

4 days ago 6 votes
Plus Post: Sony SMC-777C

One person, Eight languages. The 8bit.

5 days ago 5 votes
Harpoom: of course the Apple Network Server can be hacked into running Doom

Can you hack a $10,000+ Apple server running IBM AIX into running Doom? Of course you can. Well, you can now. Now, let's go ahead and get the grumbling out of the way. No, the ANS is not running Linux or NetBSD. No, this is not a backport of NCommander's AIX Doom, because that runs on AIX 4.3. The Apple Network Server could run no version of AIX later than 4.1.5, and there are substantial technical differences. (As it happens, the very fact it won't run on an ANS was what prompted me to embark on this port in the first place.) And no, this is not merely an exercise in flogging a geriatric compiler into building Doom Generic, though we'll necessarily do that as part of the conversion. There's no AIX sound driver for ANS audio, so this port is mute, but at the end we'll have a Doom executable that runs well on the ANS console under CDE and has no other system prerequisites. We'll even test it on one of IBM's PowerPC AIX laptops as well. Because we should.

The Apple Network Server was, almost by default, Apple's first true Unix server since the A/UX-based Workgroup Server 95, but IBM AIX has a long history of its own dating back to the 1986 IBM RT PC. That machine was based on the IBM ROMP CPU as derived from the IBM 801, generally considered the first modern RISC design. AIX started as an early port of UNIX System V Release 3 and is thus a true Unix descendant, but also merged in some code from BSD 4.2 and 4.3. The RT PC ran AIX v1 and v2 as its primary operating systems, though IBM also supported 4.3BSD (ported by IBM as the Academic Operating System) and a spin of Pick OS. Although a truly full-fledged workstation with aggressive support from IBM, it ended up a failure in the market due to comparatively poor performance and notorious problems with its floating point support. Nevertheless, AIX's workstation roots persisted even through the substantial rewrite that became version 3 in 1989, which was likewise the primary operating system for IBM's next-generation technical workstations, now based on POWER. AIX 3 introduced AIXwindows, a licensed port of X.desktop from IXI Limited (later acquired by another licensee, SCO), initially based on X11R3 with Motif as the toolkit.

In 1993 the "SUUSHI" partnership — named for its principals, Sun, Unix System Laboratories, the Univel joint initiative of Novell and AT&T, SCO, HP and IBM — negotiated an armistice in the Unix Wars, their previous hostilities now being seen as counterproductive against common enemy Microsoft. This partnership led to the Common Open Software Environment (COSE) initiative and the Common Desktop Environment (CDE), derived from HP VUE, which was also Motif-based. Had things gone differently, AIX might have been the next Mac OS. For that matter, OS/2 was still a thing on the desktop (as Warp 4) despite Workplace OS's failure, Ultimedia was a major IBM initiative in desktop multimedia, and the Common User Access model was part of CDE too. AIX 4 had multimedia capabilities as well through its own native port of Ultimedia, supporting applications like video capture and desktop video conferencing, and even featured several game ports IBM themselves developed — two for AIX 4.1 (Quake and Abuse) and one later for 4.3 (Quake II). The 4.1 game ports run well on ANS AIX with the Ultimedia libraries installed, though oddly Doom was never one of them. IBM cancelled all this with AIX 5L and never looked back. ANS "Harpoon" AIX only made two standard releases, 4.1.4.1 and 4.1.5, prior to Gil Amelio cancelling the line in April 1997.
However, ANS AIX is almost entirely binary-compatible with regular 4.1 and there is pretty much no reason not to run 4.1.5, so we'll make that our baseline. Although AIX 4.3 was a big jump forward, for our purposes the major difference is support for X11R6, as 4.1 only supports X11R5. Upgrading the X11 libraries is certainly possible but leads to a non-standard configuration, and anyway, the official id Linux port by Dave Taylor hails from 1994, when many X11R5 systems would still have been out there. We would also rather avoid forcing people to install Ultimedia. There shouldn't be anything about basic Doom that would require anything more than the basic operating system we have.

NCommander's AIX Doom port is based on Chocolate Doom, taking advantage of SDL 1.2's partial support for AIX. Oddly, the headers for the MIT Shared Memory Extension were reportedly missing on 4.3 despite the X server being fully capable of it, and he ended up cribbing them from a modern copy of Xorg to get it to build. Otherwise, much of his time was spent bringing in other necessary SDL libraries and getting the sound working, neither of which we're going to even attempt. Owing to the ANS' history as a heavily modified Power Macintosh 9500, it uses AWACS audio, for which no driver was ever written for AIX, and AIX 4.1 only supports built-in audio on IBM hardware. Until that changes or someone™ figures out an alternative, the most audio playback you'll get from Harpoon AIX is the server quacking on beeps (yes, I said quacking, the same as the Mac alert sound). However, Doom Generic is a better foundation for exotic Doom ports because it assumes very little about the hardware and has straight-up Xlib support, meaning we won't need SDL or even MIT-SHM. It also removes architecture-specific code and is endian-neutral, important because AIX to this day remains big-endian, though this is less of an issue with Doom specifically since it was originally written on big-endian NeXTSTEP 68K and PA-RISC.

We now need to install a toolchain, since Harpoon AIX does not include an xlC license (I'd be interested to hear from anyone trying to build this with it). Although the venerable AIXPDSLIB archive formerly at UCLA has gone to the great bitbucket in the sky, there are some archives of it around, and I've reposted the packages I personally kept for 4.1 and 3.2.5 on the Floodgap gopher server. The most recent compiler AIXPDSLIB had for 4.1 was gcc 2.95.2, though for fun I installed the slightly older egcs 2.91.66, and you will also need GNU make, for which 3.81 is the latest available. These compilers use the on-board assembler and linker. I did not attempt to build a later compiler with this compiler; it may work, and you get to try that yourself. Optionally you can also install gdb 5.3, which I did to stomp out some glitches. These packages are all uncompressed and un-tarred from the root directory in place; they don't need to be installed through smit. I recommend symlinking /usr/local/bin/make as /usr/local/bin/gmake so we know which one we're using.

Finally, we'll need a catchy name for our port. Originally it was going to be ANS Doom, but that sounded too much like Anus Doom, which I proffer as a free metal band name and I look forward to going to one of their concerts soon. Eventually I settled on Harpoom, which I felt was an appropriate nod to its history while weird enough to be notable.
All of my code is on Github along with pre-built binaries, and all development was done on stockholm, my original Apple Network Server 500 that I've owned continuously since 1998, with a 200MHz PowerPC 604e, 1MB of cache, 512MB of parity RAM and a single disk, here running a clean install of 4.1.5. Starting with Doom Generic out of the box, we'll begin with a Makefile to create a basic Doom that I can run over remote X for convenience. (Since the ANS runs big-endian, if you run a recent little-endian desktop as I do with my POWER9, you'll need to start your local X server with +byteswappedclients or a configuration file change, or the connection will fail.) I copied Makefile.freebsd and stripped it down, also removing -Wl,-Map,$(OUTPUT).map from the link step in advance because AIX ld will barf on that. gmake understood the Makefile fine, but the compile immediately bombed. It's time to get out that clue-by-four and start bashing the compiler with it.

There is, in fact, no inttypes.h or stdint.h on AIX 4.1. So let's create an stdint.h! We could copy it from somewhere else, but I wanted this to only specify what it needed to. After several false starts I arrived at a final draft, and we include that instead of inttypes.h. Please note this is only valid for 32-bit systems like this one. Obviously we'll change the include from <stdint.h> to "stdint.h".

doomtype.h has its own definition for a boolean, an enum that also defines an undef value. Despite this definition, undef isn't actually used in the codebase anywhere, and if C++ bool is available then it just typedefs it to boolean. But egcs and gcc come with their own definition in stdbool.h, and it's almost identical. Since we know we don't really need undef, we comment out the old definition in doomtype.h, #include <stdbool.h> and just typedef bool boolean like C++.

The col_t type is an AIX-specific problem: the name conflicts with AIX locales. Since col_t is only found in i_video.c, we'll just change it in four places to doomcol_t. The last problem was a bit of code at the end of I_InitGraphics(). Here we can cheat, being pre-C99, by merely removing the declaration, aided by the fact that I_InitInput neither passes nor returns anything. The compiler accepted that.

X11R5 does not support the X Keyboard Extension (Xkb). To make the compile go a bit farther I switched out X11/XKBlib.h for X11/keysym.h. We're going to have some missing symbols at link time, but we'll deal with that momentarily. DG_Init() is naughty and didn't declare all its variables at the beginning; this version of the compiler can't cope with that and I had to rework the function. Although my revisions compiled, the link failed on the Xkb symbols, as expected. XkbSetDetectableAutoRepeat tells the keyboard driver not to generate synthetic KeyRelease events for this X client when a key is auto-repeating. X11R5 doesn't have this capability, so the best we can do is XAutoRepeatOff, which still gives us single KeyPress and KeyRelease events, but only because it disables key repeat globally. (A footnote on this later on.) There's not a lot we can do about that, though we can at least add an atexit to ensure the previous keyboard repeat status is restored on quit. Similarly, there is no exact equivalent for XkbKeycodeToKeysym, though we can sufficiently approximate it for our purposes with XLookupKeysym in both places. That was enough to link Doom Generic.
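To illustrate that key-repeat workaround, here is a minimal sketch of saving the server's auto-repeat state, turning repeat off, and restoring it through atexit with plain X11R5 calls. The function names are invented for the sketch rather than taken from the port:

/* Sketch: disable global X key repeat and restore the user's previous
   setting on exit. Illustrative only; not the port's exact code. */
#include <stdlib.h>
#include <X11/Xlib.h>

static Display *repeat_display;
static int repeat_previous;   /* AutoRepeatModeOn or AutoRepeatModeOff */

static void RestoreKeyRepeat(void)
{
    XKeyboardControl ctrl;

    ctrl.auto_repeat_mode = repeat_previous;
    XChangeKeyboardControl(repeat_display, KBAutoRepeatMode, &ctrl);
    XFlush(repeat_display);
}

static void DisableKeyRepeat(Display *dpy)
{
    XKeyboardState state;

    repeat_display = dpy;
    XGetKeyboardControl(dpy, &state);
    repeat_previous = state.global_auto_repeat;
    XAutoRepeatOff(dpy);
    atexit(RestoreKeyRepeat);
}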
Nevertheless, with our $DISPLAY properly set and using the shareware WAD, it immediately fails, with the error coming from a block of code in w_wad.c. With some debugging printfs, we discover that the number of additional lumps we're being asked to allocate is totally bogus. This nonsense number is almost certainly caused by an unconverted little-endian value: values in WADs are stored little-endian, even in the native Macintosh port. Doom Generic does have primitives for handling byteswaps, however, so it seems to have incorrectly detected us as little-endian. After some grepping, this detection quite logically comes from i_swap.h. As we have intentionally not enabled sound, for some reason (probably an oversight) this file ends up defaulting to little-endian. Ordinarily this would be a great place to use gcc's byteswap intrinsics, buuuuuuuuut (and I was pretty sure this would happen) a compiler this old doesn't have them, so we're going to have to write some.

Since they've been defined as quasi-functions, I decided to do this as actual inlineable functions with a sprinkling of inline PowerPC assembly. The semantics of static inline here are intended to take advantage of the way gcc of this era handled it. These snippets are very nearly the optimal code sequences, at least if the value is already in a register. If the value was being fetched from memory, you could do the conversion in one step with single instructions (lwbrx or lhbrx), but the way the code is structured we won't know where the value is coming from, so this is the best we can do for now. Atypically, these conversions must be signed. If you miss this detail and only handle the unsigned case, as yours truly did in a first draft, you get weird results: 16-bit values start picking up wacky negative quantities because the most significant byte eventually becomes all ones, and 32-bit PowerPC GPRs are 32 bits, all the time. Properly extending the sign after conversion was enough to fix it.

That sorted out the WAD loading. For the console we need 8-bit colour, which Doom Generic supports with the compile-time option CMAP256. Since this is a compile-time choice, and we want to support both remote X and console X, we'll just make two builds. I rebuilt the executable this time adding -DCMAP256 to the CFLAGS in the Makefile. The console X server only offers PseudoColor 8-bit visuals, so we must not be creating a colourmap for the window nor updating it, and indeed there is none in the Doom Generic source code. Fortunately, there is in the original O.G. Linux Doom, so I cribbed some code from it. I added a new function DG_Upload_Palette to accept a 256-colour internal palette from the device-independent portion, turn it into an X Colormap, and push it to the X server with XStoreColors. Because the engine changes the entire palette every time (on damage, artifacts, etc.), we must set every colour change flag in the Colormap, which we do and cache on first run just like O.G. Linux Doom did. The last step is to tag the Colormap we create to the X window using XSetWindowColormap. The other AIX games do the same, by the way. Here are some direct grabs from the framebuffer using xwd -icmap -screen -root -out.

The Command keys map to nothing, not Meta, Super or even Hyper, in Harpoon's X server. Instead, when pressed or released, each Command key generates an XEvent with an unexpected literal keycode of zero. After some experimentation, it turns out that no other key (tested with a full Apple Extended Keyboard II) on a connected ADB keyboard will generate this keycode. I believe this was most likely an inadvertent bug on Apple's part, but I decided to take advantage of it.
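As a rough illustration, spotting those Command keys in an Xlib event loop looks something like the sketch below; the function name and the printf are placeholders, since the actual port maps the key into Doom's input handling (as a strafe key, per the next paragraph) rather than printing anything:

/* Sketch: detect the ADB Command keys, which arrive as the otherwise
   unused keycode 0 on Harpoon's X server. Illustrative only. */
#include <stdio.h>
#include <X11/Xlib.h>

static void PollCommandKeys(Display *dpy)
{
    XEvent ev;

    while (XPending(dpy)) {
        XNextEvent(dpy, &ev);
        if ((ev.type == KeyPress || ev.type == KeyRelease) &&
            ev.xkey.keycode == 0) {
            /* No other ADB key generates keycode 0, so this is Command. */
            printf("Command key %s\n", ev.type == KeyPress ? "down" : "up");
        }
    }
}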
I don't think it's a good idea to do this if you're running a remote X session, so the check is disabled there; but if you run the 256-colour version on the console, you can use the Command keys to strafe instead (Alt works in either version). Lastly, I added some code to check the default or available visuals so that you can't (easily) run the wrong version in the wrong place, and bumped the optimization level to -O3. And that's the game. Here's a video of it on the console, though I swapped in an LCD display so that the CRT flicker won't set you off. This is literally me pointing my Pixel 7 Pro camera at the screen.

As promised, I also tried it on one of IBM's PowerPC AIX laptops, a RISC ThinkPad-like laptop that isn't, technically, a ThinkPad. You might see this machine in a future entry.

Precompiled builds for both 24-bit and 8-bit colour are available on Github. Like Doom Generic and the original Doom, Harpoom is released under the GNU General Public License v2.
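Stepping back to the endianness fix for a moment: as a rough sketch, assuming a 32-bit big-endian PowerPC target and gcc-style inline assembly rather than quoting Harpoom's actual code, signed byteswap helpers of the kind described above could look like this, with extsh providing the sign extension the first draft missed:

/* Sketch: signed 16- and 32-bit byteswaps for 32-bit big-endian PowerPC.
   Register-to-register sequences; illustrative only. */
static inline short SwapSHORT(short x)
{
    int r;
    __asm__("rlwinm %0,%1,8,16,23\n\t"   /* low byte of x into bits 16-23  */
            "rlwimi %0,%1,24,24,31\n\t"  /* high byte of x into bits 24-31 */
            "extsh %0,%0"                /* sign-extend the swapped half   */
            : "=&r" (r)
            : "r" (x));
    return (short)r;
}

static inline int SwapLONG(int x)
{
    int r;
    __asm__("rlwinm %0,%1,24,0,31\n\t"   /* rotate left 24: low byte to top */
            "rlwimi %0,%1,8,8,15\n\t"    /* third byte into bits 8-15       */
            "rlwimi %0,%1,8,24,31"       /* old top byte into bits 24-31    */
            : "=&r" (r)
            : "r" (x));
    return r;                            /* full 32-bit swap needs no sign fix */
}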

5 days ago 7 votes
Embedding Godot games in iOS is easy

Recently there have been very exciting developments in the Godot game engine that allow really easy and powerful integration into an existing, normal iOS or Mac app. I couldn't find a lot of documentation or discussion about this, so I wanted to shine some light on why this is so cool, and how easy it is to do!

What's Godot?

For the uninitiated, Godot is an engine for building games; other common ones you might know of are Unity and Unreal Engine. It's risen in popularity a lot over the last couple of years due to its open nature: it's completely open source, MIT licensed, and worked on in the open. But beyond that, it's also a really well made tool for building games (both 2D and 3D), with a great UI, beautiful iconography, a ton of tutorials and resources, and as a bonus, it's very lightweight. I've had a lot of fun playing around with it (considering potentially integrating it into Pixel Pals), and while Unity and Unreal Engine are also phenomenal tools, Godot has felt lightweight and approachable in a really nice way. As an analogy, Godot feels closer to Sketch and Figma whereas Unity and Unreal feel more like Photoshop/Illustrator or other kinda bulky Adobe products. Even Apple has taken interest in it, contributing a substantial pull request for visionOS support in Godot.

Why use it with iOS?

You've always been able to build a game in Godot and export it to run on iOS, but recently, thanks to advancements in the engine and work by amazing folks like Miguel de Icaza, you can now embed a Godot game in an existing normal SwiftUI or UIKit app just as you would an extra UITextView or ScrollView. Why is this important? Say you want to build a game or experience, but you don't want it to feel just like another port; you want it to integrate nicely with iOS and feel at home there through use of some native frameworks and UI here and there to anchor the experience (share sheets, local notifications, a simple SwiftUI sheet for adding a friend, etc.). Historically your options have been very limited or difficult. You no longer have to have "a Godot game" or "an iOS app"; you can have the best of both worlds: a fun game built entirely in Godot, while having your share sheets, Settings screens, your paywall, home screen widgets, onboarding, iCloud sync, etc. all in native Swift code, dynamically choosing which tool you want for the job. (Again, this was technically possible before and with other engines, but was much, much more complicated. Unity's in particular seems to have been last updated during the first Obama presidency.) And truly, this doesn't only benefit "game apps". Heck, if the user is doing something that will take a while to complete (uploading a video, etc.) you could give them a small game to play in the interim. Or just for some fun you could embed a little side scroller easter egg in one of your Settings screens to delight spelunking users. Be creative!

SpriteKit?

A quick aside. It wouldn't be an article about game dev on iOS without mentioning SpriteKit, Apple's native 2D game framework (Apple also has SceneKit for 3D). SpriteKit is well done, and actually what I built most of Pixel Pals in.
But it has a lot of downsides versus a proper, dedicated game engine:

- Godot has a wealth of tutorials on YouTube and elsewhere, plus bustling Discord communities for help, where SpriteKit, being a lot more niche, can be quite hard to find details on.
- The obvious one: SpriteKit only works on Apple platforms, so if you want to port your game to Android or Windows you're probably not going to have a great time, where Godot is fully cross platform.
- Godot, being a full-out game engine, has a lot more tools for game development that can be handy, from animation tools, to sprite sheet editors, controls that make experimenting a lot easier, handy tools for creating shaders, and so much more than I could hope to go over in this article. If you ever watch a YouTube video of someone building a game in a full engine, the wealth of tools they have for speeding up development is bonkers.
- Godot is updated frequently by a large team of employees and volunteers; SpriteKit conversely isn't exactly one of Apple's most loved frameworks (I don't think it's been mentioned at WWDC in years) and kinda feels like something Apple isn't interested in putting much more work into. Maybe that's because it does everything Apple wants and is considered "finished" (if so, I think that would be incorrect; see the previous point for many things that it would be helpful for SpriteKit to have), but if you were to encounter a weird bug I'd feel much better about the likelihood of it getting fixed in Godot than SpriteKit.

I'm a big fan of using the right tool for the job. For iOS apps, most of the time that's building something incredible in SwiftUI and UIKit. But for building a game, be it small or large, using something purpose built to be incredible at that seems like the play to me, and Godot feels like a great candidate there.

Setup

Simply add the SwiftGodotKit package to your Xcode project by selecting your project in the sidebar, ensuring your project is selected in the new sidebar, selecting the Package Dependencies tab, clicking the +, then pasting the GitHub link. After adding it, you will also need to select the target that you added it to in the sidebar, select the Build Settings tab, then select "Other Linker Flags" and add -lc++. Lastly, with that same target, under the General tab add MetalFX.framework to Frameworks, Libraries, and Embedded Content. (Yeah, you got me, I don't know why we have to do that.) After that, you should be able to import SwiftGodotKit.

Usage

Now we're ready to use Godot in our iOS app! What excites me most, and what I want to focus on, is embedding an existing Godot game in your iOS app and communicating back and forth with it from your iOS app. This way, you can do the majority of the game development in Godot without even opening Xcode, and then sprinkle in delightful iOS integration by communicating between iOS and Godot where needed. To start, we'll build a very simple game called IceCreamParlor, where we select from a list of ice cream options in SwiftUI, which then gets passed into Godot. Godot will have a button the user can tap to send a message back to SwiftUI with the total amount of ice cream. This will not be an impressive "game" by any stretch of the imagination, but it should be easy to set up and understand, so you can apply the concepts to an actual game. To accomplish our communication, in essence we'll be recreating iOS' NotificationCenter to send messages back and forth between Godot and iOS, and like NotificationCenter, we'll create a simple singleton to accomplish this.
Those messages will be sent via Signals. This is Godot's system for, well, signaling that an event occurred, and it can be used to signify everything from a button press, to a player taking damage, to a timer ending. Keeping with the NotificationCenter analogy, this would be the Notification that gets posted (except in Godot it's used for everything, where in iOS land you really wouldn't use NotificationCenter for a button press). And similar to Notification, which has a userInfo field to provide more information about the notification, Godot signals can also take an argument that provides more information. (For example, if the notification was "player took damage", the argument might be an integer with how much damage they took.) Like userInfo this is optional, however, and you can also fire off a signal with no further information, something like "userUnlockedPro" for when they activate Pro after your SwiftUI paywall. For our simple example, we're going to send a "selectedIceCream" signal from iOS to Godot, and an "updatedIceCreamCount" signal from Godot to iOS. The former will have a string argument for which ice cream was selected, and the latter will have an integer argument with the updated count.

Setting up our Godot project

Open Godot.app (available to download from their website) and create a new project. I'll type in IceCreamParlor, choose the Mobile renderer, then click Create. Godot defaults to a 3D scene, so I'll switch to 2D at the top, and then in the left sidebar click 2D Scene to create that as our root node. I'll right-click the sidebar to add a child node, and select Label. We'll set the text to "Ice cream:". In the right sidebar, we'll go to Theme Overrides and increase the font size to 80 to make it more visible, and we'll also rename it in the left sidebar from Label to IceCreamLabel. We'll do the same to add a Button to the scene, which we'll call UpdateButton and set its text to "Update Ice Cream Count". If you click the Play button in the top right corner of Godot, it will run and you can click the button, but as of now it doesn't do anything.

We'll select our root node (Node2D) in the sidebar, right click, and select "Attach Script". Leave everything as default, and click Create. This will now present us with an area where we can actually code in GDScript, and we can refer to the objects in our scene by prefixing their name with a dollar sign. Inside our script, we'll implement the _ready function, which is essentially Godot's equivalent of viewDidLoad, and inside we'll connect to the simple signal we discussed earlier. We'll do this by grabbing a reference to our singleton, then referencing the signal we want, then connecting to it by passing a function we want to be called when the signal is received. And of course the function takes a String as a parameter, because our signal includes which ice cream was selected.

extends Node2D

var ice_cream: Array[String] = []

func _ready() -> void:
    var singleton = Engine.get_singleton("GodotSwiftMessenger")
    singleton.ice_cream_selected.connect(_on_ice_cream_selected_signal_received)

func _on_ice_cream_selected_signal_received(new_ice_cream: String) -> void:
    # We received a signal! Probably should do something…
    pass

Note that we haven't actually created the singleton yet, but we will shortly. Also note that normally in Godot you have to declare custom signals like the ones we're using, but we're going to declare them in Swift. As long as they're declared somewhere, Godot is happy!
Let's also hook up our button by going back to our scene, selecting our button in the canvas, selecting the "Node" tab in the right sidebar, and double-clicking the pressed() option. We can then select that same Node2D script and name the function _on_update_button_pressed to add a function that executes when the button is pressed (fun fact: the button being pressed event is also powered by signals).

func _on_update_button_pressed() -> void:
    pass

Setting up our iOS/Swift project

Let's jump over to Xcode and create a new SwiftUI project there as well, also calling it IceCreamParlor. We'll start by adding the Swift package for SwiftGodotKit to Swift Package Manager, add -lc++ to our "Other Linker Flags" under "Build Settings", add MetalFX, then go to ContentView.swift and add import SwiftGodotKit at the top. From here, let's create a simple SwiftUI view so we can choose from some ice cream options.

var body: some View {
    HStack {
        Button {
        } label: {
            Text("Chocolate")
        }
        Button {
        } label: {
            Text("Strawberry")
        }
        Button {
        } label: {
            Text("Vanilla")
        }
    }
    .buttonStyle(.bordered)
}

We'll also create a new file in Xcode called GodotSwiftMessenger.swift. This will be where we implement our singleton that is akin to NotificationCenter.

import SwiftGodot

@Godot
class GodotSwiftMessenger: Object {
    public static let shared = GodotSwiftMessenger()

    @Signal var iceCreamSelected: SignalWithArguments<String>
    @Signal var iceCreamCountUpdated: SignalWithArguments<Int>
}

We first import SwiftGodot (minus the Kit), essentially because this part is purely about interfacing with Godot through Godot, and doesn't care about whether or not it's embedded in an iOS app. For more details on SwiftGodot, see its section below. Then, we annotate our class with the @Godot Swift macro, which basically just says "hey, make Godot aware that this class exists". The class is a subclass of Object, as everything in Godot needs to inherit from Object; it's essentially the parent class of everything. Following that is your bog-standard Swift singleton initialization. Then, with another Swift macro, we annotate the variables we want to be our signals, which signifies to Godot that each one is a Signal. You can either specify its type as Signal or SignalWithArguments<T> depending on whether or not the specific signal also sends any data alongside it. Both of ours carry the extra information we described earlier, so they use SignalWithArguments: a String for which ice cream was selected, and an Int for the updated count. Note that we used "ice_cream_selected" in Godot but "iceCreamSelected" in Swift; this is because the underscore convention is used in Godot, and SwiftGodotKit will automatically map the camelCase Swift convention to it.

Now we need to tell Godot about this singleton we just made. We want Godot to know about it as soon as possible; otherwise, if things aren't hooked up, Godot might emit a signal that we wouldn't receive in Swift, or vice-versa. So, we'll hook it up very early in our app's lifecycle. In SwiftUI, you might do this in the init of your main App struct as I'll show below, and in UIKit in applicationDidFinishLaunching.

@main
struct IceCreamParlor: App {
    init() {
        initHookCb = { level in
            guard level == .scene else { return }
            register(type: GodotSwiftMessenger.self)
            Engine.registerSingleton(name: "GodotSwiftMessenger", instance: GodotSwiftMessenger.shared)
        }
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

In addition to the boilerplate code Xcode gives us, we've added an extra step to the initializer, where we set a callback on initHookCb.
This is just a callback that fires as Godot is set up, and it specifies what level of setup has occurred. We want to wait until the .scene level is reached, which means the game is ready to go (you could set things up at an even earlier level if you see that as beneficial). Then, we just tell Godot about this type by calling register, and then we register the singleton itself with the name we want it to be accessible under. Again, we want to do this early: if Godot were already set up in our app by the time we set initHookCb, its contents would never fire and thus we wouldn't register anything. But don't worry, this hook won't fire until we first initialize our Godot game in iOS ourselves, so as long as this code is called before then, we're golden.

Lastly, everything is registered in iOS land, but there's still nothing that emits or receives signals. Let's change that by going to ContentView.swift and changing our body to the following:

import SwiftUI
import SwiftGodotKit
import SwiftGodot

struct ContentView: View {
    @State var totalIceCream = 0
    @State var godotApp: GodotApp = GodotApp(packFile: "main.pck")

    var body: some View {
        VStack {
            GodotAppView()
                .environment(\.godotApp, godotApp)
            Text("Total Ice Cream: \(totalIceCream)")
            HStack {
                Button {
                    GodotSwiftMessenger.shared.iceCreamSelected.emit("chocolate")
                } label: {
                    Text("Chocolate")
                }
                Button {
                    GodotSwiftMessenger.shared.iceCreamSelected.emit("strawberry")
                } label: {
                    Text("Strawberry")
                }
                Button {
                    GodotSwiftMessenger.shared.iceCreamSelected.emit("vanilla")
                } label: {
                    Text("Vanilla")
                }
            }
            .buttonStyle(.bordered)
        }
        .onAppear {
            GodotSwiftMessenger.shared.iceCreamCountUpdated.connect { newTotalIceCream in
                totalIceCream = newTotalIceCream
            }
        }
    }
}

There's quite a bit going on here, but let's break it down, because it's really quite simple. We have two new state variables. The first is to keep track of the new ice cream count. Could we just do this ourselves purely in SwiftUI? Totally, but for fun we're going to be totally relying on Godot to keep us updated there, and we'll just reflect that in SwiftUI to show the communication. Secondly, and more importantly, we need to declare a variable for our actual game file so we can embed it. We do this embedding at the top of the VStack by creating a GodotAppView, a handy SwiftUI view we can now leverage, and we do so by just setting its environment variable to the game we just declared. Then, we change our buttons to actually emit the selections via signals, and when the view appears, we make sure we connect to the signal that keeps us updated on the count so we can reflect that in the UI. Note that we don't also connect to the iceCreamSelected signal, because we don't care to receive it in SwiftUI; we're just firing that one off for Godot to handle.

Communicating

Let's update our GDScript in Godot to take advantage of these changes.

func _on_ice_cream_selected_signal_received(new_ice_cream: String) -> void:
    ice_cream.append(new_ice_cream)
    $IceCreamLabel.text = "Ice creams: " + ", ".join(ice_cream)

func _on_update_button_pressed() -> void:
    var singleton = Engine.get_singleton("GodotSwiftMessenger")
    singleton.ice_cream_count_updated.emit(ice_cream.size())

Not too bad! We now receive the signal from SwiftUI and update our internal state in Godot accordingly, as well as the UI, by turning our ice cream array into a comma-separated list. And then when the user taps the update button, we send (emit) that signal back to SwiftUI with the updated count.
Running

To actually see this live, first make sure you have an actual iOS device plugged in; unfortunately Godot doesn't work with the iOS simulator. Secondly, in Godot, select the Project menu bar item, then Export, then click the Add button and select "iOS". This will bring you to a screen with a bunch of options, but my understanding is that 99% of this is for if you're building your app entirely in Godot: you can plug in all the things you'd otherwise plug into Xcode here instead, and Godot will handle them for you. That doesn't apply to us; we're going to do all that normally in Xcode anyway and we just want the game files, so ignore all that and select "Export PCK/ZIP…" at the bottom. It'll ask you where you want to save it (I just keep it in the Godot project directory); make sure "Godot Project Pack (*.pck)" is selected in the dropdown, and then save it as main.pck. That's our "game" bundled up, as meager as it is! We'll then drop that into Xcode, making sure to add it to our target, and then we can run it on the device!

Here we'll see that choosing the ice cream flavor at the bottom in SwiftUI beams it into the Godot game that's just chilling like a SwiftUI view, and then we can tap the update button in Godot land to beam the new count right back to SwiftUI to be displayed. Not exactly a AAA game, but enough to show the basics of communication 😄 Look at you go! Take this as a leaping-off point for all the cool SwiftUI and Godot interoperability that you can accomplish, be it tapping a Settings icon in Godot to bring up a beautifully designed, native SwiftUI settings screen, or confirming to your game that the user upgraded to the Pro version through your SwiftUI paywall.

Bonus: SwiftGodot (minus the "Kit")

An additional fun option (that sits at the heart of SwiftGodotKit) is SwiftGodot, which allows you to actually build your entire Godot game with Swift as the programming language if you so choose. Swift for iOS apps, Swift on the server, Swift for game dev. Swift truly is everywhere. For me, I'm enjoying playing around in GDScript, which is Godot's native programming language, but it's a really cool option to know about.

Embed size

A fear might be that embedding Godot into your app might bloat the binary and result in an enormous app download size. Godot is very lightweight: adding it to your codebase adds a relatively meager (at least by 2025 standards) 30MB to your binary size. That's a lot larger than SpriteKit's 0MB, but for all the benefits Godot offers that's a pretty compelling trade. (30MB was measured by handy blog sponsor, Emerge Tools.)

Tips

Logging: If you log something in Godot/GDScript via print("something"), that will also print to the Xcode console. Handy!

Quickly embedding the pck into iOS: Exporting the pck file from Godot to Xcode is quite a few clicks, so if you're doing it a lot it would be nice to speed that up. We can use the command line to make this a lot nicer. Godot.app also has a headless mode you can use by going inside the .app file, then Contents > MacOS > Godot. But typing the full path to that binary is no fun, so let's symlink the binary to /usr/local/bin.

sudo ln -s "/Applications/Godot.app/Contents/MacOS/Godot" /usr/local/bin/godot

Now we can simply type godot anywhere in the Terminal to either open the Godot app, or use godot --headless for some command line goodness.
My favorite way to do this is to run something like the following from within your Godot project directory:

godot --headless --export-pack "iOS" /path/to/xcodeproject/target/main.pck

This will handily export the pck and add it to our Xcode project, overwriting any existing pck file, from which point we can simply compile our iOS app.

Wrapping it up

I really think Godot's new interoperability with iOS is an incredibly exciting avenue for building games on iOS, be it a full-fledged game or a small little easter egg integrated into an existing iOS app, and hats off to all the folks who did the hard work getting it working. Hopefully this serves as an easy way to get things up and running! It might seem like a lot at first glance, but most of the code shown above is just boilerplate to get an example Godot and iOS project up and running; the actual work to embed a game and communicate across them is delightfully simple! (Also, a big shout out to Chris Backas and Miguel de Icaza for help getting this tutorial off the ground.)

6 days ago 9 votes